For the design of this technique, we performed three initial evaluations.
The first one was about the most appropriate visual feedforward, and was inspired by a similar work for 2D proximity selection \cite{guillon15}. The second evaluation was about the transfer function for the movement of the cursor, and the last one was about the benefits of filtering the inputs that control the ray with a 1\euro~filter~\cite{casiez12}.
After this, we designed a semi-automatic RayCursor that combines RayCursor and raycasting (\reffig{fig:raycursor}).
Finally, we compared the performance of the two versions of RayCursor, raycasting, and another technique from the literature \cite{ro17}.
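Both the transfer function and the 1\euro~filter are simple to implement. Below is a minimal, self-contained Python sketch of the 1\euro~filter as described by Casiez et al.; the parameter values are illustrative defaults, not the ones retained after our evaluation.

```python
import math

class LowPassFilter:
    """Exponential smoothing: y = alpha*x + (1 - alpha)*y_prev."""
    def __init__(self):
        self.y = None

    def filter(self, x, alpha):
        if self.y is None:
            self.y = x
        else:
            self.y = alpha * x + (1.0 - alpha) * self.y
        return self.y

class OneEuroFilter:
    """1-euro filter: an adaptive low-pass filter whose cutoff frequency
    grows with the signal's speed, reducing jitter at low speeds and
    lag at high speeds."""
    def __init__(self, freq, mincutoff=1.0, beta=0.0, dcutoff=1.0):
        self.freq = freq            # sampling frequency (Hz)
        self.mincutoff = mincutoff  # minimum cutoff frequency (Hz)
        self.beta = beta            # speed coefficient
        self.dcutoff = dcutoff      # cutoff for the derivative filter
        self.x_filt = LowPassFilter()
        self.dx_filt = LowPassFilter()
        self.last_x = None

    def _alpha(self, cutoff):
        # Smoothing factor for a first-order filter at this cutoff.
        tau = 1.0 / (2.0 * math.pi * cutoff)
        te = 1.0 / self.freq
        return 1.0 / (1.0 + tau / te)

    def filter(self, x):
        # Estimate and smooth the derivative of the signal.
        dx = 0.0 if self.last_x is None else (x - self.last_x) * self.freq
        self.last_x = x
        dx_hat = self.dx_filt.filter(dx, self._alpha(self.dcutoff))
        # Adapt the cutoff to the (smoothed) speed.
        cutoff = self.mincutoff + self.beta * abs(dx_hat)
        return self.x_filt.filter(x, self._alpha(cutoff))
```

In practice, decreasing mincutoff reduces jitter on slow movements, while increasing beta reduces lag on fast movements.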
\begin{figure}[htb]
\def\fh{3.5cm}
%a)\hspace{-1mm}
\includegraphics[height=\fh]{raycursor_a}
\hfill
%b)\hspace{-4mm}
\includegraphics[height=\fh]{raycursor_b}
\hfill
\vrule
\hfill
%c)\hspace{-4mm}
\includegraphics[height=\fh]{raycursor_c}
\hfill
%d)\hspace{-4mm}
\includegraphics[height=\fh]{raycursor_d}
\hfill
%e)\hspace{-4mm}
\includegraphics[height=\fh]{raycursor_e}
\hfill
%f)\hspace{-4mm}
\includegraphics[height=\fh]{raycursor_c}
\caption[Illustration of RayCursor]{Two versions of RayCursor. The manual RayCursor (left) selects the target nearest to the cursor. The semi-automatic RayCursor initially acts like raycasting: a black cursor is positioned on the first intersected target. When the ray moves out of a target, that target remains selected as long as it is the nearest one. Users can move the cursor manually with the touchpad, in which case the cursor turns red, to select another target. If users lift their finger for more than 1\,s, the cursor switches back to its initial behavior.}
\label{fig:raycursor}
\end{figure}
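The automatic/manual switching logic of the semi-automatic RayCursor can be sketched as follows. Class and method names are hypothetical; only the 1\,s delay, the nearest-target rule, and the mode-switching behavior come from the description above.

```python
class SemiAutoRayCursor:
    """Sketch of the semi-automatic RayCursor: the cursor follows the
    first ray intersection (automatic mode) until the user moves it on
    the touchpad (manual mode); lifting the finger for more than 1 s
    switches back to automatic mode."""
    SWITCH_DELAY = 1.0  # seconds before reverting to automatic mode

    def __init__(self):
        self.manual = False
        self.cursor = 0.0         # cursor distance along the ray
        self.last_release = None  # time the finger left the touchpad

    def on_touchpad_move(self, delta, now):
        # Any touchpad movement takes manual control of the cursor.
        self.manual = True
        self.last_release = None
        self.cursor += delta      # a transfer function would scale delta

    def on_touchpad_release(self, now):
        self.last_release = now

    def update(self, first_intersection, now):
        # Revert to automatic once the finger has been up for > 1 s.
        if self.manual and self.last_release is not None \
                and now - self.last_release > self.SWITCH_DELAY:
            self.manual = False
        if not self.manual and first_intersection is not None:
            self.cursor = first_intersection
        return self.cursor

def nearest_target(cursor, targets):
    """Select the target closest to the cursor along the ray."""
    return min(targets, key=lambda t: abs(t - cursor))
```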
\subsubsection{Facial expression selection}
Research on virtual reality and the possibility of creating immersive virtual environments inspired many science-fiction authors.
%Twenty years after Sutherland's work \cite{sutherland65}, the Neuromancer describes the \defword{cyberspace} as an alternate reality out of the physical world, created with machines \cite{gibson84}.
Decades after Sutherland's work \cite{sutherland65}, novels like Neuromancer \cite{gibson84}, Snow Crash \cite{stephenson92}, and more recently Ready Player One \cite{cline11} describe immersive virtual worlds as alternate realities in which people can socialize, play, or even work.
Facebook's new brand, Meta, is a reference to Snow Crash's \defword{metaverse} and shows the company's new focus on immersive social networks, with Spaces and then Horizon\footnote{\href{https://web.archive.org/web/20191005002238/https://www.facebook.com/spaces}{https://www.facebook.com/spaces}, \href{https://www.oculus.com/facebookhorizon/}{https://www.oculus.com/facebookhorizon/}}.
Similarly to other immersive social networks such as Mozilla Hubs\footnote{\href{https://hubs.mozilla.com/}{https://hubs.mozilla.com/}}, VRChat\footnote{\href{https://hello.vrchat.com/}{https://hello.vrchat.com/}}, and RecRoom\footnote{\href{https://recroom.com/}{https://recroom.com/}}, the objective is to enable people to get together in a virtual environment and interact with each other as if they were in the same room.

Such immersive virtual environments require easy and usable ways to perform atomic actions such as the selection task studied in the previous section.
%selecting and manipulating objects, or navigating the environment.
But above all, communication is certainly the most important aspect of social networks in general.
Text entry remains a difficult and tedious task in immersive virtual environments, and the current best solution is simply to use voice.
However, non-verbal communication such as face expressions is also an essential aspect of communication, whether in face-to-face speech or in writing \cite{carter13}.
%In this work, we focused on a particular type of non-verbal communication: facial expressions.
One way to enable users to control the face expression of their avatar is to detect their own face expression; we call this isomorphic control.
Vision-based techniques use either external depth cameras \cite{weise11,lugrin16} or cameras embedded in a VR headset \cite{li15,suzuki16}.
However, with such techniques, expressions are limited to those users are able to perform, and users cannot give their avatar an expression different from their own.
Therefore, we were interested in the non-isomorphic control of face expressions through interaction techniques \cite{baloup21}.

The fine control of face expressions requires many degrees of freedom.
The FACS standard defines 24 Action Units \cite{ekman78}, and the MPEG-4 standard proposes 68 Facial Animation Parameters \cite{pandzic03}.
Therefore, we propose to reduce the number of degrees of freedom by decomposing the selection of a face expression into several sub-tasks, similarly to Bowman's decomposition of 3D interaction tasks \cite{bowman04}.
The first sub-task consists in selecting a face expression from a list of pre-defined expressions.
Each pre-defined expression is a configuration of FACS Action Unit values.
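For illustration, such a configuration can be sketched as a mapping from expression names to Action Unit activations. The AU numbers follow the FACS coding, but the expression set and intensity values below are hypothetical, not the ones used in our experiments.

```python
# Hypothetical pre-defined expressions as FACS Action Unit activations
# (AU6: cheek raiser, AU12: lip corner puller, AU1/AU2: brow raisers,
# AU5: upper lid raiser, AU26: jaw drop, AU4: brow lowerer,
# AU23: lip tightener). Intensity values in [0, 1] are illustrative.
EXPRESSIONS = {
    "neutral":  {},
    "joy":      {"AU6": 0.8, "AU12": 1.0},
    "surprise": {"AU1": 1.0, "AU2": 1.0, "AU5": 0.6, "AU26": 0.7},
    "anger":    {"AU4": 1.0, "AU5": 0.5, "AU23": 0.8},
}

def apply_expression(name, intensity=1.0):
    """Scale a pre-defined AU configuration by a global intensity in [0, 1]."""
    return {au: value * intensity for au, value in EXPRESSIONS[name].items()}
```

Selecting an expression then amounts to picking one entry, and the intensity sub-task amounts to choosing the global scaling factor.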
\reffig{fig:faceexpressions} shows four of the face expression selection techniques we designed; the fifth one uses voice commands.
We represent face expressions with emojis: people use them frequently, can distinguish them at a reasonable size, and emojis represent face expressions that are not necessarily emotions.
The circular menu, however, is based on Plutchik's wheel of emotions \cite{plutchik01}.
The difference is that we mapped the maximum intensity to the edge of the circle rather than to the middle, so that the center represents the neutral face.
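The mapping from a point in the circular menu to an expression and its intensity can be sketched as follows; the sector ordering (inspired by Plutchik's wheel) and the dead-zone size are illustrative assumptions, not the exact values of our design.

```python
import math

# Illustrative sector order around the circle, inspired by Plutchik's wheel.
SECTORS = ["joy", "trust", "fear", "surprise",
           "sadness", "disgust", "anger", "anticipation"]

def pick(x, y, radius=1.0, dead_zone=0.15):
    """Map a 2D point inside the circular menu to (emotion, intensity).
    The center is the neutral face; intensity grows toward the edge."""
    r = math.hypot(x, y)
    if r < dead_zone * radius:
        return ("neutral", 0.0)
    # Angle selects the sector, normalized radius gives the intensity.
    angle = math.atan2(y, x) % (2.0 * math.pi)
    sector = int(angle / (2.0 * math.pi) * len(SECTORS)) % len(SECTORS)
    intensity = min(r / radius, 1.0)
    return (SECTORS[sector], intensity)
```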
\begin{figure}[htb]
\def\fh{3.7cm}
%a)
\includegraphics[height=\fh]{emoraye_2d-menu}
\hfill
%b)
\includegraphics[height=\fh]{emoraye_touchpad-menu}
\hfill
%c)
\includegraphics[height=\fh]{emoraye_touchpad-gesture}
\hfill
%d)
\includegraphics[height=\fh]{emoraye_rmw}
\caption[Avatar face expression selection techniques in VR.]{Four of the face expression selection techniques: a raycasting 2D grid menu, a 2D circular menu arranged by emotion, touchpad gestures, and raycasting on the 2D circular menu arranged by emotion.
%designed:
%a) menu presents a grid menu in front of the user with raycasting used to select an expression,
%b) touchpad presents a circular menu above the controller and selection is made using the controller's touchpad,
%c) gestures is based on gestures on the controller's touchpad to define an expression, and
%d) rmw is a result of our first experiment, which presents a circular menu in front of the user with raycasting used to select an expression.
%All techniques present a feedforward or feedback of the expression, using a miniature version of the avatar's face.
}
\label{fig:faceexpressions}
\end{figure}
The other sub-tasks consist in selecting the intensity, the duration, and the ending of the face expression.

\subsubsection{Discussion}
RayCursor raises issues with convex shapes, long shapes, and dense areas.
Emotions:

\section{Conclusion}
All these input techniques rely on hand dexterity and on our capacity to touch and manipulate.