%In this work, we focused on a particular type of non-verbal communication: facial expressions.
One way to enable users to control the face expression of their avatar is to detect their own face expression; we call this isomorphic control.
Vision-based techniques use either external depth cameras \cite{weise11,lugrin16} or cameras embedded in a VR headset \cite{li15,suzuki16}.
However, with such techniques, face expressions are limited to the expressions users are able to perform.
Moreover, users cannot give their avatar a different expression than their own.
Therefore we investigated the non-isomorphic control of face expressions through interaction techniques \cite{baloup21}.
The fine control of face expressions requires many degrees of freedom.
The FACS standard defines 24 Action Units \cite{ekman78}, and the MPEG-4 standard defines 68 Facial Animation Parameters \cite{pandzic03}.
Therefore we propose to reduce the number of degrees of freedom by decomposing the selection of a face expression into several sub-tasks, similarly to Bowman's decomposition of 3D interaction tasks \cite{bowman04}.
The sub-tasks are: selecting a face expression, its intensity, its duration, and its ending.
The selection consists of choosing a face expression from a list of pre-defined expressions.
Each pre-defined expression is a configuration of FACS action units values.
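To make this decomposition concrete, here is a minimal Python sketch of the resulting structure; the names, the dictionary representation, and the example values are illustrative assumptions, not the exact data model of \cite{baloup21}.
\begin{verbatim}
from dataclasses import dataclass
from typing import Dict

# A pre-defined expression: FACS action-unit activations in [0, 1];
# unlisted units default to 0 (neutral). AU names/values are illustrative.
Expression = Dict[str, float]

JOY: Expression = {"AU6_cheek_raiser": 0.8,
                   "AU12_lip_corner_puller": 1.0}

@dataclass
class ExpressionCommand:
    """Outcome of the sub-tasks: which expression to show, how
    strongly, and for how long before returning to neutral."""
    expression: Expression
    intensity: float   # in [0, 1]
    duration_s: float  # seconds before the ending sub-task applies
\end{verbatim}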
\reffig{fig:faceexpressions} shows four of the face expression selection techniques we designed; the fifth one uses voice commands.
These techniques are essentially item selection techniques, similar to what we would use for command selection.
As with command selection, some face expressions share properties, which we leveraged to structure the layout of items in some of the techniques.
\begin{figure}[htb]
\def\fh{3.7cm}
\centering
% \includegraphics content missing
\caption{Four of the face expression selection techniques we designed.}
\label{fig:faceexpressions}
\end{figure}
Our techniques essentially use emojis as the visual representation of face expressions.
We made this choice because people use them frequently in messaging and social networks.
Also, they are designed to be recognizable even at a small size.
Some of them represent emotions.
If we restrict face expressions to those corresponding to emotions, we can leverage models of emotions such as PAD \cite{mehrabian96} or Plutchik's wheel of emotions \cite{plutchik01}.
The layout of one of the face expression selection techniques we proposed is adapted from Plutchik's wheel, which organizes emotions along eight axes representing eight base emotions.
The difference between Plutchik's wheel and our menu is that we mapped the maximum intensity to the edge of the circle rather than to the middle, so that the center represents the neutral face.
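As an illustration of this mapping, the sketch below converts a cursor position inside the circular menu into a base emotion and an intensity. The eight-sector layout follows Plutchik's wheel; the emotion ordering, function names, and dead-zone handling are illustrative assumptions.
\begin{verbatim}
import math

# Eight base emotions, one per 45-degree sector (ordering is illustrative).
EMOTIONS = ["joy", "trust", "fear", "surprise",
            "sadness", "disgust", "anger", "anticipation"]

def pick_emotion(x: float, y: float, menu_radius: float):
    """Map a cursor position (relative to the menu center) to
    (emotion, intensity): the center is the neutral face and the
    intensity grows toward the edge of the circle."""
    r = math.hypot(x, y)
    intensity = min(r / menu_radius, 1.0)
    if intensity == 0.0:
        return None, 0.0  # neutral face at the center
    angle = math.atan2(y, x) % (2 * math.pi)
    sector = int(angle / (2 * math.pi) * len(EMOTIONS))
    return EMOTIONS[sector], intensity
\end{verbatim}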
The grid menu allows the selection of any face expression, regardless of whether it represents an emotion.
In particular, users can select emojis with decorations such as hearts, tears, or glasses.
In this case we render these decorations on top of the avatar's face.
Transitions between two face expressions are made by interpolating the values of each FACS action unit.
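A minimal sketch of such a transition, assuming expressions are stored as dictionaries of action-unit values as in the sketch above; the linear blend is an assumption, and the actual easing may differ.
\begin{verbatim}
def interpolate(source, target, t):
    """Blend two FACS configurations; t goes from 0 (source) to 1
    (target). Action units absent from a configuration are treated as 0."""
    units = set(source) | set(target)
    return {au: (1 - t) * source.get(au, 0.0) + t * target.get(au, 0.0)
            for au in units}
\end{verbatim}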
We designed several techniques for controlling the intensity, duration, and ending of face expressions all at once.
The first technique maps the intensity to the controller trigger.
With the second technique, users have to shake the controller, and the intensity is mapped to the shaking speed.
The third technique is similar: users roll the controller, and the intensity is mapped to the roll angle.
The fourth technique draws an elastic controlled by the touchpad position.
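The sketches below illustrate the first three mappings; the normalization ceilings and the clamping are illustrative assumptions, not the calibrated values of our techniques.
\begin{verbatim}
def clamp01(v):
    return max(0.0, min(v, 1.0))

def intensity_from_trigger(trigger_value):
    """First technique: the analog trigger value in [0, 1] is used directly."""
    return clamp01(trigger_value)

def intensity_from_shake(speed_m_s, max_speed_m_s=2.0):
    """Second technique: shaking speed normalized to [0, 1].
    The 2 m/s ceiling is an illustrative assumption."""
    return clamp01(speed_m_s / max_speed_m_s)

def intensity_from_roll(roll_deg, max_roll_deg=90.0):
    """Third technique: roll angle normalized to [0, 1].
    The 90-degree full-intensity roll is an illustrative assumption."""
    return clamp01(abs(roll_deg) / max_roll_deg)
\end{verbatim}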
\subsubsection{Discussion}
RayCursor has issues with convex shapes, long shapes, and dense areas.
For emotions, there is a trade-off between isomorphic and non-isomorphic control.
\section{Conclusion}