Other research investigated technologies that sense other gesture properties, such as the part of the finger touching the surface \cite{harrison11}, distinguish contacts from different users \cite{dietz01}, or analyze hand posture \cite{murugappan12}.
This has been an active research area in recent decades, so this is only a brief overview of the work on this topic.
In this work, we were interested in \defword{finger identification}.
It was the master internship work of Alix Goguey, which I co-supervised with Géry Casiez.
We collaborated with Fanny Chevalier and Nicolas Roussel in our research team, as well as Daniel Vogel from the University of Waterloo.
Finger identification is an additional hand gesture property, which indicates which finger of which hand produced a given contact point.
There is still no technology that senses this property directly in consumer electronic products.
A workaround with existing technologies is to ask users to press all fingers before releasing some of them \cite{lepinski10}.
Other projects identify fingers by processing the sensed data differently.
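As an illustration, the registration workaround can be sketched as follows. This is a hypothetical minimal implementation, not the one described in \cite{lepinski10}: the finger labels, function names, and the left-to-right assignment assumption are ours.

```python
# Hypothetical sketch of the "all fingers down first" workaround:
# once all ten contacts are present, label them left to right;
# fingers lifted afterwards leave the remaining contacts identified.

FINGERS = ["L5", "L4", "L3", "L2", "L1",   # left pinky .. left thumb
           "R1", "R2", "R3", "R4", "R5"]   # right thumb .. right pinky

def register_fingers(contacts):
    """contacts: list of (contact_id, x, y) for all ten touch points.

    Returns a dict mapping contact_id -> finger label, assuming a
    roughly horizontal hand posture so x-order matches finger order.
    """
    if len(contacts) != len(FINGERS):
        raise ValueError("registration requires all ten fingers down")
    ordered = sorted(contacts, key=lambda c: c[1])  # sort by x position
    return {cid: finger for (cid, _, _), finger in zip(ordered, FINGERS)}

def identified_chord(mapping, still_down):
    """Labels of the fingers still touching after some were lifted."""
    return [mapping[cid] for cid in still_down if cid in mapping]
```

This only identifies fingers for the chord remaining after the registration gesture, which is precisely the limitation that motivates sensing the property directly.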
%Yet, this research field contributed to the relative success of the last generation of VR headsets.
%However, more research is still necessary to discover the benefits and best ways to interact in immersive virtual environments.
The studies I will describe in this section focus on input methods for immersive virtual reality.
This was Marc Baloup's work, during his master internship, which I co-supervised with Géry Casiez, and his Ph.D., which I co-supervised with Géry Casiez and Martin Hachet.
It was part of the Avatar project funded by Inria.
In this work, we take immersive virtual reality as a context that constrains the input methods we can use, and in which users have to perform specific tasks.
One of these constraints is that the users cannot see their own bodies because the virtual environment covers their entire field of view.
Therefore, either a motion capture system senses the users' body movements, or they hold input devices in their hands.
Users interact with their environment mostly with gestures, but also with buttons, touchpads, and joysticks on handheld devices.
% So, how do we do it? Some already do (cognition-based behavioral robotics, such as the FLOWERS team in Bordeaux), but how do we include that in interactive systems? Would your ``seven stages of reaction'' model make it possible to better take this into account, or to take it into account in the system, etc.?
% And what perspectives do you open?
The idea here is that interaction is not two separate loops, one on the user side and one on the system side.
Once connected, these loops form a Human-System loop that cycles between the user and the system.
Therefore, a smooth interaction between a user and an interactive system requires efficient connections between them.
Here we focus on computer-as-a-tool systems, in which the users have the initiative.
Hence, the system must not be a limiting factor for the user's sensorimotor loop: it must react in \defword{real-time}.
But the notion of real-time is context-dependent.
The visual system requires a loop of about \qty{100}{\hertz}, the touch system about \qty{1000}{\hertz}, and the audio system about \qty{40000}{\hertz}.
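These rates translate into per-cycle time budgets for the whole sense-compute-actuate cycle; the arithmetic is simply the period $1/f$, as this small sketch shows (the channel names and rates are the approximate figures from the text):

```python
def cycle_budget_ms(rate_hz):
    """Time budget per cycle, in milliseconds, for a loop at rate_hz."""
    return 1000.0 / rate_hz

# Approximate sensorimotor loop rates discussed in the text.
RATES_HZ = {"visual": 100, "touch": 1000, "audio": 40000}

for channel, rate in RATES_HZ.items():
    print(f"{channel}: {cycle_budget_ms(rate):.3f} ms per cycle")
# visual: 10.000 ms, touch: 1.000 ms, audio: 0.025 ms
```

At \qty{40000}{\hertz}, the system has only \qty{25}{\micro\second} per cycle, which is why audio pipelines are engineered very differently from visual ones.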
We know that breaking this loop affects perception.
For example, a disruption of the human visual input stream prevents people from seeing changes in their visual field~\cite{rensink97}.
Conversely, as we discussed in \refchap{chap:output}, force-feedback devices use force models computed at \qty{1000}{\hertz}, but the force model to be applied can be updated at a lower frequency without affecting the perception of a shape.
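This multirate structure can be sketched as follows; this is hypothetical illustrative code, not a specific device's API, and the spring model, rates, and function names are our assumptions.

```python
# Hypothetical multirate sketch: the inner haptic loop evaluates a
# local force model at 1 kHz, while the model parameters (e.g. the
# stiffness at the current contact point) are refreshed at a much
# lower rate. Both loops are simulated in one thread for clarity.

HAPTIC_HZ = 1000   # inner force-rendering loop rate
MODEL_HZ = 100     # slower force-model update rate (assumed)

def render_force(stiffness, penetration):
    """Simple spring model: the local model the fast loop evaluates."""
    return stiffness * penetration

def run(duration_s, get_stiffness, get_penetration):
    """Simulate the loop; returns the force rendered at each 1 ms step."""
    forces = []
    stiffness = get_stiffness(0.0)
    steps = int(duration_s * HAPTIC_HZ)
    for step in range(steps):
        t = step / HAPTIC_HZ
        # refresh the force model only every HAPTIC_HZ / MODEL_HZ steps
        if step % (HAPTIC_HZ // MODEL_HZ) == 0:
            stiffness = get_stiffness(t)
        forces.append(render_force(stiffness, get_penetration(t)))
    return forces
```

Between two model updates, the fast loop keeps rendering forces from slightly stale parameters, which the text notes is imperceptible for shape rendering.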
Hence, a finer characterization of the bottlenecks in this Human-System loop is necessary to avoid undesirable effects.
But we can also leverage the limits of our sensorimotor loop to create illusions that extend interaction.
More generally, considering the user and the system at the same level, with complementary roles, facilitates the holistic approach of my research.
% Loki project describes this as micro-dynamics.
% => meso dynamics: planning, tools selection