To illustrate this, I describe below two parts of rejected submissions and explain what was wrong with them.
It had to do with the motivations and the lack of focus on the sensorimotor loop.
It made me realize that this is a large hole in the two previous chapters.
-Therefore I will discuss the sensorimotor loop, its connection to both human and computing models, and postion this approach to the approach of the first two paragrtaphs.
+Therefore I will discuss the sensorimotor loop and its connection to both human and computing models, and position this approach relative to the approach of the first two chapters.
%After this assessment, I will connect the various angles of the sensorimotor loop, and discuss its positioning
%what it brings to the different disciplines around HCI.
%\fixme{something here about how I extend it}
However, we later investigated a related question, the effect of haptic feedback on the sense of embodiment, which we will discuss in \refsec{sec:embodiment}.
+
+\section{Contributions}
+
+% 1 solve local problem rather than global problem
+% 2 do not measure the real benefits
+
+In the previous chapters, I presented contributions that improve output by leveraging the sense of touch, and input by leveraging motor abilities.
+%In this chapter, we discussed in the \refsec{sec:limits} that this approach is not always sufficient to improve interaction.
+In the previous section, we saw that this approach is not always sufficient to improve interaction.
+The two examples above illustrate two major problems.
+The first problem arises when we solve a local problem rather than a global one.
+Such solutions sometimes patch a couple of issues, but when the problem is more general it is necessary to take a step back and analyze it in a holistic way.
+The second problem is the difficulty of identifying and measuring the real benefits of proposed solutions.
+
+The contributions below use the orthogonal approach as discussed above to improve interaction by leveraging the sensorimotor loop.
+The first contribution provides quantitative benefits with two interaction paradigms that leverage gestural interaction and vibrotactile feedback.
+%The first one uses semaphoric gestures to replace pointing in mid-air gestural interaction.
+The second contribution investigates qualitative benefits of the sensorimotor loop on the sense of embodiment of an avatar in Virtual Reality.
+
+\subsection{Haptic interaction paradigms}
+\label{sec:hapticparadigms}
+
+In \refsec{sec:limits} we first wanted to measure the quantitative benefits of haptic feedback for gestural interaction.
+The interaction paradigm we used was so inefficient that haptic feedback could not compensate for its limitations.
+Here we first propose a new interaction paradigm that circumvents these limitations.
+Then, we discuss a second paradigm that brings direct manipulation to tactile displays.
+I implemented both paradigms with the device described in \refsec{sec:limits}.
+
+\subsubsection{Summon interactions}
+\label{sec:summon}
+%Finger count \cite{bailly10}
+%Shoesense \cite{bailly12}
+
+The main limitations of 3D gestural interaction we discussed in \refsec{sec:limits} are tracking difficulties and the lack of segmentation.
+The users are tracked without interruption, which makes gesture segmentation difficult.
+Moreover, every gesture the users perform is potentially interpreted.
+This is called the \defword{Midas Touch}, in reference to the king of Greek mythology whose curse turned everything he touched into gold.
+There are also issues when the user is outside the sensor's field of view, either at the edges or when occluded.
+The users need additional feedback in these situations to avoid usability issues, and even with such feedback, interaction remains more complicated.
+
+These issues make it difficult to use standard GUI widgets.
+We discussed in \refsec{sec:limits} the simple case of buttons that require a different activation mechanism.
+3D gestural interfaces typically use dwell buttons that require users to hold their hand still over a button for a couple of seconds to select it.
+We proposed to simply add haptic feedback, but we were unable to measure a quantitative benefit.
+I believe we need deeper changes to improve interaction in this context.
+Therefore we proposed a different paradigm that does not rely on pointing \& selection.
+This new paradigm relies on summoning \& selection~\cite{gupta17}.
+
+This paradigm leverages a combination of semaphoric gestures, continuous gestures, and tactile feedback (\reffig{fig:summonexample}).
+We first defined a segmentation hand posture (an open hand) to summon the GUI elements.
+Then we defined a different hand posture for different kinds of widgets: buttons, sliders, knobs, switches, spinboxes, and paired buttons.
+It is of course possible to add other kinds of widgets with other hand postures.
+When the users perform one of these postures, they can select one of the widgets of this type.
+They receive a \qty{150}{\ms}/\qty{350}{\hertz} vibration pulse to confirm the selection, and the currently selected widget is visually highlighted.
+If the GUI has several widgets of this type, the users can disambiguate with a continuous movement.
+Then they perform a gesture for manipulating the widget and receive immediate haptic feedback with continuous vibrations on the thumb and index finger.
+For example, they can pinch and drag to move a slider knob.
+They can release the knob by releasing the pinch and release the slider by performing the segmentation gesture again.
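+
+To make this dialogue explicit, the minimal Python sketch below outlines the underlying state machine; the posture names, the \texttt{gui} and \texttt{haptics} interfaces, and the handler signatures are illustrative assumptions, and only the overall summon, disambiguate, manipulate, and release flow follows the description above.
+\begin{verbatim}
+# Illustrative state machine for summon & select (not the actual
+# implementation). The gui and haptics objects are assumed interfaces
+# for widget lookup and vibrotactile output.
+from enum import Enum, auto
+
+class State(Enum):
+    IDLE = auto()          # nothing summoned
+    SUMMONED = auto()      # widgets of one type summoned, one selected
+    MANIPULATING = auto()  # selected widget grabbed and manipulated
+
+SEGMENTATION_POSTURE = "open_hand"
+WIDGET_POSTURES = {"pinch": "slider", "fist": "knob"}  # assumed mapping
+
+class SummonSelect:
+    def __init__(self, gui, haptics):
+        self.gui, self.haptics = gui, haptics
+        self.state, self.widget = State.IDLE, None
+
+    def on_posture(self, posture):
+        if posture == SEGMENTATION_POSTURE:
+            # Segmentation posture: summon widgets, or release everything.
+            self.state = State.SUMMONED if self.state == State.IDLE else State.IDLE
+            self.widget = None
+        elif self.state == State.SUMMONED and posture in WIDGET_POSTURES:
+            # Select the first widget of the requested type.
+            self.widget = self.gui.first_widget(WIDGET_POSTURES[posture])
+            self.haptics.pulse(duration_ms=150, freq_hz=350)  # confirmation
+            self.gui.highlight(self.widget)
+
+    def on_move(self, delta):
+        if self.state == State.SUMMONED and self.widget is not None:
+            # Continuous movement disambiguates widgets of the same type.
+            self.widget = self.gui.neighbor(self.widget, delta)
+            self.gui.highlight(self.widget)
+        elif self.state == State.MANIPULATING:
+            self.widget.adjust(delta)          # e.g. drag a slider knob
+            self.haptics.vibrate(freq_hz=350)  # continuous confirmation
+
+    def on_pinch(self, pressed):
+        if self.state == State.SUMMONED and self.widget is not None and pressed:
+            self.state = State.MANIPULATING
+        elif self.state == State.MANIPULATING and not pressed:
+            self.state = State.SUMMONED  # widget released, still summoned
+            self.haptics.stop()
+\end{verbatim}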
+
+% Upon summoning, a 150ms pulse is played in both rings to indicate that the slider is summoned. When the user enters the drag state, a continuous pulse starts playing in both rings to mirror the grip of the slider bar. The pulse stops upon exit from the drag state. To reduce any per- ceived irritability from the vibration, the amplitude was set just above the perceivable level and the frequency was set at 350Hz. The 150ms pulse is played again upon release.
+
+We conducted user studies to measure the benefits of this new paradigm.
+In the first one, we showed that this paradigm avoids the Midas Touch issues, and we compared two disambiguation mechanisms.
+In the second study, we showed that this paradigm has quantitative and qualitative benefits compared to mid-air pointing.
+Despite these benefits, this new paradigm has challenges that are still to be addressed.
+In particular, it relies on semaphoric gestures that users have to know.
+It contradicts Nielsen's \emph{recognition rather than recall} heuristic~\cite{nielsen90,nielsen94}, which is one of the essential benefits of the point \& select paradigm.
+Therefore, we still have to evaluate the discoverability and learnability of the gestures and improve them if necessary~\cite{cockburn14}.
+We can for example encourage learnability and discoverability with feedforward visual cues in the vicinity of the widgets~\cite{malacria13}.
+
+\input{figures/summonexample.tex}
+
+\subsubsection{Haptic Direct Manipulation}
+\label{sec:hapticdm}
+
+We discussed the concept of direct manipulation in \refsec{sec:systemarch}~\cite{schneiderman83}.
+It is one of the most important concepts of GUIs.
+Its properties provide valuable usability benefits that contributed greatly to the success of GUIs over command line interfaces.
+Yet, this paradigm was tailored for visual interfaces.
+The question of whether this concept could be used or adapted for tactile displays remained open.
+Therefore we studied the adaptation of direct manipulation to tactile displays~\cite{pietrzak15,gupta16,gupta16a}.
+
+The most challenging direct manipulation property for tactile displays is certainly the one stating that objects of interest have to be visible.
+Contrary to vision, it is difficult to perceive an overview of the environment at a glance with the sense of touch.
+Vision is particularly efficient at glancing not only because of the high density and sensitivity of photoreceptor cells in the retina but also because of the high mobility of the eyes and the ability of the brain to process this sensorimotor loop.
+Therefore, we leveraged the sensorimotor loop with the sense of touch and the motor ability to make objects of interest \emph{touchable} and \emph{explorable}.
+
+In this new paradigm, the users can control a pointer that they can move continuously with gestures and perceive with tactile feedback.
+When the cursor moves, it feels like a vibration moving continuously on the skin.
+This property makes the tactile space explorable.
+Then, the sensation is different when the cursor hovers over a target or moves over the background, which makes objects touchable.
+Input modifiers such as the number of contact points or the number of contact repetitions are used to switch between the \emph{idle}, \emph{tracking}, and \emph{dragging} states~\cite{buxton90}.
+With these interactions we can implement fundamental direct manipulation interaction techniques such as \emph{pointing}, \emph{selection}, and \emph{manipulation}.
+
+We implemented this paradigm with a proof of concept 1D \ang{360} tactile display around the wrist (\reffig{fig:tactiledm}).
+We used the prototype described in \refsec{sec:limits}, which has four EAI C2 tactors.
+The continuously moving cursor is implemented with the funneling illusion I described in \refchap{chap:output} and illustrated in \reffig{fig:illusions}.
+The display is divided into four quarters in between the actuators.
+The cursor is a phantom sensation created by interpolating the signal amplitude of the two edge actuators of the corresponding quarter.
+Targets are represented with a \qty{250}{\hertz} frequency and the background with \qty{100}{\hertz}.
+Not only are the two vibrations easily distinguishable, but the \qty{100}{\hertz} one is also subtle, which avoids or at least reduces numbness.
+The inputs use a multitouch smartwatch.
+Up and down swipes move the cursor in either direction.
+The tracking state triggers with one contact point and the dragging state with two contact points.
+The cursor is not felt in the idle state, to avoid numbness.
+The details about the feedback and state machines are presented in the paper~\cite{gupta16}.
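+
+As a rough sketch of this rendering, assuming evenly spaced actuators and a simple linear amplitude interpolation (the actual interpolation and calibration are detailed in the paper~\cite{gupta16}):
+\begin{verbatim}
+# Sketch of the phantom cursor: the cursor angle is mapped to the two
+# tactors that bracket it, with linearly interpolated amplitudes.
+# The layout and the linear interpolation are simplifying assumptions.
+ACTUATOR_ANGLES = [0, 90, 180, 270]  # four tactors around the wrist
+
+def render_cursor(angle_deg, on_target, a_max=1.0):
+    """Return (carrier frequency in Hz, amplitude of each tactor)."""
+    freq = 250 if on_target else 100      # targets vs. background
+    quarter = int(angle_deg // 90) % 4
+    a, b = quarter, (quarter + 1) % 4     # the two bracketing tactors
+    t = (angle_deg % 90) / 90.0           # position within the quarter
+    amplitudes = [0.0, 0.0, 0.0, 0.0]
+    amplitudes[a] = a_max * (1.0 - t)
+    amplitudes[b] = a_max * t
+    return freq, amplitudes
+
+# A cursor at 45 degrees over the background drives tactors 0 and 1 at
+# half amplitude each, with a 100 Hz carrier.
+print(render_cursor(45.0, on_target=False))  # (100, [0.5, 0.5, 0.0, 0.0])
+\end{verbatim}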
+
+\input{figures/tactiledm.tex}
+
+We validated the concept with user evaluations of the proof of concept prototype.
+First, we validated that users are able to navigate and distinguish targets with a JND experiment on the maximum number of targets they were able to count.
+On average, participants were able to count up to 19 targets.
+Then we evaluated the pointing performance and confirmed it globally follows Fitts' law.
+We noted however that participants made faster selections when the targets were exactly over an actuator position than when they were in-between.
+We proposed a refined pointing model that takes this observation into account.
+Finally, we designed two tactile menus with 4 and 8 items and we showed that participants were fast and accurate.
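+
+As a reminder, this pointing analysis relies on Fitts' law, which predicts the movement time $MT$ to acquire a target of width $W$ at distance $D$ as
+\[ MT = a + b \log_2\!\left(\frac{D}{W} + 1\right), \]
+where $a$ and $b$ are empirically fitted constants; our refined model adjusts this prediction depending on the target position relative to the actuators (see the paper for its exact form~\cite{gupta16}).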
+
+\subsection{The sense of embodiment in Virtual Reality}
+\label{sec:embodiment}
+
+We discussed in \refsec{sec:qualitative} that haptic feedback has qualitative benefits, in particular when it restores haptic sensations that are non-existent or limited in gestural or multi-touch interaction.
+This is especially important in \defwords{virtual environments}{virtual environment} in which we would like to immerse users.
+Slater defined \defword{immersion} as “the extent to which the actual system delivers a surrounding environment”~\cite{slater97}.
+Therefore, this notion refers to technological aspects that contribute to immersing the user in the virtual environment.
+\defword{Presence} is rather the subjective feeling of being inside a virtual environment~\cite{slater09,slater99}.
+Witmer and Singer proposed a questionnaire to measure users' presence in a virtual environment~\cite{witmer98}.
+They identified four factors that influence the feeling of presence: the ability to \emph{control} objects, \emph{sensory} stimulation, \emph{distraction} from the real world, and \emph{realism}.
+We used this questionnaire in the study presented in \refsec{sec:qualitative} and observed that haptic feedback improved presence, in particular sensory and realism factors.
+
+%Virtual Reality headsets create immersive virtual environments that take the whole field of view of users.
+%Therefore they cannot even see their physical body
+The users are generally represented in virtual environments with an \defword{avatar}.
+This avatar usually has a visual representation, which is not necessarily realistic or even human~\cite{olivier20}.
+The users explore the virtual environment through this avatar.
+They can also perform operations that are impossible in the physical world, like telekinesis or teleportation.
+In fact, the appearance or behavior of the avatar has an influence on the way the users behave in the virtual environment.
+For example, the \defword{Proteus effect} describes the way the visual representation of an avatar influences the behavior of the users that control it~\cite{yee07}.
+Conversely, visuotactile stimulation can lead people to consider a rubber hand as part of their body~\cite{botvinick98}, or to feel that they have a sixth finger on their hand~\cite{hoyet16}.
+These effects are examples of extensions of the \defword{sense of embodiment} of a virtual body~\cite{kilteni12}.
+%, or artificial artifacts such as prostheses or tools for example~\cite{devignemont11}.
+% Embodiment: E is embodied if and only if some properties of E are processed in the same way as the properties of one’s body.
+Kilteni \etal discuss three subcomponents of the sense of embodiment that were extensively studied in the literature~\cite{kilteni12}.
+\defword{Self-location} refers to the “volume in space where one feels to be
+located”.
+\defword{Agency} refers to “the sense of having global motor control” over the virtual body.
+And \defword{ownership} refers to “one’s self-attribution of a body”.
+
+
+\subsubsection{Methodologies for measuring the sense of embodiment}
+
+There are a number of questionnaires in the literature to measure the sense of embodiment.
+We discuss some of them in one of our studies~\cite{richard22}.
+%We can measure the embodiment of an avatar in a virtual environment with questionnaires~\cite{roth20,gonzalezfranco18,peck21}.
+There are recent attempts to standardize these questionnaires.
+For example Roth \etal propose a questionnaire with three subcomponents: \defword{ownership}, \defword{agency}, and perceived change in the \defword{body schema}~\cite{roth20}.
+The latter notion is broader than \emph{self-location} as it refers to any difference the users may perceive between their own body and the avatar.
+Gonzalez Franco and Peck proposed another questionnaire in which they added three subcomponents to Kilteni \etal's: \emph{tactile sensations}, \emph{external appearance}, and \emph{response to external stimuli}~\cite{gonzalezfranco18}.
+They later improved and simplified their questionnaire, and evaluated it with many different tasks~\cite{peck21}.
+The subcomponents of this new questionnaire are: \emph{appearance}, \emph{response}, \emph{ownership}, and \emph{multi-sensory}~\cite{peck21}.
+Interestingly, \emph{agency} is not an identified subcomponent but is rather distributed among the others, in particular the \emph{response} subcomponent.
+%This does not mean that agency, touch or localization are not important for embodiment, (Kilteni et al., 2012), but rather that they are related to other senses and instead contribute to one of the four prominent embodiment categories. The questions on motor control and agency were mostly assigned to the Response category
+
+These questionnaires are typically used in controlled experiments after the participants performed a specific task in a virtual environment.
+We compare the overall embodiment and its subcomponent scores in two or more conditions to identify the effects of these conditions.
+The experimental protocol we can use depends on the task.
+For example, some studies use a threat like a virtual fire or sharp blade as an objective measurement of embodiment~\cite{dewez19,argelaguet16}.
+Subjects are considered embodied if they attempt to avoid the threat despite its virtual nature.
+The issue is that this kind of metric requires participants to be surprised by the threat.
+However, this cannot be guaranteed with a \defword{within-subjects design} in which participants perform all the conditions one after the other.
+In such situations, the experiment must follow a \defword{between-subjects design}, in which separate groups of participants each perform a different condition.
+% There are however several other factors that influence the choice of experimental setup.
+% For example, between-subjects studies require more participants to reach the same statistical power.
+% Each participant of a within-subject study provides less data per condition if we would like to keep the same experiment duration.
+
+%When designing virtual embodiment studies, one of the key choices is the nature of the experimental factors, either between-subjects or within-subjects. However, it is well known that each design has ad- vantages and disadvantages in terms of statistical power, sample size requirements and confounding factors. This paper reports a within- subjects experiment with 92 participants comparing self-reported embodiment scores under a visuomotor task with two conditions: synchronous motions and asynchronous motions with a latency of 300 ms. With the gathered data, using a Monte-Carlo method, we created numerous simulations of within- and between-subjects exper- iments by selecting subsets of the data. In particular, we explored the impact of the number of participants on the replicability of the results from the 92 within-subjects experiment. For the between-subjects simulations, only the first condition for each user was considered to create the simulations. The results showed that while the replicabil- ity of the results increased as the number of participants increased for the within-subjects simulations, no matter the number of partici- pants, between-subjects simulations were not able to replicate the initial results. We discuss the potential reasons that could have led to this surprising result and potential methodological practices to mitigate them.
+%galvanic skin response \cite{kokkinara14}
+
+\paragraph{User study}
+
+In a between-subjects study, participants are assigned to one of the conditions.
+There is therefore potentially a bias if the groups are not well balanced.
+We investigated this effect on embodiment studies~\cite{richard22}.
+We experimented with a visuomotor task comprising a synchronous condition and an asynchronous condition with a latency of \qty{300}{\ms} between the inputs and the output response.
+This value is known to have a medium effect on embodiment in the literature~\cite{botvinick98,kilteni12,kokkinara14}.
+We chose a simple experimental task that requires no special equipment to facilitate replication.
+Participants were seated on a chair, with their legs on a table, and had to perform gestures with their feet (\reffig{fig:expewithin}), similarly to~\cite{kokkinara14}.
+A total of \num{92} participants performed this task in a balanced within-subjects design.
+To study the effect of the sample size on the statistical analysis, we analyzed random data subsets of \num{10} to \num{92} participants.
+To study the effect of the experiment design, we simulated between-subjects designs by keeping only the first condition each participant performed.
+We considered the analysis of all participants with the within-subjects design as the ground truth.
+Consistent with the literature, this analysis shows that latency reduces the sense of embodiment~\cite{botvinick98,kilteni12,kokkinara14}.
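+
+The simulation procedure can be summarized by the sketch below; the data layout and the statistical test are illustrative stand-ins for the actual analysis reported in the paper~\cite{richard22}.
+\begin{verbatim}
+# Illustrative sketch of the Monte-Carlo simulations (assumed data
+# layout). Each participant record holds both condition scores and the
+# condition performed first; a t-test stands in for the actual test.
+import random
+from scipy import stats
+
+def shows_effect(a, b, paired, alpha=0.05):
+    test = stats.ttest_rel if paired else stats.ttest_ind
+    return test(a, b).pvalue < alpha
+
+def simulate(participants, n, runs=1000):
+    within_hits = between_hits = 0
+    for _ in range(runs):
+        subset = random.sample(participants, n)
+        # Within-subjects simulation: both conditions of everyone.
+        sync = [p["sync"] for p in subset]
+        async_ = [p["async"] for p in subset]
+        within_hits += shows_effect(sync, async_, paired=True)
+        # Between-subjects simulation: first condition performed only.
+        first_sync = [p["sync"] for p in subset if p["first"] == "sync"]
+        first_async = [p["async"] for p in subset if p["first"] == "async"]
+        between_hits += shows_effect(first_sync, first_async, paired=False)
+    return within_hits / runs, between_hits / runs
+\end{verbatim}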
+
+\begin{figure}[htb]
+ \centering
+ \includegraphics[height=3.9cm]{figures/within-setup}\hfill
+ \includegraphics[height=3.9cm]{figures/within-environment}\hfill
+ \includegraphics[height=3.9cm]{figures/within-avatars}%
+ \caption[Setup of the embodiment methodology study.]{The user seated on a chair, performing leg movements, the virtual environment, and the two avatars.}
+ \label{fig:expewithin}
+\end{figure}
+
+Our results showed that all the random subsets of at least \num{40} participants analyzed with the within-subjects design gave the same result as the ground truth.
+However, regardless of the number of participants, we did not observe the ground truth effect with the between-subjects analyses.
+Based on the debriefing with participants, our main explanation of this phenomenon is that participants needed a reference to provide a meaningful answer for each question.
+Therefore they calibrated their answers to the second condition relative to the first one.
+Hence, we could not measure the effect with the first condition only.
+We discuss recommendations and possible mitigation strategies in the paper~\cite{richard22}.
+Interestingly, when we analyzed the second condition as a kind of calibrated between-subjects design we observed the ground truth effect.
+However, the effect size was about half that of the within-subjects analysis.
+Therefore, we wonder if both designs even measured the same phenomenon.
+We are still working on this subject, in particular to provide calibration methods and metrics to balance groups for between-subjects designs in embodiment studies.
+
+\subsubsection{Haptics and the sense of embodiment}
+
+The study of the causes and effects of the sense of embodiment of an avatar in virtual reality is a hot topic in the Virtual Reality community.
+%Results show that the sense of agency is stronger for less realistic virtual hands which also provide less mismatch between the participant's actions and the animation of the virtual hand. In contrast, the sense of ownership is increased for the human virtual hand which provides a direct mapping between the degrees of freedom of the real and virtual hand.
+Interestingly, all the embodiment questionnaires we discussed above have subcomponents related to the sensorimotor loop.
+This suggests that the sensorimotor loop is essential to the sense of embodiment.
+For example, people have a stronger sense of ownership when they perform actions with a visually realistic hand, and a stronger sense of agency when they embody an abstract-looking virtual hand~\cite{argelaguet16}.
+Following this idea, we studied the effect of haptics on the sense of embodiment.
+
+We performed a user study to compare embodiment for a drawing task with force feedback, tactile feedback, and a control condition with no haptic feedback~\cite{richard20}.
+The participants were seated on a chair, and they had to paint a mandala in an immersive virtual environment with a Phantom Desktop\footnote{Today called Touch X by 3D Systems \url{https://www.3dsystems.com/haptics-devices/touch-x}} device (\reffig{fig:expeembodiment}).
+In the force feedback condition, they felt the surface resistance of hard objects and the viscosity of the paint spheres at the bottom.
+In the tactile condition, they felt a \qty{250}{\hertz} vibration whose amplitude was proportional to the interpenetration distance to the canvas surface.
+We attached an EAI C2 tactor to vibrate the Phantom stylus (\reffig{fig:expeembodiment}).
+In the control condition, the Phantom was only used as an input device, with no force or vibration.
+We measured embodiment with Gonzalez Franco and Peck's first standardized questionnaire\footnote{The second one was not published at the time.} with the \emph{agency}, \emph{self-location}, \emph{ownership}, and \emph{tactile sensations} subcomponents~\cite{gonzalezfranco18}.
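+
+As an illustration, the tactile condition amounts to driving the \qty{250}{\hertz} carrier with an amplitude proportional to the penetration depth $d$ of the stylus tip, for instance
+\[ A(d) = A_{max} \min\!\left(\frac{d}{d_{sat}}, 1\right), \]
+where the maximum amplitude $A_{max}$ and the saturation depth $d_{sat}$ are hypothetical parameters, not the exact mapping used in the study.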
+
+\begin{figure}[htb]
+ \centering
+ \includegraphics[height=3cm]{figures/embodimentdevice}\hfill
+ \includegraphics[height=3cm]{figures/embodimentenvironment}\hfill
+ \includegraphics[height=3cm]{figures/embodimenttask}%
+ \caption[Setup of the haptics and embodiment study.]{Haptic device setup, virtual environment and task of the virtual embodiment study.}
+ \label{fig:expeembodiment}
+\end{figure}
+
+We observed a stronger embodiment in the force feedback condition compared to the control condition.
+In particular, participants had a higher sense of ownership.
+However, we did not observe these differences between the tactile and control conditions.
+Besides the detailed discussion in the paper, it is important to note that in some ways this task favored the force feedback condition over the tactile condition.
+Participants certainly expected to feel the stiffness of hard surfaces.
+Similarly to realistic visual feedback~\cite{argelaguet16}, this realistic force feedback aspect reinforced the sense of ownership.
+On the contrary, the vibrotactile feedback was symbolic, because participants only received tactile guidance, and we did not observe any improvement in embodiment.
+It does not necessarily mean that the sense of embodiment requires realistic haptic feedback.
+For example, non-realistic visual feedback improved the sense of agency~\cite{argelaguet16}.
+But in our task, force feedback \emph{constrained} the stylus tip movement to prevent it from getting through the surface, while vibrotactile feedback only \emph{guided} it.
+Therefore I believe the force feedback condition helped participants focus on the painting task rather than on controlling the stylus, which reinforced sensorimotor integration.
+The workload analysis discussed in the paper supports this explanation.
+%It gave users immediate feedback that could guide them to stay close to the spatial location of the surface.
+Further studies should investigate other tasks or a variation of this one in which vibrotactile feedback promotes sensorimotor integration.
+% is expected, like feeling surface textures.
+
+
\section{Computing and the sensorimotor loop}
In the previous chapters, we discussed several examples of how haptics as the sense of touch on one side, and haptics as the motor ability on the other side provide useful interactive properties.
Gaver describes four possibilities~\cite{gaver91}.
Two of them are desired: a perceived affordance and a true reject (there is no affordance, and no affordance is perceived).
He also describes hidden affordances and false affordances.
-Because of this, Norman makes a distinction between an affordance, as a property, and a \defword{signifier} which is a perceivable property that advertises the existence of an affordance~\cite{norman02}.
+Because of this, Norman now makes a distinction between an affordance, as a property, and a \defword{signifier}, which is a perceivable property that advertises the existence of an affordance~\cite{norman02}.
%Perception/action cycle~\cite{gibson79}
%Sensorimotor loop~\cite{oregan01a}
Therefore this is essentially a measure of motor performance.
This is an active research topic with new contributions every year for decades.
HCI research usually uses MacKenzie's throughput-based formulation \cite{mackenzie92}, and experimental protocols were adapted to 2D \cite{mackenzie92a}, 3D \cite{murata01}, and similar tasks like steering~\cite{accot97}.
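+
+In this formulation, pointing performance is summarized by a throughput expressed in bits per second,
+\[ TP = \frac{ID_e}{MT}, \qquad ID_e = \log_2\!\left(\frac{D_e}{W_e} + 1\right), \]
+where $MT$ is the mean movement time and $ID_e$ the effective index of difficulty, computed from the observed movement distance $D_e$ and endpoint spread $W_e$~\cite{mackenzie92}.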
+Other researchers propose alternative interpretations, such as a time/error trade-off~\cite{guiard11}.
%ballistic\cite{meyer88}
Besides motor-behavior models, we discussed perceptual empirical evaluations and models in \refchap{chap:output}.
The models we discussed take into account human behavior, and to some extent the way interaction techniques work, but not necessarily how they are implemented.
For example, they do not take into account the transfer function between the pointing device and the cursor.
They do not necessarily take into account the integration or separation of degrees of freedom~\cite{mackinlay90} or the type of feedforward~\cite{vermeulen13}.
-In my opinion, this is a limitation of the generative aspect of these models, and I believe we must include more knowledge about the implementation into interaction models.
+In my opinion, this is a limitation of the generative aspect of these models, and I believe we must include more knowledge about the implementation into interaction models and, conversely, more knowledge about interaction models into implementations.
+This is one of the objectives of the Loki project team.
%Human processor: ~ : KLM
\defword{Arch}~\cite{arch92} and \defacronym{PAC}~\cite{coutaz87} rather combine inputs and outputs as a \emph{presentation} component, and add a \emph{controller} component that manages transitions between abstract inputs/outputs and domain-specific properties of the model/abstraction.
%The modern MVC architectures follow this structure as well.
The advantage of these architectures is to separate the objects of interest from the interaction with them.
-It is therefore easy to display several synchronized representations of the same object and provide multiple ways to manipulate them.
+It is therefore easier to display several synchronized representations of the same object and provide multiple ways to manipulate them.
These interactive properties contribute to leveraging human capacities and flexibility through \defword{multimodality}~\cite{nigay95,nigay04}.
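+
+A minimal sketch of such an architecture, with illustrative class and method names rather than the exact terminology of these references, could look as follows:
+\begin{verbatim}
+# PAC-like agent: one abstraction, several presentations, and a control
+# that keeps them synchronized. Names are illustrative assumptions.
+class Abstraction:
+    """Domain object: a single value of interest."""
+    def __init__(self, value=0):
+        self.value = value
+
+class Control:
+    """Mediates between the abstraction and its presentations."""
+    def __init__(self, abstraction):
+        self.abstraction = abstraction
+        self.presentations = []
+
+    def attach(self, presentation):
+        self.presentations.append(presentation)
+
+    def user_input(self, new_value):
+        self.abstraction.value = new_value   # any view can update it...
+        for p in self.presentations:         # ...and all views refresh.
+            p.render(self.abstraction.value)
+
+class SliderView:
+    def render(self, value):
+        print(f"slider at {value}")
+
+class TextView:
+    def render(self, value):
+        print(f"text field shows {value}")
+
+control = Control(Abstraction())
+control.attach(SliderView())
+control.attach(TextView())
+control.user_input(42)  # both representations display the new value
+\end{verbatim}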
%Seeheim \cite{green85}
%It is important to note that what the second entity does not perceive this physical effect, but its own interpretation of it.
-\paragraph{Seven stages of reaction}
+\subsection{Seven stages of reaction}
In \refsec{sec:humanbehavior} we discussed how people perceive their environment and in particular interactive systems.
We presented how Norman's theory of action (see \reffig{fig:sevenstages}) explains the difference between the conceptual model of the system, and the perceptual model the users have of it based on their perception.
-I suggest that interactive systems follow a similar perceptual scheme, as depicted on \reffig{fig:mysevenstages}.
+It is known that the implementation of a system depends on the ethnographic background of its programmers \cite{rode04}, and that its architecture reproduces the structure of the organization that designed it \cite{conway68}.
+Therefore, I suggest that interactive systems follow a perceptual scheme similar to the human one, and that their interaction forms a loop in itself.
+
+I illustrate this approach with an adaptation of Norman's theory of action, depicted in \reffig{fig:mysevenstages}.
% The system senses a physical effect in its environment.
% Then interprets it to form input events, which are filtered and normalized interpretations of these effects.
% The system combines these events into input phrases with interaction techniques.
Last, the physical effect can be inconsistent for the same command.
Some haptic devices can behave differently depending on ambient conditions (\eg temperature, finger moisture, cleanliness).
-\paragraph{Human and system behavior}
+\subsection{Human and system behavior}
Norman's theory of action is typically used to describe differences between the user's perceptual model and the system's conceptual model.
The adaptation of this theory to systems describes a similar difference, but the conceptual and perceptual models are inverted.
The systems' perceptual model of the user is based on its own conceptual model that describes its ability to interact with humans.
This is what I describe with the seven stages of reaction above (\reffig{fig:mysevenstages}).
Usability issues occur when these behaviors cannot connect together.
+Norman's theory of action and the seven stages of reaction must not be two separate concepts, but a unified process that creates real-time loops between the human and the system.
+One of the interesting properties of such connections is their ability to connect and disconnect.
+Some of them are planned, when the users know the affordance, or perceive a signifier of this affordance.
+Others are serendipitous as the users explore the interactive system.
There is a fundamental difference between human and system behavior though.
We design the system behavior thanks to our engineering skills and scientific knowledge.
% It potentially runs forever on an infinite input stream.
% This is a necessary mechanism to model and implement interaction with external agents.
+\subsection{Discussion}
-
-\section{Contributions}
-
-In the previous chapters, I presented contributions to improve output by leveraging the sense of touch, and input by leveraging the motor abilities.
-In this chapter, we discussed in the \refsec{sec:limits} that this approach is not always sufficient to improve interaction.
-The contributions below use the orthogonal approach as discussed above to improve interaction by leveraging the sensorimotor loop.
-The first contribution provide quantitative benefits with two interaction paradigms that leverage gestural interaction and vibrotactile feedback.
-%The first one uses semaphoric gestures to replace pointing in mid-air gestural interaction.
-The second contribution investigates qualitatives benefits of the sensorimotor loop on the sense of embodiment of an avatar in Virtual Reality.
-
-\subsection{Haptic interaction paradigms}
-\label{sec:hapticparadigms}
-
-In \refsec{sec:limits} we first wanted to measure the quantitative benefits of haptic feedback for gestural interaction.
-The interaction paradigm we used was so inefficient that haptic feedback could not compensate for its limitations.
-Here we propose a new interaction paradigm that gets around these limitations.
-Then, we will discuss a new interaction paradigm that brings direct manipulation to tactile displays.
-I implemented both paradigms with the device described in \refsec{sec:limits}.
-
-\subsubsection{Summon interactions}
-\label{sec:summon}
-%Finger count \cite{bailly10}
-%Shoesense \cite{bailly12}
-
-The main limitations of 3D gestural interaction we discussed in \refsec{sec:limits} are tracking difficulties and the lack of segmentation.
-The users are tracked without interruption.
-Therefore gesture segmentation is difficult.
-Moreover, every gesture the users perform is potentially interpreted.
-This is called \defword{Midas Touch}, as a reference to the curse of the king that turned everything he touched into gold in the Greek mythology.
-There are also issues when the user is outside the sensor field of view, either on the edges or when it is occluded.
-The users need additional feedback for this in order to avoid usability issues.
-And even though, it makes interaction more complicated.
-
-These issues make it difficult to use standard GUI widgets.
-We discussed in \refsec{sec:limits} the simple case of buttons that require a different activation mechanism.
-3D gestural interfaces typically use dwell buttons that require users to hold their hand still over a button for a couple of seconds to select it.
-We proposed to simply add haptic feedback, but we were unable to measure a quantitative benefit.
-I believe we need deeper changes to improve interaction in this context.
-Therefore we proposed a different paradigm that does not rely on pointing \& selection.
-This new paradigm relies on summoning \& selection~\cite{gupta17}.
-
-This paradigm leverages a combination of semaphoric gestures, continuous gestures, and tactile feedback (\reffig{fig:summonexample}).
-We first defined a segmentation hand posture (an open hand) to summon the GUI elements.
-Then we defined a different hand posture for different kinds of widgets: buttons, sliders, knobs, switches, spinboxes, and paired buttons.
-It is of course possible to add other kinds of widgets with other hand postures.
-When the users perform one of these postures, they can select one of the widgets of this type.
-They receive a \qty{150}{\ms}/\qty{350}{\hertz} vibration pulse to confirm the selection, and the currently selected widget is visually highlighted.
-If the GUI has several widgets of this type, the users can disambiguate with a continuous movement.
-Then they perform a gesture for manipulating the widget and receive immediate haptic feedback with continuous vibrations on the thumb and index finger.
-For example, they can pinch and drag to move a slider knob.
-They can release the knob by releasing the pinch and release the slider by performing the segmentation gesture again.
-
-% Upon summoning, a 150ms pulse is played in both rings to indicate that the slider is summoned. When the user enters the drag state, a continuous pulse starts playing in both rings to mirror the grip of the slider bar. The pulse stops upon exit from the drag state. To reduce any per- ceived irritability from the vibration, the amplitude was set just above the perceivable level and the frequency was set at 350Hz. The 150ms pulse is played again upon release.
-
-We conducted user studies to measure the benefits of this new paradigm.
-In the first one, we showed that this paradigm avoids the Midas touch issues, and we compared two disambiguation mechanisms.
-In the second study, we showed that this paradigm has quantitative and qualitative benefits compared to midair pointing.
-Despite these benefits, this new paradigm has challenges that are still to be addressed.
-In particular, it relies on semaphoric gestures that users have to know.
-It contradicts Nielsen's \emph{recognition rather than recall} heuristic~\cite{nielsen90,nielsen94}, which is one of the essential benefits of the point \& select paradigm.
-Therefore, we still have to evaluate the discoverability and learnability of the gestures and improve them if necessary~\cite{cockburn14}.
-We can for example encourage learnability and discoverability with feedforward visual cues in the vicinity of the widgets~\cite{malacria13}.
-
-\input{figures/summonexample.tex}
-
-\subsubsection{Haptic Direct Manipulation}
-\label{sec:hapticdm}
-
-We discussed the concept of direct manipulation inf \refsec{sec:systemarch}~\cite{schneiderman83}.
-It is one of the most important concepts of GUIs.
-Its properties provide valuable usability benefits that highly contributed to the success of GUIs over command line interfaces.
-Yet, this paradigm was tailored for visual interfaces.
-The question of whether this concept could be used or adapted to tactile display was open.
-Therefore we studied the adaptation of direct manipulation to tactile displays~\cite{pietrzak15,gupta16,gupta16a}.
-
-The most challenging direct manipulation property for tactile display is certainly the one stating that objects of interest have to be visible.
-Contrary to vision, it is difficult to perceive an overview of the environment at a glance with the sense of touch.
-Vision is particularly efficient at glancing not only because of the high density and sensitivity of photoreceptor cells in the retina but also because of the high mobility of the eyes and the ability of the brain to process this sensorimotor loop.
-Therefore, we leveraged the sensorimotor loop with the sense of touch and the motor ability to make objects of interest \emph{touchable} and \emph{explorable}.
-
-In this new paradigm, the users can control a pointer that they can move continuously with gestures and perceive with tactile feedback.
-When the cursor moves, it feels like a vibration moving continuously on the skin.
-This property makes the tactile space explorable.
-Then, the sensation is different when the cursor hovers over a target or moves over the background, which makes objects touchable.
-Input modifiers such as the number of contact points or the number of contact repetitions are used to switch between the \emph{idle}, \emph{tracking}, and \emph{dragging} stages~\cite{buxton90}.
-With these interactions we can implement fundamental direct manipulation interaction techniques such as \emph{pointing}, \emph{selection}, and \emph{manipulation}.
-
-We implemented this paradigm with a proof of concept 1D \ang{360} tactile display around the wrist (\reffig{fig:tactiledm}).
-We used the prototype described in \refsec{sec:limits}, which has four EAI C2 tactors.
-The continuously moving cursor is implemented with the funelling illusion I described in \refchap{chap:output} and illustrated on \reffig{fig:illusions}.
-The display is divided into four quarters in between the actuators.
-The cursor is a phantom sensation created by interpolating the signal amplitude of the two edge actuators of the corresponding quarter.
-Targets are represented with a \qty{250}{\hertz} frequency and the background with \qty{100}{\hertz}.
-Not only it is an easily distinguishable vibration, but the \qty{100}{\hertz} is subtle and avoids or at least reduces numbness.
-The inputs use a multitouch smartwatch.
-Up and down swypes move the cursor in either direction.
-The tracking states trigger with one contact point and the dragging state triggers with two contact points.
-The cursor is not felt in the idle state, to avoid numbness.
-The details about feedback and states machines are presented in the paper~\cite{gupta16}.
-
-\input{figures/tactiledm.tex}
-
-We validated the concept with user evaluations of the proof of concept prototype.
-First, we validated that users are able to navigate and distinguish targets with a JND experiment on the maximum number of targets they were able to count.
-On average, participants were able to count up to 19 targets.
-Then we evaluated the pointing performance and confirmed it globally follows Fitts' law.
-We note however participants made faster selections when the targets were exactly over an actuator position compared to in-between.
-We proposed a refined pointing model that takes into account this observation.
-Finally, we designed two tactile menus with 4 and 8 items and we showed that participants were fast and accurate.
-
-\subsection{The sense of embodiment in Virtual Reality}
-\label{sec:embodiment}
-
-We discussed in \refsec{sec:qualitative} that haptic feedback has qualitative benefits, in particular when it restores haptic sensations that are non-existent or limited in gestural or multi-touch interaction.
-This is especially important in \defwords{virtual environments}{virtual environment} in which we would like to immerse users.
-Slater defined \defword{immersion} as “the extent to which the actual system delivers a surrounding environment”~\cite{slater97}.
-Therefore, this notion refers to technological aspects that contribute to immersing the user in the virtual environment.
-\defword{Presence} is rather the subjective feeling of being inside a virtual environment~\cite{slater09,slater99}.
-Witmer and Singer proposed a questionnaire to measure users' presence in a virtual environment~\cite{witmer98}.
-They identified four factors that influence the feeling of presence: the ability to \emph{control} objects, \emph{sensory} stimulation, \emph{distraction} from the real world, and \emph{realism}.
-We used this questionnaire in the study presented in \refsec{sec:qualitative} and observed that haptic feedback improved presence, in particular sensory and realism factors.
-
-%Virtual Reality headsets create immersive virtual environments that take the whole field of view of users.
-%Therefore they cannot even see their physical body
-The users are generally represented in virtual environments with an \defword{avatar}.
-This avatar usually has a visual representation, which is not necessarily realistic or even human~\cite{olivier20}.
-The users explore the virtual environment through this avatar.
-They can also perform operations that are impossible in the physical world, like telekinesis or teleportation.
-In fact, the appearance or behavior of the avatar has an influence on the way the users behave in the virtual environment.
-For example, the \defword{Proteus effect} describes the way the visual representation of an avatar influences the behavior of the users that control it~\cite{yee07}.
-On the opposite, visuotactile stimulation can lead people to consider a rubber hand as part of their body~\cite{botvinick98}, or that they have a sixth finger on their hand~\cite{hoyet16}.
-These effects are examples of extensions of the \defword{sense of embodiment} of a virtual body~\cite{kilteni12}.
-%, or artificial artifacts such as prostheses or tools for example~\cite{devignemont11}.
-% Embodiment: E is embodied if and only if some properties of E are processed in the same way as the properties of one’s body.
-Kilteni \etal discuss three subcomponents of the sense of embodiment that were extensively studied in the literature~\cite{kilteni12}.
-\defword{Self-location} refers to the “volume in space where one feels to be
-located”.
-\defword{Agency} refers to “the sense of having global motor control” over the virtual body.
-And \defword{ownership} refers to “one’s self-attribution of a body”.
-
-
-\subsubsection{Methodologies for measuring the sense of embodiment}
-
-There is a number of questionnaires in the literature to measure the sense of embodiment.
-We discuss some of them in one of our studies~\cite{richard22}.
-%We can measure the embodiment of an avatar in a virtual environment with questionnaires~\cite{roth20,gonzalezfranco18,peck21}.
-There are recent attempts to standardize these questionnaires.
-For example Roth \etal propose a questionnaire with subcomponents: \defword{ownership}, \defword{agency}, and perceived change in the \defword{body schema}~\cite{roth20}.
-The latter notion is larger than \emph{self-location} as it refers to any difference the users may perceive between their own body and the avatar.
-Gonzalez Franco and Peck proposed another questionnaire in which they added to Kilteni \etal's subcomponents : \emph{tactile sensations}, \emph{external appearance}, and \emph{response to external stimuli}~\cite{gonzalezfranco18}.
-They later improved and simplified their questionnaire, and evaluated it with many different tasks~\cite{peck21}.
-The subcomponents of this new questionnaire are: \emph{appearance}, \emph{response}, \emph{ownership}, and \emph{multi-sensory}~\cite{peck21}.
-Interestingly, \emph{agency} is not an identified subcomponent but rather distributed among the others, in particular to the \emph{response} subcomponent.
-%This does not mean that agency, touch or localization are not important for embodiment, (Kilteni et al., 2012), but rather that they are related to other senses and instead contribute to one of the four prominent embodiment categories. The questions on motor control and agency were mostly assigned to the Response category
-
-These questionnaires are typically used in controlled experiments after the participants performed a specific task in a virtual environment.
-We compare the overall embodiment and its subcomponents score in two or more conditions to identify the effects of these conditions.
-The experimental protocol we can use depends on the task.
-For example, some studies use a threat like a virtual fire or sharp blade as an objective measurement of embodiment~\cite{dewez19,argelaguet16}.
-Subjects are considered embodied if they attempt to avoid the threat despite its virtual nature.
-The issue is that this kind of metric requires participants to be surprised by the threat.
-However, this cannot be guaranteed with a \defword{within-subjects design} in which participants perform all the conditions one after the other.
-In such situations, the experiment must follow a \defword{between-subjects design}, in which separate groups of participants perform a different condition.
-% There are however several other factors that influence the choice of experimental setup.
-% For example, between-subjects studies require more participants to reach the same statistical power.
-% Each participant of a within-subject study provides less data per condition if we would like to keep the same experiment duration.
-
-%When designing virtual embodiment studies, one of the key choices is the nature of the experimental factors, either between-subjects or within-subjects. However, it is well known that each design has ad- vantages and disadvantages in terms of statistical power, sample size requirements and confounding factors. This paper reports a within- subjects experiment with 92 participants comparing self-reported embodiment scores under a visuomotor task with two conditions: synchronous motions and asynchronous motions with a latency of 300 ms. With the gathered data, using a Monte-Carlo method, we created numerous simulations of within- and between-subjects exper- iments by selecting subsets of the data. In particular, we explored the impact of the number of participants on the replicability of the results from the 92 within-subjects experiment. For the between-subjects simulations, only the first condition for each user was considered to create the simulations. The results showed that while the replicabil- ity of the results increased as the number of participants increased for the within-subjects simulations, no matter the number of partici- pants, between-subjects simulations were not able to replicate the initial results. We discuss the potential reasons that could have led to this surprising result and potential methodological practices to mitigate them.
-%galvanic skin response \cite{kokkinara14}
-
-\paragraph{User study}
-
-In a between-subjects study, participants are assigned to one of the conditions.
-There is therefore potentially a bias if the groups are not well balanced.
-We investigated this effect on embodiment studies~\cite{richard22}.
-We experimented a visuomotor task with a synchronous condition and an asynchronous condition with a latency of \qty{300}{\ms} between the inputs and output response.
-This value is known to have a medium effect on embodiment in the literature~\cite{botvinick98,kilteni12,kokkinara14}.
-We chose a simple experimental task that requires no special equipment to facilitate replication.
-Participants were seated on a chair, with their legs on a table, and had to perform gestures with their feet (\reffig{fig:expewithin}), similarly to~\cite{kokkinara14}.
-92 participants performed this task in a balanced within-subjects design.
-To study the effect of the sample size and its effect on the statistical analysis we analyzed random data subsets of 10 to 92 participants.
-To study the effect of the experiment design we simulated between-subjects designs by selecting the first condition every participant made.
-We considered the analysis of all participants with the within-subjects design as the ground truth.
-Similarly to the literature this analysis shows that latency reduces the sense of embodiment~\cite{botvinick98,kilteni12,kokkinara14}.
-
-\begin{figure}[htb]
- \centering
- \includegraphics[height=3.9cm]{figures/within-setup}\hfill
- \includegraphics[height=3.9cm]{figures/within-environment}\hfill
- \includegraphics[height=3.9cm]{figures/within-avatars}%
- \caption[Setup of the embodiment methodology study.]{The user seated on a chair, performing leg movements, the virtual environment, and the two avatars.}
- \label{fig:expewithin}
-\end{figure}
-
-Our results showed that all the random subsets with at least \num{40} participants with the within-subjects design gave the same result as the ground truth.
-However, regardless of the number of participants, we did not observe the ground truth effect with the between-subject analyses.
-Based on the debriefing with participants, our main explanation of this phenomenon is that participants needed a reference to provide a meaningful answer for each question.
-Therefore they calibrated their answers to the second condition relatively to the first one.
-Hence, we could not measure the effect with the first condition only.
-We discuss recommendations and possible mitigation strategies in the paper~\cite{richard22}.
-Interestingly, when we analyzed the second condition as a kind of calibrated between-subjects design we observed the ground truth effect.
-However, the effect size was about half the effect size of the within-subject analysis.
-Therefore, we wonder if both designs even measured the same phenomenon.
-We are still working on this subject, in particular to provide calibration methods and metrics to balance groups for between-subjects design in embodiment studies.
-
-\subsubsection{Haptics and the sense of embodiment}
-
-The study of the causes and effects of the sense of embodiment of an avatar in virtual reality is a hot topic in the Virtual Reality community.
-%Results show that the sense of agency is stronger for less realistic virtual hands which also provide less mismatch between the participant's actions and the animation of the virtual hand. In contrast, the sense of ownership is increased for the human virtual hand which provides a direct mapping between the degrees of freedom of the real and virtual hand.
-Interestingly, all the embodiment questionnaires such as those we discussed before have subcomponents related to the sensorimotor loop.
-It means that the sensorimotor loop is essential to the sense of embodiment.
-For example, people have a stronger sense of ownership when they perform actions with a visually realistic hand, and a stronger sense of agency when they embody an abstract-looking virtual hand~\cite{argelaguet16}.
-Following this idea, we studied the effect of haptics on the sense of embodiment.
-
-We performed a user study to compare embodiment for a drawing task with force feedback, tactile feedback, and a control condition with no haptic feedback~\cite{richard20}.
-The participants were seated on a chair, and they had to paint a mandala in an immersive virtual environment with a Phantom Desktop\footnote{Today called Touch X by 3D Systems \url{https://www.3dsystems.com/haptics-devices/touch-x}} device (\reffig{fig:expeembodiment}).
-In the force feedback condition, they felt the surface resistance of hard objects and the viscosity of the paint spheres at the bottom.
-In the tactile condition, they felt a \qty{250}{\hertz} vibration whose amplitude was proportional to the interpenetration distance to the canvas surface.
-We attached an EAI C2 tactor to vibrate the Phantom stylus (\reffig{fig:expeembodiment}).
-In the control condition, the Phantom was only used as an input device, with no force or vibration.
-We mesured embodiment with Gonzalez Franco and Peck's first standardized questionnaire\footnote{The second one was not published at the time.} with the \emph{agency}, \emph{self location}, \emph{ownership}, and \emph{tactile sensations} subcomponents~\cite{gonzalezfranco18}.
-
-\begin{figure}[htb]
- \centering
- \includegraphics[height=3cm]{figures/embodimentdevice}\hfill
- \includegraphics[height=3cm]{figures/embodimentenvironment}\hfill
- \includegraphics[height=3cm]{figures/embodimenttask}%
- \caption[Setup of the haptics and embodiment study.]{Haptic device setup, virtual environment and task of the virtual embodiment study.}
- \label{fig:expeembodiment}
-\end{figure}
-
-We observed a stronger embodiment in the force feedback condition compared to the control condition.
-In particular, participants had a higher sense of ownership.
-However, we did not observe these differences between the tactile and control conditions.
-Besides the detailed discussion in the paper, it is important to note that in some ways this task favored the force feedback condition over the tactile condition.
-Participants certainly expected to feel the stiffness of hard surfaces.
-Similarly to realistic visual feedback~\cite{argelaguet16}, this realistic force feedback aspect reinforced the sense of ownership.
-On the contrary the vibrotactile feedback was symbolic because participants only received tactile guidance.
-And we did not observe any improvement in embodiment.
-It does not necessarily mean that the sense of embodiment requires realistic haptic feedback.
-For example, non-realistic visual feedback improved the sense of agency~\cite{argelaguet16}.
-But in our task force feedback \emph{constrained} the stylus tip movement to prevent it from getting through the surface, while vibrotactile feedback only \emph{guided} it.
-Therefore I believe the force feedback condition helped participants focus on the painting task rather than controlling the stylus to paint the canvas , which reinforced sensorimotor integration.
-The workload analysis discussed in the paper gives supports this explanation.
-%It gave users immediate feedback that could guide them to stay close to the spatial location of the surface.
-Further studies should investigate other tasks or a variation of this one in which vibrotactile feedback promotes sensorimotor integration.
-% is expected, like feeling surface textures.
+% Stéphane:
+ % I find it a pity that you then move straight on to contributions without developing a bit more what this original reflection brings you in terms of perspectives…
+ % For example, one perspective I see is the notions of “anticipation”, “reaction”, and “adaptation”. In Norman's model, we see that humans have means to anticipate and correct, to some extent, the problems that can occur during interaction, precisely thanks to our cognitive abilities to anticipate, correct (to some extent), and adapt. In contrast, the system is generally not capable of doing what it was not designed for (e.g. sensing an unplanned input, producing an undefined reaction, etc.), and this is what makes the difference between cognition and computing in your figure III.10?
+ % So, how do we proceed? Some already do this (cognition-based behavioral robotics, such as the FLOWERS team in Bordeaux), but how do we include it in interactive systems? Would your “seven stages of reaction” model allow this to be better taken into account, or taken into account in the system, etc.?
+ % And what perspectives do you open?
+
+Smooth interaction between a user and an interactive system requires efficient connections between them.
+
+This analysis involves a user model and a system model: both include how each entity behaves and how it perceives its environment, which includes the other entity (respectively the interactive system and the user).
+We study the system with rational methods, but we study the user, and the interaction between the two, with empirical methods.
+
+The user usually has the initiative; this is the case for computer-as-a-tool systems.
+Therefore the system must not be a limiting factor for the user's sensorimotor loop.
+For example, users may fail to notice changes that the system presents at the wrong moment, a phenomenon known as change blindness \cite{rensink97}.
+Therefore the system must react in real time.
+The actual speed depends on the human sensorimotor system involved: the visual system requires a response rate of about \qty{100}{\hertz}, haptics about \qty{1000}{\hertz}, and audio about \qty{40000}{\hertz}.
+Lower rates degrade the corresponding sensorimotor loop.
+Identifying the blocking points is therefore important.
+For example, force feedback devices compute force models at \qty{1000}{\hertz} or more, but the force model to be applied can switch at a lower frequency.
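+
+The sketch below illustrates this decoupling, assuming a hypothetical device API (\texttt{read\_position} and \texttt{apply\_force}): a rendering thread evaluates the current force model at \qty{1000}{\hertz}, while the simulation swaps models at a much lower rate.
+\begin{verbatim}
+# Illustrative multirate haptic loop (assumed API, not a real driver).
+import threading, time
+
+current_model = lambda position: (0.0, 0.0, 0.0)  # free space: no force
+model_lock = threading.Lock()
+
+def set_model(model):
+    """Called by the simulation at a lower rate, e.g. 60 Hz."""
+    global current_model
+    with model_lock:
+        current_model = model
+
+def rendering_loop(read_position, apply_force, period=0.001):
+    """Renders forces at 1 kHz, always using the latest force model."""
+    next_tick = time.perf_counter()
+    while True:
+        with model_lock:
+            model = current_model
+        apply_force(model(read_position()))  # one 1 kHz force update
+        next_tick += period
+        time.sleep(max(0.0, next_tick - time.perf_counter()))
+\end{verbatim}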
+A finer characterization of the Human-System loop at these different time scales remains an open perspective.
+
+% Loki project describes this as micro-dynamics.
+% => meso dynamics: planning, tools selection
+% => macro dynamics: reflexivity, learning, discoverability
\section{Conclusion}