Spécialité : Interaction Homme-Machine

\vspace*{\stretch{2}}
- {\LARGE\bfseries Control, humans, machines, WTF?}
+ {\LARGE\bfseries Forging digital hammers: the design and engineering of empowering interaction techniques and devices}

\vspace*{\stretch{2}}
- XXX 2019
+ XXX 2020

\vspace*{\stretch{2}}
\end{centering}
\hrule
\vspace{1mm}
\centering
- Habilitation à diriger des recherches préparée au sein de l'équipe LOKI commune à\\l'Université de Lille, CRIStAL --- UMR CNRS 9189 et Inria Lille - Nord Europe\\
+ Habilitation à diriger des recherches préparée au sein de l'équipe projet LOKI commune à\\l'Université de Lille, CRIStAL --- UMR CNRS 9189 et Inria Lille - Nord Europe\\
\vspace{1cm}
\includegraphics[width=3.5cm]{logo_univ_lille}
\hfill
\input{tex/introduction}
\cleardoublepage

-\input{tex/engineering}
+\input{tex/evolutions}
\cleardoublepage

-\input{tex/interactivedevices}
- \cleardoublepage
-
-\input{tex/gestural}
- \cleardoublepage
-
-\input{tex/haptic}
+\input{tex/vocabulary}
\cleardoublepage


\usepackage{geometry}
\geometry{ hmargin=2cm, vmargin=3cm }
-\linespread{1.2}
+\linespread{1.5}
\usepackage{multirow}
\usepackage{array}
+++ /dev/null
-%!TEX root = ../hdrmain.tex
-
-\chapter{Control with automation}
- \epigraph{\lorem}{Auteur}
-
-\begin{Abstract}
-\loremipsum
-\end{Abstract}
-
-\begin{figure}[htb]
-\centering
-\includegraphics[width=.8\textwidth]{continuum}
-\caption{Continuum between Beaudouin-Lafon's Tool and Partner paradigms.}
-\label{fig:interactioncontinuum}
-\end{figure}
-
-\begin{itemize}
- \item Métamorphe \cite{bailly13}
- \item Living Desktop \cite{bailly16}
- \item Ctrl Mouse \cite{pietrzak14}
- \item RayCursor \cite{baloup18,baloup19}
-\end{itemize}
-
-Ongoing work: facial animation
\ No newline at end of file
+++ /dev/null
-%!TEX root = ../hdrmain.tex
-
-\chapter{Direct Manipulation}
- \epigraph{\lorem}{Auteur}
-
-\begin{Abstract}
-\loremipsum
-\end{Abstract}
-
-\section{Extending direct manipulation to haptic displays}
-
-Haptic Direct Manipulation \cite{pietrzak15,gupta16}
-
-\section{Direct manipulation without pointing}
-
-Summon \& Select \cite{gupta17}
-
-\section{Fingers as interactive instruments}
-
-Fingercuts \cite{goguey14,goguey14a,goguey17}
+++ /dev/null
-%!TEX root = ../hdrmain.tex
-
-\chapter{A model of interactive systems}
- \epigraph{\lorem}{Auteur}
-
-\begin{Abstract}
-Modeling the behavior of both an interactive system and its users is challenging because of their fundamentally different natures.
- In his action theory, Norman presents how an agent interacts with world objects through the seven stages of action.
- The actions the agent performs on the object, and the way she perceives its behavior, depend on factors such as affordances and her knowledge about this object or similar ones.
- While this model is efficient at describing how users perceive interactive systems, it does not explain how systems interact with users because world objects are seen as black boxes.
- We present the seven stages of reaction, which mirror the seven stages of action to describe the behavior of reactive systems.
- We discuss the similarities and differences of the two models, how they combine, as well as implications for the design of interactive systems.
-\end{Abstract}
-
-%\section{7 stages model}
-
-
-Studying interactive systems is a complex task because it requires knowledge across a wide range of areas.
-It spans from human sciences such as psychology, biology or sociology, to technical sciences such as computer science, electronics, mechanics, control theory or mathematics.
-All these experts need a common ground for studying interactive systems together.
-In particular, despite the fact that computers are intended for interacting with humans, they are still not designed for this use.
-They are still designed with computation in mind.
-There is certainly a historical reason for this, exacerbated by inertia and resistance to change.
-However, we argue here that the theoretical background computers are built upon is the main reason why today's computers are designed for computation rather than for interaction.
-
-In this chapter we first present existing models of computation, software and users.
-Then we describe our own model.
-Finally, we describe two case studies, to demonstrate the descriptive and evaluative powers of our model.
-
-\begin{idee}
-None of the computational and software models consider the hardware part
-\end{idee}
-
-\section{Modelling interactive systems}
-
-\begin{idee}
-Reviewers questioned the similarity with past research due to insufficient positioning in the paper. The past research they mention on toolkits and task models essentially focuses on the application, input phrase or encoding stages of our model. However, in the HCI community we observe a notable rise of custom hardware for the design of interactive systems in the past decade. Our model covers both the software and hardware parts of interactive systems, as a whole.
-\end{idee}
-
-\cite{hornbaek17}
-
-\subsection{Computational models}
-
-Computing models such as $\lambda$-calculus~\cite{church32} or Turing machines~\cite{turing38} focus on solving numerical problems, and overlook the interaction machines have with their environment.
-While people daily interact with a world full of interactive devices, machines constantly interact with a world full of humans.
-Because of that, Goldin and Wegner showed that interaction is a more general model of computing than Turing-complete models~\cite{goldin08}.
-They argue that the universality of the computing models above is due to the fact they all rely on induction~\cite{wegner99}.
-Interaction is rather a co-inductive phenomenon.
-
-The induction mechanism converges to base cases, which ensures computation always terminates.
-In contrast, co-induction is a process that applies to streams as inputs are received.
-The question of whether the co-inductive process terminates is not relevant.
-It potentially runs forever on an infinite input stream.
-This is a necessary mechanism to model and implement interaction with external agents.
-
-\begin{algorithm}[htb]
-\SetAlgoLined
-\caption{Typical main function of an interactive application}
-\While{true}{
- get inputs\;
- update internal model\;
- generate outputs\;
-}
-\end{algorithm}
-
-\subsection{Software models}
-
-\begin{itemize}
-	\item PAC \cite{coutaz87}
-	\item Arch \cite{arch92}
-	\item MVC \cite{reenskaug79a}
-	\item Seeheim \cite{green85}
-\end{itemize}
-
-\subsection{User models}
-
-Many human behavior models are used in HCI, and there is an active community working on this.
-With GOMS~\cite{card83} and the Keystroke-Level Model~\cite{card80} we can predict the time it takes for a person to use a keyboard or a pointer for example, including mental activities.
-Fitts' Law is extensively used for modeling pointing~\cite{mackenzie92}, and the steering law for modeling how users follow a path~\cite{accot97}.
-However such models are specialized to specific tasks, therefore not generic enough for describing entire interactive systems.
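-
-As an illustration of the kind of prediction these models afford (a back-of-the-envelope sketch using commonly cited operator estimates, not measurements of a specific system), the Keystroke-Level Model decomposes reaching for the mouse, pointing at a button and clicking it as:
-\[
-T = H + P + 2B \approx 0.4 + 1.1 + 2 \times 0.1 = 1.7~\mathrm{s}
-\]
-where $H$ is the homing time, $P$ the pointing time and $B$ a button press or release~\cite{card80}.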
-
-Norman's action theory is a more general description of how people interact with the world, in particular interactive systems~\cite{norman02}.
-Figure~\ref{fig:sevenstages} depicts the seven stages of action he uses to describe his theory.
-The person starts with a goal in her mind.
-Then she successively forms an intention, specifies a sequence of actions, and executes these actions on the world.
-As a consequence, the state of the world changes, and the person perceives these changes.
-Then she interprets this state and evaluates the consequences of her actions on the world.
-%The way people interact with the world highly depends on actions enabled by shared properties between a user and an object,
-While this model is efficient at describing how people interact with interactive devices, the machines themselves are seen as black boxes.
-%However interactive systems have a process to interact with the world.
-
-%The seven stages of action as described by Norman is a direct application of Gibson's perception/action coupling~\cite{gibson50}.
-%Human beings act on the world to perceive it.
-%The way they interpret it depends on the action they performed to explore it.
-%O'Regan describes this interaction as the sensorimotor cycle~\cite{oregan01a}.
-%It is a continuous cycle of actions and perceptions, that shape our understanding of the world around us.
-%This phenomenon is the building block of the direct manipulation paradigm~\cite{schneiderman83}.
-%\fixme{Describe seven stages of action~\cite{norman02}.}
-
-\begin{figure}[htb]
-\centering
-\definecolor{cellred}{rgb} {0.98,0.17,0.15}
-
-\newcommand{\stage}[2]{
- \node[minimum width=2.5cm, minimum height=1.0cm,fill=cellred, text=white,text width=2cm, align=center, rounded corners=2ex, outer sep=0](#1) {#2};
-}
-\begin{tikzpicture}
- \small
- \matrix[row sep=3mm, column sep=2mm,inner sep=0, node distance=0, outer sep=5mm] (cells) {
- & \stage{goal}{Goal} & \\
- \stage{intention}{Intention} & & \stage{evaluation}{Evaluation} \\
- \stage{specification}{Specification of actions} & & \stage{interpretation}{Interpretation} \\
- \stage{execution}{Execution of actions} & & \stage{perception}{Perception} \\
- };
- \node[anchor=north, minimum width=8.6cm,minimum height=.8cm,fill=black!10](world) at (cells.south) {World};
- \draw [->, -stealth', thick]
- (perception) edge (interpretation)
- (interpretation) edge (evaluation)
- (evaluation) edge[out=90, in=0] (goal)
- (goal) edge[out=180, in=90] (intention)
- (intention) edge (specification)
- (specification) edge (execution);
- \draw [->, -stealth', thick, dashed, draw=black!50, fill=black!50]
- (perception|-world.north) edge (perception.south)
- (execution) to (execution|-world.north);
- \node[anchor=south, minimum width=2.6cm, rotate=90, outer sep=5mm](gulfexecution) at (specification.west) {Gulf of execution};
- \draw [->, -stealth', thick,transform canvas={xshift=1em}]
- (gulfexecution.east |- intention.north) to (gulfexecution.east |- execution.south);
- \node[anchor=south, minimum width=2.6cm, rotate=270, outer sep=5mm](gulfevaluation) at (interpretation.east) {Gulf of evaluation};
- \draw [->, -stealth', thick,transform canvas={xshift=-1em}]
- (gulfevaluation.west |- perception.south) to (gulfevaluation.west |- evaluation.north);
-% \node[anchor=south, minimum width=2.6cm, rotate=90, thick,draw=black!20,fill=black!20] at (n.north west) {Implementation};
-% \node[anchor=south, minimum height=0.6cm, minimum width=6cm, thick, draw=black!20,fill=black!20] at (nd.north east) {Computing affordance};
-\end{tikzpicture}
-\caption{Norman's seven stages of action~\protect\cite{norman02}. It describes how people interact with their environment.}
- \label{fig:sevenstages}
-\end{figure}
-
-
-We present the seven stages of reaction, a model of interactive systems behavior inspired by Norman's seven stages of action.
-Similarly to the designeering approach~\cite{huot13}, it advocates for the consideration of implementation aspects of interactive systems as a complement to user-centered design methods.
-Our model describes both the software and hardware parts of interactive systems, and how they interact with their environment.
-%We discuss how they work, how they interact with their environment, and how they evolve.
-After describing the model, we discuss its combination with Norman's model, and its implications to the design of interactive systems.
-
-
-\section{Seven stages of reaction}
-
-Interactive systems globally work in a similar way to humans.
-They sense the world, and they act on it.
-%However, we can identify a major difference in most cases.
-%While people generally have the initiative, the loop in the seven stages of action starts with a goal, interactive systems tend to react to the environment.
-%They get information from the world, interpret it, and act on the world in return.
-%Despite the tremendous progress of computers the past decades, they can still process a tiny part of
-%\loremipsum
-We describe below the \defword{seven stages of reaction} based on Norman's seven stages of action (Figure~\ref{fig:mysevenstages}).
-The model is upside down compared to Norman's: the world sits at the top, the input chain descends on the left, a software part interprets the inputs at the bottom, and an output chain produces physical effects on the right.
-%After reviewing the seven stages of reaction, we will discuss the pitfalls it reveals, and we will compare them with the seven stages of action.
-
-\begin{idee}
-The stages of our model are functional slicing of system parts by input/output: peripherals (sensing/Physical effect), driver (event/command), toolkit (phrase/encoding), and the application binds both sides with interaction techniques. Hence it not only covers the application, but also the full input and output chains.
-\end{idee}
-
-\begin{idee}
-Criticisms to Norman's model are not necessarily relevant since they focus on the human side. In our submission we get inspiration from this model to build a compatible model to describe the system behavior. We only discuss the human side on how it interacts with the system. But any variation of Norman's model should be compatible with our own.
-\end{idee}
-
-\begin{figure}[htb]
-\centering
-\definecolor{cellblue}{rgb} {0.17,0.60,0.99}
-
-\newcommand{\stage}[2]{
- \node[minimum width=2.5cm, minimum height=1.0cm,fill=cellblue, text=white,text width=2cm, align=center, rounded corners=2ex, outer sep=0](#1) {#2};
-}
-\begin{tikzpicture}
- \small
- \matrix[row sep=3mm, column sep=2mm,inner sep=0, node distance=0, outer sep=5mm] (cells) {
- \stage{sensing}{Sensing} & & \stage{physical}{Physical effect} \\
- \stage{events}{Input events} & & \stage{command}{Command} \\
- \stage{phrase}{Input phrase} & & \stage{encoding}{Encoding} \\
- & \stage{application}{Application} & \\
- };
- \node[anchor=south, minimum width=8.6cm,minimum height=.8cm,fill=black!10](world) at (cells.north) {World};
- \draw [->, -stealth', thick]
- (sensing) edge (events)
- (events) edge (phrase)
- (phrase) edge[out=270, in=180] (application)
- (application) edge[out=0, in=270] (encoding)
- (encoding) edge (command)
- (command) edge (physical);
- \draw [->, -stealth', thick, dashed, draw=black!50, fill=black!50]
- (sensing|-world.south) edge (sensing.north)
- (physical) to (physical|-world.south);
- \node[anchor=south, minimum width=2.6cm, rotate=90, outer sep=5mm](funnelevaluation) at (events.west) {Funnel of evaluation};
- \draw [->, -stealth', thick,transform canvas={xshift=1em}]
- (funnelevaluation.east |- sensing.north) to (funnelevaluation.east |- phrase.south);
- \node[anchor=south, minimum width=2.6cm, rotate=270, outer sep=5mm](funnelexecution) at (command.east) {Funnel of execution};
- \draw [->, -stealth', thick,transform canvas={xshift=-1em}]
- (funnelexecution.west |- encoding.south) to (funnelexecution.west |- physical.north);
-% \node[anchor=south, minimum width=2.6cm, rotate=90, thick,draw=black!20,fill=black!20] at (n.north west) {Implementation};
-% \node[anchor=south, minimum height=0.6cm, minimum width=6cm, thick, draw=black!20,fill=black!20] at (nd.north east) {Computing affordance};
-\end{tikzpicture}
-\caption[Seven stages of reaction]{The seven stages of reaction, adapted from Norman's seven stages of action. It describes how interactive systems interact with their environment.}
-\label{fig:mysevenstages}
-\end{figure}
-
-\subsection{Input chain}
-
-The input chain mirrors the evaluation part of the seven stages of action.
-It comprises three stages that explain how interactive systems get information from the environment.
-%world, and in particular the user.
-
-\paragraph{Sensing}
-
-The input chain begins with the \emph{sensing} stage.
-It mainly consists of hardware, sensors and their driving electronics, that measures physical properties of the environment.
-User movement is the kind of information interactive systems favor, but they can also sense light~\cite{sonne17}, temperature~\cite{sarsenbayeva17}, moisture~\cite{jia18} or vibrations~\cite{casiez17}.
-There is also an important software part that consists of encoding~\cite{song11}, filtering~\cite{casiez12} and transmitting the data.
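-
-To make this software part concrete, here is a minimal sketch of a filtering step, assuming simple exponential smoothing with a factor $\alpha$; dedicated filters such as the One Euro filter~\cite{casiez12} refine this idea for interactive input.
-
-\begin{algorithm}[htb]
-\SetAlgoLined
-\caption{Sketch of a low-pass filtering step in the sensing stage. The smoothing factor $\alpha \in [0,1]$ is an assumption.}
-\KwIn{stream of raw samples $x_t$, smoothing factor $\alpha$}
-\KwOut{stream of filtered samples $\hat{x}_t$}
-$\hat{x}_0 \leftarrow x_0$\;
-\While{a new sample $x_t$ is sensed}{
-	$\hat{x}_t \leftarrow \alpha x_t + (1 - \alpha) \hat{x}_{t-1}$\tcp*{exponential smoothing}
-	transmit $\hat{x}_t$ to the next stage\;
-}
-\end{algorithm}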
-
-\paragraph{Input events}
-
-The low-level stages of the system transform sensed information into \emph{input events}.
-This stage takes into account predefined information, such as calibration data or transfer functions~\cite{casiez11a}.
-Raw capacitance values are transformed into contact points~\cite{lee85}.
-Body skeletons are computed from depth images~\cite{shotton11}.
-At this stage we notice that the infinite richness of the world is reduced to a relatively small number of bits.
-%Let's discuss the simple example of a keypress on a keyboard.
-%The only information in the digital world is whether a key is pressed or not. There is no information about the finger that pressed it, the speed of the finger or its trajectory.
-%There is neither the possibility if a finger is hovering the key, if several fingers are pressing it, or even if it was pressed with the nose.
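-
-As a hedged sketch of this stage, the pseudocode below applies a speed-dependent transfer function to raw mouse displacements; the gain function $G$ and the device constants are illustrative placeholders, not the actual functions characterized in~\cite{casiez11a}.
-
-\begin{algorithm}[htb]
-\SetAlgoLined
-\caption{Sketch of an input event stage applying a pointer transfer function. The gain function $G$ is assumed.}
-\KwIn{raw displacement $(dx, dy)$ in counts, sensor resolution $R$ in counts per meter, report rate $F$ in Hz}
-\KwOut{pointer displacement event in pixels}
-$v \leftarrow \sqrt{dx^2 + dy^2} \times F / R$\tcp*{device speed in meters per second}
-$g \leftarrow G(v)$\tcp*{speed-dependent gain}
-emit the pointer event $(g \times dx, g \times dy)$\;
-\end{algorithm}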
-
-\paragraph{Input phrases}
-
-Input events are treated as tokens, or lexical units.
-They are interpreted as \emph{input phrases} with grammars, finite automata~\cite{appert06} or more complex algorithms~\cite{wobbrock07}.
-They form the building blocks of \defword{interaction techniques}, also called modalities~\cite{nigay93}.
-A click, a drag \& drop or a pinch gesture are examples of interaction techniques.
-The combination of modalities, called multimodality~\cite{coutaz95}, expands the possible inputs.
-The joint use of a digital pen and multitouch on an interactive surface is one such example~\cite{hinckley10}.
-%Multimodality is the combination of several modalities.
-%\cite{oviatt99}.
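-
-For illustration, a drag \& drop phrase can be recognized by a small automaton over input events. The following sketch is ours: the state names and the movement threshold $\epsilon$ are assumptions, and it is far simpler than the formalisms of~\cite{appert06}.
-
-\begin{algorithm}[htb]
-\SetAlgoLined
-\caption{Sketch of a finite automaton recognizing a drag \& drop phrase. States and threshold are assumed.}
-$s \leftarrow \textsc{Idle}$\;
-\While{a new input event $e$ arrives}{
-	\uIf{$s = \textsc{Idle}$ and $e$ is a press on an object}{$s \leftarrow \textsc{Pressed}$\;}
-	\uElseIf{$s = \textsc{Pressed}$ and $e$ is a move farther than $\epsilon$}{$s \leftarrow \textsc{Dragging}$\;}
-	\uElseIf{$s = \textsc{Dragging}$ and $e$ is a move}{update the dragged object position\;}
-	\uElseIf{$e$ is a release}{
-		\If{$s = \textsc{Dragging}$}{drop the object\;}
-		$s \leftarrow \textsc{Idle}$\;
-	}
-}
-\end{algorithm}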
-
-\subsection{Application}
-
-The \emph{application} layer is specialized for assisting users in their tasks.
-It executes actions as a result of input phrases, and produces outputs to give users feedback and the results of their actions.
-The architecture of this stage is further detailed in models such as PAC~\cite{coutaz87}, Arch~\cite{arch92} or MVC~\cite{reenskaug79a}.
-These architectures define several layers between the user, seen through widgets, and the computer, seen through a functional core (or abstraction, model).
-
-% meta algorithm that chooses whether there are algorithms to run, which algorithms to run, and with which parameters.
-% It occurs typically when a command is selected, or an object is being manipulated.
-
-% Every computer system has at some point an effect on the world.
-% We extend the notion of dead code to not only code that is never executed, but also code which result has no effect whatsoever on the physical world.
-% Even a routing algorithm will at some point transmit data to a computer which will display it or print it in any way.
-% % Quantic information : stored in computer memory. Will it be observed?
-% Bringing a piece of information to the physical world requires several steps.
-
-%Software architecture models such as further detail this stage.
-
-\subsection{Output chain}
-
-The output chain mirrors the action part of the seven stages of action.
-It describes the way interactive systems act on the world.
-In the following we will use haptic interfaces as an example, because of the diversity of actuation mechanisms.
-However the model applies to any output modality.
-
-\paragraph{Encoding}
-
-First of all, the system must \emph{encode} the piece of information.
-This encoding takes into account multiple parameters, as described by Bernsen's design space~\cite{bernsen93a}, or by Bertin in the case of vision~\cite{bertin83}.
-At this stage, the application decides how an object or a piece of information will be represented in the physical world.
-%A visual encoding can be an icon, or a text~\cite{bernsen93a}.
-Audio encodings can be sounds~\cite{gaver93} or melodies~\cite{brewster93}.
-Various haptic encodings include vibrations~\cite{brewster04}, forces~\cite{maclean03a,pietrzak05a} or textures~\cite{pietrzak09}.
-Force feedback typically computes a force vector as the result of a force model that depends on the device position~\cite{zilles95}.
-%Proxy\cite{ruspini97}
-
-%Effectors can produce light (like screens), sounds, vibrations, forces, …
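-
-As a minimal illustration of such a force model, the sketch below computes a penalty-based reaction force for a virtual wall; the stiffness $k$ and the wall height $h$ are assumptions, and god-object methods~\cite{zilles95} are considerably more elaborate.
-
-\begin{algorithm}[htb]
-\SetAlgoLined
-\caption{Sketch of a virtual wall force model in the encoding stage. The stiffness $k$ is assumed.}
-\KwIn{device position $p = (p_x, p_y, p_z)$, wall surface at height $h$}
-\KwOut{force vector $f$ passed to the command stage}
-\eIf{$p_z < h$}{
-	$f \leftarrow (0, 0, k (h - p_z))$\tcp*{penetration depth times stiffness}
-}{
-	$f \leftarrow (0, 0, 0)$\tcp*{no contact, no force}
-}
-\end{algorithm}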
-
-\paragraph{Commands}
-
-Output devices have driving electronics which require specific \emph{commands}.
-For example, force feedback devices commonly use DC motors.
-The output force depends on the voltage they receive.
-High-force devices require strong motors, and therefore high voltages.
-Consequently, haptic devices need precise amplifiers.
-When these amplifiers do not have a linear response, the command has to be adjusted.
-%\fixme{closed loop}
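-
-A hedged sketch of such an adjustment: the command stage can pre-compensate a non-linear amplifier with an inverse response table measured at calibration time. The table $V^{-1}$ is a hypothetical placeholder.
-
-\begin{algorithm}[htb]
-\SetAlgoLined
-\caption{Sketch of command linearization with an inverse response table. The table $V^{-1}$ is assumed.}
-\KwIn{desired force $f$, calibration table $V^{-1}$ mapping forces to voltages}
-\KwOut{voltage command $u$}
-$u \leftarrow$ interpolate $V^{-1}$ at $f$\tcp*{compensates the amplifier non-linearity}
-clamp $u$ to the amplifier admissible range\;
-send $u$ to the digital-to-analog converter\;
-\end{algorithm}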
-
-\paragraph{Physical effect}
-
-The command sent to the device produces a \emph{physical effect}.
-These can be light, sounds, vibrations or forces for instance.
-The user feels these effects through her senses.
-Many external factors may disturb this effect.
-The vibration transmitted by a vibrotactile actuator to the skin of a user depends on how it is attached~\cite{yao10}, or how the user holds the device.
-The way a user sees an object displayed on a screen may be affected by ambient light or distance.
-
-
-\fixme{\subsection{Pitfalls}}
-
-With this model, we demonstrate that solving a problem with an interactive system is not only a matter of algorithmic computation.
-Sensing phenomena of the environment, and producing effects on it from the results of algorithmic computation, are subject to non-trivial issues.
-Algorithms can only observe the shadow of the physical world, under the light of input and output streams.
-They are like prisoners in a digital Plato's cave.
-Therefore, addressing interaction problems requires a broader view than just observing algorithms.
-It requires identifying the information the system needs, and how to convey a result efficiently.
-Norman's gulf of evaluation and gulf of execution are mirrored by a \defword{funnel of evaluation} and a \defword{funnel of execution} to this effect.
-
-\paragraph{Funnel of evaluation}
-
-The funnel of evaluation depicts the fact that the input stages reduce the world into few bits.
-A good design of the input chain senses the right phenomena, at an appropriate amplitude, with a sufficient spatial and temporal resolution, and with little distortion.
-This information must be combined correctly to form a meaningful sequence of inputs.
-For example, the \emph{Midas touch} problem is a common issue with 3D gestural interaction~\cite{gupta17}.
-Since the sensors observe all the user's movements, there is no obvious segmentation.
-The system has no way to know if you move your hand to interact with the system, or to scratch your nose.
-Conversely, occlusion prevents vision-based gesture sensors from getting position information for hidden objects.
-In these situations, we can widen the funnel of evaluation by adding segmentation gestures, or by using multiple cameras.
-
-\paragraph{Funnel of execution}
-
-The funnel of execution is symmetrical.
-The software part of the system displays parts of its data and state in an intelligible way for users.
-The way data is shown to the user can have a huge impact on how she interacts with it~\cite{zhang94}.
-Therefore the encoding part is crucial, and it is a first filter for reducing the internal complexity of the system.
-The specifications of the output device are a second filter.
-Each device has theoretical limits on the force, color, brightness, frequency, etc. it can produce.
-There is also a limit on precision, which greatly depends on amplifiers and digital-to-analog converters (DAC).
-Last, the physical effect can be inconsistent for the same command.
-Some haptic devices can behave differently depending on ambient temperature, finger moisture, cleanliness, etc.
-
-
-\begin{idee}
-In the paper we mentioned the usefulness of this model for several audiences. We will elaborate more on the objectives for HCI researchers and practitioners.
-Our model provides a framework for describing the software and hardware parts of an interactive system. We will describe it along the three dimensions Beaudouin-Lafon proposed to evaluate interaction models~\cite{mbl04}: descriptive, evaluative and generative.
-
-\begin{itemize}
-\item Descriptive power. Many research papers do not describe critical information such as transfer functions, input \& output mappings, actuator responses, etc. Our model is a systematic structure for describing interactive systems. Such a description enables highlighting useful implementation details. Benefits include replicability, and highlighting potential undesired side effects in psychophysical experiments for instance.
-\item Evaluative power. Implementing hardware and software interactive systems is a particularly difficult task. Many of these, including published research prototypes, have implementation issues that could be avoided by using a systematic approach. For example, some input systems have jitter because they do not use filtering. Many vibrotactile systems give poor feedback because they have a high inertia. Describing interactive systems with this common framework would make it easier to compare their implementations, and to identify implementation issues.
-\item Generative power. The description of interactive systems with a common framework also has the advantage of inspiring alternative designs, new combinations of designs and transgressive uses of technology.
-\end{itemize}
-\end{idee}
-
-\section{Interaction between users and systems}
-
-The purpose of interactive systems is to assist users in their activities.
-We can model the interaction of a user with a system by simply plugging the seven stages of action into the seven stages of reaction.
-The seven stages of reaction is a detailed view of the “world” stage in Norman's model.
-The connection between both occurs when the user manipulates an input device, and when she feels the physical effects of output devices~\cite{lederman96}.
-We can improve interaction not only by studying the two models separately, but also by studying their connections, similarities and differences.
-%This is typically what we study in HCI as we design new input devices with original sensing technologies~\cite{fellion17}, but also when we design new output devices with cutting-edge actuators%~\cite{frisson17,potier12,potier16}.
-
-Users and interactive systems are both modeled with an internal running loop (Figure~\ref{fig:loops}).
-The cycle of actions and perceptions that helps us explore the world is the \defword{sensorimotor loop}~\cite{oregan01a}.
-The seven stages of action is an instance of this phenomenon.
-Both models follow Gibson's theory, according to which exploratory movements are part of our understanding of the sensations we feel as a result of them~\cite{gibson50}.
-With this phenomenon, users can control interactive systems continuously~\cite{gupta16}.
-It enables fast and incremental adjustments, which are two of the building blocks of \defword{direct manipulation}~\cite{schneiderman83}.
-In order to make it happen, the interactive system must also respond in real time.
-This means its \defword{execution loop} must be fast, and with low latency~\cite{casiez17}.
-
-\begin{figure}[htb]
-\centering
-\definecolor{cellred}{rgb} {0.98,0.17,0.15}
-\definecolor{cellblue}{rgb} {0.17,0.60,0.99}
-
-\newcommand{\labelcell}[2]{
-\node[minimum width=3cm, minimum height=1.0cm,text width=1.7cm, align=center, outer sep=0](#1) {#2};
-}
-\newcommand{\bluecell}[2]{
- \node[minimum width=2.5cm, minimum height=1.0cm,fill=cellblue, text=white,text width=2cm, align=center, rounded corners=2ex, outer sep=0](#1) {#2};
-}
-\newcommand{\redcell}[2]{
- \node[minimum width=2.5cm, minimum height=1.0cm,fill=cellred, text=white,text width=2cm, align=center, rounded corners=2ex, outer sep=0](#1) {#2};
-}
-\begin{tikzpicture}
- \small
- \matrix[row sep=3mm, column sep=0,inner sep=0, node distance=0, outer sep=5mm] (cells) {
- & \labelcell{perception}{Perception} & \labelcell{output}{Output} & \\
- \redcell{user}{User} & \labelcell{sensorimotor}{Sensorimotor loop} & \labelcell{execution}{Execution loop} & \bluecell{system}{System}\\
- & \labelcell{action}{Action} & \labelcell{input}{Input} & \\
- };
- \draw [->, -stealth', thick]
- (sensorimotor.north east) edge[bend right] (user.north)
- (user.south) edge[bend right] (sensorimotor.south east)
- (execution.south west) edge[bend right] (system.south)
- (system.north) to[bend right] (execution.north west);
- \draw [ultra thick, draw=black!20, fill=black!50]
- (perception.north east) edge (action.south east);
-\end{tikzpicture}
-\caption{The similarity of a user and a system interacting with each other.}
-\label{fig:loops}
-\end{figure}
-
-\subsection{Initiative}
-
-In Norman's model, the first stage is the user's goal.
-This means that in this extended model, the user has the initiative, and the system reacts to her actions.
-This is generally a desired property in systems designed with user-centered methods.
-We depict this case in Figure~\ref{fig:extendedaction}-a.
-However, we can also imagine the case in which the system has the initiative, as illustrated in Figure~\ref{fig:extendedaction}-b.
-Following Beaudouin-Lafon's interaction paradigms~\cite{mbl04}, the first case is the computer-as-a-tool paradigm.
-The second case corresponds to the computer-as-a-partner paradigm.
-There is a third paradigm named computer-as-a-medium, which corresponds to the case of several users interacting with each other through computers (Figure~\ref{fig:extendedaction}-c).
-In this case the two users have the initiative, and the system reacts to both.
-
-In many cases, interactive systems are not binary though.
-They sometimes behave like tools, and sometimes like partners.
-There is a continuum between the tool and the partner paradigms.
-Systems in between are what Horvitz calls Mixed-initiative User Interfaces~\cite{horvitz99}.
-He describes factors to consider for the design of automated systems.
-Most of them are related to issues concerning the inference of users' goals.
-Other studies show that high controllability must be favored over automation accuracy~\cite{roy19}.
-Whether the user or the system has the initiative, we must keep in mind that the overall objective is to empower the user so that she can succeed in performing her activities.
-Designing interactive systems consists in combining users' and machines' strengths to compensate for their weaknesses, in order to empower users.
-
-\begin{idee}
-Regarding related work on AI, while the paper does mention these topics, it is not the focus of this submission. There is certainly more to say on this, but it would require a proper paper. The Intervention UI paradigm~\cite{schmidt17} is relevant to our model though, and will be discussed in the Initiative paragraph, where we already discuss Horvitz's mixed initiatives. It is a form of Beaudouin-Lafon's partner paradigm, and integrates well in our model (Figure 4b).
-\end{idee}
-
-\begin{figure}[htb]
-\centering
-\definecolor{cellred}{rgb} {0.98,0.17,0.15}
-\definecolor{cellblue}{rgb} {0.17,0.60,0.99}
-
-\newcommand{\labelcell}[2]{
-\node[minimum width=1.9cm, minimum height=.8cm,text width=1.7cm, align=center, outer sep=0](#1) {#2};
-}
-\newcommand{\bluecell}[2]{
- \node[minimum width=2cm, minimum height=.8cm,fill=cellblue, text=white,text width=2cm, align=center, rounded corners=2ex, outer sep=0](#1) {#2};
-}
-\newcommand{\redcell}[2]{
- \node[minimum width=2cm, minimum height=.8cm,fill=cellred, text=white,text width=2cm, align=center, rounded corners=2ex, outer sep=0](#1) {#2};
-}
-\newcommand{\barcell}[1]{
- \node[minimum width=4cm, minimum height=1mm,fill=black!10, align=center, outer sep=0](#1) {};
-}
-\begin{tikzpicture}[align=center,outer sep=5mm]
- \small
- \matrix[anchor=south, row sep=3mm, column sep=2mm,inner sep=0, node distance=0] (cells) at (0,0.3) {
- \redcell{user}{User} & \bluecell{system2}{System}\\
- \barcell{thebar} & \barcell{thebar2}\\
- \bluecell{system}{System} & \redcell{user2}{User}\\
- \redcell{user3}{User} & \redcell{user4}{User}\\
- };
- \node[anchor=south, minimum width=8.6cm,minimum height=1mm, inner sep=0,fill=black!10, outer sep=0](thebar3) at (0,0.35) {};
- \node[anchor=north, minimum width=8.6cm, minimum height=.8cm,fill=cellblue, text=white, align=center, rounded corners=2ex, outer sep=0](system3) at (0,0) {System};
- % fig a
- \node[anchor=south west] at (-4.9,1.4) {a)};
- \draw [->, -stealth', thick]
- (user.west) edge[bend right] ([xshift=-50]thebar.north)
- ([xshift=-50]thebar.south) edge[bend right] (system.west)
- (system.east) edge[bend right] ([xshift=50]thebar.south)
- ([xshift=50]thebar.north) to[bend right] (user.east);
- %fig b
- \node[anchor=south west] at (-0.4,1.4) {b)};
- \draw [->, -stealth', thick]
- (system2.west) edge[bend right] ([xshift=-50]thebar2.north)
- ([xshift=-50]thebar2.south) edge[bend right] (user2.west)
- (user2.east) edge[bend right] ([xshift=50]thebar2.south)
- ([xshift=50]thebar2.north) to[bend right] (system2.east);
- %fig c
- \node[anchor=south west] at (-4.9,0.7) {c)};
- \draw [->, -stealth', thick]
- (user3.west) edge[bend right] ([xshift=-110]thebar3.north)
- ([xshift=-110]thebar3.south) edge ([xshift=-110]system3.north)
- ([xshift=-5]system3.north) edge ([xshift=-5]thebar3.south)
- ([xshift=-5]thebar3.north) to[bend right] (user3.east);
-
- \draw [->, -stealth', thick]
- (user4.west) edge[bend right] ([xshift=5]thebar3.north)
- ([xshift=5]thebar3.south) edge ([xshift=5]system3.north)
- ([xshift=110]system3.north) edge ([xshift=110]thebar3.south)
- ([xshift=110]thebar3.north) to[bend right] (user4.east);
-
-\end{tikzpicture}
-\caption{Combinations of Norman's seven stages of action~\protect\cite{norman02} with the seven stages of reaction above. The three combinations represent Beaudouin-Lafon's three paradigms~\protect\cite{mbl04}: a) computer-as-a-tool; b) computer-as-a-partner; c) computer-as-a-medium. The agent(s) at the top have the initiative, while the agent(s) at the bottom react to the other agents.}
-\label{fig:extendedaction}
-\end{figure}
-
-%: user delegates, then the system releases the control.
-
-%The interaction between a user and an interactive system is represented Figure~\ref{fig:extendedaction}.
-%The user's actions are connected to the system sensors of the input devices, and the physical effects produced by the output devices are connected to the user's sensory organs.
-
-\subsection{Computing affordance}
-
-The seven stages of action are closely related to the notion of \emph{affordance}~\cite{gibson77}.
-An affordance is a property between a person and an object that enables this person to perform a set of physical actions on this object.
-%These properties are advertised by \emph{signifiers}.
-For example, the $2\,\mathrm{cm}^2$ embossed surface of a button affords pressing it with a finger.
-The way we grasp an interactive device affords different actions we can perform on it~\cite{fellion17}.
-%The way users interact with an object such as an interactive system depends on affordances.
-The combination of affordances, past experiences and other knowledge enables users to create a mental model of an interactive system.
-%The user has a mental model of how the system works, which he created with past experience and knowledge of similar systems.
-%In the best case, the user can confirm or complete his mental model with further exploration through these 7 stages.
-Thanks to this, we can use systems we have never seen before.
-If the system follows standard usability guidelines~\cite{nielsen90}, the user can explore it further to complete her mental model.
-However the user model can differ from the design model.
-In this situation, the user's actions can lead to results she did not expect.
-%The discrepancy between the user model and the design model reveals usability issues.
-These are generally usability issues that designers should fix.
-
-Similarly, interactive systems get information from the environment and act on it based on their programmed behavior.
-The equivalent of the user's mental model here is a set of assumptions that are crystallized in the system's program.
-It strongly limits what the system can perceive from the world, and the actions it can perform in it.
-%On top of that, what we can observe with the seven stages of reaction is that at each stage the result may differ from what the designer would like to achieve.
-%There can be noise in the sensed signal, differences between the command and the output signal or between the output signal and the physical effect.
-%Sensors may fail at sensing the intended phenomenon, and the physical effects produced by the interactive system can be altered by something in its environment.
-Wegner models interactive computation with input and output streams, which allow interaction machines to interact continuously with their environment~\cite{wegner99}.
-He demonstrated that Turing machines cannot reproduce this behavior, and refuted the strong Church-Turing thesis because of this~\cite{goldin08}.
-Indeed, computability only concerns the computation of mathematical functions~\cite{turing38}.
-It cannot capture the essence of continuous streams of actions and reactions with an uncontrolled environment.
-Therefore we need a more general notion of “what interactive systems can do” than just computability.
-
-% Therefore we claim that this notion is more general than computability~\cite{turing38}.
-% The notion of computability is essentially algorithmic by nature.
-% One of the essential properties of algorithms is that their output have a specified relation to the inputs~\cite{knuth68}.
-% This property prevents algorithms to explore an open world, with unexpected objects with unknown behavior.
-% Interaction is a better paradigm in such situations, because interactive systems are connected to the outside world with input and output streams~\cite{wegner99}.
-
-We define the notion of \emph{computing affordance} as a behavior an interactive system can have with people or objects in their environment.
-It takes into account both its program and its sensing and actuating capabilities.
-These behaviors can either be desired or not, and can either be possible or not with the interactive system's implementation (Table~\ref{table:computingaffordance}).
-\emph{Existing features} are desired behaviors that can result from the interactive system's implementation.
-This means the appropriate sensing and actuating chains work properly, and the program uses them in the intended way.
-\emph{Missing features} are desired behaviors that the interactive system cannot produce.
-This can be because they were not implemented, or because the implementation does not behave as intended.
-Indeed, there can be noise in the sensed signal, or differences between the command and the output signal, or between the output signal and the physical effect.
-Sensors may fail to sense the intended phenomenon, and the physical effects produced by the interactive system can be altered by something in its environment.
-An \emph{unwanted behavior} is an undesired behavior that can happen with the interactive system's implementation.
-It can be the result of a malfunction, or an unpredicted side effect of another behavior.
-Finally, \emph{not required features} are undesired behaviors that cannot occur with the interactive system's implementation.
-
-\begin{idee}
-We will clarify our term “computing affordance”. We make a distinction with the affordance defined by Gibson (and the Norman and Gaver extensions) because systems have interactions with the environment, not only with humans. The overall idea is to bridge theoretical computing models with ours. Like others (\cite{gibson50}, Hornbæk and Oulasvirta discuss others), we extend the Church-Turing thesis beyond the computing of mathematical functions. Our approach is an anthropomorphic vision of systems that interact with an unpredictable environment. We view the system as a combination of hardware and software, which correspond to the human body and cognition. According to Gibson, our understanding of our environment is not a cognitive process alone, but a relation between our body and our cognition. Our model applies this principle to computation. This requires computation to refer to “a behavior an interactive system can have with people or objects in their environment” rather than just computing mathematical functions. The term “computing affordance” binds this extended notion of computing with affordance.
-\end{idee}
-
-% there is not there is
-%info perceived false affordance perceived affordance
-%info not perceived correct reject hidden affordance
-
-% there is not there is
-%info perceived feature required existing feature
-%info not perceived feature not required missing feature
-
-\begin{table}[htb]
-\caption{Computing affordances are either desired or not, and the implementation either permits them or not.}
-\label{table:computingaffordance}
-\newcommand{\cell}[1]{
- \node[minimum width=3cm, minimum height=2cm,draw=black!20,thin,text width=2cm, align=center] {
- #1
- };
-}
-\newcommand{\topcell}[2]{
- \node[minimum width=3cm, minimum height=1cm,draw=black!20,thin,fill=black!10,text width=2cm, align=center](#1) {
- #2
- };
-}
-\newcommand{\leftcell}[2]{
- \node[minimum width=2cm, minimum height=2cm,draw=black!20,thin,fill=black!10,text width=2cm, align=center](#1) {
- #2
- };
-}
-\centering
-\begin{tikzpicture}
- \matrix[row sep=0mm, column sep=0mm,inner sep=0, node distance=0] (cells) {
- & \topcell{nd}{Not desired} & \topcell{d}{Desired} \\
- \leftcell{y}{Yes} & \cell{Unwanted behavior} & \cell{Existing feature} \\
- \leftcell{n}{No} & \cell{Not required feature} & \cell{Missing feature}\\
- };
- \node[anchor=south, minimum width=4cm, rotate=90, thick,draw=black!20,fill=black!20] at (n.north west) {Implementation};
- \node[anchor=south, minimum height=0.6cm, minimum width=6cm, thick, draw=black!20,fill=black!20] at (nd.north east) {Computing affordance};
-\end{tikzpicture}
-\end{table}
-
-%Engineers and designers build systems with the capabilities they desire, following the seven stages of reaction.
-%However the behavior of the system can be different, because of something unpredicted in the environment, or just because of a software or hardware malfunction.
-
-
-\subsection{Evolution}
-
-The discrepancy between the system behavior and the design model is complementary to the discrepancy between the user's model and the designer's model revealed by Norman's model.
-They both contribute to the evolution of the interactive system.
-
-There are two scales of evolution for humans.
-The first one is the evolution of mankind as a species.
-Every generation evolves thanks to genetics, society and culture, to name a few factors.
-The second one is the evolution of every individual during their whole life.
-They learn about the world, but they also train their capacities.
-Even though there is of course a limit to this training, it is a key capability that enables people to adapt to their environment.
-
-Interactive systems can evolve at similar scales.
-They first evolve through software and hardware updates.
-These updates remove unwanted behaviors, and implement or fix missing features.
-Following principles such as reification, polymorphism and reuse facilitates such evolutions~\cite{mbl00a}.
-The set of desired behaviors actually evolves as well.
-First, users adapt to the interactive systems' behavior and get used to it.
-Second, practice stimulates new ideas of desired behavior.
-Therefore users and systems evolve together.
-This phenomenon is known as coevolution~\cite{mackay90}.
-While training makes users interact more efficiently, the benefits are greater when interaction techniques support training~\cite{cockburn14}.
-
-The evolution of interactive systems during their execution is still in its infancy.
-For example Pharo applications support the modification and debugging of their own code during their execution~\cite{black10}.
-Neural networks enable programs to evolve their behavior with supervised or unsupervised learning~\cite{mcculloch43}.
-Recent advances in deep learning facilitated this learning phase, making this technique more practical~\cite{lecun15}.
-For example, robots can learn how to interact with an object through curiosity-driven exploration~\cite{laversannefinot18}.
-
-The validation of the behavior of an interactive system is an essential part of the evolution process.
-Thanks to it, behaviors can be categorized according to Table~\ref{table:computingaffordance}.
-There are several tools to prove programs with formal methods~\cite{chlipala13}.
-With these tools we can verify that the code is a correct implementation of an algorithm.
-Proving the behavior of neural networks with formal methods is a current challenge in machine learning.
-Similarly to interactive systems, they seem to be more suited for empirical evaluations.
-
-\section{Conclusion}
-
-Designing and implementing an interactive system is hard because it connects sensory, cognitive, software and hardware components.
-Mismatches between intended and actual behaviors can happen at any stage of the process.
-We presented a model of interactive systems behavior.
-It is an adaptation of Norman's action theory, which makes it easier to combine both models to describe the full phenomenon.
-
-
-Depending on their background, readers will find different things in this chapter.
-Experts in human factors will get insight into the implementation aspects of interactive devices.
-%It must not limit their creativity,
-Engineers will get better insight into how implementation issues affect usability.
-It is also a checklist for designers, makers and practitioners when they are designing, implementing, upgrading or fixing interactive systems.
-
-In our future work, we will generalize the notion of computability.
-We conjecture that interactive systems suffer from an incompleteness phenomenon similar to that of formal systems~\cite{godel31}.
-
-\begin{idee}
-Our model certainly has limitations. It does not take into account sociological aspects, and more generally it does not discuss the design process (the Rode ref is awesome by the way). In its current form it focuses on functional and engineering aspects. We are interested in the question of the design and evolution of interactive systems though. It definitely involves more than engineering matters. \cite{huot13} is an interesting first answer. Our long term objective is to study the incompleteness nature of interactive systems, and it involves addressing these questions. However we consider this subject is out of the scope of the current submission.
-\end{idee}
-
-\section{Descriptive Case study (Printgets)}
-\begin{idee}
-The objectives and purpose of our model will be much clearer with an additional case study. We propose describing a vibrotactile system with capacitive input, within our framework. We will describe how implementation steps fit in our model: calibration, interpolation, filtering, thresholds, hysteresis, event fusion, output models, signal computing and physical response to command.
-\end{idee}
-\cite{frisson17}
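-
-As an example of the hysteresis step mentioned above, here is a minimal sketch; the thresholds $T_{on} > T_{off}$ are assumptions, not the values used in~\cite{frisson17}.
-
-\begin{algorithm}[htb]
-\SetAlgoLined
-\caption{Sketch of hysteresis thresholding on a filtered capacitive channel. Thresholds are assumed.}
-\KwIn{filtered capacitance $c$, thresholds $T_{on} > T_{off}$, contact state $s$}
-\KwOut{updated contact state $s$, with press and release events}
-\uIf{$s$ is up and $c > T_{on}$}{$s \leftarrow$ down\tcp*{emit a press event}}
-\uElseIf{$s$ is down and $c < T_{off}$}{$s \leftarrow$ up\tcp*{emit a release event}}
-\end{algorithm}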
-
-\section{Evaluative Case study (Latency)}
-\cite{casiez17}
--- /dev/null
+%!TEX root = ../hdrmain.tex
+
+\chapter{Evolutions of interaction}
+ \epigraph{\lorem}{Auteur}
+
+\begin{Abstract}
+\loremipsum
+\end{Abstract}
+
+
+\section{Extending the computer desktop}
+
+\subsection{CtrlMouse: duplicating mode switchers on the mouse}
+
+Ctrl Mouse \cite{pietrzak14}
+
+\subsection{Métamorphe: a keyboard with actuated keys}
+
+Métamorphe \cite{bailly13}
+
+\subsection{Living Desktop: actuated peripherals}
+
+Living Desktop \cite{bailly16}
+
+
+\section{Extending interaction paradigms}
+
+\subsection{Direct Manipulation in tactile displays}
+
+Direct Manipulation~\cite{gupta16}
+
+\subsection{Summon interactions}
+
+Summon interactions~\cite{gupta17}
+++ /dev/null
-%!TEX root = ../hdrmain.tex
-
-\chapter{Extending gestural interaction}
- \epigraph{\lorem}{Auteur}
-
-\begin{Abstract}
-\loremipsum
-\end{Abstract}
-
-\section{New interaction paradigm (Summon interactions)}
-\cite{gupta17}
-
-\section{Pointing in Virtual Reality (Raycursor)}
-\cite{baloup19,baloup18,baloup19a}
-
-\section{New degrees of freedom for pen interaction (Flexstylus)}
-\cite{fellion17}
+++ /dev/null
-%!TEX root = ../hdrmain.tex
-
-\chapter{Leveraging haptic feedback}
- \epigraph{\lorem}{Auteur}
-
-\begin{Abstract}
-\loremipsum
-\end{Abstract}
-
-\section{Extending direct manipulation (Tactile direct manipulation)}
-\cite{gupta16,gupta16a}
-
-\section{Haptic feedback for activity monitoring (Activibe)}
-\cite{cauchard16}
-
-\section{Contribution of haptic to sense of embodiment}
+++ /dev/null
-%!TEX root = ../hdrmain.tex
-
-\chapter{Extending interactive devices}
- \epigraph{\lorem}{Auteur}
-
-\begin{Abstract}
-\loremipsum
-\end{Abstract}
-
-\section{(Metamorphe)}
-\cite{bailly13,bailly13a}
-
-\section{(CtrlMouse)}
-\cite{pietrzak14}
-
-\section{(Living Desktop)}
-\cite{bailly16}
--- /dev/null
+%!TEX root = ../hdrmain.tex
+
+\chapter{Control with automation}
+ \epigraph{\lorem}{Auteur}
+
+\begin{Abstract}
+\loremipsum
+\end{Abstract}
+
+\begin{figure}[htb]
+\centering
+\includegraphics[width=.8\textwidth]{continuum}
+\caption{Continuum between Beaudouin-Lafon's Tool and Partner paradigms.}
+\label{fig:interactioncontinuum}
+\end{figure}
+
+\begin{itemize}
+ \item Métamorphe \cite{bailly13}
+ \item Living Desktop \cite{bailly16}
+ \item Ctrl Mouse \cite{pietrzak14}
+ \item RayCursor \cite{baloup18,baloup19}
+\end{itemize}
+
+Ongoing work: facial animation
\ No newline at end of file
--- /dev/null
+%!TEX root = ../hdrmain.tex
+
+\chapter{Direct Manipulation}
+ \epigraph{\lorem}{Auteur}
+
+\begin{Abstract}
+\loremipsum
+\end{Abstract}
+
+\section{Extending direct manipulation to haptic displays}
+
+Haptic Direct Manipulation \cite{pietrzak15,gupta16}
+
+\section{Direct manipulation without pointing}
+
+Summon \& Select \cite{gupta17}
+
+\section{Fingers as interactive instruments}
+
+Fingercuts \cite{goguey14,goguey14a,goguey17}
--- /dev/null
+%!TEX root = ../hdrmain.tex
+
+\chapter{A model of interactive systems}
+ \epigraph{Computing is to watching a movie what interaction is to playing a video game.}{Me}
+
+\begin{Abstract}
+Modeling the behavior of both an interactive system and its users is challenging because of their fundamentally different natures.
+ In his action theory, Norman presents how an agent interacts with world objects through the seven stages of action.
+ The actions the agent performs on the object, and the way she perceives its behavior, depend on factors such as affordances and her knowledge about this object or similar ones.
+ While this model is efficient at describing how users perceive interactive systems, it does not explain how systems interact with users because world objects are seen as black boxes.
+ We present the seven stages of reaction, which mirror the seven stages of action to describe the behavior of reactive systems.
+ We discuss the similarities and differences of the two models, how they combine, as well as implications for the design of interactive systems.
+\end{Abstract}
+
+%\section{7 stages model}
+
+
+Studying interactive systems is a complex task because it requires knowledge across a wide range of areas.
+It spans from human sciences such as psychology, biology or sociology, to technical sciences such as computer science, electronics, mechanics, control theory or mathematics.
+All these experts need a common ground for studying interactive systems together.
+In particular, despite the fact that computers are intended for interacting with humans, they are still not designed for this use.
+They are still designed with computation in mind.
+There is certainly a historical reason for this, exacerbated by inertia and resistance to change.
+However, we argue here that the theoretical background computers are built upon is the main reason why today's computers are designed for computation rather than for interaction.
+
+In this chapter we first present existing models of computation, software and users.
+Then we describe our own model.
+Finally, we describe two case studies, to demonstrate the descriptive and evaluative powers of our model.
+
+\begin{idee}
+None of the computational and software models consider the hardware part
+\end{idee}
+
+\section{Modelling interactive systems}
+
+\begin{idee}
+Reviewers questioned the similarity with past research due to insufficient positioning in the paper. The past research they mention on toolkits and task models essentially focuses on the application, input phrase or encoding stages of our model. However, in the HCI community we observe a notable rise of custom hardware for the design of interactive systems in the past decade. Our model covers both the software and hardware parts of interactive systems, as a whole.
+\end{idee}
+
+\cite{hornbaek17}
+
+\subsection{Computational models}
+
+Computing models such as $\lambda$-calculus~\cite{church32} or Turing machines~\cite{turing38} focus on solving numerical problems, and overlook the interaction machines have with their environment.
+While people daily interact with a world full of interactive devices, machines constantly interact with a world full of humans.
+Because of that, Goldin and Wegner showed that interaction is a more general model of computing than Turing-complete models~\cite{goldin08}.
+They argue that the universality of the computing models above is due to the fact they all rely on induction~\cite{wegner99}.
+Interaction is rather a co-inductive phenomenon.
+
+The induction mechanism converges to base cases, which ensures computation always terminates.
+In contrast, co-induction is a process that applies to streams as inputs are received.
+The question of whether the co-inductive process terminates is not relevant.
+It potentially runs forever on an infinite input stream.
+This is a necessary mechanism to model and implement interaction with external agents.
+
+\begin{algorithm}[htb]
+\SetAlgoLined
+\caption{Typical main function of an interactive application}
+\While{true}{
+ get inputs\;
+ update internal model\;
+ generate outputs\;
+}
+\end{algorithm}
+
+\subsection{Software models}
+
+\begin{itemize}
+	\item PAC \cite{coutaz87}
+	\item Arch \cite{arch92}
+	\item MVC \cite{reenskaug79a}
+	\item Seeheim \cite{green85}
+\end{itemize}
+
+\subsection{User models}
+
+Many human behavior models are used in HCI, and there is an active community working on this.
+With GOMS~\cite{card83} and the Keystroke-Level Model~\cite{card80} we can predict the time it takes for a person to use a keyboard or a pointer for example, including mental activities.
+Fitts' Law is extensively used for modeling pointing~\cite{mackenzie92}, and the steering law for modeling how users follow a path~\cite{accot97}.
+However such models are specialized to specific tasks, therefore not generic enough for describing entire interactive systems.
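+
+As an illustration of the kind of prediction these models afford (a back-of-the-envelope sketch using commonly cited operator estimates, not measurements of a specific system), the Keystroke-Level Model decomposes reaching for the mouse, pointing at a button and clicking it as:
+\[
+T = H + P + 2B \approx 0.4 + 1.1 + 2 \times 0.1 = 1.7~\mathrm{s}
+\]
+where $H$ is the homing time, $P$ the pointing time and $B$ a button press or release~\cite{card80}.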
+
+Norman's action theory is a more general description of how people interact with the world, in particular interactive systems~\cite{norman02}.
+Figure~\ref{fig:sevenstages} depicts the seven stages of action he uses to describe his theory.
+The person starts with a goal in her mind.
+Then she successively forms an intention, specifies a sequence of actions, and executes these actions on the world.
+As a consequence, the state of the world changes, and the person perceives these changes.
+Then she interprets this state and evaluates the consequences of her actions on the world.
+%The way people interact with the world highly depends on actions enabled by shared properties between a user and an object,
+While this model is efficient at describing how people interact with interactive devices, the machines themselves are seen as black boxes.
+%However interactive systems have a process to interact with the world.
+
+%The seven stages of action as described by Norman is a direct application of Gibson's perception/action coupling~\cite{gibson50}.
+%Human beings act on the world to perceive it.
+%The way they interpret it depends on the action they performed to explore it.
+%O'Regan describes this interaction as the sensorimotor cycle~\cite{oregan01a}.
+%It is a continuous cycle of actions and perceptions, that shape our understanding of the world around us.
+%This phenomenon is the building block of the direct manipulation paradigm~\cite{schneiderman83}.
+%\fixme{Describe seven stages of action~\cite{norman02}.}
+
+\begin{figure}[htb]
+\centering
+\definecolor{cellred}{rgb} {0.98,0.17,0.15}
+
+\newcommand{\stage}[2]{
+ \node[minimum width=2.5cm, minimum height=1.0cm,fill=cellred, text=white,text width=2cm, align=center, rounded corners=2ex, outer sep=0](#1) {#2};
+}
+\begin{tikzpicture}
+ \small
+ \matrix[row sep=3mm, column sep=2mm,inner sep=0, node distance=0, outer sep=5mm] (cells) {
+ & \stage{goal}{Goal} & \\
+ \stage{intention}{Intention} & & \stage{evaluation}{Evaluation} \\
+ \stage{specification}{Specification of actions} & & \stage{interpretation}{Interpretation} \\
+ \stage{execution}{Execution of actions} & & \stage{perception}{Perception} \\
+ };
+ \node[anchor=north, minimum width=8.6cm,minimum height=.8cm,fill=black!10](world) at (cells.south) {World};
+ \draw [->, -stealth', thick]
+ (perception) edge (interpretation)
+ (interpretation) edge (evaluation)
+ (evaluation) edge[out=90, in=0] (goal)
+ (goal) edge[out=180, in=90] (intention)
+ (intention) edge (specification)
+ (specification) edge (execution);
+ \draw [->, -stealth', thick, dashed, draw=black!50, fill=black!50]
+ (perception|-world.north) edge (perception.south)
+ (execution) to (execution|-world.north);
+ \node[anchor=south, minimum width=2.6cm, rotate=90, outer sep=5mm](gulfexecution) at (specification.west) {Gulf of execution};
+ \draw [->, -stealth', thick,transform canvas={xshift=1em}]
+ (gulfexecution.east |- intention.north) to (gulfexecution.east |- execution.south);
+ \node[anchor=south, minimum width=2.6cm, rotate=270, outer sep=5mm](gulfevaluation) at (interpretation.east) {Gulf of evaluation};
+ \draw [->, -stealth', thick,transform canvas={xshift=-1em}]
+ (gulfevaluation.west |- perception.south) to (gulfevaluation.west |- evaluation.north);
+% \node[anchor=south, minimum width=2.6cm, rotate=90, thick,draw=black!20,fill=black!20] at (n.north west) {Implementation};
+% \node[anchor=south, minimum height=0.6cm, minimum width=6cm, thick, draw=black!20,fill=black!20] at (nd.north east) {Computing affordance};
+\end{tikzpicture}
+\caption{Norman's seven stages of action~\protect\cite{norman02}. It describes how people interact with their environment.}
+ \label{fig:sevenstages}
+\end{figure}
+
+
+We present the seven stages of reaction, a model of interactive systems behavior inspired by Norman's seven stages of action.
+Similarly to the designeering approach~\cite{huot13}, it advocates considering the implementation aspects of interactive systems as a complement to user-centered design methods.
+Our model describes both the software and hardware parts of an interactive system, and how the system interacts with its environment.
+%We discuss how they work, how they interact with their environment, and how they evolve.
+After describing the model, we discuss its combination with Norman's model, and its implications to the design of interactive systems.
+
+
+\section{Seven stages of reaction}
+
+Interactive systems globally work in a similar way to humans.
+They sense the world, and they act on it.
+%However, we can identify a major difference in most cases.
+%While people generally have the initiative, the loop in the seven stages of action starts with a goal, interactive systems tend to react to the environment.
+%They get information from the world, interpret it, and act on the world in return.
+%Despite the tremendous progress of computers the past decades, they can still process a tiny part of
+%\loremipsum
+We describe below the \defword{seven stages of reaction} based on Norman's seven stages of action (Figure~\ref{fig:mysevenstages}).
+The model is upside down compared to Norman's: it starts with an input chain (on the left), continues with a software part that interprets the inputs (at the bottom), and ends with an output chain that produces effects on the world (on the right).
+%After reviewing the seven stages of reaction, we will discuss the pitfalls it reveals, and we will compare them with the seven stages of action.
+
+\begin{idee}
+The stages of our model form a functional slicing of system parts by input/output: peripherals (sensing/physical effect), driver (event/command), toolkit (phrase/encoding), while the application binds both sides with interaction techniques. Hence the model not only covers the application, but also the full input and output chains.
+\end{idee}
+
+\begin{idee}
+Criticisms of Norman's model are not necessarily relevant here, since they focus on the human side. In our submission we take inspiration from this model to build a compatible model that describes the system behavior. We only discuss the human side insofar as it interacts with the system, so any variation of Norman's model should remain compatible with our own.
+\end{idee}
+
+\begin{figure}[htb]
+\centering
+\definecolor{cellblue}{rgb} {0.17,0.60,0.99}
+
+\newcommand{\stage}[2]{
+ \node[minimum width=2.5cm, minimum height=1.0cm,fill=cellblue, text=white,text width=2cm, align=center, rounded corners=2ex, outer sep=0](#1) {#2};
+}
+\begin{tikzpicture}
+ \small
+ \matrix[row sep=3mm, column sep=2mm,inner sep=0, node distance=0, outer sep=5mm] (cells) {
+ \stage{sensing}{Sensing} & & \stage{physical}{Physical effect} \\
+ \stage{events}{Input Events} & & \stage{command}{Command} \\
+ \stage{phrase}{Input phrase} & & \stage{encoding}{Encoding} \\
+ & \stage{application}{Application} & \\
+ };
+ \node[anchor=south, minimum width=8.6cm,minimum height=.8cm,fill=black!10](world) at (cells.north) {World};
+ \draw [->, -stealth', thick]
+ (sensing) edge (events)
+ (events) edge (phrase)
+ (phrase) edge[out=270, in=180] (application)
+ (application) edge[out=0, in=270] (encoding)
+ (encoding) edge (command)
+ (command) edge (physical);
+ \draw [->, -stealth', thick, dashed, draw=black!50, fill=black!50]
+ (sensing|-world.south) edge (sensing.north)
+ (physical) to (physical|-world.south);
+ \node[anchor=south, minimum width=2.6cm, rotate=90, outer sep=5mm](funnelevaluation) at (events.west) {Funnel of evaluation};
+ \draw [->, -stealth', thick,transform canvas={xshift=1em}]
+ (funnelevaluation.east |- sensing.north) to (funnelevaluation.east |- phrase.south);
+ \node[anchor=south, minimum width=2.6cm, rotate=270, outer sep=5mm](funnelexecution) at (command.east) {Funnel of execution};
+ \draw [->, -stealth', thick,transform canvas={xshift=-1em}]
+ (funnelexecution.west |- encoding.south) to (funnelexecution.west |- physical.north);
+% \node[anchor=south, minimum width=2.6cm, rotate=90, thick,draw=black!20,fill=black!20] at (n.north west) {Implementation};
+% \node[anchor=south, minimum height=0.6cm, minimum width=6cm, thick, draw=black!20,fill=black!20] at (nd.north east) {Computing affordance};
+\end{tikzpicture}
+\caption[Seven stages of interactive computation]{The seven stages of interactive computation, adapted from Norman's seven stages of action. It describes how interactive systems interact with their environment.}
+\label{fig:mysevenstages}
+\end{figure}
+
+\subsection{Input chain}
+
+The input stage mirrors the evaluation part of the seven stages of action.
+It comprises three stages that explain how interactive systems get information from the environment.
+%world, and in particular the user.
+
+\paragraph{Sensing}
+
+The input chain begins with the \emph{sensing} stage.
+It mainly consists of hardware, sensors and their driving electronics, that measure physical properties of the environment.
+User movement is the favorite kind of information for interactive systems, but sensors can also measure light~\cite{sonne17}, temperature~\cite{sarsenbayeva17}, moisture~\cite{jia18} or vibrations~\cite{casiez17}.
+There is also an important software part that consists of encoding~\cite{song11}, filtering~\cite{casiez12} and transmitting the data.
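+
+As an illustration of this software part, a basic low-pass filter in the spirit of the 1€ filter~\cite{casiez12} smooths each new sample $x_i$ against the previous estimate:
+\[
+\hat{x}_i = \alpha\, x_i + (1 - \alpha)\, \hat{x}_{i-1}, \qquad \alpha \in (0, 1]
+\]
+The 1€ filter additionally adapts the cutoff frequency, hence $\alpha$, to the signal speed, trading jitter at low speed against lag at high speed.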
+
+\paragraph{Input events}
+
+The low-level stages of the system transform sensed information into \emph{input events}.
+This stage takes into account predefined information, such as calibration or transfer functions~\cite{casiez11a}.
+Raw capacitance values are transformed into contact points~\cite{lee85}.
+Body skeletons are computed from depth images~\cite{shotton11}.
+At this stage we notice that the infinite richness of the world is reduced to a relatively small number of bits.
+%Let's discuss the simple example of a keypress on a keyboard.
+%The only information in the digital world is whether a key is pressed or not. There is no information about the finger that pressed it, the speed of the finger or its trajectory.
+%There is neither the possibility if a finger is hovering the key, if several fingers are pressing it, or even if it was pressed with the nose.
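+
+As an illustration of this stage, and under the simplifying assumption of a constant gain $G$, a pointing transfer function maps the $n$ counts reported by a mouse of resolution $R_m$ (counts per inch) to a pointer displacement of
+\[
+d = G \cdot \frac{n}{R_m} \cdot R_d \quad \mbox{pixels,}
+\]
+where $R_d$ is the display density in pixels per inch.
+Practical transfer functions make the gain depend on the motor speed~\cite{casiez11a}.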
+
+\paragraph{Input phrases}
+
+Input events are treated as tokens, or lexical units.
+They are interpreted as \emph{input phrases} with grammars, finite automata~\cite{appert06} or more complex algorithms~\cite{wobbrock07}.
+They form the building blocks of \defword{interaction techniques}, also called modalities~\cite{nigay93}.
+A click, a drag \& drop or a pinch gesture are examples of interaction techniques.
+The combination of modalities, called multimodality~\cite{coutaz95}, expands the possible inputs.
+The joint use of a digital pen and multitouch on an interactive surface is such an example~\cite{hinckley10}.
+%Multimodality is the combination of several modalities.
+%\cite{oviatt99}.
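+
+As a minimal example of such a grammar, a click and a drag \& drop can be written over the event vocabulary of a pointing device:
+\[
+\mathit{click} ::= \mathit{press}\ \mathit{release}
+\qquad
+\mathit{drag} ::= \mathit{press}\ \mathit{move}^{+}\ \mathit{release}
+\]
+A recognizer then mainly has to disambiguate the two phrases, for instance with a distance or time threshold after the press event.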
+
+\subsection{Application}
+
+The \emph{application} layer is specialized for assisting users in their tasks.
+It executes actions as a result of input phrases, and produces outputs to give users feedback and the result of their actions.
+The architecture of this stage is further detailed in models such as PAC~\cite{coutaz87}, Arch~\cite{arch92} or MVC~\cite{reenskaug79a}.
+These architectures define several layers between the user, seen through widgets, and the computer, seen through a functional core (or abstraction, or model).
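+
+As a minimal sketch, and without committing to one of these architectures, the update cycle of such a layered application reads:
+
+\begin{algorithm}[htb]
+\SetAlgoLined
+\caption{Sketch of an MVC-style update cycle}
+a widget turns an input phrase into a command\;
+the command updates the functional core (the model)\;
+\ForEach{view observing the model}{
+ read the new state and redraw\;
+}
+\end{algorithm}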
+
+% meta algorithm that chooses whether there are algorithms to run, which algorithms to run, and with which parameters.
+% It occurs typically when a command is selected, or an object is being manipulated.
+
+% Every computer system has at some point an effect on the world.
+% We extend the notion of dead code to not only code that is never executed, but also code which result has no effect whatsoever on the physical world.
+% Even a routing algorithm will at some point transmit data to a computer which will display it or print it in any way.
+% % Quantic information : stored in computer memory. Will it be observed?
+% Bringing a piece of information to the physical world requires several steps.
+
+%Software architecture models such as further detail this stage.
+
+\subsection{Output chain}
+
+The output chain mirrors the action part of the seven stages of action.
+It describes the way interactive systems act on the world.
+In the following we use haptic interfaces as an example, because of the diversity of their actuation mechanisms.
+However the model applies to any output modality.
+
+\paragraph{Encoding}
+
+First of all the system must \emph{encode} the piece of information.
+This encoding takes into account multiple parameters, as described by Bernsen's design space~\cite{bernsen93a}, or by Bertin in the case of vision~\cite{bertin83}.
+At this stage, the application decides how an object or a piece of information will be represented in the physical world.
+%A visual encoding can be an icon, or a text~\cite{bernsen93a}.
+Audio encodings can be sounds~\cite{gaver93} or melodies~\cite{brewster93}.
+Haptic encodings include vibrations~\cite{brewster04}, forces~\cite{maclean03a,pietrzak05a} or textures~\cite{pietrzak09}.
+Force feedback typically computes a force vector as the result of a force model that depends on the device position~\cite{zilles95}.
+%Proxy\cite{ruspini97}
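+
+A minimal example of such a model, in the spirit of the god-object method~\cite{zilles95}, couples the device position $p_d$ to a surface-constrained proxy $p_p$ with a virtual spring:
+\[
+F = k \, (p_p - p_d)
+\]
+where the stiffness $k$ controls how hard the simulated surface feels; the encoding stage consists in choosing such a model and its parameters.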
+
+%Effectors can produce light (like screens), sounds, vibrations, forces, …
+
+\paragraph{Commands}
+
+Output devices have driving electronics which require specific \emph{commands}.
+For example, force feedback devices commonly use DC motors.
+The output force depends on the voltage they receive.
+High-force devices require strong motors, and therefore high voltages.
+Consequently, haptic devices need precise amplifiers.
+When these amplifiers do not have a linear response, the command has to be adjusted.
+%\fixme{closed loop}
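+
+As a sketch of such an adjustment, assuming the usual linear relation between the torque of a DC motor and its current, $\tau = k_t\, i$: if the amplifier maps the command $u$ to a current $i = g(u)$ with a non-linear $g$, the driver can pre-compensate the command as
+\[
+u = g^{-1}\!\left(\tau^{*} / k_t\right)
+\]
+where $\tau^{*}$ is the desired torque, $g^{-1}$ being typically implemented as a calibrated lookup table.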
+
+\paragraph{Physical effect}
+
+The command sent to the device produces a \emph{physical effect}.
+This can be light, sound, vibration or force for instance.
+The user feels these effects through her senses.
+Many external factors may disturb this effect.
+The vibration transmitted by a vibrotactile actuator to the skin depends on how the actuator is attached~\cite{yao10}, or on how the user holds the device.
+The way a user sees an object displayed on a screen may be affected by ambient light or viewing distance.
+
+
+\fixme{\subsection{Pitfalls}}
+
+With this model, we demonstrate that solving a problem with an interactive system is not only a matter of algorithmic computation.
+Sensing phenomena of the environment, and producing effects on it from the results of algorithmic computation, are subject to non-trivial issues.
+Algorithms can only observe the shadow of the physical world, under the light of input and output streams.
+They are like prisoners of a digital Plato's cave.
+Therefore, addressing interaction problems requires a broader view than just observing algorithms.
+It requires identifying the information the system needs, and how to convey a result efficiently.
+To this effect, Norman's gulf of evaluation and gulf of execution are mirrored with a \defword{funnel of evaluation} and a \defword{funnel of execution}.
+
+\paragraph{Funnel of evaluation}
+
+The funnel of evaluation depicts the fact that the input stages reduce the world into a few bits.
+A good design of the input chain senses the right phenomena, at an appropriate amplitude, with a sufficient spatial and temporal resolution, and with little distortion.
+This information must be combined correctly to form a meaningful sequence of inputs.
+For example, the \emph{Midas touch} problem is a usual issue with 3D gestural interaction~\cite{gupta17}.
+Since the sensors observe all the user's movements, there is no obvious segmentation.
+The system has no way to know whether you move your hand to interact with it, or to scratch your nose.
+Conversely, occlusion prevents vision-based gesture sensors from getting position information for hidden objects.
+In these situations, we can grow the funnel of evaluation by adding segmentation gestures, or by using multiple cameras.
+
+\paragraph{Funnel of execution}
+
+The funnel of execution is symmetrical.
+The software part of the system displays parts of its data and state in a way that is intelligible to users.
+The way data is shown to the user can have a huge impact on how she interacts with it~\cite{zhang94}.
+Therefore the encoding part is crucial: it is a first filter for reducing the internal complexity of the system.
+The specifications of the output device are a second filter.
+There is a limit to the force, color, brightness, frequency, etc. each device can produce in theory.
+There is also a limit of precision, which greatly depends on amplifiers and digital-to-analog converters (DAC).
+Last, the physical effect can be inconsistent for the same command.
+Some haptic devices behave differently depending on ambient temperature, finger moisture, cleanliness, etc.
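+
+For instance, precision is bounded by quantization: an $n$-bit DAC spanning an output range of $V_{max}$ cannot produce two distinct commands closer than
+\[
+\Delta = \frac{V_{max}}{2^{n}}
+\]
+whatever the quality of the rest of the output chain.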
+
+
+\begin{idee}
+In the paper we mentioned the usefulness of this model for several audiences. We will elaborate more on the objectives for HCI researchers and practitioners.
+Our model provides a framework for describing the software and hardware parts of an interactive system. We will describe it along the three dimensions Beaudouin-Lafon proposed to evaluate interaction models~\cite{mbl04}: descriptive, evaluative and generative.
+
+\begin{itemize}
+\item Descriptive power. Many research papers do not describe critical information such as transfer functions, input\&output mappings, actuator response, …. Our model is a systematic structure for describing interactive systems. Such a description enables highlighting useful implementation details. Benefits include replicability, and highlighting potential undesired side effects in psychophysical experiments for instance.
+\item Evaluative power. Implementing hardware+software interactive systems is a particularly difficult task. Many of them, including published research prototypes, have implementation issues that could be avoided by using a systematic approach. For example, input systems have jitter because they do not use filtering, and many vibrotactile systems give poor feedback because they have a high inertia. Describing interactive systems within this common framework would make it easier to compare their implementations and identify implementation issues.
+\item Generative power. The description of interactive systems within a common framework also has the advantage of inspiring alternative designs, new combinations of designs and transgressive uses of technology.
+\end{itemize}
+\end{idee}
+
+\section{Interaction between users and systems}
+
+The purpose of interactive systems is to assist users in their activities.
+We can model the interaction of a user with a system by simply plugging the seven stages of action into the seven stages of reaction.
+The seven stages of reaction is a detailed view of the “world” stage in Norman's model.
+The connection between both occurs when the user manipulates an input device, and when she feels the physical effects of output devices~\cite{lederman96}.
+We can improve interaction not only by studying the two models separately, but also by studying their connections, similarities and differences.
+%This is typically what we study in HCI as we design new input devices with original sensing technologies~\cite{fellion17}, but also when we design new output devices with cutting-edge actuators%~\cite{frisson17,potier12,potier16}.
+
+Users and interactive systems are both modeled with an internal running loop (Figure~\ref{fig:loops}).
+The cycle of actions and perceptions that helps us explore the world is the \defword{sensorimotor loop}~\cite{oregan01a}.
+The seven stages of action model is an instance of this phenomenon.
+Both models follow Gibson's theory, according to which exploratory movements are part of our understanding of the sensations we feel as a result of them~\cite{gibson50}.
+Through this phenomenon, users can control interactive systems continuously~\cite{gupta16}.
+It enables fast and incremental adjustments, which are two of the building blocks of \defword{direct manipulation}~\cite{schneiderman83}.
+In order to make this happen, the interactive system must also respond in real time.
+This means its \defword{execution loop} must be fast, with low latency~\cite{casiez17}: typically within $16\,ms$ for a $60\,Hz$ visual display, and within $1\,ms$ for a $1000\,Hz$ haptic loop.
+
+\begin{figure}[htb]
+\centering
+\definecolor{cellred}{rgb} {0.98,0.17,0.15}
+\definecolor{cellblue}{rgb} {0.17,0.60,0.99}
+
+\newcommand{\labelcell}[2]{
+\node[minimum width=3cm, minimum height=1.0cm,text width=1.7cm, align=center, outer sep=0](#1) {#2};
+}
+\newcommand{\bluecell}[2]{
+ \node[minimum width=2.5cm, minimum height=1.0cm,fill=cellblue, text=white,text width=2cm, align=center, rounded corners=2ex, outer sep=0](#1) {#2};
+}
+\newcommand{\redcell}[2]{
+ \node[minimum width=2.5cm, minimum height=1.0cm,fill=cellred, text=white,text width=2cm, align=center, rounded corners=2ex, outer sep=0](#1) {#2};
+}
+\begin{tikzpicture}
+ \small
+ \matrix[row sep=3mm, column sep=0,inner sep=0, node distance=0, outer sep=5mm] (cells) {
+ & \labelcell{perception}{Perception} & \labelcell{output}{Output} & \\
+ \redcell{user}{User} & \labelcell{sensorimotor}{Sensorimotor loop} & \labelcell{execution}{Execution loop} & \bluecell{system}{System}\\
+ & \labelcell{action}{Action} & \labelcell{input}{Input} & \\
+ };
+ \draw [->, -stealth', thick]
+ (sensorimotor.north east) edge[bend right] (user.north)
+ (user.south) edge[bend right] (sensorimotor.south east)
+ (execution.south west) edge[bend right] (system.south)
+ (system.north) to[bend right] (execution.north west);
+ \draw [ultra thick, draw=black!20, fill=black!50]
+ (perception.north east) edge (action.south east);
+\end{tikzpicture}
+\caption{The similarity of a user and a system interacting with each other.}
+\label{fig:loops}
+\end{figure}
+
+\subsection{Initiative}
+
+In Norman's model, the first stage is the user's goal.
+In this extended model, this means that the user has the initiative, and the system reacts to her actions.
+This is generally a desired property of systems designed with user-centered methods.
+We depict this case in Figure~\ref{fig:extendedaction}-a.
+However we can also imagine the case in which the system has the initiative, as illustrated in Figure~\ref{fig:extendedaction}-b.
+Following Beaudouin-Lafon's interaction paradigms~\cite{mbl04}, the first case is the computer-as-a-tool paradigm.
+The second case corresponds to the computer-as-a-partner paradigm.
+There is a third paradigm named computer-as-a-medium, which corresponds to the case of several users interacting with each other through computers (Figure~\ref{fig:extendedaction}-c).
+In this case the two users have the initiative, and the system reacts to both.
+
+In many cases, interactive systems are not that binary though.
+They sometimes behave like tools, and sometimes like partners.
+There is a continuum between the tool and the partner paradigms.
+Systems in between are what Horvitz calls mixed-initiative user interfaces~\cite{horvitz99}.
+He describes factors to consider for the design of automated systems.
+Most of them are related to issues concerning the inference of users' goals.
+Other studies show that high controllability must be favored over automation accuracy~\cite{roy19}.
+Whether the user or the system has the initiative, we must keep in sight that the overall objective is to empower the user so that she can succeed in performing her activities.
+Designing interactive systems consists in combining the strengths of users and machines to compensate for their weaknesses, in order to empower users.
+
+\begin{idee}
+Regarding related work on AI: while the paper does mention these topics, they are not the focus of this submission. There is certainly more to say on this, but it would require a proper paper. The Intervention UI paradigm~\cite{schmidt17} is relevant to our model though, and will be discussed in the Initiative paragraph, where we already discuss Horvitz's mixed initiatives. It is a form of Beaudouin-Lafon's partner paradigm, and integrates well in our model (Figure 4b).
+\end{idee}
+
+\begin{figure}[htb]
+\centering
+\definecolor{cellred}{rgb} {0.98,0.17,0.15}
+\definecolor{cellblue}{rgb} {0.17,0.60,0.99}
+
+\newcommand{\labelcell}[2]{
+\node[minimum width=1.9cm, minimum height=.8cm,text width=1.7cm, align=center, outer sep=0](#1) {#2};
+}
+\newcommand{\bluecell}[2]{
+ \node[minimum width=2cm, minimum height=.8cm,fill=cellblue, text=white,text width=2cm, align=center, rounded corners=2ex, outer sep=0](#1) {#2};
+}
+\newcommand{\redcell}[2]{
+ \node[minimum width=2cm, minimum height=.8cm,fill=cellred, text=white,text width=2cm, align=center, rounded corners=2ex, outer sep=0](#1) {#2};
+}
+\newcommand{\barcell}[1]{
+ \node[minimum width=4cm, minimum height=1mm,fill=black!10, align=center, outer sep=0](#1) {};
+}
+\begin{tikzpicture}[align=center,outer sep=5mm]
+ \small
+ \matrix[anchor=south, row sep=3mm, column sep=2mm,inner sep=0, node distance=0] (cells) at (0,0.3) {
+ \redcell{user}{User} & \bluecell{system2}{System}\\
+ \barcell{thebar} & \barcell{thebar2}\\
+ \bluecell{system}{System} & \redcell{user2}{User}\\
+ \redcell{user3}{User} & \redcell{user4}{User}\\
+ };
+ \node[anchor=south, minimum width=8.6cm,minimum height=1mm, inner sep=0,fill=black!10, outer sep=0](thebar3) at (0,0.35) {};
+ \node[anchor=north, minimum width=8.6cm, minimum height=.8cm,fill=cellblue, text=white, align=center, rounded corners=2ex, outer sep=0](system3) at (0,0) {System};
+ % fig a
+ \node[anchor=south west] at (-4.9,1.4) {a)};
+ \draw [->, -stealth', thick]
+ (user.west) edge[bend right] ([xshift=-50]thebar.north)
+ ([xshift=-50]thebar.south) edge[bend right] (system.west)
+ (system.east) edge[bend right] ([xshift=50]thebar.south)
+ ([xshift=50]thebar.north) to[bend right] (user.east);
+ %fig b
+ \node[anchor=south west] at (-0.4,1.4) {b)};
+ \draw [->, -stealth', thick]
+ (system2.west) edge[bend right] ([xshift=-50]thebar2.north)
+ ([xshift=-50]thebar2.south) edge[bend right] (user2.west)
+ (user2.east) edge[bend right] ([xshift=50]thebar2.south)
+ ([xshift=50]thebar2.north) to[bend right] (system2.east);
+ %fig c
+ \node[anchor=south west] at (-4.9,0.7) {c)};
+ \draw [->, -stealth', thick]
+ (user3.west) edge[bend right] ([xshift=-110]thebar3.north)
+ ([xshift=-110]thebar3.south) edge ([xshift=-110]system3.north)
+ ([xshift=-5]system3.north) edge ([xshift=-5]thebar3.south)
+ ([xshift=-5]thebar3.north) to[bend right] (user3.east);
+
+ \draw [->, -stealth', thick]
+ (user4.west) edge[bend right] ([xshift=5]thebar3.north)
+ ([xshift=5]thebar3.south) edge ([xshift=5]system3.north)
+ ([xshift=110]system3.north) edge ([xshift=110]thebar3.south)
+ ([xshift=110]thebar3.north) to[bend right] (user4.east);
+
+\end{tikzpicture}
+\caption{Combinations of Norman's seven stages of action~\protect\cite{norman02} with the seven stages of reaction above. The three combinations represent Beaudouin-Lafon's three paradigms~\protect\cite{mbl04}: a) computer-as-a-tool; b) computer-as-a-partner; c) computer-as-a-medium. The agent(s) at the top have the initiative, while the agent at the bottom reacts to the other agent(s).}
+\label{fig:extendedaction}
+\end{figure}
+
+%: user delegates, then the system releases the control.
+
+%The interaction between a user and an interactive system is represented Figure~\ref{fig:extendedaction}.
+%The user's actions are connected to the system sensors of the input devices, and the physical effects produced by the output devices are connected to the user's sensory organs.
+
+\subsection{Computing affordance}
+
+The seven stages of action are closely related to the notion of \emph{affordance}~\cite{gibson77}.
+An affordance is a property between a person and an object that enables this person to perform a set of physical actions on this object.
+%These properties are advertised by \emph{signifiers}.
+For example, the $2\,cm^2$ embossed surface of a button affords pressing it with a finger.
+The way we grasp an interactive device affords different actions we can perform on it~\cite{fellion17}.
+%The way users interact with an object such as an interactive system depends on affordances.
+The combination of affordances, past experiences and other knowledge enables users to create a mental model of an interactive system.
+%The user has a mental model of how the system works, which he created with past experience and knowledge of similar systems.
+%In the best case, the user can confirm or complete his mental model with further exploration through these 7 stages.
+Thanks to this, we can use systems we have never seen before.
+If the system follows standard usability guidelines~\cite{nielsen90}, the user can explore it further to complete her mental model.
+However the user's model can differ from the design model.
+In this situation, the user's actions can lead to results she did not expect.
+%The discrepancy between the user model and the design model reveals usability issues.
+These are generally usability issues that designers should fix.
+
+Similarly, interactive systems get information from the environment and act on it based on their programmed behavior.
+The equivalent of the user's mental model here is a set of assumptions that are crystallized in the system's program.
+It strongly limits what the system can perceive of the world, and the actions it can perform on it.
+%On top of that, what we can observe with the seven stages of reaction is that at each stage the result may differ from what the designer would like to achieve.
+%There can be noise in the sensed signal, differences between the command and the output signal or between the output signal and the physical effect.
+%Sensors may fail at sensing the intended phenomenon, and the physical effects produced by the interactive system can be altered by something in its environment.
+Wegner models interactive computation with input and output streams, which allow interaction machines to interact continuously with their environment~\cite{wegner99}.
+He demonstrated that Turing machines cannot reproduce this behavior, and refuted the strong Church-Turing thesis on this ground~\cite{goldin08}.
+Indeed, computability only concerns the computation of mathematical functions~\cite{turing38}.
+It cannot capture the essence of continuous streams of actions and reactions with an uncontrolled environment.
+Therefore we need a more general notion of “what interactive systems can do” than just computability.
+
+% Therefore we claim that this notion is more general than computability~\cite{turing38}.
+% The notion of computability is essentially algorithmic by nature.
+% One of the essential properties of algorithms is that their output have a specified relation to the inputs~\cite{knuth68}.
+% This property prevents algorithms to explore an open world, with unexpected objects with unknown behavior.
+% Interaction is a better paradigm in such situations, because interactive systems are connected to the outside world with input and output streams~\cite{wegner99}.
+
+We define the notion of \emph{computing affordance} as a behavior an interactive system can have with people or objects in its environment.
+It takes into account both its program and its sensing and actuating capabilities.
+Behaviors can either be desired or not, and can either be permitted or not by the interactive system implementation (Table~\ref{table:computingaffordance}).
+\emph{Existing features} are desired behaviors that the interactive system implementation can produce.
+It means appropriate sensing and actuating chains are working properly, and the program uses them in the intended way.
+\emph{Missing features} are desired behaviors that the interactive system cannot reproduce.
+This can be because they were not implemented, or because the implementation does not behave as intended.
+Indeed there can be noise in the sensed signal, or differences between the command and the output signal, or between the output signal and the physical effect.
+Sensors may fail to sense the intended phenomenon, and the physical effects produced by the interactive system can be altered by something in its environment.
+\emph{Unwanted behaviors} are undesired behaviors that can happen with the interactive system implementation.
+They can be the result of a malfunction, or an unpredicted side effect of another behavior.
+Finally, \emph{not required features} are undesired behaviors that cannot occur with the interactive system implementation.
+
+\begin{idee}
+We will clarify our term “computing affordance”. We make a distinction with the affordances defined by Gibson (and the Norman and Gaver extensions) because systems have interactions with the environment, not only with humans. The overall idea is to bridge theoretical computing models with ours. Like others (\cite{gibson50}, Hornbæk and Oulasvirta discuss more), we extend the Church-Turing thesis beyond the computing of mathematical functions. Our approach is an anthropomorphic vision of systems that interact with an unpredictable environment. We view the system as a combination of hardware and software, which correspond to the human body and cognition. According to Gibson, our understanding of our environment is not a cognitive process alone, but a relation between our body and our cognition. Our model applies this principle to computation. This requires computation to refer to “a behavior an interactive system can have with people or objects in their environment” rather than just the computing of mathematical functions. The term “computing affordance” binds this extended notion of computing with affordance.
+\end{idee}
+
+% there is not there is
+%info perceived false affordance perceived affordance
+%info not perceived correct reject hidden affordance
+
+% there is not there is
+%info perceived feature required existing feature
+%info not perceived feature not required missing feature
+
+\begin{table}[htb]
+\caption{Computing affordances are either desired or not, and the implementation either permits them or not.}
+\label{table:computingaffordance}
+\newcommand{\cell}[1]{
+ \node[minimum width=3cm, minimum height=2cm,draw=black!20,thin,text width=2cm, align=center] {
+ #1
+ };
+}
+\newcommand{\topcell}[2]{
+ \node[minimum width=3cm, minimum height=1cm,draw=black!20,thin,fill=black!10,text width=2cm, align=center](#1) {
+ #2
+ };
+}
+\newcommand{\leftcell}[2]{
+ \node[minimum width=2cm, minimum height=2cm,draw=black!20,thin,fill=black!10,text width=2cm, align=center](#1) {
+ #2
+ };
+}
+\centering
+\begin{tikzpicture}
+ \matrix[row sep=0mm, column sep=0mm,inner sep=0, node distance=0] (cells) {
+ & \topcell{nd}{Not desired} & \topcell{d}{Desired} \\
+ \leftcell{y}{Yes} & \cell{Unwanted behavior} & \cell{Existing feature} \\
+ \leftcell{n}{No} & \cell{Not required feature} & \cell{Missing feature}\\
+ };
+ \node[anchor=south, minimum width=4cm, rotate=90, thick,draw=black!20,fill=black!20] at (n.north west) {Implementation};
+ \node[anchor=south, minimum height=0.6cm, minimum width=6cm, thick, draw=black!20,fill=black!20] at (nd.north east) {Computing affordance};
+\end{tikzpicture}
+\end{table}
+
+%Engineers and designers build systems with the capabilities they desire, following the seven stages of reaction.
+%However the behavior of the system can be different, because of something unpredicted in the environment, or just because of a software or hardware malfunction.
+
+
+\subsection{Evolution}
+
+The discrepancy between the system behavior and the design model is complementary to the discrepancy between the user's model and the designer's model revealed by Norman's model.
+They both contribute to the evolution of the interactive system.
+
+There are two scales of evolution for humans.
+The first one is the evolution of mankind as a species.
+Every generation evolves thanks to genetics, society and culture, to name a few factors.
+The second one is the evolution of each individual during their whole life.
+They learn about the world, but they also train their capacities.
+Even though there is of course a limit to this training, it is a key capability that enables people to adapt to their environment.
+
+Interactive systems can evolve at similar scales.
+They first evolve through software and hardware updates.
+These updates remove unwanted behaviors, and implement or fix missing features.
+Following principles such as reification, polymorphism and reuse facilitates such evolutions~\cite{mbl00a}.
+The set of desired behaviors evolves as well.
+First, users adapt to the interactive system's behavior and get used to it.
+Second, practice stimulates new ideas of desired behaviors.
+Therefore users and systems evolve together.
+This phenomenon is known as coevolution~\cite{mackay90}.
+While training makes users interact more efficiently, the benefits are greater when interaction techniques support training~\cite{cockburn14}.
+
+The evolution of interactive systems during their execution is still in its infancy.
+For example, Pharo applications support the modification and debugging of their own code during their execution~\cite{black10}.
+Neural networks enable programs to evolve their behavior with supervised or unsupervised learning~\cite{mcculloch43}.
+Recent advances in deep learning have facilitated this learning phase, making the technique more practical~\cite{lecun15}.
+For example, robots can learn how to interact with an object through a curiosity behavior~\cite{laversannefinot18}.
+
+The validation of the behavior of an interactive system is an essential part of the evolution process.
+Thanks to it, behaviors can be categorized according to Table~\ref{table:computingaffordance}.
+There are several tools to prove programs with rational methods~\cite{chlipala13}.
+With these tools we can verify that the code is a correct implementation of an algorithm.
+Proving the behavior of neural networks with rational methods is a current challenge in machine learning.
+Like interactive systems, they seem better suited to empirical evaluations.
+
+\section{Conclusion}
+
+Designing and implementing an interactive system is hard because it connects sensory, cognitive, software and hardware components.
+Mismatches between intended and actual behaviors can happen at any stage of the process.
+We presented a model of interactive systems behavior.
+It is an adaptation of Norman's action theory, which makes it easier to combine both models to describe the full phenomenon.
+
+
+Depending on their background, readers will find different things in this chapter.
+Experts in human factors will get an insight into the implementation aspects of interactive devices.
+%It must not limit their creativity,
+Engineers will get a better insight into how implementation issues affect usability.
+It also provides a checklist for designers, makers and practitioners when they design, implement, upgrade or fix interactive systems.
+
+In future work, we will generalize the notion of computability.
+We conjecture that interactive systems suffer from an incompleteness phenomenon similar to that of formal systems~\cite{godel31}.
+
+\begin{idee}
+Our model certainly has limitations. It does not take into account sociological aspects, and more generally it does not discuss the design process (the Rode ref is awesome by the way). In its current form it focuses on functional and engineering aspects. We are nevertheless interested in the question of the design and evolution of interactive systems. It definitely involves more than engineering matters, and \cite{huot13} is an interesting first answer. Our long term objective is to study the incompleteness nature of interactive systems, and it involves addressing these questions. However we consider this subject out of the scope of the current submission.
+\end{idee}
+
+\section{Descriptive Case study (Printgets)}
+\begin{idee}
+The objectives and purpose of our model will be much clearer with an additional case study. We propose describing a vibrotactile system with capacitive input, within our framework. We will describe how implementation steps fit in our model: calibration, interpolation, filtering, thresholds, hysteresis, event fusion, output models, signal computing and physical response to command.
+\end{idee}
+\cite{frisson17}
+
+\section{Evaluative Case study (Latency)}
+\cite{casiez17}
--- /dev/null
+%!TEX root = ../hdrmain.tex
+
+\chapter{Extending gestural interaction}
+ \epigraph{\lorem}{Auteur}
+
+\begin{Abstract}
+\loremipsum
+\end{Abstract}
+
+\section{New interaction paradigm (Summon interactions)}
+\cite{gupta17}
+
+\section{Pointing in Virtual Reality (Raycursor)}
+\cite{baloup19,baloup18,baloup19a}
+
+\section{New degrees of freedom for pen interaction (Flexstylus)}
+\cite{fellion17}
--- /dev/null
+%!TEX root = ../hdrmain.tex
+
+\chapter{Leveraging haptic feedback}
+ \epigraph{\lorem}{Auteur}
+
+\begin{Abstract}
+\loremipsum
+\end{Abstract}
+
+\section{Extending direct manipulation (Tactile direct manipulation)}
+\cite{gupta16,gupta16a}
+
+\section{Haptic feedback for activity monitoring (Activibe)}
+\cite{cauchard16}
+
+\section{Contribution of haptic to sense of embodiment}
--- /dev/null
+%!TEX root = ../hdrmain.tex
+
+\chapter{Extending interactive devices}
+ \epigraph{\lorem}{Auteur}
+
+\begin{Abstract}
+\loremipsum
+\end{Abstract}
+
+\section{(Metamorphe)}
+\cite{bailly13,bailly13a}
+
+\section{(CtrlMouse)}
+\cite{pietrzak14}
+
+\section{(Living Desktop)}
+\cite{bailly16}
--- /dev/null
+%!TEX root = ../hdrmain.tex
+
+\chapter{Interaction techniques and devices}
+ \epigraph{\lorem}{Auteur}
+
+\begin{Abstract}
+\loremipsum
+\end{Abstract}
+
+The design of interactive systems requires deep knowledge of both users and computers.
+Deeper than this, it requires understanding how the user and the interactive system interact with each other.
+
+In his \defword{theory of action}, Norman presents his \defword{seven stages of action}, which describe how humans interact with the world~\cite{norman88,norman02}.
+This model is depicted in Figure~\ref{fig:sevenstages}.
+The subject starts by establishing a \emph{goal}, which is a high-level objective to achieve.
+The purpose of every system is to enable achieving such goals.
+As an example to illustrate the concept, let me introduce Jimmy, who just returned home after the Christmas holidays.
+His house is cold, and he needs to raise the temperature.
+To achieve this goal he has to formulate \emph{intentions}, such as turning the radiator on.
+This requires \emph{specifying}, then \emph{executing} actions, like deciding how much to turn the knob, then turning it.
+This task is not as trivial as it seems.
+First of all, Jimmy has to identify the possible actions on the radiator, like turning the knob.
+This is what Gibson called an \defword{affordance}~\cite{gibson77}.
+%Objects can have hidden or false affordances.
+%This is the case for a microwave in a flat I rent once, with only one button.
+%I never figured out how to setup time on the display.
+The knob has notches ranging from 0 to 11 painted on its side.
+They provide \defword{perceived affordances} (also called \defword{signifiers}~\cite{norman08}) that help Jimmy know in which direction, and how much, to turn it.
+Now, Jimmy wants to get warm faster.
+He therefore turns the knob all the way up to 11, thinking that the radiator will get hotter.
+This is a mistake: the radiator is either on or off, and (in short) the knob only defines a threshold for switching between these two states.
+It means that the radiator cannot make the room warmer any faster.
+This illustrates the issues caused by a difference between the designer's model and the user's model.
+An alternative way of conveying an appropriate signifier consists in displaying the target temperature rather than an arbitrary 0-11 scale.
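+
+In control terms, and assuming the knob position maps to a temperature setpoint $T_s$, such a radiator follows a bang-bang rule with a hysteresis margin $h$:
+\[
+\mathit{heating} =
+\left\{
+\begin{array}{ll}
+\mbox{on} & \mbox{if } T < T_s - h\\
+\mbox{off} & \mbox{if } T > T_s + h
+\end{array}
+\right.
+\]
+Turning the knob up to 11 raises $T_s$, not the heating power, which is why the room does not warm up any faster.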
+
+\begin{figure}[htb]
+\centering
+\includegraphics[width=.6\textwidth]{sevenstages}
+\caption[Seven stages of action]{Norman's seven stages of action~\cite{norman02}.}
+\label{fig:sevenstages}
+\end{figure}
+
+Jimmy's actions have effects on the world, which he can eventually \emph{perceive}.
+In this case he feels the room getting warmer.
+This is an \emph{interpretation} of the results of his actions.
+It is based on the room temperature he perceives, as well as experience, knowledge, personal taste, etc.
+%It is unlikely he notices that the room could not get hotter faster.
+% However at
+At some point he will notice that the room is too hot, because he set the knob to the maximum value.
+This is an \emph{evaluation} of the temperature he perceived.
+His new goal is to lower the temperature, keeping in mind the initial state.
+
+These seven stages of action describe how a person interacts with objects in the world.
+They shed light on issues that can happen at each of these stages.
+On the action side, the difference between the person's goal and the executed action is called the gulf of execution.
+It represents pitfalls that can make it more difficult for the person to perform the right actions to achieve his goal.
+On the perception side, the difference between the perceived stimulus and its evaluation is called the gulf of evaluation.
+It illustrates the trouble the person can have correctly interpreting the state of the world, in particular as a result of his actions.
+A correct design of technology can reduce or bridge such gulfs.
+
+Now, we can observe that interactive systems can also have difficulty understanding or acting on the world in an intended way.
+I present below an adaptation of Norman's seven stages to the interactive system side.
+It highlights engineering and design problems of interactive systems, which will be useful for identifying challenges for their design and implementation.
+
+\section{Seven stages of interactive computation}
+
+Interactive systems globally work in a similar way to humans.
+They get information from the world, interpret it, and act on the world in return.
+%Despite the tremendous progress of computers the past decades, they can still process a tiny part of
+%\loremipsum
+We therefore define the \defword{seven stages of interactive computation} based on Norman's seven stages of action.
+It is illustrated in Figure~\ref{fig:mysevenstages}.
+
+\begin{figure}[htb]
+\centering
+\includegraphics[width=.6\textwidth]{mysevenstages}
+\caption[Seven stages of interactive computation]{The seven stages of interactive computation, adapted from Norman's seven stages of action.}
+\label{fig:mysevenstages}
+\end{figure}
+
+\subsection{Description}
+
+The input chain begins with the \emph{sensing} stage.
+Physical sensors measure physical properties.
+Typically they measure the user's movements, but they can sense various other information such as light, temperature, moisture, vibrations…
+All such information is transformed into \emph{input events}.
+At this stage we notice that the infinite richness of the world is reduced to a small number of bits.
+Let's discuss the simple example of a keypress on a keyboard.
+The only information in the digital world is whether the key is pressed or not. There is no information about the finger that pressed it, the speed of the finger or its trajectory.
+There is no way to know whether a finger is hovering over the key, whether several fingers are pressing it, or even whether it was pressed with the nose.
+Input events have to be treated as tokens, or lexical units.
+They are interpreted as \emph{input phrases} with grammars or finite automata~\cite{appert06} and form the building blocks of \defword{interaction techniques}~\cite{nigay93}.
+
+The \emph{software} part is a meta-algorithm that chooses whether there are algorithms to run, which algorithms to run, and with which parameters.
+This typically occurs when a command is selected, or when an object is being manipulated.
+
+Every computer system has at some point an effect on the world.
+We extend the notion of dead code to cover not only code that is never executed, but also code whose result has no effect whatsoever on the physical world.
+Even a routing algorithm will at some point transmit data to a computer which will display or print it in some way.
+% Quantic information : stored in computer memory. Will it be observed?
+Bringing a piece of information to the physical world requires several steps.
+First of all the system must \emph{encode} the piece of information.
+A visual encoding can be an icon, or a text.
+Audio encodings can be sounds~\cite{gaver93} or melodies~\cite{brewster93}.
+Various haptic encodings include vibrations~\cite{brewster04}, forces~\cite{maclean03a,pietrzak05b,pietrzak05} or textures~\cite{pietrzak09,pietrzak06,potier16}.
+%Effectors can produce light (like screens), sounds, vibrations, forces, …
+Output devices have driving electronics which require specific \emph{commands}, and turn them into \emph{physical effects}.
+These can be light (like screens), sounds, vibrations, forces, …
+
+
+With this model, we demonstrate that solving a problem with an interactive system is not only a matter of algorithmic computation.
+The encoding of the world's events, and the effects on the world resulting from the algorithms' output, are subject to non-trivial issues.
+Algorithms are therefore prisoners of a digital Plato's cave.
+They can only handle shadows of the physical world, under the light of input and output streams.
+Therefore, addressing interaction problems requires a broader design than just algorithms.
+It requires identifying the information the system needs, and how to convey a result efficiently.
+Norman's gulf of evaluation and gulf of execution are mirrored with a \defword{funnel of evaluation} and a \defword{funnel of execution}.
+
+\paragraph{Funnel of evaluation}
+
+The funnel of evaluation depicts the fact that the input stages reduce the world into a few bits.
+A good design of the input chain senses the right phenomena, at an appropriate amplitude, with a sufficient spatial and temporal resolution, and with little distortion.
+This information must be combined correctly to form a meaningful sequence of actions.
+For example, the \emph{Midas touch} problem is a usual issue with 3D gestural interaction.
+Since the sensors observe all the user's movements, there is no obvious segmentation.
+The system has no way to know whether you move your hand to interact with it, or to scratch your nose.
+Conversely, occlusion is the other problem with vision-based gesture sensors.
+The sensor cannot get position information for hidden objects.
+In these situations, we can grow the funnel of evaluation by adding segmentation gestures, or by using multiple cameras.
+
+\paragraph{Funnel of execution}
+
+The funnel of execution is symmetrical.
+The software part of the system keeps a model~\cite{reenskaug79a} (or abstraction~\cite{coutaz87}), and it has to display parts of it in a way that is intelligible to users.
+The way data is shown to the user can have a huge impact on how he interacts with it~\cite{zhang94}.
+Therefore the encoding part is crucial: it is a first filter for reducing the internal complexity of the system.
+The specifications of the output device are a second filter.
+There is a limit to the force, color, brightness, frequency, etc. each device can produce in theory.
+There is also a limit of precision, which greatly depends on amplifiers and digital-to-analog converters (DAC).
+Last, the physical effect can be inconsistent for the same command.
+Some haptic devices behave differently depending on ambient temperature, finger moisture, cleanliness, etc.
+
+
+\subsection{The user and the system}
+
+
+The interaction between a user and an interactive system is represented in Figure~\ref{fig:extendedaction}.
+The user's actions are connected to the sensors of the input devices, and the physical effects produced by the output devices are connected to the user's sensory organs.
+We can improve interaction not only by studying the two models separately, but also by studying their connections.
+This is typically what we study in HCI when we design new input devices with original sensing technologies~\cite{fellion17}, but also when we design new output devices with cutting-edge actuators~\cite{frisson17,potier12,potier16}.
+
+\begin{figure}[htb]
+\centering
+\includegraphics[width=.7\textwidth]{extendedaction}
+\caption[Fourteen stages of Human-Computer Interaction]{Norman's seven stages of action~\cite{norman02}, extended with the system side of the mirror.}
+\label{fig:extendedaction}
+\end{figure}
+
+While these two models are similar, there are fundamental differences. We discuss the running loops they induce, and the implications of these two models for design.
+
+\paragraph{Loops}
+
+Let's observe these two models in action through two situations.
+In the first situation, a user is working in a word processor, has already selected text, and wants to format it in bold.
+Through the execution phase of Norman's model, the user will decide and perform an appropriate interaction, for example clicking on a toolbar button.
+In the new model, the mouse will detect a movement then a click.
+The system will detect which button is pressed and trigger the relevant action.
+This will change the internal model of the document, which will in turn change the display of the selected text on the screen.
+The user will see the changes and acknowledge that he reached his initial goal.
+That was an easy case, but it illustrates the whole pipeline.
+
+Now, let's consider a second, and more interesting, situation.
+The user is working in a picture editing program, and would like to adjust the exposure.
+Contrary to the previous example, there is no specified output.
+The user just knows the picture is currently too dark.
+He wants it brighter so that more details will be visible.
+But at the same time the dark parts of the picture must remain somewhat dark.
+There is no objective measure of the result, and the user will probably adjust his criteria while performing the task.
+The user moves the cursor of the exposure slider.
+As the cursor moves, the picture is continuously adjusted with the current exposure value.
+Therefore the user adjusts the exposure value continuously depending on the result.
+We observe several important elements.
+First of all, the continuous perception/action cycle reflects the notion of the \defword{sensorimotor loop}~\cite{oregan01a}.
+This cycle is efficient for interaction because for each adjustment, the user knows both his action and its result.
+It enables fast and incremental adjustments, which enable \defword{direct manipulation}~\cite{schneiderman83}.
+This will be discussed further in the next chapter.
+Second, we observe a similar loop between sensed inputs and produced outputs.
+This loop corresponds to interaction techniques, which are the focus of this chapter.
+The algorithmic complexity of the software part is a key element for enabling direct manipulation.
+Beyond complexity, the software part must run in real time.
+The time constraint is less than $16\,ms$\footnote{This estimation is based on a $60\,Hz$ display rate. Haptic displays require a $1000\,Hz$ loop, therefore a $1\,ms$ limit.}.
+In the exposure adjustment scenario, computing the output on the full-size image might be too slow.
+This is why the algorithm is computed on a thumbnail, or on a cropped view of the picture.
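+
+A minimal sketch of this interaction loop, assuming a thumbnail-based preview:
+
+\begin{algorithm}[htb]
+\SetAlgoLined
+\caption{Sketch of a continuous exposure adjustment}
+\While{the slider is being dragged}{
+ read the slider position\;
+ map the position to an exposure value\;
+ apply the exposure to the thumbnail and display it\;
+}
+apply the exposure to the full-size picture\;
+\end{algorithm}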
+
+In any case, identifying the limiting factor in terms of speed in the interaction cycle is essential for designing fast and incremental actions.
+User training can reduce interaction time.
+It requires interaction techniques that support training efficiently~\cite{cockburn14}.
+The limiting factors of the system reside in the input chain, the output chain and the software part.
+
+%\cite{vermeulen13}
+
+\paragraph{Implications for design}
+
+While these two models are similar, we must be careful with their usage.
+On the one hand, the seven stages of action explain how people interact with objects of the world, in particular interactive systems.
+They exhibit a gulf of execution and a gulf of evaluation, which designers try to reduce or bridge with a better design of the objects of the world.
+On the other hand, the seven stages of interactive computation describe how interactive systems interact with the world, in particular their users.
+We discussed the funnel of evaluation and the funnel of execution, but designers should not aim at expanding them by expecting a different behavior and understanding from the user.
+The behavior of users will inevitably change to adapt to the evolution of technology.
+This is known as the co-evolution phenomenon~\cite{mackay90}.
+However this is not our focus.
+The design objectives related to this model are:
+
+\begin{itemize}
+ \item Using existing input and output technologies more efficiently.
+ \item Expanding the input and output vocabulary.
+\end{itemize}
+
+\begin{idee}
+Introduce projects below
+\end{idee}
+
+
+% \begin{figure}[htb]
+% \centering
+% \includegraphics[width=.4\textwidth]{modeldifference}
+% \caption{}
+% \label{fig:modeldifference}
+% \end{figure}
+
+%Vibrotactile widgets~\cite{frisson17}
+
+\begin{idee}Use existing input and output technologies more efficiently.\end{idee}
+
+\section{Investigating input: latency measurement}
+
+Direct manipulation interactions require fast and incremental actions.
+
+
+Lagmeter \cite{casiez17}
+
+\section{Understanding touch: tactile textures}
+
+Friction textures \cite{potier12,potier16}
+
+
+\begin{idee}Expand the input and output vocabulary.\end{idee}
+
+
+\section{Exploring new input vocabulary: flexible pen}
+
+FlexStylus \cite{fellion17}
+
+\section{Encoding information: tactile cues for notifications}
+
+Previous work on encoding information with haptic cues: \cite{pietrzak05,pietrzak06,pietrzak09}
+
+Activibe \cite{cauchard16}
\ No newline at end of file
--- /dev/null
+%!TEX root = ../hdrmain.tex
+
+\chapter{Related Work}
+ \epigraph{Education isn't something you can finish.}{Isaac Asimov}
--- /dev/null
+%!TEX root = ../hdrmain.tex
+
+\chapter{Something}
+ \epigraph{\lorem}{Auteur}
+
+\begin{Abstract}
+\loremipsum
+\end{Abstract}
+
+\section{Lagmeter}
+\cite{casiez17}
+
+Haptic latency
+++ /dev/null
-%!TEX root = ../hdrmain.tex
-
-\chapter{Interaction techniques and devices}
- \epigraph{\lorem}{Auteur}
-
-\begin{Abstract}
-\loremipsum
-\end{Abstract}
-
-The design of interactive systems requires deep knowledge of both users and computers.
-Deeper than this, it requires understanding how the user and the interactive system interact with each other.
-
-In his \defword{theory of action}, Norman presents his \defword{seven stages of action}, which describe how humans interact with the world~\cite{norman88,norman02}.
-This model is depicted in Figure~\ref{fig:sevenstages}.
-The subject starts with establishing a \emph{goal}, which is a high level objective to achieve.
-The purpose of every system is to enable achieving such goals.
-As an example to illustrate the concept, let me introduce Jimmy, who has just returned home after the Christmas holidays.
-His house is cold, and he needs to raise the temperature.
-To achieve this goal he has to formulate \emph{intentions}, such as turning the radiator on.
-This requires \emph{specifying}, then \emph{executing} actions: deciding how much to turn the knob, then actually turning it.
-This task is not as trivial as it seems.
-First of all, Jimmy has to identify the possible actions the radiator offers, like turning the knob.
-This is what Gibson called an \defword{affordance}~\cite{gibson77}.
-%Objects can have hidden or false affordances.
-%This is the case for a microwave in a flat I rent once, with only one button.
-%I never figured out how to setup time on the display.
-The knob has notches ranging from 0 to 11 painted on the sides.
-They provide \defword{perceived affordances} (also called \defword{signifiers}~\cite{norman08}) that help Jimmy know in which direction, and how much, to turn it.
-Now, Jimmy wants to get warm faster.
-Therefore he turns the knob all the way up to 11, thinking that the radiator will heat more.
-This is a mistake, because the radiator is either on or off, and (in short) the knob only defines a threshold for switching between these two states.
-It means that the radiator cannot make the room warm up any faster.
-This illustrates the issues caused by a mismatch between the designer's model and the user's model.
-An alternative way of conveying an appropriate signifier is to display the target temperature rather than arbitrary 0--11 numbers.
-
-\begin{figure}[htb]
-\centering
-\includegraphics[width=.6\textwidth]{sevenstages}
-\caption[Seven stages of action]{Norman's seven stages of action~\cite{norman02}.}
-\label{fig:sevenstages}
-\end{figure}
-
-Jimmy's actions have effects on the world that he can eventually \emph{perceive}.
-In this case he feels the room getting warmer.
-This is an \emph{interpretation} of the results of his actions.
-It is based on the room temperature he perceives, as well as on experience, knowledge, personal taste, etc.
-%It is unlikely he notices that the room could not get hotter faster.
-% However at
-At some point he will notice that the room is too hot because he set the knob to its maximum value.
-This is an \emph{evaluation} of the temperature he perceived.
-His new goal is to lower the temperature, keeping in mind the initial state.
-
-These seven stages of action describe how a person interacts with objects in the world.
-They shed light on issues that can arise at each of these stages.
-On the action side, the difference between the person's goal and the executed action is called the gulf of execution.
-It represents the pitfalls that can make it more difficult for the person to perform the right actions to achieve the goal.
-On the perception side, the difference between the perceived stimulus and its evaluation is called the gulf of evaluation.
-It illustrates the trouble the person can have in correctly interpreting the state of the world, in particular as a result of their actions.
-Careful design of technology can reduce or bridge such gulfs.
-
-Now, we can observe that interactive systems can also have difficulty understanding the world, or acting on it in the intended way.
-I present below an adaptation of Norman's seven stages to the interactive system side.
-It highlights engineering and design problems of interactive systems, which will be useful for identifying challenges for their design and implementation.
-
-\section{Seven stages of interactive computation}
-
-Interactive systems broadly work in a similar way to humans.
-They get information from the world, interpret it, and act on the world in return.
-%Despite the tremendous progress of computers the past decades, they can still process a tiny part of
-%\loremipsum
-We therefore define the \defword{seven stages of interactive computation} based on Norman's seven stages of action.
-They are illustrated in Figure~\ref{fig:mysevenstages}.
-
-\begin{figure}[htb]
-\centering
-\includegraphics[width=.6\textwidth]{mysevenstages}
-\caption[Seven stages of interactive computation]{The seven stages of interactive computation, adapted from Norman's seven stages of action.}
-\label{fig:mysevenstages}
-\end{figure}
-
-\subsection{Description}
-
-The input chain begins with the \emph{sensing} stage.
-Physical sensors measure physical properties.
-Typically they measure the user's movements, but they can also capture various other information such as light, temperature, moisture or vibrations.
-All such information is transformed into \emph{input events}.
-At this stage we notice that the infinite richness of the world is reduced to a small number of bits.
-Let's discuss the simple example of a keypress on a keyboard.
-The only information in the digital world is whether a key is pressed or not. There is no information about the finger that pressed it, the speed of the finger or its trajectory.
-There is no way to know whether a finger is hovering over the key, whether several fingers are pressing it, or even whether it was pressed with the nose.
-Input events have to be treated as tokens, or lexical units.
-They are interpreted as \emph{input phrases} with grammars or finite automata~\cite{appert06}, and form the building blocks of \defword{interaction techniques}~\cite{nigay93}.
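-
-As a minimal sketch of such an automaton (illustrative names, not taken from a specific toolkit), here is a recognizer that groups press/move/release tokens into a click or drag phrase:
-\begin{verbatim}
-def recognize(events, threshold=5):
-    """events: sequence of ('press'|'move'|'release', x, y) tuples.
-    Returns 'click' or 'drag' once the phrase is complete."""
-    state = "idle"
-    for kind, x, y in events:
-        if state == "idle" and kind == "press":
-            state, ox, oy = "pressed", x, y
-        elif state == "pressed" and kind == "move":
-            if abs(x - ox) + abs(y - oy) > threshold:
-                state = "dragging"
-        elif state in ("pressed", "dragging") and kind == "release":
-            return "drag" if state == "dragging" else "click"
-    return None
-
-print(recognize([("press", 0, 0), ("move", 10, 3), ("release", 10, 3)]))
-\end{verbatim}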
-
-The \emph{software} part is a meta-algorithm that decides whether an algorithm should run, which one, and with which parameters.
-This typically happens when a command is selected, or when an object is being manipulated.
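-
-A sketch of this dispatching role (hypothetical handler names, for illustration only): the recognized input phrase selects which algorithm runs and with which parameters.
-\begin{verbatim}
-HANDLERS = {
-    "drag":  lambda dx, dy: print(f"move object by ({dx}, {dy})"),
-    "click": lambda dx, dy: print("select object under cursor"),
-}
-
-def dispatch(phrase, dx=0, dy=0):
-    handler = HANDLERS.get(phrase)
-    if handler is not None:   # unknown phrases run nothing at all
-        handler(dx, dy)
-
-dispatch("drag", 10, 3)
-\end{verbatim}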
-
-Every computer system has at some point an effect on the world.
-We extend the notion of dead code to cover not only code that is never executed, but also code whose result has no effect whatsoever on the physical world.
-Even a routing algorithm will at some point transmit data to a computer that will display or print it in some way.
-% Quantic information : stored in computer memory. Will it be observed?
-Bringing a piece of information to the physical world requires several steps.
-First of all, the system must \emph{encode} the piece of information.
-A visual encoding can be an icon or a text.
-Audio encodings can be sounds~\cite{gaver93} or melodies~\cite{brewster93}.
-Various haptic encodings include vibrations~\cite{brewster04}, forces~\cite{maclean03a,pietrzak05b,pietrzak05} or textures~\cite{pietrzak09,pietrzak06,potier16}.
-%Effectors can produce light (like screens), sounds, vibrations, forces, …
-Output devices have driving electronics which require specific \emph{commands}, and which turn them into \emph{physical effects}.
-These can be light (as with screens), sounds, vibrations, forces, and so on.
-
-
-With this model, we demonstrate that solving a problem with an interactive system is not only a matter of algorithmic computation.
-The encoding of the world's events, and the effects that the algorithms' output has on the world, raise non-trivial issues.
-Algorithms are therefore prisoners of a digital Plato's cave.
-They can only handle shadows of the physical world, under the light of input and output streams.
-Therefore, addressing interaction problems requires broader design than algorithms alone.
-It requires identifying the information the system needs, and how to convey results efficiently.
-Norman's gulf of evaluation and gulf of execution are mirrored by a \defword{funnel of evaluation} and a \defword{funnel of execution}.
-
-\paragraph{Funnel of evaluation}
-
-The funnel of evaluation depicts the fact that the input stages reduce the world into a few bits.
-A good design of the input chain senses the right phenomena, at an appropriate amplitude, with sufficient spatial and temporal resolution, and with little distortion.
-This information must be combined correctly to form meaningful sequences of actions.
-For example, the \emph{Midas touch} problem is a common issue with 3D gestural interaction.
-Since the sensor observes all of the user's movements, there is no obvious segmentation.
-The system has no way to know whether you move your hand to interact with it, or to scratch your nose.
-Conversely, occlusion is another problem with vision-based gesture sensors.
-The sensor cannot get position information for hidden objects.
-In these situations, we can widen the funnel of evaluation by adding segmentation gestures, or by using multiple cameras.
-
-\paragraph{Funnel of execution}
-
-The funnel of execution is symmetrical.
-The software part of the system maintains a model~\cite{reenskaug79a} (or abstraction~\cite{coutaz87}), and it has to display parts of it in a way that is intelligible to users.
-The way data is shown to the user can have a huge impact on how they interact with it~\cite{zhang94}.
-Therefore the encoding part is crucial, and it is a first filter that reduces the internal complexity of the system.
-The specifications of the output device are a second filter.
-Each device has theoretical limits on the force, color, brightness, frequency, etc. it can produce.
-There is also a limit on precision, which greatly depends on the amplifiers and digital-to-analog converters (DACs).
-Finally, the physical effect can be inconsistent for the same command.
-Some haptic devices behave differently depending on ambient temperature, finger moisture, cleanliness, etc.
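-
-As a concrete illustration with hypothetical numbers: a 12-bit DAC driving an actuator over a $0$--$5\,N$ force range can only produce commands in steps of
-\[
-\Delta F = \frac{5\,N}{2^{12}} \approx 1.2\,mN,
-\]
-and any force requested between two steps is rounded to the nearest one.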
-
-
-\subsection{The user and the system}
-
-
-The interaction between a user and an interactive system is represented in Figure~\ref{fig:extendedaction}.
-The user's actions are captured by the sensors of the input devices, and the physical effects produced by the output devices are perceived by the user's sensory organs.
-We can improve interaction not only by studying the two models separately, but also by studying their connections.
-This is typically what we study in HCI as we design new input devices with original sensing technologies~\cite{fellion17}, but also when we design new output devices with cutting-edge actuators~\cite{frisson17,potier12,potier16}.
-
-\begin{figure}[htb]
-\centering
-\includegraphics[width=.7\textwidth]{extendedaction}
-\caption[Fourteen stages of Human-Computer Interaction]{Norman's seven stages of action~\cite{norman02}, extended with the system side of the mirror.}
-\label{fig:extendedaction}
-\end{figure}
-
-While these two models are similar, there are fundamental differences. We discuss the interaction loops they induce, and their implications for design.
-
-\paragraph{Loops}
-
-Let's observe these two models in action through two situations.
-In the first situation, a user of a word processor has already selected text, and wants to format it in bold.
-Through the execution phase of Norman's model, the user will decide on and perform an appropriate interaction, for example clicking on a toolbar button.
-In the new model, the mouse will detect a movement, then a click.
-The system will determine which button was pressed and trigger the corresponding action.
-This will change the internal model of the document, which will in turn change the display of the selected text on the screen.
-The user will see the changes and recognize that the initial goal has been reached.
-That was an easy case, but it illustrates the whole pipeline.
-
-Now, let's consider a second, more interesting, situation.
-The user is using a picture editing program, and would like to adjust the exposure.
-Contrary to the previous example, there is no specified output.
-The user just knows the picture is currently too dark.
-He wants it lighter so that more details become visible.
-But at the same time the dark parts of the picture must remain somewhat dark.
-There is no objective measure of the result, and the user will probably adjust his criteria while performing the task.
-The user moves the cursor of the exposure slider.
-As the cursor moves, the picture is continuously adjusted with the current exposure value.
-Therefore the user continuously adjusts the exposure value depending on the result.
-We observe several important elements.
-First of all, the continuous perception/action cycle reflects the notion of the \defword{sensorimotor loop}~\cite{oregan01a}.
-This cycle is efficient for interaction because, for each adjustment, the user knows both the action and its result.
-It allows fast and incremental adjustments, which enables \defword{direct manipulation}~\cite{schneiderman83}.
-This will be discussed further in the next chapter.
-Second, we observe a similar loop between sensed inputs and produced outputs.
-This loop corresponds to interaction techniques, which is the focus of this chapter.
-The algorithmic complexity of the software part is a key element for enabling direct manipulation.
-Beyond complexity, the software part must run in real time.
-The time constraint is less than $16\,ms$\footnote{This estimate is based on a $60\,Hz$ display rate. Haptic displays require a $1000\,Hz$ loop, hence a $1\,ms$ limit.}.
-In the exposure adjustment scenario, computing the output on the full-size image might be too slow.
-This is why the algorithm is run on a thumbnail, or on a cropped view of the picture.
-
-In any case, identifying the factor that limits the speed of the interaction cycle is essential for designing fast and incremental actions.
-User training can also reduce interaction time, provided the interaction techniques support training efficiently~\cite{cockburn14}.
-On the system side, the limiting factors reside in the input chain, the output chain and the software part.
-
-%\cite{vermeulen13}
-
-\paragraph{Implications for design}
-
-While these two models are similar, we must be careful about how we use them.
-On the one hand, the seven stages of action explain how people interact with objects of the world, in particular interactive systems.
-They exhibit a gulf of execution and a gulf of evaluation, which designers try to reduce, or bridge, through better design of these objects.
-On the other hand, the seven stages of interactive computation describe how interactive systems interact with the world, in particular their users.
-We discussed the funnel of evaluation and the funnel of execution, but designers should not aim at widening them by expecting different behavior or understanding from the user.
-User behavior will inevitably adapt to the evolution of technology, a phenomenon known as co-evolution~\cite{mackay90}, but this is not our focus.
-The design objectives related to this model are:
-
-\begin{itemize}
- \item Using existing input and output technologies more efficiently.
- \item Expanding the input and output vocabulary.
-\end{itemize}
-
-\begin{idee}
-Introduce projects below
-\end{idee}
-
-
-% \begin{figure}[htb]
-% \centering
-% \includegraphics[width=.4\textwidth]{modeldifference}
-% \caption{}
-% \label{fig:modeldifference}
-% \end{figure}
-
-%Vibrotactile widgets~\cite{frisson17}
-
-\begin{idee}Use existing input and output technologies more efficiently.\end{idee}
-
-\section{Investigating input: latency measurement}
-
-Direct manipulation interactions require fast and incremental actions.
-
-
-Lagmeter \cite{casiez17}
-
-\section{Understanding touch: tactile textures}
-
-Friction textures \cite{potier12,potier16}
-
-
-\begin{idee}Expand the input and output vocabulary.\end{idee}
-
-
-\section{Exploring new input vocabulary: flexible pen}
-
-FlexStylus \cite{fellion17}
-
-\section{Encoding information: tactile cues for notifications}
-
-Previous work on encoding information with haptic cues: \cite{pietrzak05,pietrzak06,pietrzak09}
-
-Activibe \cite{cauchard16}
\ No newline at end of file
+++ /dev/null
-%!TEX root = ../hdrmain.tex
-
-\chapter{Related Work}
- \epigraph{Education isn't something you can finish.}{Isaac Asimov}
+++ /dev/null
-%!TEX root = ../hdrmain.tex
-
-\chapter{Something}
- \epigraph{\lorem}{Auteur}
-
-\begin{Abstract}
-\loremipsum
-\end{Abstract}
-
-\section{Lagmeter}
-\cite{casiez17}
-
-Haptic latency
--- /dev/null
+%!TEX root = ../hdrmain.tex
+
+\chapter{Evolutions}
+ \epigraph{\lorem}{Auteur}
+
+\begin{Abstract}
+\loremipsum
+\end{Abstract}
+
+
+\section{Extending the input vocabulary}
+
+\subsection{FlexStylus: a flexible digital pen}
+
+FlexStylus \cite{fellion17}
+
+\subsection{RayCursor: a 3D pointing technique in VR}
+
+RayCursor \cite{baloup19}
+
+\subsection{FingerCuts: leveraging finger identification for multi-touch interaction}
+
+FingerCuts \cite{goguey14,goguey14a,goguey17}
+
+
+\section{Leveraging haptic feedback}
+
+\subsection{Activibe: vibrotactile feedback for activity monitoring}
+
+Activibe~\cite{cauchard16}
+
+\subsection{Tactile textures with programmable friction}
+
+Tactile Textures~\cite{potier12,potier16}
+
+\subsection{Vibrotactile widgets}
+
+Vibrotactile widgets~\cite{frisson17}