%, or to show something in the room in a video conference with the camera affixed to the screen.
%It is also frequent to move the mouse and keyboard to make space on the desk for something else, or share them with other users to give them the control of the computer.
It is also common to hand the mouse or the keyboard to other people to give them control of the computer.

In the Living Desktop project, we actuated a mouse, a keyboard, and a screen (\reffig{livingdesktop}).
The mouse and the keyboard can translate in the $x$--$y$ plane.
The keyboard can also rotate.
The screen can rotate, and translate along the $x$ axis.
The details of the apparatus are described in~\cite{bailly16}.
With these capabilities, devices can move on their own without requiring the user to move them.
The interesting question here is the degree of control users have over their devices.
Beaudouin-Lafon defines two interaction paradigms: \emph{computer-as-a-tool}, which users control, and \emph{computer-as-a-partner}, to which users delegate tasks~\cite{mbl04}.
Here I argue for a continuum between full control and full automation.
We discuss examples of application scenarios for four degrees of control.

%\paragraph{Telekinesis}
\paragraph{Full control}
When users have full control over the actuated devices, they can move them around physically or remotely.
%\paragraph{Video Conference}
For example, when video-conferencing with a desktop computer, the camera is usually affixed to the screen.
We can manipulate it with full control to adjust the field of view.
The problem arises when remote users would like to show an object they manipulate outside the camera's field of view.
They have to move the screen at the same time as they manipulate the object.
In this scenario, we take control over the remote screen to adjust the field of view and make sure we can see what the remote users would like to show.
%In this scenario the screen follows the user so that he can always see the video conference, and show what he is doing to his collaborators.
%The user does not control the screen position in the strict sense of the term. However he can activate or deactivate this behavior and still control the screen position manually or with another interface.

%\paragraph{Teleoperation}
\paragraph{Constraint control}
%User control, with system constraints
%the user suggests movements, the device decides to which degree it complies.
%\paragraph{Peephole display}
Even with a large screen or multiple screens, the interactive screen real estate is limited.
We propose to use an actuated screen as a peephole display in a larger working space.
In this scenario, the screen moves on the $x$ axis and the pixels show the content of this area in physical space.
The screen is like a moving physical window.
%In this scenario the user controls the screen position.
We can imagine combining this scenario with a projection that provides a low-resolution image around the screen~\cite{jones13}.
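As an illustration of the peephole mapping, the sketch below converts the screen's physical position on its rail into a viewport in a larger virtual canvas. The scale factor and resolution are assumptions for the example, not the values of our implementation.

```python
# Hypothetical sketch of a peephole display mapping: the screen's
# physical x position selects which slice of a larger virtual canvas
# to show. PIXELS_PER_MM and SCREEN_WIDTH_PX are assumed values.

PIXELS_PER_MM = 4          # assumed display density (pixels per millimetre)
SCREEN_WIDTH_PX = 1920     # assumed horizontal resolution of the moving screen

def viewport_origin(screen_x_mm):
    """Left edge (in canvas pixels) of the region to display when the
    screen's left edge sits at screen_x_mm on the rail."""
    return int(screen_x_mm * PIXELS_PER_MM)

def visible_region(screen_x_mm):
    """(left, right) canvas pixel columns currently behind the screen."""
    left = viewport_origin(screen_x_mm)
    return (left, left + SCREEN_WIDTH_PX)

# Moving the screen 100 mm to the right shifts the viewport by 400 px.
```

The key design choice is that pixels are anchored to physical space rather than to the screen, which is what makes the screen behave like a moving window.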
%\paragraph{Constraint}
\paragraph{Constraint automation}
%System control, with user constraints
%the user defines the constraints of the devices movements.
We sometimes need to watch information on our screen while moving in a room, for example when working on a whiteboard.
We implemented a scenario in which the monitor orientation follows users in their office.
It displays notifications such as new emails, agenda alerts, or missed calls.
It also uses proxemic interaction~\cite{roussel04} by adapting the text size so that it remains readable regardless of the distance.
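The text-size adaptation can be sketched as keeping the visual angle subtended by the text constant, so the physical glyph height grows linearly with the viewing distance. The target angle below is an illustrative assumption, not the value used in our prototype.

```python
import math

# 1 pt = 1/72 inch, 1 inch = 25.4 mm
POINTS_PER_MM = 72 / 25.4

def font_size_pt(distance_mm, visual_angle_deg=0.4):
    """Font size (in points) so that glyphs subtend visual_angle_deg
    at the given viewing distance. 0.4 degrees is an assumed target."""
    height_mm = 2 * distance_mm * math.tan(math.radians(visual_angle_deg) / 2)
    return height_mm * POINTS_PER_MM

# Doubling the viewing distance doubles the required font size.
```

Because the angle is fixed, the mapping is linear in distance, which matches the intuition that text twice as far away must be twice as tall to remain equally readable.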
%Tidying. To keep the desk tidy, the devices move away when the user shuts down the computer, when he leaves his office or after a period of inactivity (e.g. 10mn). Moreover, the devices can move to their charging stations. This is especially useful with state of the art wireless devices (e.g. keyboard with embedded screens [4] or actuators [2], shape-changing mouses [21]) using more energy than conventional devices.

%\paragraph{Insurrection}
\paragraph{Full automation}
%System full control
%the user has no influence on the device movements.
%\paragraph{Ergonomic coach}
Being well seated is essential for healthy office work.
It reduces fatigue and pain.
It is however difficult to pay attention to our posture all day.
In this scenario, the mouse and the keyboard move away if we are not seated correctly on the chair.
The user has no control over the devices in this situation.
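A minimal sketch of such a full-automation policy is a sensing loop that decides, without user input, whether the devices retreat. The sensor names and thresholds below are illustrative assumptions, not the actual implementation.

```python
# Hypothetical posture check for the ergonomic-coach scenario: the
# system alone decides to move the devices away. Pressures are assumed
# to be normalised chair-sensor readings in [0, 1].

BACKREST_THRESHOLD = 0.3   # assumed minimum backrest contact for good posture

def posture_ok(backrest_pressure, seat_left, seat_right):
    """Poor posture: slouching (low backrest contact) or leaning
    strongly to one side (unbalanced seat pressure)."""
    total = max(seat_left + seat_right, 1e-6)
    balanced = abs(seat_left - seat_right) < 0.5 * total
    return backrest_pressure >= BACKREST_THRESHOLD and balanced

def device_command(backrest_pressure, seat_left, seat_right):
    """Command sent to the actuated mouse and keyboard."""
    if posture_ok(backrest_pressure, seat_left, seat_right):
        return "stay"
    return "move_away"
```

The point of the sketch is the control structure: the user appears only as a sensed quantity, never as a source of commands, which is what places this scenario at the full-automation end of the continuum.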
\subsubsection{Discussion and conclusion}
This work on actuated devices is quite different from the projects discussed in the previous sections.
The previous projects followed the typical view of haptics, commonly designated as ``haptic feedback''.
In these previous projects, we encoded information or rendered haptic properties of objects with forces and vibrations.
Here we take a step back, and consider the haptic properties of physical objects.
These properties are due to the objects' shape, size, weight, material, position, etc.
By actuating objects, desktop peripherals in our case, we modified some of these properties, typically the shape and position.
Our focus here was not on how users perceive these haptic properties, but on how we can leverage them to propose new interaction possibilities.

These projects introduce two concepts that we will develop in \refchap{chap:input} and \refchap{chap:loop}.
The first one is the idea that haptics does not only cover the sense of touch, but also our ability to manipulate.
In our examples, the haptic properties of objects enabled different kinds of manipulation.
The situation is reversed, and we can see the user as a haptic device that produces forces on a manipulated entity.
The second concept is the idea that we cannot separate haptics as output and haptics as input.
They are two sides of the same coin that together form interaction.
This concept has many consequences, such as the continuum between control and automation, and it is linked to several fundamental paradigms in the literature that we will discuss in \refchap{chap:loop}.

%Looking at the office environment, there are many other objects involved.
%They can be actuated to provide other interactive scenarios.
%Probst et al. presented a prototype of chair they use for input\cite{probst14}.
%These chairs are not actuated, but equipped with sensors.
%However, Nissan designed parking chairs\footnote{\url{https://youtu.be/O1D07dTILH0}} which can move around.
%In their scenario the chairs move back under the table to tidy the room.
%But we can envision other scenarios.
%Pull-down beds are other examples of existing moving objects, which are easy to actuate.

%In a larger scale, the concept of moving walls makes it possible to have many rooms in small flats\footnote{\url{https://vimeo.com/110871691}}.
%Each wall has specific equipment, suitable for a particular room.
%If we keep think bigger, rotating houses is another example of actuated environment\footnote{\url{https://youtu.be/dIHUwp9x8Fg}}.
%The obvious application is to maintain sunlight at a specific location in the house.
%But there may be many interesting interactive scenarios to study with such a building.

%The early studies about TUIs used to consider everyday objects for interaction.
%Nowadays, computer peripherals became everyday objects.
%As such, they can also be considered as TUIs as long as they are not used as the device they are designed to be.
%We discussed how actuating computer peripherals enables new interactions.
%We presented a prototype of keyboard with actuated keys which can move up and down.
%We also presented a concept of moving computer peripherals, which enable new interactions.
%We envision this concept can apply to many other objects in our environment.
%The question we must always keep in mind is the degree of control we would like to keep over these wandering TUIs.
\newpage
\section{Conclusion}