The purpose of the modalities mentioned above is essentially to sense and interpret gestures of the fingers, hands, and arms.
This is a much more complex task than it seems at first sight.
-The hand alone has 27 degrees of freedom~\cite{elkoura03}.
+The hand alone has 21 degrees of freedom~\cite{elkoura03} and the arm adds 7 more~\cite{nasa14}.
The movement range of each of these degrees of freedom depends on multiple factors, including morphology, physical condition, age and gender~\cite{nasa14}.
It is therefore not surprising that input systems only sense a small part of the possible human movements.
Specifically, when users would like to perform an action, they form an intention that the cognitive system turns into execution commands for the motor system.
The motor system, typically muscles, tendons, and joints, actuates the body, which in turn produces physical effects.
These physical effects enable the manipulation of objects in contact with the moving parts of the body.
+They take the form of movements or forces.
+%Elkoura03: 21 (+6 “wrist”)
+%- 4 x (3 extension/flexion + 1 abduction /adduction) => 16
+%- thumb: => 5
+%The human hand has 27 degrees of freedom: 4 in each finger, 3 for extension and flexion and one for abduction and
+%adduction; the thumb is more complicated and has 5 DOF,
+%leaving 6 DOF for the rotation and translation of the wrist
+The hand has 21 degrees of freedom: 5 for the thumb, and 4 for each of the four other fingers~\cite{elkoura03}.
+The arms have 7 degrees of freedom: 3 for the shoulders, 1 for the elbow, 1 for the forearm, and 2 for the wrist~\cite{nasa14}.
+% 1) Shoulder horizontal Abduction/Adduction %(135/45)
+% 2) Lateral/medial shoulder rotation %(46/91)
+% 3) Shoulder flexion/extension %(152/33)
+% 4) Elbow flexion %(141)
+% 5) Forearm pronation/supination %(78/83)
+% 6) Wrist ulnar/radial bend %(19/16)
+% 7) Wrist flexion extension %(62/40)
+Knowledge about the range of movements and maximum forces is necessary for the design and layout of workstations.
+For example, NASA documents these values with and without gravity, or with pressurization, because they need to provide precise and documented specifications for spacecraft~\cite{nasa14}.
+They also use these specifications to design clothing that astronauts can wear comfortably to perform routine tasks.
+In our case, we design input systems.
+Therefore we need to know the range and precision of movements we have to sense.
+
+%motion range has application to
+%- Workstation Design and Layout
+%- design clothing for tasks: lift helmet visor, open door,
-Force and movements
\paragraph{Input systems}
-Buxton collection of interactive devices\footnote{\href{https://www.microsoft.com/buxtoncollection}{https://www.microsoft.com/buxtoncollection}}
-
-sensors
-
-signal: filters, transfer functions
-
-events
+Input systems observe our movements with sensors that measure them directly or indirectly.
+%\defword{Direct input} consists in measuring the movement of our body, typically fingers and arms.
+%\defwork{Indirect input} systems measure the movements of an object manipulated by the user.
+There are several families of technologies for this.
+The families described below are not meant to be an exhaustive list.
+Rather, the idea is to give readers an overview of today's most frequent technologies.
+% for input systems.
+%I will describe below the most frequent technologies in input systems.
+The first family is electro-mechanical components.
+In this family, various types of switches and push buttons sense contacts.
+Keyboard and controller buttons use variations of this technology.
+Linear and rotary potentiometers sense continuous movements.
+Joysticks typically contain two of them to sense rotations in two directions.
+Encoders sense discrete movements.
+Ball mice used two of them to sense displacements along two axes, and encoders are still used today in mouse wheels.
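+As an illustration of how encoders are read, the sketch below decodes a quadrature encoder in software; the helpers \texttt{read\_pin\_a} and \texttt{read\_pin\_b} are hypothetical and stand for whatever mechanism the platform provides to read the two output channels.
+\begin{verbatim}
+/* Hypothetical helpers returning the state (0 or 1) of each channel. */
+int read_pin_a(void);
+int read_pin_b(void);
+
+/* Minimal quadrature decoding: on each change of channel A,
+   channel B gives the direction of the rotation.
+   The sign convention depends on the wiring. */
+void encoder_poll(int *position, int *last_a)
+{
+    int a = read_pin_a();
+    int b = read_pin_b();
+    if (a != *last_a) {
+        if (a == b)
+            (*position)--;
+        else
+            (*position)++;
+        *last_a = a;
+    }
+}
+\end{verbatim}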
+The second family of input sensing technologies is electrical sensors, such as resistive or capacitive sensors.
+They sense the position of one or more contact points.
+They are used in current touchscreens, including mobile phones and tablets.
+The third family is vision-based technologies.
+Such sensors use cameras and vision algorithms to detect movements.
+For example, optical mice use high-frequency and low-resolution cameras.
+Older generations of tabletops used infrared light and cameras to detect contact points.
+Other devices use one or more RGB or infrared cameras to detect markers or a projected pattern.
+They are used for body motion capture or for tracking objects.
+VR headsets also use this technology to track the controllers.
+The last family of technologies is Microelectromechanical systems (\defword{MEMS}).
+They sense many kinds of physical phenomena such as acceleration, rotation, magnetic fields, or fluid pressure.
+Input systems typically use combinations of accelerometers, gyroscopes, and magnetometers called Inertial measurement units (\defwords{IMUs}{IMU}).
+
+The signal coming from these sensors requires several transformations.
+Contact inputs such as buttons require a software or hardware \defword{debouncing} mechanism to avoid unwanted multiple activations.
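+As a minimal sketch, assuming a fixed polling period and a raw reading provided by the caller, a software debouncer can simply wait until the signal has been stable for several consecutive samples before reporting a change.
+\begin{verbatim}
+/* Software debounce sketch: accept a new state only after the raw
+   reading has been stable for DEBOUNCE_SAMPLES consecutive polls. */
+#define DEBOUNCE_SAMPLES 5
+
+int debounce(int raw, int *stable_state, int *counter)
+{
+    if (raw == *stable_state) {
+        *counter = 0;                /* reading agrees, nothing to do */
+    } else if (++(*counter) >= DEBOUNCE_SAMPLES) {
+        *stable_state = raw;         /* stable long enough: accept it */
+        *counter = 0;
+        return 1;                    /* report one clean transition   */
+    }
+    return 0;                        /* no debounced change yet       */
+}
+\end{verbatim}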
+Threshold-based input such as capacitive sensing not only requires adjusting a sensitivity and a threshold value, but it often also requires a \defword{hysteresis} mechanism to avoid multiple activations.
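+A software hysteresis can be sketched with two distinct thresholds, so that a value hovering around a single threshold does not toggle the state repeatedly; the threshold values below are arbitrary placeholders, not those of any actual sensor.
+\begin{verbatim}
+/* Hysteresis sketch for a capacitive reading: activate above a high
+   threshold, release only below a lower one. */
+#define TOUCH_ON   520   /* placeholder values */
+#define TOUCH_OFF  480
+
+int update_touch(int value, int touching)
+{
+    if (!touching && value > TOUCH_ON)
+        return 1;        /* crossed the activation threshold    */
+    if (touching && value < TOUCH_OFF)
+        return 0;        /* fell below the release threshold    */
+    return touching;     /* in between: keep the previous state */
+}
+\end{verbatim}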
+Analog signals must be transformed to digital values with an Analog-to-digital converter (\defword{ADC}).
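+For an $N$-bit converter with a reference voltage $V_{\mathrm{ref}}$, a digital reading $n$ corresponds approximately to the input voltage
+\[
+  V \approx \frac{n}{2^{N}-1}\, V_{\mathrm{ref}}.
+\]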
+Input values often have noise that must be \defwords{filtered}{filter}.
+There are many possible filters that remove noise, at the cost of latency~\cite{casiez12}.
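+A minimal example is the first-order low-pass (exponential smoothing) sketched below; more elaborate filters adjust the amount of smoothing dynamically, for example according to the speed of the movement.
+\begin{verbatim}
+/* First-order low-pass (exponential smoothing), with alpha in (0,1].
+   Smaller values smooth more, at the cost of more latency. */
+float lowpass(float x, float *state, float alpha)
+{
+    *state = alpha * x + (1.0f - alpha) * (*state);
+    return *state;
+}
+\end{verbatim}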
+Some kinds of input require further transformation.
+In particular, pointing input requires a \defword{transfer function} that computes the movement of the cursor on the screen from the physical movement on the input device.
+These transfer functions usually take into account the ballistic-then-corrective nature of our movements~\cite{meyer88}.
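+As an illustration only, a transfer function can be sketched as a velocity-dependent gain: slow, corrective movements get a gain close to one for precision, while fast, ballistic movements get a larger gain to cover more distance; the constants below are arbitrary and do not correspond to any actual system.
+\begin{verbatim}
+#include <math.h>
+
+/* dx: device displacement measured during dt seconds.
+   Returns the corresponding cursor displacement. */
+float transfer(float dx, float dt)
+{
+    float speed = fabsf(dx) / dt;        /* device speed          */
+    float gain  = 1.0f + 0.01f * speed;  /* faster -> higher gain */
+    if (gain > 4.0f)
+        gain = 4.0f;                     /* cap the acceleration  */
+    return dx * gain;
+}
+\end{verbatim}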
+Vision-based technologies are sensitive to occlusions.
+Therefore the software part of the pipeline extrapolates data to fill gaps in the input streams.
+Combining several sources of input is challenging as well, but it provides more precision in some cases.
+For example, data from accelerometers require two successive mathematical integrations for position sensing.
+Not only does this require calibration, but it is also sensitive to drift due to the limited precision of the data.
+The fusion of accelerometers, gyroscopes, and magnetometers provides better tracking, at the cost of increased processing complexity.
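+A classic example of such a fusion is the complementary filter sketched below: integrating the gyroscope gives a smooth but drifting angle, the accelerometer gives a noisy but drift-free one, and blending the two keeps the best of each; the blending weight is an arbitrary illustration.
+\begin{verbatim}
+/* Complementary filter for one rotation axis.
+   gyro_rate: angular rate (deg/s), accel_angle: angle derived
+   from the accelerometer (deg), dt: elapsed time (s). */
+float fuse_angle(float angle, float gyro_rate,
+                 float accel_angle, float dt)
+{
+    const float k = 0.98f;  /* trust the gyroscope for fast changes */
+    return k * (angle + gyro_rate * dt) + (1.0f - k) * accel_angle;
+}
+\end{verbatim}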
+
+The software processes described in the previous paragraph are computed either on the device or on the host computer.
+For example, devices typically debounce contact inputs with analog low-pass filters.
+They also implement hysteresis effects with a Schmitt trigger\footnote{\href{https://en.wikipedia.org/wiki/Schmitt_trigger}{https://en.wikipedia.org/wiki/Schmitt\_trigger}}.
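+For reference, a first-order RC low-pass filter attenuates the frequencies above its cutoff
+\[
+  f_c = \frac{1}{2\pi RC},
+\]
+so the resistor and capacitor values are chosen to reject the fast oscillations of contact bounce while letting intentional presses through.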
+The ADCs are microcontroller peripherals on the device.
+Filters are commonly implemented both on the device and host side.
+For example, mouse or touchpad movements are filtered on the device.
+In contrast, input from depth cameras such as the Kinect\footnote{\href{https://en.wikipedia.org/wiki/Kinect}{https://en.wikipedia.org/wiki/Kinect}} is filtered on the host side, because the host retrieves raw data and computes, for example, a skeleton~\cite{shotton11}.
+The transfer function is typically computed on the host, because it requires information about the screen.
+
+When devices are integrated in the interactive system, they are connected to the host with a simple bus like SPI or I2C\footnote{\href{https://en.wikipedia.org/wiki/Serial_Peripheral_Interface}{https://en.wikipedia.org/wiki/Serial\_Peripheral\_Interface} \href{https://en.wikipedia.org/wiki/I\%C2\%B2C}{https://en.wikipedia.org/wiki/I\textsuperscript{2}C}}.
+In this case, the device implements a communication protocol that the host has to follow.
+There is no standard protocol, but the overall idea is usually similar.
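+For example, reading a value from a sensor register usually follows the same two-step pattern, sketched here with hypothetical \texttt{i2c\_write} and \texttt{i2c\_read} helpers.
+\begin{verbatim}
+#include <stdint.h>
+
+/* Hypothetical bus helpers, assumed to be provided by the platform. */
+void i2c_write(uint8_t dev, const uint8_t *buf, int len);
+void i2c_read(uint8_t dev, uint8_t *buf, int len);
+
+/* Typical register-read pattern: select a register, then read it. */
+uint8_t read_register(uint8_t dev_addr, uint8_t reg_addr)
+{
+    uint8_t value;
+    i2c_write(dev_addr, &reg_addr, 1);  /* send the register number */
+    i2c_read(dev_addr, &value, 1);      /* read its content back    */
+    return value;
+}
+\end{verbatim}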
+These buses use no error-correction codes.
+They are therefore fast, but sensitive to interference.
+Thus, devices that users plug in themselves use more robust buses.
+Today's most frequent bus is certainly USB with the Human Interface Devices (\defword{HID}) class\footnote{\href{https://www.usb.org/hid}{https://www.usb.org/hid}}.
+This class defines a standard communication protocol.
+When the device is plugged in, it sends descriptors that list its features.
+In particular, the HID report descriptor details the format and semantics of the data packets it will send at a fixed frequency.
+Thanks to this protocol, the host can interpret virtually any HID device with a generic driver.
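+To give an idea of what such a descriptor looks like, the excerpt below is close to the three-button mouse example given in the HID specification: it declares three button bits, five bits of padding, and two relative 8-bit axes, which is exactly the format of the reports the device will then send.
+\begin{verbatim}
+const unsigned char mouse_report_descriptor[] = {
+    0x05, 0x01,  /* Usage Page (Generic Desktop)         */
+    0x09, 0x02,  /* Usage (Mouse)                         */
+    0xA1, 0x01,  /* Collection (Application)              */
+    0x09, 0x01,  /*   Usage (Pointer)                     */
+    0xA1, 0x00,  /*   Collection (Physical)               */
+    0x05, 0x09,  /*     Usage Page (Buttons)              */
+    0x19, 0x01,  /*     Usage Minimum (button 1)          */
+    0x29, 0x03,  /*     Usage Maximum (button 3)          */
+    0x15, 0x00,  /*     Logical Minimum (0)               */
+    0x25, 0x01,  /*     Logical Maximum (1)               */
+    0x95, 0x03,  /*     Report Count (3)                  */
+    0x75, 0x01,  /*     Report Size (1 bit)               */
+    0x81, 0x02,  /*     Input (Data, Variable, Absolute)  */
+    0x95, 0x01,  /*     Report Count (1)                  */
+    0x75, 0x05,  /*     Report Size (5 bits)              */
+    0x81, 0x01,  /*     Input (Constant): padding         */
+    0x05, 0x01,  /*     Usage Page (Generic Desktop)      */
+    0x09, 0x30,  /*     Usage (X)                         */
+    0x09, 0x31,  /*     Usage (Y)                         */
+    0x15, 0x81,  /*     Logical Minimum (-127)            */
+    0x25, 0x7F,  /*     Logical Maximum (127)             */
+    0x75, 0x08,  /*     Report Size (8 bits)              */
+    0x95, 0x02,  /*     Report Count (2)                  */
+    0x81, 0x06,  /*     Input (Data, Variable, Relative)  */
+    0xC0,        /*   End Collection                      */
+    0xC0         /* End Collection                        */
+};
+\end{verbatim}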
+Regardless of the communication method between the host and the device, the operating system creates \defwords{input events}{input event} that applications will interpret for their own use.
+
+%Buxton collection of interactive devices\footnote{\href{https://www.microsoft.com/buxtoncollection}{https://www.microsoft.com/buxtoncollection}}
%Pseudo haptics \cite{lecuyer01}