Noa Ghersin, Research Associate, Lux Research
04.19.16
Last month, Apple filed a patent application for a display that detects hovering gestures, enabling users to interact with the device without touching it directly. While the concept may seem new, other big players made hover touch (also known as proximity touch) commercially available as early as 2012.
The floating touch feature in Sony’s Xperia sola smartphone enabled users to select links and bring up popup menus by hovering over them from up to 20 mm above the display. Samsung introduced a similar concept in its Galaxy S4 smartphone with the Air View feature, which enabled users to preview the contents of unopened emails and videos, see a magnified view of the web browser, or view a phone number saved in speed dial, all by hovering over the display.
With several big players tapping into what is collectively regarded as “hover touch,” the question becomes: how does hover touch fit into the next-generation controls landscape, and where does it stand between traditional touch controls on one side and more advanced force touch and gesture controls on the other?
Although gesture control at distances up to roughly 12 cm from the device can broadly be regarded as hover touch, there are two main mechanisms for accomplishing such proximity interactions: infrared sensing and capacitive sensing. Apple has been exploring the integration of infrared sensors into multi-touch displays to detect instances in which emitted light bounces back, indicating that a finger is above the display.
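As a rough illustration of the infrared approach, the sketch below thresholds a reflected-light reading to decide whether a finger is in hover range. The sensor signal model, baseline, and threshold values are illustrative assumptions, not details of Apple's design.

```python
# Minimal sketch of IR-based hover detection, assuming a sensor that reports
# normalized reflected-IR intensity (0.0 = nothing reflected, 1.0 = saturated).
# The signal model and thresholds are hypothetical, for illustration only.

AMBIENT_BASELINE = 0.05   # reflectance measured with nothing above the display
HOVER_THRESHOLD = 0.20    # extra reflectance suggesting a finger in hover range

def finger_is_hovering(reflectance: float) -> bool:
    """Return True if reflected IR exceeds the ambient baseline by enough
    to suggest a finger hovering above the display."""
    return (reflectance - AMBIENT_BASELINE) > HOVER_THRESHOLD

# A strong reflection reads as a hover; a weak one is just ambient light.
print(finger_is_hovering(0.40))  # True  -> finger close above the display
print(finger_is_hovering(0.08))  # False -> no object within range
```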
The core technology used by most developers, however, relies on a well-established concept: the capacitive sensing prevalent in mainstream touch technologies. Hover touch systems developed by Sony, Fogale Sensation, Atmel, and Hover Labs combine two types of capacitive sensing: mutual capacitance, which makes multi-touch detection possible, and self-capacitance, which generates a signal strong enough to detect a finger farther from the sensors, thereby adding a z-axis to touch detection.
The resulting proximity sensing allows for detection of objects up to 12 cm from the capacitive sensors. Because these systems detect fingers through disturbances in an electrostatic field, hover touch technologies can also detect objects to the sides of the sensor, not only directly above it.
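To make the two-mode capacitive idea concrete, here is a minimal sketch of how a controller might fuse mutual-capacitance readings (x/y positions of contacting fingers) with a self-capacitance signal (coarse z-axis proximity). The linear signal-decay model and normalized signal ranges are assumptions for illustration, not any vendor's firmware.

```python
# Sketch of combining the two capacitive modes, under a simplified model:
#   - mutual capacitance resolves x/y for fingers touching the glass
#   - self capacitance yields a coarse z estimate for a hovering finger,
#     since its signal decays with distance and stays usable out to ~12 cm

MAX_HOVER_CM = 12.0  # detection ceiling cited for capacitive hover sensing

def estimate_z_cm(self_cap_signal: float) -> float | None:
    """Map a normalized self-capacitance signal (1.0 = touching, 0.0 = none)
    to an approximate hover height, assuming simple linear decay."""
    if self_cap_signal <= 0.0:
        return None                      # nothing within sensing range
    return (1.0 - self_cap_signal) * MAX_HOVER_CM

def classify_input(mutual_touches: list[tuple[int, int]], self_cap: float):
    if mutual_touches:                   # finger(s) on the glass: multi-touch
        return ("touch", mutual_touches)
    z = estimate_z_cm(self_cap)
    if z is not None:                    # no contact, but a finger in the air
        return ("hover", z)
    return ("idle", None)

print(classify_input([(120, 340)], 1.0))  # ('touch', [(120, 340)])
print(classify_input([], 0.5))            # ('hover', 6.0) -> ~6 cm above
print(classify_input([], 0.0))            # ('idle', None)
```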
This capability, claimed by Fogale, could allow virtual buttons to replace physical buttons on smartphones, and could also enable the device to become aware of how it is being held (e.g., one hand, two hands, left hand, right hand) and adapt its behavior accordingly.
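A minimal sketch of the grip-awareness idea appears below: it infers how the phone is held from proximity readings along the side bezels. The four-edge signal layout and the decision rules are hypothetical, intended only to illustrate the concept, not Fogale's algorithm.

```python
# Illustrative grip inference from normalized edge proximity signals (0..1).
# The threshold and the mapping from edge signals to grips are assumptions.

def infer_grip(left_edge: float, right_edge: float) -> str:
    """Guess how the phone is held from proximity sensed along the bezels."""
    STRONG = 0.6
    if left_edge > STRONG and right_edge > STRONG:
        return "two hands"
    if left_edge > STRONG:
        return "left hand"    # dominant signal on the left bezel (assumed rule)
    if right_edge > STRONG:
        return "right hand"
    return "unknown"

# A device could, for example, shift its virtual side buttons toward the
# gripping hand, or place UI elements within reach of the free thumb.
print(infer_grip(left_edge=0.8, right_edge=0.1))  # left hand
```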
Examining hover touch alongside competing touch and gesture technologies reveals a continuum of control. At one end sit technologies that require direct contact, including traditional touch and force touch technologies like Apple’s 3D Touch; at the other end sit long-range gesture controls, such as the system developed by MUV Interactive, which can track up to 10 distinct objects from as far as 30 meters away.
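One way to picture this continuum is as a set of distance bands, sketched below using the ranges cited in this article (contact for touch and force touch, up to about 12 cm for hover, up to about 30 m for remote gesture systems). The bands are illustrative, not a formal taxonomy.

```python
# The control continuum expressed as distance bands (illustrative only).

def control_regime(distance_cm: float) -> str:
    if distance_cm <= 0.0:
        return "direct contact: traditional touch / force touch (e.g., 3D Touch)"
    if distance_cm <= 12.0:
        return "hover touch: capacitive or IR proximity sensing"
    if distance_cm <= 3000.0:   # 30 m, the range cited for MUV Interactive
        return "long-range gesture control"
    return "out of range"

print(control_regime(0.0))    # direct contact
print(control_regime(5.0))    # hover touch
print(control_regime(250.0))  # long-range gesture control
```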
While it may be tempting to view hover touch as a competitor to touch and gesture controls, it actually bridges the gap between direct-contact and long-range control mechanisms. That leaves hover controls uniquely positioned, but unable to stand on their own: marrying an outdated technology with limited gesture functionality will not make hover touch a ubiquitous control mechanism that replaces either alternative. That said, hover touch could make for a valuable add-on that enhances the user experience in niche use cases.
Noa Ghersin is a research associate on the Wearable Electronics Intelligence and the Electronic User Interfaces Intelligence teams at Lux Research, which provides strategic advice and ongoing intelligence for emerging technologies. Ghersin received her B.Sc. in biological engineering from MIT, where she focused on regenerative medicine, drug delivery, and prosthetics.