What is Gesture-Based Computing?

Thanks in part to the Nintendo Wii, the Apple iPhone, and the iPad, many people now have some immediate experience with gesture-based computing as a means of interacting with a computer. The proliferation of games and devices that incorporate easy, intuitive gestural interactions will certainly continue, bringing with it a new era of user interface design that moves well beyond the keyboard and mouse. While the full realization of the potential of gesture-based computing remains several years away, especially in education, its significance should not be underestimated, particularly for a new generation of students accustomed to touching, tapping, swiping, jumping, and moving as a means of engaging with information.

It’s almost a cliché to say it, but for many people the first exposure to gesture-based computing may have come over a decade ago, when they saw Tom Cruise in Minority Report swatting information around in front of him with sweeping arm movements. The fact that John Underkoffler, who designed the movie’s fictional interface, presented a non-fiction version of it, called G-Speak, in a TED Talk in 2010 fittingly attests to the growing relevance and promise of gesture-based computing. G-Speak tracks hand movements and allows users to manipulate 3D objects in space. This device, along with SixthSense, which was developed by Pranav Mistry while at the MIT Media Lab and uses visual markers and gesture recognition to enable interaction with real-time information, has ignited the cultural imagination regarding the implications of gesture-based computing. That imagination is further fueled by the Kinect system for the Xbox, which continues to explore the potential of human movement in gaming. In short, gesture-based computing is moving from fictional fantasy to lived experience.

Approaches to gesture-based input vary. The screens of the iPhone, the iPad, and the multi-touch Surface from Microsoft all react to pressure, motion, and the number of fingers touching the device. Other devices react to shaking, rotating, tilting, or being moved through space. The Wii and similar gaming systems, for example, function by combining a handheld, accelerometer-based controller with a stationary infrared sensor to determine position, acceleration, and direction. Development in this area centers on creating a minimal interface and on producing an experience of interaction so direct that, cognitively, the hand and body become input devices themselves. The Sony PlayStation 3 Motion Controller and the Microsoft Kinect system both move closer to this ideal.
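As a rough illustration of how an accelerometer-based controller registers a gesture, the Python sketch below watches for spikes in acceleration magnitude and treats them as a "shake." This is a minimal sketch under stated assumptions: read_accelerometer() is a hypothetical stand-in for a device-specific sensor API, and the threshold is an arbitrary illustrative value, not taken from any real device.

    import math
    import time

    SHAKE_THRESHOLD = 2.5   # acceleration magnitude (in g) treated as a shake; illustrative value
    SAMPLE_INTERVAL = 0.02  # poll the sensor at roughly 50 Hz

    def read_accelerometer():
        """Hypothetical placeholder for a device-specific sensor API; returns (x, y, z) in g."""
        raise NotImplementedError

    def wait_for_shake():
        """Block until the acceleration magnitude spikes past the threshold."""
        while True:
            x, y, z = read_accelerometer()
            if math.sqrt(x * x + y * y + z * z) > SHAKE_THRESHOLD:
                return  # gesture recognized: the device was shaken
            time.sleep(SAMPLE_INTERVAL)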

The technologies for gesture-based input also continue to expand. Evoluce has created a touch-screen display that responds to gestures and is working on a way to let people interact with Windows 7 through the Kinect system. Similarly, students at the MIT Media Lab have developed DepthJS, which unites the Kinect with the web, allowing users to control the Google Chrome web browser through gestures. Also at MIT, researchers are developing inexpensive gesture-based interfaces that track the entire hand. Elliptic Labs recently announced a dock that will let users interact with their iPad through gestures.
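Conceptually, a bridge like DepthJS maps recognized gestures to browser commands; the Python sketch below illustrates that dispatch pattern. The gesture names and the browser_action() hook are invented for the example and do not reflect the actual DepthJS interface.

    # The gesture names and browser_action() below are illustrative
    # assumptions, not the actual DepthJS API.
    GESTURE_COMMANDS = {
        "swipe_left": "history_back",
        "swipe_right": "history_forward",
        "push": "click",
    }

    def browser_action(command):
        """Hypothetical hook that would forward a command to the browser."""
        print("browser <-", command)

    def on_gesture(gesture):
        """Translate a recognized gesture event into a browser command."""
        command = GESTURE_COMMANDS.get(gesture)
        if command is not None:
            browser_action(command)

    on_gesture("swipe_left")  # example: a left swipe navigates back one page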

Another direction for technological innovation centers on haptics, the tactile feedback communicated to a user. At McGill University, researchers are developing a haptic feedback system that gives people with visual impairments richer feedback through fine degrees of touch. At RWTH Aachen University in Germany, a researcher with the Media Computing Group has created MudPad, a localized active haptic feedback interface for fluid touch surfaces that promises more nuanced ways to interact with screens through touch.

Other researchers are exploring ways to use gestural computing with mobile devices. GestureTek’s Momo software, for example, uses two different trackers to detect motion and the position of objects, and is designed to bring gesture-based computing to phones. iDENT Technology’s Near Field Electrical Sensing interfaces are designed to let mobiles respond to grip and proximity: a ringing mobile will put the call through if it is picked up and held, but will send it to voicemail if it is picked up and quickly put down again.
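The grip-and-proximity behavior described above amounts to a simple timing rule, which the Python sketch below makes concrete. The function name, timestamps, and threshold value are assumptions for illustration only, not iDENT Technology’s actual interface.

    HOLD_THRESHOLD = 1.5  # seconds; assumed cutoff between a quick put-down and a hold

    def route_incoming_call(picked_up_at, put_down_at, now):
        """Decide what a ringing phone should do based on how it was handled."""
        if put_down_at is not None and put_down_at - picked_up_at < HOLD_THRESHOLD:
            return "voicemail"  # picked up and quickly put down again
        if put_down_at is None and now - picked_up_at >= HOLD_THRESHOLD:
            return "answer"     # picked up and held
        return "ringing"        # not enough information yet

    print(route_incoming_call(0.0, 0.8, 1.0))   # -> voicemail
    print(route_incoming_call(0.0, None, 2.0))  # -> answer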

While gesture-based computing has found a natural home in gaming, as well as in browsing files, its potential uses are far broader. The ability to move through three-dimensional visualizations could prove compelling and productive, for example, and gesture-based computing lends itself naturally to simulation and training. It holds strong potential in education, both for learning, as students interact with ideas and information in new ways, and for teaching, as faculty explore new ways to communicate ideas. It also has the potential to transform what we understand to be scholarly methods for sharing ideas.

Gesture-based computing is changing the ways we interact with computers, both physically and mechanically. As such, it is at once transformative and disruptive. Researchers and developers are only beginning to grasp the cognitive and cultural dimensions of gesture-based communication, and realizing the full potential of gesture-based computing within higher education will require intensive interdisciplinary collaboration and innovative thinking about the very nature of teaching, learning, and communicating.

INSTRUCTIONS: Enter your responses to the questions below. This is most easily done by moving your cursor to the end of the last item and pressing RETURN to create a new bullet point. Please include URLs whenever you can (full URLs will automatically be turned into hyperlinks; please type them out rather than using the linking tools in the toolbar).

Please "sign" your contributions by marking with the code of 4 tildes (~) in a row so that we can follow up with you if we need additional information or leads to examples- this produces a signature when the page is updated, like this: - alan alan Jan 25, 2011

(1) How might this technology be relevant to the educational sector you know best?

  • creating more compelling interfaces; tangible interactions
  • a theme too little emphasized is that gestural interfaces BRING THE BODY BACK into computer-enhanced learning, which has been too much about screens, keyboards, and mice. This matters because the gestures and other bodily movements built into user experiences for learning invoke memory systems that could aid knowledge retrieval and transfer from gesture-based learning interfaces [- roy.pea roy.pea Feb 27, 2011]

(2) What themes are missing from the above description that you think are important?

  • The leap from the swipe interface, or even the toss interface, to "my body is the interface" in Microsoft Kinect is profound.

  • add your response here

(3) What do you see as the potential impact of this technology on teaching, learning, or creative expression?

  • add your response here
  • add your response here

(4) Do you have or know of a project working in this area?

Please share information about related projects in our Horizon K-12 Project form.