Ever since computers were invented, people have been developing new ways to interact with them. The touch screen is the latest commercially available way to interact with today's gadgets, but designers now want to make computers totally intuitive, and this goal has led to gestural interfacing technology. Advances in computer vision mean that video tracking can now build up an accurate picture of how a person is moving, and those movements can be used to control a computer.
One company already developing this technology is Oblong Industries, which has created the "G-Speak" spatial operating environment. It is the closest thing yet to the interface Tom Cruise uses in the film "Minority Report" (Oblong's founder was actually a science adviser on the film and designed the computer interface seen in it). Infrared sensors around the room pick up data from special gloves worn by the user, and the computer recognises certain gestures, such as pointing and twisting. Huge screens mounted all around the room display the information, and everything is controlled by gesturing with the hands and fingers, which makes tasks like handling large data sets and manipulating 3D objects much easier than they would be with, say, a mouse. A version of Oblong's technology is already in use at some companies in the USA and, according to the developers, hand positions are tracked to an accuracy of 0.1mm while updating one hundred times a second.
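To make the pointing interaction concrete, here is a minimal sketch of how tracked hand positions could drive an on-screen cursor: two 3D points (wrist and fingertip) define a ray, and the ray is intersected with a wall-mounted screen. G-Speak itself is proprietary, so every name and number below is an illustrative assumption, not Oblong's actual API.

```python
import math

# Described update rate: one hundred times a second.
TICK_SECONDS = 1 / 100

def pointing_ray(wrist, fingertip):
    """Turn two tracked 3D positions into an origin plus a unit direction."""
    direction = [f - w for f, w in zip(fingertip, wrist)]
    length = math.sqrt(sum(c * c for c in direction))
    if length == 0:
        raise ValueError("wrist and fingertip coincide")
    unit = [c / length for c in direction]
    return {"origin": tuple(wrist), "direction": tuple(unit)}

def screen_hit(ray, screen_z):
    """Intersect the pointing ray with a wall screen at depth screen_z.

    Returns the (x, y) hit point on the screen plane, or None if the
    user is pointing parallel to the screen or away from it.
    """
    ox, oy, oz = ray["origin"]
    dx, dy, dz = ray["direction"]
    if dz == 0:
        return None  # pointing parallel to the screen
    t = (screen_z - oz) / dz
    if t < 0:
        return None  # pointing away from the screen
    return (ox + t * dx, oy + t * dy)
```

Run at 100 Hz against fresh sensor data, this kind of loop turns a pointing hand into a cursor; for example, a fingertip directly in front of the wrist maps to the point on the screen straight ahead.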
Another research team, at the Massachusetts Institute of Technology, is developing a mobile device that uses gestural interfacing to bridge the gap between the physical and digital worlds. The device, worn around the user's neck, consists of a camera, a portable projector and a mirror. The camera continuously tracks finger movements and sends the data to a mobile phone in the user's pocket, which processes the information and sends the result back to the projector to be displayed.
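The data flow just described (camera captures, phone processes, projector displays) can be sketched as a simple three-stage pipeline. The class and method names here are assumptions made for illustration; the real device's software is not structured this way as far as the article tells us.

```python
# Illustrative sketch of the described data flow: camera -> phone -> projector.

class Camera:
    def capture(self):
        # Stand-in for a real frame grab: report tracked fingertip
        # positions in image coordinates (pixels).
        return {"fingertips": [(120, 80), (200, 85)]}

class Phone:
    def process(self, frame):
        # The phone does all the processing: here it interprets two
        # fingertips held far enough apart as a "frame" gesture.
        left, right = frame["fingertips"]
        width = abs(right[0] - left[0])
        gesture = "frame" if width > 50 else "none"
        return {"gesture": gesture}

class Projector:
    def display(self, result):
        # Project the interpretation back onto the world before the user.
        return f"projecting: {result['gesture']}"

def pipeline_step(camera, phone, projector):
    """One pass around the camera -> phone -> projector loop."""
    return projector.display(phone.process(camera.capture()))
```

Splitting the stages like this mirrors the hardware split the article describes: the wearable parts stay dumb, and the phone in the pocket carries all the intelligence.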
The technology, according to its creator Pranav Mistry, is "a wearable interface that augments the physical world around us with digital information". This "Sixth Sense" tries to make our surroundings totally interactive: instead of taking out a camera and pressing a button, you just make a picture-frame shape with your fingers and the camera takes a photo. Draw an "@" sign in the air and your e-mails are projected onto a free surface in front of you.
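Once a gesture has been recognised, mapping it to an action is essentially a lookup table. The sketch below uses the two gestures the article mentions; the recogniser itself is assumed away, and all names are hypothetical rather than anything from the actual SixthSense code.

```python
# Hypothetical gesture-to-action dispatch for the two gestures described above.

def take_photo():
    return "photo captured"

def project_email():
    return "inbox projected onto nearest surface"

GESTURE_ACTIONS = {
    "frame_shape": take_photo,   # fingers framing the scene
    "at_sign": project_email,    # an "@" drawn in the air
}

def handle_gesture(name):
    """Run the action bound to a recognised gesture; ignore unknown ones."""
    action = GESTURE_ACTIONS.get(name)
    return action() if action else None
```

The appeal of a table like this is extendibility: adding a new interaction means registering one more gesture name and handler, which fits the open, build-your-own spirit Mistry describes below.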
The device doesn't only track finger movements. As Dr. Mistry continues, "Let's say I'm in a bookstore, and I'm holding a book. Sixth Sense will recognize that, and will go up to Amazon. Then, it will display online reviews of that book, and prices, right on the cover of the book I'm holding." It's easy to see the potential of this technology: think live travel updates projected onto your flight boarding pass, or price comparisons projected onto supermarket items.
What makes this device so exciting is its simplicity and extendibility: the prototype cost just $350 to produce, all from off-the-shelf components, and Dr. Mistry has said that he will make the entire technology open source. In his words, "Rather than waiting for that time to come, I want people to make their own system… People will be able to make their own hardware. I will give them instructions how to make it". Mistry seems more excited than anyone about this technology, and clearly not from a profit point of view. With developers worldwide joining the project and creating their own applications, the possibilities are limitless.
This portable gestural technology is a really exciting new way to bring the virtual world into the real world. Apart from making us look as cool as Tom Cruise in Minority Report, it lets us use the power of our computers without having to sit at one, or even hold one. Although the technology has a long way to go before it appears in a commercial product, the concept has been proven, and I think interest in this area will only increase. I'm excited about what's to come.