October 1, 2012





Now that the touch interface has become a mainstream technology, work is under way to refine other interactive technologies: voice, gestures, and facial recognition. Several companies have been working on these, but they are still far from making them fully robust, secure, and usable.

You can already see gesture control in action in the gaming consoles introduced by Nintendo, Microsoft, and Sony. All of them work pretty well, but they support only a limited set of gestures. Apple added Siri for voice recognition to its iPhone 4S. It worked well in the US, but had difficulty recognizing other languages. In fact Micromax, the budget smartphone maker, also introduced an Indian take on Siri, called AISHA (read our review in the “Gadgets & More” section of this issue). Several laptop vendors introduced facial recognition quite some time ago; it works, but isn’t secure enough.


So while a lot of work has happened on this front, it’s not enough. Much more needs to be done, and I got some useful insights into what’s in store for this area at Intel’s Developer Forum, held in San Francisco in September this year. The chip giant introduced what’s called a Perceptual Computing SDK, which lets developers build apps that use these alternate input technologies. Intel showcased a voice recognition assistant for ultrabooks, which recognizes natural language commands from the user to perform tasks. Another application shown was an interactive gesture camera from Creative, which clips onto a notebook screen. With this, you can play games just by moving your hands. Intel plans to shrink it to the point where it can be integrated on top of ultrabook screens, in place of webcams, and plans to throw open a developer challenge for its Perceptual Computing SDK very soon.
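To give a feel for what such a voice assistant has to do once speech has been converted to text, here is a minimal sketch of mapping free-form phrasing onto a small set of actions. This is purely illustrative: the command names and keyword sets are my own assumptions, not part of Intel’s SDK.

```python
# Hypothetical command vocabulary: each action is matched by a set of keywords.
COMMANDS = {
    "play_music": {"play", "music", "song"},
    "open_browser": {"open", "browser", "web"},
    "check_weather": {"weather", "forecast"},
}

def match_command(transcript):
    """Return the command whose keywords best overlap the spoken transcript,
    or None if nothing matches."""
    words = set(transcript.lower().split())
    best, best_score = None, 0
    for command, keywords in COMMANDS.items():
        score = len(words & keywords)
        if score > best_score:
            best, best_score = command, score
    return best
```

A real assistant would of course use a proper language model rather than keyword overlap, but the basic step of turning a transcript into an action looks much like this.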

Once it succeeds, gesture control will hit the masses and find a lot of interesting applications. If you’re in the kitchen, cooking from a recipe in an e-book on your ultrabook or tablet, you could flip the pages with a wave instead of touching the screen with messy hands. Similarly, a presenter could advance slides just by waving a hand, instead of hitting the next key on the keyboard or using a presentation controller.

Another interesting technology I saw was similar to surface computing, except that everyday objects can become computing devices, like a flower vase or a bowl kept in your living room. The technology can, for instance, project your digital photo collection onto a bowl or a wall using a projector. The projector would in turn have two cameras to recognize your gestures as you move the photos around, pinching and zooming them just by moving your hands.
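The pinch-and-zoom step in that setup is geometrically simple: the cameras track two hand points, and the zoom factor is the ratio of the current distance between them to the distance when the pinch began. A minimal sketch of that calculation (the coordinate inputs are assumed to come from the hand tracker):

```python
import math

def distance(p, q):
    """Euclidean distance between two (x, y) points."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def zoom_factor(start_left, start_right, now_left, now_right):
    """Scale factor for the projected photo: > 1 as the hands move apart,
    < 1 as they come together."""
    start = distance(start_left, start_right)
    if start == 0:
        return 1.0
    return distance(now_left, now_right) / start
```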

You can read more about it in my article “How Everyday Objects can Become Compute Devices” in this issue. The possibilities of what you can do with alternate input devices are immense, so expect some very exciting times ahead in this domain!
