Despite tremendous advances in information technology, the primary means of
communication between humans and computers, by and large, remain the mouse and
keyboard. Sadly, no other interface technology, be it voice, touch, gestures,
or anything else, has reached a comparable level of mass adoption to date. It's
a field where technology announcements have come and gone without making a
major impact over the last four decades. The coming decade, however, promises
to be different. In the last five years alone, we've seen considerable action
in various interface technologies, especially in the areas of speech and
touch. Let's explore where they've reached and where they're heading.
Touch and Multi-Touch
Though they were introduced in the market in the 1970s, touch screens never
reached mass adoption, owing to high prices and the lack of any path-breaking
mainstream application. There were niche applications of touch screens in
specialized tablets for the police, medical, and research segments, but touch
was not a value-for-money proposition in the mainstream market till the advent
of smartphones.
Smartphones made touch screens popular and more affordable, but the future is
brighter still, because a wave of new applications is about to arrive. The
multi-touch-capable iPhone, for instance, has made the first big dent in the
market, changing the way we interact with devices forever.
When we use a computer's graphical interface, whether with a mouse or a touch
screen, there is only one point of contact at a time. With a mouse you get a
single cursor, and with a finger or stylus on a touch screen you touch a
single point to operate. But what if you want to touch or operate multiple
points of a GUI simultaneously? This is what multi-touch promises to deliver.
The applications for multi-touch are immense. Imagine a 60- or 100-inch
display at home, more than enough for two people to share and work on. Just
one machine with good processing power and a single screen can let many people
work on it simultaneously. This concept, called screen sharing, might become
widespread in the near future with the advent of large wall-sized screens. Or
simply imagine playing a virtual piano on a touchscreen.
Nor should we forget the performance benefit multi-touch brings to many
applications, such as drawing and design tools: by using both hands and
multiple fingers simultaneously, you save a lot of time.
Nor is multi-touch limited to phones any more: both Dell and Toshiba have
released tablets with multi-touch support. Microsoft is also planning to
release the new Windows 7, essentially Windows Vista with a multi-touch
option. Nothing has been officially disclosed yet, but we might see Windows 7
by next year or 2010.
How Multi-Touch works
Let's first understand how multi-touch works and what the device-level
requirements are. Essentially, multi-touch uses one of two mechanisms. One is
the touchscreen approach, where the screen itself is the interface and
multiple fingers moving on it are captured and converted into commands. This
technology is comparatively costly and may not be available on every computing
device.
To make multi-touch more usable and cheaper, multiple cameras can instead be
used to record hand gestures and convert them into commands. This approach is
far cheaper and can easily be retrofitted to any existing computing device.
The future of Multi-touch
Multiple parallel development efforts towards multi-touch are under way at
Apple, Microsoft, and in the open source world. Apple has plans to enhance its
existing portfolio of multi-touch devices, and Microsoft is targeting it at
three levels: WM 7 (a.k.a. Windows Mobile 7), Windows 7, and Microsoft
Surface. All of these are still under research, and we will have to wait two
to three years to see them in action. On the other hand, open source projects
such as OpenTable are trying to build a similar surface-computing environment
at a much lower price. So, the final verdict is: we are going to see a lot of
surface computing and multi-touch in the near future.
Accelerometer
This is another form of computer interface, one that converts hand or body
motion into signals and sends them to the computing device. Such devices are
already in the market, mainly in two forms. Some high-end mobile phones use an
accelerometer to auto-rotate the screen between landscape and portrait when
the user turns the device, or to track physical activity, counting your speed
and number of steps while you jog with your phone.
The other device making real use of the accelerometer is the Nintendo Wii,
whose motion-sensing controllers (the Wii Remote and its Nunchuk attachment)
are small handhelds that connect to the console wirelessly. When they're
moved, the movement is recreated inside the game, making games like tennis and
boxing much more realistic to play.
If you have seen the movie Minority Report, you can easily imagine where this
technology is heading. We might very soon see interface devices such as gloves
that let us interact with the computer through hand gestures alone.
Neural Impulse Actuator
This might sound like science fiction, but it's already in the market as a
packaged product, albeit at a very nascent stage with lots of enhancements
required. It comprises a headband which connects to a PC over USB and converts
your faint brain signals into instructions for the computer, so you don't even
need to move your hands; your PC will understand your instructions directly.
The device is currently used only for playing some games and needs calibration
every time you use it, but it works pretty well and has significantly reduced
response times in gameplay.
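The per-session calibration described above can be sketched in simplified
form: record the user's resting signal for a few seconds, take its mean and
spread as a baseline, then treat any sample far above that baseline as a
deliberate trigger. The signal values and thresholds below are made-up
illustrations, not the actual device's algorithm:

```python
import statistics

def calibrate(resting_samples):
    """Record a baseline (mean, spread) from the user's resting signal."""
    mean = statistics.mean(resting_samples)
    stdev = statistics.pstdev(resting_samples)
    return mean, stdev

def is_trigger(sample, baseline, k=3.0):
    """A sample well above the resting baseline counts as a command."""
    mean, stdev = baseline
    return sample > mean + k * stdev

rest = [0.9, 1.1, 1.0, 0.95, 1.05]  # quiet baseline readings
baseline = calibrate(rest)
print(is_trigger(1.02, baseline))  # False: within resting range
print(is_trigger(2.5, baseline))   # True: strong deliberate signal
```

This is also why calibration is needed every session: the resting baseline
shifts from day to day and user to user.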
The concept was first developed in the '80s for US Air Force pilots, to let
them control the direction of their planes without occupying their hands,
leaving the hands free to operate weapons.
The future of this device depends entirely on how far our imagination can go.
Imagine a time when, without touching, moving, or uttering a single word, our
machines understand what we want at the speed of thought and perform the task.
We look forward to seeing that happen very soon.
Speech Recognition
This technology has been around for over a decade, and there are plenty of
good products, but it has not become mainstream in the Indian context. The
primary reason is accent. Speech recognition software today is created
primarily for users in the US, UK, and Australia, so it recognizes those
accents easily; but in India and other Asian countries, it is difficult for an
average PC user to adjust to the required accent. Hence you will not see major
penetration of voice recognition systems in India.
The other barrier is the requirement of a high-end microphone and, in most
cases, a very quiet environment, neither of which is easily available.
However, since last year many products have been coming into the market with
special support for the Indian accent. Besides, the falling price of
noise-cancelling microphones is making it possible for more people to adopt
such products.
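The core idea behind the noise-cancelling microphones mentioned above is to
capture the ambient noise on a second channel and subtract it from the voice
channel. Real systems use adaptive filters rather than direct subtraction;
the toy version below, with made-up sample values, only illustrates the
principle:

```python
# Toy noise cancellation: if we know the noise waveform on a reference
# channel, subtracting it sample-by-sample from the mixed signal
# approximately recovers the clean voice.
def cancel(voice_plus_noise, noise_reference):
    """Subtract the reference noise from the mixed signal (toy version)."""
    return [v - n for v, n in zip(voice_plus_noise, noise_reference)]

noise = [0.2, -0.1, 0.3]
clean_voice = [1.0, 0.5, -0.4]
mixed = [v + n for v, n in zip(clean_voice, noise)]
print(cancel(mixed, noise))  # approximately recovers the clean voice
```

In practice the reference microphone never hears exactly the same noise as
the voice microphone, which is why adaptive filtering is needed.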
A Brewing Revolution in Human-Computer Interfaces
Anbumani Subramanian, Research Scientist, HP Labs India