Interface Technology: Beyond Mouse and Keyboard

PCQ Bureau

Despite so much advancement in information technology, the methods of communication between humans and computers have, by and large, remained the mouse and keyboard. Sadly, no other interface technology, be it voice, touch, gestures, or anything else, has reached a comparable level of mass adoption to date. It's a field where technology announcements have come and gone without making a major impact over the last four decades. The coming decade, however, promises to be different. In the last five years alone, we've seen considerable action in various interface technologies, especially in the areas of speech and touch. Let's explore where they've reached and where they're heading.


Touch and Multi-Touch



Though they were introduced in the market in the 1970s, touch screens did not reach mass adoption, due to pricing and the lack of path-breaking applications in the mainstream industry. There were specific applications of touch screens in specialized tablets for the police, medical and research segments, but these were not a value-for-money proposition in the mainstream market till the advent of smartphones.

Smartphones have made touch screens very popular and more affordable. But the future is even brighter for touch screens, because we are about to see lots of applications built around them very soon. For instance, the multi-touch-ready iPhone has made the first big dent in the market, changing the way we interact with devices forever.

When we use a computer's graphical interface, whether with a mouse or a touch screen, there is only a single point of contact or operation. For example, with a mouse you will find only a single mouse cursor, and with a finger or a stylus on a touch screen you touch a single point to operate. But what if you want to touch or operate multiple points of a GUI simultaneously? This is what multi-touch promises to deliver.


The applications for multi-touch are immense. Imagine having a 60- or 100-inch display at home, which is more than enough for two people to share and work on. Just one machine with good processing power and a single screen can let many people work on it simultaneously. This concept is called Screen Sharing, and it might become widespread in the near future with the advent of large wall-sized screens. Simply imagine playing a virtual piano on a touchscreen.



And let's not forget the performance benefit you get from multi-touch in many applications, such as drawing and design tools or interface handling: you save a lot of time by using both hands and multiple fingers simultaneously.

Today it's not just phones that support multi-touch; both Dell and Toshiba have released tablets with multi-touch. Microsoft is also planning to release the new Windows 7, which is essentially Windows Vista with a multi-touch option. Nothing has been officially disclosed yet, but we might see Windows 7 by next year or 2010.

How Multi-Touch works



Let's first understand how multi-touch works and what the device-level requirements are. Essentially, multi-touch uses one of two mechanisms. One is the touchscreen approach, where a touchscreen is used as the interface and multiple fingers moving on the screen are captured and converted into commands. This technology is comparatively costly and may not be available on every computing device.
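To get a feel for what "multiple points of contact" means at the software level, here is a minimal sketch in TypeScript using the browser's standard Touch Events API. It simply keeps track of every finger currently on a touch-enabled element; the element id "surface" and the console logging are assumptions made purely for illustration, not part of any product mentioned above.

```typescript
// Minimal sketch: tracking several simultaneous touch points with the
// standard Touch Events API. The element id "surface" is an assumption.
const surface = document.getElementById("surface") as HTMLElement;

// Active contacts, keyed by the identifier the browser assigns to each
// finger for the lifetime of that touch.
const activeTouches = new Map<number, { x: number; y: number }>();

function updateTouches(event: TouchEvent): void {
  event.preventDefault(); // stop the browser from scrolling/zooming instead
  for (let i = 0; i < event.changedTouches.length; i++) {
    const t = event.changedTouches[i];
    if (event.type === "touchend" || event.type === "touchcancel") {
      activeTouches.delete(t.identifier); // finger lifted
    } else {
      activeTouches.set(t.identifier, { x: t.clientX, y: t.clientY });
    }
  }
  console.log(`${activeTouches.size} finger(s) down`, [...activeTouches.values()]);
}

["touchstart", "touchmove", "touchend", "touchcancel"].forEach((type) =>
  surface.addEventListener(type, updateTouches as EventListener, { passive: false })
);
```

With two tracked points, a gesture like pinch-to-zoom is simply the change in distance between them over time.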


So, to make it more usable and cheaper, multiple cameras can instead be used to record hand gestures and convert them into commands. This technology is far cheaper and can easily be added to any existing computing device.

The future of Multi-touch



Multiple parallel developments towards multi-touch are under way at Apple, at Microsoft, and in the open source world. Apple has plans to enhance its existing portfolio of multi-touch devices, while Microsoft is targeting it at three levels, i.e., WM 7 (a.k.a. Windows Mobile 7), Windows 7 and Microsoft Surface. All of them are still under research, and we will have to wait two to three years to see them in action. On the other hand, open source projects such as OpenTable are trying to build a similar Surface Computing environment at a much lower price. So, the final verdict is: we are going to see a lot of Surface Computing and multi-touch in the near future.

Accelerometer



This is another form of computer interface, one that converts hand or body motion into signals and sends them to the computing device. Such devices are already in the market, mainly in two forms. Some high-end mobile phones use an accelerometer to auto-rotate the screen between landscape and portrait when the user turns the device, or to track physical activity, counting your speed and number of steps while you jog with your phone.
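As an illustration of how raw motion becomes input, here is a rough TypeScript sketch that listens to the browser's standard devicemotion event and counts a "step" whenever the overall acceleration spikes. The threshold and minimum gap between steps are assumed values chosen only for the sketch; a real pedometer uses far more careful filtering, and some platforms also require explicit permission before delivering motion events.

```typescript
// Rough sketch: counting steps from accelerometer data via the standard
// devicemotion event. The threshold values below are assumptions.
let stepCount = 0;
let lastStepTime = 0;

const STEP_THRESHOLD = 12;   // m/s^2 (assumed); gravity alone is ~9.8
const MIN_STEP_GAP_MS = 300; // ignore peaks that arrive too close together

window.addEventListener("devicemotion", (event: DeviceMotionEvent) => {
  const a = event.accelerationIncludingGravity;
  if (!a || a.x === null || a.y === null || a.z === null) return;

  // Magnitude of the acceleration vector across all three axes.
  const magnitude = Math.sqrt(a.x * a.x + a.y * a.y + a.z * a.z);
  const now = Date.now();

  // Crude peak detector: a spike above the threshold, spaced out in time.
  if (magnitude > STEP_THRESHOLD && now - lastStepTime > MIN_STEP_GAP_MS) {
    stepCount++;
    lastStepTime = now;
    console.log(`Steps so far: ${stepCount}`);
  }
});
```

Screen auto-rotation works on the same principle, except that the quantity of interest is which axis gravity dominates rather than peaks over time.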


The other device making real use of the accelerometer is the Nintendo Wii, which has accelerometer-based game pads, also known as nunchucks. These small handheld devices connect to the console wirelessly, and when they're moved, they recreate the movement inside the game, making it much more realistic to play games like tennis, boxing, etc.

If you have seen the movie Minority Report, you can easily imagine where this technology is heading. We might very soon see interface devices like gloves that let you interact with the computer through hand gestures alone.

Neural Impulse Actuator



This might sound like science fiction, but it's already in the market as a packaged product, albeit at a very nascent stage with lots of enhancements still required. It comprises a headband that connects to a PC over USB and converts your faint brain signals into instructions for the computer, so you don't even need to move your hands or do anything else for your PC to understand your instructions. The device is currently used only for playing some games and needs calibration every time you use it, but it works pretty well and has helped reduce response times significantly in gameplay.

The concept was first developed in the 1980s for US Air Force pilots, to help them control the direction of their planes without occupying their hands, leaving those free to operate the weapons.


The future of this device depends entirely on how far our imagination can go. Imagine a time when we don't touch, move or even utter a single word, yet our machines understand what we want at the speed of thought and perform the task. We look forward to seeing that happen very soon.

Speech Recognition



This technology has been around for the last decade, and there are plenty of good products, but it has not been able to become mainstream in the Indian context. The primary reason for this is the Indian accent. Speech recognition software today is created primarily for users in the US, UK and Australia, so it recognizes those accents very easily, but when it comes to India and other Asian countries, it becomes difficult for an average PC user to adjust to the required accent. Hence, in India you will not see major penetration of voice recognition systems.

The other thing that makes it difficult to use is the requirement of a high-end microphone and, in most cases, a very quiet ambiance, neither of which is so easily available. However, since last year we have seen many products coming into the market with special support for the Indian accent. Besides, the falling price of noise-cancelling microphones is making it possible for more people to adopt such products.

A Brewing Revolution in Human-Computer Interfaces
Anbumani Subramanian, Research Scientist, HP Labs India

Human-to-human communication in the real world tends to use cues from more than one form of expression: speech, hand gestures, eye movements, facial expressions and many more. The interactions we have with people around us are rich in these myriad forms of expression that we consider natural and intuitive. However, a similar kind of communication between a human and a computer, where the computer can understand human speech and actions, still remains in the realm of fiction. Despair not. We are at the cusp of a brewing revolution which could usher us into a new era of interaction experiences with computers.

Human-computer interaction (HCI) is the discipline concerned with the design, evaluation and implementation of interactive computing systems for human use, and with the study of major phenomena surrounding them. The popular paradigm of human-computer interaction today is based on the use of windows, icons, menus, and pointing devices (WIMP). For instance, in WinXP, you can select an application by clicking on an icon using a mouse 'pointing device'. From the resultant application 'window', you can proceed to choose any 'menu' item in the application. This model of user-computer interface is so widely entrenched that a similar interface design has been adopted for other computing devices like mobile phones, PDAs, etc.

However, this popular paradigm is designed to accept and process only one form of input (mode) at any time, mostly from a keyboard or a mouse. Going ahead, the next generation of users may use various new sensory modes such as touch, hand gestures, speech and facial expressions to interact with a computer. Among this new set of input modalities, touch has recently entered the mainstream and is the principal form of input in computers like the HP TouchSmart and mobile phones like the Apple iPhone and HTC Touch. Using a touch-sensitive screen, the user can perform many simple tasks by directly manipulating on-screen widgets, as opposed to using a keyboard and mouse for interaction.



With the cost of cameras decreasing every day, it has also become inexpensive to include a camera with desktop and notebook computers and mobile phones. There are already notebooks available today that use face recognition technology with a camera for automated user authentication. While this technology may not work well in all situations, such use of the camera is a good example of what is about to come. Some companies are considering leveraging camera input as a platform to develop technologies that enable users to use intuitive hand gestures to interact with computer applications. With this technology, instead of using a joystick or keyboard, you can simply make the gesture of turning an imaginary steering wheel in front of the camera to drive a virtual car in a computer game.
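To make the steering-wheel idea concrete, here is a toy TypeScript sketch of camera-based gesture input: it grabs webcam frames with getUserMedia, measures how much the image changes in the left versus the right half, and treats the busier side as the direction to steer. The motion-energy heuristic is invented purely for illustration and is not HP's, or anyone's, actual gesture technology.

```typescript
// Toy sketch: inferring a "steer left / steer right" signal from webcam
// motion, using only standard browser APIs. The heuristic is illustrative;
// left/right are in image coordinates (a real system would handle mirroring).
async function startGestureSteering(): Promise<void> {
  const stream = await navigator.mediaDevices.getUserMedia({ video: true });
  const video = document.createElement("video");
  video.srcObject = stream;
  await video.play();

  const width = 160, height = 120; // downsampled frame for cheap processing
  const canvas = document.createElement("canvas");
  canvas.width = width;
  canvas.height = height;
  const ctx = canvas.getContext("2d")!;
  let previous: Uint8ClampedArray | null = null;

  setInterval(() => {
    ctx.drawImage(video, 0, 0, width, height);
    const current = ctx.getImageData(0, 0, width, height).data;

    if (previous) {
      let leftMotion = 0, rightMotion = 0;
      // Sum per-pixel brightness change in the left and right halves.
      for (let y = 0; y < height; y++) {
        for (let x = 0; x < width; x++) {
          const i = (y * width + x) * 4;
          const diff = Math.abs(current[i] - previous[i]); // red channel only
          if (x < width / 2) leftMotion += diff;
          else rightMotion += diff;
        }
      }
      // The busier half is taken as the steering direction, a crude proxy
      // for where the hands on the imaginary wheel are moving.
      if (leftMotion > rightMotion * 1.2) console.log("steer left");
      else if (rightMotion > leftMotion * 1.2) console.log("steer right");
    }
    previous = current;
  }, 100);
}

startGestureSteering();
```

Real gesture systems track hands explicitly rather than raw motion, but the pipeline is the same: camera frames in, a stream of interaction commands out.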





The camera can also help determine the state of the user from facial expressions and take necessary action, like offering help. These are some instances of camera-based technology that can enhance the interaction experience.

Each of these sensory modes by itself has the potential to support rich forms of interaction with computers. When used in combination, an even better user experience with multi-modal inputs becomes possible. For example, hand gestures may be used to complement or supplement speech input, as is often found in human-to-human interaction. Some researchers have explored the use of such multi-modal interaction models with computers and have shown encouraging results. With the increasingly widespread use of computers and mobile phones and the exponentially increasing number of sensors on these devices, the field of multi-modal HCI is gaining significant attention.
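As a small sketch of what combining modes can look like in code, the following TypeScript pairs the browser's Web Speech API (vendor-prefixed and not universally available) with an ordinary click handler: the user points at something by clicking and names the action by speaking, in the spirit of the classic "put that there" interaction. The toy command words and the window-level webkitSpeechRecognition lookup are assumptions made for illustration.

```typescript
// Sketch of two-mode input: a pointing gesture (here, a click) supplies the
// target, while speech supplies the verb. Web Speech API support varies, so
// the prefixed constructor lookup below is an assumption.
const SpeechRecognitionCtor =
  (window as any).SpeechRecognition || (window as any).webkitSpeechRecognition;

let lastPointedTarget: HTMLElement | null = null;

// Mode 1: pointing. Remember whatever the user last clicked on.
document.addEventListener("click", (e) => {
  lastPointedTarget = e.target as HTMLElement;
});

// Mode 2: speech. Interpret a spoken verb against the pointed-at target.
if (SpeechRecognitionCtor) {
  const recognizer = new SpeechRecognitionCtor();
  recognizer.continuous = true;
  recognizer.onresult = (event: any) => {
    const phrase: string = event.results[event.results.length - 1][0].transcript
      .trim()
      .toLowerCase();
    if (!lastPointedTarget) return;
    // Toy vocabulary, purely for illustration.
    if (phrase.includes("delete")) lastPointedTarget.remove();
    else if (phrase.includes("highlight")) lastPointedTarget.style.outline = "3px solid red";
  };
  recognizer.start();
}
```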
