
Interfaces of the Third Kind

PCQ Bureau

If we look at the graph of advancement in computer technology, we always see an upward climb. Processing power, memory and storage media have all seen tremendous changes over time, but one aspect hasn't changed much: human interaction with computers, which sadly is still bound to the keyboard and mouse. Interface technologies such as speech, touch and gestures have come along, but none has been able to replace these two till date. User interaction is also no longer limited to computers; it extends to an array of multimedia devices such as mobile phones and music players. Touch interfaces have evolved a lot over the last decade, yet they haven't found the mass adoption needed to replace the keyboard and mouse. On small devices like smartphones, PDAs and MP3 players, however, touch and multi-touch interfaces have become popular and are readily adopted by users. The Apple iPhone is an example of how a multi-touch interface has changed the way we interact with devices. A few years back, when Microsoft introduced the Surface Computing concept, it brought a paradigm shift in the way users interacted with computers. As computers continue to evolve, user interfaces are bound to evolve with them. This year too, at Microsoft Research TechFest 2009, quite a few innovative user interfaces were showcased that are set to mark the beginning of a new era of user interfaces. Let's take a look at some of the innovations.


Direct Hit!

Applies To: Everyone



USP: Learn how human-computer interaction will evolve in the future



Primary Link: None


Google Keywords: SecondLight, Back of device touch, Omni-directional projection, write in air


SecondLight



We know about Microsoft's Surface Computing. SecondLight takes it a step further and adds another realm of interface to it. The system looks like a regular Surface computer, but there is a second layer of information that is projected yet not displayed on the surface. SecondLight projects images or information and detects gestures in the air above the display, which means the user interface is no longer bound to the display surface. The technology uses the same setup as Surface, where an IR camera records finger and hand placements on the screen and the display is projected onto the surface by a projector mounted below the Surface's screen. SecondLight takes this a step further and adds a second projector below the screen. The two projectors throw their respective images towards the screen one at a time, switching about 60 times per second. The switching between the two projectors is so quick that it is imperceptible to the human eye. While the image from the first projector appears on the Surface's screen, the image from the second projector can be caught mid-air on translucent screens made of plastic or even paper.
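
To get a feel for how this time-multiplexing works, here is a minimal Python sketch of the idea. Microsoft has not published code for SecondLight, so the Projector class and the frame-swapping loop below are purely our own illustration of two projectors taking turns at roughly 60 switches per second.

import time

class Projector:
    """Illustrative stand-in for a projector that can be handed a frame."""
    def __init__(self, name):
        self.name = name

    def show(self, frame):
        # Real hardware would push the frame out through the lens;
        # here we only record which image currently hits the screen.
        self.current_frame = frame

def run_time_multiplexed(surface_frame, in_air_frame, switches_per_second=60):
    """Alternate two projectors so fast that the eye sees both images at once:
    one on the tabletop, the other catchable on a translucent sheet above it."""
    surface_projector = Projector("surface")
    in_air_projector = Projector("in-air")
    slot = 1.0 / switches_per_second  # duration of each projector's turn

    while True:
        surface_projector.show(surface_frame)   # tabletop image gets its slot
        time.sleep(slot)
        in_air_projector.show(in_air_frame)     # mid-air image gets the next slot
        time.sleep(slot)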


For instance, the display surface could show the night sky, but as you hover a translucent sheet of paper over the screen, you would see the various constellations take shape along with information about them. This is the additional display projected beyond the display surface onto the paper by the second projector. The system supports multiple users: while showing one set of information on the surface computer's screen, it can project different information to different users above the screen.

Omni-directional Projection



You might have used Microsoft's WorldWide Telescope (WWT) to see distant stars and galaxies, but imagine if you could control the galaxy. Impossible? With the Omni-Directional Projector you can come close to achieving just that. As the name suggests, the projector displays data on the ceiling and the walls, giving a 360-degree view all around the user. In addition, it allows the user to manipulate this 360-degree view of data through hand gestures and voice commands. So, if you use WWT inside a hemispherical dome, you could stand in the middle of the universe and, with hand gestures, zoom in and out of galaxies and stars.


The technology uses a projector pointing upwards in the center of the room to project data all around. IR camera sensors placed around the projector's lens detect the hand gestures that the user makes, thus providing an interactive virtual display of information all around the user. The system could be used in medical reviewing, where a doctor views a patient's MRI scans and other digital images of body parts in an immersive manner to help plan a surgical operation.
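
As a rough illustration of how a detected gesture could drive that 360-degree view, the small Python function below maps a change in hand span (as the IR cameras might report it) to a zoom level. The function name, parameters and limits are our own assumptions, not part of Microsoft's system.

def gesture_to_zoom(prev_hand_span, curr_hand_span, zoom, sensitivity=0.01):
    """Map a change in hand span (in pixels, as seen by the IR cameras)
    to a new zoom level for the projected view of the sky."""
    delta = curr_hand_span - prev_hand_span  # hands moving apart -> positive
    zoom *= (1.0 + sensitivity * delta)      # widen to zoom in, narrow to zoom out
    return max(0.1, min(zoom, 100.0))        # keep the zoom within a sane range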

Write in Air



Inputting characters into a device such as an Xbox connected to a television, with no keyboard or mouse attached, is a painful task. Write-in-Air is a proof-of-concept technology in which a web-cam is attached to the television or display screen, and the user writes the intended character in the air using a finger or a bright object. The gestures are captured by the camera and fed to a handwriting recognizer, which lists its recognition results on the screen for final selection by the user. The demo shown at TechFest had a vocabulary covering English letters and numerals as well as Chinese, Japanese and Korean characters.
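
A toy version of that pipeline, tracking a bright point across webcam frames to build a stroke and handing it to a recognizer, could look like the Python sketch below. The frame format, brightness threshold and recognizer interface are all assumptions made for illustration.

def track_bright_point(frames, brightness_threshold=200):
    """Follow the brightest pixel above a threshold through a sequence of
    greyscale frames (each a 2-D list of intensities) and return the stroke
    as a list of (x, y) points."""
    stroke = []
    for frame in frames:
        best = None
        for y, row in enumerate(frame):
            for x, value in enumerate(row):
                if value >= brightness_threshold and (best is None or value > best[0]):
                    best = (value, x, y)
        if best is not None:
            stroke.append((best[1], best[2]))
    return stroke

def list_candidates(stroke, recognizer):
    """Hand the air-drawn stroke to a handwriting recognizer and return its
    ranked guesses, which the UI would show for the user's final selection."""
    return recognizer.candidates(stroke)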


Back of Device Touch Interface



With touch-screen devices we face the problem that our finger sometimes conceals the display, so we inadvertently tap the wrong item or trigger the wrong action. And as devices become smaller, with much of their area consumed by the display itself, touch interaction becomes harder still. To tackle this, the touch interface has been put on the back of the device. Since the device is hand-held, you use the fingers resting behind the display area to provide input and interact with the device. This gives a complete, unobstructed view of the display, while from underneath the device you can place your fingers at exactly the right points for input. With the user providing input from the back of the device, one can imagine devices as small as a button, with a display on one side and a touch surface on the back.
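
The core of such an interface is a simple coordinate mapping: a touch sensed on the rear panel has to be mirrored left-to-right before it is shown as a pointer on the front display. The Python sketch below shows one plausible mapping; the sensor dimensions and the mirroring convention are our assumptions.

def back_touch_to_screen(x_back, y_back, sensor_w, sensor_h, screen_w, screen_h):
    """Map a touch on the rear sensor to a point on the front display.
    The horizontal axis is flipped so a finger moving to the user's right
    behind the device also moves the pointer right as seen from the front."""
    x_screen = (1.0 - x_back / sensor_w) * screen_w  # mirror left/right
    y_screen = (y_back / sensor_h) * screen_h        # vertical axis maps directly
    return x_screen, y_screen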
