
Processing at the Speed of Light


While traditional electronic computers can do only one thing at a time, optical computers can compute in parallel. They can be both thousands of times more powerful than traditional computers and tremendously energy-efficient. This is because light, or photons, is far better behaved than electrons. Light beams can run close to each other, and even criss-cross each other’s paths, with no cross-talk. And massive space reduction can be achieved by placing virtually unlimited optical paths in a limited space.


Combined with fast switching speeds, this ability of an optical computer to do tasks in parallel can result in staggering computational power.

Time Travel?

Researchers have recently found that under certain conditions, when a pulse of light is passed through a gas-filled chamber, the pulse emerges at the other end even before it has entered the chamber. Effectively, it travels through time and space–time travel, if you wish. This really isn’t as weird or impossible as it sounds; it’s just one of the offshoots of Einstein’s theory of relativity. Hence, with light reaching its destination even before it has really started its journey, the possibilities opened up for computer speed are obvious. This phenomenon doesn’t just offer possibilities for staggering processing speeds, but also for networking. We can already see the benefits of fiber-optic technology. Just imagine what could be achieved if light could travel from one end of the fiber to the other faster than it currently does.

Moreover, traveling at 186,000 miles per second–the fastest speed known to man–light can tremendously reduce inter- and intra-chip communication time, pushing up processor speeds. Issues like bus speeds could become a thing of the past, as light has the potential to provide all the bandwidth we need. Optics has already had a revolutionizing effect on networking through fiber optics.


This is just the tip of the iceberg. Electronic switching limits network speeds to about 50 Gbps, but what we need are terabit speeds (never satisfied, are we?). Frazier, Abdeldayem, and their teams at NASA have built all-optical logic gate circuits that switch at gigabit and terabit rates, besides nanosecond and picosecond all-optical switches.

The Fourier transform (which is extremely important in signal analysis) has a major role to play in optical computers. In optical systems, data is conveyed by modulating a cross-section of a light beam, usually with some sort of transparency; the transmitted beam may also undergo a phase change. To carry out an operation, say subtraction, two light beams having different amplitudes (hence representing different numbers) and a 180-degree phase difference are combined. The overlapping amplitudes partially cancel, leaving the difference as the final result.
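To make this concrete, here is a small numerical sketch (in Python, purely as an illustration of the arithmetic, not code from any actual optical system): two numbers are encoded as beam amplitudes, one beam is given a 180-degree phase shift, and superposing the two fields leaves their difference.

    import numpy as np

    # Illustration only: encode two numbers as optical field amplitudes
    a, b = 7.0, 3.0
    beam_a = a * np.exp(1j * 0)         # reference beam, zero phase
    beam_b = b * np.exp(1j * np.pi)     # second beam shifted by 180 degrees

    # Superposing the fields: the opposite-phase amplitudes cancel partially
    combined = beam_a + beam_b
    print(combined.real)                # 4.0, i.e. a - b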

The use of beam splitters and transparencies allows us to perform arithmetic operations on a continuous feed of data in parallel. The capability to carry out unlimited fan-in addition, sketched below, is one of the strongest points of optical computing. Optical computing can also be used to find solutions of partial differential equations where accuracy is not much of a concern and analytical solutions are not possible.
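A rough numerical analogy for that fan-in (again just an illustration, not optics code): every detector in an array sums all of the beams that land on it, and all of those sums form in a single pass.

    import numpy as np

    # Hypothetical set-up: 1,000 detectors, each collecting 100 input beams.
    # In an optical system all the sums would form simultaneously; numpy
    # merely mimics that one-step, parallel accumulation.
    inputs = np.random.rand(1000, 100)   # beam amplitudes arriving per detector
    sums = inputs.sum(axis=1)            # every detector adds its fan-in at once
    print(sums.shape)                    # (1000,)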

Advertisment

The heart of any modern electronic circuit is the transistor. To carry out processing using optics, we need a similar device. The transphasor is the optical equivalent of the transistor. Other important non-linear devices are the Fabry-Perot resonator, the interference-filter bistable device, the hybrid resonator-based bistable device and, of course, the SEED (self electro-optic effect device).

Logic gates are primarily implemented by three methods–laser logic gates, threshold logic and shadow casting. Of these, shadow casting is optically the simplest, requiring no lenses or non-linear devices.
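As a purely illustrative sketch (real shadow casting uses spatially coded input patterns and LED arrays, which are not modelled here), stacking two transparencies behaves like an AND gate, while letting light from either mask reach the screen and thresholding at the detector behaves like an OR gate:

    import numpy as np

    # Inputs encoded as tiny transparencies: 1 = transparent, 0 = opaque
    A = np.array([[1, 0],
                  [1, 1]])
    B = np.array([[1, 1],
                  [0, 1]])

    # Light passing through both masks in series survives only where both transmit
    AND = A * B

    # Light from either mask reaching the screen, thresholded at the detector
    OR = ((A + B) >= 1).astype(int)

    print(AND)   # [[1 0] [0 1]]
    print(OR)    # [[1 1] [1 1]]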

Optical computers also offer the possibility of dynamic reconfiguration if spatial light modulators control the optical interconnection pattern. The reconfiguration pattern can be changed in real time.


In optical computing, post-processing also plays a major role and requires a lot of computing power. The usual correlation process requires a Fourier transform, multiplication by a matched filter and an inverse Fourier transform. Research has found that greater flexibility and a better signal-to-noise ratio may be achieved if the first Fourier transform is performed optically and the second using a digital electronic computer.
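In digital form, that correlation chain looks roughly like the sketch below (a numpy illustration of the three steps; in the hybrid scheme described above, the first transform would be done optically and only the later steps electronically):

    import numpy as np

    def correlate(scene, template):
        """Matched-filter correlation: transform, multiply by the filter, invert."""
        S = np.fft.fft2(scene)                              # forward Fourier transform
        H = np.conj(np.fft.fft2(template, s=scene.shape))   # matched filter
        return np.real(np.fft.ifft2(S * H))                 # correlation plane

    scene = np.random.randn(64, 64)
    template = scene[10:18, 20:28]        # a patch we want to locate
    plane = correlate(scene, template)
    print(np.unravel_index(np.argmax(plane), plane.shape))  # peak should land near (10, 20)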

Speedy Silicon Transistors

Intel develops 0.02 micron transistor

Intel researchers have recently announced the development of a working prototype transistor that measures just 20 nanometers (0.02 micron), nine times smaller than today’s state-of-the-art 0.18 micron chip technology. It is so fast that it will help Intel build microprocessors running at 20 GHz and operating at just 1 V. The advanced transistors were developed and built at Intel’s research labs in Hillsboro, Oregon.

The technology to make chips with 0.02 micron features is likely to hit the market around 2007. It takes that long for chip and equipment makers to perfect all of the tools, from etching and deposition systems to high-speed lithography machines, to reliably produce ICs with such small features.

The 0.02 micron transistor will be the technological basis for Intel’s 0.045 micron process generation, which the company plans to have in production in approximately 2007. It can then scale the technology down over the next three to five years to the 0.02 micron level.

The implications are significant. If fully utilized, Intel will be able to cram up to 1 billion of the tiny transistors onto a chip the size of today’s P4 processor, which has ‘only’ 42 million transistors. The development means Intel will be able to continue following Moore’s law and double the number of transistors every 18 to 24 months until around 2011.
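A quick back-of-the-envelope check of that claim (illustrative arithmetic only): going from 42 million to 1 billion transistors takes about 4.6 doublings, which at 18 to 24 months per doubling works out to roughly seven to nine years from the 2001-era P4.

    import math

    doublings = math.log2(1_000_000_000 / 42_000_000)       # about 4.6
    years_fast, years_slow = doublings * 1.5, doublings * 2.0
    print(round(doublings, 1), round(years_fast, 1), round(years_slow, 1))
    # roughly 4.6 doublings, i.e. about 6.9 to 9.1 years from 2001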

According to Gerald Marcyk, director of the Components Research Lab in Intel’s Technology and Manufacturing Group, the transistor tests the limits of technology in some ways. For example, the silicon dioxide gate, a layer that prevents the metal on top from short-circuiting the silicon underneath when current is passed through, is only one silicon dioxide (SiO2) molecule thick. “You can’t really scale much lower than three atoms (one silicon and two oxygen atoms) thick. We’re going to have to invent a new kind of material to replace the silicon dioxide.”

IBM stretches atoms to make them work faster

It may seem a stretch of the imagination, but IBM researchers say they have actually been able to increase IC processing speeds by up to 35 percent by ‘stretching’ silicon atoms during the manufacturing process.

IBM announced the breakthroughs at the Symposium on VLSI Technology in Kyoto, Japan. In developing the technique, IBM said researchers were able to take advantage of the natural tendency of atoms inside a compound to align themselves with other atoms. They deposited silicon atoms on a chip substrate in which atoms are spaced further apart than normal. The atoms in the new layer subsequently also spaced themselves further apart.

In the stretched configuration, electrons passing through the silicon encounter less resistance and move up to 70 percent faster, which can lead to chips that are 35 percent faster, without having to shrink the size of transistors. Power consumption and power dissipation are also reduced.

Bijan Davari, vice president of semiconductor development at IBM Microelectronics, said the technology will be available in commercial products by 2003.

Paul Swart runs the Silicon Valley News Service (SVNS)


Nerd Talk

This is where I get to torture you with boring details of physics. Read only if you have an insatiable hunger for knowledge.



Fermions These are what electrons are. Fermions are particles that repel one another, a characteristic that is a pain in the neck for chip designers.

Bosons These are particles that don’t interact with each other. (Introverts, really.) Photons are prime examples of bosons. Therefore, light beams can criss-cross in free space without distorting or disturbing the information contained in other beams of light.

Self Electrooptic Effect Device (SEED) The building block of current optical computers. It is based on an MQW (multiple quantum well) structure placed inside a PIN photodiode detector. It transmits low inputs and blocks high inputs. SEEDs are preferred candidates for optical logic gates because of their excellent properties: small size, low power requirements, quick response and easily adjustable resonance.

Einstein’s theory of relativity This extremely well-tested theory does away with notions of absolute time and space. What this means is that the time interval between two events measured by two different observers may not necessarily tally. The only thing fixed is the speed of light. This may seem weird at first glance, because we don’t get to see the effects of relativity in daily life and hence find it hard to relate to. Anyhow, in short, time doesn’t necessarily pass at the same rate for everyone. (Now didn’t that really clear things up?)

Easier said than done

So what’s causing the delay? Well, the chief hurdle is that we are yet to figure out an efficient process for miniaturizing optical technology. Second, achieving maximum optical processing speeds requires analog computing, which by its very nature is extremely unreliable. Therefore, what we have today are digital, electro-optical computing devices that use media such as lithium niobate. These computers sacrifice speed for reliability. Again, while free-space interconnections are possible, and even useful to a certain extent, on a smaller scale (integrated circuits) diffraction effects necessitate waveguides. Currently, optical fibers are widely used for this purpose. As a result, the current generation of optical computers resembles a bowl of optical-fiber spaghetti and is massive in size.

Thin-film materials offer another possibility. Current research is focusing on organic molecules like phthalocyanine and polydiacetylene polymers, which are more light-sensitive than inorganic molecules. Organics can also perform switching, signal processing and frequency doubling using less power than inorganic materials. But we can’t currently do without inorganics like silicon; these, when used with organic materials, let us use both photons and electrons in today’s hybrid systems. However, the lack of efficient non-linear optical materials that can work at low power levels hinders progress. As a result, current all-optical components require high levels of laser power to function as required. Obviously, this isn’t very energy-efficient. Another challenge faced by designers is that the parallel architecture of optical computers alters the entire software logic. All of this makes optical computers terribly expensive.

Current research

In present electro-optical machines, optical systems are special-purpose sub-units of an electronic unit. An example is feature extraction from an image using a Fourier transform.
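A rough digital analogue of that sub-unit (a sketch only; the fourier_features helper below is hypothetical, not taken from any particular machine): the low-frequency Fourier magnitudes of an image are used as a crude, shift-invariant feature vector.

    import numpy as np

    def fourier_features(image, keep=16):
        """Crude, shift-invariant features: a block of low-frequency Fourier magnitudes."""
        spectrum = np.abs(np.fft.fftshift(np.fft.fft2(image)))   # centred magnitude spectrum
        cy, cx = spectrum.shape[0] // 2, spectrum.shape[1] // 2
        patch = spectrum[cy - keep // 2:cy + keep // 2, cx - keep // 2:cx + keep // 2]
        return patch.ravel() / patch.sum()                       # normalised feature vector

    image = np.random.rand(64, 64)
    print(fourier_features(image).shape)                         # (256,)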


Most self-respecting universities in the US boast of optical research labs. Multinational projects like the Joint Opto-Electronic Project (JOP) were started as long ago as 1992. In 1990, AT&T Bell Labs unveiled an experimental machine consisting of four arrays of 32 S-SEEDs. This calculates by sending light beams from laser diodes through the SEEDs, which either reflect or absorb light depending on the logic operation to be performed. In 1993, the University of Colorado at Boulder developed the first fully optical computer prototype that could store and execute computer programs.

Research into optical materials has led researchers from the University of Southern California to develop polymers with a switching frequency of 60 GHz. Another group at Brown University and IBM’s Almaden research center has achieved 100 ps switching speeds. European researchers have demonstrated free-space communication between ICs exceeding 1 Tbps (terabit per second). And of course, the guys at NASA are at it.

Conclusion

What really matters is how fast we can get this technology from the drawing board into the real world. In the midst of this technological revolution, I have a small question. With terms like Silicon Valley, Cyber Valley, and Cyber City being flogged to death, is anybody interested in naming some place Optical Valley?

Adheraj Singh
