Faster Memory

PCQ Bureau
RAM (Random Access Memory), or the memory used inside PCs, has seen continuous development in the past few years. Initially there was EDO RAM (Extended Data Output RAM), which was followed by SDRAM (Synchronous Dynamic RAM) that reigned supreme on motherboards for a long time. However, technologies change overnight in a cruel technological world. One technology now threatening the throne is DDR SDRAM (Double Data Rate SDRAM), which promises double the throughput of SDRAM. To understand how this happens, we'll first have to understand how EDO RAM and SDRAM worked.

EDO RAM and SDRAM

EDO RAM was asynchronous in nature, meaning it couldn't transfer data non-stop. It was also pretty slow, making the CPU wait for data. Transfer rates in EDO RAM were partly dependent on its access time (measured in nanoseconds), so a lower access time obviously resulted in faster transfer rates. Since this access time couldn't be reduced indefinitely, EDO RAM had its limitations. It was, therefore, replaced with a newer technology called SDRAM. This memory, as the name suggests, was synchronous, that is, it tied its transfer rates to the system clock (also called the front-side bus speed on a motherboard). This meant good news for the CPU, as it no longer had to wait for data and got consistent throughput.

The convention for identifying SDRAM speeds was the prefix 'PC' followed by the system bus speed. So PC100 SDRAM meant memory running at a 100 MHz clock speed, PC133 meant 133 MHz, and so on. As motherboard bus speeds increased, so did SDRAM clock speeds: first 66 MHz, then 100 MHz, and finally 133 MHz. Fortunately, SDRAM was backward compatible, meaning you could place a 133 MHz SDRAM module in a computer with a 100 MHz system bus. The memory would, of course, then work at 100 MHz.

This increase in clock speeds also increased SDRAM's data transfer rate, also known as its throughput. While it was 0.8 GB/sec for 100 MHz SDRAM, it went up to 1.064 GB/sec with 133 MHz. So logically, you could keep increasing throughput just by increasing the clock speed, right? Well, had things been that simple, clock speeds would've crossed gigahertz frequencies, just like microprocessors. The bad news is that, in practice, about 133 MHz is as far as system clock speeds can go. To give a simple explanation, consider this. When the motherboard clock generates a signal, it has to travel across the entire board. The time between when this clock signal is sent and when it is received must be well below the time between successive signals. Otherwise, the clock would send the next signal even before the first was received, resulting in data jams on the board, something like a crossing where the traffic lights don't work. Thanks to the wonders of technology, a workaround to the problem has emerged. Enter DDR SDRAM.
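The numbers above follow from simple arithmetic: peak throughput is the clock speed multiplied by the width of the memory bus. Here's a minimal sketch (Python used purely for illustration; the 8-byte figure assumes the 64-bit data path of a standard DIMM), along with the time between clock signals that makes the 133 MHz ceiling plausible:

```python
BUS_WIDTH_BYTES = 8  # 64-bit data path on a standard DIMM (assumption stated above)

def sdram_throughput_gb_s(clock_mhz):
    """Peak throughput in GB/sec for plain SDRAM: one transfer per clock cycle."""
    return clock_mhz * 1e6 * BUS_WIDTH_BYTES / 1e9

def clock_period_ns(clock_mhz):
    """Time between successive clock signals, in nanoseconds."""
    return 1e3 / clock_mhz

print(sdram_throughput_gb_s(100))  # 0.8 GB/sec for PC100
print(sdram_throughput_gb_s(133))  # 1.064 GB/sec for PC133
print(clock_period_ns(133))        # roughly 7.5 ns between signals
```

At 133 MHz the clock signal must cross the board in well under 7.5 ns, which is why pushing the system clock much higher runs into the traffic-jam problem described above.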

How DDR works

DDR doubles the throughput without increasing the clock frequency. The maximum clock frequency remains at 133 MHz; the technology instead changes the way data is transferred.

Let's explore this concept a little more. The clock frequency can be represented as a square wave, which has a rising edge, a high level, a falling edge, and a low level. In conventional SDRAM, data is transferred only during the rising edge of the clock cycle. Since the rising edge gets all the work, the falling edge sits there twiddling its thumbs. So memory manufacturers decided to give the falling edge some work to do, and gave it a 'bit' of data to transfer as well. This results in two transfers per clock cycle, essentially doubling the transfer rate.

Data transfer in SDRAM happens on the rising edge of the clock pulse

Data transfer in DDR SDRAM happens on the rising and falling edges of the clock pulse

This way, if you take 100 MHz SDRAM and apply DDR technology to it, the throughput increases to 1.6 GB/sec. Similarly, if you apply DDR to 133 MHz SDRAM, the throughput goes up to 2.1 GB/sec. It is said that this technology can be applied at a fairly low cost, making the memory affordable. However, we're yet to see it make a mass appearance in the market.
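The doubling shows up as one extra factor in the throughput arithmetic: the number of transfers per clock cycle, which is 1 for SDRAM (rising edge only) and 2 for DDR (both edges). A quick sketch (Python for illustration; the 8-byte bus width of a standard DIMM is an assumption):

```python
BUS_WIDTH_BYTES = 8  # 64-bit data path on both SDRAM and DDR DIMMs (assumption)

def throughput_gb_s(clock_mhz, transfers_per_clock):
    """Peak throughput in GB/sec. SDRAM transfers once per cycle
    (rising edge only); DDR twice (rising and falling edges)."""
    return clock_mhz * 1e6 * BUS_WIDTH_BYTES * transfers_per_clock / 1e9

print(throughput_gb_s(100, 1))  # 0.8 GB/sec  -- PC100 SDRAM
print(throughput_gb_s(133, 1))  # 1.064 GB/sec -- PC133 SDRAM
print(throughput_gb_s(100, 2))  # 1.6 GB/sec  -- DDR on a 100 MHz clock
print(throughput_gb_s(133, 2))  # 2.128 GB/sec -- DDR on a 133 MHz clock (the "2.1" figure)
```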

DDR vs SDRAM

We won't discuss performance differences here, but the actual physical ones. DDR memory also fits into DIMM (Dual In-line Memory Module) slots, although the pin count is different: whereas SDRAM has 168 pins, DDR has 184. So the bad news is that you can't buy DDR memory and fit it into your existing motherboard with SDRAM slots.

Another benefit of DDR is reduced power consumption. Whereas SDRAM operates at 3.3 volts, DDR needs just 2.5 volts. Lower power requirements can help increase battery backup time in notebooks.

Motherboard chipsets also have to be designed to support DDR SDRAM. Several manufacturers provide this support; two prominent examples are the Micron Samurai and AMD 760 chipsets.

Who needs so much bandwidth?

So, will DDR SDRAM take over, given its whopping 2.1 GB/sec throughput? Not really. SDRAM is likely to stay around for quite some time, because a normal user wouldn't really need as much bandwidth as DDR memory provides. That's why DDR memory will first find itself creeping into high-end graphics workstations and high-end server systems with multiple CPUs. Take, for example, the latest Samurai chipset from Micron, which uses DDR to support multiple chipsets on the same motherboard. This lets several users share the same system while giving each of them dedicated devices and memory space.

Anil Chopra