January 4, 1999

How large a piece of data can you handle at one go? Ask
that of a modern day microprocessor, and you’ll get 32 or 64 bits as the answer.

Your operating system gets work done by the CPU by passing
data to it and receiving data in return. When your OS talks to your CPU, it sends data in
16-, 32-, or 64-bit wide chunks. OSs are designed around a specific class of CPU, capable
of handling a specific size of data. Thus an OS's capabilities are to a large extent
defined by this size–which is why, for instance, we speak of 32-bit operating systems.

Protected memory access

16-bit processors used a simple
addressing scheme called real-mode addressing. They could access about 1 MB of RAM through a
small trick: forming a 20-bit address from two 16-bit registers, by left-shifting one
of them (the segment) by 4 bits and adding the other (the offset).
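The real-mode trick above can be sketched in a few lines (Python here purely for illustration; on a real 8086 the shift-and-add happens in hardware):

```python
# Real-mode addressing on the 8086: a 20-bit physical address is formed
# from two 16-bit registers by shifting the segment left 4 bits and
# adding the offset. The mask keeps the result to 20 bits (1 MB),
# matching the 8086's wraparound behavior.
def real_mode_address(segment: int, offset: int) -> int:
    return ((segment << 4) + offset) & 0xFFFFF

# Segment 0x0040, offset 0x0000 -> physical address 0x400
print(hex(real_mode_address(0x0040, 0x0000)))  # 0x400
```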

In 32-bit computing, the registers are 32 bits wide.
Computing changed because of the way 32-bit CPUs could address memory. The 32-bit
processor could access memory in an additional mode called protected mode, and exit it
without resetting, unlike the 286, where leaving protected mode entailed resetting
the CPU and losing current data. The 386, a 32-bit CPU, also introduced memory paging, which was
significantly better than plain segment addressing.

These sizes are important because they define almost
all the capabilities of the OS–the maximum amount of memory that can be used, the
maximum size of storage disks, and even the speed at which data is
processed. As CPUs and OSs evolved, they grew from 16 bits to 32 bits and now to 64 bits.

This width of data is actually the size of the registers in
the CPU. Registers are small stores inside the CPU that hold instructions and the data
being worked on. Of all memory devices in a computer, the register is the fastest. And
like all good things, registers come in limited numbers.

The CPU accesses data in memory by telling special
registers where to get the next chunk of data. The maximum amount of memory that can be
accessed through a register depends on its size. For instance, a 16-bit register can
address 2^16 (65,536) locations of memory. The 8086, 8088, and 80286 were 16-bit processors. The
80386DX was the first Intel chip with 32-bit registers. The Merced, which is yet to be released, is
a 64-bit chip from Intel.
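The arithmetic behind these limits is simple: an n-bit register can form 2^n distinct addresses. A quick sketch:

```python
# Number of memory locations an n-bit register can address: 2**n.
def addressable_locations(bits: int) -> int:
    return 2 ** bits

print(addressable_locations(16))  # 65536 (the 2^16 of a 16-bit register)
print(addressable_locations(32))  # 4294967296, i.e. 4 GB
print(addressable_locations(64))  # 18446744073709551616
```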

Every change in register size has meant added functionality
from the operating system. When the register size changed from 16- to 32-bit, memory
protection was introduced. This was possible as 32-bit processors provided another mode of
CPU operation–the protected/flat addressing mode–in which memory areas could be
demarcated.

Bridging that bit

When you change from a 16-bit computer (and operating
system) to a 32-bit one, you can’t leave behind your apps and data, and start afresh.
So programs written for the earlier 16-bit systems have to run on the 32-bit systems. By a
process called "thunking", the operating system translates between the two worlds: a call
or pointer from 16-bit code is converted so that it can work in the 32-bit address space.
The reverse is also possible, converting a 32-bit call down into a form 16-bit code can
handle. In Win 95, 32-bit code sometimes had to be thunked down to 16-bit code, since many
of the files (DLLs, etc.) that Win 95 used were the original 16-bit versions from Win 3.x
and DOS.
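The bit manipulation underlying such conversions, splitting a 32-bit quantity into two 16-bit halves and rejoining them, can be sketched as follows. This is only an illustration of the arithmetic, not actual Windows thunk code, and the function names are made up for the example:

```python
# Split a 32-bit value into two 16-bit halves (high word, low word).
def split32(value: int) -> tuple[int, int]:
    return (value >> 16) & 0xFFFF, value & 0xFFFF

# Rejoin two 16-bit halves into a 32-bit value.
def join16(high: int, low: int) -> int:
    return ((high & 0xFFFF) << 16) | (low & 0xFFFF)

hi, lo = split32(0x00401000)
print(hex(hi), hex(lo))          # 0x40 0x1000
print(hex(join16(hi, lo)))       # 0x401000 -- round-trips intact
```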

The wider the merrier

The more bits the processor handles in one go,
the faster your programs run. Thunking, however, makes programs run a bit slower, as time
is spent converting instructions between the 16- and 32-bit worlds.

A 16-bit instruction carries less information than a
32-bit one. To get the same amount of work done, a 16-bit program needs
more steps than a 32-bit program. Additionally, a 16-bit instruction accesses smaller
areas of memory at a time than a 32-bit instruction, making the CPU run around more
often. So, if you run a 16-bit application on a 32-bit OS/CPU combination, the processor
is not only under-utilized but also overworked. Thus a Win 3.1 app running on Win 95 will
be slower than when running on Win 3.1. 16-bit apps also do not multitask well, unlike
32-bit apps, for which memory protection is a standard feature.

More bits

Operating systems and chips have been consistently moving
toward 64-bit computing. 64-bit flavors of Unix have been around for some time (DEC-OSF,
HP-UX, IRIX). Linux is also available for the brave new 64-bit world. Most new 64-bit
chips are superscalar, that is, they can issue multiple instructions to multiple
execution units in a single clock cycle. Intel's 64-bit chip, the Merced, will use what
is called a Very Long Instruction Word (VLIW). VLIW extends the RISC concept by bundling
multiple RISC-style instructions into one long word, which can feed several execution
units at one go.

Another advantage of such a large register size is increased
floating-point accuracy. 64-bit apps can also potentially address a huge
amount of memory. Compilers are likewise being adapted to expose parallelism in code, so
as to utilize the multiple execution units of the new superscalar 64-bit chips. File sizes
in 64-bit computing also go way beyond the 2 GB possible in the 32-bit world.
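The 2 GB figure falls out of a signed 32-bit file offset; a quick check of the arithmetic (the variable names here are illustrative):

```python
# A signed 32-bit file offset tops out at 2**31 - 1 bytes, just under 2 GB.
max_offset_32 = 2 ** 31 - 1
print(max_offset_32)  # 2147483647

# A signed 64-bit offset raises the ceiling to 2**63 - 1 bytes,
# on the order of eight billion gigabytes.
max_offset_64 = 2 ** 63 - 1
print(max_offset_64)  # 9223372036854775807
```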

Incidentally, 64 bits is not double of 32 bits. We are
dealing with exponential numbers. The 64-bit address space is about four billion times
larger than the 32-bit one.
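The "four billion" figure is just the ratio of the two address spaces:

```python
# 64 bits is not "double" 32 bits: the address space grows by 2**32.
ratio = 2 ** 64 // 2 ** 32
print(ratio)  # 4294967296 -- about four billion times larger
```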

So where would one use 64-bit chips and operating systems?
In potentially large operations with searches performed on active memory, such as data
warehousing, and similar number crunching operations. Also, in high-end multimedia,
simulations and CAD.

Beyond 64

Computing started off with the 8-bit CP/M OS. Since then we
have progressed through 16-bit to 32-bit computing. Today, the first steps to 64-bit
computing are being made. What next? Are we going to have 128-bit processors and operating
systems? Would anyone really want to have operating systems of such gargantuan
capabilities?

Nobody is talking about them yet. But don’t think it
won’t happen. Was it not Bill Gates who said that no one would ever need more than
640 kB of memory?

 
