
Living With the Singularity

PCQ Bureau

The 'technological singularity' is straight out of sci-fi.


That's a point in the future when computing power is expected to overtake human intelligence. It's an 'event horizon' beyond which we cannot predict the future at all, because understanding what lies past it is beyond human intelligence.

Sci-fi has a good record of turning into reality. Vernor Vinge, the sci-fi author who coined the term, is convinced it's going to happen in the next decade, and he's been convincing PCQuest too, in this issue.


Super-human intelligence could crack problems that have plagued humanity for decades or centuries: a cure for cancer, interstellar space travel.

Or it could lead to Terminator scenarios, “ending the human era”, as Vinge calls it. Things may not turn out that way, however, and for alternative futures, where else could I turn but to the master of science fiction?

Isaac Asimov suggests a couple of them. The first deals squarely with the creation of robots with superhuman capabilities. The robots can create and maintain other robots, and do not need humans to survive...but for a key part of their software. Always embedded inside their positronic brains are the Three Laws of Robotics, which ensure that they do not turn against their human creators.

The Three Laws are simple: a robot may not injure a human (or allow one to come to harm); a robot must obey human orders, except where there is conflict with the first law; and it must protect itself, except where it is in conflict with the first two laws.


The analogy is clear. Technology development will have to include safeguards that keep the command structure intact: humans, always, at the top. (Dilbert might say: the managers ain't the brightest guys. But they're on top.)

Asimov's other alternative path is the basis of his Foundation trilogy. The fictional Hari Seldon uses psychohistory (combining history, sociology, and statistics) to predict the future behavior of very large populations. A single person is unpredictable, but the larger the group, the greater its predictability.


And so, the Galactic Empire encounters a singularity: the man they called the Mule, born with mental powers, becomes a super-conqueror. He could disrupt the Seldon Plan, which had assumed that no single person could alter galactic socio-historical trends.

But Hari Seldon had foreseen the likely disruption from 'mentalics'. Though he could not predict its exact nature, he planned for it by creating a Second Foundation with mental powers of its own. By analogy, we too 'know' the 'problem' in advance. That gives us time.

Superhuman intelligence, then, is not so daunting. After all, today's computers are superhuman number crunchers. They can beat humans in modeling, visualization, and so many other things. They have created a discontinuity, something whose impact could not have been predicted a few decades ago.


Such 'mini-singularities', like the computer, are not rare: events that alter the future, creating an event horizon beyond which it is difficult to visualize what lies ahead. Disruptive change does that.

The Internet is a great example. Beyond the odd sci-fi story about hyperconnected worlds, the Net was not predicted; nor were the changes it created.

Then there are the 'micro-singularities'. Twitter, for instance. Maybe you could have predicted micro-blogging ten years ago. But what you could not have foreseen was its profound impact, helping trigger revolutions and bring down governments.

And so, the technological singularity is just an extreme version of disruptive change. We'll live with it, and use it well.
