What's the technological singularity?

Ray Kurzweil, inventor and computer engineer, presents a talk on the Singularity at the RSA Conference 2007.
Gabriel Bouys/AFP/Getty Images

It's a common theme in science fiction -- mankind struggles to survive in a dystopian futuristic society. Scientists discover too late that their machines are too powerful to control. Computers and robots force the human race into servitude. But this popular plot might not belong within the realm of fiction forever. Discussed by philosophers, computer scientists and women named Sarah Connor, this idea seems to gain more credence every year.

Could machines replace humans as the dominant force on the planet? Some might argue that we've already reached that point. After all, computers allow us to communicate with each other, keep track of complex systems like global markets and even control the world's most dangerous weapons. On top of that, robots have made automation a reality for jobs ranging from building automobiles to constructing computer chips.


But right now, these machines have to answer to humans. They lack the ability to make decisions outside of their programming or use intuition. Without self-awareness and the ability to extrapolate based on available information, machines remain tools.

How long will this last? Are we headed for a future in which machines gain a form of consciousness? If they do, what happens to us? Will we enter a future in which computers and robots do all the work and we enjoy the fruits of their labor? Will we be converted into inefficient batteries a la "The Matrix?" Or will machines exterminate the human race from the face of the Earth?

To the average person, these questions may seem outlandish. But some people think we need to take questions like these into consideration now. One such person is Vernor Vinge, a former professor of mathematics at San Diego State University. Vinge proposes that mankind is heading toward an irrevocable destiny in which we will evolve beyond our understanding through the use of technology. He calls it the singularity.

What is the singularity, and how might it come about?


The Singularity

Robots like this might look cute, but could they be plotting your downfall?
Yoshikazu Tsuno/AFP/Getty Images

Vernor Vinge proposes an interesting -- and potentially terrifying -- prediction in his essay titled "The Coming Technological Singularity: How to Survive in the Post-Human Era." He asserts that mankind will develop a superhuman intelligence before 2030. The essay specifies four ways in which this could happen:

  • Scientists could develop advancements in artificial intelligence (AI)
  • Computer networks might somehow become self-aware
  • Computer/human interfaces become so advanced that humans essentially evolve into a new species
  • Biological science advancements allow humans to physically engineer human intelligence

Out of those four possibilities, the first three could lead to machines taking over. While Vinge addresses all the possibilities in his essay, he spends the most time discussing the first one. Let's take a look at his theory.


Computer technology advances at a faster rate than many other technologies. Computers tend to double in processing power every two years or so. This trend is related to Moore's Law, which observes that the number of transistors that can fit on an integrated circuit doubles roughly every 18 months to two years. Vinge says that at this rate, it's only a matter of time before humans build a machine that can "think" like a human.
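
To get a feel for what that doubling rate implies, here is a minimal Python sketch. The starting transistor count (roughly an early-1970s microprocessor) and the 18-month doubling period are illustrative assumptions, not figures from Vinge's essay.

```python
# Toy illustration of exponential growth under a Moore's Law-style doubling.
# Assumptions (not from Vinge's essay): a notional starting count of 2,300
# transistors -- roughly the Intel 4004 of 1971 -- and an 18-month doubling period.

def projected_transistors(start_count: float, years: float,
                          doubling_period_years: float = 1.5) -> float:
    """Project a transistor count after `years` of steady doubling."""
    doublings = years / doubling_period_years
    return start_count * 2 ** doublings


if __name__ == "__main__":
    for years in (10, 20, 30, 40):
        count = projected_transistors(2_300, years)
        print(f"After {years:2d} years: ~{count:,.0f} transistors")
```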

But hardware is only part of the equation. Before artificial intelligence becomes a reality, someone will have to develop software that will allow a machine to analyze data, make decisions and act autonomously. If that happens, we can expect to see machines begin to design and build even better machines. These new machines could build faster, more powerful models.

Technological advances would move at a blistering pace. Machines would know how to improve themselves. Humans would become obsolete in the computer world. We would have created a superhuman intelligence. Advances would come faster than we could recognize them. In short, we would reach the singularity.
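
Vinge doesn't put numbers on this runaway feedback loop, but a toy model makes the compounding easy to see. Every figure below -- the gain per generation and the "threshold" -- is an arbitrary assumption used purely for illustration.

```python
# Toy model of a self-improvement feedback loop. Vinge's essay gives no
# equations; the growth rule and the threshold below are arbitrary assumptions
# chosen only to show how small, repeated gains compound.

def generations_to_threshold(capability: float = 1.0,
                             gain_per_generation: float = 0.5,
                             threshold: float = 100.0,
                             max_generations: int = 1000) -> int:
    """Count design generations until capability passes an arbitrary threshold."""
    generation = 0
    while capability < threshold and generation < max_generations:
        # Assumed rule: each machine designs a successor that is a fixed
        # fraction better than itself, so improvements compound geometrically.
        capability *= 1.0 + gain_per_generation
        generation += 1
    return generation


if __name__ == "__main__":
    print("Generations needed:", generations_to_threshold())
```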

What would happen then? Vinge says it's impossible to say. The world would become such a different landscape that we can only make the wildest of guesses. Vinge admits that while it's probably not fruitful to suggest possible scenarios, it's still a lot of fun. Maybe we'll live in a world where each person's consciousness merges with a computer network. Or perhaps machines will accomplish all our tasks for us and let us live in luxury. But what if the machines see humans as redundant -- or worse? When machines reach the point where they can repair themselves and even create better versions of themselves, could they come to the conclusion that humans are not only unnecessary, but also unwanted?

It certainly seems like a scary scenario. But is Vinge's vision of the future a certainty? Is there any way we can avoid it? Find out in the next section -- before it's too late.


Can We Avoid Machines Taking Over?

We can use robots to perform repetitive tasks automatically, but will we engineer all humans out of a job?
Hana Kalvachova/isifa/Getty Images

Not everyone thinks we're destined -- or doomed -- to reach the singularity detailed in Vinge's essay. It might not even be physically possible to achieve the advances necessary to create the singularity effect. To understand this, we need to go back to Moore's Law.

In 1965, Gordon E. Moore, a semiconductor engineer, proposed what we now call Moore's Law. He noticed that, as time passed, the price of semiconductor components and the cost of manufacturing them kept falling. Rather than produce integrated circuits with the same amount of power as earlier ones for half the cost, engineers pushed themselves to pack more transistors onto each circuit. The trend became a cycle, which Moore predicted would continue until we hit the physical limits of what integrated circuitry can achieve.


Moore's original observation was that the number of transistors on a square inch of an integrated circuit would double each year. Today, we say that the data density of an integrated circuit doubles every 18 months. Manufacturers now build transistors on the nanoscale. Recent microprocessors from Intel and AMD have transistors that are 45 nanometers wide -- by comparison, a human hair can have a diameter of up to 180,000 nanometers.
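
A back-of-envelope sketch suggests how little room that pace leaves. The atom size and the shrink-per-doubling rule below are assumptions added for illustration, not claims from the article.

```python
import math

# Back-of-envelope estimate of how much room Moore's Law has left. The atom
# size (~0.2 nm for silicon) and the assumption that each density doubling
# shrinks linear feature size by a factor of sqrt(2) are illustrative
# assumptions, not figures from the article.

FEATURE_NM = 45.0       # transistor width cited above
ATOM_NM = 0.2           # assumed rough diameter of a silicon atom
DOUBLING_YEARS = 1.5    # 18-month density doubling cited above

hair_ratio = 180_000 / FEATURE_NM                  # hair width vs. transistor width
linear_halvings = math.log2(FEATURE_NM / ATOM_NM)  # halvings of feature size left
density_doublings = 2 * linear_halvings            # area density doubles twice per halving
years_left = density_doublings * DOUBLING_YEARS

print(f"A hair is ~{hair_ratio:,.0f} times wider than a 45 nm transistor")
print(f"~{linear_halvings:.0f} more halvings of feature size before atomic scale")
print(f"That's ~{density_doublings:.0f} density doublings, or roughly {years_left:.0f} years")
```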

Engineers and physicists aren't sure how much longer this can continue. Gordon Moore said in 2005 that we are approaching the fundamental limits to what we can achieve through building smaller transistors. Even if we find a way to build transistors on a scale of just a few nanometers, they wouldn't necessarily work. That's because as you approach this tiny scale you have to take quantum physics into account.

It turns out that when you deal with things on a subatomic scale, they behave in ways that seemingly contradict common sense. For example, physicists have shown that electrons can pass through extremely thin material as if the material weren't there. They call this phenomenon electron or quantum tunneling. The electron doesn't make a physical hole through the material -- it just seemingly approaches from one side and ends up on the other. Since transistors control the flow of electrons like a valve, this becomes a problem.
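
A rough, textbook-style estimate (not drawn from the article) shows why tunneling becomes a practical problem as insulating barriers thin toward a nanometer: the leakage probability climbs steeply as the barrier shrinks.

```python
import math

# Rough estimate of quantum tunneling through a thin barrier using the
# standard approximation: transmission ~ exp(-2 * kappa * d), where
# kappa = sqrt(2 * m * dE) / hbar. The 1 eV barrier height is an assumption
# chosen only to show how fast leakage grows as the barrier thins.

HBAR = 1.054571817e-34          # reduced Planck constant, J*s
ELECTRON_MASS = 9.1093837e-31   # kg
EV = 1.602176634e-19            # joules per electron volt

def tunneling_probability(barrier_nm: float, barrier_height_ev: float = 1.0) -> float:
    """Estimate the chance an electron leaks through a rectangular barrier."""
    kappa = math.sqrt(2 * ELECTRON_MASS * barrier_height_ev * EV) / HBAR  # 1/m
    return math.exp(-2 * kappa * barrier_nm * 1e-9)

if __name__ == "__main__":
    for width in (3.0, 2.0, 1.0, 0.5):
        print(f"{width:.1f} nm barrier: leakage probability ~ {tunneling_probability(width):.1e}")
```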

If we hit this physical limit before we can create machines that can think as well as or better than humans, we may never reach the singularity. While there are other avenues we can explore -- such as building chips vertically, using optics and experimenting with nanotechnology -- there's no guarantee we'll be able to keep up with Moore's Law. A slowdown might not prevent the singularity from coming, but it could arrive later than Vinge predicts.

Another way to head off this scenario is to build safety features into machines before they become self-aware. These features might even resemble the Three Laws of Robotics proposed by Isaac Asimov. But Vinge counters that argument by pointing out one detail: if the machines are smarter than we are, won't they be able to find ways around these rules?

Even Vinge doesn't go so far as to say the singularity is inevitable. There are plenty of other engineers and philosophers who think it's a non-issue. But maybe you should think twice before you mistreat a piece of machinery -- you never know if it'll come after you for revenge later down the road.

To learn more about our future computer overlords and other topics, take a look at the links below.


Lots More Information


  • Chang, Kenneth. "Nanowires May Lead to Superfast Computer Chips." The New York Times. November 9, 2001. (Sept. 29, 2008) http://query.nytimes.com/gst/fullpage.html?res=9D06E4DF1638F93AA35752C1A9679C8B63
  • Dubash, Manek. "Moore's Law is dead, says Gordon Moore." TechWorld. April 13, 2005. (Sept. 29, 2008) http://www.techworld.com/opsys/news/index.cfm?newsid=3477
  • Hanson, Robin. "A Critical Discussion of Vinge's Singularity Concept." (Sept. 29, 2008) http://hanson.gmu.edu/vi.html
  • Moore, Gordon. "Cramming more components onto integrated circuits." Electronics. April 19, 1965. Vol. 38, No. 8. (Sept. 29, 2008) http://download.intel.com/museum/Moores_Law/Articles-Press_Releases/Gordon_Moore_1965_Article.pdf
  • Vinge, Vernor. "The Coming Technological Singularity: How to Survive in the Post-Human Era." VISION-21 Symposium. 1993. (Sept. 29, 2008) http://www-rohan.sdsu.edu/faculty/vinge/misc/singularity.html
  • Yudkowsky, Eliezer. "The Low Beyond." 2001. (Sept. 29, 2008) http://sysopmind.com/singularity.html
