Technological Singularity and the End of Human History

In this century, humanity is predicted to undergo a transformative experience, the likes of which have not been seen since we first began to speak, fashion tools, and plant crops. This experience goes by various names – “Intelligence Explosion,” “Accelerando,” “Technological Singularity” – but they all have one thing in common.

They all come down to the hypothesis that accelerating growth in technology and knowledge will radically transform humanity. In its various forms, this theory cites concepts like the iterative nature of technology, advances in computing, and historical instances where major innovations led to explosive growth in human societies.

Many proponents believe that this “explosion” or “acceleration” will take place sometime during the 21st century. While the specifics are subject to debate, there is general consensus among proponents that it will come down to developments in the fields of computing and artificial intelligence (AI), robotics, nanotechnology, and biotechnology.

In addition, there are differences in opinion as to how it will take place, whether it will be the result of ever-accelerating change, a runaway acceleration triggered by self-replicating and self-upgrading machines, an “intelligence explosion” caused by the birth of an advanced and independent AI, or the result of biotechnological augmentation and enhancement.

Opinions also differ on whether or not this will be felt as a sudden switch-like event or a gradual process spread out over time which might not have a definable beginning or inflection point. But either way, it is agreed that once the Singularity does occur, life will never be the same again. In this respect, the term “singularity” – which is usually used in the context of black holes – is quite apt because it too has an event horizon, a point in time where our capacity to understand its implications breaks down.

[Image omitted. Source: Kurzweil Technologies]

Definition

The use of the term “singularity” in this context first appeared in a tribute written by mathematician Stanislaw Ulam about the life and accomplishments of John von Neumann. Recounting opinions his friend held, Ulam described how the two talked at one point about accelerating change:

“One conversation centered on the ever-accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue.”

However, the idea that humanity may one day achieve an “intelligence explosion” has precedents that predate Ulam’s description. Mahendra Prasad of UC Berkeley, for example, credits 18th-century mathematician Nicolas de Condorcet with making the first recorded prediction, as well as creating the first model for it.

In his essay, Sketch for a Historical Picture of the Progress of the Human Mind: Tenth Epoch (1794), Condorcet argued that knowledge acquisition, technological development, and human moral progress were all subject to acceleration:

“How much greater would be the certainty, how much more vast the scheme of our hopes if… these natural [human] faculties themselves and this [human body] organization could also be improved?… The improvement of medical practice… will become more efficacious with the progress of reason…

“[W]e are bound to believe that the average length of human life will forever increase… May we not extend [our] hopes [of perfectibility] to the intellectual and moral faculties?… Is it not probable that education, in perfecting these qualities, will at the same time influence, modify, and perfect the [physical] organization?”

Another forerunner was British mathematician Irving John Good, who worked at Bletchley Park with Alan Turing during World War II. In 1965, he wrote an essay titled “Speculations Concerning the First Ultraintelligent Machine,” where he contended that a smarter-than-human machine could design even smarter machines, a recursive process he predicted would lead to an “intelligence explosion.”

In 1965, American engineer Gordon Moore noted that the number of transistors on an integrated circuit (IC) can be expected to double every year (later updated to roughly every two years). This has come to be known as “Moore’s Law” and is used to describe the exponential nature of computing in the latter half of the 20th century. It is also referenced in relation to the Singularity and why an “intelligence explosion” is inevitable.
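Moore's observation amounts to simple doubling arithmetic, which the sketch below illustrates. The starting figure (roughly 2,300 transistors on the Intel 4004 in 1971) is a well-known historical data point, but the function itself is an idealized model assuming the two-year doubling holds exactly; it is not a formula Moore published.

```python
def transistors(year, base_year=1971, base_count=2300, doubling_years=2):
    """Project transistor counts under an idealized two-year doubling.

    Assumes exact exponential growth from a fixed baseline; real chips
    only roughly tracked this curve.
    """
    return base_count * 2 ** ((year - base_year) / doubling_years)

# Each decade adds five doublings, i.e. a 32-fold increase.
for year in (1971, 1981, 1991, 2001):
    print(year, round(transistors(year)))
```

Because the count is multiplied by 32 every decade, the growth quickly dwarfs any linear trend, which is the core of the argument that links Moore's Law to Singularity timelines.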

In 1983, Vernor Vinge popularized the theory in an op-ed piece for Omni magazine where he contended that rapidly self-improving AI would eventually reach a “kind of singularity,” beyond which reality would be difficult to predict. It was also here that the first comparison to a black hole was made:

“We will soon create intelligences greater than our own. When this happens, human history will have reached a kind of singularity, an intellectual transition as impenetrable as the knotted space-time at the center of a black hole, and the world will pass far beyond our understanding. This singularity, I believe, already haunts a number of science-fiction writers. It makes realistic extrapolation to an interstellar future impossible. To write a story set more than a century hence, one needs a nuclear war in between … so that the world remains intelligible.”

How and when?

Vinge popularized the Technological Singularity further in a 1993 essay titled “The Coming Technological Singularity: How to Survive in the Post-Human Era.” In addition to reiterating the nature of the concept, Vinge also laid out four possible scenarios for how this event could take place. They included:

Superintelligent Computers: This scenario is based on the idea that human beings may eventually develop computers that are “conscious.” If such a thing is possible, said Vinge, an artificial intelligence far more advanced than humanity would almost certainly result.

Networking: In this scenario, large networks of computers and their respective users would come together to constitute superhuman intelligence.

Mind-Machine Interface: Vinge also proposed a scenario where humans could merge with computers, augmenting their natural intelligence to superhuman levels.

Guided Evolution: It is also possible, said Vinge, that biological science could advance to the point where it would provide a means to improve natural human intellect.

But perhaps the most famous proponent of the concept is noted inventor and futurist Ray Kurzweil. His 2005 book, The Singularity is Near: When Humans Transcend Biology, is perhaps his best-known work and expands on ideas presented in earlier books, most notably his “Law of Accelerating Returns.”

This law is essentially a generalization of Moore’s Law and states that the rate of growth in technological systems increases exponentially over time. He further argued that exponential progress in technologies like computing, genetics, nanotechnology, and artificial intelligence would converge, leading to a new era of superintelligence.
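The "accelerating returns" idea can be sketched as a toy model in which the gap between comparable milestones shrinks by a constant factor each time, so later milestones crowd ever closer together. The starting year, initial gap, and shrink factor below are hypothetical values chosen purely for illustration; they are not fitted figures from Kurzweil's book.

```python
def milestone_years(start=1800.0, first_gap=100.0, factor=0.5, n=6):
    """Toy model of accelerating returns: each inter-milestone gap
    is a fixed fraction of the previous one (illustrative values only)."""
    years, gap = [start], first_gap
    for _ in range(n):
        years.append(years[-1] + gap)
        gap *= factor
    return years

print(milestone_years())
```

With a shrink factor of 0.5, the gaps form a geometric series (100, 50, 25, ...) whose total is bounded, which is why such models imply the milestones pile up before a finite date rather than stretching out forever.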

“The Singularity will allow us to transcend these limitations of our biological bodies and brains,” wrote Kurzweil. “There will be no distinction, post-Singularity, between human and machine.” He further predicted that the Singularity would take place by 2045, since this was the earliest point where computerized intelligence would significantly exceed the sum total of human brainpower.

To see these trends at work, futurists and speculative thinkers generally point to examples of major innovations from human history, oftentimes focusing on technologies that have made the way we convey and consume information exponentially faster. In all cases, the purpose is to show how the time lag between innovations keeps getting shorter.

[Figure omitted: “Technological Singularity: An Impending…” Source: Kurzweil, R./Jurvetson, S.]