The technological singularity is the hypothetical future emergence of greater-than-human intelligence through technological means. Since the capabilities of such intelligence would be difficult for an unaided human mind to comprehend, the occurrence of a technological singularity is seen as an intellectual event horizon, beyond which events cannot be predicted or understood. Proponents typically state that an "intelligence explosion" is a key feature of the singularity, in which superintelligences design successive generations of increasingly powerful minds.
This hypothesized process of intelligent self-modification might occur very quickly, and might not stop until the agent's cognitive abilities greatly surpass those of any human. The term "intelligence explosion" is therefore sometimes used to refer to this scenario.
The term was coined by science fiction writer Vernor Vinge, who argues that artificial intelligence, human biological enhancement, or brain-computer interfaces could be possible causes of the singularity. The concept was popularized by futurists such as Ray Kurzweil, and proponents expect it to occur sometime in the 21st century, although estimates vary.
Many of the most recognized writers on the singularity, such as Vernor Vinge and Ray Kurzweil, define the concept in terms of the technological creation of superintelligence, and argue that it is difficult or impossible for present-day humans to predict what a post-singularity world would be like, due to the difficulty of imagining the intentions and capabilities of superintelligent entities.
Vinge continues by predicting that superhuman intelligences will be able to enhance their own minds faster than their human creators. "When greater-than-human intelligence drives progress," Vinge writes, "that progress will be much more rapid." This feedback loop of self-improving intelligence, he predicts, will produce enormous technological progress within a short period, and he argues that the creation of superhuman intelligence represents a breakdown in humans' ability to model their future. His argument is that authors cannot write realistic characters who surpass the human intellect, because the thoughts of such an intellect would be beyond the ability of humans to express.
Vinge named this event "the Singularity".
Damien Broderick's popular science book The Spike was the first to investigate the technological singularity in detail. In 2000, Bill Joy, a prominent technologist and co-founder of Sun Microsystems, voiced concern over the potential dangers of the singularity.
In 2005, Ray Kurzweil published The Singularity Is Near, which brought the idea of the singularity to the popular media both through the book's accessibility and a publicity campaign that included an appearance on The Daily Show with Jon Stewart. The book stirred intense controversy, in part because Kurzweil's utopian predictions contrasted starkly with other, darker visions of the possibilities of the singularity. Kurzweil, his theories, and the controversies surrounding them were the subject of Barry Ptolemy's documentary Transcendent Man.
In 2007, Eliezer Yudkowsky suggested that many of the different definitions that have been assigned to "singularity" are mutually incompatible rather than mutually supporting. For example, Kurzweil extrapolates current technological trajectories past the arrival of self-improving AI or superhuman intelligence, which Yudkowsky argues represents a tension with both I. J. Good's proposed discontinuous upswing in intelligence and Vinge's thesis on unpredictability. In 2008, Robin Hanson, taking "singularity" to refer to sharp increases in the exponent of economic growth, listed the agricultural and industrial revolutions as past "singularities". Extrapolating from such past events, Hanson proposed that the next economic singularity should increase economic growth by a factor of between 60 and 250. An innovation that allowed for the replacement of virtually all human labor could trigger this event.
In 2009, Kurzweil and X-Prize founder Peter Diamandis announced the establishment of Singularity University, whose stated mission is "to assemble, educate and inspire a cadre of leaders who strive to understand and facilitate the development of exponentially advancing technologies in order to address humanity’s grand challenges." Funded by Google, Autodesk, ePlanet Ventures, and a group of technology industry leaders, Singularity University is based at NASA's Ames Research Center in Mountain View, California. The not-for-profit organization runs an annual ten-week graduate program during the summer that covers ten different technology and allied tracks, and a series of executive programs throughout the year. Program faculty include experts in technology, finance, and future studies, and a number of videos of Singularity University sessions have been posted online.
In 2010, Aubrey de Grey applied the term "Methuselarity" to the point at which medical technology improves so fast that expected human lifespan increases by more than one year per year. Also in 2010, in Apocalyptic AI: Visions of Heaven in Robotics, Artificial Intelligence, and Virtual Reality, Robert Geraci offered an account of the developing "cyber-theology" inspired by singularity studies.
In 2011, Kurzweil noted existing trends and concluded that the singularity was likely to occur around 2045. He told Time magazine: "We will successfully reverse-engineer the human brain by the mid-2020s. By the end of that decade, computers will be capable of human-level intelligence."
The notion of an "intelligence explosion" is key to the singularity. It was first described by I. J. Good, who speculated on the effects of superhuman machines: an ultraintelligent machine could design even better machines, setting off a runaway cascade of improvements that would leave human intelligence far behind.
Most proposed methods for creating superhuman or transhuman minds fall into one of two categories: intelligence amplification of human brains and artificial intelligence. The means speculated to produce intelligence augmentation are numerous, and include bio- and genetic engineering, nootropic drugs, AI assistants, direct brain-computer interfaces, and mind uploading. The existence of multiple paths to an intelligence explosion makes a singularity more likely; for a singularity not to occur, all of them would have to fail.
Despite the numerous speculated means for amplifying human intelligence, non-human artificial intelligence is the most popular option for organizations trying to advance the singularity. Robin Hanson is also skeptical of human intelligence augmentation, writing that once the "low-hanging fruit" of easy methods for increasing human intelligence has been exhausted, further improvements will become increasingly difficult to find.
Whether or not an intelligence explosion occurs depends on three factors. The first, accelerating factor is the set of new intelligence enhancements made possible by each previous improvement. Conversely, as intelligences become more advanced, further advances will become more and more complicated, possibly outweighing the advantage of increased intelligence. Each improvement must, on average, beget at least one further improvement for the singularity to continue. Finally, there is the issue of a hard upper limit: absent quantum computing, the laws of physics will eventually prevent any further improvements.
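The sustainability condition above can be illustrated with a toy branching model (a hypothetical sketch, not anything from the literature): each improvement enables, on average, k further improvements. If k falls below 1, the cascade dies out after finitely many steps; at or above 1, it tends to continue until an external cap, standing in for physical limits, is reached.

```python
import random

def improvement_cascade(k, cap=10_000, seed=0):
    """Toy model: each improvement enables, on average, k further
    improvements (drawn as a binomial with mean k). Returns the
    total number of improvements achieved, stopping at `cap`."""
    rng = random.Random(seed)
    pending = 1   # the initial enhancement
    total = 0
    while pending and total < cap:
        total += 1
        pending -= 1
        # offspring improvements: 10 trials at probability k/10, mean k
        pending += sum(1 for _ in range(10) if rng.random() < k / 10)
    return total
```

Running this with k below 1 typically yields a handful of improvements before the cascade fizzles, while values above 1 usually run all the way to the cap, mirroring the "at least one further improvement on average" condition in the text.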
There are two logically independent, but mutually reinforcing, accelerating effects: increases in the speed of computation, and improvements to the algorithms used. The former is predicted by Moore's law and forecast improvements in hardware, and is comparatively similar to previous technological advances. On the other hand, most AI researchers believe that software is more important than hardware.
The first is improvement in the speed at which minds can be run. Whether human or AI, better hardware increases the rate of future hardware improvements. Oversimplified, Moore's law suggests that if the first doubling of speed took 18 months, the second would take 18 subjective months, or 9 external months; thereafter four and a half months, two and a quarter months, and so on, converging toward a speed singularity. An upper limit on speed may eventually be reached, though it is unclear how high this would be.
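The doubling schedule sketched above is a geometric series, so the external time needed for arbitrarily many doublings converges to a finite total. A minimal back-of-the-envelope check, using the 18-month figure from the text:

```python
def external_time_for_doublings(first_doubling_months=18.0, n=50):
    """Each doubling takes the same subjective time, but hardware runs
    twice as fast after every doubling, so successive doublings take
    18, 9, 4.5, 2.25, ... external months. Returns the external time
    (in months) for the first n doublings."""
    return sum(first_doubling_months / 2**i for i in range(n))

# The series 18 + 9 + 4.5 + ... converges to 36 external months,
# so infinitely many doublings would fit inside a finite span --
# the "speed singularity" of the text.
```

This is why the scenario is called a singularity in the mathematical sense: the naive extrapolation packs unbounded improvement into bounded external time.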
Jeff Hawkins, responding to Good, argued that the upper limit is relatively low:
"Belief in this idea is based on a naive understanding of what intelligence is. As an analogy, imagine we had a computer that could design new computers faster than itself. Would such a computer lead to infinitely fast computers or even computers that were faster than anything humans could ever build? No. It might accelerate the rate of improvements for a while, but in the end there are limits to how big and fast computers can run. We would end up in the same place; we'd just get there a bit faster. There would be no singularity."

If, on the other hand, the upper limit were far above current human levels of intelligence, the effects of the singularity would be great enough to be indistinguishable from a singularity with no upper limit. For example, if the speed of thought could be increased a million-fold, a subjective year would pass in 30 physical seconds.
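The closing figure is simple arithmetic to verify: dividing the seconds in a year by a million-fold speed-up gives roughly the 30 physical seconds quoted.

```python
SECONDS_PER_YEAR = 365.25 * 24 * 3600   # about 3.156e7 seconds
speedup = 1_000_000

# One subjective year at a million-fold speed-up:
physical_seconds = SECONDS_PER_YEAR / speedup
# about 31.6 physical seconds, i.e. the text's "30 physical seconds"
# to one significant figure
```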