The Technological Singularity
Keywords In Contemporary America: Singularity

Table of contents
  1. History of singularity
  2. Kurzweil’s singularity
  3. Criticism of singularity
  4. Conclusion
  5. Works cited

The Technological Singularity

(An image of HAL 9000, an example of AGI from Kubrick's 2001: A Space Odyssey. "HAL 9000" by Daniel Kulinski is licensed under CC BY-NC-SA 2.0.)

  1. History of singularity

It is incredibly boring and bland to start an essay with a definition. With this essay's keyword, however, it is also hardly possible: a singularity is notoriously difficult to define, even though this paper focuses specifically on the technological singularity. The term in this sense was first coined by the famous mathematician John von Neumann in the mid-twentieth century; until then, the word had been used mainly in astrophysics and occasionally in mathematics. Von Neumann spoke of a hypothetical point in humanity's future at which technological progress would become so rapid that it would pass a threshold "beyond which human affairs, as we know them, could not continue" (Ulam).

That is where the concept of the technological singularity started: a concept rooted in the mathematical idea of a point at which an object can no longer be defined, combined with observations of incredibly rapid technological development and projected onto our near future. Although every science fiction writer, computer scientist, and futurologist seems to offer their own definition of this concept, they all appear to agree on one thing: it is something we can hardly imagine before it happens.

So how do we even discuss something that is impossible to describe or even think of? Well, the way we have always done it: we build theories, assumptions, and extrapolations, and we write fiction. For example, one estimate suggests the singularity will be reached around 2030, while broader expert surveys point to 2060 (Azulay; "995 Experts Opinion"). Such timelines rest on extrapolating Moore's law, the observation that the number of components on an integrated circuit doubles roughly every two years, making computers ever more efficient and cost-effective and driving their exponential development ("Moore's Law"). This exponential improvement in computers and other devices is expected to produce recursive growth: the latest inventions and discoveries serve as tools for even faster development of new, superior technology, which is then used for still faster and more efficient engineering.
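
As a rough sketch of the extrapolation behind such estimates, consider the doubling rule in isolation (a minimal Python illustration; the 1971 Intel 4004 baseline of roughly 2,300 transistors is a well-known data point, while the clean two-year doubling is the idealized assumption, not measured data):

```python
# Idealized Moore's-law extrapolation: component count doubling every
# two years from an illustrative 1971 baseline (Intel 4004, ~2,300
# transistors). Real chips deviate from this clean curve.

def transistors(year, base_year=1971, base_count=2300):
    """Projected component count under a strict two-year doubling."""
    return base_count * 2 ** ((year - base_year) / 2)

for year in (1971, 1991, 2011, 2031):
    print(f"{year}: ~{transistors(year):,.0f} components")
```

Even this toy model makes the "recursive" intuition visible: each step multiplies what the previous step produced, so the absolute gains grow ever larger.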

The idea of self-improving technology (or of technology used by someone else to improve itself, in case it possesses no artificial consciousness of its own) has been a frequent visitor in science fiction since the beginning of the twentieth century. Surprisingly, the themes of self-improving machines, artificial consciousness, human-machine conflict, and other events we might witness once the singularity arrives were prominent as early as the 1920s. R.U.R., a play by the Czech science fiction writer Karel Čapek (who coined the word "robot" for it), was staged in 1921; its conflict revolves around robots (or rather artificial organisms assembled in factories) rebelling against and destroying humanity. Two films followed: Metropolis in 1927 and Master of the World in 1934. Both are important milestones of cinema and of science fiction culture, and both explored the idea of a post-singularity society and the subsequent rise of intelligent, human-like machines fighting humans, impersonating them, and taking their place.

However, it took a few decades for the idea to be widely discussed in scientific circles. Although one of the earliest mentions I have found traces back to 1863 (an article by the philosopher Samuel Butler called "Darwin among the Machines," which discussed the constant evolution of machines and their potential replacement of the human race), it took slightly more than a century for the idea to gain any real credibility. One of the pioneers of the concept, Vernor Vinge ("Technological Singularity"), contributed greatly to its popularity by publishing articles arguing that any post-singularity events are essentially unpredictable, since predicting them would require an "intelligence greater than our own" (Dooling); such an intelligence would arise through the so-called intelligence explosion, which is essentially the recursive self-improvement of technology described above. Although the idea of an intelligence explosion was suggested in the 1960s by the mathematician Irving John Good (Good), a colleague of Alan Turing, it wasn't until the 1980s and 1990s that the topic was greatly explored and expanded by Vinge ("The Coming Technological Singularity") and Ray Solomonoff (Solomonoff), the latter discussing a very similar idea under the name "infinity point."

  2. Kurzweil’s singularity

More recently, another person has taken up the banner of promoting the singularity: the futurist Ray Kurzweil, who published The Singularity Is Near (Kurzweil) in 2005 and was expected to publish a sequel, The Singularity Is Nearer, in 2021 (now possibly postponed). Kurzweil is currently best known for his radical, extremely optimistic outlook on a range of issues, including transhumanism, futurological predictions (claims about inventions and events arriving in the near future), health (for example, one's supposed ability to dramatically extend lifespan and get rid of diseases by following a set of basic rules), and a few others. Although Kurzweil himself has claimed his predictions of the future were 86% accurate, an investigation by Forbes disagreed (Knapp).

In his book, Kurzweil brings together a staggering number of ideas, from nanotechnology and nanotechnological computing to brain-computer interfacing. He views the history of the universe as part of natural evolution, progressing through epochs of physics and chemistry, biology and DNA, brains, and technology, and argues that the singularity, "The Merger of Human Technology with Human Intelligence," is the next epoch. It is also worth mentioning that Kurzweil's theory has an incredibly anthropocentric and even, to a certain extent, religious undertone: he views human evolution as a spiritual undertaking and denies the possibility of extraterrestrial intelligence. This notion ignores the real scale of the universe, with the Local Group ("Local Group") (the region of space in which humanity is "trapped" by the expansion of the universe, unless faster-than-light travel exists) spanning a measly 0.011% of the observable universe ("Observable Universe") and 0.000044% of the hypothesized unobservable universe (Siegel). Assuming that in all that unimaginable quantity of space humans represent the only natural intelligence is absurd, and it leads Kurzweil to not even consider possibilities like the Great Filter ("The Fermi Paradox") and the like. In Kurzweil's account, the invention of self-improving AGI almost magically propels humanity from a Type 0 to a Type III civilization on the extended Kardashev scale.
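
The quoted scale figures can be reproduced with back-of-the-envelope arithmetic, assuming (my assumption, not stated in the sources) that the percentages compare diameters: roughly 10 million light-years for the Local Group, 93 billion for the observable universe, and an unobservable universe at least ~250 times wider:

```python
# Back-of-the-envelope check of the scale figures quoted above,
# treating the percentages as ratios of diameters (an assumption).
local_group  = 10e6               # Local Group diameter, light-years (approx.)
observable   = 93e9               # observable universe diameter, light-years
unobservable = observable * 250   # lower-bound estimate of the whole universe

print(f"{local_group / observable:.5%}")    # ~0.01075% -> quoted as 0.011%
print(f"{local_group / unobservable:.7%}")  # ~0.0000430% -> quoted as 0.000044%
```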

  3. Criticism of singularity

As the singularity concept has populated our minds for quite a while now, it is natural that not everyone has adopted Kurzweil's optimistic outlook. Moreover, it has become something of a mark of good taste to be wary of such head-in-the-clouds views: any decent article touching on the singularity takes pride in mentioning that people like Elon Musk and Stephen Hawking have expressed negative opinions on the matter (Sparkes), concerned that it might result in human extinction. They are cited so often only because of their immense popularity in mass culture; many more scientists who are less known to the wider public hold the same opinion. After all, it is in our nature to fear the unknown.

It is not only that we as a society (and the scientific community specifically) are afraid of the upcoming singularity; opinions have also been circulating that question its supposedly inevitable arrival and even the credibility of the concept as a whole. For example, according to a survey sent to authors at several machine learning conferences, a total of 29% believe the singularity is quite likely or likely to happen, 21% believe it is as likely to happen as not, and a total of 50% believe it is unlikely or quite unlikely to happen (Grace et al.).

Another notable critic is the AI theorist and promoter of rational thinking Eliezer Yudkowsky. Not only does he argue that the rate of technological progress is currently slowing down and that Moore's law may have stopped working (an idea supported by several other renowned scientists); he also suggests that the whole concept of the singularity has become a "suitcase word" with too many incompatible definitions that differ from person to person and often contradict each other (Horgan). This was already referenced at the beginning: there simply is no single universal definition of the word.

And it is not only about the general confusion over the term: even with a proper definition in hand, the point that the singularity may never happen still stands. Peter Norvig and Stuart Russell, two famous computer scientists, wrote Artificial Intelligence: A Modern Approach, which serves as a textbook in many computer science courses; its latest edition appeared as recently as 2020. In this book, in addition to discussing AI in great detail, they note that an intelligence explosion (the usual proposed mechanism of the singularity) might never happen, because computational complexity theory sets limits on the efficiency of problem-solving algorithms, which in turn limits technological advancement and prevents an intelligence explosion.
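
A tiny numeric sketch makes that complexity objection concrete (illustrative numbers only): if the best algorithm for a problem takes on the order of 2^n steps, even enormous hardware speedups buy almost no extra problem size, self-improving or not.

```python
# For an O(2**n) algorithm, each doubling of hardware speed extends the
# largest solvable problem size n by just one unit. The time budget and
# speeds here are arbitrary illustrations.
import math

def largest_solvable_n(ops_per_second, budget_seconds=3600):
    """Largest n whose 2**n steps fit within the time budget."""
    return math.floor(math.log2(ops_per_second * budget_seconds))

for speedup in (1, 2, 4, 1024):
    print(f"{speedup:5d}x hardware -> n = {largest_solvable_n(1e9 * speedup)}")
```

A thousandfold speedup gains only about ten units of n; that is the sense in which complexity theory can cap how far recursive self-improvement reaches.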

Real-life constraints also give rise to the "exponential growth fallacy": even though overall it might seem as if we are growing exponentially fast, many areas of technology are underdeveloped, regressing, or simply not progressing that quickly. This can mainly be attributed to resource constraints that limit further development after a sudden growth spurt. Many researchers also argue that technological progress actually follows an S-curve rather than an exponential: short periods of fast growth are offset by stagnation or setbacks. An important side note, however: the recurring episodes of stagnation do not mean that technological progress as a whole will slow down; rather, its average speed will be constant instead of exponential. What does that mean, essentially? Progress isn't going to stop, and that's good news. The singularity might never happen, and that might be bad news… or not. We do not know.
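
The contrast is easy to see side by side; the sketch below (illustrative parameters, not fitted to any real technology) pits an exponential against a logistic S-curve that starts out identically and then saturates at a resource ceiling:

```python
# Exponential growth vs. a logistic S-curve with the same early rate:
# the S-curve looks exponential at first, then flattens at capacity.
import math

RATE, CAPACITY = 0.5, 100.0  # arbitrary illustrative parameters

def exponential(t):
    return math.exp(RATE * t)

def s_curve(t):
    return CAPACITY / (1 + (CAPACITY - 1) * math.exp(-RATE * t))

for t in range(0, 21, 5):
    print(f"t={t:2d}  exponential={exponential(t):10.1f}  s-curve={s_curve(t):5.1f}")
```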

  4. Conclusion

After reading this paper, you are probably left with a feeling of confusion and disorientation. The main questions remain unanswered: is the singularity good or bad? Is it going to happen or not? What is it, after all? The truth is, nobody knows. Yet everyone does their best to discuss it in the deepest detail, to argue about it, and to publish thousands of books and articles.

So many young people are reaching into STEM fields, pursuing their hopes and aiming to be part of something bigger, whether that means bringing the singularity closer or making AI real. What they fail to see is that there are so many down-to-earth, everyday problems: wars, hunger, poverty, climate change. These problems aren't beautiful. They're scary, and they are also, surprisingly, further from mainstream science's attention than some hypothetical technological advancements. If you are a dreamer raised on science fiction, with intelligent robots and faster-than-light starships accompanying you from childhood through your teenage years, you rarely care about global warming or a war in a third-world country unless you have experienced it yourself.

There needs to be a shift in our paradigm, in our way of thinking. Otherwise, in a few decades we will be left on an overheated, divided, underfed, dirty planet, unable even to travel into space or launch satellites because of space debris; yet we will have wasted countless money and lifetimes on a technological singularity that never came.

Works cited

“995 Experts Opinion: AGI / Singularity by 2060 [2021 Update].” AIMultiple, 2 Feb. 2021, research.aimultiple.com/artificial-general-intelligence-singularity-timing/.

Azulay, Dylan. “When Will We Reach the Singularity? - A Timeline Consensus from AI Researchers (AI FutureScape 1 of 6).” Emerj, Emerj, 18 Mar. 2019, emerj.com/ai-future-outlook/when-will-we-reach-the-singularity-a-timeline-consensus-from-ai-researchers/.

Dooling, Richard. Rapture for the Geeks: When AI Outsmarts IQ. Three Rivers Press, 2008.

Good, Irving John. “Speculations Concerning the First Ultraintelligent Machine.” Advances in Computers, vol. 6, 1966, pp. 31–88, doi:10.1016/s0065-2458(08)60418-0.

Grace, Katja, et al. “Viewpoint: When Will AI Exceed Human Performance? Evidence from AI Experts.” Journal of Artificial Intelligence Research, vol. 62, 2018, pp. 729–754, doi:10.1613/jair.1.11222.

Horgan, John. “AI Visionary Eliezer Yudkowsky on the Singularity, Bayesian Brains and Closet Goblins.” Scientific American Blog Network, Scientific American, 1 Mar. 2016, blogs.scientificamerican.com/cross-check/ai-visionary-eliezer-yudkowsky-on-the-singularity-bayesian-brains-and-closet-goblins/.

Knapp, Alex. “Ray Kurzweil's Predictions For 2009 Were Mostly Inaccurate.” Forbes, Forbes Magazine, 2 June 2013, www.forbes.com/sites/alexknapp/2012/03/20/ray-kurzweils-predictions-for-2009-were-mostly-inaccurate/.

Kurzweil, Raymond. The Singularity Is Near: When Humans Transcend Biology. Duckworth, 2006.

“Local Group.” Wikipedia, Wikimedia Foundation, 21 May 2021, en.wikipedia.org/wiki/Local_Group.

“Moore's Law.” Encyclopædia Britannica, Encyclopædia Britannica, Inc., www.britannica.com/technology/Moores-law.

“Observable Universe.” Wikipedia, Wikimedia Foundation, 22 May 2021, en.wikipedia.org/wiki/Observable_universe.

Siegel, Ethan. “Ask Ethan: How Large Is The Entire, Unobservable Universe?” Forbes, Forbes Magazine, 15 July 2018, www.forbes.com/sites/startswithabang/2018/07/14/ask-ethan-how-large-is-the-entire-unobservable-universe/?sh=2f71c39edf80.

Solomonoff, R. J. “The Time Scale of Artificial Intelligence: Reflections on Social Effects.” Human Systems Management, vol. 5, no. 2, 1985, pp. 149–153, doi:10.3233/hsm-1985-5207.

Sparkes, Matthew. “Top Scientists Call for Caution over Artificial Intelligence.” The Telegraph, Telegraph Media Group, 13 Jan. 2015, www.telegraph.co.uk/technology/news/11342200/Top-scientists-call-for-caution-over-artificial-intelligence.html.

“The Fermi Paradox: Where Are All the Aliens?” Encyclopædia Britannica, Encyclopædia Britannica, Inc., www.britannica.com/story/the-fermi-paradox-where-are-all-the-aliens.

Ulam, S. “John von Neumann 1903–1957.” Bulletin of the American Mathematical Society, vol. 64, no. 3, 1958, pp. 1–50, doi:10.1090/s0002-9904-1958-10189-5.

Vinge, Vernor. “Technological Singularity.” Whole Earth Review, no. 81, 1993, p. 88.

Vinge, Vernor. "The Coming Technological Singularity: How to Survive in the Post-Human Era"Archived 2018-04-10 at the Wayback Machine, in Vision-21: Interdisciplinary Science and Engineering in the Era of Cyberspace, G. A. Landis, ed., NASA Publication CP-10129, pp. 11–22, 1993.
