Max Tegmark
In Chapter 2, Tegmark addresses the nature of intelligence. Understanding intelligence, learning, and information processing is integral to the development of AI and, as we will see in later chapters, consciousness. In Tegmark’s view, “One of the most spectacular developments during the 13.8 billion years since our Big Bang is that dumb and lifeless matter has turned intelligent” (49). He begins with an account of what it means to be an “intelligent” system, how these systems work, how they learn, and what this means for us. Foreshadowing the concluding chapters on goals and consciousness, Tegmark notes that “intelligent behavior is inexorably linked to goal attainment” (53). In other words, achieving complex tasks requires motivation, not just intelligence.
Tegmark notes that there is no universally agreed-upon definition of intelligence. For this reason, he uses a broad, encompassing concept of intelligence to structure both this chapter and the book: Intelligence, he writes, is “the ability to accomplish complex goals” (81). He distinguishes between broad and narrow intelligence, noting that humans remain far superior to AI in broad intelligence, while AI is quickly becoming superior at a growing number of narrow tasks. In short, broad intelligence is the ability “to master a dazzling panoply of skills,” and narrow intelligence is the ability to master a specific task (52). The so-called “holy grail” of AI research is the development of AGIs: artificial general intelligences (52). AGIs would exhibit broad intelligence, like humans, and would be capable of far more than the tasks performed by present-day computer programs and robots. Superintelligent AIs, a long-term possibility, could far exceed human capacity both broadly and narrowly.
This leads Tegmark to consider the existential import of AI for human life. If AGIs become as capable as human beings, what will that mean for us, and what will it look like? Here, Tegmark begins an investigation of the singularity, the point at which AIs surpass humans in intelligence. Referring to Figure 2.2, Tegmark notes the hills and valleys of Moravec’s “landscapes of human competence” (53). This graphic shows how human abilities at various tasks are continually surpassed by advancing AIs. Even though there are still plenty of tasks at which humans are far superior, the rising sea level depicts the approach of AI toward human-level competency. When the water submerges the highest mountains, like science and art, the singularity will have been achieved:
As the sea level keeps rising, it may one day reach a tipping point, triggering dramatic change. This critical sea level is the one corresponding to machines becoming able to perform AI design. Before this tipping point is reached, the sea-level rise is caused by humans improving machines; afterwards, the rise can be driven by machines improving machines, potentially much faster than humans could have done, rapidly submerging all land (54).
At some point, AIs may become recursively self-improving, potentially removing humans from the loop of technological production. This prospect, along with the nearer-term issues it raises, structures the next chapter.
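Tegmark’s tipping-point metaphor can be caricatured in a few lines of code. The model below is a toy illustration, not from the book; the growth rate, threshold, and feedback factor are arbitrary assumptions chosen only to show the qualitative shift from linear, human-driven progress to compounding self-improvement.

```python
# Toy model of the "rising sea level" tipping point: capability grows by a
# fixed amount per round while humans improve machines, then grows in
# proportion to itself once machines can improve machines.
def simulate(steps=20, human_rate=1.0, tipping_point=10.0, feedback=0.5):
    capability = 0.0
    history = []
    for _ in range(steps):
        if capability < tipping_point:
            capability += human_rate             # humans improving machines
        else:
            capability += feedback * capability  # machines improving machines
        history.append(capability)
    return history

curve = simulate()
```

Before the threshold the curve climbs by a fixed step per round; after it, capability roughly multiplies each round instead, which is the “rapidly submerging all land” dynamic in the quotation above.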
Tegmark points to the anthropocentric biases we bring to thinking about intelligence: We “rate the difficulty of tasks relative to how hard it is for us humans to perform them” (53). Certain sensorimotor skills, like driving or walking, seem easy to most people but are actually extremely complex behaviors. Mathematical computation, by contrast, especially with large numbers, is difficult for most human beings yet trivial for computers. Given Tegmark’s broad understanding of intelligence, this asymmetry hints at how open and full of possibility the future is in his eyes. Part of Tegmark’s project is to break open, as far as possible, the limited, anthropocentric perspective on AI that many people hold.
Tegmark goes on to discuss the nature of memory, binary code, and the sense in which atoms are containers for bits of information. All of this builds up to one of the central concepts of the book: “substrate independence” (58). Substrate independence means that information is not limited to, or dependent upon, any one specific physical medium; the same bits can be stored and processed in many different kinds of matter. This sets up the discussion of artificial memories, which leads to computation, mathematical functions, and the ability to program complex goals. Computation is the “transformation of one memory state into another” (61): Information is put into the system, processed, and repackaged as output. As Tegmark puts it, when it comes to substrate independence, “Matter doesn’t matter” (67).
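The point can be made concrete with a small sketch (my example, not code from the book): the same computation, a NAND gate, a standard universal building block of computing, realized in two different “substrates,” boolean logic and integer arithmetic. What defines the computation is the input-output mapping, not the medium.

```python
# Substrate independence in miniature: one computation, two "media".
# The NAND function is defined by its truth table, not by how
# (or in what) it is physically realized.

def nand_boolean(a: bool, b: bool) -> bool:
    """NAND realized with boolean logic."""
    return not (a and b)

def nand_arithmetic(a: int, b: int) -> int:
    """The same NAND realized with integer arithmetic on 0/1 bits."""
    return 1 - a * b

# Both "substrates" implement the identical truth table.
for a in (0, 1):
    for b in (0, 1):
        assert bool(nand_arithmetic(a, b)) == nand_boolean(bool(a), bool(b))
```

Either implementation could in turn run on silicon, vacuum tubes, or (in principle) any matter capable of carrying out the mapping, which is what Tegmark means by “Matter doesn’t matter.”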
Tegmark also traces the exponential growth in computer processing power and discusses “machine learning” (72), one of the most important fields of current AI research. Machine learning is “the study of algorithms that improve through experience” (72). Such algorithms refine themselves as they run, allowing them to achieve more complex goals more easily. We should note that Tegmark again assumes a physicalist, materialist conception of reality: He believes the physical universe itself can develop the ability to learn. This will lead him, philosophically, toward the view that the world has an immanent teleological goal.
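Tegmark’s definition, “algorithms that improve through experience,” can be sketched in its simplest form. Below is a minimal toy learner (my illustration, not from the book): it estimates an unknown slope from example pairs by repeatedly nudging a single parameter, and its error shrinks as experience accumulates.

```python
# Minimal "learning from experience": fit the slope w of y = w * x
# by gradient descent. Each pass over the examples is more experience,
# and the recorded error falls as w approaches the true slope.
def learn(examples, lr=0.1, epochs=50):
    w = 0.0
    errors = []
    for _ in range(epochs):
        for x, y in examples:
            pred = w * x
            w += lr * (y - pred) * x   # nudge w to reduce the error
        errors.append(sum((y - w * x) ** 2 for x, y in examples))
    return w, errors

data = [(x, 3.0 * x) for x in (1.0, 2.0, 3.0)]  # true slope is 3
w, errors = learn(data)
```

The program is not told the answer; it converges on it by correcting itself against examples, which is the sense in which an algorithm “improves through experience.”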