
Max Tegmark

Life 3.0: Being Human in the Age of Artificial Intelligence

Nonfiction | Book | Adult | Published in 2017


Background

Cultural Context: Fear of the AI Threat

Human anxiety about the potential threat of machine intelligence is nothing new. As far back as Gulliver’s Travels (1726), Jonathan Swift was already parodying the idea of mechanized thought. In the 20th century, as humanity entered an age of rapid technological advancement, an explosion of science fiction reflected an upsurge in unease about our shared future with androids, cyborgs, and AIs. In Stanley Kubrick’s 2001: A Space Odyssey (1968), for instance, HAL 9000, a spacecraft supercomputer, shows a total disregard for the inherent value of human life by taking control of the mission to the detriment of the astronauts onboard. In the same year, sci-fi author Philip K. Dick published Do Androids Dream of Electric Sheep? (later adapted into Ridley Scott’s 1982 film Blade Runner), which problematizes machine consciousness and the nature of humanity. In the 1980s and 1990s, The Terminator franchise exemplified dystopian worries about a robot-dominated future in which humans face genocidal eradication.

With characters like Data (an android), Star Trek: The Next Generation portrayed a more optimistic, yet still problematized, vision of the integration of artificial life into human society. In this case, something of an outlier among 20th-century pop culture representations of AI, humans, aliens, and androids navigate the legal, social, and emotional resonances of their different biological and machine heritages, generally in a spirit of greater political integration and personal understanding. However, even in fantasy worlds as optimistic as Star Trek’s, the Borg, a technologically advanced hive mind of cyborgs, threaten to “assimilate” all other life in their relentless conquest of the galaxy. At the turn of the 21st century, The Matrix franchise offered yet another grim, dystopian assessment of our failed future alongside intelligent machines: In The Matrix universe, humans are neither eliminated as in The Terminator nor assimilated as in Star Trek but harvested as energy sources while living faux lives inside a simulated program. In 2004, I, Robot, a blockbuster film loosely based on Isaac Asimov’s 1950 story collection, portrayed yet another potential robot apocalypse.

For Tegmark, a culture saturated with deep-seated myths about robot apocalypse and evil AI is a problem for the safe and responsible advancement of AI research. This introduces the themes of The Possibility of Multiple Futures and Questions and Controversies about Beneficial AI. He worries that emphasizing the dystopian possibilities that highlight our greatest fears might do more harm than good. For instance, it can reinforce myths that exaggerate the reality of AI capabilities: The problem is not that AIs will turn evil but that they may develop goals misaligned with human goals, thereby causing conflict. Another myth centers on embodied robots, even though a robot apocalypse is highly unlikely; disembodied programs with internet access are more likely to achieve superintelligence. As we become more accustomed to having AI in our lives, and more conscious of the potential forms of its consciousness, we should expect fewer sci-fi stories in which AIs are evil overlords.

The popularity and resonance of these various film franchises reveal a deep-seated unease about human-machine futures. As real AIs become ever more advanced, and the possibility of AGI (artificial general intelligence) appears increasingly realistic, more nuanced depictions of human-AI relationships have emerged in pop culture. In films like Spielberg’s A.I. (2001), the Pixar feature WALL-E (2008), and Spike Jonze’s Her (2013), AIs have more positive and complicated relationships with humans. In television series like Black Mirror (2011-2019) and Westworld (2016), AIs are depicted in multitudinous ways, reflecting our fundamental uncertainty about their nature. In general, 21st-century AI representation has tended toward complicated depictions of potential machine consciousness: “If artificial consciousness is possible,” Tegmark writes, “then the space of possible AI experiences is likely to be huge compared to what we humans can experience, spanning a vast spectrum of qualia and timescales—all sharing a feeling of having free will” (315). This feeling of free will, he suggests, makes the moral status of AIs worthy of consideration.

Philosophical Context: Effective Altruism and Longtermism

Though Tegmark is primarily known for his work in cosmology, he is also a vocal member of a community of “effective altruists.” Effective altruism is a philosophical and activist movement that holds that a moral agent ought to reason impartially about the greatest good for the greatest number of sentient creatures and then pursue that outcome by the most effective ethically permissible means. In this way, effective altruism is indebted to utilitarian philosophers like Peter Singer, who have advocated for resource redistribution and animal rights.

Effective altruism may not sound controversial, but in practice it can require actions that go against society’s norms. One might do far more good by contributing all personal earnings (beyond those necessary for survival) to malnourished families in East Africa than by donating equal sums to a local animal shelter or food bank. From this perspective, it is potentially wrong to save money for your child’s education or a family vacation if that money could help relieve suffering or prevent catastrophe elsewhere. Without regard for national affiliation or politics, effective altruists consider all persons across the world equally when making any moral decision.

In 2015, the burgeoning movement saw the publication of Singer’s The Most Good You Can Do and William MacAskill’s Doing Good Better, both of which blend philosophy and activism in an attempt to explain the basics of life as an effective altruist. In the years since the publication of Life 3.0 in 2017, the effective altruism movement has spawned a related philosophical/activist enterprise commonly labelled “longtermism.” Longtermist philosophy combines the utilitarian underpinnings of effective altruism with the population ethics of Oxford philosopher Derek Parfit and the futurist visions of Oxford philosopher Nick Bostrom; Oxford is the hub of the longtermist enterprise, as it houses the Global Priorities Institute, a research center on effective altruism. Longtermism is the ethical view that altruistic behavior should be directed at the long-term future of intelligent life, granting the lives of intelligent creatures in the far future equal moral consideration alongside those of present-day human beings and other life forms.

Whereas effective altruists and utilitarians like Singer focus their efforts on preventing and relieving the suffering of real human beings and animals, those who take the step to longtermism engage moral problems that could impact the far future. From this perspective, existential risks to the future of humanity and intelligent life as we know it take on much greater moral weight. Tegmark’s nonprofit organization, the Future of Life Institute, exemplifies this approach: The four “cause areas” that the FLI presently works on are nuclear weapons, climate change, biotechnology, and, of course, artificial intelligence. Tegmark has lectured on the relationship between existential risk and effective altruism, and Life 3.0 reflects his concerns for the long-term future of humanity and AI. As he writes in the epilogue, “we should be imagining positive futures not only for ourselves, but also for society and for humanity itself” (334).

Many key figures associated with longtermism attended Tegmark’s 2017 Asilomar conference, as described in Life 3.0. These include the director of the Future of Humanity Institute, Nick Bostrom (who is frequently cited in Life 3.0), William MacAskill (author of What We Owe the Future), Toby Ord (author of The Precipice), and Elon Musk, tech entrepreneur and CEO of Tesla and SpaceX. As a cause area for the FLI, AI safety is an ethical imperative for the long-term future. As Ord and others have suggested, humanity may currently be at a point where decisions about the development of AI programs could have long-term, downstream consequences for human existence and the future of life as we know it.
