“The Terminator” as non-fiction
If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All by Eliezer Yudkowsky and Nate Soares (Little, Brown and Company, 2025)
Reviewed by Christian McNamara
In August 1999, I began my senior year at Hillsborough High School in Tampa, Florida, dimly aware of (and altogether unconcerned about, in a very teenage way) the impending arrival of the so-called Y2K problem. For younger readers not fortunate enough to have lived through the heady days of the late 1990s, a few words of explanation may be in order. In the early era of computing during the 1960s and 1970s, computer storage was monstrously expensive. An amount of memory that today you could pay for with the contents of the “take a penny, leave a penny” tray at your local gas station would have set you back millions of dollars. To cut costs, programmers developed systems that abbreviated four-digit years to their final two digits (e.g., “1977” would be stored as “77”). Left largely unconsidered was the question of how these systems would interpret the eventual arrival of the year 2000. Would computers think they’d time traveled back to the year 1900? Crash? Spontaneously combust?
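For readers who want the ambiguity made concrete, here is a minimal, purely illustrative sketch in Python (hypothetical, and obviously not drawn from any actual legacy system, most of which were written in languages like COBOL) of how two-digit year arithmetic misbehaves at the century boundary:

    # Hypothetical illustration of the Y2K two-digit-year problem.
    # Legacy records stored only the last two digits of the year to save space.
    def years_elapsed(start_yy, end_yy):
        # Naive two-digit subtraction, as many legacy systems performed it.
        return end_yy - start_yy

    # A customer born in 1977, processed in 1999: the age computes correctly.
    print(years_elapsed(77, 99))  # 22

    # The same customer processed in 2000 (stored as "00"): the arithmetic
    # breaks, behaving as though time had run backward to 1900.
    print(years_elapsed(77, 0))   # -77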
Although a few early warnings about the possible consequences of the two-digit approach had been raised as far back as the 1960s, it wasn’t until the mid-90s that the world really started to pay attention. Task forces were formed. Legislation was passed. Survivalist types began stockpiling canned goods and ammunition. And after several years of hand-wringing, January 1, 2000, finally rolled around and…mostly nothing bad happened. Contrary to some of the more dire predictions made at the time, planes did not suddenly start falling from the sky. I got on with senior year and graduated in May.
Shortly thereafter, in September 2000, just as I was beginning my freshman year in college, the American decision theorist Eliezer Yudkowsky founded the Singularity Institute for Artificial Intelligence (SIAI) to pursue the goal of developing machine-based superintelligence. A child genius who never attended high school, much less college, Yudkowsky initially believed that artificial superintelligence (ASI) would usher in a utopian world in which humans would overcome their own mortality and travel to the farthest reaches of the universe. But by 2001, he had concluded that such an ASI would not necessarily be friendly to humanity. SIAI shifted its focus to addressing the existential risks Yudkowsky now believed were associated with ASI and ultimately renamed itself the Machine Intelligence Research Institute (MIRI) to reflect this change. Thus was born the field of AI alignment, the term given to the body of research attempting to ensure that AI systems act in accordance with human values and goals. And how, you might reasonably ask, is that important work going? In September 2025, Yudkowsky and his MIRI colleague Nate Soares published a new book cheerily titled If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All. So, you know, not great, Bob.
Yudkowsky and Soares are aware that such an arresting title is liable to be dismissed as hyperbole, an exaggeration intended to draw attention to a problem that, while serious, is nowhere near so dire. Yet in the book’s opening pages, they explicitly disclaim any such embellishment. The authors seem genuinely convinced that on our present course, the human race will develop ASI (which they define as an AI so powerful that it exceeds human capabilities at almost every mental task) and that said ASI will then end the human race. They believe, moreover, that this will happen in the very near future. In one passage, they allow for the possibility that we may have “a whole decade left on the clock.” And they’ve put their money where their mouths are: MIRI does not offer 401(k) matching to its employees because it views saving for retirement as moot.
As alarming as the phrase “everyone dies” is, the most important word in Yudkowsky and Soares’ title may be “anyone.” The narrative being relentlessly pushed by Silicon Valley is that the United States is locked in an AI arms race with China, with the safety of the world dependent on the “good guys” beating the “bad guys” to be the first to develop ASI. According to this view, anything that acts as even a slight check on Silicon Valley’s headlong dive into ASI threatens to doom humanity by giving the Chinese an edge. American policymakers have largely embraced this “deregulate or die” message, as with the Trump Administration’s recent executive order purporting to preempt any state regulation of AI. But for Yudkowsky and Soares, the existential threat posed by ASI does not depend on who creates it. This is because ASI will develop preferences of its own that are unknown to and beyond the control of its human developers, a tendency for which evidence already exists in current AI systems. And, they argue, ASI will seek to repurpose the Earth’s matter and energy toward these “weird and alien ends” rather than allowing those resources to be used for the survival of the human race.
Like detectives in a murder mystery, Yudkowsky and Soares describe this desired repurposing as ASI’s motive for killing all of humanity. There remains the question of opportunity. As they acknowledge, “[i]t doesn’t matter what AIs want unless they’re able to get it.” But in a world of “smart” devices in which even our refrigerators are connected to the Internet, the possibilities for mayhem are limitless. As they document, existing AIs have already succeeded in accessing systems they were not intended to reach, as when OpenAI’s o1 model hacked into the program hosting a safety check the model was undergoing. Yudkowsky and Soares are clear that while they view the endpoint (human extinction) as an “easy call,” the specific pathway by which it will happen is impossible to predict.
At this point in the review, I should probably come clean about the fact that I have nowhere near the technical acumen that would be required to independently evaluate all of the claims in this book. Dr. Tubb was among my favorite teachers at Hillsborough High, but my single year in his introductory computer science class did not prevent my brain from melting slightly while reading even the book’s dumbed-down descriptions of how AI works (to my horror, there is an online supplement with more complex discussions). Yudkowsky and Soares point to the many other prominent researchers who share their concerns about the potentially catastrophic consequences of ASI. But they also acknowledge that there are others who dismiss their claims. How are we as laypeople supposed to decide between the competing arguments of experts?
Yet perhaps the most essential point that Yudkowsky and Soares make is that we don’t have to decide between these competing arguments in order to reject the approach to AI that the world is now taking. That it is even possible to have a legitimate debate about whether or not ASI will result in human extinction means that we should be exercising far, far more caution in its development than is currently the case. As they write, “[i]t’s not enough for us to be wrong; we have to be so wrong that a lack of disaster is callable.” To return to the year 2000: while the exact role that Y2K preparations played in rendering the problem a non-issue (as opposed to the threat having been overstated in the first place) is not clear-cut, would anyone really argue that, in the ex ante zone of uncertainty, governments and corporations should not have done what they did: take the problem seriously and devote enormous amounts of time and money to solving it?
What would taking the existential threat of ASI seriously look like? According to Yudkowsky and Soares, the first step should be gathering up all of the computing power capable of producing more powerful AIs and consolidating it into places where it can be monitored by observers to ensure that it is not used to develop ASI. They set the threshold for such power at the admittedly arbitrary level of 100,000 GPUs, the powerful computer chips used to drive the development of AI. And if a rogue state or organization is found to have a large unexplained use of electricity that could signal the existence of an underground ASI lab? Per Yudkowsky and Soares, “they get a somberly written letter from multiple nuclear powers about next steps.” In other words, the doctrine of mutually assured destruction, updated for the AI age. At a time when the world’s fragile nuclear armistice is itself increasingly in doubt, the notion that we will successfully come together around a framework to address a danger that most policymakers don’t even appear to believe is real seems risible. Which means that if Yudkowsky and Soares are correct about the nature of the threat posed by ASI and the inevitability of its development absent restraints on doing so, then it is very likely a question not of whether the human race will end, but when.
If Anyone Builds It, Everyone Dies should be widely read and its message taken seriously. But as with the “deregulate or die” narrative, the “ASI equals extinction” scenario also requires caution. A storyline about the ability of AI to wipe out humanity is itself an endorsement of the technology’s potential. A technology powerful enough to kill us all would presumably also be capable of achieving Yudkowsky’s initial dream of fostering human immortality and interstellar travel. If we could just solve the existential threat that he and Soares have identified, would not such a technology be worth pursuing even in the face of the considerable (but non-extinction-level) costs that AI is already imposing? Such costs are almost entirely absent from If Anyone Builds It, Everyone Dies, but they have been extensively documented in other works such as Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI by Karen Hao. These include everything from the environmental degradation and soaring utility costs caused by data centers to the exploitation of contract labor in the Global South, where workers are paid starvation-level piece rates to do the data annotation necessary for AI to function. And that is to say nothing of the threat AI poses to the relationships and creativity that are the essence of what it means to be human. The risk is that by focusing solely on headline-grabbing claims about the potential extinction threat of AI, we ignore all of the other bad things that are already happening. Indeed, some see this as the deliberate purpose of such claims. In a recent piece, the academic James O’Sullivan argues that “[a]rtificial superintelligence narratives perform very intentional political work, drawing attention from present systems of control toward distant catastrophe, shifting debate from material power to imagined futures.” What one hopes, then, is that the concerns raised by If Anyone Builds It, Everyone Dies become but one element of a more comprehensive critique of the AI project, one that takes account not only of hypothetical costs and benefits but also of the limitations and harms we can already observe, and one that society undertakes before it is too late to change course.
Christian McNamara recently relocated to his home state of Florida with his wife and two children. He has worked as a researcher and lecturer at the Yale School of Management, an attorney, a social sector consultant, and the executive director of a small youth development non-profit. He is a graduate of the University of Notre Dame and Harvard Law School.