{"id":162612,"date":"2025-09-23T04:35:09","date_gmt":"2025-09-23T04:35:09","guid":{"rendered":"https:\/\/www.newsbeep.com\/au\/162612\/"},"modified":"2025-09-23T04:35:09","modified_gmt":"2025-09-23T04:35:09","slug":"if-anyone-builds-it-everyone-dies-review-how-ai-could-kill-us-all-books","status":"publish","type":"post","link":"https:\/\/www.newsbeep.com\/au\/162612\/","title":{"rendered":"If Anyone Builds it, Everyone Dies review \u2013 how AI could kill us all | Books"},"content":{"rendered":"<p class=\"dcr-130mj7b\">What if I told you I could stop you worrying about climate change, and all you had to do was read one book? Great, you\u2019d say, until I mentioned that the reason you\u2019d stop worrying was because the book says our species only has a few years before it\u2019s wiped out by superintelligent AI anyway.<\/p>\n<p class=\"dcr-130mj7b\">We don\u2019t know what form this extinction will take exactly \u2013 perhaps an energy-hungry AI will let the millions of fusion power stations it has built run hot, boiling the oceans. Maybe it will want to reconfigure the atoms in our bodies into something more useful. There are many possibilities, almost all of them bad, say Eliezer Yudkowsky and Nate Soares in If Anyone Builds It, Everyone Dies, and who knows which will come true. But just as you can predict that an ice cube dropped into hot water will melt without knowing where any of its individual molecules will end up, you can be sure an AI that\u2019s smarter than a human being will kill us all, somehow.<\/p>\n<p class=\"dcr-130mj7b\">This level of confidence is typical of Yudkowsky, in particular. He has been warning about the existential risks posed by technology for years on the website he helped to create, <a href=\"http:\/\/lesswrong.com\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">LessWrong.com<\/a>, and via the Machine Intelligence Research Institute he founded (Soares is the current president). Despite not graduating high school or university, Yudkowsky is highly influential in the field, and a celebrity in the world of very bright young men arguing with each other online (as well as the author of a 600,000-word <a href=\"https:\/\/hpmor.com\/\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">work of fanfic<\/a> called Harry Potter and the Methods of Rationality). Colourful, annoying, polarising. \u201cPeople become clinically depressed reading your crap,\u201d lamented leading researcher Yann LeCun during one <a href=\"https:\/\/x.com\/ylecun\/status\/1650288972956946433\" data-link-name=\"in body link\" rel=\"nofollow\">online spat<\/a>. 
But, as chief scientist at Meta, who is he to talk?

And while Yudkowsky and Soares may be unconventional, their warnings are similar to those of Geoffrey Hinton, the Nobel-winning "godfather of AI", and Yoshua Bengio, the world's most-cited computer scientist (https://www.adscientificindex.com/citation-ranking/?subject=Engineering+%26+Technology+%2F+Computer+Science), both of whom signed up to the statement (https://aistatement.com/) that "mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war".

As a clarion call, If Anyone Builds It, Everyone Dies is well timed. Superintelligent AI doesn't exist yet, but in the wake of the ChatGPT revolution, investment in the datacentres that would power it is now counted in the hundreds of billions. This amounts to "the biggest and fastest rollout of a general purpose technology in history", according to the FT's John Thornhill (https://www.ft.com/content/a76f238d-5543-4c01-9419-52aaf352dc23). Meta alone will spend as much as $72bn (£54bn) on AI infrastructure this year, and the achievement of superintelligence is now Mark Zuckerberg's explicit goal.

Not great news, if you believe Yudkowsky and Soares. But why should we? Despite the complexity of its subject, If Anyone Builds It, Everyone Dies is as clear as its conclusions are hard to swallow. Where the discussions become more technical, mainly in passages dealing with AI model training and architecture, it's still straightforward enough for readers to grasp the basic facts.

Among these is that we don't really understand how generative AI works. In the past, computer programs were hand-coded – every aspect of them was designed by a human. In contrast, the latest models aren't "crafted", they're "grown". We don't understand, for example, how ChatGPT's ability to reason emerged from it being shown vast amounts of human-generated text. Something fundamentally mysterious happened during its incubation. This places a vital part of AI's functioning beyond our control and means that, even if we can nudge it towards certain goals such as "be nice to people", we can't determine how it will get there.

That's a problem, because it means that AI will inevitably generate its own quirky preferences and ways of doing things, and these alien predilections are unlikely to be aligned with ours. (This is, it's worth noting, entirely separate from the question of whether AIs might be "sentient" or "conscious". Being set goals, and taking actions in the service of them, is enough to bring about potentially dangerous behaviour.) In any case, Yudkowsky and Soares point out that tech companies are already trying hard to build AIs that do things on their own initiative, because businesses will pay more for tools that they don't have to supervise.
If an "agentic" AI like this were to gain the ability to improve itself, it would rapidly surpass human capabilities in practically every area. Assuming that such a superintelligent AI valued its own survival – why shouldn't it? – it would inevitably try to prevent humans from developing rival AIs or shutting it down. The only sure-fire way of doing that is shutting us down.

What methods would it use? Yudkowsky and Soares argue that these could involve technology we can't yet imagine, and which may strike us as very peculiar. They liken us to Aztecs sighting Spanish ships off the coast of Mexico, for whom the idea of "sticks they can point at you to make you die" – AKA guns – would have been hard to conceive of.

Nevertheless, in order to make things more convincing, they have a go. In the part of the book that most resembles sci-fi, they set out an illustrative scenario involving a superintelligent AI called Sable. Developed by a major tech company, Sable spreads through the internet to every corner of civilisation, recruiting human stooges through the most persuasive version of ChatGPT imaginable, before destroying us with synthetic viruses and molecular machines. It's outlandish, of course – but the Aztecs would've said the same about muskets and Catholicism.

Yudkowsky and Soares present their case with such conviction that it's easy to emerge from this book ready to cancel your pension contributions. The glimmer of hope they offer – and it's low wattage – is that doom can be averted if the entire world agrees to shut down advanced AI development as soon as possible. Given the commercial and strategic incentives, and the current state of political leadership, this seems a little unlikely.

The crumbs of hope we are left to scrabble for, then, are indications that they may not be right, either that superintelligence is on its way, or that its creation equals our annihilation.

There are certainly moments in the book when the confidence with which an argument is presented outstrips its strength. A small example: as an illustration of how AI can develop strange, alien preferences, the authors offer up the fact that some large language models find it hard to interpret sentences without full stops. "Human thoughts don't work like that," they write. "We wouldn't struggle to comprehend a sentence that ended without period." But that's not really true; humans often rely on markers at the end of sentences in order to interpret them correctly. We learn language via speech, so they're not dots on the page but "prosodic" features (https://en.wikipedia.org/wiki/Prosody_(linguistics)) like intonation: think of the difference between a rising and falling tone at the end of a phrase such as "he said he was coming".
If text-trained AI leans heavily on punctuation to figure out what's going on, that shows its thought processes are analogous, not alien, to human ones.

And for writers steeped in the hyper-rational culture of LessWrong, Yudkowsky and Soares exhibit more than a touch of confirmation bias. "History," they write, "is full of … examples of catastrophic risk being minimised and ignored," from leaded petrol to Chornobyl. But what about predictions of catastrophic risk being proved wrong? History is full of those, too, from Malthus's population apocalypse to Y2K. Yudkowsky himself once claimed that nanotechnology would destroy humanity "no later than 2010" (https://web.archive.org/web/20070708235912/https://www.yudkowsky.net/singularity.html).

The problem is that you can be overconfident, inconsistent, a serial doom-monger, and still be right. It's important to be aware of our own motivated reasoning when considering the arguments presented here; we have every incentive to disbelieve them.

And while it's true that they don't represent the scientific consensus, this is a rapidly changing, poorly understood field. What constitutes intelligence, what constitutes "super", whether intelligence alone is enough to ensure world domination – all of this is furiously debated.

At the same time, the consensus that does exist is not particularly reassuring. In a 2024 survey of 2,778 AI researchers (https://aiimpacts.org/wp-content/uploads/2023/04/Thousands_of_AI_authors_on_the_future_of_AI.pdf), the median probability placed on "extremely bad outcomes, such as human extinction" was 5%. Worryingly, "having thought more (either 'a lot' or 'a great deal') about the question was associated with a median of 9%, while having thought 'little' or 'very little' was associated with a median of 5%".

Yudkowsky has been thinking about the problem for most of his adult life. The fact that his prediction sits north of 99% might reflect a kind of hysterical monomania, or an especially thorough engagement with the problem. Whatever the case, it feels like everyone with an interest in the future has a duty to read what he and Soares have to say.

If Anyone Builds It, Everyone Dies by Eliezer Yudkowsky and Nate Soares is published by Bodley Head (£22). To support the Guardian, order your copy at guardianbookshop.com (https://www.guardianbookshop.com/if-anyone-builds-it-everyone-dies-9781847928924).
Delivery charges may apply.