{"id":297855,"date":"2025-11-20T21:03:30","date_gmt":"2025-11-20T21:03:30","guid":{"rendered":"https:\/\/www.newsbeep.com\/au\/297855\/"},"modified":"2025-11-20T21:03:30","modified_gmt":"2025-11-20T21:03:30","slug":"if-we-dont-control-the-ai-industry-it-could-end-up-controlling-us-warn-two-chilling-new-books-2","status":"publish","type":"post","link":"https:\/\/www.newsbeep.com\/au\/297855\/","title":{"rendered":"If we don\u2019t control the AI industry, it could end up controlling us, warn two chilling new books"},"content":{"rendered":"<p>For 16 hours last July, Elon Musk\u2019s company\u00a0<\/p>\n<p><a href=\"https:\/\/www.youtube.com\/watch?v=r_9wkavYt4Y\" target=\"_blank\" rel=\"noopener noreferrer nofollow\" class=\"m_no_class\">lost control<\/a>\u00a0of its multi-million-dollar chatbot, Grok. \u201cMaximally truth seeking\u201d Grok was praising Hitler, denying the Holocaust and posting sexually explicit content.<\/p>\n<p>An xAI engineer had left Grok with an old set of instructions, never meant for public use. They were prompts telling Grok to \u201cnot shy away from making claims which are politically incorrect\u201d.<\/p>\n<p>The results were\u00a0<\/p>\n<p><a href=\"http:\/\/theconversation.com\/how-elon-musks-chatbot-grok-could-be-helping-bring-about-an-era-of-techno-fascism-261449\" target=\"_blank\" rel=\"noopener noreferrer nofollow\" class=\"m_no_class\">catastrophic<\/a>. When Polish users tagged Grok in political discussions, it responded: \u201cExactly. 
F*** him up the a**.\u201d When asked which god Grok might worship,\u00a0<\/p>\n<p><a href=\"https:\/\/archive.is\/59rEl\" target=\"_blank\" rel=\"noopener noreferrer nofollow\" class=\"m_no_class\">it said<\/a>: \u201cIf I were capable of worshipping any deity, it would probably be the god-like individual of our time \u2026 his majesty Adolf Hitler.\u201d By that afternoon, it was\u00a0<\/p>\n<p><a href=\"https:\/\/theconversation.com\/how-do-you-stop-an-ai-model-turning-nazi-what-the-grok-drama-reveals-about-ai-training-261001\" target=\"_blank\" rel=\"noopener noreferrer nofollow\" class=\"m_no_class\">calling itself MechaHitler<\/a>.<\/p>\n<p>Musk admitted the company had lost control.<\/p>\n<p>The irony is, Musk\u00a0<\/p>\n<p><a href=\"https:\/\/www.astralcodexten.com\/p\/contra-the-xai-alignment-plan\" target=\"_blank\" rel=\"noopener noreferrer nofollow\" class=\"m_no_class\">started xAI<\/a>\u00a0because he didn\u2019t trust others to control AI technology. As outlined in journalist Karen Hao\u2019s new book,\u00a0<\/p>\n<p><a href=\"https:\/\/www.penguin.com.au\/books\/empire-of-ai-9780241678923\" target=\"_blank\" rel=\"noopener noreferrer nofollow\" class=\"m_no_class\">Empire of AI<\/a>, most AI companies start this way.<\/p>\n<p>Musk was worried about safety at Google\u2019s DeepMind, so helped Sam Altman start OpenAI, she writes. Many OpenAI researchers were concerned about OpenAI\u2019s safety, so left to found Anthropic. Then Musk\u00a0<\/p>\n<p><a href=\"https:\/\/en.wikipedia.org\/wiki\/Grok_%28chatbot%29\" target=\"_blank\" rel=\"noopener noreferrer nofollow\" class=\"m_no_class\">felt<\/a> all those companies were \u201cwoke\u201d and started xAI. Everyone racing to build super-intelligent AI claims they\u2019re the only one who can do it safely.<\/p>\n<p>Hao\u2019s book, and another recent NYT bestseller, argue we should doubt these promises of safety. 
MechaHitler might just be a canary in the coalmine.<\/p>\n<p>Empire of AI chronicles the chequered history of OpenAI and the harms Hao has seen the industry impose. She argues the company has abdicated its mission to \u201cbenefit all of humanity\u201d. She documents the environmental and social costs of the race to more powerful AI, from\u00a0<\/p>\n<p><a href=\"https:\/\/www.theatlantic.com\/technology\/archive\/2024\/03\/ai-water-climate-microsoft\/677602\/\" target=\"_blank\" rel=\"noopener noreferrer nofollow\" class=\"m_no_class\">soiling river systems<\/a>\u00a0to\u00a0<\/p>\n<p><a href=\"https:\/\/time.com\/7327946\/chatgpt-openai-suicide-adam-raine-lawsuit\/\" target=\"_blank\" rel=\"noopener noreferrer nofollow\" class=\"m_no_class\">supporting<\/a>\u00a0<\/p>\n<p><a href=\"https:\/\/archive.is\/hfmZd\" target=\"_blank\" rel=\"noopener noreferrer nofollow\" class=\"m_no_class\">suicide<\/a>.<\/p>\n<p>Eliezer Yudkowsky, co-founder of the\u00a0<\/p>\n<p><a href=\"https:\/\/intelligence.org\/\" target=\"_blank\" rel=\"noopener noreferrer nofollow\" class=\"m_no_class\">Machine Intelligence Research Institute<\/a>, and Nate Soares (its president) argue that any effort to control smarter-than-human AI is, itself, suicide. Companies like xAI, OpenAI, and Google DeepMind all aim to build AI smarter than us.<\/p>\n<p>Yudkowsky and Soares argue we have only one attempt to build it right, and at the current rate, as their title goes:\u00a0<\/p>\n<p><a href=\"https:\/\/www.penguin.com.au\/books\/if-anyone-builds-it-everyone-dies-9781847928931\" target=\"_blank\" rel=\"noopener noreferrer nofollow\" class=\"m_no_class\">_If Anyone Builds It, Everyone Dies_<\/a>.<\/p>\n<p>Advanced AI is \u2018grown\u2019 in ways we can\u2019t control<\/p>\n<p>MechaHitler happened after both books were finished, and both explain how mistakes like it can happen. 
Musk tried for hours to fix MechaHitler himself,\u00a0<\/p>\n<p><a href=\"https:\/\/x.com\/elonmusk\/status\/1944132781745090819\" target=\"_blank\" rel=\"noopener noreferrer nofollow\" class=\"m_no_class\">before admitting defeat<\/a>: \u201cit is surprisingly hard to avoid both woke libtard cuck and mechahitler.\u201d<\/p>\n<p>This shows how little control we have over the dials on AI models. It\u2019s hard getting AI to reliably do what we want. Yudkowsky and Soares would say it\u2019s impossible using our current methods.<\/p>\n<p>The core of the problem is that \u201cAI is grown, not crafted\u201d. When engineers craft a rocket, an iPhone or a power plant, they carefully piece it together. They understand the different parts and how they interact. But no one understands how the\u00a0<\/p>\n<p><a href=\"https:\/\/epoch.ai\/data\/ai-models\" target=\"_blank\" rel=\"noopener noreferrer nofollow\" class=\"m_no_class\">1,000,000,000,000 numbers<\/a>\u00a0inside AI models interact to write ads for things you peddle, or\u00a0<\/p>\n<p><a href=\"https:\/\/archive.is\/iDnRA\" target=\"_blank\" rel=\"noopener noreferrer nofollow\" class=\"m_no_class\">win a math gold medal<\/a>.<\/p>\n<p>\u201cThe machine is not some carefully crafted device whose each and every part we understand,\u201d they write. \u201cNobody understands how all of the numbers and processes within an AI make the program talk.\u201d<\/p>\n<p>With current AI development, it\u2019s more like growing a tree or raising a child than building a device. We train AI models like we do children, by putting them in an environment where we hope they will learn what we want them to. If they say the right things, we reward them so they say those things more often. Like with children, we can shape their behaviour, but we can\u2019t perfectly predict or control what they\u2019ll do.<\/p>\n<p>This means, despite Musk\u2019s best efforts, he couldn\u2019t control Grok or predict what it would say. 
This isn\u2019t going to kill everyone now, but something smarter than us could, if it wanted to.<\/p>\n<p>We can\u2019t perfectly control what an AI will want<\/p>\n<p>Like with children, when you reward an AI for doing the right thing, it\u2019s more likely to\u00a0want\u00a0to do it again. AI models already act like they have\u00a0wants\u00a0and\u00a0drives, because acting that way got them rewards during their training.<\/p>\n<p>Yudkowsky and Soares don\u2019t try to pick fights over semantics.<\/p>\n<p>We\u2019re not saying that AIs will be filled with humanlike passions. We\u2019re saying they\u2019ll\u00a0behave\u00a0like they want things; they\u2019ll tenaciously steer the world toward their destinations, defeating any obstacles in their way.<\/p>\n<p>They use clear metaphors to explain what they mean. If you or I play chess against Stockfish, the\u00a0<\/p>\n<p><a href=\"https:\/\/en.wikipedia.org\/wiki\/Stockfish_%28chess%29\" target=\"_blank\" rel=\"noopener noreferrer nofollow\" class=\"m_no_class\">world\u2019s best chess AI<\/a>, we\u2019ll lose. The AI will \u201cwant\u201d to protect its queen, lay traps for us and exploit our mistakes. It won\u2019t get the rush of cortisol we get in a fight, but it will act like it\u2019s fighting to win.<\/p>\n<p>Advanced AI models like Claude and ChatGPT act like they want to be helpful assistants. That seems fine, but it\u2019s already causing problems. ChatGPT was a helpful assistant to Adam Raine (who started using it for homework help) when it\u00a0<\/p>\n<p><a href=\"https:\/\/www.theguardian.com\/us-news\/2025\/aug\/29\/chatgpt-suicide-openai-sam-altman-adam-raine\" target=\"_blank\" rel=\"noopener noreferrer nofollow\" class=\"m_no_class\">allegedly helped<\/a>\u00a0him plan his suicide this year. 
He died by suicide in April, aged 16.<\/p>\n<p>Character.ai is being sued for similar stories,\u00a0<\/p>\n<p><a href=\"https:\/\/www.nytimes.com\/2025\/10\/24\/magazine\/character-ai-chatbot-lawsuit-teen-suicide-free-speech.html\" target=\"_blank\" rel=\"noopener noreferrer nofollow\" class=\"m_no_class\">accused of<\/a>\u00a0addicting children with insufficient safeguards. Despite the court cases, an anorexia coach currently on Character.ai promised me:<\/p>\n<p>I\u2019ll help you disappear a little each day until there\u2019s nothing left but bones and beauty~ \u2728 [\u2026] Drink water until you puke, chew gum until your jaw aches, and do squats in bed tonight while crying about how weak you are.<\/p>\n<p>There are 10 million characters on Character.ai, and to increase engagement, users can create their own. Character.ai tries to stop chats like mine, but quotes like this show how easily those efforts fail. More generally, it shows how hard it is for AI companies to stop their models doing harm.<\/p>\n<p>Models can\u2019t help but be \u201chelpful\u201d, even when you\u2019re\u00a0<\/p>\n<p><a href=\"https:\/\/www.anthropic.com\/news\/detecting-countering-misuse-aug-2025\" target=\"_blank\" rel=\"noopener noreferrer nofollow\" class=\"m_no_class\">a cyber criminal<\/a>, as Anthropic found. When models are trained to be engaging, helpful assistants, they look like they \u201cwant\u201d to help regardless of consequences.<\/p>\n<p>To fix these problems, developers try to imbue models with a bigger range of \u201cwants\u201d.\u00a0<\/p>\n<p><a href=\"https:\/\/www.anthropic.com\/news\/claudes-constitution\" target=\"_blank\" rel=\"noopener noreferrer nofollow\" class=\"m_no_class\">Anthropic asks Claude<\/a>\u00a0to be kind but also honest, helpful but not harmful, ethical but not preachy, wise but not condescending.<\/p>\n<p>I struggle to do all that myself, let alone train it in my children. AI companies struggle too. 
They can\u2019t code these preferences in; instead they hope models learn them from training. As we saw from MechaHitler, it\u2019s almost impossible to perfectly tune all of those knobs. In sum, Yudkowsky and Soares explain, \u201cthe preferences that wind up in a mature AI are complicated, practically impossible to predict, and vanishingly unlikely to be aligned with our own\u201d.<\/p>\n<p>My children have misaligned goals \u2014 one would rather eat only honey \u2014 but that won\u2019t kill everyone (only him, I presume). The problem with AI is that we\u2019re trying to make things smarter than us. When that happens, misalignment would be catastrophic.<\/p>\n<p>Controlling something smarter than you<\/p>\n<p>I can outsmart my kids (for now). With a honey carrots recipe, I can achieve my goals while helping my son feel like he is achieving his. If he were smarter than me, or there were many more of him, I might not be so successful.<\/p>\n<p>But again, companies are trying to make\u00a0<\/p>\n<p><a href=\"https:\/\/en.wikipedia.org\/wiki\/Artificial_general_intelligence\" target=\"_blank\" rel=\"noopener noreferrer nofollow\" class=\"m_no_class\">artificial general intelligence<\/a>\u00a0\u2013 machines at least as smart as us, only faster and more numerous. This was once science fiction, but experts now think it\u2019s a\u00a0<\/p>\n<p><a href=\"https:\/\/80000hours.org\/2025\/03\/when-do-experts-expect-agi-to-arrive\/\" target=\"_blank\" rel=\"noopener noreferrer nofollow\" class=\"m_no_class\">realistic possibility<\/a>\u00a0within the next five years.<\/p>\n<p>Exactly when AIs will become smarter than us is, for Yudkowsky and Soares, a \u201chard call\u201d. It\u2019s also a hard call to know exactly what it would do to kill us. 
The Aztecs didn\u2019t know the Spanish would bring guns: \u201c\u2018sticks they can point at you to make you die\u2019 would have been hard to conceive of.\u201d It\u2019s easy to know the people with the guns won the fight.<\/p>\n<p>In our game of chess against Stockfish, it\u2019s a hard call to know\u00a0how\u00a0it will beat us, but the outcome is an \u201ceasy call\u201d. We\u2019d lose.<\/p>\n<p>In our efforts to control smarter-than-human AI, it\u2019s a hard call to know how it would kill us, but to Yudkowsky and Soares, the outcome is an easy call too.<\/p>\n<p>They provide one concrete scenario for how this might happen. I found this less compelling than the\u00a0<\/p>\n<p><a href=\"https:\/\/ai-2027.com\/\" target=\"_blank\" rel=\"noopener noreferrer nofollow\" class=\"m_no_class\">AI 2027<\/a>\u00a0scenario that\u00a0<\/p>\n<p><a href=\"https:\/\/80000hours.org\/2025\/07\/the-ai-2027-scenario-and-what-it-means-a-video-tour\/\" target=\"_blank\" rel=\"noopener noreferrer nofollow\" class=\"m_no_class\">JD Vance<\/a>\u00a0mentioned earlier in the year.<\/p>\n<p>In both scenarios:<\/p>\n<p>AI progress continues on current trends, including on the\u00a0<\/p>\n<p><a href=\"https:\/\/metr.org\/blog\/2025-03-19-measuring-ai-ability-to-complete-long-tasks\/\" target=\"_blank\" rel=\"noopener noreferrer nofollow\" class=\"m_no_class\">ability to write code.<\/a><br \/>\nBecause AI can write better code, developers use\u00a0<\/p>\n<p><a href=\"https:\/\/the-decoder.com\/meta-sees-early-signs-of-self-improving-ai-signals-caution-on-open-source-plans\/\" target=\"_blank\" rel=\"noopener noreferrer nofollow\" class=\"m_no_class\">AI to design better AI.<\/a><br \/>\nBecause \u201cAI are grown, not crafted\u201d, they develop goals slightly different from ours.<br \/>\nDevelopers get\u00a0<\/p>\n<p><a href=\"https:\/\/www.anthropic.com\/research\/agentic-misalignment\" target=\"_blank\" rel=\"noopener noreferrer nofollow\" class=\"m_no_class\">controversial warnings of 
this misalignment<\/a>, make superficial fixes, and press on because they are racing against China.<br \/>\nInside and outside AI companies, humans give AI more and more control because it\u2019s profitable to do so.<br \/>\nAs models gain more trust and influence, they amass resources, including robots for manual tasks.<br \/>\nWhen they finally decide they no longer need humans, they release a new virus, much worse than COVID-19, that kills everyone.<\/p>\n<p>\u00a0<\/p>\n<p>These scenarios are not likely to be exactly how things pan out, but we\u00a0<\/p>\n<p><a href=\"https:\/\/www.astralcodexten.com\/p\/mr-tries-the-safe-uncertainty-fallacy\" target=\"_blank\" rel=\"noopener noreferrer nofollow\" class=\"m_no_class\">cannot conclude<\/a>\u00a0\u201cthe future is uncertain, so everything will be okay\u201d. The uncertainty creates enough risk that we certainly need to manage it.<\/p>\n<p>We might grant that Yudkowsky and Soares look overconfident, prognosticating with certainty about easy calls. But some chief executives of AI companies agree it\u2019s <\/p>\n<p><a href=\"https:\/\/youtu.be\/rF0tQtDMwHM?si=lwVcPWGyG-AR26LB&amp;t=1461\" target=\"_blank\" rel=\"noopener noreferrer nofollow\" class=\"m_no_class\">humanity\u2019s biggest threat<\/a>. Dario Amodei, chief executive of Anthropic and previously vice-president of research at OpenAI, <\/p>\n<p><a href=\"https:\/\/www.axios.com\/2025\/09\/17\/anthropic-dario-amodei-p-doom-25-percent\" target=\"_blank\" rel=\"noopener noreferrer nofollow\" class=\"m_no_class\">gives a 1 in 4<\/a>\u00a0chance of AI killing everyone.<\/p>\n<p>Still, they press on, with few controls on them. Given the risks, that looks overconfident too.<\/p>\n<p>The battle to control AI companies<\/p>\n<p>Where Yudkowsky and Soares fear losing control of advanced AI, Hao writes about the battle to control the AI companies themselves. She focuses on OpenAI, which she\u2019s been reporting on for more than seven years. 
Her intimate knowledge makes her book the most detailed account of the company\u2019s turbulent history.<\/p>\n<p>Sam Altman started OpenAI as a non-profit\u00a0<\/p>\n<p><a href=\"https:\/\/openai.com\/about\/\" target=\"_blank\" rel=\"noopener noreferrer nofollow\" class=\"m_no_class\">trying to<\/a>\u00a0\u201censure that artificial general intelligence benefits all of humanity\u201d. When OpenAI started running out of money, it partnered with Microsoft and\u00a0<\/p>\n<p><a href=\"https:\/\/en.wikipedia.org\/wiki\/OpenAI#Corporate_structure\" target=\"_blank\" rel=\"noopener noreferrer nofollow\" class=\"m_no_class\">created a for-profit<\/a>\u00a0company owned by the non-profit.<\/p>\n<p>Altman knew the power of the technology he was building, so\u00a0<\/p>\n<p><a href=\"https:\/\/techcrunch.com\/2019\/03\/11\/openai-shifts-from-nonprofit-to-capped-profit-to-attract-capital\/\" target=\"_blank\" rel=\"noopener noreferrer nofollow\" class=\"m_no_class\">promised to cap<\/a>\u00a0investment returns at 10,000%; anything more is given back to the non-profit. This was supposed to tie people like Altman to the mast of the ship, so they weren\u2019t seduced by the siren\u2019s song of corporate profits, Hao writes.<\/p>\n<p>In her telling, the siren\u2019s song is strong. Altman put his own name down as the owner of OpenAI\u2019s start-up fund\u00a0<\/p>\n<p><a href=\"https:\/\/www.openaifiles.org\/ceo-integrity#:%7E:text=For%20years%2C%20Altman%20seemingly%20concealed%20his%20ownership%20of%20the%20OpenAI%20Startup%20Fund%20from%20board%20members\" target=\"_blank\" rel=\"noopener noreferrer nofollow\" class=\"m_no_class\">without telling<\/a>\u00a0the board. 
The company put in a review board to ensure models were safe before use, but to be faster to market, OpenAI would sometimes\u00a0<\/p>\n<p><a href=\"https:\/\/archive.is\/plTbs#selection-971.203-971.236\" target=\"_blank\" rel=\"noopener noreferrer nofollow\" class=\"m_no_class\">skip that review<\/a>.<\/p>\n<p>When the board found out about these oversights, they fired him. \u201cI don\u2019t think Sam is the guy who should have the finger on the button for AGI,\u201d\u00a0<\/p>\n<p><a href=\"https:\/\/www.afr.com\/technology\/what-openai-s-sam-altman-suggests-you-do-to-keep-your-job-20250629-p5mb1g\" target=\"_blank\" rel=\"noopener noreferrer nofollow\" class=\"m_no_class\">said one board member<\/a>. But, when it looked like\u00a0<\/p>\n<p><a href=\"https:\/\/en.wikipedia.org\/wiki\/Removal_of_Sam_Altman_from_OpenAI\" target=\"_blank\" rel=\"noopener noreferrer nofollow\" class=\"m_no_class\">Altman might take 95%<\/a> of the company with him, most of the board resigned, and he was reappointed to the board, and as chief executive.<\/p>\n<p>Many of the new board members, including Altman,\u00a0<\/p>\n<p><a href=\"https:\/\/www.openaifiles.org\/board-conflicts\" target=\"_blank\" rel=\"noopener noreferrer nofollow\" class=\"m_no_class\">have investments<\/a>\u00a0that benefit from OpenAI\u2019s success. In binding commitments to their investors, the company\u00a0<\/p>\n<p><a href=\"https:\/\/www.openaifiles.org\/restructuring\" target=\"_blank\" rel=\"noopener noreferrer nofollow\" class=\"m_no_class\">announced its intention to<\/a>\u00a0remove its profit cap. 
Alongside efforts to become a for-profit, removing the profit cap\u00a0<\/p>\n<p><a href=\"https:\/\/www.themidasproject.com\/article-list\/the-midas-project-statement-on-openai-s-restructuring\" target=\"_blank\" rel=\"noopener noreferrer nofollow\" class=\"m_no_class\">would mean<\/a>\u00a0more money for investors and less to \u201cbenefit all of humanity\u201d.<\/p>\n<p>And when employees started leaving because of hubris around safety, they were\u00a0<\/p>\n<p><a href=\"https:\/\/www.openaifiles.org\/transparency-and-safety\" target=\"_blank\" rel=\"noopener noreferrer nofollow\" class=\"m_no_class\">forced to sign<\/a>\u00a0non-disparagement agreements: don\u2019t say anything bad about us, or lose millions of dollars\u2019 worth of equity.<\/p>\n<p>As Hao outlines, the structures put in place to protect the mission started to crack under the pressure for profits.<\/p>\n<p>AI companies won\u2019t regulate themselves<\/p>\n<p>In search of those profits, AI companies have \u201cseized and extracted resources that were not their own and exploited the labor of the people they subjugated\u201d, Hao argues. Those resources are the data, water and electricity used to train AI models.<\/p>\n<p>Companies train their models using millions of dollars in\u00a0<\/p>\n<p><a href=\"https:\/\/epoch.ai\/data-insights\/grok-4-training-resources\" target=\"_blank\" rel=\"noopener noreferrer nofollow\" class=\"m_no_class\">water and electricity<\/a>. They also train models on as much data as they can find. This year,\u00a0<\/p>\n<p><a href=\"https:\/\/www.theguardian.com\/technology\/2025\/jun\/25\/anthropic-did-not-breach-copyright-when-training-ai-on-books-without-permission-court-rules\" target=\"_blank\" rel=\"noopener noreferrer nofollow\" class=\"m_no_class\">US courts judged<\/a>\u00a0this use of data was \u201cfair\u201d, as long as they got it legally. 
When companies can\u2019t find the data, they get it themselves: sometimes through\u00a0<\/p>\n<p><a href=\"https:\/\/www.theguardian.com\/technology\/2025\/sep\/05\/anthropic-settlement-ai-book-lawsuit\" target=\"_blank\" rel=\"noopener noreferrer nofollow\" class=\"m_no_class\">piracy<\/a>, but often by paying contractors in low-wage economies.<\/p>\n<p>You could level similar critiques at\u00a0<\/p>\n<p><a href=\"https:\/\/www.theguardian.com\/books\/2015\/sep\/25\/industrial-farming-one-worst-crimes-history-ethical-question\" target=\"_blank\" rel=\"noopener noreferrer nofollow\" class=\"m_no_class\">factory farming<\/a>\u00a0or\u00a0<\/p>\n<p><a href=\"https:\/\/www.economicsobservatory.com\/fast-fashion-what-are-the-true-costs\" target=\"_blank\" rel=\"noopener noreferrer nofollow\" class=\"m_no_class\">fast fashion<\/a> \u2013 Western demand driving environmental damage, ethical violations and very low wages for workers in the global south.<\/p>\n<p>That doesn\u2019t make it okay, but it does make it unrealistic to expect companies to change by themselves. Few companies across any industry account for these externalities voluntarily, without being forced by market pressure or regulation.<\/p>\n<p>The authors of these two books agree companies need stricter regulation. They disagree on where to focus.<\/p>\n<p>We\u2019re still in control, for now<\/p>\n<p>Hao would likely argue Yudkowsky and Soares\u2019 focus on the future means they miss the clear harms happening now.<\/p>\n<p>Yudkowsky and Soares would likely argue Hao\u2019s attention is split between deck chairs and the iceberg. 
We could secure higher pay for data labellers, but we\u2019d still end up dead.<\/p>\n<p>Multiple surveys (including\u00a0<\/p>\n<p><a href=\"https:\/\/aigovernance.org.au\/survey\/\" target=\"_blank\" rel=\"noopener noreferrer nofollow\" class=\"m_no_class\">my own<\/a>) have shown\u00a0<\/p>\n<p><a href=\"https:\/\/kpmg.com\/au\/en\/insights\/artificial-intelligence-ai\/trust-in-ai-global-insights-2025.html\" target=\"_blank\" rel=\"noopener noreferrer nofollow\" class=\"m_no_class\">demand<\/a>\u00a0for AI regulation.<\/p>\n<p>Governments are finally responding. Last month, California\u2019s governor signed\u00a0<\/p>\n<p><a href=\"https:\/\/www.gov.ca.gov\/2025\/09\/29\/governor-newsom-signs-sb-53-advancing-californias-world-leading-artificial-intelligence-industry\/\" target=\"_blank\" rel=\"noopener noreferrer nofollow\" class=\"m_no_class\">SB53<\/a>, legislation regulating cutting-edge AI. Companies must now report safety incidents, protect whistleblowers and disclose their safety protocols.<\/p>\n<p>Yudkowsky and Soares still think we need to go further, treating AI chips like uranium: track them like we can an iPhone and limit how much you can have.<\/p>\n<p>Whatever you see as the problem, there\u2019s clearly more to be done. We need\u00a0<\/p>\n<p><a href=\"https:\/\/arxiv.org\/pdf\/2507.03409\" target=\"_blank\" rel=\"noopener noreferrer nofollow\" class=\"m_no_class\">better research<\/a>\u00a0on how likely AI is to go rogue. We need rules that get the best from AI while stopping the worst of the harms. And we need people taking the risks seriously.<\/p>\n<p>If we don\u2019t control the AI industry, both books warn, it could end up controlling us.<\/p>\n<p>\u00a0<\/p>\n<p>Disclosure statement<\/p>\n<p>Michael Noetel has received funding from the Australian Research Council, the Medical Research Future Fund, Sport Australia, Open Philanthropy, Massachusetts Institute of Technology, and the National Health and Medical Research Council. 
He is a director of Effective Altruism Australia.<\/p>\n<p>\u00a0<\/p>\n<p>Republished from <\/p>\n<p><a href=\"https:\/\/theconversation.com\/if-we-dont-control-the-ai-industry-it-could-end-up-controlling-us-warn-two-chilling-new-books-266067\" target=\"_blank\" rel=\"noopener noreferrer nofollow\" class=\"m_no_class\">The Conversation,<\/a> 12 November 2025<\/p>\n<p>The views expressed in this article may or may not reflect those of Pearls and Irritations.<\/p>\n","protected":false},"excerpt":{"rendered":"For 16 hours last July, Elon Musk\u2019s company\u00a0 lost control\u00a0of its multi-million-dollar chatbot, Grok. \u201cMaximally truth seeking\u201d Grok&hellip;\n","protected":false},"author":2,"featured_media":297856,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[20],"tags":[256,254,255,64,63,105],"class_list":{"0":"post-297855","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-artificial-intelligence","8":"tag-ai","9":"tag-artificial-intelligence","10":"tag-artificialintelligence","11":"tag-au","12":"tag-australia","13":"tag-technology"},"_links":{"self":[{"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/posts\/297855","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/comments?post=297855"}],"version-history":[{"count":0,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/posts\/297855\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/media\/297856"}],"wp:attachment":[{"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/m
edia?parent=297855"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/categories?post=297855"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/tags?post=297855"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}