{"id":43178,"date":"2025-08-04T12:02:08","date_gmt":"2025-08-04T12:02:08","guid":{"rendered":"https:\/\/www.newsbeep.com\/uk\/43178\/"},"modified":"2025-08-04T12:02:08","modified_gmt":"2025-08-04T12:02:08","slug":"how-to-stopper-the-ai-genie","status":"publish","type":"post","link":"https:\/\/www.newsbeep.com\/uk\/43178\/","title":{"rendered":"How to stopper the AI genie"},"content":{"rendered":"<p>Tech accelerationists insist that now that artificial intelligence has been invented, the genie can\u2019t be put back in the bottle. In doing so, they betray a naive faith in progress. There have been plenty of times in history when a technological genie was released, only to be securely corked again.<\/p>\n<p>Consider the electric taxi. In 1897, a fleet of battery-powered \u201c<a href=\"https:\/\/blog.sciencemuseum.org.uk\/the-surprisingly-old-story-of-londons-first-ever-electric-taxi\/\" rel=\"nofollow noopener\" target=\"_blank\">Hummingbirds<\/a>\u201d \u2014 named for their distinctive hum and yellow-and-black design \u2014 roamed the streets of London. These horseless carriages had swappable batteries, which could be replaced in minutes via hydraulic lifts at central stations. Their inventor, Walter Bersey, sounded a little like Elon Musk when he declared: \u201cThere is no apparent limit to the hopes and expectations of the electric artisan.\u201d Yet within two years, the Hummingbirds were scrapped due to their high costs, frequent breakdowns and accidents, and the fact that they were slower than horse-drawn alternatives. <a href=\"https:\/\/earthbound.report\/2017\/05\/01\/the-electric-taxi-returns-to-london-120-years-later\/\" rel=\"nofollow noopener\" target=\"_blank\">Electric cabs didn\u2019t return to London until 2019 <\/a>\u2014 after a 122-year-long pause.<\/p>\n<p>Many other vaunted technologies have met similar fates. Consider DNA cloning. 
In 1996,<a href=\"https:\/\/www.nms.ac.uk\/discover-catalogue\/the-story-of-dolly-the-sheep\" target=\"_blank\" rel=\"noopener nofollow\"> the birth of <\/a>Dolly the sheep proved cloning possible, and subsequent experiments edged toward human replication. Yet in 2005, after illnesses in cloned animals had come to light and fears had been raised over the unethical use of human embryos, the United Nations adopted a declaration\u00a0against all forms of human cloning. Though the declaration was non-binding, 84 member states voted in favour of prohibiting reproductive cloning, embryo research for non-therapeutic purposes, and the exploitative trade of human reproductive materials.<\/p>\n<p>Eugenics, too, promised to shape the future. From the 1880s to the Second World War, eugenics societies flourished across industrialised nations, championed by elite progressives including the English writer H.G. Wells and the Irish playwright George Bernard Shaw. Wells envisioned a world where <a href=\"https:\/\/creation.com\/hg-wells-darwins-disciple-and-eugenicist-extraordinaire\" target=\"_blank\" rel=\"noopener nofollow\">\u201cfine and efficient\u201d humans thrived while \u201cbase and servile types\u201d<\/a> were eradicated via \u201cmercy killings\u201d. The Nazi atrocities of the Second World War eventually discredited this pseudoscience and put a stop to it.<\/p>\n<p>Other banished technologies include dangerous drugs such as thalidomide, marketed as a \u201cwonder drug\u201d for morning sickness until it caused severe birth defects. Asbestos, once a \u201cmagic mineral\u201d, was banned after links to lung cancer emerged. Leaded gasoline, despite boosting engine performance, was phased out globally by 2021 due to neurotoxicity. Even GM crops, touted as farming\u2019s future a decade ago, now face restrictions over biodiversity and health and safety concerns. 
Currently, 26 countries, including France, Germany, Russia, China, and India, have partially or fully banned genetically modified organisms, while another 60 countries have placed significant restrictions on them.<\/p>\n<p>Faster-than-sound passenger travel also faltered. The Concorde, a marvel of Sixties engineering, halved flight times from Europe to the USA, but was retired in 2003, three years after a fatal crash. Similarly, Apollo-era dreams of lunar bases dissolved when funding dried up; today, even the National Space Foundation admits that NASA no longer has the specific technologies, tooling and manufacturing capabilities that created the Sixties Apollo programme, making it impossible to rebuild such powerful machines as the Saturn V\u2019s F-1 engines.<\/p>\n<p>Although we are encouraged to think of technology as developing in a smooth upward curve over time, the reality is that the graph of tech adoption shows many collapses and failed promises, as technologies have proved either too expensive or too dangerous to continue producing.<\/p>\n<p>When it comes to AI, companies have embraced the accelerationist myth of unstoppable, exponential growth that can\u2019t possibly be contained. The story goes like this: large language models (LLMs) will lead to human-level artificial general intelligence (AGI), creating a direct pathway through exponential self-learning to \u201cthe singularity\u201d and AI superintelligence \u2014 the all-powerful digital deity. This epic story has inspired hundreds of billions in investment since the early 2000s.<\/p>\n<p>But as recent tests and studies of LLMs have shown, this technology is\u00a0not the pathway to AGI that we were promised. <a href=\"https:\/\/garymarcus.substack.com\/p\/scaling-is-over-the-bubble-may-be\" target=\"_blank\" rel=\"noopener nofollow\">The hopes that \u201cscaling\u201d, \u201cemergent properties\u201d and \u201creasoning\u201d would lead to AGI have all failed<\/a>. 
The path to AGI lies elsewhere, if anywhere. As Yann LeCun, <a href=\"https:\/\/www.youtube.com\/watch?v=4__gg83s_Do\" target=\"_blank\" rel=\"noopener nofollow\">chief AI scientist at Meta, has said<\/a>: \u201cThere\u2019s absolutely no way that autoregressive LLMs\u2026 will reach human intelligence. It\u2019s just not going to happen.\u201d<\/p>\n<p>He has been joined by an increasing number of technologists and public figures including <a href=\"https:\/\/garymarcus.substack.com\/p\/a-knockout-blow-for-llms\" target=\"_blank\" rel=\"noopener nofollow\">Gary Marcus<\/a>, <a href=\"https:\/\/www.linkedin.com\/pulse\/why-large-language-models-route-agi-sandeep-reddy\/\" target=\"_blank\" rel=\"noopener nofollow\">Sandeep Reddy<\/a>, and <a href=\"https:\/\/www.newscientist.com\/article\/2471759-ai-scientists-are-sceptical-that-modern-models-will-lead-to-agi\/\" target=\"_blank\" rel=\"noopener nofollow\">Thomas Dietterich<\/a>, as well as <a href=\"https:\/\/www.freethink.com\/robots-ai\/arc-prize-agi\" target=\"_blank\" rel=\"noopener nofollow\">Fran\u00e7ois Chollet<\/a>, who claimed that <a href=\"https:\/\/www.freethink.com\/robots-ai\/arc-prize-agi\" target=\"_blank\" rel=\"noopener nofollow\">\u201cLLMs are a dead end to AGI\u201d<\/a>. Last year\u2019s calls of \u201cAGI is near\u201d \u2014 a line championed by Anthropic\u2019s Dario Amodei, Google\u2019s Eric Schmidt, and Elon Musk \u2014 now echo as an embarrassing clich\u00e9, trotted out to lessening effect each time the goalposts for \u201cnear\u201d are moved into the future. 
The recent excitement over <a href=\"https:\/\/www.euronews.com\/next\/2025\/07\/22\/did-google-deepmind-or-openai-win-gold-at-the-worlds-most-prestigious-math-competition\" target=\"_blank\" rel=\"noopener nofollow\">Google DeepMind winning a gold medal at the International Mathematical Olympiad (IMO), and OpenAI claiming its model had done the same<\/a>, changes nothing: these were specialised systems trained for this specific task. Their success does not imply broad reasoning ability, let alone \u201cgeneral intelligence\u201d.<\/p>\n<p>\u201cLast year\u2019s calls of \u2018AGI is near\u2019 now echo as an embarrassing clich\u00e9.\u201d<\/p>\n<p>Large language model technology has also <a href=\"https:\/\/www.nttdata.com\/global\/en\/insights\/focus\/2024\/between-70-85p-of-genai-deployment-efforts-are-failing\" target=\"_blank\" rel=\"noopener nofollow\">failed to generate a return on investment for many companies<\/a>. While OpenAI CEO Sam Altman may boast of <a href=\"https:\/\/www.theverge.com\/openai\/640894\/chatgpt-has-hit-20-million-paid-subscribers\" target=\"_blank\" rel=\"noopener nofollow\">20 million<\/a> paying ChatGPT subscribers, their contributions are minuscule compared to the vast amount of venture capital sunk into AI companies: an estimated\u00a0<a href=\"https:\/\/www.sequoiacap.com\/article\/ais-600b-question\/\" target=\"_blank\" rel=\"noopener nofollow\">$600 billion<\/a>. 
A <a href=\"https:\/\/www.bcg.com\/publications\/2025\/closing-the-ai-impact-gap\" target=\"_blank\" rel=\"noopener nofollow\">recent study<\/a>\u00a0from Boston Consulting Group\u00a0shows that <a href=\"https:\/\/www.forbes.com\/sites\/cio\/2025\/01\/30\/why-75-of-businesses-arent-seeing-roi-from-ai-yet\/\" target=\"_blank\" rel=\"noopener nofollow\">75% of companies that have invested in AI haven\u2019t seen any return yet.<\/a> As with other technologies that failed in the open marketplace, AI companies are now turning to <a href=\"https:\/\/www.theguardian.com\/technology\/2025\/jun\/17\/openai-military-contract-warfighting#:~:text=The%20US%20Department%20of%20Defense,work%20for%20the%20US%20military.\" target=\"_blank\" rel=\"noopener nofollow\">the military for financing<\/a>.<\/p>\n<p>So much for AI\u2019s unstoppable march forward. If we dig into the history of AI development, we also find that the technology has not followed an unbroken exponential growth curve. Since the Seventies, there have in fact been two <a href=\"https:\/\/www.perplexity.ai\/page\/a-historical-overview-of-ai-wi-A8daV1D9Qr2STQ6tgLEOtg\" target=\"_blank\" rel=\"noopener nofollow\">\u201cAI winters\u201d:<\/a> periods during which AI funding and research froze.<\/p>\n<p>The first winter lasted roughly from 1974 to 1980, after overhyped expectations about narrow AI gave way to profound disillusionment when promised breakthroughs didn\u2019t materialise. 
The US Defense Advanced Research Projects Agency (DARPA) pulled its funding for five years, after discovering that <a href=\"https:\/\/www.researchgate.net\/profile\/Daniel-Crevier\/publication\/233820788_AI_The_Tumultuous_History_of_the_Search_for_Artificial_Intelligence\/links\/63fe3d9457495059454f87ca\/AI-The-Tumultuous-History-of-the-Search-for-Artificial-Intelligence.pdf\" target=\"_blank\" rel=\"noopener nofollow\">\u201cmany researchers were caught up in a web of increasing exaggeration\u201d while \u201cwhat they delivered stopped considerably short\u201d.<\/a> Meanwhile, in Britain, <a href=\"https:\/\/www.aiai.ed.ac.uk\/events\/lighthill1973\/lighthill.pdf\" target=\"_blank\" rel=\"noopener nofollow\">the Lighthill Report<\/a> claimed that government-funded experiments in AI and robotics \u201cfail to reach their more grandiose aims\u201d and called for a halt to all government subsidy. By 1974, investment in AI projects in the UK and US had dwindled.<\/p>\n<p><a href=\"https:\/\/www.holloway.com\/g\/making-things-think\/sections\/the-second-ai-winter-19871993\" target=\"_blank\" rel=\"noopener nofollow\">The second AI winter<\/a> lasted roughly from 1987 to 1993, after early neural networks, \u201cexpert systems\u201d and machine learning models underperformed as a result of insufficient data and computing power. Once again there was a failure to deliver on the big promises. Once again DARPA, the US government and private investors pulled their funding in a rush of capital flight. In all, <a href=\"https:\/\/medium.com\/dscier\/the-history-of-ai-triumphs-trials-and-transformation-babae4b2c106\" target=\"_blank\" rel=\"noopener nofollow\">300 AI companies shut down.<\/a><\/p>\n<p>Oddly enough, you don\u2019t hear much about the two devastating AI winters from tech companies these days. 
This is probably because tech companies fear that any whisper of AI\u2019s true history of failure could burst the large language model bubble and usher in a third AI winter. But all the same elements are visible today: grandiose claims, hype and market frenzy followed by a failure to deliver.<\/p>\n<p>In 1970, AI pioneer Marvin Minsky told Life magazine: \u201cIn three to eight years we will have a machine with the general intelligence of an average human being.\u201d Fifty-five years later, his promise has yet to come true \u2014 though the likes of Altman and Musk are hawking the same dream. Who is to say this third attempt at AI isn\u2019t another dead end?<\/p>\n<p>This leads us to Silicon Valley\u2019s fatal flaw. It fuses \u201cfake it till you make it\u201d tech optimism with the Californian belief in manifestation: the idea that enough belief and investment can will anything into existence. This reveals a na\u00efve faith in historical fatedness, as if progress were preordained and capital could rewrite reality.<\/p>\n<p>Now that the \u201cLLMs are the pathway to AGI\u201d narrative is collapsing, the only factors stopping the AI bubble from bursting are the Silicon Valley mechanisms of hype and denial, the sunk-cost fallacy of investors, and the fact that chatbots and generative AI have become widely diffused across the Internet. Today, there may be <a href=\"https:\/\/www.technollama.co.uk\/a-gemini-report-how-many-people-are-using-generative-ai-on-a-daily-basis-a-gemini-report#:~:text=Final%20Thought:,moves%20further%20into%20the%20mainstream.\" target=\"_blank\" rel=\"noopener nofollow\">115 million daily users<\/a> of AI.<\/p>\n<p>Far from accelerating us into a perfect future, this flawed technology is instead slowing us down. We are inundated with the sludge and slop of AI-generated material \u2014 failings that seem incurable within LLMs. 
Google and Bing now force AI-generated answers to the top of search results, with <a href=\"https:\/\/www.nytimes.com\/2025\/05\/05\/technology\/ai-hallucinations-chatgpt-google.html#:~:text=Vectara&#039;s%20original%20research%20estimated%20that,1%20or%202%20percent%20range.\" target=\"_blank\" rel=\"noopener nofollow\">hallucination rates<\/a> that only increase as AI gets more powerful; social media is <a href=\"https:\/\/theweek.com\/tech\/is-ai-slop-breaking-the-internet\" target=\"_blank\" rel=\"noopener nofollow\">flooded with AI sludge<\/a> and <a href=\"https:\/\/www.fastcompany.com\/91321143\/bot-farms-social-media-manipulation\" target=\"_blank\" rel=\"noopener nofollow\">bot-farm boosted ads<\/a>; app stores and websites are swamped with <a href=\"https:\/\/www.forbes.com\/sites\/johnkoetsier\/2024\/08\/31\/fake-ai-generated-reviews-flooding-app-stores\/\" target=\"_blank\" rel=\"noopener nofollow\">fake reviews<\/a>, while <a href=\"https:\/\/www.creativebloq.com\/ai\/ai-art\/designers-say-ai-is-making-stock-image-sites-unusable\" target=\"_blank\" rel=\"noopener nofollow\">image banks are choked<\/a> with generative AI slop. AI is <a href=\"https:\/\/www.bbc.co.uk\/news\/articles\/c0k78715enxo\" target=\"_blank\" rel=\"noopener nofollow\">even contaminating the news.<\/a> As a result, early-adopter companies and employees are now turning against AI, realising it is <a href=\"https:\/\/www.bbc.co.uk\/news\/articles\/c93pz1dz2kxo#:~:text=Yet%2077%25%20of%20employees%20in,productivity%20gains%20their%20employers%20expect.\" target=\"_blank\" rel=\"noopener nofollow\">bad for productivity<\/a> and that using it to <a href=\"https:\/\/economictimes.indiatimes.com\/news\/international\/us\/company-that-sacked-700-workers-with-ai-now-regrets-it-scrambles-to-rehire-as-automation-goes-horribly-wrong\/articleshow\/121732999.cms?from=mdr\" target=\"_blank\" rel=\"noopener nofollow\">replace human labour has been counterproductive<\/a>. 
A study from MIT also claims that LLM use may be <a href=\"https:\/\/time.com\/7295195\/ai-chatgpt-google-learning-school\/\" target=\"_blank\" rel=\"noopener nofollow\">eroding cognitive skills.<\/a><\/p>\n<p>It\u2019s clear the AI genie isn\u2019t going to become the digital God we were promised, but can we put it back into the bottle? The clean-up will be enormous, given that AI now lurks in millions of locations, but it has already begun, with companies that replaced humans with AI now <a href=\"https:\/\/www.vice.com\/en\/article\/this-company-replaced-workers-with-ai-now-theyre-looking-for-humans-again\/\" target=\"_blank\" rel=\"noopener nofollow\">re-hiring humans<\/a>. <a href=\"https:\/\/futurism.com\/companies-fixing-ai-replacement-mistakes\" target=\"_blank\" rel=\"noopener nofollow\">Companies are also employing \u201cslop cleaners\u201d<\/a> to tidy up the mess AI has made.<\/p>\n<p>Yet given that governments have na\u00efvely bought into the <a href=\"https:\/\/www.bbc.co.uk\/news\/articles\/czdv68gejm7o\" target=\"_blank\" rel=\"noopener nofollow\">AI ethos<\/a>, the clean-up is going to fall to us citizens. We may not have political power, but we have history on our side. Against the myth of fated progress, history shows that even the most hyped technologies can be stopped when they fail humanity and society demands they be contained, corked and put back on the shelf. 
If this were not true, then we would all currently be inhabiting the Metaverse while wearing 3D headsets.<\/p>\n","protected":false},"excerpt":{"rendered":"Tech accelerationists insist that now that artificial intelligence has been invented, the genie can\u2019t be put back in&hellip;\n","protected":false},"author":2,"featured_media":43179,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[20],"tags":[554,1108,733,4308,3845,4838,90,86,56,54,55],"class_list":{"0":"post-43178","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-artificial-intelligence","8":"tag-ai","9":"tag-artificial-intellience","10":"tag-artificial-intelligence","11":"tag-artificialintelligence","12":"tag-big-tech","13":"tag-diverse","14":"tag-science","15":"tag-technology","16":"tag-uk","17":"tag-united-kingdom","18":"tag-unitedkingdom"},"_links":{"self":[{"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/posts\/43178","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/comments?post=43178"}],"version-history":[{"count":0,"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/posts\/43178\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/media\/43179"}],"wp:attachment":[{"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/media?parent=43178"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/categories?post=43178"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/tag
s?post=43178"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}