{"id":481487,"date":"2026-02-17T21:06:09","date_gmt":"2026-02-17T21:06:09","guid":{"rendered":"https:\/\/www.newsbeep.com\/ca\/481487\/"},"modified":"2026-02-17T21:06:09","modified_gmt":"2026-02-17T21:06:09","slug":"race-for-ai-is-making-hindenburg-style-disaster-a-real-risk-says-leading-expert-science","status":"publish","type":"post","link":"https:\/\/www.newsbeep.com\/ca\/481487\/","title":{"rendered":"Race for AI is making Hindenburg-style disaster \u2018a real risk\u2019, says leading expert | Science"},"content":{"rendered":"<p class=\"dcr-130mj7b\">The race to get artificial intelligence to market has raised the risk of a Hindenburg-style disaster that shatters global confidence in the technology, a leading researcher has warned.<\/p>\n<p class=\"dcr-130mj7b\">Michael Wooldridge, a professor of AI at Oxford University, said the danger arose from the immense commercial pressures that technology firms were under to release new AI tools, with companies desperate to win customers before the products\u2019 capabilities and potential flaws are fully understood.<\/p>\n<p class=\"dcr-130mj7b\">The surge in AI chatbots with guardrails that are <a href=\"https:\/\/www.theguardian.com\/technology\/2025\/may\/21\/most-ai-chatbots-easily-tricked-into-giving-dangerous-responses-study-finds?CMP=Share_iOSApp_Other\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">easily bypassed<\/a> showed how commercial incentives were prioritised over more cautious development and safety testing, he said.<\/p>\n<p class=\"dcr-130mj7b\">\u201cIt\u2019s the classic technology scenario,\u201d he said. 
\u201cYou\u2019ve got a technology that\u2019s very, very promising, but not as rigorously tested as you would like it to be, and the commercial pressure behind it is unbearable.\u201d<\/p>\n<p class=\"dcr-130mj7b\">Wooldridge, who will deliver the Royal Society\u2019s Michael Faraday prize lecture on Wednesday evening, titled \u201c<a href=\"https:\/\/royalsociety.org\/science-events-and-lectures\/2026\/02\/faraday-prize-lecture\/\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">This is not the AI we were promised<\/a>\u201d, said a Hindenburg moment was \u201cvery plausible\u201d as companies rushed to deploy more advanced AI tools.<\/p>\n<p class=\"dcr-130mj7b\">The Hindenburg, a 245-metre airship that made round trips across the Atlantic, was preparing to land in New Jersey in 1937 when it burst into flames, killing 36 crew, passengers and ground staff. The inferno was caused by a spark that ignited the 200,000 cubic metres of hydrogen that kept the airship aloft.<\/p>\n<p class=\"dcr-130mj7b\">\u201cThe Hindenburg disaster destroyed global interest in airships; it was a dead technology from that point on, and a similar moment is a real risk for AI,\u201d Wooldridge said. Because AI is embedded in so many systems, a major incident could strike almost any sector.<\/p>\n<p>Michael Wooldridge. Photograph: Steven May\/Alamy Stock Photo\/Alamy Live News.<\/p>\n<p class=\"dcr-130mj7b\">The scenarios Wooldridge imagines include a deadly software update for self-driving cars, an AI-powered hack that grounds global airlines, or a Barings bank-style collapse of a major company, triggered by AI doing something stupid. \u201cThese are very, very plausible scenarios,\u201d he said. \u201cThere are all sorts of ways AI could very publicly go wrong.\u201d<\/p>\n<p class=\"dcr-130mj7b\">Despite the concerns, Wooldridge said he did not intend to attack modern AI. His starting point is the gap between what researchers expected and what has emerged. 
Many experts anticipated AI that computed solutions to problems and provided answers that were sound and complete. \u201cContemporary AI is neither sound nor complete: it\u2019s very, very approximate,\u201d he said.<\/p>\n<p class=\"dcr-130mj7b\">This arises because large language models, which underpin today\u2019s AI chatbots, rattle out answers by predicting the next word, or part of a word, based on probability distributions learned in training. It leads to AIs with <a href=\"https:\/\/www.theguardian.com\/technology\/2026\/feb\/03\/deepfakes-ai-companions-artificial-intelligence-safety-report\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">jagged capabilities<\/a>: incredibly effective at some tasks, yet terrible at others.<\/p>\n<p class=\"dcr-130mj7b\">The problem, Wooldridge said, was that AI chatbots failed in unpredictable ways and had no idea when they were wrong, but were designed to provide confident answers regardless. When delivered in human-like and sycophantic responses, the answers could easily mislead people, he added. The risk is that people start treating AIs as if they were human. In a <a href=\"https:\/\/cdt.org\/insights\/hand-in-hand-schools-embrace-of-ai-connected-to-increased-risks-to-students\/\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">2025 survey<\/a> by the Center for Democracy and Technology, nearly a third of students reported that they or a friend had had a romantic relationship with an AI.<\/p>\n<p class=\"dcr-130mj7b\">\u201cCompanies want to present AIs in a very human-like way, but I think that is a very dangerous path to take,\u201d Wooldridge said. \u201cWe need to understand that these are just glorified spreadsheets, they are tools and nothing more than that.\u201d<\/p>\n<p class=\"dcr-130mj7b\">Wooldridge sees positives in the kind of AI depicted in the early years of Star Trek. 
In one 1968 episode, Day of the Dove, Mr Spock quizzes the Enterprise\u2019s computer only to be told in a distinctly non-human voice that it has <a href=\"https:\/\/www.youtube.com\/shorts\/nTCKL348xg0\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">insufficient data to answer<\/a>. \u201cThat\u2019s not what we get. We get an overconfident AI that says: yes, here\u2019s the answer,\u201d he said. \u201cMaybe we need AIs to talk to us in the voice of the Star Trek computer. You would never believe it was a human being.\u201d<\/p>\n","protected":false},"excerpt":{"rendered":"The race to get artificial intelligence to market has raised the risk of a Hindenburg-style disaster that shatters&hellip;\n","protected":false},"author":2,"featured_media":481488,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[20],"tags":[62,276,277,49,48,61],"class_list":{"0":"post-481487","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-artificial-intelligence","8":"tag-ai","9":"tag-artificial-intelligence","10":"tag-artificialintelligence","11":"tag-ca","12":"tag-canada","13":"tag-technology"},"_links":{"self":[{"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/posts\/481487","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/comments?post=481487"}],"version-history":[{"count":0,"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/posts\/481487\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/media\/481488"}],"wp:attachment":[{"h
ref":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/media?parent=481487"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/categories?post=481487"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/tags?post=481487"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}