{"id":145684,"date":"2025-09-15T19:43:12","date_gmt":"2025-09-15T19:43:12","guid":{"rendered":"https:\/\/www.newsbeep.com\/au\/145684\/"},"modified":"2025-09-15T19:43:12","modified_gmt":"2025-09-15T19:43:12","slug":"openai-realizes-it-made-a-terrible-mistake","status":"publish","type":"post","link":"https:\/\/www.newsbeep.com\/au\/145684\/","title":{"rendered":"OpenAI Realizes It Made a Terrible Mistake"},"content":{"rendered":"<p>OpenAI claims to have figured out what&#8217;s driving &#8220;hallucinations,&#8221; or AI models&#8217; strong tendency to make up answers that are factually incorrect.<\/p>\n<p>It&#8217;s a major problem plaguing the entire industry, greatly undercutting the usefulness of the tech. Worse yet, experts have found that the problem is <a href=\"https:\/\/futurism.com\/ai-industry-problem-smarter-hallucinating\" class=\"underline hover:text-futurism hover:no-underline transition-all duration-200 ease-in-out\" style=\"text-decoration-color:blue\" rel=\"nofollow noopener\" target=\"_blank\">getting worse as AI models get more capable<\/a>.<\/p>\n<p>As a result, despite incurring astronomical expenses in their deployment, frontier AI models are still prone to <a href=\"https:\/\/futurism.com\/gpt-5-huge-factual-errors\" class=\"underline hover:text-futurism hover:no-underline transition-all duration-200 ease-in-out\" style=\"text-decoration-color:blue\" rel=\"nofollow noopener\" target=\"_blank\">making inaccurate claims<\/a>\u00a0when <a href=\"https:\/\/futurism.com\/ai-makes-up-answers\" class=\"underline hover:text-futurism hover:no-underline transition-all duration-200 ease-in-out\" style=\"text-decoration-color:blue\" rel=\"nofollow noopener\" target=\"_blank\">faced with a prompt they don&#8217;t know the answer to<\/a>.<\/p>\n<p>Whether there&#8217;s a solution to the problem remains a hotly debated subject, with some experts arguing that hallucinations are <a href=\"https:\/\/futurism.com\/the-byte\/impossible-chatbots-stop-lying-experts\" class=\"underline hover:text-futurism hover:no-underline transition-all duration-200 ease-in-out\" style=\"text-decoration-color:blue\" rel=\"nofollow noopener\" target=\"_blank\">intrinsic to the tech itself<\/a>. In other words, large language models may be a <a href=\"https:\/\/www.livescience.com\/technology\/artificial-intelligence\/current-ai-models-a-dead-end-for-human-level-intelligence-expert-survey-claims\" class=\"underline hover:text-futurism hover:no-underline transition-all duration-200 ease-in-out\" style=\"text-decoration-color:blue\" rel=\"nofollow noopener\" target=\"_blank\">dead end<\/a> in our quest to develop AIs with a reliable grasp on factual claims.<\/p>\n<p>In a <a href=\"https:\/\/arxiv.org\/abs\/2509.04664\" class=\"underline hover:text-futurism hover:no-underline transition-all duration-200 ease-in-out\" style=\"text-decoration-color:blue\" rel=\"nofollow noopener\" target=\"_blank\">paper<\/a> published last week, a team of OpenAI researchers attempted to come up with an explanation. 
They suggest that large language models hallucinate because, when they're being created, they're incentivized to guess rather than admit they simply don't know the answer.

Hallucinations "persist due to the way most evaluations are graded — language models are optimized to be good test-takers, and guessing when uncertain improves test performance," the paper reads.

Conventionally, an AI's output is graded in a binary way: it's rewarded for giving a correct response and penalized for giving an incorrect one.

In other words, guessing is rewarded, because a guess might turn out to be right, while an AI admitting it doesn't know the answer is graded as incorrect no matter what.

As a result, through "natural statistical pressures," LLMs are far more prone to hallucinating an answer than to "acknowledging uncertainty."

"Most scoreboards prioritize and rank models based on accuracy, but errors are worse than abstentions," OpenAI wrote in an [accompanying blog post](https://openai.com/index/why-language-models-hallucinate/).

In other words, OpenAI says that it, and all its imitators across the industry, have made a grave structural error in how they've been training AI.

There'll be a lot riding on whether the issue is correctable going forward. OpenAI claims that "there is a straightforward fix" to the problem: "Penalize confident errors more than you penalize uncertainty, and give partial credit for appropriate expressions of uncertainty."

Going forward, evaluations need to ensure that "their scoring discourages guessing," the blog post reads. "If the main scoreboards keep rewarding lucky guesses, models will keep learning to guess."

"Simple modifications of mainstream evaluations can realign incentives, rewarding appropriate expressions of uncertainty rather than penalizing them," the company's researchers concluded in the paper. "This can remove barriers to the suppression of hallucinations, and open the door to future work on nuanced language models, e.g., with richer pragmatic competence."
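To see why binary grading pushes a model toward guessing, consider the expected score of a test-taker that is p-confident in its best candidate answer. The short Python sketch below contrasts the conventional scheme with a penalized one of the kind the blog post describes; the function names and the penalty value of 1.0 are illustrative assumptions, not taken from the paper.

```python
# A minimal sketch of the incentive problem described above. The scoring
# functions and the penalty value are illustrative assumptions, not the
# paper's exact benchmark math.

def expected_binary_score(p: float) -> float:
    """Conventional grading: 1 point for a correct answer, 0 for a wrong
    answer or an abstention. Guessing earns p * 1 + (1 - p) * 0 = p in
    expectation, so any guess with p > 0 beats abstaining (which scores 0)."""
    return p

def expected_penalized_score(p: float, penalty: float = 1.0) -> float:
    """Modified grading: 1 point if correct, -penalty if wrong, 0 for
    "I don't know". Guessing pays off only when p - (1 - p) * penalty > 0,
    i.e. when p > penalty / (1 + penalty)."""
    return p - (1.0 - p) * penalty

if __name__ == "__main__":
    penalty = 1.0  # assumed wrong-answer penalty; guess threshold = 0.5
    for p in (0.1, 0.3, 0.5, 0.7, 0.9):
        guess_value = expected_penalized_score(p, penalty)
        best_move = "guess" if guess_value > 0 else "abstain"
        print(f"confidence={p:.1f}  binary={expected_binary_score(p):.2f}  "
              f"penalized={guess_value:+.2f}  best move: {best_move}")
```

Under binary grading, guessing strictly dominates abstaining at any nonzero confidence, while under the penalized scheme abstaining becomes the better move whenever confidence falls below penalty / (1 + penalty). That shift is the realignment of incentives the researchers are calling for.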
How these adjustments to evaluations will play out in the real world remains to be seen. While the company claimed its latest GPT-5 model hallucinates less, users were [left largely unimpressed](https://futurism.com/gpt-5-disaster).

For now, the AI industry will have to continue reckoning with the problem as it justifies tens of billions of dollars in capital expenditures and [soaring emissions](https://www.npr.org/2024/07/12/g-s1-9545/ai-brings-soaring-emissions-for-google-and-microsoft-a-major-contributor-to-climate-change).

"Hallucinations remain a fundamental challenge for all large language models, but we are working hard to further reduce them," OpenAI promised in its blog post.

More on hallucinations: [GPT-5 Is Making Huge Factual Errors, Users Say](https://futurism.com/gpt-5-huge-factual-errors)