{"id":127580,"date":"2025-09-10T12:48:18","date_gmt":"2025-09-10T12:48:18","guid":{"rendered":"https:\/\/www.newsbeep.com\/uk\/127580\/"},"modified":"2025-09-10T12:48:18","modified_gmt":"2025-09-10T12:48:18","slug":"why-do-ai-models-make-things-up-or-hallucinate-openai-says-it-has-the-answer-and-how-to-prevent-it","status":"publish","type":"post","link":"https:\/\/www.newsbeep.com\/uk\/127580\/","title":{"rendered":"Why do AI models make things up or hallucinate? OpenAI says it has the answer and how to prevent it"},"content":{"rendered":"<p>\n         Published on<br \/>\n            09\/09\/2025 &#8211; 7:01 GMT+2\n            <\/p>\n<p>          <img decoding=\"async\" class=\"c-ad__placeholder__logo\" src=\"https:\/\/static.euronews.com\/website\/images\/logos\/logo-euronews-stacked-outlined-72x72-grey-9.svg\" width=\"72\" height=\"72\" alt=\"\" loading=\"lazy\"\/><br \/>\n          ADVERTISEMENT<\/p>\n<p>Artificial intelligence (AI) company OpenAI says algorithms reward chatbots when they guess, the company said in a new research paper.<\/p>\n<p>OpenAI is referring to \u201challucinations\u201d when the large language models (LLMs) used to train the chatbots guess answers when they are unsure, instead of admitting that they don&#8217;t know.\u00a0<\/p>\n<p>The <a href=\"https:\/\/cdn.openai.com\/pdf\/d04913be-3f6f-4d2b-b283-ff432ef4aaa5\/why-language-models-hallucinate.pdf\" target=\"_blank\" rel=\"noreferrer nofollow noopener\">researchers<\/a> say that hallucinations come from an error in binary classification, when the LLMs categorise new observations into one of two categories.<\/p>\n<p>The reason hallucinations continue is because LLMs are \u201coptimised to be good test-takers and guessing when uncertain[ty] improves test performance,\u201d the report said.\u00a0<\/p>\n<p>The researchers compared it to students who guess on multiple-choice exams or bluff on written exams because submitting an answer would receive more points than leaving the entry 
blank.\u00a0<\/p>\n<p>LLMs work with a points scheme that rewards them with a point for a correct answer and none for blanks or for admitting that they don&#8217;t know the answer.\u00a0<\/p>\n<p>The paper comes a few weeks after OpenAI released GPT-5, the model the company says is \u201challucination-proof\u201d, with 46 per cent fewer falsehoods than its predecessor, GPT-4o.\u00a0<\/p>\n<p>However, a recent study from the US company <a href=\"https:\/\/www.euronews.com\/next\/2025\/09\/05\/which-ai-chatbot-spews-the-most-false-information-1-in-3-ai-answers-are-false-study-says\" rel=\"nofollow noopener\" target=\"_blank\">NewsGuard<\/a> found that ChatGPT models in general spread falsehoods in 40 per cent of their answers.\u00a0<\/p>\n<p>Some questions \u2018unanswerable\u2019 by AI<\/p>\n<p>Through pre-training and post-training, chatbots learn to predict the next word in large amounts of text.\u00a0<\/p>\n<p>OpenAI\u2019s paper found that while some things, such as spelling and grammar, follow very clear rules and structure, other subjects or types of data are hard or even impossible for an AI to learn reliably.\u00a0<\/p>\n<p>For example, algorithms can classify pictures when they are labelled either \u201ccat\u201d or \u201cdog\u201d, but if the pictures were labelled with the pet\u2019s birthday instead, the chatbot wouldn\u2019t be able to categorise them accurately.\u00a0<\/p>\n<p>This type of task would \u201calways produce errors, no matter how advanced the algorithm is,\u201d the report found.\u00a0<\/p>\n<p>One of the key findings of the report is that models will never be 100 per cent accurate because \u201csome real-world questions are inherently unanswerable\u201d.\u00a0<\/p>\n<p>To limit hallucinations, users could instruct the LLM to respond with \u201cI don&#8217;t know\u201d when it does not know the answer, and the existing points system could be modified for the types of answers it gives, OpenAI 
said.\u00a0<\/p>\n","protected":false},"excerpt":{"rendered":"Published on 09\/09\/2025 &#8211; 7:01 GMT+2 ADVERTISEMENT Artificial intelligence (AI) company OpenAI says algorithms reward chatbots when they&hellip;\n","protected":false},"author":2,"featured_media":65725,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[20],"tags":[554,733,4308,6697,53426,86,56,54,55],"class_list":{"0":"post-127580","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-artificial-intelligence","8":"tag-ai","9":"tag-artificial-intelligence","10":"tag-artificialintelligence","11":"tag-generative-ai","12":"tag-open-ai","13":"tag-technology","14":"tag-uk","15":"tag-united-kingdom","16":"tag-unitedkingdom"},"_links":{"self":[{"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/posts\/127580","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/comments?post=127580"}],"version-history":[{"count":0,"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/posts\/127580\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/media\/65725"}],"wp:attachment":[{"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/media?parent=127580"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/categories?post=127580"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/tags?post=127580"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}