{"id":401829,"date":"2026-01-09T11:02:11","date_gmt":"2026-01-09T11:02:11","guid":{"rendered":"https:\/\/www.newsbeep.com\/au\/401829\/"},"modified":"2026-01-09T11:02:11","modified_gmt":"2026-01-09T11:02:11","slug":"chatgpt-health-lets-you-connect-medical-records-to-an-ai-that-makes-things-up","status":"publish","type":"post","link":"https:\/\/www.newsbeep.com\/au\/401829\/","title":{"rendered":"ChatGPT Health lets you connect medical records to an AI that makes things up"},"content":{"rendered":"<p>But despite OpenAI\u2019s talk of supporting health goals, the company\u2019s terms of service directly <a href=\"https:\/\/openai.com\/policies\/service-terms\/\" rel=\"nofollow noopener\" target=\"_blank\">state<\/a> that ChatGPT and other OpenAI services \u201care not intended for use in the diagnosis or treatment of any health condition.\u201d<\/p>\n<p>It appears that policy is not changing with ChatGPT Health. OpenAI writes in its announcement, \u201cHealth is designed to support, not replace, medical care. It is not intended for diagnosis or treatment. Instead, it helps you navigate everyday questions and understand patterns over time\u2014not just moments of illness\u2014so you can feel more informed and prepared for important medical conversations.\u201d<\/p>\n<h2>A cautionary tale<\/h2>\n<p>The SFGate report on Sam Nelson\u2019s death illustrates why maintaining that disclaimer matters legally. According to chat logs reviewed by the publication, Nelson first asked ChatGPT about recreational drug dosing in November 2023. The AI assistant initially refused and directed him to health care professionals. But over 18 months of conversations, ChatGPT\u2019s responses reportedly shifted. Eventually, the chatbot told him things like \u201cHell yes\u2014let\u2019s go full trippy mode\u201d and recommended he double his cough syrup intake. 
His mother found him dead from an overdose the day after he began addiction treatment.<\/p>\n<p>While Nelson\u2019s case did not involve the analysis of doctor-sanctioned health care instructions like the type ChatGPT Health will link to, it is not unique, as many people have been\u00a0<a href=\"https:\/\/arstechnica.com\/information-technology\/2025\/08\/with-ai-chatbots-big-tech-is-moving-fast-and-breaking-people\/\" target=\"_blank\" rel=\"noopener nofollow\">misled<\/a>\u00a0by chatbots that provide inaccurate information or\u00a0<a href=\"https:\/\/arstechnica.com\/tech-policy\/2025\/11\/openai-says-dead-teen-violated-tos-when-he-used-chatgpt-to-plan-suicide\/\" target=\"_blank\" rel=\"noopener nofollow\">encourage\u00a0dangerous behavior<\/a>, as we have covered in the past.<\/p>\n<p>That\u2019s because AI language models can easily <a href=\"https:\/\/arstechnica.com\/information-technology\/2023\/04\/why-ai-chatbots-are-the-ultimate-bs-machines-and-how-people-hope-to-fix-them\/\" rel=\"nofollow noopener\" target=\"_blank\">confabulate<\/a>, generating plausible but false information in a way that makes it <a href=\"https:\/\/arstechnica.com\/information-technology\/2025\/08\/with-ai-chatbots-big-tech-is-moving-fast-and-breaking-people\/\" rel=\"nofollow noopener\" target=\"_blank\">difficult<\/a> for some users to distinguish fact from fiction. The AI models that power services like ChatGPT use statistical relationships in training data (such as text from books, YouTube transcripts, and websites) to produce plausible responses rather than necessarily accurate ones. 
Moreover, ChatGPT\u2019s outputs can <a href=\"https:\/\/arstechnica.com\/information-technology\/2025\/08\/the-personhood-trap-how-ai-fakes-human-personality\/\" rel=\"nofollow noopener\" target=\"_blank\">vary widely<\/a> depending on who is using the chatbot and what has previously taken place in the user\u2019s chat history (including notes about previous chats).<\/p>\n","protected":false},"excerpt":{"rendered":"But despite OpenAI\u2019s talk of supporting health goals, the company\u2019s terms of service directly state that ChatGPT and&hellip;\n","protected":false},"author":2,"featured_media":401830,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[34],"tags":[64,63,137,500],"class_list":{"0":"post-401829","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-healthcare","8":"tag-au","9":"tag-australia","10":"tag-health","11":"tag-healthcare"},"_links":{"self":[{"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/posts\/401829","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/comments?post=401829"}],"version-history":[{"count":0,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/posts\/401829\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/media\/401830"}],"wp:attachment":[{"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/media?parent=401829"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/categories?post=401829"},{"taxonomy":"post_tag","embeddable":true,"href":"http
s:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/tags?post=401829"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}