{"id":425484,"date":"2026-01-23T18:24:09","date_gmt":"2026-01-23T18:24:09","guid":{"rendered":"https:\/\/www.newsbeep.com\/us\/425484\/"},"modified":"2026-01-23T18:24:09","modified_gmt":"2026-01-23T18:24:09","slug":"giving-your-healthcare-info-to-a-chatbot-is-unsurprisingly-a-terrible-idea","status":"publish","type":"post","link":"https:\/\/www.newsbeep.com\/us\/425484\/","title":{"rendered":"Giving your healthcare info to a chatbot is, unsurprisingly, a terrible idea"},"content":{"rendered":"<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">Every week, more than 230 million people <a href=\"https:\/\/openai.com\/index\/introducing-chatgpt-health\/\" rel=\"nofollow noopener\" target=\"_blank\">ask ChatGPT<\/a> for health and wellness advice, according to OpenAI. The company <a href=\"https:\/\/cdn.openai.com\/pdf\/2cb29276-68cd-4ec6-a5f4-c01c5e7a36e9\/OpenAI-AI-as-a-Healthcare-Ally-Jan-2026.pdf\" rel=\"nofollow noopener\" target=\"_blank\">says<\/a> that many see the chatbot as an \u201cally\u201d to help navigate the maze of insurance, file paperwork, and become better self-advocates. In exchange, it hopes you will trust its chatbot with details about your diagnoses, medications, test results, and other private medical information. But while talking to a chatbot may be starting to feel a bit like the doctor\u2019s office, it isn\u2019t one. Tech companies aren\u2019t bound by the same obligations as medical providers. Experts tell The Verge it would be wise to carefully consider whether you want to hand over your records.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">Health and wellness is swiftly emerging as a key battleground for AI labs and a major test for how willing users are to welcome these systems into their lives. 
This month two of the industry\u2019s biggest players made overt pushes into medicine. OpenAI <a href=\"https:\/\/www.theverge.com\/ai-artificial-intelligence\/857640\/openai-launches-chatgpt-health-connect-medical-records\" rel=\"nofollow noopener\" target=\"_blank\">released ChatGPT Health<\/a>, a dedicated tab within ChatGPT designed for users to ask health-related questions in what it says is a more secure and personalized environment. Anthropic <a href=\"https:\/\/www.anthropic.com\/news\/healthcare-life-sciences\" rel=\"nofollow noopener\" target=\"_blank\">introduced Claude for Healthcare<\/a>, a \u201cHIPAA-ready\u201d product it says can be used by hospitals, health providers, and consumers. (Notably absent is Google, whose Gemini chatbot is one of the world\u2019s most competent and widely used AI tools, though the company did <a href=\"https:\/\/research.google\/blog\/next-generation-medical-image-interpretation-with-medgemma-15-and-medical-speech-to-text-with-medasr\/\" rel=\"nofollow noopener\" target=\"_blank\">announce<\/a> an update to its MedGemma medical AI model for developers.)<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">OpenAI actively encourages users to share sensitive information like medical records, lab results, and health and wellness data from apps like Apple Health, Peloton, Weight Watchers, and MyFitnessPal with ChatGPT Health in exchange for deeper insights. It explicitly states that users\u2019 health data will be kept confidential and won\u2019t be used to train AI models, and that steps have been taken to keep data secure and private. 
OpenAI says ChatGPT Health conversations will also be held in a separate part of the app, with users able to view or delete Health \u201cmemories\u201d at any time.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">OpenAI\u2019s assurances that it will keep users\u2019 sensitive data safe have been helped in no small way by the company launching an identical-sounding product with tighter security protocols at almost the same time as ChatGPT Health. The tool, called ChatGPT for Healthcare, is part of a broader range of <a href=\"https:\/\/openai.com\/index\/openai-for-healthcare\/\" rel=\"nofollow noopener\" target=\"_blank\">products<\/a> sold to support businesses, hospitals, and clinicians working with patients directly. OpenAI\u2019s suggested uses include streamlining administrative work like drafting clinical letters and discharge summaries and helping physicians collate the latest medical evidence to improve patient care. Similar to other enterprise-grade products sold by the company, there are greater protections in place than offered to general consumers, especially free users, and OpenAI says the products are designed to comply with the privacy obligations required of the medical sector. Given the similar names and launch dates \u2014 ChatGPT for Healthcare was announced the day after ChatGPT Health \u2014 it is all too easy to confuse the two and presume the consumer-facing product has the same level of protection as the more clinically oriented one. 
Numerous people I spoke to while reporting this story made exactly that mistake.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup qnnwq2 _1xwtict9\">Even if you trust a company\u2019s vow to safeguard your data\u2026 it might just change its mind.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">Whichever assurance we take at face value, however, none of them is watertight. Users of tools like ChatGPT Health often have little safeguarding against breaches or unauthorized use beyond what\u2019s in the terms of use and privacy policies, experts tell The Verge. As most states haven\u2019t enacted comprehensive privacy laws \u2014 and there isn\u2019t a comprehensive federal privacy law \u2014 data protection for AI tools like ChatGPT Health \u201clargely depends on what companies promise in their privacy policies and terms of use,\u201d says Sara Gerke, a law professor at the University of Illinois Urbana-Champaign.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">Even if you trust a company\u2019s vow to safeguard your data \u2014 OpenAI says it encrypts Health data by default \u2014 it might just change its mind. \u201cWhile ChatGPT does state in their current terms of use that they will keep this data confidential and not use them to train their models, you are not protected by law, and it is allowed to change terms of use over time,\u201d explains Hannah van Kolfschooten, a researcher in digital health law at the University of Basel in Switzerland. \u201cYou will have to trust that ChatGPT does not do so.\u201d Carmel Shachar, an assistant clinical professor of law at Harvard Law School, concurs: \u201cThere\u2019s very limited protection. 
Some of it is their word, but they could always go back and change their privacy practices.\u201d<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">Assurances that a product is compliant with data protection laws governing the healthcare sector, like the Health Insurance Portability and Accountability Act, or HIPAA, shouldn\u2019t offer much comfort either, Shachar says. While great as a guide, there\u2019s little at stake if a company that voluntarily complies fails to do so, she explains. Voluntarily complying isn\u2019t the same as being bound. \u201cThe value of HIPAA is that if you mess up, there\u2019s enforcement.\u201d<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup qnnwq2 _1xwtict9\">There\u2019s a reason why medicine is a heavily regulated field<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">It\u2019s more than just privacy. There\u2019s a reason why medicine is a heavily regulated field \u2014 errors can be dangerous, even lethal. There is no shortage of examples showing chatbots confidently spouting false or misleading health information, such as when a man <a href=\"https:\/\/www.theguardian.com\/technology\/2025\/aug\/12\/us-man-bromism-salt-diet-chatgpt-openai-health-information\" rel=\"nofollow noopener\" target=\"_blank\">developed a rare condition<\/a> after he asked ChatGPT about removing salt from his diet and the chatbot suggested he replace salt with sodium bromide, which was <a href=\"https:\/\/www.sciencedirect.com\/topics\/medicine-and-dentistry\/bromine-derivative\" rel=\"nofollow noopener\" target=\"_blank\">historically<\/a> used as a sedative. 
Or when Google\u2019s AI Overviews <a href=\"https:\/\/www.theverge.com\/news\/860356\/google-pulls-alarming-dangerous-medical-ai-overviews\" rel=\"nofollow noopener\" target=\"_blank\">wrongly advised<\/a> people with pancreatic cancer to avoid high-fat foods \u2014 the exact opposite of what they should be doing.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">To address this, OpenAI explicitly states that its consumer-facing tool is designed to be used in close collaboration with physicians and is not intended for diagnosis and treatment. Tools designed for diagnosis and treatment are designated as medical devices and are subject to much stricter regulations, such as clinical trials to prove they work and safety monitoring once deployed. Although OpenAI is fully and openly aware that one of the major use cases of ChatGPT is supporting users\u2019 health and well-being \u2014 recall the 230 million people asking for advice each week \u2014 the company\u2019s assertion that it is not intended as a medical device carries a lot of weight with regulators, Gerke explains. \u201cThe manufacturer\u2019s stated intended use is a key factor in the medical device classification,\u201d she says, meaning companies that say tools aren\u2019t for medical use will largely escape oversight even if products are being used for medical purposes. It underscores the regulatory challenges that technologies like chatbots pose.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">For now, at least, this disclaimer keeps ChatGPT Health out of the purview of regulators like the Food and Drug Administration, but van Kolfschooten says it\u2019s perfectly reasonable to ask whether tools like this should really be classified as medical devices and regulated as such. 
It\u2019s important to look at how it\u2019s being used, as well as what the company is saying, she explains. When announcing the product, OpenAI suggested people could use ChatGPT Health to interpret lab results, track health behavior, or help them reason through treatment decisions. If a product is doing this, one could reasonably argue it might fall under the US definition of a medical device, she says, suggesting that Europe\u2019s stronger regulatory framework may be the reason why it\u2019s not available in the region yet.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup qnnwq2 _1xwtict9\">\u201cWhen a system feels personalized and has this aura of authority, medical disclaimers will not necessarily challenge people\u2019s trust in the system.\u201d <\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">Despite claiming ChatGPT is not to be used for diagnosis or treatment, OpenAI has gone to great lengths to prove that ChatGPT is a pretty <a href=\"https:\/\/www.vox.com\/future-perfect\/475081\/chatgpt-health-claude-openai-diagnosis-wellness-wearables\" rel=\"nofollow noopener\" target=\"_blank\">capable medic<\/a> and to encourage users to tap it for health queries. The company highlighted health as a major use case when <a href=\"https:\/\/openai.com\/index\/introducing-gpt-5\/\" rel=\"nofollow noopener\" target=\"_blank\">launching GPT-5<\/a>, and CEO Sam Altman even <a href=\"https:\/\/www.youtube.com\/live\/0Uu_VJeVVfo?si=e0HWcP8l-tqftQ3g&amp;t=2118\" rel=\"nofollow noopener\" target=\"_blank\">invited a cancer patient and her husband<\/a> on stage to discuss how the tool helped her make sense of the diagnosis. 
The company says it assesses ChatGPT\u2019s medical prowess against a benchmark it developed itself with more than 260 physicians across dozens of specialties, <a href=\"https:\/\/openai.com\/index\/healthbench\/\" rel=\"nofollow noopener\" target=\"_blank\">HealthBench<\/a>, that \u201ctests how well AI models perform in realistic health scenarios,\u201d though <a href=\"https:\/\/www.theguardian.com\/technology\/2026\/jan\/15\/chatgpt-health-ai-chatbot-medical-advice\" rel=\"nofollow noopener\" target=\"_blank\">critics note<\/a> it is not very transparent. Other studies \u2014 often small, limited, or run by the company itself \u2014 hint at ChatGPT\u2019s medical potential too, showing that in some cases it can <a href=\"https:\/\/journals.plos.org\/digitalhealth\/article?id=10.1371%2Fjournal.pdig.0000198\" rel=\"nofollow noopener\" target=\"_blank\">pass medical licensing exams<\/a>, <a href=\"https:\/\/jamanetwork.com\/journals\/jamanetworkopen\/fullarticle\/2821167?utm_source=For_The_Media&amp;utm_medium=referral&amp;utm_campaign=ftm_links&amp;utm_term=071624\" rel=\"nofollow noopener\" target=\"_blank\">communicate better with patients<\/a>, and <a href=\"https:\/\/www.nytimes.com\/2024\/11\/17\/health\/chatgpt-ai-doctors-diagnosis.html\" rel=\"nofollow noopener\" target=\"_blank\">outperform doctors at diagnosing illness<\/a>, as well as help doctors make <a href=\"https:\/\/openai.com\/index\/ai-clinical-copilot-penda-health\/\" rel=\"nofollow noopener\" target=\"_blank\">fewer mistakes<\/a> when used as a tool.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">OpenAI\u2019s efforts to present ChatGPT Health as an authoritative source of health information could also undermine any disclaimers it includes telling users not to utilize it for medical purposes, van Kolfschooten says. 
\u201cWhen a system feels personalized and has this aura of authority, medical disclaimers will not necessarily challenge people\u2019s trust in the system.\u201d<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">Companies like OpenAI and Anthropic are hoping they have that trust as they jostle for prominence in what they see as the next big market for AI. The figures showing how many people are already using AI chatbots for health suggest they may be onto something, and given the <a href=\"https:\/\/www.thelancet.com\/journals\/lanpub\/article\/PIIS2468-2667(24)00168-3\/fulltext\" rel=\"nofollow noopener\" target=\"_blank\">stark health inequalities<\/a> and difficulties many face in <a href=\"https:\/\/www.kff.org\/health-costs\/americans-challenges-with-health-care-costs\/\" rel=\"nofollow noopener\" target=\"_blank\">accessing even basic care<\/a>, this could be a good thing. At least, it could be, if that trust is well-placed. We entrust our private information to healthcare providers because the profession has earned that trust. 
It\u2019s not yet clear whether an industry with a reputation for moving fast and breaking things has earned the same.<\/p>\n","protected":false},"excerpt":{"rendered":"Every week, more than 230 million people ask ChatGPT for health and wellness advice, according to OpenAI. 
The&hellip;\n","protected":false},"author":2,"featured_media":425485,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[59],"tags":[182,97,252,253,1283,2853,79],"class_list":{"0":"post-425484","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-health-care","8":"tag-ai","9":"tag-health","10":"tag-health-care","11":"tag-healthcare","12":"tag-openai","13":"tag-report","14":"tag-science"},"_links":{"self":[{"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/posts\/425484","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/comments?post=425484"}],"version-history":[{"count":0,"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/posts\/425484\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/media\/425485"}],"wp:attachment":[{"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/media?parent=425484"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/categories?post=425484"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/tags?post=425484"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}