{"id":406968,"date":"2026-01-14T16:40:07","date_gmt":"2026-01-14T16:40:07","guid":{"rendered":"https:\/\/www.newsbeep.com\/us\/406968\/"},"modified":"2026-01-14T16:40:07","modified_gmt":"2026-01-14T16:40:07","slug":"what-chatgpt-health-can-actually-tell-you-and-what-it-cant","status":"publish","type":"post","link":"https:\/\/www.newsbeep.com\/us\/406968\/","title":{"rendered":"What ChatGPT Health can actually tell you \u2014 and what it can\u2019t"},"content":{"rendered":"<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1agbrixi lg8ac51 lg8ac50 xkp0cg1\">How often have you asked ChatGPT for health advice? Maybe about a mysterious rash or that tightening in your right calf after a long run. I have, on both counts. ChatGPT even correctly diagnosed that mysterious rash I developed when I first experienced Boston\u2019s winter as <a href=\"https:\/\/www.mayoclinic.org\/diseases-conditions\/cold-urticaria\/symptoms-causes\/syc-20371046\" rel=\"nofollow noopener\" target=\"_blank\">cold urticaria<\/a>, a week before my doctor confirmed it.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1agbrixi lg8ac51 lg8ac50 xkp0cg1\">More than 230 million people ask ChatGPT health-related questions every week, <a href=\"https:\/\/openai.com\/index\/introducing-chatgpt-health\/\" rel=\"nofollow noopener\" target=\"_blank\">according to OpenAI<\/a>. While people have been <a href=\"https:\/\/www.wsj.com\/articles\/SB939076866193196830\" rel=\"nofollow noopener\" target=\"_blank\">plugging<\/a> their health anxieties into the internet since its earliest days, what\u2019s changed now is the interface: Instead of scrolling through endless search results, you can now have what feels like a personal conversation. (Disclosure: Vox Media is one of several publishers that have signed partnership agreements with OpenAI. 
Our reporting remains editorially independent.)<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1agbrixi lg8ac51 lg8ac50 xkp0cg1\">In the past week, two of the biggest AI companies went all-in on that reality. OpenAI launched <a href=\"https:\/\/openai.com\/index\/introducing-chatgpt-health\/\" rel=\"nofollow noopener\" target=\"_blank\">ChatGPT Health<\/a>, a dedicated space within its larger chat interface where users can connect their medical records, Apple Health data, and stats from other fitness apps to get personalized responses. (It\u2019s <a href=\"https:\/\/www.advisory.com\/daily-briefing\/2026\/01\/12\/chatgpt-health-ab-oi-ec#:~:text=Currently%2C%20ChatGPT%20Health%20is%20only,iOS%20in%20the%20coming%20weeks.\" rel=\"nofollow noopener\" target=\"_blank\">currently available<\/a> to a small group of users, but the company says it will eventually be open to all users.) Just days later, Anthropic <a href=\"https:\/\/www.anthropic.com\/news\/healthcare-life-sciences\" rel=\"nofollow noopener\" target=\"_blank\">announced<\/a> a similar consumer-facing tool for Claude, alongside a host of others geared toward health care professionals and researchers.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1agbrixi lg8ac51 lg8ac50 xkp0cg1\">Both consumer-facing AI tools come with disclaimers \u2014 not intended for diagnosis, consult a professional \u2014 that are likely crafted for liability reasons. 
But those warnings won\u2019t stop the hundreds of millions already using chatbots to understand their symptoms.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1agbrixi lg8ac51 lg8ac50 xkp0cg1\">However, it\u2019s possible that these companies have it backward: AI excels at diagnosis; several studies show it\u2019s one of the best use cases for the technology. And there are real trade-offs \u2014 around <a href=\"https:\/\/www.nytimes.com\/interactive\/2023\/12\/22\/technology\/openai-chatgpt-privacy-exploit.html\" rel=\"nofollow noopener\" target=\"_blank\">data privacy<\/a> and <a href=\"https:\/\/www.vox.com\/future-perfect\/417644\/ai-chatgpt-ocd-obsessive-compulsive-disorder-chatbots\" rel=\"nofollow noopener\" target=\"_blank\">AI\u2019s tendency to people-please<\/a> \u2014 that are worth understanding before you connect your medical records to a chatbot.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1agbrixi lg8ac51 lg8ac50 xkp0cg1\">Let\u2019s start with what AI is actually good at: diagnosis.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1agbrixi lg8ac51 lg8ac50 xkp0cg1\">Diagnosis is largely pattern-matching, which is partially how AI models are trained in the first place. All an AI model has to do is take in symptoms or data, match them to known conditions, and arrive at an answer. These are patterns doctors have validated over decades \u2014 these symptoms mean this disease, this kind of image shows that condition. 
AI has been trained on millions of these labeled cases, and it shows.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1agbrixi lg8ac51 lg8ac50 xkp0cg1\">In a <a href=\"https:\/\/jamanetwork.com\/journals\/jamanetworkopen\/fullarticle\/2825395\" rel=\"nofollow noopener\" target=\"_blank\">2024 study<\/a>, GPT-4 \u2014 OpenAI\u2019s leading model at the time \u2014 achieved diagnostic accuracy above 90 percent on complex clinical cases, such as patients presenting with atypical lacy rashes. Meanwhile, human physicians using conventional resources scored around 74 percent. In a <a href=\"https:\/\/www.frontiersin.org\/journals\/medicine\/articles\/10.3389\/fmed.2025.1709413\/full\" rel=\"nofollow noopener\" target=\"_blank\">separate study<\/a> published this year, top models outperformed doctors at identifying rare conditions from images \u2014 including aggressive skin cancers, birth defects, and internal bleeding \u2014 sometimes by margins of 20 percent or more.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1agbrixi lg8ac51 lg8ac50 xkp0cg1\">Treatment is where things get murky. Clinicians have to consider the right drug, but also try to figure out whether the patient will actually take it. The twice-daily pill might work better, but will they remember to take both doses? Can they afford it? Do they have transportation to the infusion center? Will they follow up?<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1agbrixi lg8ac51 lg8ac50 xkp0cg1\">These are human questions, dependent on context that doesn\u2019t live in training data. 
And of course, a large language model can\u2019t actually prescribe you anything, nor does it have the reliable memory you\u2019d need for longer-term case management.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1agbrixi lg8ac51 lg8ac50 xkp0cg1\">\u201cManagement often has no right answers,\u201d said Adam Rodman, a physician at Beth Israel Deaconess Medical Center in Boston and a professor at Harvard Medical School. \u201cIt\u2019s harder to train a model to do that.\u201d<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1agbrixi lg8ac51 lg8ac50 xkp0cg1\">But OpenAI and Anthropic aren\u2019t marketing diagnostic tools. They\u2019re marketing something vaguer: AI as a personal health analyst. Both ChatGPT Health and Claude now let you connect Apple Health, Peloton, and other fitness trackers. The promise is that AI can analyze your sleep, movement, and heart rate over time \u2014 and surface meaningful trends out of all that disparate data.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1agbrixi lg8ac51 lg8ac50 xkp0cg1\">One problem with that promise: there\u2019s no published independent research showing it can deliver. The AI might observe that your resting heart rate is climbing or that you sleep worse on Sundays. But observing a trend isn\u2019t the same as knowing what it means \u2014 and no one has validated which trends, if any, predict real health outcomes. 
\u201cIt\u2019s going on vibes,\u201d Rodman said.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1agbrixi lg8ac51 lg8ac50 xkp0cg1\">Both companies have tested their products on internal benchmarks \u2014 OpenAI developed HealthBench, built with hundreds of physicians, which tests how models explain lab results, prepare users for appointments, and interpret wearable data.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1agbrixi lg8ac51 lg8ac50 xkp0cg1\">But HealthBench relies on synthetic conversations, not real patient interactions. And it\u2019s text-only, meaning it doesn\u2019t test what happens when you actually upload your Apple Health data. Also, the average conversation is just 2.6 exchanges, far from the anxious back-and-forth a worried user might have over days.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1agbrixi lg8ac51 lg8ac50 xkp0cg1\">This doesn\u2019t mean ChatGPT\u2019s or Claude\u2019s new health features are useless. They might help you notice trends in your habits, the way a migraine diary helps people spot triggers. But it\u2019s not validated science at this point, and it\u2019s worth knowing the difference.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1agbrixi lg8ac51 lg8ac50 xkp0cg1\">The more important question is what AI can actually do with your health data, and what you\u2019re risking when you use these tools.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1agbrixi lg8ac51 lg8ac50 xkp0cg1\">Health conversations are stored separately, OpenAI says, and their contents are not used to train models, unlike most other chatbot interactions. But neither ChatGPT Health nor Claude\u2019s consumer-facing health features are covered by HIPAA, the law that protects information you share with doctors and insurers. 
(OpenAI and Anthropic do offer enterprise software to hospitals and insurers that is HIPAA-compliant.)<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1agbrixi lg8ac51 lg8ac50 xkp0cg1\">In the case of a lawsuit or criminal investigation, the companies would have to comply with a court order. Sara Geoghegan, senior counsel at the Electronic Privacy Information Center, <a href=\"https:\/\/therecord.media\/chatgpt-health-draws-concern-privacy-critics\" rel=\"nofollow noopener\" target=\"_blank\">told The Record<\/a> that sharing medical records with ChatGPT could effectively strip those records of HIPAA protection.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1agbrixi lg8ac51 lg8ac50 xkp0cg1\">At a time when reproductive care and gender-affirming care are under <a href=\"https:\/\/apnews.com\/article\/lawsuit-hhs-transgender-health-care-children-015b2e5df026c9d69da7eadbdf6647ae\" rel=\"nofollow noopener\" target=\"_blank\">legal threat in multiple states<\/a>, that\u2019s not an abstract worry. If you\u2019re asking a chatbot questions about either \u2014 and connecting your medical records \u2014 you\u2019re likely creating a data trail that could potentially be subpoenaed.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1agbrixi lg8ac51 lg8ac50 xkp0cg1\">Additionally, AI models aren\u2019t neutral stores of information. They have a <a href=\"https:\/\/www.law.georgetown.edu\/tech-institute\/insights\/ai-sycophancy-impacts-harms-questions\/\" rel=\"nofollow noopener\" target=\"_blank\">documented tendency<\/a> to tell you what you want to hear. 
If you\u2019re anxious about a symptom \u2014 or <a href=\"https:\/\/www.vox.com\/future-perfect\/417644\/ai-chatgpt-ocd-obsessive-compulsive-disorder-chatbots\" rel=\"nofollow noopener\" target=\"_blank\">fishing for reassurance<\/a> that it\u2019s nothing serious \u2014 the model can pick up on your tone and possibly adjust its response in a way a human doctor is trained not to do.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1agbrixi lg8ac51 lg8ac50 xkp0cg1\">Both <a href=\"https:\/\/openai.com\/index\/introducing-chatgpt-health\/\" rel=\"nofollow noopener\" target=\"_blank\">companies<\/a> <a href=\"https:\/\/www.anthropic.com\/news\/healthcare-life-sciences\" rel=\"nofollow noopener\" target=\"_blank\">say<\/a> they have trained their health models to explain information and flag when something warrants a doctor\u2019s visit, rather than simply agreeing with users. Newer models are more likely to ask follow-up questions when uncertain. But it remains to be seen how they perform in real-world situations.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1agbrixi lg8ac51 lg8ac50 xkp0cg1\">And sometimes the stakes are higher than a missed diagnosis.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1agbrixi lg8ac51 lg8ac50 xkp0cg1\">A <a href=\"https:\/\/arxiv.org\/abs\/2512.01241\" rel=\"nofollow noopener\" target=\"_blank\">preprint<\/a> published in December tested 31 leading AI models, including those from OpenAI and Anthropic, on real-world medical cases and found that the worst performing model made recommendations with a potential for life-threatening harm in about one out of every five scenarios. 
A <a href=\"https:\/\/www.medrxiv.org\/content\/10.1101\/2025.09.05.25335163v1\" rel=\"nofollow noopener\" target=\"_blank\">separate study<\/a> of an OpenAI-powered clinical decision support tool used in Kenyan primary care clinics found that when AI made a rare harmful suggestion (in about 8 percent of cases), clinicians adopted the bad advice nearly 60 percent of the time.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1agbrixi lg8ac51 lg8ac50 xkp0cg1\">These aren\u2019t theoretical concerns. Two years ago, a California teenager named Sam Nelson <a href=\"https:\/\/www.sfgate.com\/tech\/article\/calif-teen-chatgpt-drug-advice-fatal-overdose-21266718.php\" rel=\"nofollow noopener\" target=\"_blank\">died<\/a> after asking ChatGPT to help him use recreational drugs safely. Cases like his are rare, and mistakes by human physicians are real \u2014 tens of thousands of people die each year because of medical errors. But these stories show what can happen when people trust AI with high-stakes decisions.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1agbrixi lg8ac51 lg8ac50 xkp0cg1\">It would be easy to read all this and conclude that you should never ask a chatbot a health question. But that ignores why millions of people already do.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1agbrixi lg8ac51 lg8ac50 xkp0cg1\">The average wait for a primary care appointment in the US is now 31 days \u2014 and in some cities, like Boston, it\u2019s <a href=\"https:\/\/www.psqh.com\/news\/survey-physician-appointment-wait-times-surge-19-since-2022\/\" rel=\"nofollow noopener\" target=\"_blank\">over two months<\/a>. When you do get in, the visit lasts <a href=\"https:\/\/profiles.wustl.edu\/en\/publications\/measuring-primary-care-exam-length-using-electronic-health-record\/\" rel=\"nofollow noopener\" target=\"_blank\">about 18 minutes<\/a>. 
According to OpenAI, seven in 10 health-related ChatGPT conversations happen outside clinic hours.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1agbrixi lg8ac51 lg8ac50 xkp0cg1\">Chatbots, by comparison, are available 24\/7, and \u201cthey\u2019re infinitely patient,\u201d said Rodman. They\u2019ll answer the same question five different ways. For a lot of people, that\u2019s more than they get from the health care system.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1agbrixi lg8ac51 lg8ac50 lg8ac5a xkp0cg1\">So should you use these tools? There\u2019s no single answer. But here\u2019s a framework: AI is good at explaining things like lab results, medical terminology, or what questions to ask your doctor. It\u2019s unproven at finding meaningful trends in your wellness data. And it\u2019s not a substitute for a diagnosis from someone who can actually examine you.<\/p>\n","protected":false},"excerpt":{"rendered":"How often have you asked ChatGPT for health advice? Maybe about a mysterious rash or that tightening in&hellip;\n","protected":false},"author":2,"featured_media":406969,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[59],"tags":[181,29583,29584,97,252,253,4530,1343,74],"class_list":{"0":"post-406968","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-health-care","8":"tag-artificial-intelligence","9":"tag-explainers","10":"tag-future-perfect","11":"tag-health","12":"tag-health-care","13":"tag-healthcare","14":"tag-innovation","15":"tag-policy","16":"tag-technology"},"_links":{"self":[{"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/posts\/406968","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/comments?post=406968"}],"version-history":[{"count":0,"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/posts\/406968\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/
wp\/v2\/media\/406969"}],"wp:attachment":[{"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/media?parent=406968"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/categories?post=406968"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/tags?post=406968"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}