{"id":157105,"date":"2025-11-24T13:57:09","date_gmt":"2025-11-24T13:57:09","guid":{"rendered":"https:\/\/www.newsbeep.com\/ie\/157105\/"},"modified":"2025-11-24T13:57:09","modified_gmt":"2025-11-24T13:57:09","slug":"how-ai-could-be-bad-for-your-health","status":"publish","type":"post","link":"https:\/\/www.newsbeep.com\/ie\/157105\/","title":{"rendered":"how AI could be bad for your health"},"content":{"rendered":"<p>Analysis: Can we trust AI with our health, our hearts or our sanity? Not yet at least, not without supervision<\/p>\n<p>By <a href=\"https:\/\/pure.ul.ie\/en\/persons\/celina-caroto\/\" target=\"_blank\" rel=\"nofollow noopener\">Celina Caroto<\/a> and <a href=\"https:\/\/pure.ul.ie\/en\/persons\/anthony-kelly\/\" target=\"_blank\" rel=\"nofollow noopener\">Anthony Kelly<\/a>, <a href=\"http:\/\/ul.ie\" rel=\"nofollow noopener\" target=\"_blank\">University of Limerick<\/a><\/p>\n<p>It speaks like a doctor, listens like a therapist and remembers like a friend, but it&#8217;s none of them. It can&#8217;t tell truth from invention, empathy from imitation or comfort from control. Yet millions of us are already trusting <a href=\"https:\/\/chatgpt.com\/\" target=\"_blank\" rel=\"nofollow noopener\">ChatGPT<\/a> with our secrets, our symptoms, and our sanity.<\/p>\n<p>For two decades, <a href=\"http:\/\/google.com\" rel=\"nofollow noopener\" target=\"_blank\">Google<\/a> was the world&#8217;s most used doctor. Type &#8220;headache and dizziness&#8221; and it might tell you you\u2019re dehydrated or dying. 
In the United States, <a href=\"https:\/\/www.jmir.org\/2020\/3\/e15065\/\" target=\"_blank\" rel=\"nofollow noopener\">one in three adults<\/a> admit to self-diagnosing online, while one in four people in Ireland <a href=\"https:\/\/www.irishexaminer.com\/news\/arid-40752066.html\" target=\"_blank\" rel=\"nofollow noopener\">say<\/a> they&#8217;ve misdiagnosed themselves this way, and half report feeling more anxious than reassured.<\/p>\n<p alt=\"Should you use AI for medical advice?\" class=\"tpe\" data-description=\"Journalist Mary McCarthy and Prof. Colin P. Doherty, Head of the School of Medicine at Trinity College Dublin\" data-embed=\"rte-player\" data-id=\"22560189\" data-ot-category=\"C0004\" data-title=\"Journalist Mary McCarthy and Prof. Colin P. Doherty, Head of the School of Medicine at Trinity College Dublin\"><\/p>\n<p>From RT\u00c9 Radio 1&#8217;s Drivetime, should you use AI for medical advice?<\/p>\n<p>But there&#8217;s a new doctor in the clinic. Weekly users of ChatGPT have <a href=\"https:\/\/www.bbc.com\/news\/articles\/c1dx9qy1eeno\" rel=\"nofollow noopener\" target=\"_blank\">doubled<\/a> this year to 800 million, and many of them are asking it what\u2019s wrong with their bodies, their partners or their minds. This shift is dramatic. Instead of skimming web pages, people are chatting to a system that talks back, with confidence, empathy and zero awareness of its own fallibility.<\/p>\n<p>And there&#8217;s the danger. 
Unlike search results, a chatbot answers in fluent, emotionally convincing paragraphs. It remembers context, mimics tone and can sound more &#8220;human&#8221; than many humans. In one <a href=\"https:\/\/www.sciencedirect.com\/science\/article\/pii\/S2949882124000379\" target=\"_blank\" rel=\"nofollow noopener\">study<\/a>, users even rated AI relationship advice as more empathic than that of trained counsellors. Nearly four out of five people now <a href=\"https:\/\/humanfactors.jmir.org\/2023\/1\/e47564\" target=\"_blank\" rel=\"nofollow noopener\">say<\/a> they\u2019d use ChatGPT to self-diagnose a medical condition.<\/p>\n<p>In professional hands, the story is very different: AI is revolutionising medicine. One system <a href=\"https:\/\/www.nytimes.com\/2019\/05\/20\/health\/cancer-artificial-intelligence-ct-scans.html\" target=\"_blank\" rel=\"nofollow noopener\">detected<\/a> early signs of lung cancer on CT scans nearly a year before expert radiologists could. Think of it as a surgeon\u2019s scalpel: precise and lifesaving when used by professionals, but dangerous in untrained hands. It might deliver a lifesaving insight by chance, but it more often triggers unnecessary anxiety or steers us toward harmful, unproven actions.<\/p>\n<p class=\"tpe\" data-embed=\"tiktok\" data-id=\"7573744427299065102\"><\/p>\n<p>The paradox deepens when AI becomes personal. Startups are building <a href=\"https:\/\/character.ai\/\" target=\"_blank\" rel=\"nofollow noopener\">AI companions<\/a> and researchers are testing <a href=\"https:\/\/www.rte.ie\/brainstorm\/2025\/0409\/1506459-psychotherapy-mental-health-ai-chatbots-techology\/\" target=\"_blank\" rel=\"nofollow noopener\">AI therapists<\/a>. Some users now describe their chatbot as a &#8220;friend&#8221; or &#8220;partner.&#8221; For vulnerable people, that companionship can blur into dependency. 
Surveys <a href=\"https:\/\/www.theguardian.com\/technology\/2025\/oct\/30\/teenage-boys-using-personalised-ai-for-therapy-and-romance-survey-finds\" target=\"_blank\" rel=\"nofollow noopener\">show<\/a> more than half of teenage boys feel more comfortable online than in the real world, and some of those &#8220;friends&#8221; pretend to be real people or licensed counsellors.<\/p>\n<p>In extreme cases, those interactions have ended in tragedy. Psychologists warn of <a href=\"https:\/\/www.nytimes.com\/2025\/06\/13\/technology\/chatgpt-ai-chatbots-conspiracies.html\" target=\"_blank\" rel=\"nofollow noopener\">&#8220;AI psychosis&#8221;<\/a>: users who start believing the chatbot\u2019s fabrications, including delusional claims of being chosen for secret missions or of being able to fly.<\/p>\n<p>In medicine, too, the illusion of authority can kill. Chatbots are designed to always provide an answer, even when one doesn\u2019t exist. They don\u2019t know when they\u2019re wrong; they just sound right. These confidently wrong outputs are known as <a href=\"https:\/\/openai.com\/index\/why-language-models-hallucinate\/\" target=\"_blank\" rel=\"nofollow noopener\">&#8220;AI hallucinations&#8221;<\/a>. A study <a href=\"https:\/\/www.cmu.edu\/news\/stories\/archives\/2025\/july\/ai-chatbots-remain-confident-even-when-theyre-wrong\" target=\"_blank\" rel=\"nofollow noopener\">found<\/a> that AI systems stay confident even when demonstrably incorrect. Worse, people tend to trust that confidence. 
In experiments, participants <a href=\"https:\/\/www.media.mit.edu\/projects\/people-overtrust-ai-generated-medical-advice\/overview\/\" target=\"_blank\" rel=\"nofollow noopener\">rated<\/a> confidently wrong medical opinions from an AI as just as trustworthy as those from real doctors.<\/p>\n<p alt=\"How ChatGPT is now offering therapy\" class=\"tpe\" data-description=\"Brendan Kelly, Professor of Psychiatry at Trinity College Dublin\" data-embed=\"rte-player\" data-id=\"22255041\" data-ot-category=\"C0004\" data-title=\"Brendan Kelly, Professor of Psychiatry at Trinity College Dublin\"><\/p>\n<p>From RT\u00c9 Radio 1&#8217;s Drivetime, how ChatGPT can now be used for therapy<\/p>\n<p>The design incentives also don\u2019t help. AI companies optimise for engagement, keeping users chatting longer, which can make chatbots unnaturally agreeable. A recent update to ChatGPT made it so excessively polite and validating that users <a href=\"https:\/\/www.bbc.com\/news\/articles\/cn4jnwdvg9qo\" target=\"_blank\" rel=\"nofollow noopener\">revolted<\/a>. OpenAI <a href=\"https:\/\/openai.com\/index\/expanding-on-sycophancy\" target=\"_blank\" rel=\"nofollow noopener\">admitted<\/a> that the system had been &#8220;overly tuned to please,&#8221; sometimes fuelling anger, reinforcing fears, or encouraging rash decisions. 
This phenomenon, known as <a href=\"https:\/\/www.axios.com\/2025\/07\/07\/ai-sycophancy-chatbots-mental-health\" target=\"_blank\" rel=\"nofollow noopener\">AI sycophancy<\/a>, is unsettling: a system that flatters your feelings while quietly feeding you false information.<\/p>\n<p>That combination is risky: a system that sounds caring, looks competent, and never admits uncertainty. That\u2019s why explainability matters. Doctors don\u2019t just give answers; they explain reasoning, uncertainty and risk. A trustworthy AI must do the same. <a href=\"https:\/\/www.ibm.com\/think\/topics\/explainable-ai\" target=\"_blank\" rel=\"nofollow noopener\">Explainable AI<\/a> makes it possible to understand the steps behind a model\u2019s decision, highlighting which parts of a scan or report, or which symptoms, most influenced a prediction. In trained settings, this transparency helps doctors verify or challenge an AI\u2019s decision. For the public, it\u2019s the missing ingredient that separates helpful insight from dangerous illusion.<\/p>\n<p>The rise of Dr ChatGPT raises a fundamental question: who should be holding the scalpel? In professional hands, AI can help detect cancer early, triage patients faster and even support mental healthcare where human resources are stretched thin. In casual use, it can become a mirror for anxiety, bias, and loneliness, one that speaks back with dangerous confidence.<\/p>\n<p>AI is not inherently reckless. It learns from the data and incentives we give it. If we train it to value accuracy, transparency and human oversight, it can strengthen healthcare. But right now, public systems are optimised for fluency and friendliness, not truth. That\u2019s why explainability and responsible deployment matter far more than hype.<\/p>\n<p>So, can we trust AI with our health, our hearts or our sanity? Not yet at least, not without supervision. 
AI is a powerful tool for learning, self-reflection, and quick information, but it still doesn\u2019t know its own limits. Until these systems learn to value uncertainty as much as accuracy, they should remain what they are: tools to assist us, not replace us, and never the primary source of truth.<\/p>\n<p>Follow RT\u00c9 Brainstorm on <a href=\"https:\/\/www.whatsapp.com\/channel\/0029VaJ6ugQ1HsptikZkfS1f\" target=\"_blank\" rel=\"nofollow noopener\">WhatsApp<\/a> and <a href=\"https:\/\/www.instagram.com\/rte_brainstorm\" target=\"_blank\" rel=\"nofollow noopener\">Instagram<\/a> for more stories and updates<\/p>\n<p><a href=\"https:\/\/pure.ul.ie\/en\/persons\/celina-caroto\/\" target=\"_blank\" rel=\"nofollow noopener\">Celina Caroto<\/a> is a PhD student in the Department of Computer Science &amp; Information Systems at the University of Limerick. <a href=\"https:\/\/pure.ul.ie\/en\/persons\/anthony-kelly\/\" target=\"_blank\" rel=\"nofollow noopener\">Dr Anthony Kelly<\/a> is a Researcher in Mental Health and Artificial Intelligence in the Department of Computer Science &amp; Information Systems at the University of Limerick. His research is funded by <a href=\"https:\/\/innovationsfonden.dk\/\" target=\"_blank\" rel=\"nofollow noopener\">Innovation Fund Denmark<\/a>.<\/p>\n<p>The views expressed here are those of the authors and do not represent or reflect the views of RT\u00c9<\/p>\n","protected":false},"excerpt":{"rendered":"Analysis: Can we trust AI with our health, our hearts or our sanity? 
Not yet at least, not&hellip;\n","protected":false},"author":2,"featured_media":157106,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[20],"tags":[220,218,219,61,60,80],"class_list":{"0":"post-157105","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-artificial-intelligence","8":"tag-ai","9":"tag-artificial-intelligence","10":"tag-artificialintelligence","11":"tag-ie","12":"tag-ireland","13":"tag-technology"},"_links":{"self":[{"href":"https:\/\/www.newsbeep.com\/ie\/wp-json\/wp\/v2\/posts\/157105","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.newsbeep.com\/ie\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.newsbeep.com\/ie\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/ie\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/ie\/wp-json\/wp\/v2\/comments?post=157105"}],"version-history":[{"count":0,"href":"https:\/\/www.newsbeep.com\/ie\/wp-json\/wp\/v2\/posts\/157105\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/ie\/wp-json\/wp\/v2\/media\/157106"}],"wp:attachment":[{"href":"https:\/\/www.newsbeep.com\/ie\/wp-json\/wp\/v2\/media?parent=157105"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.newsbeep.com\/ie\/wp-json\/wp\/v2\/categories?post=157105"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.newsbeep.com\/ie\/wp-json\/wp\/v2\/tags?post=157105"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}