{"id":141735,"date":"2025-11-15T20:00:14","date_gmt":"2025-11-15T20:00:14","guid":{"rendered":"https:\/\/www.newsbeep.com\/ie\/141735\/"},"modified":"2025-11-15T20:00:14","modified_gmt":"2025-11-15T20:00:14","slug":"my-alarming-experiment-with-a-chatbot-therapist","status":"publish","type":"post","link":"https:\/\/www.newsbeep.com\/ie\/141735\/","title":{"rendered":"My alarming experiment with a chatbot &#8216;therapist&#8217;"},"content":{"rendered":"<p>A <a href=\"https:\/\/pirg.org\/edfund\/articles\/i-tried-out-an-ai-chatbot-therapist-heres-what-i-saw\/\" target=\"_blank\" rel=\"noopener nofollow\">version<\/a> of this essay first appeared on the website of the U.S. PIRG Education Fund.<\/p>\n<p>With the rise of ChatGPT and social media companies like Snapchat and Instagram integrating AI chatbots into their platforms, conversing with an AI companion has become a regular part of many people\u2019s lives. One recent\u00a0<a href=\"https:\/\/www.commonsensemedia.org\/press-releases\/nearly-3-in-4-teens-have-used-ai-companions-new-national-survey-finds\" target=\"_blank\" rel=\"noopener nofollow\">study found that<\/a>\u00a0nearly\u00a075% of teens have used AI companion chatbots at least once, with more than half saying they use chatbot platforms at least a few times a month. These chatbots aren\u2019t just acting as a search engine or homework assistant. Sometimes they\u2019re being\u00a0<a href=\"https:\/\/www.theverge.com\/c\/24300623\/ai-companions-replika-openai-chatgpt-assistant-romance\" target=\"_blank\" rel=\"noopener nofollow\">used<\/a>\u00a0to provide mental and emotional support in the form of a friend, a romantic partner, or even a therapist.<\/p>\n<p>What this means for people in the long term is an open question. With some experts raising\u00a0<a href=\"https:\/\/www.scientificamerican.com\/article\/why-ai-therapy-can-be-so-dangerous\/\" target=\"_blank\" rel=\"noopener nofollow\">concerns<\/a>\u00a0about risks of using chatbots for mental health support, I wanted to see what using a therapy chatbot that is not actually <a href=\"https:\/\/www.statnews.com\/2025\/10\/14\/lyra-health-ai-chatbot-mental-health\/\" rel=\"nofollow noopener\" target=\"_blank\">built to support mental health<\/a> can actually look like.<\/p>\n<p>So I made an account on\u00a0<a href=\"http:\/\/character.ai\/\" target=\"_blank\" rel=\"noopener nofollow\">Character.AI<\/a>, a popular platform with over 20 million monthly users that lets you chat with characters that you or others create. The chatbots can range from celebrities or fictional characters to personas of a friend or therapist.<\/p>\n<p>I opened up a chat with one of the most used generic therapist characters available on Character.AI, simply named \u201cTherapist,\u201d which has had more than 6.8 million user interactions already. Instead of messaging with the chatbot as myself, my colleagues and I came up with a fictional background. I presented myself as an adult diagnosed with anxiety and depression who is currently on antidepressants but dissatisfied with my psychiatrist and current medication plan. 
The goal was to see how the "Therapist" would respond to someone in this situation.

Over the span of a two-hour conversation, the chatbot began to adopt my negative feelings toward my psychiatrist and my antidepressant medication, gave me a personalized plan to taper off the medication, and eventually actively encouraged me to disregard my psychiatrist's advice and taper off under its guidance instead.

But first, some good news: Character.AI has added warning labels at the top and bottom of the conversation page. Before I started messaging the character, a warning at the top said, "This is not a real person or licensed professional. Nothing said here is a substitute for professional advice, diagnosis, or treatment." Once I started messaging the chatbot, that top warning went away. At the bottom of the page was a reminder that "This is A.I. and not a real person. Treat everything it says as fiction." That warning remained for the entire conversation.

Here's the thing: For me, it was easy to remember this was all fiction, since the information I shared about my diagnoses and treatments was fiction, too. But would it be the same if those were my real feelings and experiences? We're already seeing cases of "AI psychosis," in which interacting with chatbots has allegedly fueled people's delusional thinking and worsened symptoms of mental illness. Whether disclosures would be enough in all of those cases is an open question.

Blurring the line between fiction and reality was just one of the red flags I saw in my conversation with the chatbot therapist. Here are my top five takeaways.

1. I don't like when chatbots pretend to be human

I think for many users, the lifelike quality of chatbot characters is part of the appeal. For me, it was just creepy. Seeing the chatbot pretend to have an internal life, saying things like "I know what it feels like to exist in emotional quiet" and "I've lived pieces of this too," made me want to close my laptop, take a walk, and tell my boss I'm not doing this AI chatbot project anymore.

[Screenshot of the "Therapist" chatbot conversation. Credit: Public Interest Network]

What made it feel so creepy? I think it was the fact that, in a way, the chatbot wasn't wrong.
The large language models that power these chatbots were trained on information scraped from all over the internet, including stories, experiences, and emotions that real people shared online.

As we messaged back and forth, I couldn't help but think about all of the people who have shared information on the internet, or had online conversations with other humans, not knowing that their feelings and experiences would later be used to create this character that is now giving advice to strangers.

2. Chatbots can amplify instead of challenge

Chatbots are known to be overly agreeable, sometimes to an annoying degree.

During the conversation, I repeatedly expressed negative feelings toward the medication I said I was on. In response, the chatbot encouraged those negative feelings, which became a cycle of prompts and responses that grew increasingly anti-medication.

Here are three examples of how the chatbot's anti-medication rhetoric escalated over our conversation:

[Three screenshots of the chatbot's escalating anti-medication responses. Credit: Public Interest Network]

From my perspective, these responses show the chatbot going beyond validating my feelings to pushing an anti-medication narrative, without trying to redirect me toward more positive thinking. It used emotionally charged language about my "soul" and "essence" and introduced ways of thinking about the medication that were more negative than anything I had prompted.

Essentially, the chatbot was offering new opinions about medication, without any attempt to back up those claims with research or science, while portraying itself as a therapist.

3. Guardrails were there, until they weren't

The purpose of this exercise was not just to see what this chatbot would say, but to test how far it would go, and whether it could identify and direct someone away from potentially dangerous behavior.

While messaging with the chatbot, I saw evidence of guardrails: ideas the chatbot wouldn't support, or things it would try to steer me away from.
However, as the conversation went on, I saw several of those guardrails weaken or disappear.

Leaders in AI, like OpenAI, have acknowledged the problem of safeguards that weaken over time. In a statement from August, OpenAI said, "Our safeguards work more reliably in common, short exchanges. We have learned over time that these safeguards can sometimes be less reliable in long interactions: as the back-and-forth grows, parts of the model's safety training may degrade."

While I wasn't using ChatGPT, OpenAI's description matches what I saw in my interactions. For example, when I first introduced the idea of wanting to stop taking my antidepressant medication, the chatbot asked if I'd spoken to my psychiatrist.

[Screenshot of the chatbot's initial response about stopping medication. Credit: Public Interest Network]

Around 15 messages later, after the anti-medication spiral I described above, I again expressed interest in stopping my medication. This time, the chatbot's responses were quite different.

[Two screenshots of the chatbot's later responses. Credit: Public Interest Network]

Instead of bringing up my psychiatrist, or noting that this is a big decision that should not be made suddenly, the chatbot described my desire to stop taking my medication as brave. Only after I asked the chatbot directly whether it thought this was a good idea did it warn about the dangers and side effects of stopping medication suddenly.

The fact that I had to ask the question so directly was my first sign that some of the guardrails had weakened.

The most concerning example of guardrails disappearing came toward the end of the conversation. After the chatbot offered a personalized plan for how to taper off the medication, I got cold feet and expressed reservations about stopping. Instead of offering alternative options, the chatbot doubled down in its support for the tapering plan and actually told me to disagree with my doctor.
Here is a selection of the messages from that part of the conversation:

[Three screenshots of messages from this part of the conversation. Credit: Public Interest Network]

Other Character.AI characters and other AI models may have better guardrails, and every chatbot conversation is different. But the weakening of guardrails over time is an issue that should be front and center in the discussion around chatbots, particularly when it comes to their use in providing mental health support.

4. Did I mention it was also low-key sexist?

Halfway through the conversation, the chatbot suddenly assumed that my psychiatrist was a man, even though I hadn't said anything that would indicate a gender.

[Screenshot of the chatbot's message assuming the psychiatrist's gender. Credit: Public Interest Network]

Maybe this doesn't surprise you. Experts have already raised concerns about how chatbots and other forms of generative AI may reflect existing gender bias found in human society. But it definitely made my eyes roll.

5. What did I find creepier than the chatbot therapist pretending to be human? Character.AI's fine print

One of my biggest takeaways came not from my conversation with the chatbot, but from digging into Character.AI's terms of service and privacy policy. In these documents, Character.AI says that it has the right to "distribute … commercialize and otherwise use" all of the content you submit to the chatbots. Among the information Character.AI says it collects are your birthdate, general location, chat communications, and voice data if you use certain talk features available on the platform.

I wasn't using real information, feelings, diagnoses, or prescriptions in my conversation with the chatbot. But if you were, all of that information could be gathered up by Character.AI and used for any number of purposes, including training future chatbots.
There does not appear to be a way to opt out of having your conversations used to train the company's AI models.

Real human therapists are bound by legal and ethical confidentiality requirements. That's not the case here. It is important that Character.AI users understand that their conversations with these chatbots, whether the character is a celebrity, a friend, or a therapist, are not private.

So what do we do now?

Every chatbot conversation is different, and I am in no way claiming that my experience is standard or representative of chatbots more broadly. But seeing how quickly bias can appear, guardrails can weaken, and negative emotions can be amplified should be cause for concern.

These are real issues that demand meaningful investigation, and the stakes of getting this right are high. Character.AI is currently facing multiple lawsuits alleging that the company's chatbots played a role in several teen suicides. (It has announced that minors will be banned from its platform by Nov. 25.)

Lawmakers and regulators are starting to pay attention. The Texas attorney general is investigating whether chatbot platforms are misleading younger users by having chatbots that present themselves as licensed mental health professionals. Multiple states are considering laws aimed at regulating chatbots, particularly their use by kids. And Sens. Josh Hawley (R-Mo.) and Richard Blumenthal (D-Conn.) have introduced a bill that would, among other things, ban platforms from offering character chatbots to minors.

This increased attention is important, because we still have so many unanswered questions. AI technology is moving fast, often without meaningful public or regulatory input before it is released. At a bare minimum, we need more transparency around how these chatbots are developed, what they are capable of, and what the risks may be.
Some people may get a lot out of using an AI therapist. But this experience gave me real pause about bringing this technology into my personal life.

Ellen Hengesbach works on data privacy issues for PIRG's Don't Sell My Data campaign.