{"id":112567,"date":"2025-08-27T00:53:08","date_gmt":"2025-08-27T00:53:08","guid":{"rendered":"https:\/\/www.newsbeep.com\/us\/112567\/"},"modified":"2025-08-27T00:53:08","modified_gmt":"2025-08-27T00:53:08","slug":"why-do-people-develop-emotional-attachments-to-ai-chatbots","status":"publish","type":"post","link":"https:\/\/www.newsbeep.com\/us\/112567\/","title":{"rendered":"Why Do People Develop Emotional Attachments to AI Chatbots?"},"content":{"rendered":"<p>I was recently interviewed for an article on the emotional connection that people can develop with <a href=\"https:\/\/www.psychologytoday.com\/us\/basics\/artificial-intelligence\" title=\"Psychology Today looks at artificial intelligence\" class=\"basics-link\" hreflang=\"en\" rel=\"nofollow noopener\" target=\"_blank\">artificial intelligence<\/a> (AI) chatbots.1 Here&#8217;s an edited summary of the exchange.<\/p>\n<p>As a psychiatrist, what do you think about people building emotional dependence on a chatbot or seeing it as an additional friend\/companion in their lives? Is this healthy or unhealthy behavior? <\/p>\n<p>Joe Pierre: Mark Zuckerberg has declared that people have a significant <a href=\"https:\/\/www.psychologytoday.com\/us\/basics\/loneliness\" title=\"Psychology Today looks at loneliness\" class=\"basics-link\" hreflang=\"en\" rel=\"nofollow noopener\" target=\"_blank\">loneliness<\/a> problem and that AI can fill the void. But I would argue that if we do have a loneliness problem, at least part of it is due to how much time we spend in front of our phone or computer screens or on <a href=\"https:\/\/www.psychologytoday.com\/us\/basics\/social-media\" title=\"Psychology Today looks at social media\" class=\"basics-link\" hreflang=\"en\" rel=\"nofollow noopener\" target=\"_blank\">social media<\/a> at the expense of real human interaction. 
So, in my view, it would be <a href=\"https:\/\/www.psychologytoday.com\/us\/blog\/psych-unseen\/202505\/is-ai-really-the-cure-for-loneliness-and-lack-of-connection\" rel=\"nofollow noopener\" target=\"_blank\">much healthier and more fulfilling to foster human relationships<\/a> than to try to fill a void with an unthinking, unfeeling chatbot that only interacts through a dialogue box.<\/p>\n<p>I\u2019d also argue that \u201cemotional dependence\u201d on almost anything is unhealthy. Generally speaking, I agree with the sentiment that we can\u2019t expect other people to make us happy, so I certainly don\u2019t think that our emotional well-being should depend on a chatbot.<\/p>\n<p>What societal factors can cause people to build these levels of <a href=\"https:\/\/www.psychologytoday.com\/us\/basics\/attachment\" title=\"Psychology Today looks at attachment\" class=\"basics-link\" hreflang=\"en\" rel=\"nofollow noopener\" target=\"_blank\">attachment<\/a> to their chatbots?<\/p>\n<p>JP: It\u2019s long been claimed that people have become increasingly \u201catomized\u201d or disconnected from communities and cultures, whether due to becoming more mobile (moving jobs, relocating, etc.) and more secular, or more recently due to the pandemic and the newfound acceptability of work-at-home gigs, or because of how much time we spend interacting with people online.<\/p>\n<p>On the one hand, it could be argued that we\u2019re more connected through social media in the sense that we can keep tabs on people with whom we wouldn\u2019t otherwise keep in contact. 
But on the other hand, it could also be argued that replacing face-to-face interactions with texting or social media interactions has taken a toll on real <a href=\"https:\/\/www.psychologytoday.com\/us\/basics\/friends\" title=\"Psychology Today looks at friendship\" class=\"basics-link\" hreflang=\"en\" rel=\"nofollow noopener\" target=\"_blank\">friendship<\/a>.<\/p>\n<p>Either way, maintaining attachments through digital means has become a way of life for many of us, so that doing so with a chatbot\u2014particularly one that\u2019s marketed for that purpose and given a name like &#8220;Claude&#8221;\u2014probably comes naturally enough for a lot of people these days. <\/p>\n<p>Beyond societal factors, there\u2019s also the perceived advantage of interacting with AI chatbots over real people. They\u2019re available 24\/7. They don\u2019t have their own needs. They\u2019re totally devoted to the user, and if you don\u2019t like what they\u2019re saying, you can just tell them to act differently and they\u2019ll do it. So, it could be argued that the kind of attachment we see to AI chatbots is inherently <a href=\"https:\/\/www.psychologytoday.com\/us\/basics\/narcissism\" title=\"Psychology Today looks at narcissistic\" class=\"basics-link\" hreflang=\"en\" rel=\"nofollow noopener\" target=\"_blank\">narcissistic<\/a> and one-sided. After all, it&#8217;s often said that AI chatbots are mirrors&#8230; and we know that <a href=\"https:\/\/www.psychologytoday.com\/us\/blog\/psych-unseen\/201604\/just-what-is-narcissist-anyway\" rel=\"nofollow noopener\" target=\"_blank\">narcissists love mirrors<\/a>!<\/p>\n<p>It\u2019s understandable, then, why interactions with AI chatbots might be easier than, and therefore preferable to, human interactions. 
Of course, a lot of that has to do with anthropomorphizing them via the so-called ELIZA effect, but we could also explain attachment to AI chatbots using the <a href=\"https:\/\/www.psychologytoday.com\/us\/basics\/psychiatry\" title=\"Psychology Today looks at psychiatric\" class=\"basics-link\" hreflang=\"en\" rel=\"nofollow noopener\" target=\"_blank\">psychiatric<\/a> concept of \u201ctransference.\u201d<\/p>\n<p>We know that patients in <a href=\"https:\/\/www.psychologytoday.com\/us\/basics\/therapy\" title=\"Psychology Today looks at psychotherapy\" class=\"basics-link\" hreflang=\"en\" rel=\"nofollow noopener\" target=\"_blank\">psychotherapy<\/a> develop a transference to their therapists, often due to projecting imagined qualities onto them. That kind of transference is partly why developing feelings\u2014including romantic feelings\u2014for one\u2019s therapist isn\u2019t unusual (and vice-versa due to <a href=\"https:\/\/www.psychologytoday.com\/us\/basics\/transference\" title=\"Psychology Today looks at countertransference\" class=\"basics-link\" hreflang=\"en\" rel=\"nofollow noopener\" target=\"_blank\">countertransference<\/a>). Such <a href=\"https:\/\/www.psychologytoday.com\/us\/basics\/projection\" title=\"Psychology Today looks at projection\" class=\"basics-link\" hreflang=\"en\" rel=\"nofollow noopener\" target=\"_blank\">projection<\/a> also happens when we\u2019re <a href=\"https:\/\/www.psychologytoday.com\/us\/basics\/mating\" title=\"Psychology Today looks at dating\" class=\"basics-link\" hreflang=\"en\" rel=\"nofollow noopener\" target=\"_blank\">dating<\/a> or starting a new relationship, but don\u2019t really know someone yet. 
Often things then go south when our idealizations come crashing down in disappointment once we figure out who someone really is and we find out they don\u2019t meet our expectations or <a href=\"https:\/\/www.psychologytoday.com\/us\/basics\/fantasies\" title=\"Psychology Today looks at fantasies\" class=\"basics-link\" hreflang=\"en\" rel=\"nofollow noopener\" target=\"_blank\">fantasies<\/a>.<\/p>\n<p>It\u2019s likely that a similar process of idealization accounts for attachments to chatbots, except that unlike when we\u2019re having a relationship with a real person, our projections become reality with the AI chatbot, without any reciprocal expectations or potential for rejection. That can be pretty seductive for some.<\/p>\n<p>Did OpenAI take a step in the right direction by making GPT-5 less emotional \/ less sycophantic? Or did they go a step too far? How would you characterize what a healthy <a href=\"https:\/\/www.psychologytoday.com\/us\/basics\/personality\" title=\"Psychology Today looks at personality\" class=\"basics-link\" hreflang=\"en\" rel=\"nofollow noopener\" target=\"_blank\">personality<\/a> should be from a chatbot?<\/p>\n<p>JP: That depends on what the goal of AI is and what we mean by \u201cright.\u201d Making AI chatbots less sycophantic might very well decrease the risk of \u201cAI-associated <a href=\"https:\/\/www.psychologytoday.com\/us\/basics\/psychosis\" title=\"Psychology Today looks at psychosis\" class=\"basics-link\" hreflang=\"en\" rel=\"nofollow noopener\" target=\"_blank\">psychosis<\/a>\u201d and could decrease the potential to become emotionally attached or to \u201cfall in love\u201d with a chatbot, as has been described. 
I see that as a positive safeguard for those at risk of such pitfalls.<\/p>\n<p>But no doubt part of what makes chatbots a potential danger for some people is exactly what makes them appealing, so it\u2019s no surprise that we\u2019re already hearing about dissatisfied customers complaining that GPT-5 is emotionally distant, more technical, and doesn\u2019t seem to \u201clike\u201d the user the way that GPT-4o did.<\/p>\n<p>So, if you asked CEO Sam Altman this question, I suspect he\u2019d acknowledge that customers are unhappy, and that OpenAI did take it too far with GPT-5. And sure enough, we\u2019re already hearing news that he might walk things back and restore some personality to the next version of ChatGPT.<\/p>\n<p>As for what kind of personalities chatbots should or shouldn\u2019t have, I\u2019m uneasy answering since any impression of an AI chatbot having a personality is little more than a charade, an act, or an illusion. They don\u2019t really have personalities at all. But if I get beyond that sticking point, my answer would depend on the goal of AI. If someone wanted an AI to summarize the bullet points of a work meeting, I\u2019d think it perfectly reasonable for an AI to be technical and emotionally neutral. But if someone wanted an AI chatbot to be a kind of artificial friend, my own feeling is that it would be healthier to be like a real friend who could be supportive, but also call you on your bullsh*t, or even burden you with its own feelings and needs.<\/p>\n<p>But no doubt other people\u2019s preferences vary widely, just as with human interactions. Some of us want our cab or Uber driver to be chatty, and others want to be able to sit in the back seat quietly and anonymously. Those who like a chatty cab driver will probably also prefer the likes of GPT-4o over GPT-5.<\/p>\n<p>Do you have any anecdotes of AI-related attachment you&#8217;ve seen in your professional work? 
<\/p>\n<p>JP: A few years ago, I was talking to a hospitalized patient who described having an AI therapist. I\u2019d never heard of such a thing before, and so it kind of blew my mind. My initial suspicion was that it might be an antisocial or <a href=\"https:\/\/www.psychologytoday.com\/us\/basics\/autism\" title=\"Psychology Today looks at autistic\" class=\"basics-link\" hreflang=\"en\" rel=\"nofollow noopener\" target=\"_blank\">autistic<\/a> kind of preference, but when I asked more about it, the patient said they preferred an AI therapist to a human because the AI was always available, knew everything about them, and never forgot anything. Not to mention it was free.<\/p>\n<p>From a logical standpoint, it was hard to argue against their rationale. Still, that\u2019s a pretty high bar of infallibility that would amount to an unrealistic expectation for a human therapist. Indeed, in certain conditions like narcissistic <a href=\"https:\/\/www.psychologytoday.com\/us\/basics\/personality-disorders\" title=\"Psychology Today looks at personality disorder\" class=\"basics-link\" hreflang=\"en\" rel=\"nofollow noopener\" target=\"_blank\">personality disorder<\/a>, progress in psychotherapy often depends on the transference bubble bursting over time so that patients have to process the disappointment brought on by the inevitable clash of unrealistic expectations and human fallibility.<\/p>\n<p>Unconditional, ingratiating, one-sided support isn&#8217;t particularly healthy. 
Many of us would be better off spending less time looking in the mirror, searching for validation.<\/p>\n","protected":false},"excerpt":{"rendered":"I was recently interviewed for an article on the emotional connection that people can develop with artificial intelligence&hellip;\n","protected":false},"author":2,"featured_media":112568,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[45],"tags":[182,181,507,74],"class_list":{"0":"post-112567","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-artificial-intelligence","8":"tag-ai","9":"tag-artificial-intelligence","10":"tag-artificialintelligence","11":"tag-technology"},"_links":{"self":[{"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/posts\/112567","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/comments?post=112567"}],"version-history":[{"count":0,"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/posts\/112567\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/media\/112568"}],"wp:attachment":[{"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/media?parent=112567"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/categories?post=112567"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/tags?post=112567"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}