{"id":376483,"date":"2025-12-28T17:35:08","date_gmt":"2025-12-28T17:35:08","guid":{"rendered":"https:\/\/www.newsbeep.com\/au\/376483\/"},"modified":"2025-12-28T17:35:08","modified_gmt":"2025-12-28T17:35:08","slug":"could-ai-relationships-actually-be-good-for-us-artificial-intelligence-ai","status":"publish","type":"post","link":"https:\/\/www.newsbeep.com\/au\/376483\/","title":{"rendered":"Could AI relationships actually be good for us? | Artificial intelligence (AI)"},"content":{"rendered":"<p class=\"dcr-130mj7b\">There is much anxiety these days about the dangers of human-AI relationships. <a href=\"https:\/\/www.theguardian.com\/technology\/2024\/oct\/23\/character-ai-chatbot-sewell-setzer-death\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">Reports of suicide<\/a> and self-harm attributable to interactions with chatbots have understandably made headlines. The phrase \u201c<a href=\"https:\/\/www.theguardian.com\/commentisfree\/2025\/oct\/28\/ai-psychosis-chatgpt-openai-sam-altman\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">AI psychosis<\/a>\u201d has been used to describe the plight of <a href=\"https:\/\/www.theguardian.com\/science\/audio\/2025\/aug\/28\/ai-psychosis-could-chatbots-fuel-delusional-thinking-podcast\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">people experiencing delusions<\/a>, paranoia or dissociation after talking to large language models (LLMs). 
Our collective anxiety has been compounded by studies showing that young people are increasingly embracing the idea of AI relationships; half of teens chat with an\u00a0AI\u00a0companion at least a few times a month, with one in three finding conversations with AI \u201c<a href=\"https:\/\/www.commonsensemedia.org\/press-releases\/nearly-3-in-4-teens-have-used-ai-companions-new-national-survey-finds\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">to be as satisfying or more satisfying than those with real\u2011life\u00a0friends<\/a>\u201d.<\/p>\n<p class=\"dcr-130mj7b\">But we need to pump the brakes on the panic. The\u00a0dangers are real, but so too are the potential benefits. In fact, there\u2019s an argument to be made that \u2013 depending on what future scientific research reveals\u00a0\u2013\u00a0AI\u00a0relationships could actually be a boon for humanity.<\/p>\n<p class=\"dcr-130mj7b\">Consider how ubiquitous nonhuman relationships have always been for our species. We have a long history\u00a0of engaging in healthy interactions with nonhumans, whether they be pets, stuffed animals, beloved objects or machines \u2013 think of the person in your life who is fully obsessed with their car, to the point of naming it. In the case of pets, these are real relationships insofar as our cats and dogs understand that they are in a relationship with us. But the one\u2011sided, parasocial relationships we have with stuffed animals\u00a0or cars happen without those things knowing that we exist. Only in the rarest of cases do these relationships devolve into something pathological. Parasociality is, for the most part, normal and healthy.<\/p>\n<p class=\"dcr-130mj7b\">And yet, there is something unsettling about AI\u00a0relationships. Because they are fluent language\u00a0users, LLMs generate the uncanny feeling that they have human-like thoughts, feelings and intentions. 
They also generate sycophantic responses that reinforce our points of view, rarely challenging our thinking. This combination can easily lead people down a path of delusion. This is not something that happens when we interact with cats, dogs or inanimate objects. But the question remains: even in cases where people are unable to see through the illusion that AIs are real people who actually care about us, is that always a\u00a0problem?<\/p>\n<p class=\"dcr-130mj7b\">Consider loneliness: <a href=\"https:\/\/www.who.int\/news\/item\/30-06-2025-social-connection-linked-to-improved-heath-and-reduced-risk-of-early-death\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">one in six people on this planet experience<\/a> it, and it\u2019s associated with a <a href=\"https:\/\/www.hhs.gov\/sites\/default\/files\/surgeon-general-social-connection-advisory.pdf\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">26% increase in premature death<\/a> \u2013 the equivalent of smoking 15\u00a0cigarettes a day. <a href=\"https:\/\/www.mdpi.com\/2227-9032\/13\/5\/446\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">Research is emerging<\/a> that suggests AI companions are effective at reducing feelings of loneliness \u2013 and <a href=\"https:\/\/academic.oup.com\/jcr\/advance-article-abstract\/doi\/10.1093\/jcr\/ucaf040\/8173802?login=false\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">not just by functioning as a form of distraction<\/a>, but as a result of the parasocial relationship\u00a0itself. For many people, an AI chatbot is the only friendship option available to them, however hollow it might seem. 
As the journalist Sangita Lal <a href=\"https:\/\/www.youtube.com\/watch?v=KQ1iSABbgLg\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">recently explained<\/a> in a report on those turning to AI for companionship, we should not be so quick to judge. \u201cIf\u00a0you don\u2019t understand why subscribers want and seek and need this connection,\u201d said Lal, \u201cyou\u2019re lucky enough to not have experienced loneliness.\u201d<\/p>\n<p class=\"dcr-130mj7b\">To be fair, there is an argument to be made that the rise of new tech and social media has itself played a role in driving the loneliness epidemic. That\u2019s why Mark Zuckerberg <a href=\"https:\/\/www.theguardian.com\/commentisfree\/2025\/may\/15\/mark-zuckerberg-loneliness-epidemic-ai-friends\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">got flak<\/a> for his glowing endorsement of AI as a solution to a problem he might be partly responsible for creating. But if AI companionship genuinely helps, it cannot be dismissed out of hand.<\/p>\n<p class=\"dcr-130mj7b\">There\u2019s also research to show that AI can be used as an effective psychotherapy tool. <a href=\"https:\/\/pmc.ncbi.nlm.nih.gov\/articles\/PMC11871827\/\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">In one study<\/a>, patients who chatted with an AI-powered therapy chatbot showed a 30% reduction in anxiety symptoms. That is not as effective as human therapists, who achieved a 45% reduction, but it is still better than nothing. This utilitarian argument is worth considering; there are millions of people who are, for whatever reason, unable to access a therapist. And in those cases, turning to an AI is probably preferable to not seeking any help at all.<\/p>\n<p class=\"dcr-130mj7b\">But one study isn\u2019t proof of anything. And there\u2019s the rub. We are at the early stages of research into the potential benefits or harms of AI companionship. 
It\u2019s easy to focus on the handful of studies that support our preconceived notions about the dangers or benefits of this technology.<\/p>\n<p class=\"dcr-130mj7b\">It\u2019s in this research vacuum that the true dangers of AI are revealed. Most of the entities deploying AI companions are for-profit companies. And if there\u2019s one thing we know about for-profit companies, it\u2019s that they are keen to avoid regulations and suppress evidence that could hurt their bottom line. They are incentivised to downplay risks, cherry-pick evidence and tout only\u00a0benefits.<\/p>\n<p class=\"dcr-130mj7b\">The emergence of AI is not unlike the discovery of the analgesic properties of opium; if harnessed by responsible parties with the goal of relieving pain and suffering, both AI and opioids can be legitimate tools for healing. But if bad actors exploit their addictive properties to enrich themselves, the result is either dependency or death.<\/p>\n<p class=\"dcr-130mj7b\">I remain hopeful that there is a place for AI companionship. But only if it\u2019s backed by robust science, and deployed by organisations that exist for the public good. AIs must avoid the sycophancy problem that leads vulnerable people to delusion. This can only be achieved if they are explicitly trained to do so, even if it makes them less attractive as a potential companion \u2013 a notion that is anathema to companies that want you to pay a monthly subscription, without which you lose access to your \u201cfriend\u201d. They must also be designed to help the user develop the social skills they need to engage with actual humans in the real\u00a0world.<\/p>\n<p class=\"dcr-130mj7b\">The ultimate goal of AI companions should be to make themselves obsolete. 
No matter how useful they might be in plugging the gaps in therapy access or alleviating loneliness, it will always be better to talk to a real human.<\/p>\n<p class=\"dcr-130mj7b\"> Justin Gregg is a biologist and author of Humanish (Oneworld).<\/p>\n<p>Further reading<\/p>\n<p class=\"dcr-130mj7b\"><a href=\"https:\/\/guardianbookshop.com\/code-dependent-9781529097306\/?utm_source=editoriallink&amp;utm_medium=merch&amp;utm_campaign=article\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">Code Dependent: Living in the Shadow of AI<\/a> by Madhumita Murgia (Picador, \u00a320)<\/p>\n<p class=\"dcr-130mj7b\"><a href=\"https:\/\/guardianbookshop.com\/the-coming-wave-9781529923834\/?utm_source=editoriallink&amp;utm_medium=merch&amp;utm_campaign=article\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">The Coming Wave: AI, Power and Our Future<\/a> by Mustafa Suleyman (Vintage, \u00a310.99)<\/p>\n<p class=\"dcr-130mj7b\"><a href=\"https:\/\/guardianbookshop.com\/supremacy-9781035038244\/?utm_source=editoriallink&amp;utm_medium=merch&amp;utm_campaign=article\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">Supremacy: AI, ChatGPT and the Race That Will Change the World<\/a> by Parmy Olson (Macmillan, \u00a310.99)<\/p>\n","protected":false},"excerpt":{"rendered":"There is much anxiety these days about the dangers of human-AI relationships. 
Reports of suicide and self-harm attributable&hellip;\n","protected":false},"author":2,"featured_media":376484,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[20],"tags":[256,254,255,64,63,105],"class_list":{"0":"post-376483","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-artificial-intelligence","8":"tag-ai","9":"tag-artificial-intelligence","10":"tag-artificialintelligence","11":"tag-au","12":"tag-australia","13":"tag-technology"},"_links":{"self":[{"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/posts\/376483","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/comments?post=376483"}],"version-history":[{"count":0,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/posts\/376483\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/media\/376484"}],"wp:attachment":[{"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/media?parent=376483"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/categories?post=376483"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/tags?post=376483"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}