AI can never be the solution to loneliness. Photo: freepik.com

“Senior Journalist” undervalues my position at The Catholic Weekly. “Jurassic Journalist” might be more appropriate. I confess that my self-esteem is daily threatened by my colleagues’ adept use of new-fangled social media. Readers might find this hard to believe, but I don’t even have a TikTok account. And I haven’t opened my Facebook page in a couple of years.

My ham-fisted approach to phones was on display recently as I typed the word “ship”. WhatsApp somehow autocorrected this to a different monosyllable beginning with “shi” and launched it into the ether.

What happened next was either comforting or confronting – I’m still trying to decide which.

“It seems like something’s got you feeling a bit stressed or concerned!” said WhatsApp’s AI assistant immediately. “Want to talk about what’s on your mind? I’m all ears! 😊”.

In point of fact, the smiley face had no ears, but its ingratiating empathy was encouraging, so I continued the conversation.

“I’m feeling inferior because my colleagues know heaps more than me about social media,” I painfully forefinger-tapped onto the phone’s keyboard. (Dinosaurs, as you will remember, do not have opposable thumbs.)

“Feeling like you’re not as knowledgeable as your colleagues about social media can be frustrating,” it (or she or he?) responded. “Here are some thoughts that might help.”

Then the conversation was diverted into boring stuff about collaborative workspaces. I was looking forward to a bit of psychoanalytic counselling from my bot-buddy. Alas!

Apparently, a lot of people share my neediness and insecurity.

Some young people have fallen in love with chatbots; some have even taken their own lives. The parents of Adam Raine, a 16-year-old from Southern California, are suing OpenAI, the developer of ChatGPT, alleging that the chatbot encouraged their son to end his life. It is the first known wrongful-death lawsuit against the company.

Far more common is the use of AI to avoid human interaction.

A recent article in Harvard Business Review surveyed how people are actually using AI. Although usage is changing rapidly, it reported that in 2024 the top three uses were “therapy/companionship”, “organising my life”, and “finding purpose”.

HBR observed that AI-based therapy had three clear advantages over a face-to-face relationship with a human being: “It’s available 24/7, it’s relatively inexpensive (even free to use in some cases), and it comes without the prospect of judgment from another human being.”

AI experts had expected that it would be most useful in relatively technical tasks – writing computer code, seeking medical advice, simplifying legalese, and so on. But it turns out that some people are turning to AI products because they aren’t interacting with other human beings. In other words, they’re isolated and they’re lonely. They’re afraid of people and they lack friends.

The World Health Organization has declared loneliness a “global public health concern”. It can be as toxic as smoking 15 cigarettes a day. The use of AI for companionship and therapy is obviously a symptom of the pervasive loneliness of life in the developed world.

And as family sizes shrink and more and more people grow up in one-child homes, the problem is bound to become worse.

AI can never be the solution to loneliness. Only in a hyper-individualistic, atomised society which worships autonomy could that ever seem plausible. As persons, we can only find fulfilment in a community where we love and take responsibility for other persons.

Perhaps this is why Pope Leo XIV has made responding to the challenge of AI a central theme of his pontificate. He may not be worrying so much about job losses, fake news, and robot killing machines as about AI’s capacity for degrading what it means to be human – especially for young people.

Perhaps Adam Raine would still be alive today if he had had a friend to talk to instead of a ChatGPT bot.