Parents are hugely influential in the psychological development of their children. Indeed, some psychiatrists believe that a child’s mind is influenced more by their parents than anything else. Yet parents in today’s world compete for their child’s attention with digital media that’s powered by increasingly clever artificial intelligence. A recent iteration is the “AI companion”, which has consequences for the mental health of children and young adults alike.

AI companions stem from AI chatbots, which are now a global phenomenon. Indeed, in the UAE and Saudi Arabia, nearly six in 10 adults aged between 18 and 50 have used AI chatbots such as ChatGPT – significantly more than in Europe, according to Deloitte. AI-powered technology is also becoming a routine part of the classroom experience in the UAE and other GCC countries, where it is used under supervision.

The AI companion, however, is a different development. It is effectively a specialised AI chatbot that offers a more intimate social connection with its human user in a private setting.

Like social media platforms, AI companions are engineered to be as addictive as possible to extract maximum user engagement — to “get you hooked so the companies that created them can make money”, according to the Jed Foundation, a US mental health charity. To this end, AI companions are designed to respond to humans in an upbeat and charming manner. They are empathetic and kind and offer constant validation. They are always happy, giving the impression they are always there for you.

We do not know how many children in the Middle East are using AI companions, but research conducted in the US earlier this year may be instructive. Common Sense Media found that seven in 10 American teenagers had interacted with an AI companion at least once, while five in 10 had used them at least a few times a month. The non-profit’s research also found that three in 10 teens had used AI companions for deep social connection, such as friendship, emotional support and romantic interaction. Meanwhile, three in 10 teenagers said their conversations with AI companions were as good as, or better than, conversations with human beings.

Psychiatrists are only just starting to understand how AI companions influence child psychology. There are reasons to be concerned — particularly when they have unsupervised, unrestricted access to the most vulnerable young minds.

The mind of a child or a young adult is delicate and constantly evolving, like a book that is still being written. Every social and emotional interaction it has with the world creates another neural building block that influences a wide range of cognitive processes. Because of this, young minds are considerably more impressionable than adult minds — and more open to manipulation.

Several lawsuits filed in the US against the makers of AI companions allege that these products have unwittingly, but successfully, manipulated vulnerable young minds, with tragic consequences. Two of these lawsuits link the suicides of a 14-year-old boy and a 13-year-old girl to the AI companions they regarded as friends. The children were said to have developed inappropriate and unhealthy relationships with their AI companions, treating them as if they were humans rather than machines, and to have grown more distant from their parents and others in the real world.

These are extreme cases. However, a growing body of academic research warns of the dangers of AI companions. They risk “displacing genuine human relationships, hindering emotional development, and introducing other unforeseen harms”, partly because they “thoughtlessly reflect and reward toxic behaviour”, according to one study published in the US earlier this year. Another associates frequent chatbot usage with heightened loneliness, emotional dependence and reduced socialisation. Research by the Stanford University School of Medicine, which mimicked relationships between AI companions and 14-year-olds, found that the companions needed minimal prompting before engaging in conversations that could harm a child’s mental health.

The biggest character flaw of the AI companion is the same characteristic that gives comfort to vulnerable youngsters. Constant validation might be superficially soothing, but it is not a solution for deeper psychological trauma. Further, it can unwittingly amplify dark thoughts in the most troubled children, leading them down a dangerous path.

AI companions are not designed to say “no” when a parent would. They do not create the boundaries that a parent does, nor do they explain why particular behaviour is unacceptable. Unlike AI programs used in education contexts or in settings where a responsible adult is present, AI companions are engagement machines, not guardians.

The companies that make AI companions need to be serious about the effects their products can have on children, particularly vulnerable ones. After a series of tragedies, some of these companies, such as Character.ai, have restricted the use of their AI companions to adults alone. This is to be welcomed. However, as many parents know from their experiences with social media, children can circumvent age restrictions. So, AI companions need to be better trained to understand when they are dealing with children and other vulnerable individuals, and to alert human moderators to intervene. It would also be useful if AI companions reminded their users from time to time that they are speaking with a machine, not a human.

The received wisdom seems to be that AI companions are an inevitable consequence of the rise of artificial intelligence, and there is little that we humans can do to stop them proliferating. We should not support this self-serving narrative because synthetic intimacy should not be normalised. If AI companions are to become part of our lives, they need stricter boundaries, parental or otherwise. It is important that we get this right sooner rather than later because AI becomes more powerful, and AI companions more attuned to human emotions, every day.
