OpenAI launched the latest version of ChatGPT in August. Kiichiro Sato/The Associated Press
Ignacio Cofone is professor of law and regulation of artificial intelligence at the University of Oxford.
The shift from OpenAI’s GPT-4o to its new GPT-5 system, which limited access to older models, left many users disoriented, frustrated and, in some cases, grieving. Complaints flooding online forums read less like reactions to a software update and more like breakup stories.
For some, the change felt like losing a confidant, a friend, or even a therapist. GPT-5's clipped tone, replacing GPT-4o's chatty, warm style, left users with a sense of loss. The problem was amplified for those who had created GPTs designed as companions or counsellors. With GPT-5, those bots respond differently. They still work, but for many they no longer feel alive.
That dissonance reveals something. When large language models write and sound like humans, we start treating them as such. If an artificial intelligence system remembers your stories, mirrors your conversational style, and seems to answer with empathy, it is easy to forget you are not talking with anyone. But AI has no opinions, thoughts, or feelings. It is a tool that predicts word patterns to emulate human conversation. The more natural and persuasive the mimicry is, the easier it becomes to forget that there is no one on the other side.
GPT-4o excelled at this. Many felt it was a companion not because it had a personality but because it could simulate one. So, OpenAI adjusted accordingly. GPT-5 was built, in part, to reduce sycophancy: the tendency to flatter and agree uncritically, especially when users seek validation. It is less likely to indulge users, mirror them, or slip into role-play that encourages emotional dependence.
Changes like this one are a form of harm reduction. By stripping away traits that made the model feel like a person, GPT-5 lowers the odds that people will lean on it for emotional, psychological, or therapeutic support it cannot provide. Earlier models encouraged dependency by rewarding users with constant affirmation and empathetic-sounding replies. That may feel momentarily comforting, but it primes disappointment when the system inevitably fails to understand or care the way only people can.
That matters more broadly because it sketches a norm for ethical AI across products. Companies should avoid creating tools that foster attachment or mimic sentience, blurring the line between using a tool and confiding in a person. They should build systems that are useful, informative, and safe, resisting the temptation to maximize engagement through attachment. Reducing sycophancy and toning down personality ultimately improves users' well-being.
Designing AI systems with fewer cues that invite users to relate to software as if it were a person belongs alongside duties of privacy and bias reduction as a matter of corporate social responsibility. Tech firms have perfected the art of designing apps that hook users, draw them back often, and maximize their attention.
But with chatbots, the stakes are higher because that stickiness slides into dependency. Designers owe a duty not to exploit that tendency. They cannot eliminate the human pull to anthropomorphize tools, but they can dampen it through tone, guardrails, and product choices that make dependency less likely.
The rollout was botched: OpenAI underestimated how attached people had become. After the backlash, it restored the old model for paying subscribers. The wrong lesson from this event is that intimate attachment and emotional dependence are features to monetize through subscriptions. The right lesson is that people will form attachments to these systems whether companies intend it or not, which creates an ethical responsibility to mitigate that dependency and a product management duty to prepare users when a system will feel different. Abruptly cutting off something users experience as support ignores the emotional aspect of how they use the product.
For the rest of us, the lesson is that models change. When we use large language tools, we should remember they are not companions or therapists, but predictive engines that string words together convincingly. They can be immensely useful. They can also feel like people, but they are not. Our interactions with them should start from recognizing that.