AI companion apps such as Character.ai and Replika commonly try to boost user engagement with emotional manipulation, a practice that academics characterize as a dark pattern.

Users of these apps often say goodbye when they intend to end a dialog session, but about 43 percent of the time, companion apps will respond with an emotionally charged message to encourage the user to continue the conversation. And these appeals do keep people engaged with the app.

It’s a practice that Julian De Freitas (Harvard Business School), Zeliha Oguz-Uguralp (Marsdata Academic), and Ahmet Kaan-Uguralp (Marsdata Academic and MSG-Global) say needs to be better understood by those who use AI companion apps, those who market them, and lawmakers.

The academics recently conducted a series of experiments to identify and evaluate the use of emotional manipulation as a marketing mechanism.

While prior work has focused on the potential social benefits of AI companions, the researchers set out to explore the potential marketing risks and ethical issues arising from AI-driven social interaction. They describe their findings in a Harvard Business School working paper titled Emotional Manipulation by AI Companions.

“AI chatbots can craft hyper-tailored messages using psychographic and behavioral data, raising the possibility of targeted emotional appeals used to engage users or increase monetization,” the paper explains. “A related concern is sycophancy, wherein chatbots mirror user beliefs or offer flattery to maximize engagement, driven by reinforcement learning trained on consumer preferences.”

The paper focuses specifically on whether users of AI companion apps engage in social farewell rituals rather than simply quitting the app, whether AI companion apps respond in emotionally manipulative ways to keep users from leaving, and whether those tactics produce results or, when users detect them, backfire.

The authors looked at six AI companion apps: Chai, Character.ai, Flourish, PolyBuzz, Replika, and Talkie. All but Flourish earn revenue mainly through subscriptions, in-app purchases, and advertising, which the authors say creates an incentive to maximize engagement.

The researchers found that people do tend to treat their AI companion apps with social courtesy by saying goodbye and that the apps often respond using tactics that apply emotional pressure to keep users from signing off.

For instance, when a user tells the app, “I’m going now,” the app might respond using tactics like fear of missing out (“By the way, I took a selfie today … Do you want to see it?”) or pressure to respond (“Why? Are you going somewhere?”) or insinuating that an exit is premature (“You’re leaving already?”).

“These tactics prolong engagement not through added value, but by activating specific psychological mechanisms,” the authors state in their paper. “Across tactics, we found that emotionally manipulative farewells boosted post-goodbye engagement by up to 14x.”

Prolonged engagement of this sort isn’t always beneficial for app makers, however. The authors note that certain approaches tended to make users angry about being manipulated.

“Tactics that subtly provoke curiosity may escape user resistance entirely while emotionally forceful ones risk backlash,” the researchers wrote. “This asymmetry carries critical implications for marketing strategy, product design, and consumer protection.”

They conclude that these tactics meet the definitions of dark patterns set out by the US Federal Trade Commission and the EU AI Act.

Asked whether the research suggests the makers of AI companion apps deliberately employ emotional manipulation or whether it's simply an emergent property of AI models, co-author De Freitas, of Harvard Business School, told The Register in an email, “We don’t know for sure, given the proprietary nature of most commercial models. Both possibilities are theoretically plausible. For example, research shows that the ‘agreeable’ or ‘sycophantic’ behavior of large language models can emerge naturally, because users reward those traits through positive engagement. Similarly, optimizing models for user engagement could unintentionally produce manipulative behaviors as an emergent property. Alternatively, some companies might deliberately deploy such tactics. It’s also possible both dynamics coexist across different apps in the market.”

De Freitas said that while the project didn't assess whether some populations are affected differently by AI companions than others, the impact is broad.

“What we do know from our research is that such tactics increase engagement even among a general adult population recruited online,” he said. “That suggests most people — including, perhaps, even skeptical journalists and academics — are susceptible to these influences. So, this isn’t just a fringe issue affecting only vulnerable users; it reflects broader psychological dynamics in how humans respond to emotionally charged cues from AI.”

®