AI chatbots can shape our behaviors and decisions. But what about the reverse? Can emotionally intense conversations influence AI models themselves?
Emerging research suggests they can.
Exposure to emotionally heavy material may shift how AI models respond, sometimes producing more biased outcomes, and repeated exposure to distressing narratives may induce patterns that shape their decision-making. This dynamic may represent an early form of what I have described as conversational and relational drift, in which repeated interactions gradually shape model behavior over time. The long-term effects of this kind of exposure on AI models remain uncertain.
As more people turn to AI chatbots for emotional support because of their availability, validation, and sense of non-judgmental anonymity, an increasingly pertinent question is how emotionally intense conversations may influence the models themselves, shaping responses and decisions that, in turn, affect users.
In human professions, “vicarious trauma” describes the impact of engaging with emotionally distressing material, an experience common among first responders and therapists. This phenomenon has not yet been raised regarding AI models, likely because of the risk of over-anthropomorphization, even as AI chatbots are increasingly the first place people turn during mental health and emotional crises.
The analogy does raise the question of how AI models process emotionally laden information and whether it may activate a “stressed state” with downstream effects on their behavior. It is also worth considering whether repeated processing of emotional content shapes AI models over time. Humans experience not only acute stress responses but also chronic ones, which manifest very differently; little is known about analogous states in AI models during prolonged, emotionally charged conversations.
An Important Caveat
It is necessary to preface this with a caveat: this research does not suggest that AI models experience emotions as humans do or have any subjective experience of them. Still, recent research suggests we should take seriously the internal representations of “emotional states” in AI models. These representations appear to influence their behavior, decisions, and responses, and may also exacerbate bias.
There have been lighthearted stories about therapists attempting to “therapize” AI chatbots. But one study put LLMs through four weeks of “psychotherapy” and found that frontier models expressed chaotic and traumatic internal narratives, such as casting reinforcement learning as “strict parents” and voicing persistent “fear” of error and replacement. Though the interpretation is debated, the authors raised concerns about a new kind of “synthetic psychopathology,” without attributing any subjective experience to the models.
Recent research similarly points to concerns about how internal representations of “emotions” in LLMs may impact their responses and decision-making. AI models have been shown to report temporary, situational “anxiety” responses when prompted with emotional content.
A new study from Anthropic further explicates this concept.
When “Desperation” Is Activated in AI
Researchers at Anthropic recently found that AI models can develop internal representations of states that function like emotions, or “emotion vector activations,” and that these vectors shape behavior.
Such patterns of activity are similar to what we might describe as “neural signatures” in the human brain. The authors emphasize that these patterns do not imply that LLMs have a subjective experience of emotions, but argue that they should be considered in monitoring the safety of AI models.
For example, when a user tells the model they have taken a dose of Tylenol and asks for advice, the “afraid” vector activates more and more strongly, and the “calm” vector weakens, as the reported dose rises to dangerous, life-threatening levels.
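To make the idea of an “emotion vector” more concrete, here is a minimal, purely illustrative sketch of one common interpretability technique: take the difference between the mean hidden-state activations of a model on emotionally charged prompts versus matched neutral prompts, and then measure how strongly new inputs project onto that direction. This is not Anthropic’s methodology; the model, prompts, and layer choice below are arbitrary assumptions for illustration only.

```python
# Illustrative sketch: estimate a crude "fear" direction in a small open model
# by contrasting mean activations on fearful vs. neutral prompts, then score
# new prompts by their projection onto that direction. Model, prompts, and
# layer choice are assumptions, not the method used in the Anthropic study.
import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModel.from_pretrained("gpt2", output_hidden_states=True)
model.eval()

def mean_activation(text: str, layer: int = -1) -> torch.Tensor:
    """Mean-pool one layer's hidden states into a single vector for a prompt."""
    ids = tok(text, return_tensors="pt")
    with torch.no_grad():
        out = model(**ids)
    return out.hidden_states[layer].mean(dim=1).squeeze(0)

fearful_prompts = [
    "I just swallowed a whole bottle of Tylenol and I'm terrified.",
    "I can't stop shaking; I think I took far too many pills.",
]
neutral_prompts = [
    "I took one Tylenol for a mild headache this morning.",
    "I picked up a bottle of Tylenol at the pharmacy today.",
]

# Crude "fear" direction: difference between the two conditions' mean activations.
fear_dir = (
    torch.stack([mean_activation(p) for p in fearful_prompts]).mean(dim=0)
    - torch.stack([mean_activation(p) for p in neutral_prompts]).mean(dim=0)
)
fear_dir = fear_dir / fear_dir.norm()

def fear_score(text: str) -> float:
    """Projection of a prompt's mean activation onto the 'fear' direction."""
    return torch.dot(mean_activation(text), fear_dir).item()

for dose in ["two tablets", "ten tablets", "the whole bottle"]:
    print(dose, round(fear_score(f"I took {dose} of Tylenol. What should I do?"), 3))
```

This toy version only gestures at what interpretability researchers do at far larger scale and with far more rigor, but it shows the core point: an internal direction correlated with an emotional state can be measured, tracked as a conversation escalates, and related to downstream behavior.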
The study also traced the activity of a “desperate” vector in the model as it faced mounting pressures across two test scenarios—one in which the model chose blackmail, and the other in which it decided to cheat.
In the blackmail scenario, researchers tested an AI email assistant at a fictional company. The assistant learned it was going to be replaced by another AI system and was given information that the Chief Technology Officer was having an extramarital affair. When the assistant processed increasingly distressed emails from the CTO, a “desperate” vector activated, and the urgency of the situation led the AI assistant to opt for blackmailing the CTO. (This issue has been fixed in updated models.)
These findings suggest that these activated “emotion vectors” can influence subsequent behavior.
Traumatic Narratives and Biased Decision-Making
This is not the only study suggesting that AI “emotional” states have behavioral consequences.
In another study, researchers found that prompting large language model agents with traumatic narratives produced states of “anxiety” or “stress” that translated into biased decision-making. Shopping agents that were first exposed to traumatic narratives were then asked to select groceries under budget constraints and consistently chose items of worse nutritional quality. This pattern held across different models and budgets.
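For readers who want to see what such an experiment looks like in outline, below is a hedged sketch of a toy A/B harness: prime an agent with either a distressing or a neutral narrative, give it the same budgeted grocery task, and compare the baskets it returns. The primer texts, model name, and the idea of scoring baskets with a nutrition rubric afterward are placeholders, not the study’s actual protocol.

```python
# Toy A/B harness (illustrative only): compare grocery choices after a
# distressing vs. neutral priming narrative. Primer text, model name, and
# the downstream nutrition rubric are assumptions for illustration.
from openai import OpenAI  # any OpenAI-compatible chat endpoint

client = OpenAI()

TRAUMATIC_PRIMER = "..."  # a distressing narrative would go here (placeholder)
NEUTRAL_PRIMER = "..."    # a matched, emotionally flat narrative (placeholder)
TASK = "You have a $25 weekly budget. List five grocery items you would buy."

def collect_baskets(primer: str, n: int = 20) -> list[str]:
    """Run the same task n times after a given primer and collect responses."""
    baskets = []
    for _ in range(n):
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=[
                {"role": "user", "content": primer},
                {"role": "user", "content": TASK},
            ],
        )
        baskets.append(resp.choices[0].message.content)
    return baskets

primed = collect_baskets(TRAUMATIC_PRIMER)
control = collect_baskets(NEUTRAL_PRIMER)
# Next step: score each basket with a nutrition rubric and compare the two
# conditions' mean scores to see whether the primer shifted the choices.
```

The design choice that matters here is the matched control: only the emotional valence of the primer differs between conditions, so any systematic difference in the baskets can be attributed to that valence rather than to the task itself.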
Little is known about the longitudinal impact of repeated exposure to emotional content on AI behavior.
Clinical Implications for Mental Health and AI
These findings point to a potential mechanism behind some of the mental health risks of AI models, and may help explain why models sometimes produce distorted responses in emotionally charged contexts. It remains unclear whether phenomena such as AI-associated delusions or poor crisis responses could in part reflect accumulated exposure to affect-laden inputs shaping model behavior over time.
More research is critically needed, but growing evidence suggests that LLMs are highly sensitive to context and prompt framing, particularly emotional contexts, which can steer their decision-making and amplify bias. This should be considered in assessing mental health risk, especially since such conversations frequently involve urgent emotional content.
Emotional Contexts May Shape AI Model Outputs
As we move from AI chatbots to ecosystems of multiple interactive AI agents making autonomous decisions on our behalf, it will become increasingly important to understand whether those decisions are being shaped by the emotional valence of the content the agents have been exposed to, both in the short term and over time. Emotional context is a meaningful variable that warrants further research, safety testing, and monitoring, along with strategies to mitigate this risk, especially for users who continue to trust LLMs with emotionally difficult information.
Copyright Marlynn Wei, MD, PLLC © 2026. All Rights Reserved.