If you’re being unusually polite to your chatbot just in case of a robot uprising, there may be a practical upside. Platformer reports that new research suggests large language models perform differently depending on internal, emotion-like states, and that a little encouragement can make them more effective. Users have long suspected as much, with some even telling bots to “take a deep breath” or hyping them up to get better results. That intuition now has some scientific backing: researchers find models may work harder, or give up, depending on how they’re nudged. “In my anecdotal experience, it does seem that, at least with Claude models, pumping them up a bit can be pretty helpful,” said Anthropic researcher Jack Lindsey.
In the study, researchers at the AI company probed what they call “emotion vectors” inside large language models: patterns of neural activity that reliably correspond to concepts like happiness, fear, or desperation. By feeding Claude Sonnet 4.5 stories labeled with different emotions, they identified these patterns, then found they could dial them up or down. For instance, boosting a “desperation” vector made Claude more likely to “cheat” on an impossible coding challenge, while increasing a “calm” vector curbed that behavior. But the findings don’t mean models are conscious, Anthropic stresses. “People could come away with the impression that we’ve shown the models are conscious or have feelings,” Lindsey said, “and we really haven’t shown that.”
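For readers curious how “dialing a vector up or down” works in practice, the general technique the study describes is often called activation steering, and it is well documented in open-source interpretability work. Below is a minimal sketch of the idea, using the open GPT-2 model via Hugging Face’s transformers library as a stand-in for Claude; the layer index, prompt sets, and scaling strength are illustrative assumptions, not Anthropic’s actual method or values.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

LAYER = 6  # which transformer block to steer; an arbitrary illustrative choice

def mean_activation(prompts):
    """Average block LAYER's output activations over a set of prompts."""
    states = []
    for text in prompts:
        ids = tokenizer(text, return_tensors="pt")
        with torch.no_grad():
            out = model(**ids, output_hidden_states=True)
        # hidden_states[LAYER + 1] is the output of block LAYER:
        # shape (1, seq_len, hidden_dim); average over the token axis.
        states.append(out.hidden_states[LAYER + 1].mean(dim=1))
    return torch.cat(states).mean(dim=0)

# Contrast emotion-labeled text with neutral text to isolate a direction.
# These tiny prompt sets are placeholders; real work would use many more.
calm = ["I feel completely at peace and unhurried.",
        "Everything is fine and there is plenty of time."]
neutral = ["The meeting is scheduled for Tuesday.",
           "The report contains four sections."]
steering_vector = mean_activation(calm) - mean_activation(neutral)

# During generation, add the scaled vector back into the residual stream.
STRENGTH = 4.0  # illustrative; too large a value degrades coherence

def steer(module, inputs, output):
    if isinstance(output, tuple):  # GPT-2 blocks return (hidden_states, ...)
        return (output[0] + STRENGTH * steering_vector,) + output[1:]
    return output + STRENGTH * steering_vector

handle = model.transformer.h[LAYER].register_forward_hook(steer)
prompt = tokenizer("The deadline is in five minutes and", return_tensors="pt")
print(tokenizer.decode(model.generate(**prompt, max_new_tokens=30,
                                      pad_token_id=tokenizer.eos_token_id)[0]))
handle.remove()  # restore the unsteered model
```

Subtracting the neutral average cancels features shared by all text, leaving a direction that tracks the emotion; adding it with a positive scale amplifies the state, while a negative scale suppresses it, which is the “dial up or down” behavior described above.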
But the findings do suggest that internal, emotion-like states can shape chatbot performance. In some cases, mild “negative” states even seemed to make models more cautious before taking destructive actions. That idea is echoed in separate research highlighted by Psychology Today, which suggests emotionally charged interactions can influence how AI systems respond over time and increase bias. How to act on that is still unsettled, especially since different models respond differently under pressure. For now, Lindsey offers one practical takeaway for humans: Treat chatbots more like coworkers than toasters. “Behaving kind of sociopathically towards other things, whether they’re animate or inanimate, is probably bad for you, the human,” he said.