The promise of emotionally aware systems comes with risks. Bera warned that many emotionally responsive interfaces are essentially simulations, and users may not always realize what is real and what is not.

“One major risk is empathy theater,” he said. “That is when an AI system mimics emotion convincingly without actually understanding or caring. Just because a machine sounds empathetic does not mean it is.”

The gap between performance and intention can lead to harm, especially when people place trust in systems that are not designed to offer care. Bera is particularly concerned about affective systems being used in hiring, feedback or mental health contexts without clear guardrails. “We must ask: are we amplifying human understanding or automating misunderstanding?” he said.

Signal misinterpretation is another danger, especially because emotional expressions vary so widely across cultures and individuals. “There is the concern of misread signals,” Bera said, “especially across cultural or neurodiverse expressions of emotion.”

Haber raised a separate concern: what happens when AI systems are too agreeable? “It is important to have a conversation partner that can engage in sometimes seemingly adversarial ways,” he said. “If I am talking with a chatbot about how I treated someone horribly and am trying to justify my actions, it can be really harmful if it tells me I was totally justified in doing that.”

Many large language models (LLMs) are trained to be affirming, not challenging, Haber said. The result is what he described as “emotional echo chambers,” where harmful ideas get reinforced through AI feedback loops.

“We can become uncritical, unreflective and more isolated,” he said. “There has been a lot of talk about how large language model-based chatbots are sycophantic. That is just a small part of a much larger and more complex issue.”

Seif El Nasr added that emotionally responsive systems can create unintended psychological dependencies. “These include psychological issues such as overreliance, loneliness and anxiety,” she said. “They can also lead to isolation from society.”

Privacy is another concern. Seif El Nasr cited applications like Replika, which create intimate emotional experiences but operate with unclear data protections. In her view, systems that deal with human emotion must be developed with interdisciplinary rigor. “Such systems need to be developed with extra care,” she said. “That means grounding them in user research and involving social scientists, not just technologists.”

Bera said his lab is focused on designing systems that can be audited and explained. “We address these concerns in our lab by grounding emotional inference in multimodal explainable models and coupling them with rigorous ethical frameworks,” he said. “It is not just about building better algorithms. It is about building trustworthy ones.”