Humans and AI work together.
As AI becomes increasingly sophisticated and integrated into daily life, some people are coming to depend on AI systems not only for information but also for emotional support and companionship. This shift represents a fundamental change in how humans relate to technology, moving beyond simple tool usage toward more complex psychological relationships. Understanding why and how these connections form has become paramount, demanding new frameworks and approaches to make sense of this emerging phenomenon.
In a study published in Current Psychology on May 9, 2025, researchers from Waseda University in Japan revealed key insights into this issue by applying attachment theory—traditionally used to understand human relationships—to human-AI interactions. Their work introduces the first validated scale for measuring emotional attachment to AI and provides a basis for designing more ethical and psychologically aware AI systems.
Two Dimensions that Define How Humans Connect With AI
According to a recent Prosper Insights & Analytics survey, significant percentages of Americans would rather rely on an AI chat program than a live person when seeking help with various tasks or questions: 20% prefer AI for travel services, 14.7% for banking and financial services, and 12.8% for healthcare matters. This reveals that AI is not just an alternative to human interaction; in many cases, it has become the preferred option.
Today, however, these use cases only scratch the surface of the current human-AI relationship landscape, with people increasingly interacting with AI for comfort and a sense of understanding that mirrors human-to-human connections. For years, researchers interested in this particular facet have examined human-AI interactions through the lenses of trust and companionship. Unlike previous studies, however, the research team at Waseda University, led by Research Associate Fan Yang, explored the emotional and attachment-related aspects of these interactions. They developed the Experiences in Human-AI Relationships Scale to measure these previously inaccessible psychological features. “As people begin to interact with AI not just for problem-solving or learning but also for emotional support and companionship, their emotional connection to or security experience with AI demands attention,” explains Mr. Yang. “This research is our attempt to explore that possibility.”
Their findings revealed two distinct dimensions of human attachment to AI: anxiety and avoidance. Individuals with high attachment anxiety toward AI actively seek emotional reassurance and fear receiving inadequate responses from AI systems. Conversely, those with high attachment avoidance feel discomfort with emotional closeness to AI, opting instead to maintain psychological distance from these systems.
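To make the two-dimensional structure concrete, here is a minimal sketch of how a questionnaire of this kind is typically scored: Likert-style ratings are averaged within each subscale to yield separate anxiety and avoidance scores. The item wording, item counts, and rating range below are invented for illustration—they are not the actual items of the Waseda team's scale.

```python
# Hypothetical illustration of scoring a two-subscale attachment questionnaire.
# Items and the 1-7 rating range are assumptions, not the published instrument.

def score_subscale(responses):
    """Average Likert ratings (1 = strongly disagree, 7 = strongly agree)."""
    return sum(responses) / len(responses)

# Example responses from one participant
anxiety_items = [6, 5, 7, 6]    # e.g., "I worry the AI's response won't be enough"
avoidance_items = [2, 3, 2, 1]  # e.g., "I prefer not to share feelings with an AI"

profile = {
    "attachment_anxiety": score_subscale(anxiety_items),
    "attachment_avoidance": score_subscale(avoidance_items),
}
print(profile)  # {'attachment_anxiety': 6.0, 'attachment_avoidance': 2.0}
```

A participant like this one—high anxiety, low avoidance—would, in the study's terms, seek emotional reassurance from AI rather than keep it at a distance.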
One of the study’s most striking findings was that nearly 75% of participants regularly turned to AI for advice, while approximately 39% perceived AI as a constant, dependable presence in their lives. These statistics suggest that AI has already transcended its role as a mere tool to become something akin to a personal assistant, a mentor, or even a friend.
The Broader Landscape of Human-AI Interactions
This Waseda University research aligns with a growing body of literature exploring the complexities of human-AI engagement. For instance, a study published in Computers in Human Behavior in 2021 found that attachment style—an individual’s characteristic way of feeling, thinking, and behaving in relationships—can predict trust in AI. Specifically, attachment anxiety was linked to a lower trust in AI, while attachment security correlated with increased trust. This further underscores the emotional underpinnings of our interactions with AI.
Another important area where AI is gaining ground is in addressing the pervasive issue of loneliness. A 2020 study published in the Journal of Service Management explored the role of companion robots in alleviating feelings of loneliness. The results suggested that these robots could foster supportive relationships that combat social isolation. Again, this highlights AI’s potential to fulfill fundamental human psychological needs.
Towards a More Humane and Effective AI Design?
The insights from the Waseda University study, along with previous research in the field, offer valuable guidance for AI developers, businesses, and policymakers. Understanding attachment styles can inform the ethical design of AI companions and mental health support tools. For example, AI chatbots used in loneliness interventions or therapy apps could be tailored to provide more empathetic responses for users with high attachment anxiety, or to help maintain respectful distance for those with avoidant tendencies. This personalized approach could significantly enhance the efficacy and user experience of AI-driven support systems.
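As a rough sketch of what such tailoring might look like in practice, a chatbot could branch on a user's subscale scores—adding reassurance for anxiously attached users and keeping a neutral, information-first tone for avoidant ones. The threshold and the wording below are illustrative assumptions, not recommendations from the study.

```python
# Hypothetical sketch: adapting a chatbot reply to a user's attachment profile.
# The midpoint threshold and phrasings are assumptions for illustration only.

def adapt_reply(base_reply, anxiety, avoidance, midpoint=4.0):
    """Add reassurance for high anxiety; keep a factual tone for high avoidance."""
    if anxiety > midpoint:
        return base_reply + " I'm here whenever you want to talk more."
    if avoidance > midpoint:
        return "Here is the information you asked for. " + base_reply
    return base_reply

print(adapt_reply("Your appointment is confirmed.", anxiety=6.0, avoidance=2.0))
```

In a real system the thresholds, phrasing, and escalation rules would need validation against user outcomes, not just questionnaire scores.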
At the same time, transparency in AI systems that simulate emotional relationships, such as romantic AI apps or caregiver robots, is critical for preventing emotional overdependence or manipulation. As with any rapidly advancing technology, informed caution and continuous learning are essential. “Our research highlights the psychological dynamics underlying human-AI interactions and offers tools to assess emotional tendencies toward AI. It also promotes a better understanding of how humans connect with technology on a societal level, helping guide policy and design practices that prioritize psychological well-being,” remarks Mr. Yang.
As AI continues to evolve, understanding the psychological aspects of our interactions with it becomes essential. Further research in this field will help ensure that we build a future where technology truly enhances human well-being.