Earlier this year, an OpenAI study reached a troubling conclusion: Heavy ChatGPT users are more likely to report loneliness than non-users. The correlation between usage and feelings of social disconnection was strongest among people who leaned on the large language model for companionship and emotional support. The findings echo warnings from scholars, including Sherry Turkle of MIT, that “relationships” with machines often erode our capacity for the messy, demanding work of real connection.

Yet, in Silicon Valley, the exact opposite message is taking hold. Amid a slew of new AI-driven mental health apps and startups, tech titans like Mark Zuckerberg argue that chatbots will soon solve the crisis of disconnection. The Meta chief recently lamented that the “average American has fewer than three friends,” and then claimed that “people are going to want a system that knows them well and understands them in the way that their feed algorithms do,” pitching AI companions and therapists as the ultimate remedy.

Which is it?

Will AI decimate human connection, or will it help rebuild it?

The answer, of course, depends on the choices we make both as individuals and as a society. But one thing now seems certain: AI will radically reshape what it means to belong as a human being.

Both sides are right that a real problem already exists. Research shows that in-person human interaction has dropped by roughly 45 percent in recent decades. The challenge isn’t just loneliness. It’s declining trust, social cohesion, and shared purpose, and even a lost sense of connection to the physical environments where we live. Cumulatively, I’ve come to understand this broader issue as a deficit of “belonging.” The US now ranks last among G7 countries in trust in public institutions. New research shows our connection to nature has fallen by 60 percent over the past two centuries. Anxiety and depression, along with political polarization and pessimism, are on the rise.

Still, the rise of ubiquitous AI offers glimmers of hope. A recent peer-reviewed study found clinicians preferred ChatGPT’s answers to patient questions for quality and empathy, suggesting a role for AI in triage and coaching. A growing range of therapy-oriented apps and tools offer low-cost, always-on mental health support. AI tutors show promise for freeing up teachers to emphasize the human work of mentoring and care. And AI systems could serve as civic tools that summarize public comments in debates, translate government hearings in real time, and map out areas of agreement between conflicting groups.

But healthy skepticism is warranted. Julianne Holt-Lunstad of Brigham Young University and other scholars have shown how face-to-face connection bolsters emotional well-being and reduces the risk of health problems like cardiovascular disease. Marco Iacoboni, a neuroscientist at UCLA, has explored the indispensable role that “mirror neurons,” brain cells activated during in-person interaction, play in empathy and emotional understanding.

To understand the implications of AI for human belonging, we also need to look beyond its effects on literal human interaction. Start with work and purpose. New estimates suggest widespread AI adoption could save US companies hundreds of billions of dollars a year, largely by reducing labour costs. While we need to explore strategies like universal basic income to cope, we also need to talk about the less obvious implications of mass employment displacement, including what happens to our sense of mission and social solidarity when swaths of purposeful work are automated away.

Next, consider our connection to nature. People are now rightly scrutinizing the expansion of data centers for its effects on carbon emissions, water, land use, and grid resilience. Less discussed is how increasingly engaging AI systems push total time online upward, displacing chance encounters in the neighbourhood, time outdoors, and our overall sense of connection to nature.

Finally, consider what AI means for our experience of belonging in community and society. Disinformation and deepfakes are already hijacking civic deliberation; hyper-personalized media and feeds narrow our exposure to new views and ideas; black-box decisions in hiring, lending, housing, and parole can further corrode trust in public institutions.

How do we take a stand for human connection and belonging before we enter the age of artificial general intelligence and the stakes get even higher? We need to think about the world we want—and a set of principles for realizing it.

One idea is to use AI to augment human relationships, not replace them. That means designing systems that hand us back to one another and to our communities. Mental health tools, for instance, should offer a handoff to a human counsellor, peer group, or hotline, especially when there’s risk of dangerous behaviour. Similarly, we should strive to make offline life the default. Product teams can add gentle “step-outside” nudges after heavy use, and public-private partnerships can turn generic notifications into local invitations to park events, library story hours, and farmers’ markets, so that screens point people back to real physical places.

We also need to make information verifiable and systems explainable. That means content-provenance standards so people can see when media is synthetic. It also means plain-language explanations, and a right to appeal to a human, whenever an algorithm touches a consequential decision like hiring or mortgage lending.

As for the economic transition, we should use the savings AI generates to buy back human face time with patients, customers, and students. Those savings should also fund rapid, paid retraining tied to real jobs in sectors that hinge on presence and trust.

Overall, we need to measure what matters and then invest in it. Wherever AI affects schools, health, housing, or justice, we should require a short “belonging impact review” that looks at factors like trust, agency, in-person participation, and time outdoors. We should treat social connection and contact with nature as vital public goods, and fund the infrastructure that enables them.

All this might seem improbable, given the manic race to build bigger and smarter LLMs and integrate them everywhere. But there’s already massive political will to deal with these problems. Across the political spectrum, people are concerned about the impacts of AI on employment and social connections.

Without a lot of careful planning and effort, AI will likely make us lonelier, more distracted, and more estranged. Mark Zuckerberg’s vision of chatbots replacing human connection is naïve at best. But AI can still be a net-positive for human belonging—if we use it consciously to draw closer to what matters.