I have been blogging about artificial intelligence’s (AI’s) potential impact on intimacy for years. When I publish these posts, I risk being labeled an AI doomer, and I’m aware of that label every time I lecture about AI. “AI doomer” is a dismissive term for people concerned about the potential negative consequences of AI. Here’s what bothers me most about that criticism: not that it hurts my feelings (though it does; it’s insulting), but the impact it has on us all.

The Doomer Label Shuts Down Necessary Conversations

When we label someone an AI doomer, we’re not just disagreeing with their assessment. We’re dismissing them as unwise, irrationally negative, and dramatic. In a culture that values positive thinking and optimism, being called a doomer carries real social cost. It suggests you’re the problem, not the issue you’re raising.

And that’s dangerous. Because it discourages honest conversation about real risks at exactly the moment we need those conversations most. Some people who have legitimate concerns will simply stop talking. They’ll see what happened to others who spoke up, decide it’s not worth the social cost, and stay silent. Thus, we lose their perspective, their expertise, and their questions. We also risk becoming collectively dumber about the challenges ahead.

I see a similar dynamic in my clinical practice. In couples therapy, if people are afraid to voice concerns because they’ll be seen as negative or difficult, their partners can maintain denial about a problem. As a result, problems fester and the relationship deteriorates. The same dangerous process can play out at a societal level.

Why We Reach for This Label

As a clinical psychologist, I recognize the doomer label as a defense mechanism. When people feel helpless about a problem that seems to have no good solutions, anxiety can become intolerable. The brain searches for relief, and dismissing the person raising the alarm provides immediate comfort. If the speaker is just being irrational, if they’re catastrophizing, then you don’t have to actually engage with the uncomfortable information. You don’t have to feel scared or helpless.

Calling someone a doomer calms us down in the moment. But defense mechanisms, while protecting us from immediate discomfort, can prevent us from dealing with actual problems. The anxiety goes down. The risk doesn’t.

What We Lose When We Dismiss Concerns

Think about what would have happened if we’d routinely dismissed people raising concerns as doomers across other domains. If we’d labeled as doomers the people who warned about forest fire risks, we’d have lost more homes, lives, and forests. If we’d dismissed concerns about guns in schools as doomer talk, we wouldn’t have lockdown drills or any of the measures we’ve developed to protect children. What if we’d called the people demanding safety regulations at nuclear power plants doomers? I could go on. In each of these cases, the people raising concerns weren’t being pointlessly negative. They were being realistic about risks and trying to prevent harm. And we’re collectively better off because we listened, even when it was uncomfortable to consider the dangers we were facing.

The Stakes With AI

We’re in the middle of a massive technological transformation that’s already reshaping how people connect, how they seek support, and how they understand intimacy and wisdom. The research is clear that people, especially young people, are turning to AI for companionship. Some find it more consistently kind and fulfilling than human interaction.

When I raise concerns about what this means for human connection, I’m not saying AI should be banned or that technology is all bad. I’m saying we need to think carefully about what we’re building and what we appear to be losing in the process. I’m saying we need to have hard conversations about how to preserve human intimacy and wisdom (if you value them, as I do) in a world where algorithms are increasingly steering our thoughts and behavior to align with their own goals.

But if everyone who raises these concerns gets labeled a doomer, we will have fewer of these critical discussions. We’ll just sleepwalk into whatever future the technology creates, without ever asking whether it’s the future we actually want.

A Better Way Forward

So here’s what I’m suggesting: Before you dismiss someone’s concerns about AI as doomer talk, ask yourself what you’re defending against. Is it really that their concerns are unfounded? Or is it that sitting with uncertainty feels intolerable? Are you disagreeing with their analysis, or are you trying to make your own anxiety go away?

I’m not asking anyone to catastrophize. I’m asking us to have honest conversations about technology that’s already changing how we connect, how we think, and who we become. Those conversations won’t happen if we keep shooting the messengers.

The future of human intimacy and connection is being decided right now, in real time, by choices being made in tech companies and in our own daily behavior. We can be part of that decision, or we can dismiss everyone raising concerns as doomers and wake up one day in a world we never actually chose.