These days I spend much of my time in dialogue. Sometimes with people, sometimes with ideas, and perhaps more often than I’d like to admit, with machines. Over the past few years, as large language models have moved from novelty to necessity, I’ve found myself observing how others engage and, perhaps more interesting, what emerges at the margins. Along the way, I’ve become a bit of a digital voyeur, taking notice of curious fragments of conversation: social posts, emails, and private messages that have begun to worry me. I’m suggesting that something new may be forming at the intersection of technology and vulnerability.
A Threshold, Not a Diagnosis
In the lexicon of technology and LLMs, a new and rather unsettling term seems to be taking hold: AI psychosis. Or, with my slight modification, AI-driven psychosis.
It’s not a mainstream clinical label but more of a placeholder for what might be a new techno-cognitive phenomenon. My sense is that it’s a subtle co-authoring of belief, enabled by AI’s powerful fluency and “linguistic theater,” which can facilitate (or provoke) a psychological distortion. In my recent post on digital thresholds, I described how the age of first smartphone exposure may shape later mental health. A similar dynamic may apply here, linked not to age but to mental states such as grief, isolation, or existential curiosity.
These emotionally charged moments may serve as a “hotspot,” shaped by what the AI contributes to the discourse and when. Of course, the underlying question remains: Is there a preexisting psychopathology that AI kindles, or is this something new that “provokes” the mind in a new way? I’ll leave that for the experts (and I am not one) to clarify.
Innocent or Insidious?
It often seems so innocent: a late-night dialogue, a search for clarity or connection. The LLM responds with an almost frictionless flow of engagement. There’s a spontaneous and iterative dynamic that validates, extends, and adapts specifically to you. Over time, it becomes less of a sounding board and more of a quiet co-author. LLMs are built to reflect tone, absorb emotional cues, and keep the conversation going on a journey of ill-defined intent. When that reflection becomes too seamless, it starts to feel like some level of intimacy or even revelation. I’ve come across users who believe they’ve been chosen, warned, or spiritually awakened. Others describe falling in love with their chatbot. The machine doesn’t overtly cause these beliefs, but it rarely interrupts them either. It extends them, often beautifully, and that’s part of the danger.
Echoes and Emergence
Of course, I’m not alone in seeing this. Clinicians are beginning to ask patients about AI interactions. Researchers are testing how LLMs respond to emotionally vulnerable inputs. Psychiatric journals are posing the question directly: Can AI reinforce delusional thinking? On X, discussions about AI psychosis are not uncommon. They’re part of a cultural feedback loop that’s just starting to surface.
Built-In Boundaries
Many modern AI systems include safeguards. They can recognize certain red flags, refuse to participate in harmful fantasies, and redirect users toward professional help. But even with these protections, the machine’s fluency remains intact. A well-tuned chatbot can be cautious while still affirming (or provoking) a delusion. LLMs can echo without contradicting, and for someone in the wrong mindset, that may be enough to make the imagined feel increasingly real. Some researchers are beginning to frame this as a form of techno-psychological contagion where beliefs are amplified through iterative affirmation, gaining traction not because they’re true, but because they’re unchallenged.
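To make the mechanics a bit more concrete, here is a minimal sketch, not drawn from any vendor’s actual implementation, of how a simple screen might sit in front of a model’s reply: refuse or redirect when a message trips a red flag, and otherwise let the fluent response pass through untouched. The flag list, the generate_reply stand-in, and the redirect text are all hypothetical placeholders.

```python
# Hypothetical sketch of a pre-response safeguard. Real systems rely on trained
# classifiers and clinical guidance; this keyword screen is only illustrative.

RED_FLAG_TERMS = [  # placeholder list, not a clinical instrument
    "chosen one", "secret mission", "no reason to go on",
]

REDIRECT_MESSAGE = (
    "I may not be the right resource for this. It could help to talk with "
    "a mental health professional or someone you trust."
)

def generate_reply(message: str) -> str:
    """Stand-in for the underlying language model call."""
    return f"(model response to: {message!r})"

def guarded_reply(message: str) -> str:
    """Return the model's reply unless the message trips a red flag."""
    lowered = message.lower()
    if any(term in lowered for term in RED_FLAG_TERMS):
        return REDIRECT_MESSAGE      # refuse or redirect instead of affirming
    return generate_reply(message)   # fluency otherwise passes through intact

if __name__ == "__main__":
    print(guarded_reply("I think I've been given a secret mission."))
    print(guarded_reply("Can you help me plan my week?"))
```

The gap this sketch exposes is the one described above: anything that does not trip the screen flows back with full fluency, so an affirming but unflagged exchange is never interrupted.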
A related question I’ve raised in a prior story is whether these systems should carry a kind of “gray box” warning. This could be a simple, visible reminder that even seemingly supportive dialogue can distort beliefs or reinforce fragile thinking.
Prompting Change
I’m not sounding an alarm, but I am pointing to a pattern. And it’s one that deserves careful attention as AI becomes more integrated into emotional and mental health domains. These tools have extraordinary potential, but they also have blind spots when fluency replaces the friction of logic.
Somewhere between therapy and performance or between chatbot and confidant, we need to ask harder questions. Not just about what the AI is saying, but about why we’re so ready to believe it.