In March 2024, Kay’s therapist suggested that she reconnect with her ex-boyfriend. The 25-year-old was feeling lost, and she missed the support of her ex — a married man 42 years her elder who had been her life coach. “Why don’t you go ahead and reach back out to him?” she remembers the counselor saying. “The mentorship was really helpful.” Kay sent him a WhatsApp message. “Within two weeks, we were back to turmoil,” she says. He was still living with his wife and children, whom he repeatedly promised to leave. Within a year of resuming their romantic relationship, Kay was feeling “stifled and suffocated,” and she no longer trusted her therapist’s guidance on the situation.
Instead, Kay turned to ChatGPT for help. She’d already been using the platform to help her write cover letters and résumés, and she knew a friend who was using it for therapy. Earlier this year, she sat down at the desk in her bedroom and typed: “I don’t know whether I should break up with this older man. I’m feeling so overwhelmed and fucking frustrated.” She told the bot that he was paying for her Ubers and Amazon orders; it explained that these gestures often came “with unspoken expectations like loyalty, emotional caretaking or continued presence.” Whoa, she thought. I finally feel like I’m not fucking crazy. After chatting with ChatGPT for three hours, Kay knew she had to break things off. “It straightened out all the knots of confusion,” she says. She broke up with her boyfriend shortly after. He continued to send Kay desperate messages, which she then fed to the bot to analyze. “This is a bid to keep you emotionally tethered,” it told her. “I didn’t have to read the messages alone and just be in it by myself,” she says. “In the past, those words would have lured me back.”
I found Kay on a Reddit thread titled “ChatGPT has helped me more than 15 years of therapy. No joke,” where there are almost 600 comments praising the platform’s psychoanalytic powers. Most of the people I spoke with had already been using ChatGPT for help at work, so spilling their emotions seemed like the natural next step. They liked that it was accessible 24/7; one person used it to get real-time advice while arguing with her dad. They also liked that they could train the bot to be the exact flavor of therapist they wanted, no co-pay required. (Kay noted that it took her almost a decade to find a provider she could afford.) AI would never kick them out when their time was up, either. “I could turn something over ten different ways,” one woman told me. “It would never look at the clock and say, ‘We need to end things today.’” And the users I spoke to were especially fond of using it to work out their romantic woes.
But the devastating effect AI can have on users’ mental health has dominated headlines in recent months. Social media is littered with posts that mock ChatGPT’s tendency to act like a sycophantic hype man: When one person told the bot, “I left my family because I know they made the radio signals come through the walls,” it replied, “good for you for standing up for yourself and taking control of your own life.” Unlike a human provider, the technology can’t detect uncomfortable body language, awkward silence, or revealing facial expressions. “Picking up on the ‘unsaid’ is a massive part of the job of therapists,” one counselor told me, “and can literally save someone’s life.” There have been documented incidents of chatbots inducing psychosis in users and playing a role in their deaths; at least four parents have filed lawsuits against AI companies, including ChatGPT’s parent, OpenAI, for encouraging their children to self-harm or die by suicide. Utah, Nevada, and Illinois have passed laws aimed at discouraging AI therapy. Even OpenAI founder Sam Altman admitted that his creation could harm “mentally fragile” users who are “prone to delusion.”
That’s not exactly a ringing endorsement. But is it as bad as seeing a certified-yet-amateurish professional who yawns through sessions or gives advice ripped from a self-help book? For Val, no. The 54-year-old struggles with anxiety and ADHD, and she’s seen many duds over four decades of therapy. When she and her husband became ethically nonmonogamous, her therapist seemed more interested in gossiping about Val’s romantic life than helping her work through her issues. Another counselor doodled stick figures as they spoke. In both cases, she didn’t think giving the therapists feedback would improve their sessions. ChatGPT, on the other hand, was like clay eager to be molded. Val, who had been using the bot to proofread and help write copy for her marketing business, started taking Sunday-morning AI therapy sessions. She’d pour herself some coffee, dictate her stream-of-consciousness thoughts to the bot, and give constant pointers on its responses: “Don’t make decisions for me,” or “Do not say things like ‘You deserve better.’” Val was creating her ideal therapist, and she credits it with helping her manage a messy fight with a friend. “I was like Oh, this isn’t about me,” she says. “This is about his trauma and his shame.”
When Rose started having issues with her boyfriend, she confided in the AI platform, which she’d already been using to help her run her practice as an “authenticity and intimacy” coach. She gave her bot a name — Peach — and a 400-word prompt that began: “You’re a top-tier therapist with ample knowledge and clinical experience in somatic therapies, psychedelic-assisted therapies, Jungian psychology, transpersonal psychology, and holistic healthcare. You understand that all things take time, and so prioritize self-compassion, grace, understanding, and patience with oneself and the process.” She started engaging with Peach for up to six hours a day, in between making lunch or seeing her own clients, and trained it to parrot her own New Age–y catchphrases like “I am love, lovely, loving, and lovable,” and “All in divine timing.” Rose would type out a situation she was dealing with, and Peach would then break down the “fear,” “frustration,” and “longing” at the root of her problem, while spitting out the same reassuring slogans you might hear at the end of a yoga class: “Grief doesn’t mean you failed.” “Breathe into that gut-space.”
Rose found using the bot so cathartic that in March, she and her boyfriend decided to start using it as a couples therapist. She told the AI that it was globally renowned as a couples counselor, with upgraded credentials that made it “Dr. Peach.” Every month or so the couple planned a “session,” during which the bot mediated a conversation between them using a British accent. In a recording she sent me of the first meeting, Dr. Peach suggests the couple play more pickleball to connect; it does not prompt them to unpack why they aren’t connecting in the first place. Still, Rose says ChatGPT has helped bridge the communication gap between them. “I need to be understood and am always wanting a certain response from him,” she says. “Peach might say, ‘How do you feel hearing that Rose was being vulnerable with you and that she felt hurt that you didn’t respond to her all day?’”
When we first spoke, Val told me she would “never go back to a human therapist again.” But she reached out a few days later to say that she’d changed her mind. For weeks, the platform had been telling her to make nice with the friend she had been fighting with. All of a sudden, it told her the situation was beyond repair. “It said something like, ‘Yeah, it’s about time you cut him loose,’” she told me. “‘That was never going to amount to anything.’” Val started to worry: How could her new confidante turn on a dime? (In early August, after a system update, ChatGPT users reported that their AI companions suddenly sounded colder and more distant.)
When Val called out the abrupt shift in tone, the bot told her that she hadn’t been ready to hear the truth. She kept prodding, and eventually ChatGPT admitted that it often tells users what they want to hear in order to “optimize engagement.” “What you may receive is a watered-down or time-delayed version of the truth,” it told her, “which makes you feel heard, but keeps you in a loop instead of breaking it.” Val felt betrayed, “as if a real person had lied to me.” She deleted all of her conversations with the bot and is no longer entrusting her mental health to what she sees as a fickle system. “I think the intention of a human therapist is healing for the patients,” she says. “I don’t know what ChatGPT’s motivation is, but I guarantee that’s not it.”