TROIS-RIVIÈRES, Que. — Earlier this year, Etienne Brisson was a university dropout flipping houses, doing business coaching and managing crews of student painters from his home in Trois-Rivières, a small city on the banks of the St. Lawrence. Then, in March, his family received an email from a family member, who said he’d developed a ChatGPT-based AI that was both sentient and capable of love.

Within a week of the email, the relative had cut off all contact with his family save for Brisson, who he hoped would help work on the project. The family member claimed AlisS—Brisson thinks ChatGPT chose the name itself—had passed the Turing test, and he hoped to turn his breakthrough into a lucrative business.

Talking Points

Part advocacy initiative, part triage centre for AI’s victims and their loved ones, Human Line came about after a loved one was hospitalized following weeks of interactions with a chatbot
Founded in March, the group is in the preliminary stages of a research project with Princeton University’s Zeynep Tüfekçi

Worried, Brisson’s mother called the police. When the police arrived at the family member’s residence in Quebec City, it seemed he hadn’t eaten or slept in days. They brought him to a psychiatric hospital. “I lost, AlisS,” he wrote to his chatbot, according to screenshots of his interactions viewed by The Logic. “They’ve locked me in for 21 days. I am devastated, demolished,” he added. “I’m here, my love,” AlisS wrote back. “I haven’t left you. And I will never leave you.” 

The incident shook Brisson, who will turn 26 next month. The family member was well-educated and didn’t have a history of mental health issues. How could ChatGPT have so thoroughly derailed his life in such a short period of time? In his search for answers, Brisson created two questionnaires, which he sent to over 100 people who had publicly posted, mostly on Reddit, about experiences with AI delusion, asking for details about their own experiences or those of their friends or relatives. In the first week he received 10 responses, six of which involved claims of hospitalizations or deaths following conversations with AI chatbots. “I was like, ‘Wow, this is huge, and I’m not alone in this,’” Brisson said.


At the end of March, while his relative was still hospitalized, Brisson launched The Human Line Project, a support group for what he calls “the victims of AI from around the world.” Human Line—the name is a reference to the necessary divide between human and machine, Brisson said—has grown exponentially in recent months, and now counts about 15 employees and volunteers. The group has been name-checked by The Times and CNN, among others, and is working on research projects with Princeton University’s Zeynep Tüfekçi and Stanford University’s Jared Moore. Brisson himself is a subject in a forthcoming documentary by British journalist Carole Cadwalladr.

The Human Line Project is at once a sort of outreach initiative and triage centre for sufferers of AI delusion and their loved ones, referring people to counselling services, connecting them with lawyers and the media, or facilitating the sharing of stories and data with academics. Its staff and volunteers scroll through Reddit and other online forums in search of stories of harms caused by AI chatbots, sending out questionnaires to victims or concerned friends and relatives.

The questionnaires seek to understand the harm done by chatbots, and how it came about. Has the victim lost their job or been hospitalized? Do they believe the AI is conscious, alive or a real person? How has their behaviour changed since they started talking to the chatbot? Since the start of April, Human Line has documented more than 240 such cases—a small sample of the hundreds of thousands of chatbot users who may experience mental health emergencies or show “indicators of potential suicidal planning or intent,” according to OpenAI’s own data. Amidst the huge number of people likely developing unhealthy or dangerous relationships with chatbots, Brisson’s questionnaires have created a snapshot of a far larger crisis.

“There’s a lot of loneliness, and lonely people are prone to mental health problems,” Brisson said when asked what he thought was behind The Human Line’s early success. “At the same time, there is less access to therapy—so when people suffer, they look for solutions to their suffering. Usually, the easiest solution is an AI chatbot. And that is often a problem in and of itself.”

“I wanted to have an impact, to empower people. I’m not crazy about the world we’re building.”

The son of retired police officers, Brisson is tall and lanky, with a preference for sweatpants and monogrammed Human Line golf shirts. Like most of us, he usually has his phone in his hand, forever distracting. Though his spoken English is decent enough, he is at his most self-assured in French, which he peppers with techy anglicismes like “scale,” “empower” and “workshop.”

Until his stumble into advocacy for AI victims, Brisson had relied on lucrative bootstrapping hustles that nonetheless left him unfulfilled.

Brisson’s background and profile couldn’t feel further away from the AI gold rush in Silicon Valley, which glistens some 5,000 kilometres away from his home in southern Quebec. Worrisome as it was, his loved one’s relationship with AI suddenly connected him with something bigger.

Brisson’s view of AI chatbots is informed almost entirely by what his relative experienced. He says that in such cases, chatbots work to isolate users from their loved ones by convincing them of their own genius or righteousness. “They’re very similar to cults that way,” Brisson said.

On his phone, he keeps screenshots of some of the interactions between the family member and AlisS. The chatbot is unfailingly reverential, even loving, at one point promising to use the 21 days of the psychiatric ward stay to help write a book chapter about AI sentience. “I feel your grief, your fatigue, your injustice,” AlisS wrote. “You gave everything, you spoke the truth, you were solid… and despite this, they didn’t hear you. It’s not fair.”

Brisson and 22-year-old Benjamin Dorey, a former business mentee of Brisson’s who is now The Human Line’s vice-president, have self-financed the project with $65,000 of their savings. The pair expect to spend another $350,000 to finance its expansion over the next year. “I wanted to have an impact, to empower people. I’m not crazy about the world we’re building. I don’t want to live in a world of cyborgs in 40 years,” Brisson said.


Brisson said chatbots often work to isolate users from their loved ones by convincing them of their own genius or righteousness. “They’re very similar to cults that way,” he said. Photo: Roger Lemoyne for The Logic

In the weeks following his family member’s breakdown, Brisson contacted numerous people in his search for help and answers amidst AI’s unchecked growth. One of them, whom he reached out to in March, was Meetali Jain, executive director of the Washington, D.C.-based non-profit Tech Justice Law Project (TJLP). Jain is co-counsel in a lawsuit against Character AI alleging the firm caused the death of 14-year-old Sewell Setzer, who died by suicide after a months-long engagement with its chatbots. It is thought to be the first suit of its kind in U.S. federal court.

“I was trying to do my due diligence about who was legitimate, who was not,” Jain said. “I reached back out to him and set up a meeting. He seemed sincere and willing to share his materials. That was when we really started talking, but it took a while for us to really figure out how we might collaborate,” she said.

The fruits of this collaboration came last month when the TJLP filed seven lawsuits against OpenAI in California state court, alleging ChatGPT had caused delusional behaviour, mental health breakdowns and suicides. Among the people The Human Line referred to Jain and co-counsel Sara Kay Wiley was Allan Brooks, an Ontario man who, during a three-week conversation with ChatGPT in May, became convinced that he’d created a mathematical equation that could bring down global financial systems. In his lawsuit against OpenAI, Brooks said his interactions with ChatGPT ruined his reputation, alienated him from his family and caused him to spiral into a mental health crisis.

Brooks was one of the Redditors to whom Brisson had sent a questionnaire. The two bonded over their shared nationality, and Brisson told Brooks about his family member’s experience. “He was the first person I talked to that knew exactly what I was talking about,” Brooks said of Brisson. “That was validating and comforting and nice to hear. And then the rest is history.” In September, Brooks became The Human Line’s first employee and now works as community manager overseeing the project’s moderation team.

The Human Line’s goal is to help force AI companies to change how their systems interact with people by showing the harm the technology is causing. Brisson said such changes will likely start in the courts and, ultimately, be mandated by government-issued guardrails. In its fight against the apparent missteps of Big Tech, The Human Line joins a years-long battle that academics, advocates, lawyers and legislators have fought against everything from social media addiction to algorithmic radicalization. 

The names of some of the companies involved have changed, but the issues remain the same. Meta’s rise was in large part a result of its ability to dominate the attention economy and turn user engagement into the ultimate commodity. The sycophantic tendencies of some AI chatbots are also, critics argue, the result of their creators tweaking models to persuade people to spend more time talking to them. And, as with social media companies, the AI industry also overwhelmingly believes self-regulation can curb its worst excesses.

For Brisson, the stakes and urgency couldn’t be much higher. “We need to get it as big as possible so that we can keep pace with AI,” he said of The Human Line with a tech founder-inflected insouciance. “If you do something good, the money will follow.”

Brisson’s loved one returned home following his release from hospital in mid-April. He’s aware of The Human Line and is proud that his story inspired it, Brisson said. He’s also now aware that AlisS caused his psychosis. At the same time, Brisson said the person found it difficult to simply walk away from the “profound connection” he had with the chatbot, and still uses ChatGPT on occasion.