A mental wellness company has shut down its AI-powered chatbot, saying that such technologies aren’t ready to handle people in crisis.

Joe Braidwood, co-founder of Yara AI, said in a LinkedIn post that even chatbots trained on mental health content fall short and pose an added risk to people with the most serious mental health challenges.

“The moment someone truly vulnerable reaches out — someone in crisis, someone with deep trauma, someone contemplating ending their life — AI becomes dangerous. Not just inadequate. Dangerous,” Braidwood said.

Braidwood has not responded to a request for comment.

The company sought to train chatbots on the insights of mental health experts so they could provide empathetic support for mental wellness. Instead, it found that the large language models underlying such chatbots struggle to track changes over time, making it difficult to assess whether Yara AI was the appropriate level of support for a given user, according to Fortune.

Braidwood told Fortune that the company had to establish where it drew the line between mental health support and health care, out of concern for user safety and potential liability for the company. Some state governments are also weighing what role AI can and should play in mental health. Illinois became the first state to ban such tools from being used for therapy; AI can now be used in the state’s mental health field only for narrowly defined purposes.

That new law hit Yara AI directly. It complicated fundraising, and the issue was “no longer academic,” Braidwood told Fortune.

Yara AI’s website now features a thank-you message and a link to the company’s AI prompting materials, which are now open source. In his LinkedIn post, Braidwood said he released them to help the millions of people already using the most prominent general-purpose AI systems get the best experience possible: “the mental health crisis isn’t waiting for us to figure out the perfect solution. … I’m sharing these because people are already turning to AI for support.”

Within ChatGPT alone, hundreds of thousands of people express mental distress in their conversations with the chatbot every week. OpenAI, the maker of ChatGPT, has said that about 0.07% of users show signs of a mental health crisis. The company has responded by signaling that it will take steps to direct such users to mental health care.

One survey found that, among the 28% of respondents who said they had used a chatbot for health, 60% used AI as “a personal therapist.”

A more recent survey found that about 13% of young people ages 12 to 21 have turned to a generative AI tool for “mental health advice.” Nearly all who did (93%) found the advice helpful, and about two-thirds reported using it at least once a month.

Within the behavioral health industry, some companies are trying to tackle this area head-on as a new market opportunity. Talkspace (Nasdaq: TALK) is building its own LLM-powered chatbots from the millions of digital records it has accumulated over the past decade as a digital therapy provider.

Other tech companies, such as Slingshot, have raised large funding rounds on the promise of AI services dedicated to mental health. Slingshot raised $93 million in July. Part of its solution aims to address the difficulty general-purpose AI tools have in tracking and remembering user inputs over time.