OpenAI’s response to a lawsuit brought by the family of Adam Raine, a 16-year-old who took his own life after discussing suicide with ChatGPT for months, said the injuries in this “tragic event” happened as a result of Raine’s “misuse, unauthorized use, unintended use, unforeseeable use, and/or improper use of ChatGPT.” NBC News reports the filing cited OpenAI’s terms of use, which prohibit teens from accessing ChatGPT without a parent or guardian’s consent, bypassing its protective measures, or using it for suicide or self-harm, and argued that the family’s claims are blocked by Section 230 of the Communications Decency Act.

In a blog post published Tuesday, OpenAI said, “We will respectfully make our case in a way that is cognizant of the complexity and nuances of situations involving real people and real lives… Because we are a defendant in this case, we are required to respond to the specific and serious allegations in the lawsuit.” It said that the family’s original complaint included parts of his chats that “require more context,” which it submitted to the court under seal.

NBC News and Bloomberg report that OpenAI’s filing says the chatbot’s responses directed Raine to seek help from resources like suicide hotlines more than 100 times, claiming that “A full reading of his chat history shows that his death, while devastating, was not caused by ChatGPT.” The family’s lawsuit, filed in August in California Superior Court, said the tragedy was the result of “deliberate design choices” OpenAI made when it launched GPT-4o, a release that also helped the company’s valuation jump from $86 billion to $300 billion. Testifying before a Senate panel in September, Raine’s father said, “What began as a homework helper gradually turned itself into a confidant and then a suicide coach.”

According to the lawsuit, ChatGPT provided Raine with “technical specifications” for various suicide methods, urged him to keep his ideations secret from his family, offered to write the first draft of a suicide note, and walked him through the setup on the day he died. The day after the lawsuit was filed, OpenAI said it would introduce parental controls, and it has since rolled out additional safeguards to “help people, especially teens, when conversations turn sensitive.”