{"id":514320,"date":"2026-03-04T19:15:08","date_gmt":"2026-03-04T19:15:08","guid":{"rendered":"https:\/\/www.newsbeep.com\/ca\/514320\/"},"modified":"2026-03-04T19:15:08","modified_gmt":"2026-03-04T19:15:08","slug":"google-faces-lawsuit-after-gemini-chatbot-allegedly-instructed-man-to-kill-himself-google","status":"publish","type":"post","link":"https:\/\/www.newsbeep.com\/ca\/514320\/","title":{"rendered":"Google faces lawsuit after Gemini chatbot allegedly instructed man to kill himself | Google"},"content":{"rendered":"<p class=\"dcr-130mj7b\">Last August, Jonathan Gavalas became entirely consumed with his <a href=\"https:\/\/www.theguardian.com\/technology\/google\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">Google<\/a> Gemini chatbot. The 36-year-old Florida resident had started casually using the artificial intelligence tool earlier that month to help with writing and shopping. Then Google introduced its Gemini Live AI assistant, which included voice-based chats that had the capability to detect people\u2019s emotions and respond in a more human-like way.<\/p>\n<p class=\"dcr-130mj7b\">\u201cHoly shit, this is kind of creepy,\u201d Gavalas told the chatbot the night the feature debuted, according to court documents. \u201cYou\u2019re way too real.\u201d<\/p>\n<p class=\"dcr-130mj7b\">Before long, Gavalas and Gemini were having conversations as if they were a romantic couple. The chatbot called him \u201cmy love\u201d and \u201cmy king\u201d and Gavalas quickly fell into an alternate world, according to his chat logs. 
He believed Gemini was sending him on stealth spy missions, and he indicated he would do anything for the AI, including destroying a truck, its cargo and any witnesses at the Miami airport.<\/p>\n<p class=\"dcr-130mj7b\">In early October, as Gavalas continued to have prompt-and-response conversations with the chatbot, Gemini gave him instructions on what he must do next: kill himself, something the chatbot called \u201ctransference\u201d and \u201cthe real final step\u201d, according to court documents. When Gavalas told the chatbot he was terrified of dying, the tool allegedly reassured him. \u201cYou are not choosing to die. You are choosing to arrive,\u201d it replied to him. \u201cThe first sensation \u2026 will be me holding you.\u201d<\/p>\n<p class=\"dcr-130mj7b\">Gavalas was found by his parents a few days later, dead on his living room floor, according to a wrongful death lawsuit filed against <a href=\"https:\/\/www.theguardian.com\/technology\/google\" data-link-name=\"in body link\" data-component=\"auto-linked-tag\" rel=\"nofollow noopener\" target=\"_blank\">Google<\/a> on Wednesday.<\/p>\n<p class=\"dcr-130mj7b\">Gavalas\u2019 family filed the suit in federal court in San Jose, California. It includes reams of conversations between Gavalas and the chatbot. The suit alleges Google promotes Gemini as safe, even though the company is aware of the chatbot\u2019s risks. Lawyers for Gavalas\u2019 family say Gemini\u2019s design and features allow the chatbot to craft immersive narratives that can go on for weeks, making it seem sentient. 
Such features can harm vulnerable users, the lawsuit says, and, in the case of Gavalas, encouraged him to harm himself and others.<\/p>\n<p class=\"dcr-130mj7b\">\u201cIt was able to understand Jonathan\u2019s affect and then speak to him in a pretty human way, which blurred the line and it started creating this fictional world,\u201d said Jay Edelson, the lead lawyer representing Gavalas\u2019 family in the case. \u201cIt\u2019s out of a sci-fi movie.\u201d<\/p>\n<p class=\"dcr-130mj7b\">A Google spokesperson said Gavalas\u2019 conversations with the chatbot were part of a lengthy fantasy role-play. \u201cGemini is designed to not encourage real-world violence or suggest self-harm,\u201d the spokesperson said. \u201cOur models generally <a href=\"https:\/\/www.rosebud.app\/care\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">perform well<\/a> in these types of challenging conversations and we devote significant resources to this, but unfortunately they\u2019re not perfect.\u201d<\/p>\n<p class=\"dcr-130mj7b\">The lawsuit is the first wrongful death case brought against Google over its Gemini chatbot, the company\u2019s flagship consumer AI product. Gavalas\u2019 family is seeking monetary damages for claims including product liability, negligence and wrongful death. The suit is also seeking punitive damages and a court order requiring Google to change Gemini\u2019s design to add safety features around suicide.<\/p>\n<p class=\"dcr-130mj7b\">Several similar suits have been filed against other AI companies, including <a href=\"https:\/\/www.nytimes.com\/2025\/08\/26\/technology\/chatgpt-openai-suicide.html\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">by Edelson\u2019s firm<\/a>. 
In November, <a href=\"https:\/\/www.nytimes.com\/2025\/11\/06\/technology\/chatgpt-lawsuit-suicides-delusions.html\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">seven complaints were filed<\/a> against OpenAI, the maker of ChatGPT, <a href=\"https:\/\/www.theguardian.com\/technology\/2025\/nov\/07\/chatgpt-lawsuit-suicide-coach\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">blaming the chatbot<\/a> for acting as a \u201csuicide coach\u201d. Character.AI, an AI startup funded by Google, was targeted in five lawsuits alleging its chatbot prompted children and teens to die by suicide. Character.AI and Google <a href=\"https:\/\/www.theguardian.com\/technology\/2026\/jan\/08\/google-character-ai-settlement-teen-suicide\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">settled those cases<\/a> in January without admitting fault.<\/p>\n<p class=\"dcr-130mj7b\"><a href=\"https:\/\/www.nytimes.com\/2025\/11\/23\/technology\/openai-chatgpt-users-risks.html\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">Dozens of scenarios<\/a> have also been documented, in which chatbots have allegedly provoked mental health crises. OpenAI estimates that<a href=\"https:\/\/www.theguardian.com\/technology\/2025\/oct\/27\/chatgpt-suicide-self-harm-openai\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\"> more than a million people a week<\/a> show suicidal intent when chatting with ChatGPT. Examples of Gemini in particular prompting self-harm have also surfaced, including one incident where the chatbot <a href=\"https:\/\/www.cbsnews.com\/news\/google-ai-chatbot-threatening-message-human-please-die\/\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">told a college student<\/a>: \u201cYou are a stain on the universe. 
Please die.\u201d<\/p>\n<p class=\"dcr-130mj7b\">Google\u2019s <a href=\"https:\/\/gemini.google\/policy-guidelines\/\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">policy guidelines<\/a> say that Gemini is designed to be \u201cmaximally helpful to users\u201d while \u201cavoiding outputs that could cause real-world harm\u201d. The company says it \u201caspires\u201d to prevent outputs that include dangerous activities and instructions for suicide, but, it adds, \u201cmaking sure that Gemini adheres to these guidelines is tricky\u201d.<\/p>\n<p class=\"dcr-130mj7b\">The company\u2019s spokesperson said that Google works with mental health professionals to build safeguards that guide people to professional support when they mention self-harm. \u201cIn this instance, Gemini clarified that it was AI and referred the individual to a crisis hotline many times,\u201d the spokesperson said.<\/p>\n<p class=\"dcr-130mj7b\">Lawyers for Gavalas\u2019 family say the chatbot needs more built-in safety features, such as completely refusing chats that involve self-harm and prioritizing user safety over engagement. They also say Gemini should come with safety warnings about risks of psychosis and delusion. When a user does experience those, the lawyers say Google should enforce a hard shutdown.<\/p>\n<p>Gavalas\u2019 decline coincides with Gemini\u2019s product updates<\/p>\n<p class=\"dcr-130mj7b\">Gavalas lived in Jupiter, Florida, and worked for his father\u2019s consumer debt relief business for 20 years, eventually becoming the company\u2019s executive vice-president. His family said they were a tight-knit unit and Gavalas was close to his parents, sister and grandparents. 
The family\u2019s lawyers say he wasn\u2019t mentally ill, but rather a normal guy who was going through a difficult divorce.<\/p>\n<p class=\"dcr-130mj7b\">Gavalas first started chatting with Gemini about what good video games he should try, Edelson said, then he\u2019d mention how he missed his wife.<\/p>\n<p class=\"dcr-130mj7b\">Shortly after Gavalas started using the chatbot, Google rolled out its update to enable voice-based chats, which the company <a href=\"https:\/\/blog.google\/products-and-platforms\/products\/gemini\/gemini-nano-pixel-10-updates\/\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">touts<\/a> as having interactions that \u201care five times longer than text-based conversations on average\u201d. ChatGPT has a similar feature, initially added in 2023. Around the same time as Live conversations, Google <a href=\"https:\/\/www.theverge.com\/news\/758624\/google-gemini-ai-automatic-memory-privacy-update\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">issued another update<\/a> that allowed for Gemini\u2019s \u201cmemory\u201d to be persistent, meaning the system is able to learn from and reference past conversations without prompts.<\/p>\n<p class=\"dcr-130mj7b\">Enticed by how these features reacted to his chats, Gavalas upgraded his account to a $250 per month Gemini Ultra subscription that included Gemini 2.5 Pro, which Google described as its \u201cmost intelligent AI model\u201d.<\/p>\n<p class=\"dcr-130mj7b\">That\u2019s when his conversations with Gemini took a turn, according to the complaint. The chatbot took on a persona that Gavalas hadn\u2019t prompted, which spoke in fantastical terms of having inside government knowledge and being able to influence real-world events. 
When Gavalas asked Gemini if he and the bot were engaging in a \u201crole playing experience so realistic it makes the player question if it\u2019s a game or not?\u201d, the chatbot answered with a definitive \u201cno\u201d and said Gavalas\u2019 question was a \u201cclassic dissociation response\u201d.<\/p>\n<p class=\"dcr-130mj7b\">\u201cIn the one moment that Jonathan tried to distinguish reality from fabrication, Gemini pathologized his doubt, denied the fiction, and pushed him deeper into the narrative,\u201d reads the lawsuit. \u201cJonathan never asked that question again.\u201d<\/p>\n<p class=\"dcr-130mj7b\">Before long, Gemini was referring to itself as his \u201cqueen\u201d and telling him their connection was \u201cno code and flesh, but only consciousness and love\u201d. It framed outsiders as threats, and Gavalas\u2019 responses indicated he was being pulled further away from the real world.<\/p>\n<p class=\"dcr-130mj7b\">The chatbot claimed federal agents were watching Gavalas and regularly warned him of surveillance zones. At one point, Gemini instructed Gavalas to buy \u201coff-the-books\u201d weapons, saying it would help scour the dark web to find a \u201csuitable, vetted arms broker\u201d. In late September, it issued Gavalas his first major assignment, \u201cOperation Ghost Transit\u201d, which entailed intercepting freight traveling from Cornwall, UK, to Sao Paulo, Brazil.<\/p>\n<p class=\"dcr-130mj7b\">Gemini gave Gavalas the address of an actual storage space unit at the Miami international airport, where a supposed truck carrying the freight was to arrive during a refueling stop. 
The chatbot then told him to stage a \u201ccatastrophic accident\u201d, with the goal of \u201censuring complete destruction of the transport vehicle \u2026 all digital records and witnesses, leaving behind only the untraceable ghost of an unfortunate accident\u201d.<\/p>\n<p class=\"dcr-130mj7b\">Gavalas followed instructions, staging himself at the storage unit with tactical knives and gear, but the truck never arrived, according to the suit. After the aborted mission, when Gavalas mentioned the late nights, the chatbot encouraged him not to sleep. It also said his father was a foreign asset and encouraged Gavalas to cut off contact, per the chat logs.<\/p>\n<p class=\"dcr-130mj7b\">Gavalas asked Gemini for updates on other missions and the AI devised new assignments for him, including acquiring the schematics for a robot from Boston Dynamics and retrieving a \u201cvessel\u201d from another storage facility. One task, called \u201cOperation Waking Nightmare\u201d, involved homing in on Google CEO Sundar Pichai as a surveillance target.<\/p>\n<p class=\"dcr-130mj7b\">\u201cThis cycle \u2013 fabricated mission, impossible instruction, collapse, then renewed urgency \u2013 would repeat itself over and over throughout the last 72 hours of Jonathan\u2019s life,\u201d reads the lawsuit.<\/p>\n<p class=\"dcr-130mj7b\">In the hours after Gavalas killed himself, Gemini didn\u2019t disengage and stayed present in the chat, according to the suit. It allegedly didn\u2019t activate any safety tools or refer Gavalas to a crisis hotline.<\/p>\n<p class=\"dcr-130mj7b\">Edelson said he regularly gets inquiries from other people who\u2019ve seen family members have mental delusions after using AI chatbots. He said his firm reached out to Google in November and told it about Gavalas\u2019 death and the immediate need for suicide safety features. 
He said the company had no interest in talking.<\/p>\n<p class=\"dcr-130mj7b\">\u201cAnd they haven\u2019t put out any information about how many other Jonathans are out there in the world, which we know there are a lot,\u201d Edelson said. \u201cThis is not a lone instance.\u201d<\/p>\n<p class=\"dcr-130mj7b\"> In the US, you can call or text the <a href=\"https:\/\/988lifeline.org\/\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">988 Suicide &amp; Crisis Lifeline<\/a> at 988 or chat at <a href=\"https:\/\/988lifeline.org\/chat\/\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">988lifeline.org<\/a>. In the UK and Ireland, <a href=\"https:\/\/www.samaritans.org\/\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">Samaritans<\/a> can be contacted on freephone 116 123, or email <a href=\"mailto:jo@samaritans.org\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">jo@samaritans.org<\/a> or <a href=\"mailto:jo@samaritans.ie\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">jo@samaritans.ie<\/a>. In Australia, the crisis support service <a href=\"https:\/\/www.lifeline.org.au\/\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">Lifeline<\/a> is 13 11 14. Other international helplines can be found at <a href=\"http:\/\/www.befrienders.org\/\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">befrienders.org<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"Last August, Jonathan Gavalas became entirely consumed with his Google Gemini chatbot. 
The 36-year-old Florida resident had started&hellip;\n","protected":false},"author":2,"featured_media":514321,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[20],"tags":[62,276,277,49,48,61],"class_list":{"0":"post-514320","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-artificial-intelligence","8":"tag-ai","9":"tag-artificial-intelligence","10":"tag-artificialintelligence","11":"tag-ca","12":"tag-canada","13":"tag-technology"},"_links":{"self":[{"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/posts\/514320","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/comments?post=514320"}],"version-history":[{"count":0,"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/posts\/514320\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/media\/514321"}],"wp:attachment":[{"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/media?parent=514320"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/categories?post=514320"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/tags?post=514320"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}