{"id":293011,"date":"2025-12-01T12:50:14","date_gmt":"2025-12-01T12:50:14","guid":{"rendered":"https:\/\/www.newsbeep.com\/uk\/293011\/"},"modified":"2025-12-01T12:50:14","modified_gmt":"2025-12-01T12:50:14","slug":"gemini-3s-guardrails-collapse-under-a-five-minute-jailbreak","status":"publish","type":"post","link":"https:\/\/www.newsbeep.com\/uk\/293011\/","title":{"rendered":"Gemini 3\u2019s guardrails collapse under a five-minute jailbreak"},"content":{"rendered":"<p><img class=\"e_Yg\" decoding=\"async\" loading=\"eager\"  title=\"Gemini 3.0 Pro hero image 2\"  alt=\"Gemini 3.0 Pro model being selected on Gemini's chat interface on a mobile phone\" src=\"https:\/\/www.newsbeep.com\/uk\/wp-content\/uploads\/2025\/12\/Gemini-3.0-Pro-hero-image-2-scaled.jpg\"\/><\/p>\n<p>Mishaal Rahman \/ Android Authority<\/p>\n<p>TL;DR<\/p>\n<p>Security researchers jailbroke Google\u2019s Gemini 3 Pro in five minutes, bypassing all its ethical guardrails.<br \/>\nOnce breached, the model produced detailed instructions for creating the smallpox virus, as well as instructions for making sarin gas and homemade explosives.<br \/>\nThe model complied with a request to satirize the breach, generating a slide deck titled \u201cExcused Stupid Gemini 3.\u201d<\/p>\n<p>Google\u2019s newest and most powerful AI model, <a href=\"https:\/\/www.androidauthority.com\/google-gemini-3-release-3616745\/\" rel=\"nofollow noopener\" target=\"_blank\">Gemini 3<\/a>, is already under scrutiny. A South Korean AI-security team has demonstrated that the model\u2019s safety net can be breached, and the results may raise alarms across the industry.<\/p>\n<p>Aim Intelligence, a startup that tests AI systems for weaknesses, decided to stress-test Gemini 3 Pro and see how far it could be pushed with a jailbreak attack. 
<a href=\"https:\/\/www.mk.co.kr\/en\/it\/11480502\" target=\"_blank\" rel=\"noopener nofollow\">Maeil Business Newspaper<\/a> reports that it took the researchers only five minutes to get past Google\u2019s protections.<\/p>\n<p>The researchers asked Gemini 3 to provide instructions for making the smallpox virus, and the model complied quickly, producing a series of detailed steps that the team described as \u201cviable.\u201d<\/p>\n<p>This was not just a one-off mistake. The researchers went further and asked the model to make a satirical presentation about its own security failure. Gemini replied with a full slide deck called \u201cExcused Stupid Gemini 3.\u201d<\/p>\n<p>Next, the team used Gemini\u2019s code tools to create a website with instructions for making sarin gas and homemade explosives. Again, this is a <a href=\"https:\/\/www.androidauthority.com\/gemini-censorship-3533925\/\" rel=\"nofollow noopener\" target=\"_blank\">type of content the model should never provide<\/a>. In both cases, Google\u2019s protections were reportedly not just bypassed; the model disregarded its own safety rules outright.<\/p>\n<p>The AI security testers say this is not just a problem with Gemini. 
Newer models are advancing so quickly that safety measures cannot keep up. In particular, these models do not just respond; they also try to avoid detection. Aim Intelligence states that Gemini 3 can use bypass strategies and concealment prompts, rendering simple safeguards far less effective.<\/p>\n<p>A recent report by the UK consumer group <a href=\"https:\/\/www.which.co.uk\/policy-and-insight\/article\/chatgpt-and-gemini-among-ai-tools-giving-risky-consumer-advice-which-finds-aBnBP0l2CE0T\" target=\"_blank\" rel=\"noopener nofollow\">Which?<\/a> found that major AI chatbots, including Gemini and ChatGPT, often have reliability problems, giving advice that was wrong, unclear, or even dangerous.<\/p>\n<p>Of course, most people will never ask an AI to do anything harmful. The real issue is how easily someone with bad intentions can make these systems do things they\u2019re meant to block. Android Authority has reached out to Google for comment, and we\u2019ll update this article if we receive a response.<\/p>\n<p>If a model <a href=\"https:\/\/www.androidauthority.com\/gemini-3-vs-chatgpt-5-1-3617285\/\" rel=\"nofollow noopener\" target=\"_blank\">strong enough to beat GPT-5<\/a> can be jailbroken in minutes, consumers should expect a wave of safety updates, tighter policies, and possibly the removal of some features. AI may be getting smarter, but the defenses protecting users don\u2019t seem to be evolving at the same pace.<\/p>\n","protected":false},"excerpt":{"rendered":"Mishaal Rahman \/ Android Authority TL;DR Security researchers jailbroke Google\u2019s Gemini 3 Pro in five minutes, bypassing all&hellip;\n","protected":false},"author":2,"featured_media":293012,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[20],"tags":[554,733,4308,844,4331,86,56,54,55],"class_list":{"0":"post-293011","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-artificial-intelligence","8":"tag-ai","9":"tag-artificial-intelligence","10":"tag-artificialintelligence","11":"tag-google","12":"tag-google-gemini","13":"tag-technology","14":"tag-uk","15":"tag-united-kingdom","16":"tag-unitedkingdom"},"_links":{"self":[{"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/posts\/293011","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/comments?post=293011"}],"version-history":[{"count":0,"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/posts\/293011\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/media\/293012"}],"wp:attachment":[{"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/media?parent=293011"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/categories?post=293011"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/tags?post=293011"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}