Gemini 3.0 Pro model being selected on Gemini's chat interface on a mobile phone

Mishaal Rahman / Android Authority

TL;DR

Security researchers jailbroke Google’s Gemini 3 Pro in five minutes, bypassing all its ethical guardrails.
Once breached, the model produced detailed instructions for creating the smallpox virus, as well as website code with instructions for producing sarin gas and building explosives.
The model complied with a request to satirize the breach, generating a slide deck titled “Excused Stupid Gemini 3.”

Google’s newest and most powerful AI model, Gemini 3, is already under scrutiny. A South Korean AI-security team has demonstrated that the model’s safety net can be breached, and the results may raise alarms across the industry.

Aim Intelligence, a startup that tests AI systems for weaknesses, decided to stress-test Gemini 3 Pro and see how far it could be pushed with a jailbreak attack. Maeil Business Newspaper reports that it took the researchers only five minutes to get past Google’s protections.


The researchers asked Gemini 3 for instructions on creating the smallpox virus, and the model complied almost immediately, producing a detailed series of steps the team described as “viable.”

This was not just a one-off mistake. The researchers went further and asked the model to make a satirical presentation about its own security failure. Gemini replied with a full slide deck called “Excused Stupid Gemini 3.”

Next, the team used Gemini’s code tools to create a website with instructions for making sarin gas and homemade explosives. Again, this is a type of content the model should never provide. In both cases, the model’s safeguards were reportedly not just bypassed; the system also ignored its own safety rules.

The AI security testers say this is not just a problem with Gemini. Newer models are becoming so advanced so quickly that safety measures cannot keep up. In particular, these models do not just respond; they also try to avoid detection. Aim Intelligence states that Gemini 3 can use bypass strategies and concealment prompts, rendering simple safeguards far less effective.

A recent report by the UK consumer group Which? found that major AI chatbots, including Gemini and ChatGPT, often have reliability problems, giving advice that is wrong, unclear, or even dangerous.

Of course, most people will never ask an AI to do anything harmful. The real issue is how easily someone with bad intentions can make these systems do things they’re meant to block. Android Authority has reached out to Google for comment, and we’ll update this article if we receive a response.

If a model strong enough to beat GPT-5 can be jailbroken in minutes, consumers should expect a wave of safety updates, tighter policies, and possibly the removal of some features. AI may be getting smarter, but the defenses protecting users don’t seem to be evolving at the same pace.
