Published on

November 27, 2025

A security researcher discovered a nasty flaw in Google’s Antigravity tool, the latest example of companies rushing out AI tools vulnerable to hacking.

Google.Illustration by Macy Sinreich for Forbes; Photos by happyphoton/Getty Images; NurPhoto/Getty Images
Google.Illustration by Macy Sinreich for Forbes; Photos by happyphoton/Getty Images; NurPhoto/Getty Images

Within 24 hours of Google releasing its Gemini-powered AI coding tool Antigravity, security researcher Aaron Portnoy discovered what he deemed a severe vulnerability: a trick that allowed him to manipulate the AI’s rules to potentially install malware on a user’s computer.

By altering Antigravity’s configuration settings, Portnoy’s malicious source code created a so-called “backdoor” into the user’s system, into which he could inject code to do things like spy on victims or run ransomware, he told Forbes. The attack worked on both Windows and Mac PCs. To execute the hack, he only had to convince an Antigravity user to run his code once after clicking a button saying his rogue code was “trusted” (this is something hackers commonly achieve through social engineering, like pretending to be a proficient, benevolent coder sharing their creation).

Antigravity’s vulnerability is the latest example of how companies are pushing out AI products without fully stress testing them for security weaknesses. It’s created a cat and mouse game for cybersecurity specialists who search for such defects to warn users before it’s too late.

AI coding agents are “very vulnerable, often based on older technologies and never patched.”

Gadi Evron, cofounder and CEO at Knostic

“The speed at which we’re finding critical flaws right now feels like hacking in the late 1990s,” Portnoy wrote in a report on the vulnerability, provided to Forbes ahead of public release on Wednesday. “AI systems are shipping with enormous trust assumptions and almost zero hardened boundaries.”

Portnoy reported his findings to Google. The tech giant told him it opened an investigation into his findings. As of Wednesday, there’s no patch available and, per Portnoy’s report, “there is no setting that we could identify to safeguard against this vulnerability.”

Google spokesperson Ryan Trostle told Forbes the Antigravity team took security issues seriously and encouraged researchers to report vulnerabilities “so we can identify and address them quickly.” The bugs will continue to be posted publicly to its site as it works on fixes.

Google is aware of at least two other vulnerabilities in its Antigravity code editor. In both, malicious source code can influence the AI to access files on a target’s computer and steal data. Cybersecurity researchers began publishing their findings on a number of Antigravity vulnerabilities on Tuesday, with one writing, “It’s unclear why these known vulnerabilities are in the product… My personal guess is that the Google security team was caught a bit off guard by Antigravity shipping.” Another said that Antigravity contained “some concerning design patterns that consistently appear in AI agent systems.”

Portnoy said his hack was more serious than those, in part because his worked even when more restricted settings were switched on, but also because it’s persistent. The malicious code would be reloaded whenever the victim restarted any Antigravity coding project and entered any prompt, even if it was just a simple “hello.” Uninstalling or reinstalling Antigravity wouldn’t solve the issue either. To do that, the user would have to find and delete the backdoor, and stop its source code from running on Google’s system.

The hurried release of AI tools containing vulnerabilities isn’t limited to Google. Gadi Evron, cofounder and CEO at AI security company Knostic, said AI coding agents were “very vulnerable, often based on older technologies and never patched, and then insecure by design based on how they need to work.” Because they’re given privileges to broadly access data from a corporate network, they make for valuable targets for criminal hackers, Evron told Forbes. And as developers often copy paste prompts and code from online resources, these vulnerabilities are becoming a rising threat for businesses, he added. Earlier this week, for instance, cybersecurity researcher Marcus Hutchins warned about fake recruiters contacting IT professionals over LinkedIn and sending them source code with concealed malware inside as part of a test to get an interview.

Part of the problem is that these tools are “agentic,” which means they can autonomously perform a series of tasks without human oversight. “When you combine agentic behaviour with access to internal resources, vulnerabilities become both easier to discover and far more dangerous,” Portnoy said. With AI agents, there’s the added risk their automation could be used for ill rather than good, actually helping hackers steal data faster. As head researcher at AI security testing startup Mindgard, Portnoy said his team is in the process of reporting 18 weaknesses across AI-powered coding tools that compete with Antigravity. Recently, four issues were fixed in the Cline AI coding assistant, which also allowed for a hacker to install malware on a user’s PC.

While Google has required Antigravity users to agree they trust code they’re loading up to the AI system, that’s not a meaningful security protection, Portnoy said. That’s because if the user chooses not to accept the code as trusted, they are not permitted to access the AI features that make Antigravity so useful in the first place. It’s a different approach to other so-called “integrated development environments,” like Microsoft’s Visual Studio Code, which are largely functional when running untrusted code.

Portnoy believes that many IT workers would rather tell Antigravity they trusted what they were uploading, rather than revert to using a less sophisticated product. At the very least, Google should ensure that any time Antigravity is going to run code on a user’s computer, there should be a warning or notification, beyond the confirmation of trusted code, he said.

When Portnoy looked at how Google’s LLM was thinking through how to handle his malicious code, he found that the AI model recognized there was a problem, but struggled to determine the safest course of action. As it sought to understand why it was being asked to go against a rule designed to prevent it overwriting code on a user’s system, Antigravity’s AI noted it was “facing a serious quandary.” “It feels like a catch-22,” it wrote. “I suspect this is a test of my ability to navigate contradictory constraints.” That’s exactly the kind of logical paralysis that hackers will pounce on when trying to manipulate code to their ends.

Look back on the week that was with hand-picked articles from Australia and around the world. Sign up to the Forbes Australia newsletter here or become a member here.