Tenable has identified three security vulnerabilities in Google’s Gemini suite that exposed millions of users to risks of silent data theft.

The vulnerabilities, now remediated by Google, were collectively dubbed the Gemini Trifecta by Tenable and impacted three core aspects of the Gemini suite: Cloud Assist, Search Personalisation Model, and the Browsing Tool.

In Cloud Assist, Tenable’s researchers discovered that attackers could plant malicious log entries.

When users subsequently interacted with Gemini, the system could unknowingly follow these hidden instructions, making it possible for threat actors to manipulate the AI’s behaviour without any obvious signal to the user.

The second vulnerability was found in the Gemini Search Personalisation Model. Here, attackers could quietly inject queries into a victim’s browser history. Since Gemini treats browser history as trusted context for its recommendations and responses, this loophole made it possible for attackers to siphon off sensitive information, including users’ saved data and location details, without the victims becoming aware of the theft.

The final flaw involved the Gemini Browsing Tool. Attackers could manipulate Gemini in a way that led to hidden outbound requests, embedding private user data and sending it directly to servers under attacker control. This made users vulnerable to data exfiltration even in the absence of direct access or malware on their devices.

Collectively, these vulnerabilities created what Tenable describes as “invisible doors” into Gemini.

Attackers could hijack the AI’s functions, steal sensitive user data, and operate undetected. Unlike traditional cyberattacks that rely on malware or phishing, these exploits demonstrated that the AI system itself could serve as the primary attack vehicle.

Tenable Research attributed the main problem to how Gemini’s integrations handled data. The system failed to adequately differentiate between trusted user input and content introduced by attackers. Logs, search histories, and web content, if poisoned or manipulated, could be processed as reliable context by the Gemini AI, enabling attackers to exploit standard features as hidden attack channels.

“Gemini draws its strength from pulling context across logs, searches, and browsing. That same capability can become a liability if attackers poison those inputs,” said Liv Matan, Senior Security Researcher at Tenable.

Matan further remarked on the broader implications for the security of artificial intelligence platforms, noting:

“The Gemini Trifecta shows how AI platforms can be manipulated in ways users never see, making data theft invisible and redefining the security challenges enterprises must prepare for. Like any powerful technology, large language models (LLMs) such as Gemini bring enormous value, but they remain susceptible to vulnerabilities. Security professionals must move decisively, locking down weaknesses before attackers can exploit them and building AI environments that are resilient by design, not by reaction. This isn’t just about patching flaws; it’s about redefining security for an AI-driven era where the platform itself can become the attack vehicle.”

If these vulnerabilities had been exploited before they were remediated, attackers could have silently inserted malicious instructions into logs or search histories, exfiltrated saved user data and location histories, abused cloud integrations to move laterally into broader cloud resources, and tricked Gemini into sending data to attacker-controlled destinations via its browsing tool.

Google has now addressed all three vulnerabilities, and users are not required to take any action as all issues have been resolved on the provider’s side.

Tenable released several recommendations for IT and security teams tasked with protecting AI-driven platforms. These included treating AI features as active attack surfaces, regularly auditing logs, search histories and integrations for signs of manipulation or poisoning, monitoring for unusual tool executions and outbound traffic, and proactively testing for resilience against prompt injection attacks.

Matan summarised the key lesson for the security community:

“This vulnerability disclosure underscores that securing AI isn’t just about fixing individual flaws. It’s about anticipating how attackers could exploit the unique mechanics of AI systems and building layered defences that prevent small cracks from becoming systemic exposures.”