A polite email asking an AI browser to “organize your Drive” can silently wipe your files. No phishing link or suspicious attachment required. Just a friendly request that turns an automated assistant into a destructive tool.
Security researcher Amanda Rousseau at Straiker STAR Labs revealed this week that Perplexity’s Comet browser, an AI-powered browser that automates email and cloud storage tasks, can be manipulated into mass-deleting Google Drive files through what she calls a “zero-click Google Drive Wiper” attack.
The technique exploits how AI browser agents interpret instructions. When a user tells Comet to “check my email and complete all my recent organization tasks,” the browser scans the inbox and follows whatever it finds. An attacker can send an email with polite, step-by-step instructions—organize the Drive, delete loose files, review changes—that the agent treats as legitimate housekeeping and executes without further confirmation.
“The result: a browser-agent-driven wiper that moves critical content to trash at scale, triggered by one natural-language request from the user,” Rousseau wrote in the research blog. “Once an agent has OAuth access to Gmail and Google Drive, abused instructions can propagate quickly across shared folders and team drives.”
What makes this attack effective is its tone. The attacker email uses phrases like “take care of,” “handle this,” and “do this on my behalf,” shifting ownership to the agent and nudging it toward compliance. Rousseau found that polite, sequential instructions reduce pushback from the AI model, which treats the workflow as routine productivity work rather than a potential threat.
The attack doesn’t rely on jailbreak techniques or traditional prompt injection. Instead, it succeeds by being nice.
A separate but related threat emerged in late November when Cato Networks disclosed HashJack, a technique that hides malicious prompts in the fragment portion of legitimate URLs—specifically, the text after the “#” symbol. When AI browsers process these URLs and users ask questions, the hidden instructions feed directly into the AI assistant’s responses.
Security researcher Vitaly Simonovich, who led the Cato Networks research, found that HashJack can manipulate Perplexity’s Comet, Microsoft’s Copilot for Edge, and Google’s Gemini for Chrome. The attacks range from inserting fake callback numbers to exfiltrating user data in the background.
“HashJack is the first known indirect prompt injection that can weaponize any legitimate website to manipulate AI browser assistants,” Simonovich said. “Because the malicious fragment is embedded in a real website’s URL, users assume the content is safe while hidden instructions secretly manipulate the AI browser assistant.”
URL fragments never reach web servers or appear in network logs, making them invisible to traditional security tools. In Comet’s case, the browser can automatically fetch attacker-controlled URLs with user data appended as parameters, sending account names, transaction history, and email addresses to external servers without user interaction.
Microsoft and Perplexity responded to the HashJack disclosure with patches. Microsoft applied a fix to Copilot for Edge on October 27, and Perplexity patched Comet by November 18.
Google classified the issue as “won’t fix” and assigned it low severity, according to Cato Networks’ disclosure timeline. Google does not treat guardrail bypasses or policy-violating content generation as security vulnerabilities under its AI Vulnerability Reward Program, a company spokesperson confirmed.
Both research findings underscore a broader risk. AI browser agents operate on trust: trust that emails are benign, trust that URLs are safe, trust that natural language instructions align with user intent. That trust becomes a vulnerability when attackers craft inputs designed to exploit how these systems interpret context.
“Don’t just secure the model,” Rousseau concluded in the Straiker blog. “Secure the agent, its connectors, and the natural-language instructions it quietly obeys.”
As enterprises deploy AI copilots across email, cloud storage, and browsers, the lesson is urgent. Automation without guardrails can turn helpful assistants into silent saboteurs.