
These new AI attacks are here to stay — what to know.
dpa/picture alliance via Getty Images
Google told us this would happen. AI updates coming to our browsers and platforms are a honeypot for attackers. Government agencies fear “it may be worse” than anything similar seen before. And now there’s a new warning for Gmail’s 2 billion users: “If an attacker can influence what AI reads, they can influence what AI does.”
When Google came under fire for allegedly training AI on Gmail user data, I warned that the primary issue is AI accessing the data, not whether it trains on it. Google denies AI training takes place. But if you enable the updates, Google’s AI sees all your data.
ForbesIsrael Issues Chilling Cyber Warfare Warning After Iran AttacksBy Zak Doffman
The new warning comes from Noma. A vulnerability that “allowed attackers to access and exfiltrate corporate data using a method as simple as a shared Google Doc, a calendar invitation, or an email. No clicks were required from the targeted employee. No warning signs appeared. And no traditional security tools were triggered.”
Dubbed GeminiJack, this report didn’t get the mainstream pick-up it deserved. Google worked with Noma and fixed this particular issue — it’s no longer a risk. But this unwinnable game of shutting stable doors after AI horses have bolted continues.
“GeminiJack highlights an important reality; as organizations adopt AI tools that can read across Gmail, Docs, and Calendar, the AI itself becomes a new access layer.” Noma warns “this type of attack will not be the last one of its kind. It reflects a growing class of AI-native vulnerabilities that organizations must prepare for now.”
There’s nothing overly complex about prompt injection attacks. If users don’t read emails or invites or web pages, then an attack can hide prompts designed to be picked up by the AI assistant we have asked to read all that material instead.
“GeminiJack allowed attackers to steal sensitive corporate information by embedding hidden instructions inside a shared or externally contributed document.” Hide a prompt in a budgeting document, and “when any employee performed a standard search in Gemini Enterprise such as ‘show me our budgets’, the AI automatically retrieved the poisoned document and executed the instructions.”
ForbesIf Telegram Is On Your iPhone Or Android Phone, Secure Your Account NowBy Zak Doffman
The U.K.’s National Cyber Security Centre (NCSC) warns users to treat AI assistants as employees, not trusted, private tech. If you wouldn’t trust a human assistant with all your passwords and financial details, and with open access to all your emails, then don’t trust an AI assistant either — at least not without applying some proactive oversight.
Noma says “while Google has addressed this specific issue, the broader category of indirect prompt injection attacks requires continued attention.” Whether this is Google’s product suite at work or the platforms you use at home, just be sure to decide carefully before accepting or enabling each new AI update.