Google's Gemini AI models have become a core component of state-sponsored hacking operations. Although AI use has been growing among both white hat and black hat hackers in recent years, Google now says it is being used in every part of the attack process, from target acquisition to coding, social engineering message generation, and follow-up actions after the hack, according to Google's latest Threat Intelligence Group report.
The first AI-powered ransomware showed up just days later, and Anthropic claimed to have foiled the first AI-powered malware attack in November. There have been just as many instances of AI itself being vulnerable to attackers, and the AI developers themselves don't exactly have perfect security records.
But Google's acknowledgment that Gemini is involved in every facet of modern hacks marks a shift. It has now tracked incidents of nation-state hacking groups using Gemini for everything, "from reconnaissance and phishing lure creation to command and control (C2) development and data exfiltration."
Hack variety
Different countries and their hacking groups are allegedly using Gemini comprehensively, the Google report explains. In China, for example, threat actors prompted Gemini to adopt an expert cybersecurity persona, then had it conduct vulnerability analysis and draw up penetration testing plans for potential targets.
“The PRC-based threat actor fabricated a scenario, in one case trialing Hexstrike MCP tooling, and directing the model to analyze Remote Code Execution (RCE), WAF bypass techniques, and SQL injection test results against specific US-based targets,” the report reads.
North Korea, on the other hand, primarily uses Gemini as part of phishing attacks, profiling high-value targets to plan out its operations. Its hackers particularly go after members of security and defence companies, and use Gemini to find vulnerable targets within their orbits.
Iran was much the same, with its government-backed hackers using Gemini to hunt for official email addresses of specific targets and to research the business partners of potential victims. They also fed Gemini biographical information to generate personas that would have a plausible reason to engage with a target.
Misinformation and propaganda
One way in which all the tracked state actors are using Gemini is in producing AI slop, but specific, targeted, and deliberate slop. Political satire, propaganda, articles, and memes and images designed to rile up Western audiences were common uses of Gemini and its generative tools.
“Threat actors from China, Iran, Russia, and Saudi Arabia are producing political satire and propaganda to advance specific ideas across both digital platforms and physical media, such as printed posters,” the report says.
Google confirmed it hadn't seen these assets deployed in the wild, suggesting this use of Gemini may still be in its nascent stages, but it took the activity seriously anyway. To mitigate any potential negative effects, Google disabled assets associated with these actors' activities, and Google DeepMind used these insights to improve its protections against misuse of Gemini services.
Gemini should now be less likely to assist in generating this kind of content.
General models still trump attack models, for now
Google's report also highlights a strong appetite among hackers for bespoke AI hacking tools. It cites one example, an underground toolkit called "Xanthorox," which is advertised as a custom AI for offensive cyber campaigns. It claims to be able to generate malware code and build custom phishing campaigns, and it's even sold as "privacy preserving" for the user.
But under the hood, Xanthorox is just an API that leverages existing general AI models like Gemini.
“This setup leverages a key abuse vector: the integration of multiple open-source AI products—specifically Crush, Hexstrike AI, LibreChat-AI, and Open WebUI—opportunistically leveraged via Model Context Protocol (MCP) servers to build an agentic AI service upon commercial models,” Google explains.
Google highlights that because these kinds of tools make large numbers of API calls to the underlying AI models, organizations with large allocations of API tokens become attractive targets for account hijacking. That is creating a black market for API keys, adding a financial incentive to steal them and putting greater emphasis on organizations securing both the keys themselves and their employees' access to AI tools.
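One practical upshot of that last point is basic key hygiene. The snippet below is a minimal, generic sketch (the GEMINI_API_KEY variable name is an assumed convention, not anything from Google's report or SDKs) of the habit it points at: reading the key from the environment or a secrets manager rather than hardcoding it somewhere a leaked repository would hand an attacker a funded token allocation.

```python
import os
import sys

# Illustrative sketch only: GEMINI_API_KEY is an assumed variable name,
# not a convention prescribed by Google's report or client libraries.

def load_api_key() -> str:
    """Read the API key from the environment instead of hardcoding it."""
    key = os.environ.get("GEMINI_API_KEY")
    if not key:
        # Fail loudly rather than falling back to a key baked into source control.
        sys.exit("GEMINI_API_KEY is not set; provide it via a secrets manager or CI secret.")
    return key

if __name__ == "__main__":
    api_key = load_api_key()
    # Hand api_key to your client library here; never log or commit the value itself.
    print(f"Loaded an API key ({len(api_key)} characters).")
```

Pairing that with per-key usage quotas and alerts on anomalous spend also makes a stolen key far less valuable to resell.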
Actual AI malware isn’t really ready yet, but it’s coming
Google did observe some actors attempting to use Gemini and other AI to augment existing malware and generate new malicious software. Although it says it hasn't seen any major breakthroughs in this area yet, it is something attackers are actively exploring and is likely to improve over time.
HonestCue is one proof-of-concept AI malware framework that uses Gemini to generate code for a second-stage payload: the malware infects a machine, then contacts Gemini to generate fresh code for the next stage of the attack. Google also noted a ClickFix campaign that used social engineering within a chatbot to encourage users to download malicious files, bypassing security measures.
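The flip side of a design like HonestCue's is that the second stage still has to phone a well-known API endpoint. As a purely illustrative sketch (the proxy log format and the host allowlist below are invented for the example, not details from Google's report), defenders can flag machines that start talking to generative AI endpoints outside an approved list:

```python
import csv

# Illustrative sketch with invented inputs: the CSV proxy log (columns
# timestamp, source_host, destination_domain) and the approved-host list
# are assumptions made for this example.
GENAI_DOMAINS = {"generativelanguage.googleapis.com"}  # public Gemini API host
APPROVED_HOSTS = {"ml-workstation-01", "ml-workstation-02"}

def flag_unexpected_ai_traffic(log_path: str) -> list[dict]:
    """Return log rows where an unapproved host contacted a generative AI API."""
    hits = []
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            if (row["destination_domain"] in GENAI_DOMAINS
                    and row["source_host"] not in APPROVED_HOSTS):
                hits.append(row)
    return hits

if __name__ == "__main__":
    for hit in flag_unexpected_ai_traffic("proxy_log.csv"):
        print(f"Review: {hit['source_host']} -> {hit['destination_domain']} "
              f"at {hit['timestamp']}")
```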
As Google tracks these attempts, and others by state actors, it continues to disable accounts, block access to assets, and update the Gemini model so it's less susceptible to these kinds of manipulations. Like traditional anti-malware defences, though, defending against AI-assisted attacks looks set to be a cat-and-mouse game that is unlikely to end any time soon.