Google's Gemini AI models have become a core component of state-sponsored hackers' attack toolkits. Although AI use in both white hat and black hat hacking has grown in recent years, Google now says Gemini is being used across the entire attack process, from target acquisition to coding, social engineering message generation, and follow-up actions after a breach, as outlined in the latest Google Threat Intelligence Group report.

Slowly, then all at once

AI-powered ransomware showed up just days later, and Anthropic claimed to have foiled the first AI-powered malware attack in November. There have also been plenty of instances of AI itself proving vulnerable to attackers, and the AI developers' own security records are far from perfect.


But Google's acknowledgement that Gemini is involved in every facet of modern hacks marks a paradigm shift. The company has now tracked incidents of nation-state hacking groups using Gemini for everything, “from reconnaissance and phishing lure creation to command and control (C2) development and data exfiltration.”

security and defence companies, and attempt to find vulnerable targets within their orbits.

Iran was much the same, with its government-backed hackers using Gemini to search for the official email addresses of specific targets and to research the business partners of potential targets. By feeding Gemini biographical information, they also generated personas that would have a plausible reason to engage with a target.