Artificial Intelligence & Machine Learning, Cyberwarfare / Nation-State Attacks, Fraud Management & Cybercrime
China, Iran, North Korea Hackers Exploit Gemini Across Attack Life Cycle
Rashmi Ramesh (rashmiramesh_) • February 13, 2026

Image: Thrive Studios ID/Shutterstock
State-backed hackers weaponized Google’s artificial intelligence model Gemini to accelerate cyberattacks, using the productivity tool as an offensive asset for reconnaissance, social engineering and malware development.
Hackers backed by China, Iran and North Korea used Gemini to profile targets, generate phishing messages, troubleshoot code and research vulnerabilities, according to the latest Google Threat Intelligence Group report, published Thursday. The company said it has disabled accounts associated with the abuse and implemented new defenses.
The findings mark a shift in how nation-state actors integrate AI into operations. Large language models have not produced breakthrough capabilities that meaningfully alter the threat landscape, but AI has become what Google describes as an essential tool for technical research and generating phishing lures.
A Chinese threat actor tracked as APT31 employed a structured approach, prompting Gemini with an expert cybersecurity persona to automate vulnerability analysis. The group fabricated scenarios and directed the model to analyze remote code execution, web application firewall bypass techniques and SQL injection test results against specific targets in the United States.
Another Chinese actor, tracked as UNC795, relied on Gemini to troubleshoot code and to create an AI-integrated code auditing capability.
The Iranian government-backed actor tracked as APT42 used Gemini to augment reconnaissance and targeted social engineering. The group used the model to search for official email addresses and to conduct reconnaissance on potential business partners. APT42 also provided Gemini with target biographies and asked the model to craft personas designed to elicit engagement from targets.
North Korean threat activity that Google designates UNC2970 used Gemini to synthesize open-source intelligence and profile high-value targets. The actor searched for information on major cybersecurity and defense companies and mapped technical job roles and salary information.
An unattributed threat actor tracked as UNC6418 used Gemini to conduct targeted intelligence gathering, seeking sensitive account credentials and email addresses. Shortly afterward, Google observed the actor targeting those same accounts in a phishing campaign focused on Ukraine.
Cybercriminals without nation-state backing also showed increased interest in AI tools. Google identified ClickFix campaigns in which threat actors abused the public sharing feature of AI services, including Gemini, to host deceptive social engineering content. The campaigns, first observed in early December 2025, attempted to trick users into installing malware.
Google identified new malware families experimenting with AI integration. A framework tracked as HonestCue uses Gemini’s application programming interface: the malware sends a prompt to the API and receives C# source code as the response. A secondary stage then uses the legitimate .NET CSharpCodeProvider class to compile and execute that payload directly in memory.
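For illustration, the snippet below sketches only the prompt-in, code-out API interaction the report attributes to HonestCue, using the publicly documented google-generativeai Python SDK. The model name, prompt and key handling are assumptions for the example, and the in-memory compile-and-execute stage described in the report is not reproduced.

```python
# Minimal sketch of a prompt-in, code-out Gemini API interaction using the
# publicly documented google-generativeai Python SDK. The model name, prompt
# and key handling are illustrative assumptions, not details from the report.
import os
import google.generativeai as genai

genai.configure(api_key=os.environ["GEMINI_API_KEY"])
model = genai.GenerativeModel("gemini-1.5-flash")

# A single generate_content() call returns plain text; the report describes
# HonestCue requesting C# source this way and compiling it in a later stage
# (that later stage is deliberately not shown here).
response = model.generate_content(
    "Write a small C# console program that prints the current UTC time."
)
print(response.text)  # the returned source code arrives as ordinary text
```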
A phishing kit tracked as CoinBait appears to have accelerated its construction using AI code generation tools. The kit masquerades as a major cryptocurrency exchange for credential harvesting. Google assesses with high confidence that a portion of this activity overlaps with UNC5356, a financially motivated threat cluster.
An examination of the samples indicates the kit was built using Lovable AI. A key indicator of large language model use is the presence of verbose logging messages in the source code, consistently prefixed with “Analytics:”.
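That logging prefix gives defenders a simple triage heuristic. The sketch below is a hypothetical example rather than anything from the report: it scans a dump of kit source files for “Analytics:”-prefixed strings, and the directory layout and file extensions are assumptions.

```python
# Illustrative triage sketch: flag source files containing log strings
# prefixed with "Analytics:", the indicator Google cites for CoinBait samples.
# The directory name and file extensions are assumptions for the example.
import re
from pathlib import Path

ANALYTICS_LOG = re.compile(r"""["']Analytics:""")  # e.g. console.log("Analytics: ...")

def flag_suspect_files(root: str) -> list[Path]:
    """Return files under `root` whose source contains 'Analytics:'-prefixed strings."""
    hits = []
    for path in Path(root).rglob("*"):
        if path.suffix not in {".js", ".ts", ".jsx", ".tsx", ".html"}:
            continue
        text = path.read_text(errors="ignore")
        if ANALYTICS_LOG.search(text):
            hits.append(path)
    return hits

if __name__ == "__main__":
    for suspect in flag_suspect_files("./phishing_kit_dump"):
        print(f"possible LLM-generated logging in: {suspect}")
```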
Beyond direct misuse, Google identified and disrupted model extraction attacks, also known as distillation attacks. These occur when an adversary uses legitimate access to systematically probe a machine learning model and extract information that can be used to train a new model. Adversaries use knowledge distillation to take information from one model and transfer it to another, effectively amounting to intellectual property theft.
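As a rough illustration of the underlying concept, not of Google’s detections or any attacker’s tooling, the sketch below shows the core of knowledge distillation: a student model is trained to match a teacher model’s softened output distribution, expressed here as a simple KL-divergence loss over invented numbers.

```python
# Generic sketch of the knowledge-distillation idea behind model extraction:
# a "student" model is trained to match a "teacher" model's soft output
# distribution. The numbers below are invented for illustration only.
import numpy as np

def softmax(logits: np.ndarray, temperature: float = 1.0) -> np.ndarray:
    """Convert raw scores into a probability distribution, softened by temperature."""
    z = logits / temperature
    z = z - z.max()          # numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(teacher_logits, student_logits, temperature: float = 2.0) -> float:
    """KL divergence between the teacher's and student's softened distributions."""
    p = softmax(np.asarray(teacher_logits, dtype=float), temperature)
    q = softmax(np.asarray(student_logits, dtype=float), temperature)
    return float(np.sum(p * np.log(p / q)))

# The student is nudged toward whatever distribution the teacher exposes:
teacher = [4.0, 1.0, 0.5]   # e.g. behavior probed via many API queries
student = [2.0, 1.5, 1.0]   # the copy being trained
print(distillation_loss(teacher, student))
```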
Google DeepMind and the Threat Intelligence Group identified model extraction attacks emanating from researchers and private sector companies globally. In one case, more than 100,000 prompts attempted to coerce Gemini into outputting full reasoning processes. The breadth of questions suggested an attempt to replicate Gemini’s reasoning ability in non-English target languages.
Model extraction attacks do not represent a risk to average users. The risk is concentrated among model developers and service providers.
Google also documented underground marketplaces attempting to offer AI services designed for malicious activity. A toolkit called Xanthorox advertised itself as a custom AI built for offensive cyber operations, but the investigation revealed that it is actually powered by several third-party and commercial AI products, including Gemini.
Google said it has disabled accounts and infrastructure tied to the malicious activity and implemented targeted defenses in Gemini’s classifiers.
The report said that financially motivated threat actors continue to experiment but have not yet made breakthroughs in developing AI tooling. Google has not observed advanced persistent threat or information operations actors achieving breakthrough capabilities that fundamentally alter the threat landscape.