People who receive an AI-generated phishing email are 4.5 times more likely to click on the malicious link or attachment, according to Microsoft.

In its annual Digital Defense Report, Redmond says these AI-automated emails achieved a 54 percent click-through rate last year, compared with 12 percent for non-AI phishes.

Not only does AI increase phishers’ chances of getting clicks, but it also potentially increases phishing profitability by up to 50 times, the report claims.

As we’ve seen in previous attacks, AI lets criminals craft more targeted phishing emails, written in victims’ local languages and built around more believable lures – and now it appears those efforts are paying dividends.

“This massive return on investment will incentivize cyber threat actors who aren’t yet using AI to add it to their toolbox in the future,” Redmond wrote in the 2025 report, calling this increase in scale and efficiency of attacks “the most significant change in phishing over the last year.”

The annual report covers Microsoft fiscal year 2025, which ran from July 2024 through June 2025.

Crims love AI, too

As most readers likely suspect, digital crime didn’t decrease during this time frame, and miscreants increased the efficiency and effectiveness of their attacks thanks to an AI boost. In addition to automating phishing emails, AI makes it easier and faster for criminals to scan for vulnerabilities and exploit them at scale, conduct reconnaissance and target individuals and organizations for social engineering attacks, and even create malware.

It also provides attackers with new tools such as voice cloning and deepfake videos, and opens up entirely new attack surfaces – such as large language models – to exploit.


And, it’s not just financially motivated criminals abusing AI. “Nation-state actors, too, have continued to incorporate AI into their cyber influence operations,” Amy Hogan-Burney, Microsoft corporate VP of customer security and trust, wrote in a blog about the digital threat report. “This activity has picked up in the past six months as actors use the technology to make their efforts more advanced, scalable, and targeted.”

Case in point: in July 2023, Microsoft documented zero samples of AI-generated content from government-backed groups. That number jumped to 50 in July 2024, climbed to about 125 as of January, and reached roughly 225 as of July.

But while the Redmond threat intelligence team found nation-state attacks remain a serious threat – with 623 such events documented in the US alone – most organizations face a more immediate risk from cybercriminals looking to make a buck off of someone’s poor security practices.

At least 52 percent of all attacks with known motives over the year were motivated by financial gain, while espionage-only attacks, usually associated with nation-state groups, made up just 4 percent.

In cases where Microsoft’s incident responders could determine the attackers’ objectives, 37 percent involved data theft, 33 percent involved extortion, 19 percent involved attempted destructive attacks or human-operated ransomware, and 7 percent were for infrastructure building, in which criminals compromise organizations’ systems to stage future attacks.

ClickFix surge

Another newish attack method that took off over the 12-month report period is ClickFix, a social-engineering technique that tricks users into executing malicious commands on their own machines, often under the guise of legitimate fixes or prompts, thereby bypassing conventional phishing defenses.

This surge began in November 2024, and both cybercriminal and nation-state gangs have used ClickFix attacks to deliver infostealers, remote access trojans, backdoors, and other malware to victims’ environments.

“ClickFix was the most common initial access method that Microsoft Defender Experts observed in Defender Expert notifications in the last year, accounting for 47 percent of attacks,” according to the report.

For comparison: the second most common initial access method – phishing – featured in 35 percent of attacks.

This also illustrates what Microsoft describes as a “sharp change in how threat actors achieve initial access” compared to previous years. Criminals are logging in, not breaking in, and instead of relying on “simple phishing,” they are employing “multi-stage attack chains that mix technical exploits, social engineering, infrastructure abuse, and evasion through legitimate platforms.”

One example combined email bombing, voice-phishing calls, and Microsoft Teams impersonation to let the attacker convincingly pose as IT support and gain remote access. 

Email bombing involves signing a victim’s email address up for thousands of newsletters and online services, flooding their inbox with subscription messages and burying critical alerts such as password or multi-factor authentication resets and fraud warnings.

“This year, email bombing evolved from being used as a smokescreen to being used as a first-stage attack vector in a broader malware delivery chain,” the report noted. 

“Email bombing is now often used as a precursor to vishing or Teams-based impersonation, where the attacker contacts the target posing as IT support and offering to resolve the issue,” it continued. “Once trust is established, targets are guided into installing remote access tools, enabling attackers to gain hands-on-keyboard control, deploy malware, and maintain persistence.” ®