There’s a peculiar tint to the modern digital landscape: everything is somehow both the same as it’s always been, and yet entirely different. We still use Google, but we get a handy AI summary up top. We still get phished, but it’s being done to us by AI. On this latter point, Microsoft’s 2025 Digital Defense Report (PDF warning) points out that AI-driven phishing is now actually 4.5 times as successful at getting users to click malicious links as standard attempts (via The Register).

More specifically, “AI-automated phishing emails achieved 54% click-through rates compared to 12% for standard attempts” because “AI enables more targeted phishing and better phishing lures.” The bulk of the report’s data covers Microsoft’s fiscal year 2025, which ran from July 1, 2024 to June 30, 2025.

In addition, “AI automation has the potential to increase phishing profitability by up to 50 times by scaling highly targeted attacks to thousands of targets at minimal cost. This massive return on investment will incentivise cyber threat actors who aren’t yet using AI to add it to their toolbox in the future.”


Phishing is the attempt to trick people into clicking malicious links or downloading malicious files by pretending to be legitimate. For instance, it might be an email pretending to be from your employer, trying to get you to download an infected file that’s disguised as an innocent presentation or spreadsheet. Or it might send you to a website that will ask for your details.

Microsoft explains that AI can “automate phishing campaigns, generate deepfakes, and craft highly convincing fraudulent messages.” That makes sense, because AI has developed to the point where it can craft exploits and attacks on par with what a very intelligent and knowledgeable bad actor could produce.


These phishing stats just point towards a more general—and, of course, expected—trend towards AI being used for nefarious purposes, not just for phishing:

“We’re witnessing adversaries deploy generative AI for a variety of activities, including scaling social engineering, automating lateral movement, engaging in vulnerability discovery, and even real-time evasion of security controls. Autonomous malware and AI-powered agents are now capable of adapting their tactics on the fly, challenging defenders to move beyond static detection and embrace behavior-based, anticipatory defense.”

It can be easy to jump on the anti-AI bandwagon upon hearing things like this—and I’m no stranger to such sentiment—but I’m conscious that I’m hearing about this on the same day I’m hearing that AI has discovered a promising new cancer treatment method. Pros and cons, as always.

Plus, there’s the fact that AI is used to help defend from cyber attacks these days. I suppose that’s just what happens in an arms race, though; the neorealist in me sees such tit-for-tat escalations as inevitable to maintain equilibrium between different states and powers.

The good news is that it doesn’t seem there’s much different, in principle, that we should be doing—just ramping up more of the same. For instance, Microsoft says that “no matter how much the cyber threat landscape changes, multifactor authentication (MFA) still blocks over 99% of unauthorized access attempts, making it the single most important security measure an organization can implement.”

Of course, MFA might do little to stop you from falling for a phishing attack in the first place. On that front, though, Microsoft’s recommendations are again more and better implementations of the same defences we’re used to: inbox filters, restrictions on external communications, limits on remote access tools, user education, and keeping an eye out for common patterns of attack behaviour.
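To make the inbox-filter idea concrete, here’s a minimal, illustrative sketch of the kind of crude lure-pattern scoring a filter might start from. This is purely my own toy example, not anything from Microsoft’s report; the phrase list and scoring are assumptions, and real filters rely on far richer signals (sender reputation, URL reputation, machine-learning classifiers).

```python
import re

# Toy lure phrases commonly seen in phishing emails (illustrative, not exhaustive).
SUSPICIOUS_PHRASES = [
    "verify your account",
    "urgent action required",
    "password will expire",
    "click here immediately",
]

def phishing_score(subject: str, body: str) -> int:
    """Count crude lure signals in an email; higher means more suspicious."""
    text = f"{subject} {body}".lower()
    score = sum(phrase in text for phrase in SUSPICIOUS_PHRASES)
    # Links pointing at a raw IP address instead of a domain are a classic red flag.
    if re.search(r"https?://\d{1,3}(\.\d{1,3}){3}", text):
        score += 2
    return score

def is_suspicious(subject: str, body: str, threshold: int = 2) -> bool:
    """Flag the email if it accumulates enough lure signals."""
    return phishing_score(subject, body) >= threshold
```

The point of a threshold rather than a single hard rule is that any one signal alone (an urgent subject line, say) is common in legitimate mail; it’s the accumulation of signals that filters act on.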
