Cybercriminals are using generative artificial intelligence and synthetic media tools to scale phishing, vishing and callback scams into high-volume, high-precision operations that are harder to spot and costlier to contain.
The result is billions of dollars lost to fraudsters, according to the FBI’s 2024 Internet Crime Report, the most recent edition available. The agency logged 859,532 complaints and $16.6 billion in losses in 2024, a 33% increase over 2023. Phishing and spoofing were the most commonly reported types of online crime, with 193,407 incidents.
AI-Fueled Trust Exploitation
Social engineering succeeds when attackers exploit human trust, and AI is making that exploitation faster, cheaper and more convincing. An Oct. 16 Kaufman Rossin analysis warned that fraudsters increasingly rely on vishing, a form of phishing conducted over voice calls rather than email.
“Vishing attacks use social engineering techniques to impersonate legitimate callers, such as bank representatives, tech support agents or government officials, in order to trick victims into sharing sensitive information, such as login credentials or credit card numbers,” the analysis said.
These tactics blur the boundary between genuine correspondence and deception.
Meanwhile, “boss scams,” in which criminals impersonate managers and pressure staff to buy gift cards, frequently target new employees. By mining data from social media posts, attackers build credibility and exploit human psychology before IT systems can intervene.
AI-generated voices are now “indistinguishable from genuine ones” in controlled listening tests, according to an Oct. 6 report, enabling more persuasive vishing and callback scams.
A Consumer Reports investigation found that some commercial voice cloning tools can create convincing replicas with minimal safeguards.
These advances make deception scalable. Fake interactive-voice-response systems powered by generative AI can now mimic authentic bank or tech support lines, adjusting tone and prompts based on the victim’s replies. The FBI’s report said “cyber-enabled fraud” accounted for 83% of total losses in 2024, representing about $13.7 billion across 333,981 complaints, underscoring how trust exploitation has become a defining feature of financial cybercrime.
From Awareness to Resilience
As attackers industrialize persuasion, enterprises are shifting from awareness to layered resilience. Experts advise enforcing multifactor authentication, vaulting credentials, encrypting communications and deploying anomaly detection systems that flag irregular patterns invisible to humans. The Financial Services Information Sharing and Analysis Center recommended using AI-driven analytics to identify deviations in transaction behavior before funds move.
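To make that kind of screening concrete, the sketch below shows one simple way transaction anomaly detection can work: baseline an account’s recent transaction history, then hold any transfer for review if it is a statistical outlier or goes to an unfamiliar payee. This is a minimal illustration of the general technique, not FS-ISAC’s or any vendor’s actual system; the function name `flag_transaction`, the thresholds and the sample data are all invented for the example.

```python
# Minimal sketch of transaction anomaly screening (illustrative only;
# not any vendor's actual system). All names and thresholds are invented.
from statistics import mean, stdev

def flag_transaction(history: list[float], amount: float,
                     payee: str, known_payees: set[str],
                     z_threshold: float = 3.0) -> list[str]:
    """Return reasons to hold a transfer for review before funds move."""
    reasons = []
    if payee not in known_payees:
        reasons.append("first payment to this payee")
    if len(history) >= 10:  # need enough history to form a baseline
        mu, sigma = mean(history), stdev(history)
        if sigma > 0 and (amount - mu) / sigma > z_threshold:
            reasons.append(
                f"amount is {(amount - mu) / sigma:.1f} std devs above baseline"
            )
    return reasons

# Example: a routine account suddenly wires a large sum to a new payee.
history = [120.0, 85.5, 99.0, 110.0, 95.0, 130.0, 88.0, 105.0, 92.0, 101.0]
print(flag_transaction(history, 9500.0, "ACME-OVERSEAS", {"PAYROLL", "RENT"}))
# -> ['first payment to this payee', 'amount is ... std devs above baseline']
```

Production systems would weigh many more signals (device, geography, velocity), but the principle is the same: catch the deviation before the money leaves.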
The National Cybersecurity Center of Excellence at NIST encouraged organizations to stress-test incident response playbooks under simulated AI-enabled phishing events, ensuring coordination across IT, compliance and finance. Meanwhile, a KnowBe4 white paper advised expanding employee training to include synthetic-voice and video-deepfake scenarios, teaching staff to verify unfamiliar requests through separate channels instead of responding directly.
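As a rough illustration of that separate-channel rule, the sketch below encodes a policy check that forces high-risk requests, and any voice request that supplies its own callback number, to be confirmed through independently sourced contact details such as the company directory of record. The request categories and names are hypothetical, written only to show the verification logic, and do not represent KnowBe4’s tooling.

```python
# Illustrative out-of-band verification rule (assumption: this mirrors the
# "verify through separate channels" guidance; all names are invented).
from dataclasses import dataclass

HIGH_RISK = {"payment_change", "gift_card_purchase",
             "credential_reset", "wire_transfer"}

@dataclass
class Request:
    category: str            # e.g. "wire_transfer"
    channel: str             # "email", "voice", "sms", ...
    callback_supplied: bool  # did the requester supply their own callback info?

def requires_out_of_band_check(req: Request) -> bool:
    """High-risk asks must be confirmed via a number from the directory
    of record -- never via contact details in the request itself."""
    if req.category in HIGH_RISK:
        return True
    # A caller supplying their own callback number is a classic
    # callback-scam tell: verify independently anyway.
    return req.channel == "voice" and req.callback_supplied

print(requires_out_of_band_check(
    Request(category="gift_card_purchase", channel="email",
            callback_supplied=False)))  # -> True
```

The design point is that verification routes around the attacker’s chosen channel: the employee initiates contact using details the organization already holds, rather than replying to whatever number or address arrived with the request.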
The PYMNTS Intelligence report “The AI MonitorEdge Report: COOs Leverage GenAI to Reduce Data Security Losses” found that 55% of large organizations have implemented AI-powered cybersecurity solutions, reporting measurable declines in fraud incidents and faster detection times. The shift reflects a growing realization that AI is both the weapon and the defense.
Kaufman Rossin recommended pre-designating escalation teams and retaining forensic experts and legal counsel.
Incident response maturity is now a board-level priority rather than a technical afterthought.
The New Front Line
For CFOs, auditors and risk executives, the battleground has moved from network perimeters to human interfaces. In payments, open banking and FinTech ecosystems, identity and trust can be breached through a single synthetic conversation. Securing digital rails remains essential, but preventing manipulation now requires verifying intent as rigorously as identity.