Cyber security firm Norton has warned that New Zealanders face a growing wave of scams driven by artificial intelligence and deepfake technology, as criminals adapt old fraud tactics to new tools.

The company said AI systems now underpin a range of voice, video, web and messaging scams. It said these techniques make fraud attempts more convincing and harder for individuals and businesses to spot.

Data from the National Cyber Security Centre shows New Zealanders are losing more than NZ$5 million per quarter to scams and fraud. Direct losses reached NZ$5.7 million in the most recent reporting period. Norton’s recent Gen Threat Report found hundreds of thousands of AI-generated scam websites appeared worldwide this year.

Michal Salát, a threat research expert at Norton, said AI has changed both the scale and style of online crime.

“AI tools are advancing at lightning speed, making everyday life more efficient and creative. But scammers are always close behind, weaponising the same technology to trick, manipulate, and steal,” said Salát.

Norton has set out five main ways it says scammers have used AI and deepfakes in 2025. The list covers voice cloning, AI-built phishing sites, AI-driven romance fraud, upgraded business email compromise, and fake celebrity endorsements.

Deepfake voices

Norton said voice cloning had become a mainstream risk this year. Cheap and accessible tools can now recreate someone’s voice using only a few seconds of recorded speech.

Scammers then place calls that sound like a family member, friend, or company representative. They often claim an emergency and demand fast action.

Recent information from BNZ, cited by Norton, suggests voice cloning is now one of the top AI-related scam concerns for New Zealanders. The bank has warned that callers can convincingly mimic the voices of trusted people during fraudulent conversations.

Norton said the method works because it exploits fear and urgency. Emotional pressure can override a person’s usual checks, especially if the caller appears to be a close relative in distress.

It urged people to verify any urgent request through a known phone number or separate contact channel. It also recommended that families agree a “safe word” for emergencies and pause before acting on pressure in a call.

AI-built phishing sites

Norton researchers reported a sharp rise in fake websites created with AI site-building tools. They said criminals prompt these systems to copy the appearance and layout of banks, delivery firms and technology brands.

The resulting pages can closely match the design and branding of the real organisations. They can also include chat or support features that appear genuine.

According to Norton telemetry, web skimming attempts increased quarter over quarter. These are attacks where criminals inject malicious code into online checkout pages and capture card details and billing data.

Norton said New Zealand registered a 416% increase in such web skimming attempts this year. It also observed more than 580 new malicious AI-generated websites every day worldwide.
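Web skimming of the kind described above relies on a checkout page quietly loading a script from an attacker-controlled domain. As an illustrative sketch only (not a Norton tool, and with invented domain names), a site owner could audit which hosts a page pulls scripts from and flag any outside an expected allowlist:

```python
from html.parser import HTMLParser
from urllib.parse import urlparse


class ScriptAudit(HTMLParser):
    """Collect the host names of all external <script src=...> tags on a page."""

    def __init__(self):
        super().__init__()
        self.hosts = []

    def handle_starttag(self, tag, attrs):
        if tag == "script":
            src = dict(attrs).get("src")
            if src:
                host = urlparse(src).netloc
                if host:  # ignore inline and relative scripts
                    self.hosts.append(host)


def unexpected_scripts(html, allowed_hosts):
    """Return the set of script hosts that are not on the allowlist."""
    parser = ScriptAudit()
    parser.feed(html)
    return {h for h in parser.hosts if h not in allowed_hosts}


# Hypothetical checkout page: one legitimate CDN, one injected skimmer.
page = """<html><body>
<script src="https://cdn.shop.example/app.js"></script>
<script src="https://evil-cdn.example/skim.js"></script>
</body></html>"""

print(unexpected_scripts(page, {"cdn.shop.example"}))
```

Real skimming defences (Content Security Policy headers, subresource integrity) are more robust; this sketch only shows why an injected third-party script stands out once you know what a page is supposed to load.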

The firm said simple spelling tricks in URLs, known as typosquatting, remained common. It cited examples such as “coiinbase” in place of the cryptocurrency exchange Coinbase.
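Typosquats like "coiinbase" sit only a character or two away from the genuine name, which is why they can often be caught mechanically by measuring edit distance against known brands. A minimal illustrative check (not a Norton product; the function names are invented for this sketch):

```python
def levenshtein(a, b):
    """Edit distance between two strings via the standard dynamic programme."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]


def looks_typosquatted(domain, known_brands, max_dist=2):
    """Return the brand a domain is suspiciously close to, if any.

    An exact match (distance 0) is the real site, so it is not flagged.
    """
    for brand in known_brands:
        d = levenshtein(domain, brand)
        if 0 < d <= max_dist:
            return brand
    return None


print(looks_typosquatted("coiinbase.com", ["coinbase.com"]))  # flags the lookalike
```

Security software that blocks phishing domains uses far richer signals (registration age, homoglyphs, reputation feeds), but the same closeness-to-a-brand idea underlies many of those checks.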

It advised users to check web addresses closely, avoid links in unsolicited messages, and use official apps or saved bookmarks. It also recommended multi-factor authentication on accounts and security software that blocks known phishing domains.

AI romance fraud

Norton said romance and friendship scams had evolved through the use of AI chatbots and deepfake imagery. Fraudsters now run long, fluent conversations across multiple time zones with minimal effort.

Chatbots can hold consistent discussions and maintain a believable persona over weeks or months. Some scammers then introduce deepfake videos of an invented partner as supposed proof of identity.

In New Zealand, Norton said AI-driven scams increasingly use manipulated images and personal data. It said this makes deception feel direct and tailored to each target.

Avast researchers, cited by Norton, reported that the risk of sextortion scams in New Zealand rose by 137% in early 2025. These scams use AI-generated deepfake images and highly targeted messages that draw on details from past data breaches.

Victims often receive claims that a criminal holds explicit material and will share it unless they pay. Norton said the inclusion of accurate personal details makes these threats more believable and harder to ignore.

It advised people to treat any online partner who avoids meeting in person with caution. It also said users should watch for patterns of small financial requests that grow over time and use reverse image searches to check profile photos.

Corporate deepfakes

Business email compromise has also shifted. Norton said criminals now combine traditional phishing with AI-generated audio and video.

Attackers clone the voice of a senior executive using samples from public speeches, earnings calls or media appearances. They then place calls or arrange video-style meetings that appear legitimate.

Norton referenced a reported case involving advertising group WPP. The Guardian reported that scammers cloned the CEO’s voice and used it during a fake Teams-style call. The caller instructed staff to share access credentials and move funds under a plausible story.

The attempt did not lead to a large confirmed loss. Norton said it still showed how blended email, audio and video can make spoofed instructions harder for staff to question.

The company urged businesses to enforce out-of-band checks for any unusual payment or data request. It said firms should adopt dual approval for high-value transfers and provide training so employees feel able to pause and verify instructions, even when they appear to come from senior leaders.

Fake celebrity pitches

Norton also highlighted fake celebrity endorsement and investment scams. These use deepfake videos of public figures promoting unregulated products or schemes.

The firm said such videos spread quickly on social media platforms. It said users often share clips before moderators or fact-checkers can act.

In 2025, Norton said multiple deepfake videos of Elon Musk circulated on YouTube and X, formerly Twitter. These videos promoted fraudulent cryptocurrency giveaways.

Victims believed they were sending funds to Musk’s team. They instead lost money to the scammers behind the sites and wallet addresses in the clips.

Norton said similar scams now feature actors, athletes and local influencers. It said the combination of urgent offers and a familiar face exploits both trust in authority and fear of missing out.

The firm said users should confirm endorsements on official websites or verified accounts and treat any promise of guaranteed returns with scepticism. It also encouraged people to report suspicious videos so platforms can remove them faster.

Salát said AI has not created new motives for fraud but has changed the way criminals operate. “AI has supercharged old scams with new tricks, making them faster, more convincing, and more scalable than ever before,” said Salát.

Norton said awareness, scepticism and layered digital safeguards remain central defences as AI-driven scams continue to develop in 2026.