Do you think you can tell the difference between a real person and an AI-generated clone? New research finds you probably can’t.
Artificial intelligence isn’t just blurring the lines between what’s real and what’s fake; in some cases, it has completely erased them, taking with it billions of dollars from unsuspecting consumers and businesses that have fallen victim to AI-related fraud.
“The technology itself is not necessarily good or bad, it’s what we as humans do with it,” said Samir Kumar, who heads a San Francisco-based venture capital firm that focuses on finding and investing in the latest AI innovations. “Anytime you’re democratizing a capability, allowing more people to have powerful technology tools, there is a risk of it being misused, being weaponized.”
With just a few clicks, AI can create incredibly realistic video and audio in seconds. These so-called “deepfakes” can impersonate celebrities, politicians, and just about anyone else you can think of.

Free and easy-to-use websites and apps now allow just about anyone to create AI-generated content in a matter of seconds. That relative ease has resulted in a flood of fake images, video, and audio on social media, including an AI-generated news report featuring a spaceship emerging from the ocean.
“If you are unethical, you’re doing bad things, you are trying to cause financial reputational harm, you now have even more powerful tools,” Kumar said. “It’s like you have superpowers.”
While technological developments in AI-generated video have exploded just within the past few months, experts warn AI-generated audio may be a more prevalent tool in manufacturing misinformation, deceit, and fraud.
Sound waves reveal key differences between human audio and an AI clone
Audio clips saying the same words, normalized across time and overall volume
Can video or audio be used more seamlessly to create AI deepfakes?
Compared to audio, video inherently has a lot more elements to scrutinize. Sudden distortions, awkward facial expressions, and even misplaced body parts are frequent giveaways that the images you are seeing were created with AI. In addition, most AI-generated videos are viewed on social media, so skeptical users can replay clips that seem a little off or scroll through the comments to see if others have pegged the video as fake.
“You are getting a huge amount of information that can be used to say, ‘Hmmm, is this real or has this been tampered with?’” said Sarah Barrington, a researcher at the UC Berkeley School of Information who specializes in detecting deepfakes. “And that’s not the case for audio.”

Sarah Barrington is an engineer and AI researcher at the UC Berkeley School of Information.
Simply picking up a phone call, according to Barrington, could leave you vulnerable. She says the increasing risk for fraud and misinformation is why she and her colleagues have focused their research on AI-generated audio. In order to search for solutions, she argues, we first need to understand just how easily people can fall prey to AI clones.
“This explosion in capability means that we’re really worried about the bad that comes with the good,” she said. “We’re seeing scams, we’re seeing fraud, we’re seeing disinformation.”
Senior Investigative Reporter Bigad Shaban sat down with UC Berkeley researchers to find out just how difficult it has become to distinguish between what’s real and what’s AI-generated. Bigad joins Anchor Raj Mathai on NBC Bay Area Tonight to discuss the research and what it means for unsuspecting consumers who could become targets of AI-related fraud.
To find out how well adults and teenagers can distinguish real voices from AI copycats, Barrington and her fellow researchers at Berkeley put more than 600 people to the test. Those surveyed listened to pairs of audio clips, back-to-back, and had to determine whether they were recorded by the same person or if one was an AI clone created by Berkeley researchers using a website anyone can access for just $5.
“The bad news is humans are pretty bad at this,” she said. “The pace of generative AI right now is unlike anything we have ever seen.”
Barrington’s research found that more than 80% of the time, people failed to correctly identify AI deepfakes, even when explicitly warned that some of the audio they were about to hear could be AI-generated.
“So people are primed to listen for what might be a fake voice,” she said. “In the real world, that’s not going to be the case.”
Can you detect an AI clone?
Put your own AI detection skills to the test by listening to actual audio used in the Berkeley research study. For each round, listen to both the ‘A’ and ‘B’ clips and then select whether they are both recordings of the same person or if one is actually an AI clone of the other.
Billions of dollars lost to AI-related fraud
AI-related fraud is expected to cost consumers and businesses nearly $40 billion each year by 2027. That’s a 50% spike from what we are already seeing today, according to the Deloitte Center for Financial Services.
“We’re about to move into a world where we can digitally replicate anyone’s identity,” Barrington said. “That’s a scary world to live in.”
Thousands of voters in New Hampshire last year got a robocall from what sounded like then-President Joe Biden, urging them to skip the upcoming primary. “It’s important that you save your vote for the November election,” the voice said.
The caller, however, was not the president. It was an AI clone.
About 8,000 miles away, in Hong Kong, a finance worker was summoned to a video conference last year by someone he thought was his company’s chief financial officer. The employee also spotted other coworkers in the meeting, but what he didn’t know was that everyone he was seeing and hearing was fake. The only real element, according to Hong Kong police, was the $25 million he transferred to fraudsters after believing it was a legitimate request from his own boss.
Congress trying to catch up to AI
“I’m worried,” said Congressman Kevin Mullin (D-CA), who represents parts of Silicon Valley. “The technology is moving so rapidly that we are playing catch-up, but you also don’t want to put a set of rules in place that will very quickly be outdated by the advance of technology.”

Rep. Kevin Mullin (D-San Mateo) represents parts of Silicon Valley and says he is “absolutely worried” about the lack of federal regulation relating to artificial intelligence and its increasing ability to produce incredibly realistic deepfakes.
Congress has approved some new laws centering on AI, such as the “Take It Down Act,” which imposes criminal and financial penalties for so-called “revenge pornography,” in which adult material, including AI-generated material, is released without a person’s consent. More recently, the “AI Lead Act,” introduced in the U.S. Senate in September, aims to push tech companies toward better AI safety protocols by making it easier for consumers to sue when they believe they’ve been harmed by AI-generated content. The bill remains pending on Capitol Hill. Despite such legislative proposals, Congress has yet to pass substantial regulations covering the AI industry as a whole.
“You need these legal and regulatory incentives and disincentives to make sure people are not doing bad things with this new capability that we have,” Kumar said. “You’re not going to be able to completely remove it, but you can mitigate the harm that can happen with synthetic media.”

Samir Kumar founded Touring Capital, a venture capital firm that specializes in identifying and investing in the latest AI-related innovations.
Kumar believes the industry is in serious need of a new national framework for AI that would include tougher penalties for fraudsters and new requirements for tech companies to ensure all AI-generated content is embedded with some kind of digital watermark to track where it was made and how it was altered.
“It’s certainly a possibility that you could go too far in regulation and stifle innovation, and we don’t want that to happen,” Kumar said. “But I don’t think the answer is the Wild West and do whatever you want.”
Contact The Investigative Unit
submit tips | 1-888-996-TIPS | e-mail Bigad