Abby Ham’s experience highlights the growing threat and personal impact of sexually explicit deepfake technology, as victims struggle for online protection.
KNOXVILLE, Tenn. — Sexualized artificial intelligence imagery is exploding online, with studies showing that 98% of all deepfake videos are sexually explicit. Most target women, but experts warn that virtually anyone can become a victim.
Former WBIR Anchor Abby Ham knows that firsthand.
She regularly posts makeup and fashion videos on social media for her followers. Recently, those same videos and photos were stolen, manipulated and reposted without her consent. Dozens of altered images appeared on TikTok, many of them sexually explicit and realistic enough to pass as genuine.
“I got a DM that said there’s some fake account that’s spoofing you and your images,” Ham said.
The fake content distorted her voice, altered her clothing and changed the context of her original posts. At times, Ham said the images were so realistic she questioned herself.
“There were moments where I was like, ‘Did I do that?’” she said. “They would take any picture that I had up on my page and make it provocative — shorten the skirt, lower the top. It was really disturbing.”
READ MORE: Federal lawmakers hoping to pass bill addressing AI-made ‘revenge porn’
Ham said the situation worsened when she was blocked from viewing the account, making it difficult to report. Friends who did report the page received responses from TikTok saying there were no violations.
FBI Special Agent Mark Miller, who investigates these types of crimes in the Knoxville area, said his office receives multiple complaints each week involving explicit deepfakes. He said the technology has advanced rapidly, making the content easier to create and harder to detect.
“It used to be that you needed a supercomputer,” Miller said. “Now you can do it with a device that’s simply in your pocket.”
As the technology improves, lawmakers are racing to catch up. Last year, Congress passed the Take It Down Act, making it a federal felony to distribute non-consensual intimate images, including deepfakes. The law also requires social media platforms to remove reported content within 48 hours.
“It doesn’t matter whether it’s an actual image or an artificially generated image,” Miller said.
READ MORE: Scammers using AI and fake ads to dupe weight-loss drug buyers online
Lawmakers are now pushing further. The U.S. Senate passed the Defiance Act last month, giving deepfake victims the right to sue both the creators and platforms that fail to respond. The House has not yet taken up the bill.
“They don’t take the images off the internet,” Sen. Dick Durbin (D-IL) said. “They don’t come to the rescue of the people who are victims. That’s why this legislation is critical.”
For Ham, the impact is personal.
“I have children,” she said. “I have children who could easily access this someday, and it’s embarrassing.”
Despite the experience, Ham says she will continue using social media but wants others to understand the risks.
“This could happen to you,” she said.
If you’re targeted, save all evidence, including screenshots, messages and videos. Report the content on the platform where it appears and flag the account involved. Victims can also report cases to the Federal Trade Commission or the FBI’s Internet Crime Complaint Center.
READ MORE: The rise of deepfake cyberbullying poses a growing problem for schools