{"id":589531,"date":"2026-04-17T09:58:28","date_gmt":"2026-04-17T09:58:28","guid":{"rendered":"https:\/\/www.newsbeep.com\/us\/589531\/"},"modified":"2026-04-17T09:58:28","modified_gmt":"2026-04-17T09:58:28","slug":"these-startups-fight-deepfakes-by-making-deepfakes","status":"publish","type":"post","link":"https:\/\/www.newsbeep.com\/us\/589531\/","title":{"rendered":"These startups fight deepfakes by making deepfakes"},"content":{"rendered":"<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">I was unsure if my parents would notice that the voice on the other end wasn\u2019t mine \u2014 or that it was mine, sort of, but it wasn\u2019t me. The voice said hello, asked my dad how he was doing, and asked again when he didn\u2019t respond quickly enough. \u201cWhat is that, Gaby?\u201d He realized something was wrong almost immediately. I explained I had tried to trick him and it clearly hadn\u2019t worked. \u201cIt didn\u2019t,\u201d he said. \u201cIt sounded like a robot.\u201d<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">It wasn\u2019t a perfect experiment. My parents were out of the country, which made for a shoddy connection. They were having lunch with friends, and the voice couldn\u2019t deal with crosstalk or delays in the audio \u2014 it tried to fill the silences. And most importantly, the voice sounded human, but it didn\u2019t sound like me.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">The voice was generated by the deepfake detection company Reality Defender. The problem of manipulated media isn\u2019t new, but the advent of consumer-grade AI tools has made the creation of fake audio, video, and images essentially frictionless, and a number of companies have sprung up in recent years to combat it. 
Reality Defender, Pindrop, and GetReal are part of a rapidly growing deepfake detection cottage industry<a href=\"https:\/\/www.deloitte.com\/us\/en\/insights\/industry\/technology\/technology-media-and-telecom-predictions\/2025\/gen-ai-trust-standards.html\" rel=\"nofollow noopener\" target=\"_blank\"> valued at an estimated $5.5 billion<\/a> as of 2023. These startups use machine learning to identify manipulated media. To fight deepfakes, you have to be able to make them.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">The term \u201cdeepfake\u201d refers to a specific type of manipulated media that has been generated with \u201cdeep\u201d learning, but aside from the way they\u2019re made, there is no one commonality that unites all deepfakes. They have been used for fraud, harassment, and memes. Tools like Grok AI have led to a <a href=\"https:\/\/www.theverge.com\/news\/859715\/x-grok-ai-deepfakes\" rel=\"nofollow noopener\" target=\"_blank\">proliferation of nonconsensual sexual deepfakes<\/a>, including child sexual abuse material. Scammers have <a href=\"https:\/\/www.newyorker.com\/science\/annals-of-artificial-intelligence\/the-terrifying-ai-scam-that-uses-your-loved-ones-voice\" rel=\"nofollow noopener\" target=\"_blank\">cloned people\u2019s voices<\/a>, called their relatives, and had the voice say they\u2019re being held for ransom. During the 2024 election, a political strategist and a magician <a href=\"https:\/\/www.theverge.com\/2024\/5\/23\/24163411\/fcc-fine-biden-deepfake-robocalls-steve-kramer-lingo-telecom\" rel=\"nofollow noopener\" target=\"_blank\">teamed up<\/a> to create a deepfake of former President Joe Biden, which they used to discourage registered Democrats in New Hampshire from voting in the state\u2019s primary. 
The head of the Senate Foreign Relations Committee <a href=\"https:\/\/www.theverge.com\/2024\/9\/26\/24255179\/deepfake-call-ukraine-senator-cardin-dmytro-kuleba\" rel=\"nofollow noopener\" target=\"_blank\">took a Zoom call<\/a> from someone using AI to pose as a Ukrainian official. At the corporate level, <a href=\"https:\/\/www.theguardian.com\/technology\/2026\/feb\/06\/deepfake-taking-place-on-an-industrial-scale-study-finds\" rel=\"nofollow noopener\" target=\"_blank\">deepfake fraud<\/a> is now \u201cindustrial,\u201d according to one study.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">The deepfake detection industry primarily exists to address one of these problems: the issue of corporate fraud.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">Reality Defender is effectively training AI to combat AI. The company uses an \u201cinference-based model\u201d to detect deepfakes, CTO Alex Lisle told me. \u201cOur foundational model uses something called a student\/teacher paradigm. We take a bunch of real things and say, \u2018These are real,\u2019 and then a bunch of fake things and say \u2018This is fake.\u2019\u201d<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">For the fake me, we spent some time fine-tuning the voice: fiddling with the consistency, stability, and tone to make it sound more like the actual me. We could only do so much. There isn\u2019t much publicly available footage of me speaking Spanish \u2014 the language I use to communicate with my parents \u2014 aside from a single podcast interview from 2021, most of which is unusable because there\u2019s music in the background. 
But with nine seconds of audio and data scraped from years of posts, we managed to cobble together a somewhat convincing AI agent that was able to carry on a conversation with my parents, albeit an impersonal one. The English model we used on my brother was better, because we had much more training data, but even then it wasn\u2019t convincing enough.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">But family is the toughest test.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">\u201cThey know what your voice sounds like,\u201d Scott Steinhardt, the head of communications at Reality Defender, told me. Steinhardt made the deepfake with my consent and tinkered with it until it more or less sounded like me. It might not fool my family, but it\u2019d probably be good enough for, say, colleagues or corporate entities like banks.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup qnnwq2 _1xwtict9\">We\u2019ve gone the last 40,000-odd years believing our ears and eyesight, but now we can\u2019t<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">To be effective, these tools have to work quickly. Generative AI is rather slow. The model we used to call my parents sacrificed quality for speed. To get the voice to respond quickly, we had to accept lower quality all around. Text-to-speech was far better, but it took longer to generate. When we had the voice read Lucky\u2019s monologue from Waiting for Godot, it sounded almost exactly like me.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">\u201cAs a person, it\u2019s pretty challenging to not be deepfaked,\u201d Nicholas Holland, the chief product officer at Pindrop, told me. 
\u201cI think that the challenge of \u2018How do I protect my personal identity?\u2019 is something that the world hasn\u2019t figured out yet. I think \u2018How do my institutions know it\u2019s me?\u2019 is where different institutions are implementing different security layers.\u201d<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">It\u2019s also a question of resources. I don\u2019t have the funds to hire a deepfake detection company to screen my calls, but my bank does \u2014 and my bank has more to lose, in absolute terms if not relative ones. <a href=\"https:\/\/regulaforensics.com\/blog\/impact-of-deepfakes-on-idv-regula-survey\/\" rel=\"nofollow noopener\" target=\"_blank\">One 2024 survey<\/a> found that businesses have lost $450,000 per deepfake incident, with more than one firm having lost upwards of $1 million in a single fraudulent transaction.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">Some of these cases have involved scammers posing as executives, calling their subordinates, and asking them to transfer large sums of money to their accounts. Before I logged in to the call with Holland, I got a pop-up notification on Zoom:<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup ewrhy38 _1xwtict1\">This meeting is being analyzed. Pindrop Security and its third-party providers record the audio and video of your meeting to determine whether you\u2019re a real person and\/or the right person. 
By clicking \u2018Agree\u2019 below, you consent to Pindrop\u2019s collection, use and storage of the meeting and audio, your voice and face scans (which may be considered biometric information), and your IP address (to further determine your state, province or country) for the above purposes.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">My face, voice, and IP address, they assured me, would be retained for no longer than 90 days.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">Holland told me that companies are now being inundated with fake job applicants \u2014 ironically, even at Pindrop. \u201cWe\u2019re seeing a range of it. We\u2019re seeing where people are actually doing the job, maybe they work in the IT department,\u201d Holland said. \u201cWe\u2019ve had customers who have had somebody get hired, but then that person has made referrals. They\u2019ve hired two other people and it turns out to be the same person hired three times using three different voices, three different faces, three different Slack identities.\u201d<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">Typically, these aren\u2019t entirely AI-generated video personas; they\u2019re people using deepfake technology to change their own features, almost like a digital mask. There used to be a trick for detecting this: asking the person to hold three fingers in front of their face.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">\u201cThat doesn\u2019t work at all now. The AI models are so good that they can absolutely create hands, you can put hands in front of your face,\u201d Holland said. 
\u201cIt\u2019s basically imperceptible with your eyes now.\u201d<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">Lisle from Reality Defender told me that as the technology improves, attacks become less high-effort. Where scammers would once impersonate a single executive, they\u2019re now targeting employees at all levels of a company. He told me of a recent attack on a publicly traded company that he declined to name, in which the fraudster went to LinkedIn, pulled the name of every current employee, and then scraped TikTok and Facebook to create a \u201cpool of information\u201d and get a voiceprint for each of these people. Their information and voiceprints were put into an LLM, which built a context window and a map, and then \u201cscattershotted the entire company\u201d calling employees at all levels.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">\u201cIn cybersecurity, we talk about these things called \u2018trust boundaries,\u2019\u201d Lisle said. \u201cThe problem with deepfakes is that there\u2019s always this implicit trust boundary, which is seeing and hearing is believing. We\u2019ve gone the last 40,000-odd years believing our ears and eyesight, but now we can\u2019t. There are all these trust boundaries we\u2019ve never had to think about before that hackers are leveraging in interesting ways.\u201d<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">For now, this software is only aimed at big companies \u2014 they have the need, the high stakes, and the deep pockets to pay for it. But regular people don\u2019t have deepfake detection software, nor will they in the near future. 
As Holland explains it, the biggest challenge to mass adoption is awareness, since \u201cmany consumers aren\u2019t aware of the threat, so they don\u2019t know how to go find a solution \u2014 ground zero is with the businesses that serve the consumer.\u201d Pindrop doesn\u2019t have a consumer product yet, but it hasn\u2019t ruled out developing one in the future. The challenge, Holland said, is \u201cmaking these systems fast, accurate, and trustworthy enough for people to rely on in everyday moments.\u201d<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">Reality Defender has a different perspective. Steinhardt said a consumer product would create \u201can uneven and spotty playing field for people.\u201d<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">\u201cThink of it as antivirus: Whereas this used to be a thing individual people worried about (or, worse, didn\u2019t), now our browsers, email providers, internet providers, and the like are all scanning files before they hit our computer for malware,\u201d Steinhardt said. \u201cThis is our approach to deepfake detection.\u201d<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">My deepfake hadn\u2019t been able to trick my family, but I hadn\u2019t really put it to the test. For years, law enforcement agencies across the country have warned of a deepfake kidnapping scam: A parent will get a call from a very convincing voice begging for help, and then the \u201ckidnapper\u201d will demand a ransom. Even if the voice isn\u2019t entirely convincing, the crying and screaming is. I couldn\u2019t bring myself to do that to my parents, even if it was fake. 
I briefly considered other scams: I could call my bank, or maybe my health insurance provider, but the idea of locking myself out of my own accounts \u2014 or of committing actual, legitimate fraud \u2014 made me sour on the experiment. Instead, I called my brother. \u201cOh, NO,\u201d he said as soon as the voice greeted him. He hadn\u2019t been fooled either.<\/p>\n<p>Gaby Del Valle<\/p>\n","protected":false},"excerpt":{"rendered":"I was unsure if my parents would notice that the voice on the other end wasn\u2019t mine 
\u2014&hellip;\n","protected":false},"author":2,"featured_media":433156,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[45],"tags":[182,181,507,514,2853,172,74],"class_list":{"0":"post-589531","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-artificial-intelligence","8":"tag-ai","9":"tag-artificial-intelligence","10":"tag-artificialintelligence","11":"tag-privacy","12":"tag-report","13":"tag-tech","14":"tag-technology"},"_links":{"self":[{"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/posts\/589531","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/comments?post=589531"}],"version-history":[{"count":0,"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/posts\/589531\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/media\/433156"}],"wp:attachment":[{"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/media?parent=589531"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/categories?post=589531"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/tags?post=589531"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}