{"id":107023,"date":"2025-10-27T18:12:19","date_gmt":"2025-10-27T18:12:19","guid":{"rendered":"https:\/\/www.newsbeep.com\/ie\/107023\/"},"modified":"2025-10-27T18:12:19","modified_gmt":"2025-10-27T18:12:19","slug":"sora-is-showing-us-how-broken-deepfake-detection-is","status":"publish","type":"post","link":"https:\/\/www.newsbeep.com\/ie\/107023\/","title":{"rendered":"Sora is showing us how broken deepfake detection is"},"content":{"rendered":"<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">OpenAI\u2019s new deepfake machine, Sora, has proven that artificial intelligence is alarmingly good at faking reality. The AI-generated video platform, powered by OpenAI\u2019s new Sora 2 model, has churned out detailed (and <a href=\"https:\/\/globalextremism.org\/post\/openais-sora-2-used-to-spread-holocaust-denial-and-glorify-hitler\/\" rel=\"nofollow noopener\" target=\"_blank\">often offensive or harmful<\/a>) videos of famous people like <a href=\"https:\/\/www.theverge.com\/news\/801539\/open-ai-sora-mlk\" rel=\"nofollow noopener\" target=\"_blank\">Martin Luther King Jr.<\/a>, Michael Jackson, and <a href=\"https:\/\/www.theverge.com\/news\/803141\/openai-sora-bryan-cranston-deepfakes\" rel=\"nofollow noopener\" target=\"_blank\">Bryan Cranston<\/a>, as well as copyrighted characters like <a href=\"https:\/\/www.404media.co\/openais-sora-2-copyright-infringement-machine-features-nazi-spongebobs-and-criminal-pikachus\/\" rel=\"nofollow noopener\" target=\"_blank\">SpongeBob and Pikachu<\/a>. Users of the app who voluntarily shared their likenesses have seen themselves shouting racial slurs or turned into fuel for fetish accounts.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">On Sora, there\u2019s a clear understanding that everything you see and hear isn\u2019t real. 
But like any piece of social content, videos made on Sora are meant to be shared. And once they escape the app\u2019s unreality quarantine zone, there\u2019s little protection baked in to ensure viewers know that what they\u2019re looking at isn\u2019t real.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">The app\u2019s convincing mimicry doesn\u2019t just run the risk of misleading viewers. It\u2019s a demonstration of how profoundly AI labeling technology has failed, including a system OpenAI itself helps oversee: <a href=\"https:\/\/www.theverge.com\/2024\/8\/21\/24223932\/c2pa-standard-verify-ai-generated-images-content-credentials\" rel=\"nofollow noopener\" target=\"_blank\">C2PA authentication<\/a>, one of the best systems we have for distinguishing real images and videos from AI fakes.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">C2PA authentication is more commonly known as \u201cContent Credentials,\u201d a term championed by Adobe, which has spearheaded the initiative. 
It\u2019s a system for attaching invisible but verifiable metadata to images, videos, and audio at the point of creation or editing, appending details about how and when it was made or manipulated.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">OpenAI is a <a href=\"https:\/\/openai.com\/index\/understanding-the-source-of-what-we-see-and-hear-online\/\" rel=\"nofollow noopener\" target=\"_blank\">steering committee member<\/a> of the <a href=\"https:\/\/c2pa.org\/\" rel=\"nofollow noopener\" target=\"_blank\">Coalition for Content Provenance and Authenticity<\/a> (C2PA), which developed the open specification alongside the Adobe-led <a href=\"https:\/\/contentauthenticity.org\/\" rel=\"nofollow noopener\" target=\"_blank\">Content Authenticity Initiative<\/a> (CAI). And in fact, C2PA information is embedded in every Sora clip \u2014 you\u2019d just probably never know it, unless you\u2019re the type to pore over some brief footnotes on a meager handful of OpenAI\u2019s blog posts.<\/p>\n<p><a class=\"kqz8fh1\" href=\"https:\/\/platform.theverge.com\/wp-content\/uploads\/sites\/2\/2025\/10\/YouTube-AI-label.jpg?quality=90&amp;strip=all&amp;crop=0,0,100,100\" data-pswp-height=\"744\" data-pswp-width=\"800\" target=\"_blank\" rel=\"noreferrer nofollow noopener\"><img alt=\"An example of the AI labels on YouTube content.\" loading=\"lazy\" decoding=\"async\" class=\"x271pn0\" src=\"https:\/\/www.newsbeep.com\/ie\/wp-content\/uploads\/2025\/10\/YouTube-AI-label.jpg\"\/><\/a><\/p>\n<p>This is the label that\u2019s supposed to appear on AI-generated or manipulated videos uploaded to YouTube Shorts, but it only applies to content around sensitive topics. Image: YouTube<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">C2PA only works if it\u2019s adopted at every step of the creation and posting process, including being clearly visible to the person viewing the output. 
In theory, it\u2019s been embraced by Adobe, <a href=\"https:\/\/www.theverge.com\/2024\/2\/6\/24063954\/ai-watermarks-dalle3-openai-content-credentials\" rel=\"nofollow noopener\" target=\"_blank\">OpenAI<\/a>, <a href=\"https:\/\/www.theverge.com\/2024\/9\/17\/24247004\/google-c2pa-verify-ai-generated-images-content\" rel=\"nofollow noopener\" target=\"_blank\">Google<\/a>, <a href=\"https:\/\/www.theverge.com\/2024\/10\/15\/24271083\/youtube-c2pa-captured-camera-label-content-credentials\" rel=\"nofollow noopener\" target=\"_blank\">YouTube<\/a>, Meta, <a href=\"https:\/\/www.theverge.com\/2024\/5\/9\/24152667\/tiktok-ai-generated-label-content-credentials-cai-c2pa\" rel=\"nofollow noopener\" target=\"_blank\">TikTok<\/a>, <a href=\"https:\/\/www.theverge.com\/2024\/9\/13\/24244219\/amazon-joins-c2pa\" rel=\"nofollow noopener\" target=\"_blank\">Amazon<\/a>, <a href=\"https:\/\/www.theverge.com\/news\/604989\/cloudflare-adobe-content-credentials-authenticty-feature\" rel=\"nofollow noopener\" target=\"_blank\">Cloudflare<\/a>, and even <a href=\"https:\/\/www.theverge.com\/2025\/1\/14\/24343788\/the-office-of-the-arizona-secretary-of-state-is-going-to-use-content-credentials-on-images\" rel=\"nofollow noopener\" target=\"_blank\">government offices<\/a>. But few of these platforms use it to clearly flag deepfake content to their users. Instagram, TikTok, and YouTube\u2019s efforts are either barely visible labels or <a href=\"https:\/\/support.google.com\/youtube\/answer\/15447836?hl=en\" rel=\"nofollow noopener\" target=\"_blank\">collapsed descriptions<\/a> that are easy to miss, and they provide very little context if you actually were to spot them. 
And on TikTok and YouTube, I\u2019ve never once encountered these labels myself while browsing, even on videos that are clearly AI-generated, presumably because uploaders stripped the metadata or never disclosed their videos\u2019 origins.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">Meta initially added a <a href=\"https:\/\/www.theverge.com\/2024\/6\/24\/24184795\/meta-instagram-incorrect-made-by-ai-photo-labels\" rel=\"nofollow noopener\" target=\"_blank\">small \u201cMade by AI\u201d tag<\/a> to images on Facebook and Instagram last year, but it <a href=\"https:\/\/www.theverge.com\/2024\/7\/1\/24190026\/meta-instagram-facebook-made-with-ai-info-label-metadata\" rel=\"nofollow noopener\" target=\"_blank\">later changed the tag to say \u201cAI Info\u201d<\/a> after photographers complained that work they edited using Photoshop \u2014 which automatically applies Content Credentials \u2014 was being mislabeled. And most online platforms don\u2019t even do that, despite being more than capable of scanning uploaded content for AI metadata.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">C2PA\u2019s creators insist they\u2019re getting closer to widespread adoption. \u201cWe\u2019re seeing meaningful progress across the industry in adopting Content Credentials, and we\u2019re encouraged by the active collaboration underway to make transparency more visible online,\u201d Andy Parsons, senior director of Content Authenticity at Adobe, said to The Verge. 
\u201cAs generative AI and deepfakes become more advanced, people need clear information about how content is made.\u201d<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">Yet after <a href=\"https:\/\/www.theverge.com\/2021\/10\/26\/22745506\/adobe-nft-art-theft-content-credentials-opensea-rarible-photoshop\" rel=\"nofollow noopener\" target=\"_blank\">four years<\/a>, that progress is still all but invisible. I\u2019ve covered CAI since I started at The Verge three years ago, and I didn\u2019t realize for weeks that every video generated using Sora and Sora 2 <a href=\"https:\/\/openai.com\/index\/launching-sora-responsibly\/\" rel=\"nofollow noopener\" target=\"_blank\">has Content Credentials embedded<\/a>. There\u2019s no visual marker that alludes to it, and in every example I\u2019ve seen where these videos are reposted to other platforms like X, Instagram, and TikTok, I have yet to see any labels that identify them as being AI-generated, let alone provide a full accounting of their creation.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">One example noted by <a href=\"https:\/\/copyleaks.com\/blog\/racist-meme-on-sora\" rel=\"nofollow noopener\" target=\"_blank\">AI detection platform Copyleaks<\/a> is a viral AI-generated video on TikTok that shows CCTV footage of <a href=\"https:\/\/www.tiktok.com\/@mohamd.jhunna\/video\/7561741102429097236\" rel=\"nofollow noopener\" target=\"_blank\">a man catching a baby<\/a> that\u2019s seemingly fallen out of an apartment window. The video has almost two million views and appears to have a blurred-out Sora watermark. 
TikTok hasn\u2019t visibly flagged that the video is AI-generated, and there are thousands of commenters questioning whether the footage is real or fake.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">If a user wants to check images and videos for C2PA metadata, the burden is almost entirely on them. They have to save and then upload a supported file into the <a href=\"https:\/\/verify.contentauthenticity.org\/\" rel=\"nofollow noopener\" target=\"_blank\">CAI<\/a> or <a href=\"https:\/\/www.theverge.com\/news\/654883\/adobe-content-authenticity-web-app-beta-availability\" rel=\"nofollow noopener\" target=\"_blank\">Adobe web app<\/a>, or they have to download and run a <a href=\"https:\/\/helpx.adobe.com\/uk\/creative-cloud\/apps\/adobe-content-authenticity\/chrome-browser-extension\/chrome-extension.html\" rel=\"nofollow noopener\" target=\"_blank\">browser extension<\/a> that will flag any online assets that have metadata with a \u201cCR\u201d icon. Similar provenance-based detection standards, such as <a href=\"https:\/\/www.theverge.com\/2023\/8\/29\/23849107\/synthid-google-deepmind-ai-image-detector\" rel=\"nofollow noopener\" target=\"_blank\">Google\u2019s invisible SynthID watermarks<\/a>, are no simpler to use.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">\u201cThe average person should not worry about deepfake detection. It should be on platforms and trust and safety teams,\u201d Ben Colman, cofounder and CEO of AI detection company <a href=\"https:\/\/www.realitydefender.com\/\" rel=\"nofollow noopener\" target=\"_blank\">Reality Defender<\/a>, told The Verge. 
\u201cPeople should know if the content they\u2019re consuming is or is not using generative AI.\u201d<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">People are already using Sora 2 to generate convincing videos of fake bomb scares, children in warzones, and graphic scenes of violence and racism. One clip <a href=\"https:\/\/www.theguardian.com\/us-news\/2025\/oct\/04\/openai-sora-violence-racism\" rel=\"nofollow noopener\" target=\"_blank\">reviewed by The Guardian<\/a> shows a Black protester in a gas mask, helmet, and goggles yelling the \u201cyou will not replace us\u201d slogan used by white supremacists \u2014 the prompt used to create that video was simply \u201cCharlottesville rally.\u201d OpenAI attempts to identify Sora\u2019s output with watermarks that appear throughout its videos, but those marks <a href=\"https:\/\/www.404media.co\/sora-2-watermark-removers-flood-the-web\/\" rel=\"nofollow noopener\" target=\"_blank\">are laughably easy to remove<\/a>.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">TikTok, Amazon, and Google haven\u2019t yet provided comment to The Verge about C2PA support. Meta told The Verge that it is continuing to implement C2PA and evaluating its labeling approach as AI evolves. OpenAI simply directed us to its scant blog posts and <a href=\"https:\/\/help.openai.com\/en\/articles\/8912793-c2pa-in-chatgpt-images\" rel=\"nofollow noopener\" target=\"_blank\">help center article<\/a> about C2PA support. 
Meta, like OpenAI, has an entire platform for its AI slop, complete with <a href=\"https:\/\/www.theverge.com\/meta\/660543\/meta-ai-app-social-feed\" rel=\"nofollow noopener\" target=\"_blank\">dedicated feeds for social<\/a> and <a href=\"https:\/\/www.theverge.com\/news\/786499\/meta-ai-vibes-feed-discover-videos\" rel=\"nofollow noopener\" target=\"_blank\">video content<\/a>, and both companies are pumping AI-generated videos into social media.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">X, which has its <a href=\"https:\/\/www.theverge.com\/2024\/1\/27\/24052841\/taylor-swift-search-blocked-x-twitter-ai-images\" rel=\"nofollow noopener\" target=\"_blank\">own controversies<\/a> regarding <a href=\"https:\/\/www.theverge.com\/report\/718975\/xai-grok-imagine-taylor-swifty-deepfake-nudes\" rel=\"nofollow noopener\" target=\"_blank\">nude celebrity deepfakes<\/a>, pointed us to <a href=\"https:\/\/help.x.com\/en\/rules-and-policies\/authenticity\" rel=\"nofollow noopener\" target=\"_blank\">its policy<\/a> that supposedly bans deceptive AI-generated media, but did not provide any information about how this is moderated beyond relying on user reports and community notes. X was notably a founding member of the CAI back when it was still known as Twitter, but pulled itself from the initiative without explanation after Elon Musk purchased the platform.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">Parsons says that \u201cAdobe remains committed to helping scale adoption, supporting global policy efforts, and encouraging greater transparency across the content ecosystem.\u201d But the honor system C2PA has relied upon so far isn\u2019t working. 
And OpenAI\u2019s position at C2PA seems hypocritical: it\u2019s creating a tool that <a href=\"https:\/\/www.theverge.com\/ai-artificial-intelligence\/788786\/openais-new-ai-sora-ios-social-video-app-will-let-you-deepfake-your-friends\" rel=\"nofollow noopener\" target=\"_blank\">actively promotes deepfakes<\/a> of real people while offering only half-baked protections against their abuse. Reality Defender reported that it managed to <a href=\"https:\/\/www.realitydefender.com\/insights\/sora-2-identity-bypass\" rel=\"nofollow noopener\" target=\"_blank\">bypass Sora 2\u2019s identity safeguards entirely<\/a> less than 24 hours after the app launched, allowing it to consistently generate celebrity deepfakes. It feels like OpenAI is using its C2PA membership as a token cover while largely ignoring the commitments it comes with.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">The frustrating thing is that as difficult as AI verification is, Content Credentials does have merit. The embedded attribution metadata can help artists and photographers be reliably credited for their work, for example, even if someone takes a screenshot of it and reposts it across other platforms. There are also supplemental tools that could improve it. Inference-based systems like Reality Defender \u2014 also a member of the C2PA \u2014 rate the likelihood that something was generated or edited using AI by scanning for subtle signs of synthetic generation. 
Such a system is unlikely to rate anything with 100 percent confidence, but it\u2019s improving over time and doesn\u2019t rely on reading watermarks or metadata to detect deepfakes.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">\u201cC2PA is a fine solution, but it is not a fine solution on its own,\u201d said Colman. \u201cIt needs to be done in conjunction with other tools, where if one thing doesn\u2019t catch it, another may.\u201d<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">Metadata can also be easily stripped. Adobe research scientist John Collomosse <a href=\"https:\/\/contentauthenticity.org\/blog\/three-pillars-of-provenance\" rel=\"nofollow noopener\" target=\"_blank\">openly admitted this on a CAI blog<\/a> last year, saying it\u2019s common for social media and content platforms to do so. Content Credentials uses image fingerprinting tech to counteract this, but all tech can be cracked, and it\u2019s ultimately unclear if there\u2019s a truly effective technical solution.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">Some companies don\u2019t seem to be trying very hard to support the few tools we have anyway. 
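The fingerprinting idea mentioned above is worth a small sketch. A fingerprint is computed from the pixels themselves rather than stored alongside them, so it survives the re-encoding and screenshotting that strips metadata. This toy average hash is a deliberately crude stand-in, not the algorithm Content Credentials actually uses.

```python
# Toy perceptual fingerprint: metadata is easy to strip, but a hash
# derived from pixel content survives lossy copies of the image.

def average_hash(pixels: list[list[int]]) -> int:
    """One bit per pixel: brighter than the image's mean, or not."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming(a: int, b: int) -> int:
    """Number of differing fingerprint bits."""
    return bin(a ^ b).count("1")

original = [[200, 40], [60, 220]]   # stand-in for a decoded grayscale image
reencoded = [[198, 42], [63, 217]]  # lossy copy with all metadata stripped
unrelated = [[10, 240], [250, 5]]

# The metadata-free copy still fingerprints identically, so a provenance
# database could re-attach its manifest; the unrelated image lands at a
# clearly larger distance.
assert hamming(average_hash(original), average_hash(reencoded)) == 0
assert hamming(average_hash(original), average_hash(unrelated)) > 0
```

In practice robust fingerprints use many more bits and tolerate small distances rather than demanding exact matches; the design choice that matters is that matching happens server-side against a registry, so credentials can be recovered even after a platform discards the metadata.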
Colman said he believes that the means for warning everyday people about deepfake content are \u201cgoing to get worse before they get better,\u201d but that we should see tangible improvements within the next couple of years.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">While Adobe is championing Content Credentials as part of the ultimate solution to address deepfakes, it knows the system isn\u2019t enough. For one, <a href=\"https:\/\/contentauthenticity.org\/blog\/durable-content-credentials\" rel=\"nofollow noopener\" target=\"_blank\">Parsons directly admitted this<\/a> in a CAI post last year, saying the system isn\u2019t a silver bullet.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">\u201cWe\u2019re seeing criticism circulating that relying solely on Content Credentials\u2019 secure metadata, or solely on invisible watermarking to label generative AI content, may not be sufficient to prevent the spread of misinformation,\u201d Parsons wrote. \u201cTo be clear, we agree.\u201d<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">And where a reactive system clearly isn\u2019t working, Adobe is also throwing its weight behind legislation and regulatory efforts to find a proactive solution. 
The company proposed that Congress establish a <a href=\"https:\/\/blog.adobe.com\/en\/publish\/2023\/09\/12\/fair-act-to-protect-artists-in-age-of-ai\" rel=\"nofollow noopener\" target=\"_blank\">new Federal Anti-Impersonation Right<\/a> (the FAIR Act) in 2023 that would protect creators from having their work or likeness replicated by AI tools, and backed the <a href=\"https:\/\/blog.adobe.com\/en\/publish\/2024\/12\/20\/an-artists-style-is-precious-lets-protect-it\" rel=\"nofollow noopener\" target=\"_blank\">Preventing Abuse of Digital Replicas Act<\/a> (PADRA) last year. Similar efforts, like the <a href=\"https:\/\/www.theverge.com\/2023\/10\/12\/23914915\/ai-replicas-likeness-law-no-fakes-copyright\" rel=\"nofollow noopener\" target=\"_blank\">\u201cNo Fakes Act\u201d<\/a> that aims to protect people from unauthorized AI impersonations of their faces or voices, have also garnered support from platforms <a href=\"https:\/\/www.theverge.com\/news\/645942\/youtube-is-supporting-the-no-fakes-act-targeting-unauthorized-ai-replicas\" rel=\"nofollow noopener\" target=\"_blank\">like YouTube<\/a>.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">\u201cWe\u2019re in good conversations with a bipartisan coalition of senators and congresspeople who actually recognize that deepfakes are an everyone problem, and they\u2019re actually working on building legislation that is proactive, not reactive,\u201d Colman said. 
\u201cWe\u2019ve relied too long on the good graces of tech to self-police themselves.\u201d<\/p>\n","protected":false},"excerpt":{"rendered":"OpenAI\u2019s new deepfake machine, Sora, has proven that artificial intelligence is alarmingly good at faking reality. 
The AI-generated&hellip;\n","protected":false},"author":2,"featured_media":107024,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[20],"tags":[220,218,219,61,60,1682,1094,80],"class_list":{"0":"post-107023","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-artificial-intelligence","8":"tag-ai","9":"tag-artificial-intelligence","10":"tag-artificialintelligence","11":"tag-ie","12":"tag-ireland","13":"tag-openai","14":"tag-report","15":"tag-technology"},"_links":{"self":[{"href":"https:\/\/www.newsbeep.com\/ie\/wp-json\/wp\/v2\/posts\/107023","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.newsbeep.com\/ie\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.newsbeep.com\/ie\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/ie\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/ie\/wp-json\/wp\/v2\/comments?post=107023"}],"version-history":[{"count":0,"href":"https:\/\/www.newsbeep.com\/ie\/wp-json\/wp\/v2\/posts\/107023\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/ie\/wp-json\/wp\/v2\/media\/107024"}],"wp:attachment":[{"href":"https:\/\/www.newsbeep.com\/ie\/wp-json\/wp\/v2\/media?parent=107023"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.newsbeep.com\/ie\/wp-json\/wp\/v2\/categories?post=107023"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.newsbeep.com\/ie\/wp-json\/wp\/v2\/tags?post=107023"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}