In case you missed it, April 2 was the 10th annual International Fact-Checking Day. The choice of date is intentional – the day after April Fools’ Day, that annual homage to lying, or at least to the fun, harmless kind of lies. (Sorry if you clicked!) It makes sense that the following day would be dedicated to tuning in to our internal lie detectors – a chance to hone our critical thinking and discover new resources and tools to help us in our quest to separate fact from fiction.

Yet, given the extraordinary rise in AI-generated digital content, a single day seems quaint, like a single day of exercise after months of dietary overindulgence. This is particularly true in the context of the United States–Israel war with Iran, during which people have used AI to generate vast amounts of misinformation, often in the form of images and videos, with the goal of making money via the “misinformation economy.” Unfortunately, these types of images can be particularly powerful persuasive tools. We’re far less skeptical when we “see” something with our own eyes, or at least when we think we do (Seo, 2020).

This can be particularly dangerous in life-or-death arenas like war. Just in the last week or so, a range of news outlets, including the BBC, have called the explosion of AI-driven misinformation related to the U.S.–Israel war with Iran “unprecedented.” Digital media expert Timothy Graham told the BBC: “What used to require professional video production can now be done in minutes with AI tools. The barrier to creating convincing synthetic conflict footage has essentially collapsed.”

A Reality Check

Sofia Rubinson, a senior editor at NewsGuard, an organization that rates the reliability of global news sources, seems to concur with that “unprecedented” descriptor. Recently, on the podcast Question Everything, Rubinson said that the “sheer volume of fake videos and photos that were being spread online” marks an increase over the past. Asked whether this level of AI video is new, she said, “I definitely think so.”

In addition to its reports on the reliability of news sources, NewsGuard posts a weekly false claim in its Reality Check newsletter, choosing claims that have gone viral and are especially likely to cause harm. The team recently investigated the claim that a video of Israeli Prime Minister Benjamin Netanyahu drinking coffee at a coffee shop was a deepfake. The video was posted as “proof of life” in response to false claims that he was dead, yet many people, including a number of amateur online sleuths, insisted it was a deepfake. Those accusations circulated widely, so the claim met both NewsGuard criteria: it went viral, and it had the capacity to cause harm.

Flawed AI Detectors

There are tools that any of us can use to assess whether videos are real or AI-generated, including Grok, which is integrated into X. But these tools are not always accurate. As Rubinson says, “Grok, the AI account, is actually one of the biggest spreaders of false claims on this platform.” She continues, “X, the platform, does not purport that their model is able to accurately fact-check false claims or that it’s able to accurately detect when something is AI-generated or not.” Despite that, Grok’s responses can feel like a definitive verdict.

A better AI detector, from the company Hive, runs tests on videos to determine if they are AI-generated. Hive is more accurate than many other detectors, but in the case of the Netanyahu video, it still determined that there was an over 95% likelihood that the Netanyahu video was AI-generated. Hive was wrong in this case – the video was real – an indication that even the best AI detectors are not flawless. Rubinson says that NewsGuard does use Hive to make an initial assessment, but never relies solely on one platform to determine whether a video is AI-generated.
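NewsGuard’s actual workflow is human fact-checking, not code, but the principle above – treat any single detector’s score as one signal, never as a verdict – can be sketched in a few lines. Everything here is hypothetical: the detector names, the 95% threshold, and the two-detector quorum are illustrative assumptions, not anyone’s real pipeline.

```python
# Hypothetical sketch: combine several AI-detection scores instead of
# trusting one tool's verdict. Names and thresholds are illustrative only.

def combined_verdict(scores, high=0.95, quorum=2):
    """Flag a clip as likely AI-generated only if at least `quorum`
    independent detectors score it above `high`.

    scores: dict mapping detector name -> probability the clip is AI-made.
    Returns (verdict string, list of detectors that flagged the clip).
    """
    flagged = [name for name, p in scores.items() if p >= high]
    if len(flagged) >= quorum:
        return "likely AI-generated", flagged
    if flagged:
        # One detector alone (like Hive on the Netanyahu video) is not enough.
        return "inconclusive - needs human fact-check", flagged
    return "no detector flagged it", flagged

# A case like the Netanyahu video: one detector above 95%, others disagree.
verdict, flagged = combined_verdict(
    {"detector_a": 0.96, "detector_b": 0.30, "detector_c": 0.42}
)
print(verdict)  # inconclusive - needs human fact-check
```

The design choice mirrors Rubinson’s point: a lone high score routes the video to human review rather than producing a confident label either way.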

With respect to the Netanyahu video, Rubinson explained that her team at NewsGuard conducted an extensive fact-check beyond online tools and found it to be real. For one, she noted that Reuters, a respected news organization, compared stock footage from the café with the background in the video, and it matched. In addition, the café itself posted photos and videos of the event on social media, which also matched up. For the video to be fake, many, many people would have had to collude.

Every Day Is Fact-Checking Day

Given the harm that AI-generated content can wreak, particularly in the context of war, we urge you to make every day Fact-Checking Day. For one, you can sign up for Reality Check or follow one of the many fact-checking websites aimed at debunking misinformation. Incorporating a fact-checking site into your regular reading does two things: It can help identify lies that you’ve encountered elsewhere in your information intake, and, more importantly, it is a regular reminder to think critically about the images and videos that form the constant “noise” of your social media feed.