If you are still active on the social media platform X, you might have picked up on a peculiar argument recently. X’s artificial intelligence chatbot, Grok, got into a war of words with a disinformation expert in a weird “he said/it said” exchange.
Tal Hagin, an open-source intelligence (Osint) analyst and media literacy expert, had asked Grok to verify a video posted on X that claimed to show Iranian missiles striking Tel Aviv. But Grok got both the location and the date of the footage wrong, and then doubled down on the error, responding with an AI-generated image.
When Hagin took Grok to task, it shot back with a seemingly barbed comment. “Generating an image was to illustrate scale, clearly contextual. Wars amplify propaganda; I refine with real-time tools for accuracy. Cross-check primaries yourself.”
The problem is that no one is really sure whom to trust, even among the primary sources, or “primaries”, that Grok suggested cross-checking.
When it comes to war, the modern front is digital – specifically, AI-driven. It is not just about using AI to identify targets; disinformation on social media is rife, with even state accounts sharing AI-manipulated footage.
So how can platforms regain trust? In an era where everything is content and controversy drives engagement, it is a challenge.
This week, the oversight board that reviews content moderation decisions on Instagram and Facebook called on parent company Meta to do more to help users of its platforms identify AI-generated content. It is looking for stronger detection tools, more information on the origin of media and clearer labelling of such content. It also wants Meta to put policies in place that will allow it to respond better to deceptive AI-generated content.
The call came after the oversight board overturned Meta’s decision to leave a post on its platform without a high-risk AI label.
The post in question showed footage of damaged buildings that it claimed was live from Haifa in Israel. Although the post was reported to Meta, it was not reviewed by the company or sent to third-party fact checkers. The case was subsequently appealed to the oversight board, which took it up.
According to Meta, the post did not violate its community standards on misinformation, as it did not “directly contribute to the risk of imminent physical harm”.
While the case specifically relates to posts from 2025, it could easily be about content found on most social media platforms today.
AI content is quick to create and even quicker to spread. The most recent technological developments make it easy for even the most inexperienced user to create realistic videos or manipulate existing footage to show something the original does not.
Experts have been sounding the alarm for some time. In January, AI and online misinformation researchers warned that AI agents, the next phase of smart technology that is supposed to improve our lives while also making a ton of cash for tech companies, could be used against us. Free-speech activist Maria Ressa and AI and social science researchers from institutions such as Harvard, Oxford and Yale warned of scenarios where human-like AI bots could be used to undermine democracy, steering public opinion through social media and messaging channels.
Bot farms have been used in the past to try to influence elections around the world; imagine how much more effective they could be with AI agents that can react in a more human way, lie more easily and be more convincing.
If you were surprised by the amount of AI-generated fakery online in the last two weeks, you haven’t been paying attention. AI proponents assured us that, as smart as AI is, the systems for flagging such content are just as smart. Except that isn’t always true.
The New York Times tested a range of tools that claimed to be able to spot fakes among genuine images and videos, using clues such as hidden watermarks and composition errors. The results didn’t inspire confidence: the tools correctly identified some of the AI content, but other fakes slipped through.
It gets even more difficult to weed out the fakes when lower-quality images and videos are spread via social media: platforms compress and re-encode uploads, stripping out many of the telltale artefacts and watermarks that detection tools rely on.
It is concerning that AI can create such believable fakes. It is even more concerning that the systems meant to flag such content as fake are failing at the job. While laws such as the EU’s AI Act will help deal with some of the worst offences, they can’t tackle every scenario.
Perhaps the most worrying, and lasting, effect is that we can no longer trust the evidence in front of us, even if it looks genuine. Real events can be dismissed as AI-generated slop, leaving us all at risk of ignoring significant developments.
Scepticism is healthy, and manipulation of images is not new to the AI era. However, AI-generated content is now so realistic that history could be rewritten with it, and the AI checkers may never flag it as nonsense.
When we can no longer trust our own eyes, we have to be constantly on the lookout for fakes. And that, ultimately, is exhausting.