A video began circulating on Facebook shortly before the Irish presidential election in October. It was a report by the national broadcaster, RTÉ, with bombshell news.
Frontrunner candidate Catherine Connolly told a campaign event she was bowing out of the race. A crestfallen supporter shouted out “No, Catherine” before the clip cut to a reporter explaining what would come next. The election was off and her leading rival would be acclaimed.
A shocking development only days before the election. Except the whole thing was fake.
Ireland’s new President Catherine Connolly was elected in a landslide vote at the end of October, despite the circulation of an AI-generated deepfake video that falsely claimed her withdrawal from the race. Pool/Getty Images
Ms. Connolly slammed the video as “a disgraceful attempt to mislead voters and undermine our democracy.” Meta eventually agreed to take it down and Ms. Connolly went on to win handily. But the video – which can still be seen, though it is now clearly branded as AI-generated – is an example of how dangerous false information can be.
Society can fight back against what has become a hypnotic stream of fakery. Society must. A world in which illusion, fraud and lies are the common currency becomes one in which there is no agreed-upon version of truth, undermining the very concept of reality.
Right around this moment it may be tempting for readers to think, well, I’m not on social media, so I’m probably missing the worst of this garbage. Unfortunately, between the rise of generative AI and the viral power of bots, the trash has a way of seeping through to everyone.
Consider the artificial intelligence synopses that appear at the top of web search results. Data show that fewer and fewer people are scrolling down and clicking on links to find the answer they were seeking. But relying on the synopsis is risky given that AI uses the available information, and the source material is increasingly unsound.
The number of phony scientific papers is doubling every 18 months, posing real dangers when AI scrapes up false information and uses it in response to health queries.
A source of deliberately bad information is Russia, which seeds the internet with propaganda specifically for the purposes of being picked up by AI. An ouroboros of digital deception.
Phony information can also percolate through society the old-fashioned way, passed from one person to the next. One group chat to the next. And information of dubious origin, when coming from trusted friends or family, may be treated with less skepticism than it should.
Printing presses thunder away in The Globe and Mail’s Toronto press room, circa 1938. Prior to the internet age, the flow of information to the public through newspapers, radio, and television made for a less fragmented and unruly mass media environment. John Boyd/The Globe and Mail
From a shared reality to post-truth
Nearly six centuries ago, Johannes Gutenberg invented the printing press. The technology made it possible, for the first time, for a large number of people over a wide area to be reading the same thing. A shared reality. Newspapers, radio and then television broadcasts expanded the concept.
But then the common ground began to break down.
The internet fragmented the flow of information. In parallel, a healthy skepticism about sourcing and authority turned for many people into a tribal approach to information. Trust the stuff coming from your side and doubt what the other side says.
Cartoonist Martin Shovel summed it up well, contrasting Descartes’ “I think therefore I am” with a post-truth approach: “I believe therefore I’m right.”
However, actual video or audio evidence was still hard to dismiss. No longer.
Just as fake videos make it easy to present as true something that is false, their ubiquity also allows people to deny what they really did say or do. What’s fake is arguably real and what’s real is arguably fake. It leads to a world in which people can believe nothing but their own feelings, a world that has lost its foundation.
In such a world it is easy to manufacture outrage. There was a recent example of this in the United States.
When a restaurant chain called Cracker Barrel announced in the summer that it would change its logo, the reaction seemed fantastically out of proportion. How could so many people care? What exactly was “woke” about the new design?
An earlier version of the Cracker Barrel logo (top/left) and the new logo, unveiled on a restaurant in New York in August, 2025. A data analytics company studied the online backlash to the restaurant chain’s new logo and found that much of the controversy was inflamed by bots.
Ted Shaffrey/AP; Wyatte Grantham-Philips/AP
To learn more, the data analytics company Peakmetrics looked at 52,000 posts on X, formerly Twitter, in the first 24 hours after the company announced the new logo. It found that 44.5 per cent of them showed bot-like characteristics and were flagged as likely automated. Fully 49 per cent of the posts calling for a boycott were automated.
While some people may have cared honestly and passionately about the Cracker Barrel logo, it was bots wielded by culture warriors that “created the appearance of a sweeping grassroots movement.”
Bots can also inflame more serious controversies. The SNC-Lavalin scandal, in which then-prime minister Justin Trudeau put pressure on his attorney-general to help the Quebec company avoid criminal prosecution, created legitimate public outrage in Canada. But a McMaster University researcher found it was helped along.
Sophia Melanson Ricciardone, a postdoctoral fellow, found that bots “significantly influenced” the language of the human commentators in the online conversation. And her article, published by the International Journal of Digital Humanities, showed that bots “reinforced political echo chambers around the #SNCLavalin Twitter discourse in 2019 more effectively than human interlocutors.”
Journalist Sandra Laffont holds a workshop to train teenagers to recognize online misinformation at College Henri Barbusse in Vaulx-en-Velin, France, in November, 2018. Some European countries have begun to incorporate media literacy training into school systems. MATTHEW AVIGNONE/The New York Times News Service
Mediating social media
What can society do about the tsunami of bad information? One place to start is social media.
One Australian state found a ban on youth access to social media so popular that such a ban is now being rolled out nationwide. A less extreme approach is to do as Scandinavian countries do and incorporate media literacy training, including how to spot disinformation, into the school system. Either approach would, over time, create a population with a different relationship to social media.
As for the current adult population, it’s past time for social media users to recognize that they are not just the product – their eyeballs sold to advertisers – but also the pawns. Social media is manipulative by design, its algorithms engineered to reward outrage. And there is no reason to assume the person whipping up emotion is who they say they are. Some major pro-Trump influencer accounts on X, claiming in their bios to be patriotic Americans, were recently exposed as foreign-based.
When controversies do flare up online, politicians can stop them from breaking into the real world by not being so quick to comment. That just provides oxygen. And the media should play its part by not laundering online controversies into respectability. Any story that includes a passage to the effect of “many people online are saying” should have been reconsidered by an editor.
The media also must work on strengthening their role as a trusted provider of information. With government communications saturated with spin and social media a maelstrom of rage and fakery, traditional media are more important than ever.
The Australian government plans to enforce age restrictions on major social media platforms in December. Under the ban, children under 16 won’t be able to create or keep accounts on platforms like Facebook, Instagram, Snapchat, TikTok and more. WILLIAM WEST/AFP/Getty Images
However, Statistics Canada data show that trust in media is lowest among younger generations, a particular problem because these people are most likely to get their information from online sources. Even among seniors – the most trusting group – barely one in five have high confidence in the media. More work, obviously, is needed.
Finally, technology can play a role in fighting the hoaxes made easy by technology. Sony Electronics Inc. has developed a way of embedding in a photo’s digital file data such as which camera took the picture and when it was taken. In a world awash with phony images, knowing there is one you can trust is a powerful selling point.
Society is at this fraught moment because of two parallel trends. Traditional sources of information lost credibility at the same time as it became much easier to create fake information and push it into the world.
People have their own role to play in fighting this situation. The fake item someone is most likely to fall for is one that fits their worldview. It’s one that seems intuitively plausible and presses their buttons. To be less gullible, stop for a moment before sharing, before becoming enraged.
Consider again that deepfake video of the Irish presidential candidate. Would a legitimate newscast have offered no reasons for the leading candidate quitting the race? Was it plausible that one of the other candidates would be appointed to the presidency, the election cancelled?
So take that extra beat to think before reacting. Often just a little reflection can allow someone to break the spell of misinformation.