There may come a time for each of us when we are fooled by a picture or video generated by artificial intelligence. For this reporter, it’s already happened.

The culprit was a TikTok video supposedly filmed by a security camera in someone’s living room. A golden retriever hops through a doggie door, a gushing garden hose in its jaws, and happily sprays water all over the place. Highly amusing.

Only after reading the skeptical comments did your once-proud, media-literate correspondent look more closely and concede the video was likely AI-generated.

The stakes in our AI age are much higher than getting duped by dog content. AI-generated misinformation and financial frauds are already running rampant, and the quality is only getting better. Services that rely on algorithms to detect AI content are not foolproof, nor are social media labelling systems.

The muddiness around what is real is creating challenges for news outlets – not only in determining the veracity of images but also, at a time when trust in media is at risk, in ensuring audiences will believe them.

To that end, camera manufacturers, tech companies and news organizations are increasingly working together on technical standards to authenticate photographs from the very start. Sony Electronics Inc. has developed such a system for news outlets, which The Globe and Mail has tested for the past 10 months. The company’s cameras effectively issue a birth certificate for each photograph.

Embedded in the digital file’s metadata is information about the individual camera that was used, when the photo was taken (using a system that is distinct from the camera’s internal settings, which can be altered) and even 3-D depth data to help determine if, say, someone is taking a photo of a photo.
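
For readers curious about the mechanics, the sketch below illustrates the general idea in Python: the camera hashes the image, bundles the hash with capture details and signs the bundle, so any later change to the pixels or the sealed fields breaks verification. The field names and the use of a simple HMAC key are assumptions for illustration only; Sony's actual implementation relies on certificates provisioned in the camera hardware and the C2PA signing format.

```python
# Illustrative sketch only: real C2PA capture signing uses hardware-backed
# certificates inside the camera, not a shared HMAC key.
import hashlib
import hmac
import json

def seal_capture(image_bytes: bytes, camera_serial: str, capture_time: str,
                 signing_key: bytes) -> dict:
    """Build a tamper-evident 'birth certificate' for a photo (illustrative only)."""
    manifest = {
        "camera_serial": camera_serial,   # which individual camera took the shot
        "capture_time": capture_time,     # trusted clock, separate from user-set time
        "image_sha256": hashlib.sha256(image_bytes).hexdigest(),
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(signing_key, payload, hashlib.sha256).hexdigest()
    return manifest

def verify_capture(image_bytes: bytes, manifest: dict, signing_key: bytes) -> bool:
    """Any change to the pixels or the sealed fields makes this return False."""
    claimed = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(signing_key, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, manifest["signature"])
            and claimed["image_sha256"] == hashlib.sha256(image_bytes).hexdigest())
```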

The information is sealed and cannot be edited after the fact, said Ivan Iwatsuki, vice-president of co-creation strategy at Sony. “Once it’s created, it’s done,” he said.

Sony’s system is compatible with a technical standard called C2PA, which can record every edit performed on a photo, such as cropping and lighting adjustments, to provide a provenance chain of sorts. For news agencies, the system offers a way to verify the authenticity of pictures taken by photojournalists around the world, and to provide more transparency and assurance to audiences as well.
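
Again as a rough illustration rather than Sony's or the coalition's actual format: each edit can be recorded as an entry that names the action, hashes the resulting image and points back to everything recorded before it, so a photo's history reads like a chain that cannot be quietly rewritten. The structure below is a hypothetical Python sketch; real C2PA manifests are signed binary structures embedded in the file.

```python
# Toy sketch of an edit-provenance chain, in the spirit of C2PA manifests.
import hashlib
import json

def record_edit(chain: list, action: str, edited_image: bytes) -> list:
    """Append one edit (e.g. 'crop', 'adjust_exposure') to a provenance chain."""
    prev_hash = None
    if chain:
        prev_hash = hashlib.sha256(json.dumps(chain, sort_keys=True).encode()).hexdigest()
    entry = {
        "action": action,
        "result_sha256": hashlib.sha256(edited_image).hexdigest(),
        "previous_entries_sha256": prev_hash,  # ties each step to the full history before it
    }
    return chain + [entry]

# Usage: chain = record_edit([], "capture", raw_bytes)
#        chain = record_edit(chain, "crop", cropped_bytes)
```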

“The fake content issue is a serious problem in our society,” Mr. Iwatsuki said. “This is one of the most important things that we should be doing as a camera manufacturer.”

The Globe has been helping to test the Sony verification technology. Fred Lum/The Globe and Mail

Sony first started testing the system in 2023 with the Associated Press, and has been figuring out how best to integrate the system into existing workflows. The Globe and Mail, meanwhile, has worked with Sony to preserve the authenticity information throughout the production process.

The C2PA standard takes its name from the Coalition for Content Provenance and Authenticity, founded by Adobe, Microsoft, the BBC and a few others in 2021.

The goal is not only to collaborate on measures to authenticate content, but to convince a wide swath of industry players, including social media platforms, to adopt these standards.

More companies have joined in recent years, particularly as generative AI has taken off and the need for verification has increased.

More recently, Google, Meta, TikTok and OpenAI have signed on. Images produced by OpenAI’s ChatGPT, for example, contain C2PA metadata indicating the source.

“Truth in journalism has never been under more threat than it is today, and there’s certain newspapers and news agencies that are trying to future proof themselves,” said Nick Didlick, a long-time photojournalist in Vancouver who consults with Sony. Even so, these systems are not perfect. “There’s still going to be people who want to hack,” he said.

The Sony system used by The Globe relies on C2PA, a standard that more tech companies are using to provide safeguards against misuse of AI. Fred Lum/The Globe and Mail

A camera enthusiast who goes by the online alias Horshack is one of those people. When Nikon released C2PA capabilities for one of its cameras this August, Horshack set about finding a way to circumvent it. (The Globe and Mail is not identifying Horshack in order to preserve his relationships in the camera industry.)

He didn’t expect to be able to do so, but within about 20 minutes he had found an obvious flaw, he told The Globe.

By exploiting a feature that allows users to overlay one photo on another, Horshack could get the camera to assign C2PA credentials to a photo that it did not, in fact, take. Later, he was able to do the same with an AI-generated image – specifically, a pug flying an airplane.

Horshack wrote about his findings on a photography forum in September, and Nikon soon posted on its website that “an issue has been identified” and that it had suspended its authentication service while working on a fix. Representatives for the company did not reply to requests for comment.

There are problems with C2PA metadata as images travel the web, too. When OpenAI joined the coalition, it noted the metadata can be removed intentionally or accidentally. The metadata does not transfer to a screenshot, for example, and social media platforms tend to strip this information when pictures are uploaded.
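
The reason is mechanical: the credentials live in the file alongside the pixels, and a screenshot or a re-encoded upload produces a brand-new file that carries no manifest at all. Even copying the old manifest across would not help, because the new pixels no longer match the hash that was signed. A minimal sketch, assuming a manifest shaped like the hypothetical one above:

```python
# Sketch: why a screenshot or re-encoded upload breaks verification.
import hashlib

def pixels_match_manifest(image_bytes: bytes, manifest: dict) -> bool:
    """True only if these exact bytes are the ones the manifest was sealed over."""
    return hashlib.sha256(image_bytes).hexdigest() == manifest["image_sha256"]

# original.jpg   -> bytes match the sealed hash   -> verifiable
# screenshot.png -> new bytes, no manifest at all -> nothing left to verify
```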

LinkedIn does not remove the metadata; users can click on pictures and videos that are C2PA-certified to learn whether they are wholly or partially AI-generated, find out the camera or tool used to create them, and other details.

This street scene from Montreal should have a digital certificate attached, but not all platforms that share or screen-capture it will preserve the certificate. Fred Lum/The Globe and Mail

The metadata is only one part of the C2PA standard, however. Added security and verification measures include invisible digital watermarks that are much harder to remove. Google, for example, has a watermarking feature called SynthID for AI-generated content.
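
Watermarks take the opposite approach to metadata: the signal is hidden in the pixels themselves, so it cannot simply be stripped out of the file's descriptive data. SynthID's method is proprietary and built to withstand cropping and compression; the toy sketch below, which merely hides bits in the lowest bit of each pixel value, illustrates only the general idea and would survive neither.

```python
# Toy least-significant-bit watermark, for illustration only.
# Production watermarks such as SynthID use far more robust, proprietary techniques.
def embed_bits(pixels: list[int], bits: list[int]) -> list[int]:
    """Overwrite the lowest bit of the first len(bits) pixel values."""
    marked = [(p & ~1) | b for p, b in zip(pixels, bits)]
    return marked + pixels[len(bits):]

def extract_bits(pixels: list[int], n: int) -> list[int]:
    """Read the hidden bits back out of the first n pixels."""
    return [p & 1 for p in pixels[:n]]

# Example: embed_bits([200, 201, 202, 203], [1, 0, 1, 1]) -> [201, 200, 203, 203]
```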

But each company approaches the problem differently, and decides which elements of the standard to implement.

“The difficulty here is you need everybody to be on board,” said Hany Farid, a professor at the University of California, Berkeley, and expert on digital forensics. Bad actors bent on spreading misinformation certainly are not going to embrace a provenance standard, and there are at least two glaring absences among the members of the content coalition.

Twitter was once part of it, but in its current incarnation as X under the ownership of Elon Musk, it is no longer a member. Apple, whose smartphones surely account for a large portion of the pictures taken every day, isn’t there either.

Still, the progress made in the past few years is promising, Prof. Farid added. “This is part of the solution. It is not the solution.” (Regulation would help, he said.)

Digital forensics expert Hany Farid, reviewing a video of Meta CEO Mark Zuckerberg, takes notes on which tech companies adopt digital certification and which do not. Ben Margot/The Associated Press

Technical measures can only go so far. “Where this tech is most effective is inside newsrooms and organizations committed to information accuracy,” said Clifton van der Linden, associate professor of political science at McMaster University. “But that still depends on the public trusting credible newsrooms over whatever they encounter in their social feeds.”

When a conspiracy theory takes hold, people who believe it will only see more evidence of a conspiracy. A media outlet can describe how it verifies information and dive into the details of its photo provenance system, but for the conspiracy-minded, the media is surely in on the ruse, too.

Prof. Farid, for one, said that may always be the case. “There’s a majority of the people that it will help,” he said of authentication measures. “That’s really the best you can do.”

And this reporter, rest assured, will scrutinize dogs on the internet a little more carefully.
