Videos generated using OpenAI’s Sora app by The Globe’s Samantha Edwards, who put herself in several scenes

When I opened Sora, OpenAI’s new social-media app, the first video I saw was Queen Elizabeth II in a Costco explaining why she loves cheese puffs. Next was a South Park parody, featuring the Pepe the Frog meme ranting about Tylenol and babies. Later, a grainy TV clip of Martin Luther King Jr. delivering a version of his “I have a dream” speech, but about Sora’s content moderation policies.

All these videos are AI-generated, and so is everything else on the app’s feed, where the short clips that fill its endless scroll range from extremely realistic looking to wildly unhinged.

Sora’s built-in text-to-video generative AI tool allows users to turn a brief prompt – such as, say, “police body cam footage of an ICE agent arresting a cat at a protest” – into an astoundingly impressive 10-second clip in only a few minutes. (ICE arresting [insert innocent bystander] videos have spawned a Sora sub-genre.)

In just a few weeks since the app’s launch, Sora users have filled the platform with copyright-infringing parodies, surveillance footage of seemingly real events, and a lot of creepy AI slop.

Users see the app as a boon for creativity, allowing regular people to make fantastical videos with visual effects once reserved for Hollywood studios. And, unsurprisingly, many are already using the tool for more nefarious hijinks, leaving misinformation experts worried that the app could be harnessed to create harmful deepfakes to sow division online.


Unlike other AI video apps, such as Meta’s recently launched Vibes, Sora encourages users to insert themselves, their friends and public figures into videos. To create an eerily lifelike avatar, the only material Sora needs is a selfie video of a user moving their head and saying three numbers.

I transformed myself into characters of varying believability: a mad scientist preparing a flubber-like substance, a stressed journalist writing on deadline. Over Thanksgiving, I nearly duped my very offline father with a Sora video of me riding a rollercoaster with a cat strapped to my chest. The background looked green-screened, and my voice sounded like an awkward facsimile. But my dad was dumbfounded.

Sora videos, which feature a watermark that can be easily removed, quickly surfaced on my own TikTok and Instagram feeds. Some of the videos looked so real that for a few seconds I really did wonder if a dog had flown off a front porch during a tornado. A barrage of comments showed others were wondering the same thing: “Is this AI?”

This is one of the fears experts have about Sora. As technology advances, the average person may no longer be able to tell the difference between real and fake videos online.


A survey of 5,000 Canadians from the digital literacy non-profit MediaSmarts found that while just under half of respondents said they believed they could identify AI-generated images online, many struggled to do so during an experiment, mistaking fake images for real ones.

“If people only stayed in one nice, neat corner of the internet watching AI-generated videos of cats making pizza, there’d be little to no harm in that,” says Kara Brisson-Boivin, director of research for MediaSmarts. “But we know that content moves seamlessly between platforms, and the likelihood that this type of AI-generated content will infiltrate other platforms [is high], so even if you’re not on Sora, you’re going to see it on your Instagram feed.”

On Sora you can make videos featuring other users on the app, who can be friends, or strangers who allow their AI avatars to be used by anyone. (None of my friends wanted to join, citing excuses such as “I don’t want Sam Altman to own my biometrics” and “it seems really creepy.”) But this feature could also be used to make troubling deepfakes or spread misinformation.

In my casual experiment to test the extent of this danger, I put myself into various incriminating scenarios. In one video, I steal a cat from an old lady, then confess to the crime in the back of a police car. In others, I’m an irate customer who throws my coffee at a barista; a snarky commuter making false health claims; and an impassioned animal-rights activist recorded via police body-cam footage. (Videos that are supposed to look low quality, as if filmed via a shaky camcorder or on an old cellphone, tend to look the most believable.)

“It contributes to this already problematic tipping of the scales toward ‘I can’t trust things online,’” says Ms. Brisson-Boivin. “And when the internet is the main forum for which we have civic discourse and we’re primed to believe we can’t trust anything, that’s a challenge.”


Another troubling outcome could be if these videos are made by bad-faith actors trying to interfere in elections or sway public opinion, warns Muhammad Abdul-Mageed, a Canada Research Chair in Natural Language Processing and Machine Learning at the University of British Columbia.

“If you’re able to generate these at scale and embed them in social networks, then you now own the story. You can drive the attention of people toward a certain direction,” says Dr. Abdul-Mageed.

Scrolling through the Sora sloposphere, I struggled to understand how the app was connected to OpenAI’s stated mission. How do videos of Mario talking about mushroom cartels on InfoWars align with the company’s goal of building ethical artificial general intelligence for the good of humanity?

In a blog post announcing Sora, OpenAI chief executive officer Sam Altman suggested that the app is like “ChatGPT for creativity” and could radically transform the arts in a Cambrian explosion-type moment. In my two weeks on the app, at times I found it entertaining. But mostly it was brain rot: hypnotizing and mindless.

I’d seen dozens of cats in life-threatening predicaments, Martin Luther King Jr. parodies (before OpenAI banned these videos after a complaint from the King estate), and monkeys firing machine guns.

Will Mr. Altman’s prediction that Sora will transform human creativity prove true? Maybe. But it seems just as likely we’re on the precipice of a new era of the internet, where AI-generated content invades every social platform, forcing us to become skeptical of every single thing we see online.