Nick Grimm: Anyone using social media of late won’t have been able to help noticing the dramatic increase in the amount of AI-generated video appearing online. Much of it is coming from a powerful new tool called Sora 2, but its sheer power is raising alarm bells. Luke Radford reports.

Luke Radford: Outside of the circus, dogs don’t often get behind the wheel. So you could be forgiven for looking twice if you went on social media recently.

AI Video: Evening, you know why I pulled you over? Wait, is that a dog? Sir, step out of the vehicle for me. All right, paws off the wheel, buddy. Hey, don’t, hey, stop, stop the car!

Luke Radford: This video and thousands like it have been created by Sora 2. It’s the more powerful, upgraded version of a video AI tool created by OpenAI, the company behind ChatGPT. It was made more widely available to the public at the start of this month. The videos aren’t perfect. This one, of the famous wheelchair-bound physicist Stephen Hawking riding a skateboard on a half-pipe, wouldn’t fool many people.

AI Video: Hawking rolling in, run two. Look at the speed building on that chair, he’s locked in. Big pump coming up the wall. Launching, 420 tailspin attempts. It loads, it can’t find the landing. Sideways.

Luke Radford: But many of them are very, very believable. AI video tools aren’t new, but what sets Sora 2 apart is that it’s incredibly powerful and widely available. Jeremy Carrasco is a former technical producer and AI consultant.

Jeremy Carrasco: As far as accessibility, I don’t think that there was a lack of accessibility before for the people that wanted to create content for TikTok or Instagram. But again, I think that it is, again, lowering the floor for decent AI content. And the fact that it’s put into more of a social media app, I think it builds a certain amount of hype to it that other video AIs didn’t have before.

Luke Radford: Sora 2 is also well suited to creating viral content. It has its own social media app where everything posted is generated by AI. It’s also just really good at making funny videos. Part of the skill of using AI is prompting: that is, coming up with the right text input to get the result you’re after. But that doesn’t seem to be such a problem with Sora 2.

Jeremy Carrasco: Previously, a lot of AI slop really felt very sloppy because it wasn’t very interesting, but Sora 2 can make interesting things kind of by default. So I think just the volume of viral or interesting things coming out of Sora is much higher. So even though the same amount of people might be using it, there’s just more funny or viral content coming out just by nature of how good the model is.

Luke Radford: But where Sora 2 is really pushing the limits is deepfake technology. It’s creating realistic versions of real people, which leads to situations like the late King of Pop, Michael Jackson, stealing your fried chicken.

AI Video: Your chicken’s looking nice, pal. Woo! Yo, he stole my chicken! Got a crispy treasure case. Slow me down, hot and golden. I’m mad at this town. Move, move, Friday, victory.

Luke Radford: It was also used to create lots of material based on copyright-protected characters like Mario. That didn’t last very long, with OpenAI CEO Sam Altman announcing that the company would be giving rights holders more control over what could be created with their characters. But Jeremy Carrasco says the user base is already pushing back.

Jeremy Carrasco: You know, the people who are using Sora, like if you scroll the platforms, like they want it to be even looser, right? So like there’s an entire creator community that thinks that they’re even still being too strict and they should have no restrictions on creating copyrighted material, no restrictions on generating other people’s likeness without their permission. And so if you keep lowering that bar, you’re going to keep creeping into more and more dangerous scenarios because your community is going to keep asking more of you. That’s what I’m seeing.

Luke Radford: Sora 2’s ease of access and powerful tools are also raising alarm bells in other ways. While copyrighted characters have been banned, depictions of real-life people are still possible. And that function has been used to create fake videos with disturbing content, such as one depicting Martin Luther King Jr. spouting racist remarks. The director of the Centre for AI and Digital Ethics at the University of Melbourne, Professor Jeannie Patterson, says that in Australia, at least, there’s little legal protection.

Jeannie Patterson: The question about is there any restrictions on doing it? No, because the restrictions on representing a face online come from either defamation or misleading conduct. And you can’t defame a dead person. And if they’re dead, you’re not misleading people about their existence. So there’s very few controls on reanimating dead people.

Luke Radford: Can regulation keep up with this?

Jeannie Patterson: I’m a little bit worried by Sora, because if you asked me a week ago, you know, I’d be going, of course regulation can keep up with AI. Of course it can. Australia has great regulators and great regulation. No, regulation can’t keep up with this particular use. Its force is not tricking us into thinking that the image is a real person. Its force is that it looks real. And if you see something saying something often enough and it looks real enough, you start to believe it. So, you know, it’s still eroding truth, but I don’t think it’s breaking any law other than perhaps copyright law. So I’m really worried by it. I think the best solution is education, but what are you educating people in? You’re educating people in, oh, by the way, the only things you can believe are things that you physically have seen.

Luke Radford: At the moment, videos created with Sora 2 carry a clearly visible watermark showing they were generated by AI. And the CEO of OpenAI, Sam Altman, says the company is continuing to tweak safety features. But given how many videos are already out there, you’ll need to be on guard when scrolling.

Nick Grimm: That report by Luke Radford and Nicholas Maher.