Welcome to Rendering, a Deadline column reporting at the intersection of AI and showbiz. Rendering examines how artificial intelligence is disrupting the entertainment industry, taking you inside key battlegrounds and spotlighting change makers wielding the technology for good and ill. Got a story about AI? Rendering wants to hear from you: jkanter@deadline.com.

If you hadn’t noticed, content owners are starting to draw battle lines over the vexed issue of copyright protection in the age of AI. Disney is sending cease and desist letters to Google, Warner Bros. Discovery is suing gen-AI giant Midjourney, and the BBC has accused Perplexity of scraping its website. Even Deadline’s parent company, Penske Media Corporation, has issued a writ over Google AI summaries killing search traffic.

In this context, it’s not surprising that one of the industry’s most coordinated AI campaigns sprang up last week. Boasting the support of Scarlett Johansson and Cate Blanchett, the “Stealing Isn’t Innovation” movement has coalesced around a 130-word statement decrying artistic “theft.” The statement does not name any culprits, but it is pretty clear that the fault lies with tech titans, whom signatories accuse of looting copyrighted material “without authorization or regard” for the law.

The lawsuits and the campaigners are manifestations of a conventional wisdom: Content-gobbling artificial intelligence is to blame for rights infractions, rather than the individuals asking a model to imagine, say, a Wes Anderson-directed Harry Potter film. In other words, it is the prompted, rather than the prompter, that is plundering the IP.

It’s why I took notice when I heard a challenge to this orthodoxy from Jason Zada, an Emmy-winning producer who founded Secret Level, an “AI-native” production studio that produced the divisive Coca-Cola holiday ad last year. Zada believes that the power to keep AI outputs copyright-clean lies in the hands of the prompter, which is why Secret Level forbids IP from being cited in its text prompts.

Chatting over Zoom, Zada asks: If you purchased a Stormtrooper costume from a store and filmed Star Wars scenes in your backyard, is the shop to blame for the infringement? Unlikely — but the difference is that costume stores are not drawing on a vast well of IP against the wishes of content owners.

“If you really study how AI learns, it’s looking at a bunch of different things. It’s describing them. It’s not copying them,” Zada responds. “The human brain is actually the dirtiest model out there. We’re constantly absorbing everything that we see, and then we put it down as original thought, and it’s not at all.”

Zada’s position on prompting may not be fashionable in Hollywood, but clients like Coca-Cola are comfortable with his approach. Secret Level’s Coca-Cola holiday ad, featuring generative AI critters, got plenty of hate last year, but Zada is unapologetic: “There’s a push for ‘real’ that’s weird to me. It’s like people screaming about still using a horse and carriage as an automobile drives by.”

Zada will evangelize about artificial intelligence as part of the Secret Level Academy, an $849 online AI filmmaking masterclass, which gets underway on February 2. Alongside him will be Secret Level’s newest hire, Christina Lee Storm, the former Netflix executive who helped draw up the TV Academy’s responsible AI production standards.

These guidelines emphasize the need to use AI models trained on “ethically sourced, properly licensed, and clean data,” which diverges somewhat from Zada’s thinking. Secret Level has produced content using the likes of Google’s Veo, currently the subject of a legal threat from Disney, and Zada says “only a few” clean models exist (he cites Adobe Firefly and Moonvalley). “Even though the models might be dirty, it’s up to the person who’s prompting to be the responsible one. And I really stand by that,” he explains.

So, is it time for rights holders and campaigners like Johansson to put equal emphasis on the prompters, who misuse AI (even accidentally) to pass off content they don’t own? Maybe. At the very least, evolving thinking around “clean” AI suggests that producers should draw up ethical prompting policies when experimenting with the technology. The debate is far more nuanced than big tech = the bad guys.