China’s AI race with the United States is no longer just about chips or chatbots. It’s about video—and the power to flood the internet with convincing fakes.
ByteDance’s new model, Seedance 2.0, has stunned the internet in recent days after users generated a hyperrealistic clip of “Tom Cruise” and “Brad Pitt” brawling on a rooftop over “Jeffrey Epstein” from a simple text prompt. The footage looked cinematic, complete with fluid motion and synced audio. No actors. No cameras. Just code.
That leap matters far beyond Hollywood. Tools like Seedance mark the arrival of what could be called high-quality slopaganda: synthetic content that is cheap to produce, emotionally charged and realistic enough to pass casual scrutiny.
Unlike the clumsy deepfakes of a few years ago, this new generation of AI video is actually believable—and could have major political implications.
Trump in the Age of High-Quality Slopaganda
Seedance 2.0 is not just another AI milestone. It could become a new kind of political weapon.
For President Donald Trump, that shift cuts both ways. His political rise was fueled by viral spectacle and dominance of the attention economy. He understands how to command a news cycle. But in a world where anyone can generate convincing video of him saying or doing almost anything, control becomes fragile.
A fabricated clip showing Trump confused, extreme or contradictory could spread to millions before fact-checkers intervene. Even if debunked, repetition leaves an imprint. The danger is not just persuasion—it’s erosion. If voters can no longer distinguish authentic footage from fiction, trust in political communication collapses. In that environment, even a media-savvy figure like Trump risks being drowned out by an endless stream of convincing noise.
Midterms To Be Digital Minefield
The coming midterm elections could be a minefield. Midterms are often decided by narrow margins in a handful of battleground districts, where turnout matters enormously. A fabricated video targeting a Senate or House candidate could spread for hours—or days—before platforms respond.
AI-generated robocalls could mimic a candidate’s voice. Fake “leaked” footage could surface on the eve of voting. Even if disproven, the damage may linger, depressing turnout or fueling claims of illegitimacy.
Russia, Iran and North Korea Not Far Behind
If such tools are widely available inside China, they will not stay there. Advanced generative systems spread quickly—through open-source imitation, commercial partnerships or parallel development. And if China can build them, others can weaponize them.
That includes Russia, Iran and North Korea—governments that have spent the past decade targeting the United States and its allies with divisive online disinformation. From election interference to culture-war amplification, these states have shown they don’t need to persuade a majority of voters to succeed. They only need to deepen mistrust and widen cracks in already polarized societies. Give them high-fidelity AI video tools, and their campaigns become more scalable, more convincing and harder to debunk.
The goal would not necessarily be to convince Americans of one grand narrative. It would be to overwhelm the system—to create so much conflicting, emotionally charged content that voters stop trusting anything at all.
Hollywood Sounds the Alarm
Seedance 2.0 is already facing challenges after users generated hyperrealistic clips featuring famous characters and performers without consent. One widely shared example recreated iconic Disney-style characters in cinematic action scenes, raising immediate copyright concerns. Characters from Marvel, Star Wars and other franchises have also been recreated.
Hollywood studios and industry groups say the tool appears to rely on copyrighted material and recognizable likenesses, prompting accusations of large-scale intellectual property violations. Performers’ unions warn that such technology threatens actors’ livelihoods by replicating faces and voices digitally. ByteDance says it is strengthening safeguards, but the backlash underscores mounting global tensions over AI, ownership and creative control.
Reality Itself at Stake
What makes this moment different is scale. AI no longer just manipulates images—it manufactures entire scenes with convincing dialogue and emotion. The strategic advantage may not lie in winning a single narrative battle, but in overwhelming the information space altogether. If voters cannot reliably distinguish authentic footage from synthetic fiction, democratic debate becomes unstable.
In that sense, China’s AI progress is not simply a tech milestone. It signals a future in which influence can be automated, doubt is constant and reality itself becomes contested ground.