Dozens of national, regional, and local elections are on the calendar across Europe in 2026, but the messages reaching voters are increasingly synthetic: generated by machines and verified by no one.

AI-made faces, staged crises, and nostalgia built for engagement now circulate as standard campaign material around the bloc. In some cases, foreign actors interfere in national elections directly, deploying artificial personas to steer voters.

Brussels has tried to keep pace by building the world’s toughest digital rulebook, the Digital Services Act, to counter this threat. Yet as elections approach, lawmakers are learning that writing law is easier than governing the algorithm – and that AI-driven politics is spreading faster than they can act.

Deepfakes, slop, and GenAI 

Across Europe’s elections, generative AI, “AI slop,” and deepfakes now form three stacked layers of digital political influence. 

Deepfakes – manipulated audio or video falsely attributed to real figures – remain the high-risk, low-volume threat, flaring briefly in contests such as the 2025 Irish presidential election.

AI slop does the dirty work: the industrial production of cheap, emotionally charged synthetic images and videos dramatizing migration, crime and social decay. This content now dominates recommendation feeds, particularly on TikTok, driven largely by monetised AI-only accounts rather than official campaigns. 

In this ecosystem, generative AI – text, imagery and music created from simple prompts – functions as a multiplier, accelerating polarising narratives faster than Europe’s regulatory machinery can respond.

AI use spans the political spectrum. Still, the far right’s political vocabulary – words like identity, threat, cultural siege – maps almost perfectly onto what algorithms reward: repetitive, provocative content.

AI manipulation across the bloc

Evidence of AI’s impact on elections is already emerging. During France’s national elections in 2024, researchers at the non-profit AI Forensics identified roughly 60 fully AI-generated political posts on official party accounts across the French political spectrum, mainly on Instagram and Facebook.

Those posts filled social media feeds with dystopian skylines, collapsing cities, and hyper-real migration scenes designed to be felt, not verified.

“It’s less about persuading people what is fake or real… It’s more about getting an emotional response. Fear, scandal, shock,” Natalia Stanusch, a researcher at AI Forensics, told Euractiv.

In Germany, candidates largely avoided using generative AI in the 2025 federal elections, but TikTok supporter networks embraced synthetic nationalist visuals. In Hungary, which is preparing for a high-stakes ballot in 2026, fake profiles of attractive young people have been feeding users pro-government messaging.

Moldova, meanwhile, offered a preview of full-spectrum foreign AI interference. Russian-linked networks deployed synthetic TikTok “grandmothers” urging voters to back Kremlin-approved candidates, according to a report published by the Romanian journalism centre Mediacritica, which tracks disinformation. 

Transparency

A few years ago, Brussels feared a deepfake that could topple a government overnight. That moment never came. What did arrive was much harder to fight: large-scale AI propaganda.

On paper, the EU has built a serious defence. The AI Act classifies political manipulation as a “high-risk” activity, requiring transparency and oversight by national authorities and the new European AI Office. The Digital Services Act requires the largest platforms to reduce election-related risks. Other EU plans and codes add rules for political ads and online disinformation.

In reality, major platforms such as TikTok retain broad discretion to determine what qualifies as manipulation, giving them significant control over how the rules are applied.

There is no reliable way to measure how much AI-generated content circulates undetected or goes unreported, and enforcement remains the system’s weakest link.
