Labeling is also a big part of the response, as required by a separate EU law specific to artificial intelligence. Researchers at the University of Amsterdam flagged that a majority of the posts they tracked during the Dutch election lacked an AI-labeling disclaimer. For those that did carry one, it was the platform that had added it, not the political parties.

More laws that could address the issue are on the way.

The European Commission is drafting guidance for so-called high-risk AI systems — those that can pose a risk to people's fundamental rights — under rules that will enter into force in August 2026 at the earliest. "These guidelines will include a section on AI systems intended to influence election outcomes or referendums," said Commission spokesperson Thomas Regnier.

Developers of the most advanced AI models, such as OpenAI's GPT or Google's Gemini, have already had to comply with a series of obligations since August, including mitigating "systemic risks" to democratic processes.

Next month, Brussels will unveil another proposal, meant to support EU countries in safeguarding the fairness and integrity of election campaigns against foreign manipulation and interference. It is not expected to contain any binding legal requirements.

Eliza Gkritsi contributed reporting.