Generative artificial intelligence tools are a “societal disaster” and a “major threat” to truth and democracy, a Trinity College Dublin academic has warned.

An Oireachtas committee meeting to discuss the role of truth and democracy in an “AI-driven world” heard views that generative AI produces “plausible output without any regard for truth or accuracy”.

Citing a deepfake video that depicted then-presidential candidate Catherine Connolly withdrawing from the election days before the public cast their votes, Abeba Birhane said such technology can “erode the foundations of democratic life”.

Describing the video as a “high-stakes” example, Birhane noted it amassed tens of thousands of views on Facebook before it was removed by Meta.

The assistant professor of AI and director of Trinity’s AI Accountability Lab told committee members that as the AI industry, social media and search platforms grow “less trustworthy”, they erode the “foundations of democratic life” such as trust and accountability.

Labelling generative AI, which includes tools such as ChatGPT and X’s Grok, a “societal disaster”, Birhane said it is a “major threat to truth, democratic processes, information ecosystems, knowledge production and the entire social fabric itself”.

Birhane said that platforms including Facebook, Google and OpenAI’s ChatGPT are now operating at an “infrastructural scale in Ireland, shaping information, communication and access to knowledge”.

“Yet their algorithms remain opaque, their governance remains private, with minimal democratic accountability to the public who depend on them,” she said.

“Large tech and AI companies, despite selling promises of innovation and societal benefit, monetise and undermine the very society they claim to serve. What is needed is not just regulation, but active enforcement.”

Ella Jakubowska, head of policy at advocacy group European Digital Rights, argued the EU’s landmark AI Act, which is “supposed to create a framework for accountability”, is falling victim to the “EU’s broad deregulation agenda”.

She told committee members that amendments to the Act published late last year propose “taking the teeth out of this law and turning it into a piece of self-regulation”.

“This agenda is being pushed at the highest levels of the European Commission, under the banner of ‘simplification’,” she said, arguing that instead, core protections were being “reopened”.


The amendments, which include proposed delays to some regulations, came amid intense pressure from big tech companies and the US government, alongside efforts to make the EU more competitive.

Separately, Social Democrats TD Sinéad Gibney questioned whether establishing the AI Office within the Department of Enterprise before it becomes an independent State agency could “influence its later development”.

Jakubowska said she shared Gibney’s concerns, adding: “One of our key demands for the AI Act implementation across the member states has been for market surveillance authorities to be completely independent.

“We think that’s the only way they can guarantee that they can perform their duties.”
