
Minister of AI Evan Solomon faces a challenge dealing with some of the disinformation currently polluting the public sphere. Christopher Katsarov/The Canadian Press

Ronald Deibert is director of the University of Toronto’s Citizen Lab and the author of Chasing Shadows: Cyber Espionage, Subversion and the Global Fight for Democracy.

“AI is democratizing intelligence: making the ability to analyze data, automate tasks and improve decision-making more accessible. … So many things that used to take tons of time now happen in seconds.”

So says Evan Solomon, Canada’s newly appointed Minister of AI, in an interview with Toronto Life magazine.

But what happens when tasks made more efficient are ones that actually corrode society? Mr. Solomon – apparently an optimist – should spend more time looking into those less heart-warming stories. They are hard to miss, and devastating in their effects.

For example, Anthropic – the San Francisco–based AI firm – recently reported that it discovered hackers had been using its Claude chatbot to conduct sweeping cybercrime operations, undertaking reconnaissance of firms’ infrastructures, creating malware to steal information, and then analyzing the data to determine what could be used to extort victims. A full suite of labour-intensive cybercrime tasks – “things that used to take tons of time,” in Mr. Solomon’s words – now fully automated, thanks to Claude.

Meanwhile, reporting from The Intercept showed the U.S. military is now pursuing AI-driven covert influence operations to “suppress dissenting arguments.” The U.S. initiative is hardly surprising. Reports of state-sponsored, AI-enabled disinformation campaigns are appearing with increasing frequency, with links to China, Russia, Iran, Rwanda, Israel and other governments.

So common are these now that nearly every world event features them. A recent New York Times investigation showed how both Israeli- and Iranian-aligned actors unleashed AI-generated videos and synthetic content during the two countries’ most recent conflict, fabricating events like air strikes on Israel’s Ben Gurion Airport or massive civil protests in support of regime change in Iran. AI-constructed disinformation floods the Russia-Ukraine and Gaza wars, too. As with cybercrime, psyops that used to take a lot of time and resources can now be done with the click of a button.

A common feature of these ops is the private firms that mount them. The convergence of powerful new AI tools with a deteriorating political climate has created a perfect ecosystem for “dark PR” companies to thrive in. Poorly regulated, ethically dubious startups now offer industrial-scale ideological manipulation as a service. Among them is Israel-based Psy-Group (rebranded as White Knight Group), whose motto “Reality is a matter of perception” is like a mashup of philosopher Jean Baudrillard and Mossad. Another Israel-based group, dubbed “Team Jorge” by the undercover journalists who outed them, has boasted of using AI-enabled disinformation to meddle in dozens of elections worldwide.

The results are predictable and frightening. A tsunami of AI-enabled disinformation is already upon us, polluting the public sphere and seeping back into the language models that AI systems feed upon. What to do?

Governments, when they’ve addressed these threats at all, have mostly misframed them as “foreign interference” – part of the age-old geopolitical competition of states. However, this is not an issue solely confined to states interfering in each other’s domestic affairs. The same systems can be deployed by a domestic lobby group or a multinational company looking to discredit a whistleblower or derail an investigation. Moreover, framing the threat in geopolitical terms typically precipitates calls to arm up or risk falling behind, adding fuel to the fire.

AI companies themselves have taken tentative steps to fix the problem, but these are far from sufficient. Firms like Anthropic and OpenAI now maintain “threat intelligence” teams to track malicious actors. As with social media, however, these units face a built-in contradiction: they are tasked with policing harms that stem directly from their own business models. They’re like poorly resourced concussion medics working on behalf of fight clubs.

Simultaneously, public interest researchers who study AI disinformation face roadblocks. Social media and AI platforms have shut down open interfaces previously used by researchers, while becoming litigious toward critics and researchers, stifling independent watchdogs.

What is needed now is not euphoric cheerleading. At minimum, governments should impose mandatory transparency reporting and independent audits of AI platforms; pass legal requirements that tech platforms be open to public-interest research; encourage investigations into the negative psycho-social effects of AI, especially on youth; and introduce sanctions targeting firms whose operations are implicated in AI malfeasance.

It says something bleak about our times that systems capable of modeling language and reasoning with remarkable efficiency are being exploited for covert purposes, deployed not to expand human understanding but to systematically subvert it. Instead of “democratizing intelligence,” as Mr. Solomon claims, we are witnessing a kind of collective brain damage. The Minister of AI should be mitigating those risks, rather than naively championing the tech behind it all.