AI Minister Evan Solomon during the G7 Industry, Digital and Technology Ministers’ Meeting in Montreal on Dec. 9.Christopher Katsarov/The Canadian Press
Ottawa’s forthcoming online harms and privacy bills should regulate aspects of artificial intelligence, including by introducing a requirement for platforms to label AI-generated photos and video, according to members of the government’s AI strategy task force.
Federal AI Minister Evan Solomon is preparing to publish a government AI strategy as early as next month. Last year he appointed an expert task force to advise him, including on the safety aspects of the technology.
Members of that task force have advised the minister to take action to protect children under the age of 18 from being harmed by AI models, such as chatbots.
A paper on safe AI and public trust submitted to the task force by one of its members, Taylor Owen, founding director of McGill University’s Centre for Media, Technology and Democracy, said that online platforms should have a responsibility to inform consumers when material is AI-generated.
It also said photos and videos generated by AI should include a digital watermark to help people differentiate between genuine images and those created artificially.
The paper expressed concern that AI chatbots have the potential to cause significant harm. Some have encouraged desperate young people to hide eating disorders and to end their own lives.
Prof. Owen’s submission to the task force said that while AI tools can enhance creativity, facilitate learning, and aid interaction and self-expression, they can also pose significant risks, including to children.
“Empirical studies have documented instances where AI chatbots, for example, fail to respond appropriately to users experiencing mental health crises, reinforce cognitive distortions through mirroring language and cultivate a false sense of emotional reciprocity,” Prof. Owen wrote.
He warned that AI chatbots have the ability to manipulate users, amplify disinformation, and enable non-consensual image generation and impersonation.
Prof. Owen said platforms’ responsibility to indicate AI-generated material extends to fake and misleading political content, including AI-generated videos of politicians making speeches and AI-generated social-media accounts.
He said there was “widespread distribution of fake videos” during Canada’s last federal election, and that his team at McGill identified thousands of them.
In a speech in Quebec City on Thursday, Prime Minister Mark Carney said the advent of artificial intelligence creates enormous opportunities, and can empower Canadians with new skills.
“Our upcoming AI for All strategy will begin to tackle the challenges to maximize the potential of AI for all Canadians,” he said.
Both Mr. Solomon’s privacy bill and the online harms bill, to be steered through Parliament by Canadian Identity Minister Marc Miller, are expected to be introduced within months.
Mary Wells, dean of engineering at the University of Waterloo and a member of the task force, recommended that a new AI framework should adopt a tiered approach to categorizing AI risk, focusing on effects on people rather than the underlying technology.
A memo she submitted said Canadians under 18 “should not be allowed to interact with any AI companion models (synthetic relationships) that attempt to develop an emotional bond or have been designed to be manipulative or addictive in nature.”
“Another example could be prohibiting the use and deployment of AI systems that deliberately deceive users into believing they are interacting with a human.”
Prof. Wells wrote that when Canadians consume AI-generated content, it should be clearly identified as such.
“Chatbots must be required to identify themselves as such. AI-generated media should require visible watermarks and/or other identifiers,” the paper said.
She said people should also have the right to know if their data have been used to train a commercial AI system, or are being accessed by an AI system, and must have the right to withdraw permission for continued access to those data.
James Neufeld, founder and chief executive officer of samdesk, a Canadian company that has developed AI capable of monitoring global disruptions, advised the task force on security aspects of AI. He urged the government to invest further in the development of Canadian AI systems, which he said are built with Canadian values “baked into the core.”
He said some AI systems developed abroad had used racial profiling to assess risk, but he argued Canadian systems would not do this as they are trained with Canadian values in mind.
“I’m not advocating for taking off the guardrails. However, we should maybe not shift all balance and focus towards forcing our values and rules to others that may not want to hear it, and more towards championing and adopting local technologies, which would … in many cases have Canadian values baked into the core,” he said.
Sofia Ouslis, a spokesperson for Mr. Solomon, said in an e-mail that the government’s strategy “is focused on ensuring AI delivers real benefits for Canadians – strengthening health care, modernizing public services, and supporting economic growth.”