Yoshua Bengio on Capitol Hill in Washington in July, 2023. (Alex Wong/Getty Images)
The federal government is preparing to make a significant investment in a non-profit founded by Yoshua Bengio, a renowned artificial-intelligence researcher who is focused on building safe and trustworthy AI systems.
Federal AI Minister Evan Solomon said that Ottawa has signed a letter of intent to provide financial backing to LawZero, which Mr. Bengio launched to develop technical solutions for what he sees as the significant risks posed by powerful AI models. That includes cybersecurity threats and the ability of these systems to deceive humans.
A source familiar with the matter said the amount under discussion is more than $100-million. The Globe and Mail is not identifying the source because they are not authorized to discuss the funding publicly.
Mr. Solomon declined to provide a figure for the investment but said it would be substantial. “This is a bet we want to make. We want to support Canadian tech,” he said in a phone interview from New Delhi, where he was attending an AI conference.
Such an amount would indeed be significant for LawZero, which is barely a year old, employs some 30 people and is tackling an immense technical problem. The federal government has made only a few large investments to back AI outfits so far. Ottawa previously gave some $240-million to Cohere Inc. to assist with training AI models.
Mr. Solomon highlighted Mr. Bengio’s credentials, noting that he is the most-cited computer scientist in the world. He also positioned the investment as part of the government’s efforts to foster trust in AI. “You have to have a regulatory strategy, a legislative strategy and a technical strategy,” he said. “It’s important to do multiple strategies because this technology is changing so fast.”
A November poll from the Angus Reid Institute found that the majority of respondents were concerned about the technology and that only 11 per cent were considered AI optimists.
Along with that of his Canadian peers Geoffrey Hinton and Richard Sutton, Mr. Bengio’s work helped pave the way for the current generation of AI technologies. He served as scientific director of Quebec’s Mila AI institute until last year.
He is also one of the field’s most vocal researchers about the dangers of AI. Since 2023, he has warned that humanity risks losing control over superintelligent AI systems of the future, particularly if they are integrated into critical infrastructure and military applications. Even today’s chatbots have shown negative traits, such as sycophancy and deception, he has argued. There is also the risk that AI systems misinterpret instructions and act in ways that are harmful to humanity.
Mr. Bengio founded LawZero last June with US$30-million in philanthropic funding to find solutions. Doing so requires a large amount of capital to hire researchers and pay for the computer-processing costs associated with building AI models.
“The vast majority is for compute, because we aim to develop technology that requires a lot of compute,” Mr. Bengio said of the letter of intent from Ottawa. “This is not a university project, but something that’s pushing the frontier of AI.”
LawZero plans to hire more researchers and employ more than 100 people next year. AI engineers are in demand, however, and the non-profit cannot offer the same salaries as top companies such as OpenAI, Anthropic and Google. “There’s no way to compete with those guys,” Mr. Bengio said.
Still, LawZero co-president Sam Ramadori said some researchers are attracted to the non-profit because of its dedication to safety. “The mission is becoming more and more critical, and the researchers know it,” he said.
Recently, employees have left AI companies in part because of the tension between turning a profit and ensuring systems are safe and ethically deployed. A safety researcher named Mrinank Sharma departed Anthropic this month, writing in a social-media post, “I’ve repeatedly seen how hard it is to truly let our values govern our actions.” (Mr. Bengio said he’s been in touch with Mr. Sharma.)
Others in the field consider some of Mr. Bengio’s concerns to be distant or outlandish, and he has been branded as a “doomer.” But Mr. Bengio has said he has become more optimistic about the ability to control powerful AI systems, in part because of the early research done at LawZero.
“I’m now quite convinced that there is a way to design AI that will give us the truth,” he told The Globe in January. “But it’s going to take time, let’s say a couple of years, to even demonstrate the methodology practically.”