
Artificial Intelligence Minister Evan Solomon says OpenAI’s recent commitments to adjust its policies in the wake of the shooting in Tumbler Ridge, B.C., do not go far enough, and he plans to meet with CEO Sam Altman.

“While we note their willingness to strengthen law enforcement referral protocols, establish direct points of contact with Canadian authorities and enhance safeguards, we have not yet seen a detailed plan for how these commitments will be implemented in practice,” Solomon said in a statement on Friday.

Senior company officials met with Solomon and other ministers earlier this week. On Thursday, Ann O’Leary, OpenAI’s vice-president of global policy, wrote in a public letter to Solomon that new policies were being put in place to address concerns about when problematic content is flagged to law enforcement.

O’Leary said in her letter that OpenAI is establishing a direct point of contact with Canadian law enforcement, upgrading its model to allow the company to direct users to local mental health supports when warranted and strengthening its detection system to help identify repeat policy violators.


Despite those commitments, Solomon said Friday that the California-based company still has questions to answer.

“The tragedy in Tumbler Ridge has raised serious questions about how digital platforms respond when credible warning signs of violence emerge. Canadians deserve greater clarity about how human review decisions are made,” the minister said in his statement.

Earlier this month, Jesse Van Rootselaar killed her mother and half-brother at the family home before going to the local secondary school, where she killed five students, an educational assistant and then herself.

It was revealed after the shooting that Van Rootselaar had a ChatGPT account that the company banned in June for posts about gun violence. OpenAI didn’t flag the account to police, and has said that Van Rootselaar’s activities didn’t meet the company’s reporting threshold at the time because it didn’t identify credible or imminent planning.

B.C. Premier David Eby said Thursday that he also expects to have a meeting with Altman.

Immediately following this week’s meeting with OpenAI officials, Solomon said he was “disappointed” the company didn’t provide thorough answers.

“We expected [OpenAI] to have some concrete proposals that we could understand, that [they] had changed their protocols in the wake of the horrific tragedy in Tumbler Ridge. But we did not hear any substantial new safety protocols outside of some changes to their model,” Solomon said Tuesday night.

New content-flagging protocol

O’Leary revealed in her letter that the company had discovered a second ChatGPT account belonging to Van Rootselaar in the wake of the murders.

The second account was flagged to police, the letter said.

O’Leary also said that the company would have flagged the shooter’s banned account to police under new safety policies it began developing “several months ago.” She didn’t specify exactly when that process began.

“With the benefit of our continued learnings, under our enhanced law enforcement referral protocol, we would refer the account banned in June 2025 to law enforcement if it were discovered today,” O’Leary’s letter reads.

Solomon has said all options are on the table when it comes to regulating OpenAI and other companies.

Most MPs on Parliament Hill on Friday agreed that the government will need to introduce legislation requiring companies to flag problematic accounts to police.

“We need to do something. What happened, the tragedy in Tumbler Ridge, I think is a lesson and protecting Canadians should be the priority of every government,” Liberal MP Gurbux Saini told CBC News.


Conservative ethics critic Michael Barrett said he would be in favour of some form of regulatory framework.

“We’re going to need to really work together as parliamentarians to get our arms around this thing because we see that there are real risks, real consequences to people,” he said.

Green Party Leader Elizabeth May said any regulations that are brought in need to “go way beyond wagging a finger” at tech companies.

“Meeting those families was really hard and it stays with you,” May said of travelling to Tumbler Ridge for a vigil earlier this month.

“The fact that something could have been done if only the rich bastards in the AI industry had reported what they knew makes it a little bit harder,” she said.