
B.C. Premier David Eby called on OpenAI to share information on what it knew about the Tumbler Ridge shooter’s violent online activity, and why the U.S.-based company did not alert authorities prior to one of the worst shootings in Canadian history.

This comes in response to the revelation that OpenAI knew — but did not inform Canadian officials — that the teenager who committed the mass shooting in Tumbler Ridge had been banned from its ChatGPT platform months before the shooting.

“From the outside, it looks like OpenAI had the opportunity to prevent this tragedy, to prevent this horrific loss of life, to prevent there from being dead children in British Columbia,” he said on Monday.

WATCH | Premier said OpenAI had opportunity to prevent tragedy:

B.C. premier says it ‘looks like’ OpenAI could have prevented Tumbler Ridge killings

British Columbia Premier David Eby on Monday called on OpenAI to explain why police weren’t told in advance about Tumbler Ridge shooter Jesse Van Rootselaar’s interactions with its ChatGPT chatbot, which were flagged internally but were only reported after the Feb. 10 killings. ‘From the outside, it looks like OpenAI had the opportunity to prevent this tragedy…. I’m angry about that,’ Eby said.

“I’m angry about that, I’m trying hard not to rush to judgment,” Eby said.

“I am trying to figure out how it could be possible that a large group of staff within an organization could bring this kind of information forward and ask that police be called and a decision be made not to do that.”

The information that OpenAI staff had internally raised concerns about the account’s activity was first reported by the Wall Street Journal.

While OpenAI confirmed to CBC News it did not report the account to police before the shooting, CBC News has not independently verified the details about internal conversations within the company.

CBC News has not independently verified reporting that says OpenAI staff had internally raised concerns about the account. (Peter Morgan/The Associated Press)

Eby said that if the American company does not come forward with the information, which it has shared with the RCMP, British Columbians will find out anyway, either through a coroner’s inquest or a public inquiry.

Eby also urged the federal government to create a national standard for when AI companies must report users plotting violence on their platforms.

“It will have to be done carefully, but ensuring a consistent standard for all AI companies across the country is required,” he said.

OpenAI representatives met with B.C. Minister of State for AI Rick Glumac in the early afternoon of Feb. 10 — the same day that RCMP say Jesse Van Rootselaar killed eight people in Tumbler Ridge, B.C., including five children and an education assistant at Tumbler Ridge Secondary School, and then killed herself.

The next morning, on Feb. 11, RCMP identified the shooter as Van Rootselaar.

Then, at 2 p.m. PT, OpenAI met with a representative from the premier’s office to discuss the company’s interest in opening an office in B.C., Eby said.

WATCH | OpenAI had banned Tumbler Ridge mass shooter’s account:

OpenAI had banned account of Tumbler Ridge shooter months before mass shooting

We’re learning more about the investigation into a mass shooting in Tumbler Ridge, B.C., on Feb. 10, when six children and two adults were killed at a high school and the shooter’s home in the small, tight-knit community in northeast B.C. ChatGPT developer OpenAI has reached out to RCMP about an account associated with shooter Jesse Van Rootselaar that had been flagged prior to the attack. The CBC’s Meera Bains reports.

At no point did OpenAI mention to government officials what it had known for months: that it had banned the teenager who would go on to commit the mass shooting in Tumbler Ridge from its platform following activity in June 2025 that, OpenAI said in a statement, amounted to “misuses of our models in furtherance of violent activities.”

On Feb. 12, OpenAI asked Eby’s office for the RCMP’s contact information.

Company says activity didn’t meet threshold

OpenAI, the company behind ChatGPT, said in a statement that the account’s activity was not determined to meet the threshold for alerting law enforcement, but that the company reached out to law enforcement following the shooting.

“This was a devastating tragedy, and we are doing all we can to support the ongoing investigation,” said OpenAI spokesperson Jamie Radice.  

“We reached out to law enforcement immediately after the identity of the shooter was made public, and we are engaged with the RCMP to support their ongoing work.”

AI and Digital Innovation Minister Evan Solomon says the federal government is concerned over OpenAI safety measures following the deadly shootings in Tumbler Ridge, B.C. (Christopher Katsarov/The Canadian Press)

Federal AI Minister Evan Solomon will meet with OpenAI’s senior safety team in Ottawa on Tuesday. 

He did not say if the government is considering legislation to better regulate AI companies, but said “all options are on the table.”

“I want them to give us details of what their protocols are, [and] what they are specifically in Canada, because Canadians demand safety,” he said.

AI regulations part of potential solution, expert says

Eby said that he believes any regulation should come from Ottawa. His government had previously tabled a bill to hold social media companies accountable to protect people from harm online, but it was placed on hold in 2024 following an agreement with the companies.

The federal government has also attempted to bring forward legislation focused on preventing harm online, but those bills were derailed by federal elections.

Vered Shwartz, an assistant professor of computer science at the University of British Columbia who specializes in artificial intelligence, says there are technical challenges in using automated content moderation to identify risks. 

She says that users could be wrongly flagged — a particular risk when trying to identify potential violence before it happens — and gave the example of a father whose account was disabled by Google after a photo of his infant son that he sent to a doctor was flagged as “harmful content.”

However, Shwartz says there are benefits to bringing in regulations like those proposed by Eby.

“I think it should be part of the solution, I just think that it’s really tricky to draw the lines of what kind of content should be flagged versus not,” the professor said.