OpenAI says the company found a second ChatGPT account belonging to the Tumbler Ridge, B.C., shooter after her name was made public — even though another account was banned in June for posts about gun violence.
The revelation comes in a letter written by Ann O’Leary, OpenAI’s vice-president of global policy, addressed to Artificial Intelligence Minister Evan Solomon.
The second account was flagged to police, the letter said.
O’Leary also wrote that the California-based company would’ve flagged the shooter’s initial account to police under new safety policies the company started to develop “several months ago.”
“Mental health and behavioural experts now help us assess difficult cases, and we have made our referral criteria more flexible to account for the fact that a user may not discuss the target, means and timing of planned violence in a ChatGPT conversation but that there may be potential risk of imminent violence,” O’Leary wrote in the letter that has been shared with media.
It’s not clear from the letter when the new protocol took effect. But the company didn’t flag Jesse Van Rootselaar to police when it banned her first account.
WATCH | More about ChatGPT and the shooting:
Could OpenAI have prevented the Tumbler Ridge mass shooting?
Canadian officials have been scrambling to figure out what OpenAI knew about the Tumbler Ridge, B.C., mass shooter after a bombshell Wall Street Journal report revealed that OpenAI had banned the shooter’s ChatGPT account months before the tragedy but decided not to report the account to authorities at the time.
OpenAI has said Van Rootselaar’s activities didn’t meet the company’s threshold for informing law enforcement because the company didn’t identify credible or imminent planning at the time.
“With the benefit of our continued learnings, under our enhanced law enforcement referral protocol, we would refer the account banned in June 2025 to law enforcement if it were discovered today,” O’Leary’s letter reads.
Earlier this month, Van Rootselaar killed her mother and half-brother at the family home before going to the local secondary school, where she killed five students, an educational assistant and then herself.
B.C. Premier David Eby said Thursday that he believes the company could’ve prevented the tragedy had it alerted police.
“They tragically missed the mark in [not] bringing this information forward. The consequences of that will be borne by the families of Tumbler Ridge for the rest of their lives,” Eby told reporters.
The premier said that OpenAI CEO Sam Altman has agreed to meet with him to discuss the company’s safety policies.
“I think it’s important that Mr. Altman hear about how his team’s decision not to bring this information forward has resulted in devastation,” he said.
Eby said members of his staff met with company officials on Thursday but he refused to take part, insisting on a meeting with Altman.
Ottawa considering new regulations
O’Leary’s letter comes after senior company officials met with Solomon and other ministers.
Solomon said after the meeting that he was disappointed the company didn’t provide thorough answers.
“We expected [OpenAI] to have some concrete proposals that we could understand, that [they] had changed their protocols in the wake of the horrific tragedy in Tumbler Ridge. But we did not hear any substantial new safety protocols outside of some changes to their model,” Solomon said Tuesday night.
WATCH | OpenAI ‘did not provide any concrete proposals,’ minister says:
OpenAI ‘did not provide any concrete proposals’ in meeting after Tumbler Ridge shooting: AI minister
Artificial Intelligence Minister Evan Solomon reiterated his disappointment on Wednesday following a meeting with OpenAI’s senior safety team held in the wake of the Tumbler Ridge, B.C., shooting. The tech company confirmed the ChatGPT account of the teen shooter had been flagged internally last June, but not reported to police.
O’Leary’s letter outlined commitments the company is making to address the government’s concerns, including: establishing a direct point of contact with Canadian law enforcement, upgrading its model to direct users to local mental health supports when warranted, and strengthening its detection systems to help identify repeat policy violators.
A spokesperson from Solomon’s office said the government is reviewing the letter and “will have more to say in the coming days.”
Solomon said this week that the government is considering legislation to bring in new regulations in the wake of the shooting and that all options are on the table.
Eby said Thursday that any rules must apply to all AI companies.
“We don’t want an AI company operating in Canada that says, ‘Hey, sign up with us. We’re the company that doesn’t tell the cops if you’re planning a violent attack,'” he said.