NewsWorld
OpenAI vows safety policy changes after Tumbler Ridge shooting

BBC World · Feb 27, 2026 · Collected from RSS

Summary

The tech firm has been criticised for not reporting the suspect's ChatGPT account to police despite it being flagged internally due to concerns over content.

Full Article

Nadine Yousif, Senior Canada reporter

Canadian officials have criticised OpenAI for failing to report the suspect's ChatGPT account to police, and say they believe the shooting could have been prevented.

OpenAI says it will strengthen its safety measures after the company failed to alert police about the Tumbler Ridge shooting suspect's ChatGPT account, despite it being flagged internally months before the attack.

In an open letter to Canadian officials, the company said the suspect was able to create a second account after the first was banned, slipping past its internal detection systems. It said it has also since changed how it reports accounts to police, and that the suspect's activity would be referred to law enforcement if it were flagged today.

An account linked to the suspect, 18-year-old Jesse Van Rootselaar, was banned by OpenAI in June 2025, seven months before the shooting.

Eight people were killed in the 10 February attack, which took place at a residence and the local secondary school in Tumbler Ridge, a small town in British Columbia, Canada. The victims included the suspect's mother and 11-year-old stepbrother, as well as five young school children and an educator. Van Rootselaar died of a self-inflicted gunshot wound, police said. The shooting was one of the deadliest in Canadian history.
Canadian officials met OpenAI senior staff earlier this week in Ottawa, after the company revealed it had shut down a ChatGPT account used by the suspect in June 2025 for violating usage terms. That account was not reported to police, however, because it did not at the time meet the company's threshold for "credible and imminent planning" of serious violence, the company said.

In its letter to Canadian officials on Thursday, penned by OpenAI's vice-president of global policy and shared with media outlets, the company said it had implemented a series of changes in recent months, including enlisting the help of "mental health and behavioural experts" to assess cases and making the criteria for referral to police "more flexible". Because of those changes, OpenAI said, it would have reported the suspect's ChatGPT account under the new guidelines. The letter does not specify when the new protocols took effect.

The company also revealed that the suspect was able to create a second account, despite being flagged by OpenAI systems in the past. That second account was shared with police after the shooting, it said.

"We commit to strengthening our detection systems to better prevent attempts to evade our safeguards and prioritize identifying the highest risk offenders," the company wrote.

OpenAI said it will also establish a direct point of contact with Canadian law enforcement so it can quickly flag any possible future cases with "potential for real world violence". That direct line of communication is one of the requests made by Canadian officials following their meeting with OpenAI staff on Tuesday.

Canada's AI minister Evan Solomon has described what occurred as a "failure".
He told reporters that he was left "disappointed" after the meeting, saying that he did not hear "any substantial new safety protocols" from OpenAI. Solomon also opened the door to future legislation on the matter if OpenAI fails to implement changes quickly. "All options for us are on the table, because at the end of the day, Canadians want to feel safe," Solomon said after Tuesday's meeting.

British Columbia Premier David Eby has said he believes the shooting would have been prevented if the company had alerted police to Van Rootselaar's account months ago.

"They tragically missed the mark in [not] bringing this information forward. The consequences of that will be borne by the families of Tumbler Ridge for the rest of their lives," Eby told reporters on Thursday.

Eby added that OpenAI's Sam Altman has agreed to meet to discuss the company's safety policies. "I think it's important that Mr Altman hear about how his team's decision not to bring this information forward has resulted in devastation," he said.



Read Original at BBC World

Related Articles

Engadget · about 9 hours ago
OpenAI will notify authorities of credible threats after Canada mass shooter's second account was discovered

OpenAI has vowed to strengthen its safety protocols and to notify law enforcement of credible threats sooner in a letter addressed to Canadian authorities, according to Politico and The Washington Post. If you'll recall, Canadian politicians summoned the company's leaders after reports came out that it didn't notify authorities when it banned the account owned by the Tumbler Ridge, British Columbia mass shooting suspect back in 2025. Some of OpenAI's leaders have already met with Canadian officials, and British Columbia Premier David Eby said Sam Altman had also agreed to meet with him.

While OpenAI has yet to announce changes to its rules, Ann O'Leary, its vice president of global policy, reportedly wrote in the letter that the company will tweak its detection systems so that they can better prevent banned users from coming back to the platform. Apparently, after OpenAI banned the shooter's original account due to "potential warnings of committing real-world violence," the perpetrator was able to create another account. The company only discovered the second account after the shooter's name was released, and it has since notified authorities.

Further, OpenAI will now notify authorities if it detects "imminent and credible" threats in ChatGPT conversations, even if the user doesn't reveal "a target, means, and timing of planned violence." O'Leary explained that if the new rules had been in effect when the shooter's account was banned in 2025, the company would have notified the police. OpenAI will also establish a point of contact for Canadian law enforcement so it can quickly share information with authorities when needed.

The Canadian government sees OpenAI's decision not to report the shooter's original account as a failure. It threatened to regulate AI chatbots in the country if their creators cannot show that they have proper safeguards to protect their users. It's unclear at the moment if OpenAI also plans to roll out the same changes in the US and elsewhere…

Engadget · 2 days ago
Canadian government demands safety changes from OpenAI

Canadian officials summoned leaders from OpenAI to Ottawa this week to address safety concerns about ChatGPT. The crux of the government's concerns was that OpenAI did not notify authorities when it banned the account of a user who allegedly committed a mass shooting in British Columbia earlier this month.

"The message that we delivered, in no uncertain terms, was that we have an expectation that there are going to be changes implemented, and if they're not forthcoming very quickly, the government is going to be making changes," Justice Minister Sean Fraser said of the company and its AI chatbot. It's unclear what those government-led changes or rules might be. There have been two previous, unsuccessful attempts to pass an online harms act in Canada.

A recent report by The Wall Street Journal claimed that in 2025, some OpenAI employees flagged the account of the alleged shooter, Jesse Van Rootselaar, as containing potential warnings of committing real-world violence and called for leadership to notify law enforcement. Although Van Rootselaar's account was banned for policy violations, a company rep said that the account activity did not meet OpenAI's criteria for engaging the local police.

"Those reports were deeply disturbing, reports saying that OpenAI did not contact law enforcement in a timely manner," said Canadian Artificial Intelligence Minister Evan Solomon ahead of the discussion with company leaders. "We will have a sit-down meeting to have an explanation of their safety protocols and when they escalate and their thresholds of escalation to police, so we have a better understanding of what's happening and what they do."

OpenAI has been implicated in multiple wrongful death suits. In a December 2025 lawsuit, the company's ChatGPT was accused of encouraging "paranoid beliefs" before a man killed his mother and himself. It is also at the center of one of several wrongful death lawsuits against the makers of AI chatbots for helping teenagers plan and commit suicide.

Politico Europe · 2 days ago
Canada’s AI minister blames OpenAI for ‘failure’ after mass shooting

Ottawa says it's ready to step in on AI chatbots if safety protocols fall short.

Politico Europe · 4 days ago
Canada summons OpenAI reps over school shooting suspect’s ChatGPT account

The account was flagged internally months before the shooting.

TechCrunch · 6 days ago
OpenAI debated calling police about suspected Canadian shooter’s chats

Jesse Van Rootselaar's descriptions of gun violence were flagged by tools that monitor ChatGPT for misuse.

The Verge · 6 days ago
Suspect in Tumbler Ridge school shooting described violent scenarios to ChatGPT

The suspect in the mass shooting at Tumbler Ridge, British Columbia, Jesse Van Rootselaar, was raising alarms among employees at OpenAI months before the shooting took place. This past June, Van Rootselaar had conversations with ChatGPT involving descriptions of gun violence that triggered the chatbot's automated review system. Several employees raised concerns that her posts could be a precursor to real-world violence and encouraged company leaders to contact the authorities, but they ultimately declined.

According to the Wall Street Journal, leaders at the company decided that Van Rootselaar's posts did not constitute a "credible and imminent risk of … Read the full story at The Verge.