NewsWorld
AI-powered predictive news aggregation© 2026 NewsWorld. All rights reserved.
Canada’s AI minister blames OpenAI for ‘failure’ after mass shooting

Politico Europe · Feb 25, 2026 · Collected from RSS

Summary

Ottawa says it's ready to step in on AI chatbots if safety protocols fall short.




Related Articles

BBC World · about 5 hours ago
OpenAI vows safety policy changes after Tumbler Ridge shooting

The tech firm has been criticised for not reporting the suspect's ChatGPT account to police despite it being flagged internally due to concerns over content.

Engadget · about 10 hours ago
OpenAI will notify authorities of credible threats after Canada mass shooter's second account was discovered

OpenAI has vowed to strengthen its safety protocols and to notify law enforcement of credible threats sooner in a letter addressed to Canadian authorities, according to Politico and The Washington Post. If you’ll recall, Canadian politicians summoned the company’s leaders after reports came out that it didn’t notify authorities when it banned the account owned by the Tumbler Ridge, British Columbia mass shooting suspect back in 2025. Some of OpenAI’s leaders have already met with Canadian officials, and British Columbia Premier David Eby said Sam Altman had also agreed to meet with him.

While OpenAI has yet to announce changes to its rules, Ann O’Leary, its vice president of global policy, reportedly wrote in the letter that the company will tweak its detection systems so that they can better prevent banned users from returning to the platform. After OpenAI banned the shooter’s original account over “potential warnings of committing real-world violence,” the perpetrator was able to create another account. The company only discovered the second account after the shooter’s name was released, and it has since notified authorities.

Further, OpenAI will now notify authorities if it detects “imminent and credible” threats in ChatGPT conversations, even if the user doesn’t reveal “a target, means, and timing of planned violence.” O’Leary explained that if the new rules had been in effect when the shooter’s account was banned in 2025, the company would have notified the police. OpenAI will also establish a point of contact for Canadian law enforcement so it can quickly share information with authorities when needed.

The Canadian government sees OpenAI’s decision not to report the shooter’s original account as a failure, and has threatened to regulate AI chatbots in the country if their creators cannot show that they have proper safeguards to protect users. It’s unclear at the moment whether OpenAI also plans to roll out the same changes in the US and elsewhere.

Engadget · 2 days ago
Canadian government demands safety changes from OpenAI

Canadian officials summoned leaders from OpenAI to Ottawa this week to address safety concerns about ChatGPT. The crux of the government’s concerns was that OpenAI did not notify authorities when it banned the account of a user who allegedly committed a mass shooting in British Columbia earlier this month. "The message that we delivered, in no uncertain terms, was that we have an expectation that there are going to be changes implemented, and if they're not forthcoming very quickly, the government is going to be making changes," Justice Minister Sean Fraser said of the company and its AI chatbot. It's unclear what those government-led changes or rules might be. There have been two previous, unsuccessful attempts to pass an online harms act in Canada.

A recent report by The Wall Street Journal claimed that in 2025, some OpenAI employees flagged the account of the alleged shooter, Jesse Van Rootselaar, as containing potential warnings of committing real-world violence and called for leadership to notify law enforcement. Although Van Rootselaar's account was banned for policy violations, a company rep said that the account activity did not meet OpenAI's criteria for engaging the local police.

“Those reports were deeply disturbing, reports saying that OpenAI did not contact law enforcement in a timely manner," said Canadian Artificial Intelligence Minister Evan Solomon ahead of the discussion with company leaders. "We will have a sit-down meeting to have an explanation of their safety protocols and when they escalate and their thresholds of escalation to police, so we have a better understanding of what’s happening and what they do."

OpenAI has been implicated in multiple wrongful death suits. In a December 2025 lawsuit, the company's ChatGPT was accused of encouraging "paranoid beliefs" before a man killed his mother and himself. It is also at the center of one of several wrongful death lawsuits against the makers of AI chatbots for helping teenagers plan and commit suicide.

Politico Europe · 4 days ago
Canada summons OpenAI reps over school shooting suspect’s ChatGPT account

The account was flagged internally months before the shooting.

TechCrunch · 6 days ago
OpenAI debated calling police about suspected Canadian shooter’s chats

Jesse Van Rootselaar's descriptions of gun violence were flagged by tools that monitor ChatGPT for misuse.

The Verge · 6 days ago
Suspect in Tumbler Ridge school shooting described violent scenarios to ChatGPT

The suspect in the mass shooting at Tumbler Ridge, British Columbia, Jesse Van Rootselaar, was raising alarms among employees at OpenAI months before the shooting took place. This past June, Van Rootselaar had conversations with ChatGPT involving descriptions of gun violence that triggered the chatbot's automated review system. Several employees raised concerns that the posts could be a precursor to real-world violence and encouraged company leaders to contact the authorities, but the leaders ultimately declined. According to the Wall Street Journal, they decided that Van Rootselaar's posts did not constitute a "credible and imminent risk of … Read the full story at The Verge.