World February 25, 2026

Canada Presses OpenAI to Strengthen Safety Measures or Face Legislation

Ottawa demands rapid changes after a banned ChatGPT account linked to a suspected mass shooter raised questions about escalation to police

By Hana Yamamoto

Canadian ministers told OpenAI to rapidly improve its safety procedures or the government will impose changes through law, following revelations that an account banned from ChatGPT was linked to an alleged mass shooter. The move follows a Feb. 10 attack in British Columbia and comes amid plans to revive targeted online-hate legislation.

Key Points

  • Canadian ministers demanded immediate improvements to OpenAI’s safety processes or said the government will legislate - impacts technology and regulatory sectors.
  • OpenAI had banned the account linked to the alleged shooter but concluded it did not meet internal thresholds for notifying law enforcement; the company said systems flagged "misuses of our models in furtherance of violent activities" - impacts AI development and platform governance.
  • Ottawa plans to revisit online-hate legislation with more targeted measures after a previous 2024 draft stalled - impacts legal, communications and social media sectors.

Canadian ministers made clear to OpenAI this week that Ottawa expects immediate upgrades to the company’s safety protocols or it will compel those changes by legislation, a senior official said on Wednesday.

Ministers summoned OpenAI’s safety team for discussions on Tuesday after the ChatGPT maker disclosed it had not alerted police about an account it had banned that is now linked to an alleged mass shooter. Authorities say Jesse Van Rootselaar, 18, killed eight people on February 10 before taking her own life in a small town in British Columbia.

OpenAI told Canadian officials it had banned Van Rootselaar’s ChatGPT account in 2025 after internal systems flagged what the company described as "misuses of our models in furtherance of violent activities." The company said it considered contacting police but concluded the account did not present an imminent and credible risk of serious physical harm to others, and therefore did not meet its internal thresholds for reporting to law enforcement.

"The message that we delivered, in no uncertain terms, was that we have an expectation that there are going to be changes implemented, and if they’re not forthcoming very quickly, the government is going to be making changes," Justice Minister Sean Fraser told reporters.

OpenAI was not immediately available for comment.


Context and Ottawa’s legislative stance

Ottawa’s intervention follows a broader policy debate in Canada over online harms. In 2024, the federal Liberal government put forward draft legislation aimed at combating online hate, but that initiative stalled after critics said the proposal was too broad. Ministers now say they will pursue more narrowly focused measures this year.

Prime Minister Mark Carney emphasized the government’s determination to explore all lawful options. "Anything that anyone could have done to prevent that tragedy or future tragedies must be done. We will fully explore it to the full lengths of the law," he told reporters.


Questions about escalation and oversight

Federal ministers said they were alarmed by reports suggesting there may have been an opportunity to escalate concerns to police. Evan Solomon, the federal minister in charge of artificial intelligence, said authorities were "really disturbed by the reports that there might have been an opportunity to escalate this to law enforcement ... and we want to make sure if any company has that opportunity, they would escalate further."

Officials expect OpenAI to detail the additional steps it will take; the company said on Tuesday it would shortly update Ottawa on further measures.

Crime experts quoted by officials have said that while increased scrutiny of AI platforms and social media is warranted, there may also have been missed opportunities by police or other authorities to prevent the tragedy. Police had previously removed firearms from Van Rootselaar’s home, but those weapons were later returned.


Implications

The meeting underscores growing government attention to how AI companies evaluate and respond to potential threats discovered through their platforms. Ottawa’s warning to OpenAI signals an appetite to legislate if voluntary changes are not forthcoming, and it situates AI safety within a broader push to refine online-hate rules.

Given the sensitivity of the incident and the federal response, stakeholders in technology, public safety, and regulatory affairs will be watching for how OpenAI and other AI developers revise escalation procedures and for what legislative measures the government ultimately advances.

Risks

  • OpenAI’s existing thresholds for reporting to police may be judged insufficient, prompting regulatory changes that affect AI and tech companies' compliance costs - impacts technology and legal sectors.
  • If legislative action is taken quickly, companies may face new mandatory reporting obligations that could affect platform operations and moderation policies - impacts social media, AI and compliance-focused markets.
