Canadian ministers made clear to OpenAI this week that Ottawa expects immediate upgrades to the company’s safety protocols or it will compel those changes by legislation, a senior official said on Wednesday.
Ministers summoned OpenAI’s safety team for discussions on Tuesday after the ChatGPT maker disclosed it had not alerted police about an account it had banned that is now linked to an alleged mass shooter. Authorities say Jesse Van Rootselaar, 18, killed eight people on February 10 before taking her own life in a small town in British Columbia.
OpenAI told Canadian officials it had banned Van Rootselaar’s account on ChatGPT in 2025 after internal systems flagged what the company described as "misuses of our models in furtherance of violent activities." The company said it considered contacting police but concluded the account did not meet its thresholds for reporting to law enforcement, which require an imminent and credible risk of serious physical harm to others.
"The message that we delivered, in no uncertain terms, was that we have an expectation that there are going to be changes implemented, and if they’re not forthcoming very quickly, the government is going to be making changes," Justice Minister Sean Fraser told reporters.
OpenAI was not immediately available for comment.
Context and Ottawa’s legislative stance
Ottawa’s intervention follows a broader policy debate in Canada over online harms. In 2024, the federal Liberal government put forward draft legislation aimed at combating online hate, but that initiative stalled after critics said the proposal was too broad. Ministers now say they will pursue more narrowly focused measures this year.
Prime Minister Mark Carney emphasized the government’s determination to explore all lawful options. "Anything that anyone could have done to prevent that tragedy or future tragedies must be done. We will fully explore it to the full lengths of the law," he told reporters.
Questions about escalation and oversight
Federal ministers said they were alarmed by reports suggesting there may have been an opportunity to escalate concerns to police. Evan Solomon, the federal minister in charge of artificial intelligence, said authorities were "really disturbed by the reports that there might have been an opportunity to escalate this to law enforcement ... and we want to make sure if any company has that opportunity, they would escalate further."
Officials expect OpenAI to provide details on additional steps it will take; the company said on Tuesday that it would shortly update Ottawa on any further measures.
Crime experts have said that while increased scrutiny of AI platforms and social media is warranted, police and other authorities may also have missed opportunities to prevent the tragedy. Police had previously removed firearms from Van Rootselaar’s home, but those weapons were later returned.
Implications
The meeting underscores growing government attention to how AI companies evaluate and respond to potential threats discovered through their platforms. Ottawa’s warning to OpenAI signals an appetite to legislate if voluntary changes are not forthcoming, and it situates AI safety within a broader push to refine online-hate rules.
Given the sensitivity of the incident and the federal response, stakeholders in technology, public safety, and regulatory affairs will be watching for how OpenAI and other AI developers revise escalation procedures and for what legislative measures the government ultimately advances.