World February 25, 2026

OpenAI’s Earlier Account Ban of Tumbler Ridge Shooter Prompts Scrutiny of Online Signals and Company Protocols

Company barred the shooter’s ChatGPT account months before the attack; Canadian officials press for clarity on safety and reporting thresholds

By Jordan Park
OpenAI confirmed it had banned the ChatGPT account of Jesse Van Rootselaar last June after identifying misuse tied to violent activities, but said it did not refer the user to law enforcement because it could not verify credible or imminent planning. The disclosure has intensified examination of the shooter’s prior online behavior, raised questions about missed intervention opportunities, and prompted Canadian officials to seek explanations of AI firms’ safety practices amid an ongoing investigation into the Tumbler Ridge killings.

Key Points

  • OpenAI banned Jesse Van Rootselaar’s ChatGPT account last June after detecting misuse tied to violent activities, but decided not to refer the account to law enforcement because it could not establish credible or imminent planning.
  • Canadian officials, including the AI minister and British Columbia’s premier, have called for explanations and greater transparency from OpenAI regarding its safety practices and decision-making.
  • The case has drawn attention to the balance between privacy protections and potential obligations of tech companies to report threats, with implications for AI, social media, and law enforcement interactions.

OpenAI has acknowledged it suspended an account linked to Jesse Van Rootselaar months before the 18-year-old carried out a mass killing in Tumbler Ridge, British Columbia, intensifying scrutiny of the shooter’s online activity and the safety practices of technology companies.

The company said it banned the ChatGPT account last June after detecting "misuses of our models in furtherance of violent activities." OpenAI said it debated whether to alert law enforcement but ultimately judged that the account activity did not satisfy the higher threshold required for referral, because it was unable to identify credible or imminent planning. The company also noted concerns that intervening can be distressing for young people and families and may raise privacy issues.

Canadian Artificial Intelligence Minister Evan Solomon summoned company representatives to Ottawa this week to seek clarity on OpenAI’s safety protocols and its decision not to report the account to police. The summons followed public criticism suggesting that interactions with chatbots and other online platforms may sometimes precede, or even encourage, violent acts.


The Tumbler Ridge attack left eight people dead, along with the shooter, and injured others. The assailant first killed a mother and sibling at home before moving to a school, where an educator and five students were shot dead and two more people were hospitalized with serious injuries. Police identified the attacker as Jesse Van Rootselaar.

Investigators from the Royal Canadian Mounted Police said the probe remains active and that certain details are subject to applicable legislation or court processes. Authorities have previously stated they were aware of Van Rootselaar’s history of mental health issues. Police had at one point removed firearms from Van Rootselaar’s residence, and those weapons were later returned.


Political and public figures have criticized OpenAI’s choice not to notify law enforcement. British Columbia Premier David Eby said the tragedy could have been prevented if OpenAI had warned authorities about violent online activity, urging the company to be more transparent. "It looks like OpenAI had the opportunity to prevent this tragedy, to prevent this horrific loss of life, to prevent there from being dead children in British Columbia," he said.

OpenAI described the shooting as "a devastating tragedy" and said it reached out to law enforcement once the shooter’s identity became public, adding that it is engaged with police to support ongoing investigative work.


Experts in criminology and youth mental health expressed mixed reactions, noting both the need for greater oversight of online platforms and the challenges that arise when private companies consider reporting individual users.

Patrick Watson, a professor of criminology at the University of Toronto who is unaffiliated with the case, said the household where the attack occurred "was clearly a household where there were many problems," and called for stronger scrutiny of companies that are building new public forums with limited accountability.

Tracy Vaillancourt, a University of Ottawa professor specializing in youth mental health and violence prevention, characterized OpenAI’s decision not to refer Van Rootselaar to police as "a missed opportunity," while acknowledging the tension between protecting privacy and reducing credible threats. Vaillancourt said people using platforms such as ChatGPT may fear surveillance, but argued that the power of AI suggests there should be better ways to reduce risks.

By contrast, Cynthia Khoo, a technology and human rights lawyer, warned against turning AI companies into de facto extensions of law enforcement. Khoo cautioned that delegating investigative or surveillance powers to private firms risks serious invasions of privacy and could disproportionately affect already marginalized communities.


Public records of Van Rootselaar’s online activity indicate a history of mental health disclosures and creative projects that raised concerns. In a now-deleted Reddit post, Van Rootselaar wrote about diagnoses including attention deficit hyperactivity disorder, depression, obsessive compulsive disorder, and being on the autism spectrum. The post also said the user had a history of risky behavior connected to psychedelic substance use.

Van Rootselaar had also created a game in the Roblox Studio application that involved shooting characters at a mall. Roblox told authorities that it removed the account and the content from its platform the day after the Tumbler Ridge massacre, and that the game recorded only seven visits.

According to police, the shooter was born male, had identified as female and had begun transitioning six years before the attack. A U.S. government report cited in public commentary indicates that most mass shooters are male and that transgender people account for a small proportion of such attackers; those figures have been referenced in the debate but do not alter the details of this case.


Officials, experts and technology advocates now face difficult questions about where responsibility should rest when online platforms detect potentially violent behavior. The case underlines tensions between protecting user privacy and preventing harm, and highlights the limits companies describe when assessing the threshold for reporting users to authorities.

As the RCMP continues its investigation, Canadian officials are pressing AI firms for clearer safety protocols and greater transparency about how they evaluate and respond to threats. The probes and public debate are unfolding while families and communities affected by the Tumbler Ridge tragedy await further details under the law and court processes.

Risks

  • Potential failures to detect or report credible threats on AI platforms could lead to public backlash, regulatory scrutiny, and policy changes affecting technology firms and the broader AI sector.
  • Efforts to deputize private technology companies to monitor and report user behavior risk significant privacy invasions and could disproportionately impact marginalized groups, raising legal and human rights concerns.
  • Uncertainty around thresholds for referral to law enforcement and limitations in platforms’ ability to identify imminent planning may leave authorities without actionable intelligence, posing challenges for public safety and policing systems.
