OpenAI has acknowledged it suspended an account linked to Jesse Van Rootselaar months before the 18-year-old carried out a mass killing in Tumbler Ridge, British Columbia, intensifying scrutiny of the shooter’s online activity and the safety practices of technology companies.
The company said it banned the ChatGPT account last June after detecting "misuses of our models in furtherance of violent activities." OpenAI said it debated whether to alert law enforcement but ultimately judged that the account activity did not meet its threshold for a referral, because it could not identify credible or imminent planning of violence. The company also said that intervening can be distressing for young people and their families and can raise privacy concerns.
Canadian Artificial Intelligence Minister Evan Solomon called company representatives to Ottawa this week to seek clarity on OpenAI’s safety protocols and its decision not to report the account to police. The conversation followed public criticism suggesting that interactions with chatbots and other online platforms may sometimes precede or even encourage violent acts.
The Tumbler Ridge attack left eight victims and the shooter dead. The assailant first killed a mother and a sibling at home, then moved to a school, where an educator and five students were shot dead and two other students were hospitalized with serious injuries. Police identified the attacker as Jesse Van Rootselaar.
Investigators from the Royal Canadian Mounted Police said the probe remains active and that certain details cannot be released because they are subject to applicable legislation or court processes. Authorities have previously said they were aware of Van Rootselaar's history of mental health issues. Police had at one point removed firearms from Van Rootselaar's residence, and those weapons were later returned.
Political and public figures have criticized OpenAI’s choice not to notify law enforcement. British Columbia Premier David Eby said the tragedy could have been prevented if OpenAI had warned authorities about violent online activity, urging the company to be more transparent. "It looks like OpenAI had the opportunity to prevent this tragedy, to prevent this horrific loss of life, to prevent there from being dead children in British Columbia," he said.
OpenAI described the shooting as "a devastating tragedy" and said it reached out to law enforcement once the shooter’s identity became public, adding that it is engaged with police to support ongoing investigative work.
Experts in criminology and youth mental health expressed mixed reactions, noting both the need for greater oversight of online platforms and the challenges that arise when private companies consider reporting individual users.
Patrick Watson, a professor of criminology at the University of Toronto who is not involved in the case, said the household where the attack began "was clearly a household where there were many problems," and called for stronger scrutiny of companies that are building new public forums with limited accountability.
Tracy Vaillancourt, a University of Ottawa professor specializing in youth mental health and violence prevention, characterized OpenAI's decision not to refer Van Rootselaar to police as "a missed opportunity," while acknowledging the tension between protecting privacy and reducing credible threats. Vaillancourt said people using platforms such as ChatGPT may fear surveillance, but argued that, given AI's capabilities, there should be better ways to reduce risk.
By contrast, Cynthia Khoo, a technology and human rights lawyer, warned against turning AI companies into de facto extensions of law enforcement. Khoo cautioned that delegating investigative or surveillance powers to private firms risks serious invasions of privacy and could disproportionately affect already marginalized communities.
Publicly visible traces of Van Rootselaar's online activity point to a history of mental health disclosures and creative projects that raised concerns. In a since-deleted Reddit post, Van Rootselaar wrote about diagnoses including attention deficit hyperactivity disorder, depression and obsessive compulsive disorder, and about being on the autism spectrum. The post also described a history of risky behavior connected to psychedelic drug use.
Van Rootselaar had also created a game in the Roblox Studio application that involved shooting characters at a mall. Roblox told authorities that it removed the account and the content from its platform the day after the Tumbler Ridge massacre, and that the game recorded only seven visits.
The shooter was born male and, according to police, had identified as female and begun transitioning six years earlier. A U.S. government report cited in commentary on the case indicates that most mass shooters are male and that transgender people account for a small proportion of such attackers.
Officials, experts and technology advocates now face difficult questions about where responsibility should rest when online platforms detect potentially violent behavior. The case underscores the tension between protecting user privacy and preventing harm, and highlights how difficult companies say it is to judge when user activity meets the threshold for reporting to authorities.
As the RCMP continues its investigation, Canadian officials are pressing AI firms for clearer safety protocols and greater transparency about how they evaluate and respond to threats. The investigation and public debate are unfolding as families and communities affected by the Tumbler Ridge tragedy await further details through legal and court processes.