Stock Markets April 25, 2026 04:18 AM

OpenAI CEO Expresses Regret After Company Did Not Alert Police About Banned Account Linked to School Shooter

Sam Altman offers an apology to the Tumbler Ridge community after the company, citing its internal reporting standards, declined to alert police about an account tied to Jesse Van Rootselaar

By Hana Yamamoto

On April 23, OpenAI Chief Executive Sam Altman wrote to leaders in Tumbler Ridge to apologize for the company's failure to notify law enforcement about a ChatGPT account linked to Jesse Van Rootselaar. The account had been banned in June for policy violations, but OpenAI said the case did not meet its internal threshold for reporting. Altman said he has spoken with local and provincial officials and pledged cooperation to help prevent similar incidents.

Key Points

  • OpenAI Chief Sam Altman apologized on April 23 for not informing law enforcement about a ChatGPT account linked to Jesse Van Rootselaar; the account had been banned in June for policy violations.
  • Police say Jesse Van Rootselaar killed eight people in a school in February before taking her own life; Altman said he has spoken with Tumbler Ridge Mayor Darryl Krakowka and British Columbia Premier David Eby and described the community's pain as "unimaginable."
  • OpenAI stated the banned account's issues did not meet its internal criteria for reporting to law enforcement; the company has committed to working with government officials to help prevent similar tragedies.

April 25 - OpenAI Chief Executive Sam Altman issued an apology to residents of Tumbler Ridge after the company did not alert police about a ChatGPT account linked to Jesse Van Rootselaar. Authorities say Van Rootselaar killed eight people in a school in February before taking her own life.

In a letter dated April 23, Altman wrote that he was "deeply sorry" that law enforcement had not been notified about the account that OpenAI had banned in June for policy violations. The company previously stated that the banned account's conduct did not meet its internal criteria for escalating the matter to law enforcement.

Altman also detailed direct outreach with local officials. He said he spoke with Tumbler Ridge Mayor Darryl Krakowka and with British Columbia Premier David Eby, describing the scale of the community's suffering as "unimaginable." The company affirmed it is committed to working with government officials to help prevent a similar tragedy from recurring.

The sequence of events, as described by OpenAI and by local authorities, is straightforward: an account associated with Jesse Van Rootselaar was banned in June for violating OpenAI's policies; when investigators linked that account to Van Rootselaar after the February shooting, OpenAI said its internal rules at the time of the ban had not triggered a report to law enforcement.

Altman's letter and the company's prior statements together underscore the gap between content-moderation actions and thresholds for formal reporting. OpenAI's response included a personal apology from its chief executive and a stated commitment to coordinate with public officials as they look to reduce the chance of similar events in the future.

Details provided in the company's communications and in Altman's letter focus on the timeline of the account ban, the internal decision framework that determined whether a report was warranted, and the outreach to municipal and provincial leaders. The letter did not include any new operational details about changes to those internal criteria.


Context notes: The facts reported here reflect the company's statements and Altman's letter: the June banning of the account, the determination that the issues did not meet OpenAI's internal criteria for reporting to law enforcement, Altman's apology dated April 23, and his conversations with Mayor Darryl Krakowka and Premier David Eby. Local authorities attribute the February deaths to Jesse Van Rootselaar.

Risks

  • Internal content-moderation criteria may fail to trigger law enforcement notification in cases later linked to violent incidents, with implications for public safety and for trust in technology platforms.
  • Community trauma and reputational risk for the company following the revelation that a banned account was not reported could affect stakeholder confidence in the technology sector and in platform governance.
  • Uncertainty remains about whether existing internal reporting thresholds are sufficient to identify threats early, creating potential policy and operational gaps for both platform operators and public safety authorities.
