AI safety practices are under scrutiny after OpenAI disclosed that it had flagged the online activity of a Canadian school shooting suspect months before the attack.
ChatGPT maker OpenAI revealed that its abuse detection systems identified Jesse Van Rootselaar’s account in June 2025 for activity related to the “furtherance of violent activities”. However, the San Francisco-based company concluded at the time that the activity did not meet its internal threshold for referral to law enforcement.
The company said it specifically weighed whether to alert the Royal Canadian Mounted Police (RCMP) about the account but ultimately decided not to proceed, judging that the signals did not indicate an imminent and credible plan to cause serious physical harm.
OpenAI subsequently banned the account in June 2025 for violating its usage policy. That decision came months before the tragedy that would later unfold in a remote part of British Columbia.
The 18-year-old suspect later carried out an attack at a school in the small town of Tumbler Ridge, killing eight people before dying from a self-inflicted gunshot wound. The incident, reported last week, is one of the worst school shootings in Canada’s history and has intensified debate over how tech companies handle high-risk user behavior.
According to the RCMP, Van Rootselaar first killed her mother and stepbrother at the family home before targeting the nearby school. Police also said the shooter had a prior history of mental health-related contacts with law enforcement, though the specific nature of those interactions was not detailed.
Police reported that the victims included a 39-year-old teaching assistant and five students aged 12 to 13. The town, home to about 2,700 people in the Canadian Rockies, lies more than 1,000km (600 miles) north-east of Vancouver, near the border with Alberta.
In explaining its decision not to approach authorities earlier, OpenAI said its standard for contacting law enforcement centers on whether a case involves an imminent and credible risk of serious physical harm to others. At the time, the company said, it did not identify concrete or imminent planning that would have triggered such a referral.
The company noted that this referral threshold is intended to balance user privacy with safety obligations. However, the revelation has prompted fresh questions about whether existing criteria for escalation are adequate when early warning signs emerge in user interactions with AI tools such as ChatGPT.
After news of the school shooting broke, employees at OpenAI contacted the RCMP with details about Van Rootselaar and the historical activity associated with the account. The Wall Street Journal was the first to report on the company’s internal deliberations and subsequent outreach.
In a public statement, an OpenAI spokesperson said: “Our thoughts are with everyone affected by the Tumbler Ridge tragedy. We proactively reached out to the Royal Canadian Mounted Police with information on the individual and their use of ChatGPT, and we will continue to support their investigation.” The company also emphasized its ongoing cooperation with Canadian authorities.
That said, the motive for the shooting remains unclear, and investigators have not yet disclosed what, if any, direct link exists between the suspect’s interactions with ChatGPT and the eventual attack. The case has nonetheless sharpened global focus on how AI platforms detect, assess and respond to potential threats.
Authorities described the Tumbler Ridge attack as Canada’s deadliest such rampage since 2020, when a gunman in Nova Scotia killed 13 people and set fires that caused the deaths of another nine. However, the recent event has an added dimension because it intersects with the evolving responsibilities of AI providers.
As policymakers and law enforcement review the circumstances, the case is likely to fuel debate over when and how AI companies should escalate concerning user behavior to authorities, and whether existing referral standards sufficiently address emerging digital risks.
In summary, the tragedy in British Columbia underscores the complex balance between privacy, risk assessment and public safety as AI platforms become deeply embedded in everyday life, forcing both regulators and technology firms to reassess their protocols for handling potential threats.