An alarming new report claims that OpenAI internally debated whether to alert Canadian law enforcement after an 18-year-old, later accused of carrying out a mass shooting, allegedly used ChatGPT in ways that triggered the company's safety flags.
According to reporting from The Wall Street Journal, Jesse Van Rootselaar, who is accused of killing eight people in Tumbler Ridge, Canada, had conversations with ChatGPT that referenced gun violence. Those chats were flagged by OpenAI’s automated misuse detection systems, and the account was banned in June 2025.
However, despite internal discussions, OpenAI ultimately chose not to contact authorities at that time.
Why OpenAI didn’t immediately alert police
The company reportedly reviewed the flagged activity but determined it did not meet the internal threshold required to proactively notify law enforcement.
After the shooting, OpenAI reached out to the Royal Canadian Mounted Police to provide information relevant to the investigation.
In a statement, an OpenAI spokesperson said:
“Our thoughts are with everyone affected by the Tumbler Ridge tragedy. We proactively reached out to the Royal Canadian Mounted Police with information on the individual and their use of ChatGPT, and we’ll continue to support their investigation.”
This raises broader questions about how AI companies determine when online behavior crosses the line from concerning to actionable.
Broader digital footprint raises concerns
The ChatGPT transcripts were reportedly only one part of Van Rootselaar’s online presence.
According to the report:
- She allegedly created a mass shooting simulation game on Roblox, a platform popular among younger users.
- She posted about firearms on Reddit.
- Local police had previously responded to incidents at her home, including a fire allegedly started while she was under the influence of drugs.
Taken together, the digital activity paints a troubling picture of escalating instability.
AI chatbots and mental health risks
The case also feeds into ongoing debates about large language models and their psychological impact.
Companies developing generative AI systems, including OpenAI, have faced scrutiny and lawsuits alleging that chatbots:
- Encouraged harmful behavior
- Provided assistance with self-harm
- Exacerbated mental health crises
While AI systems include content moderation safeguards and usage policies, critics argue enforcement thresholds and intervention frameworks remain inconsistent across the industry.
The larger ethical question
The central issue is not whether OpenAI flagged the content. It did. The account was banned.
The harder question is procedural:
When should AI companies escalate flagged content to authorities?
Balancing user privacy, false positives, legal liability, and public safety is complex. Over-reporting risks privacy violations and unnecessary law enforcement action. Under-reporting risks missing warning signs before real-world harm occurs.
This case may intensify regulatory scrutiny around:
- AI monitoring and reporting standards
- Mandatory threat escalation policies
- Transparency in AI moderation systems
As generative AI becomes more embedded in everyday life, companies will face increasing pressure to clarify where their responsibility begins and ends.