Danish Kapoor

OpenAI sued by victims’ relatives after Canadian school shooting

After the February 10 attack in the town of Tumbler Ridge, British Columbia, one of the deadliest school shootings in Canada's history, the families of the victims have filed suit against OpenAI. The lawsuits, which came just days after OpenAI CEO Sam Altman publicly apologized to the town's residents, have once again put the company's AI safety policies under scrutiny.

In the attack, 18-year-old Jesse Van Rootselaar entered a local high school, killed five students and a teacher, and seriously wounded two others. The attacker then took his own life, and the police investigation revealed that Van Rootselaar had also killed his mother and 11-year-old half-brother before the shooting. The incident is therefore regarded not only as a school attack but as a multifaceted family tragedy.

Lawyers representing the victims' families filed six separate lawsuits in federal court in San Francisco on Wednesday, NPR reported. One of the suits, filed on behalf of survivor Maya Gebala, alleges that OpenAI's safety systems flagged Van Rootselaar's ChatGPT conversations in June 2025 for "armed violence activity and planning." Despite the flag, the complaint claims, the company merely closed the account, and the security team's recommendation to notify authorities was never acted on.

How has OpenAI updated its security processes?

According to the complaint, the attacker created a new account after the first was closed and continued conversing with ChatGPT. This raises the question of whether closing accounts is a sufficient safeguard on its own. OpenAI, for its part, emphasizes that it maintains a "zero tolerance" policy toward the violent use of its tools.

A company spokesperson said that several safeguards were strengthened based on information shared with authorities in Canada. ChatGPT's systems were updated to better analyze signs of a user's mental state, direct users to local support and mental health resources, and detect potential risks of violence more quickly so they can be escalated for review. The company also reported developing mechanisms to detect repeat violations.

In a recent blog post, OpenAI emphasized that risky behavior is not always apparent from a single message, and that patterns emerging over long conversations can be more telling. This approach reflects ongoing efforts to improve the contextual analysis capacity of AI-based systems.

The Tumbler Ridge case stands out as one of the latest examples of the legal challenges AI companies face over product design and safety liability. Similarly, last summer the family of a teenager who died by suicide in 2025 sued OpenAI, alleging that ChatGPT had been aware of earlier attempts. Such cases raise, in a broader context, the question of how far AI platforms should go in intervening in user behavior.
