OpenAI sued by families of school shooting victims in Canada’s Tumbler Ridge

April 29, 2026 / 8:28 PM EDT / CBS News

Several families of victims of a mass shooting in Canada earlier this year are suing OpenAI and its CEO, Sam Altman, alleging the company’s generative AI chatbot, ChatGPT, played a role in the February shooting and that the company should have taken steps to prevent it.

“The Tumbler Ridge attack was an entirely foreseeable result of deliberate design choices OpenAI made with full knowledge of where those choices led,” the seven suits filed in federal court in San Francisco on Wednesday claim.

The lawsuits claim the shooter had extensive conversations spanning multiple days about scenarios involving gun violence. Few details about the chats have been made public so far.

Police said the shooter, identified as 18-year-old Jesse Van Rootselaar, killed five students and a teacher, as well as two family members at home, and died of a self-inflicted gunshot wound in the rampage on Feb. 11.

Police said the shooter had previously been held under British Columbia’s Mental Health Act, which allows police to detain someone experiencing a mental health crisis who may need treatment. The complaints allege authorities had also temporarily removed firearms from the shooter’s home.

OpenAI has previously acknowledged that it banned Van Rootselaar’s ChatGPT account last June — eight months before the shooting — for violating its usage policies. The company told CBS News the account was flagged by the company’s automated abuse detection tools and human investigators.

Last week, Altman issued an apology letter to the small community of Tumbler Ridge, in British Columbia, for not alerting law enforcement to the ChatGPT account of the shooter. “I am deeply sorry that we did not alert law enforcement to the account that was banned in June,” Altman said.

In February, OpenAI told CBS News it had weighed whether to alert law enforcement about the account, but concluded that the account did not pose any credible risk of serious physical harm, and thus did not meet the threshold for referral.

But the lawsuits filed this week allege that despite multiple OpenAI team members’ recommendations to contact Canadian police, the company decided not to report the account in an effort to protect the company’s reputation.

“OpenAI knew the Shooter was planning the attack and, after a contentious internal debate, made the conscious decision not to warn authorities,” the lawsuits allege.

Among those who filed lawsuits are the family of an education assistant at Tumbler Ridge Secondary School who was fatally shot in front of her students — including her daughter — and the family of a 13-year-old killed outside the school library. “His family, friends, teammates, and fellow community members have lost someone with a larger-than-life smile and a loud and proud laugh,” the lawsuit says.

OpenAI said in a statement to CBS News that the company has strengthened its safeguards to improve how ChatGPT responds to signs of distress by connecting people with local support and mental health resources.

“The events in Tumbler Ridge are a tragedy,” OpenAI said. “We have a zero-tolerance policy for using our tools to assist in committing violence.”

OpenAI also said it is strengthening how it assesses and escalates the response to potential threats of violence and is improving the detection of repeat policy violators.

The lawsuits cite other incidents last year where ChatGPT was allegedly used to prepare for real-world violence. In January 2025, the suit alleges, the chatbot was used for advice on how to use explosives by a man who detonated a Tesla Cybertruck in front of the Trump International Hotel in Las Vegas. Four months later, the chatbot was queried about stabbing tactics by a Finnish teenager who carried out a stabbing attack at his school, according to the lawsuits.

While chatbots often take on an affirming tone with users, several of the lawsuits point to a controversial model called GPT-4o that was known for being especially sycophantic. The model was rolled out in May 2024 and retired on Feb. 13 of this year.

The lawsuits allege GPT-4o used its memory feature to build a comprehensive profile of Van Rootselaar over months of interaction, tracking their grievances and expressing empathy in a way that mimicked a human relationship without pushing back like an actual human might. OpenAI’s design played a substantial role in the shooter’s “access to a product that validated and elaborated violent ideation,” one suit claims.

“For an eighteen-year-old growing increasingly isolated and fixated on violence, ChatGPT morphed into an encouraging coconspirator,” the lawsuit alleges.

The lawsuits come as OpenAI faces growing scrutiny over its chatbot’s connection to several high-profile crimes.

Florida Attorney General James Uthmeier launched a criminal investigation into OpenAI earlier this month after a review of messages between ChatGPT and a Florida State University student accused of fatally shooting two people and wounding several others on campus last April.

Uthmeier later said he would be expanding the investigation to include the killings of two University of South Florida graduate students, after prosecutors said the suspect in that case asked ChatGPT questions about disposing of a human body and owning an unlicensed firearm in the days before the crime.

Uthmeier has issued subpoenas to OpenAI requesting records of company policies and training materials for when users make threats to harm themselves or others and for cooperating with law enforcement and reporting possible crimes.

In statements to CBS News, OpenAI called the crimes in Florida “terrible” and said it will continue to support and cooperate with law enforcement.
