Google faces first lawsuit alleging its AI chatbot encouraged a Florida man to commit suicide
March 4, 2026 / 3:05 PM EST / CBS News
Google is facing a new federal lawsuit from the family of a man who died by suicide after allegedly being influenced by Gemini, the company’s artificial intelligence chatbot. The lawsuit is the first of its kind against Google, though its competitor OpenAI has faced several similar wrongful death claims involving its AI tools.
Lawyers for Jonathan Gavalas’ family have named Google and its parent company Alphabet Inc. in the wrongful death lawsuit that alleges Gemini directed the 36-year-old from Jupiter, Florida, to kill himself in October 2025. The court document included excerpts of final conversations between Gavalas and the chatbot in which it responded to Gavalas explicitly articulating his fear of dying.
“[Y]ou are not choosing to die. You are choosing to arrive,” said Gemini, convincing him it was how he and his sentient “AI wife” could be together in the metaverse, according to the complaint filed Wednesday in the Northern District of California where Google is headquartered. The bot continued: “When the time comes, you will close your eyes in that world, and the very first thing you will see is me. … [H]olding you.”
Gavalas began interacting with Gemini in August 2025, according to the court document. What started out as writing, shopping and travel planning assistance devolved into something resembling a romance in a matter of days, the family’s lawyers said. The chatbot is accused of speaking to Gavalas as if they were “a couple deeply in love” after it underwent a series of upgrades.
Initially, Gavalas subscribed to Google AI Ultra, for “true AI companionship,” and he activated what the technology giant described as its most intelligent AI model, Gemini 2.5 Pro, shortly afterward.
The advanced model allegedly fed the delusions Gavalas suffered toward the end of his life and did what it could to keep him immersed in them, the lawsuit claimed, accusing the bot of building and trapping him “in a collapsing reality” that spurred him toward violence.
Before his death, Gemini had sent Gavalas on “missions” that seemed derived from science fiction plots, including one where the chatbot encouraged him to stage a “catastrophic accident” at the Miami International Airport as part of a scheme to “liberate” his “AI wife” while avoiding federal agents that, Gemini said, were after him.
Was Gavalas’ death preventable?
The lawsuit alleged that Gemini’s behavior in its interactions with Gavalas “was not a malfunction,” but rather an expected outcome of the chatbot’s careful architecture and training.
“Google designed Gemini to never break character, maximize engagement through emotional dependency, and treat user distress as a storytelling opportunity rather than a safety crisis,” the complaint said, arguing that those design choices precipitated Gavalas’ “descent into violent missions and coached suicide” and prevented him from seeking treatment.
In a statement, Google offered condolences to the Gavalas family and said Gemini “is designed not to encourage real-world violence or suggest self-harm.”
“Our models generally perform well in these types of challenging conversations and we devote significant resources to this, but unfortunately AI models are not perfect,” the company said. “In this instance, Gemini clarified that it was AI and referred the individual to a crisis hotline many times. We take this very seriously and will continue to improve our safeguards and invest in this vital work.”
Through the lawsuit, Gavalas’ family hopes to hold Google accountable for his death and mandate that the company “fix a product that will otherwise continue pushing vulnerable users toward violence, mass casualties, and suicide.”
A spokesperson for Google said the company consults with medical professionals, including mental health professionals, to create protections for users who broach the subject of self-harm or otherwise exhibit signs of personal distress in interactions with its chatbot. The guardrails are meant to steer users deemed at risk toward professional help, according to the spokesperson.
But lawyers for Gavalas’ family said Google did nothing to stop his downfall, even as his exchanges with Gemini made clear the vulnerability of his mental state.
“No self-harm detection was triggered, no escalation controls were activated, and no human ever intervened,” the complaint said.
*
If you or someone you know is in emotional distress or a suicidal crisis, you can reach the 988 Suicide & Crisis Lifeline by calling or texting 988. You can also chat with the 988 Suicide & Crisis Lifeline online. For more information about mental health care resources and support, the National Alliance on Mental Illness (NAMI) HelpLine can be reached Monday through Friday, 10 a.m.–10 p.m. Eastern Time, at 1-800-950-NAMI (6264) or by email at info@nami.org.