
AI ruling prompts warnings from US lawyers: Your chats could be used against you

2026-04-15 10:01:43 UTC / Reuters

By Mike Scarcella



An illustration projected on a screen shows a robot hand and a human one moving towards each other during the “AI for Good” Global Summit at the International Telecommunication Union (ITU) in Geneva, Switzerland, June 7, 2017. REUTERS/Denis Balibouse/File Photo

AI presents new issues for attorney confidentiality rules
Judge decided AI chats could not be shielded in fraud case

April 15 (Reuters) – As people increasingly turn to artificial intelligence for advice, some U.S. lawyers are telling their clients not to treat AI chatbots like trusted confidants when their freedom or legal liability is on the line.

These warnings became more urgent after a federal judge in New York ruled this year that the former CEO of a bankrupt financial services company could not shield his AI chats from prosecutors pursuing securities fraud charges against him.



In the wake of the ruling, attorneys have been advising that conversations with chatbots like Anthropic’s Claude and OpenAI’s ChatGPT could be demanded by prosecutors in criminal cases or by litigation adversaries in civil cases.

“We are telling our clients: You should proceed with caution here,” said Alexandria Gutiérrez Swette, a lawyer at New York-based law firm Kobre & Kim.

People’s discussions with their lawyers are almost always deemed confidential under U.S. law. But AI chatbots are not lawyers, and attorneys are instructing clients to take steps that could keep their communications with AI tools more private.


In emails to clients and advisories posted on their websites, more than a dozen major U.S. law firms have outlined advice for people and companies to decrease the chances of AI chats winding up in court.

Similar warnings are also appearing in engagement agreements between some firms and their clients. For instance, New York-based firm Sher Tremonte stated in a recent client contract that sharing a lawyer’s advice or communications with a chatbot could erase the legal protection known as attorney-client privilege that usually shields communications between lawyers and their clients.

A JUDICIAL RULING

The case that helped set off the alarm bells involved Bradley Heppner, the former chair of bankrupt financial services company GWG Holdings and founder of alternative asset firm Beneficent. Heppner was charged by federal prosecutors last November with securities and wire fraud, and pleaded not guilty.

Heppner had used Anthropic’s chatbot Claude to prepare reports about his case to share with his attorneys, who later argued that his AI exchanges should be withheld because they contained details from the lawyers related to his defense.

Prosecutors argued that they had a right to demand material that Heppner created with Claude because his defense lawyers were not directly involved, and because attorney-client privilege does not apply to chatbots.

Voluntarily revealing information from a lawyer to any third party can jeopardize the customary legal protections for those attorney communications.

Manhattan-based U.S. District Judge Jed Rakoff ruled in February that Heppner must hand over 31 documents generated by Anthropic’s chatbot Claude related to the case.

No attorney-client relationship exists “or could exist, between an AI user and a platform such as Claude,” Rakoff wrote.

Lawyers for Heppner did not immediately respond to requests for comment. A spokesperson for the U.S. attorney’s office in Manhattan declined to comment.

Courts already are grappling with the growing use of artificial intelligence by lawyers and people representing themselves in legal cases, which among other things has led to legal filings containing made-up cases invented by AI.

Rakoff’s decision was an important early test in the AI chatbot era for bedrock legal protections governing attorney-client communications and materials prepared for litigation.

On the same day as Rakoff’s ruling, U.S. Magistrate Judge Anthony Patti in Michigan said a woman representing herself in a lawsuit she brought against her former company did not have to hand over her chats with OpenAI’s ChatGPT about the employment claims made in the case.

Patti treated the woman’s AI chats as part of her own personal “work-product” for the case, rather than as conversations with a person whom her employer could seek to use for its defense.

ChatGPT and other generative AI programs “are tools, not persons,” Patti wrote in his order.

The privacy and usage terms for both OpenAI and Anthropic state that the companies can share data involving their users with third parties. Both also state that they require users to consult a qualified professional before relying on their chatbots for legal advice.

Rakoff at a February hearing in Heppner’s case noted that Claude “expressly provided that users have no expectation of privacy in their inputs.”

Representatives for OpenAI and Anthropic did not immediately respond to requests for comment.

LAWYERS RACE TO SET GUARDRAILS

The advice from lawyers has ranged from telling clients to select their AI platforms carefully to suggesting specific language to use in chatbot prompts.

Los Angeles-based O’Melveny & Myers and other firms said in client advisories that “closed” AI systems designed for corporate use could provide stronger protections for legal communications, though they said even that remains largely untested.

Some firms said AI legal research is more likely to be protected by attorney-client privilege when it is conducted at the direction of a lawyer. If a lawyer does advise the use of AI, a person should say so in the chatbot prompt, New York-headquartered law firm Debevoise & Plimpton said in a notice on its website.

“I am doing this research at the direction of counsel for X litigation,” the firm suggested people write.

Information about AI use is also becoming common in contracts used by law firms with clients, according to a Reuters review of contracts posted to a U.S. government website.

Sher Tremonte, which often represents white-collar criminal defendants, said in a new contract in March: “Disclosure of privileged communications to a third-party AI platform may constitute a waiver of the attorney-client privilege.”

Justin Ellis of New York-headquartered law firm MoloLamken and other lawyers said they expect that more rulings will eventually clarify when AI chats can be used as evidence.

Until then, attorneys are saying that an age-old assumption still applies: Do not talk to anyone except your lawyer about your case – including AI.

Reporting by Mike Scarcella; Editing by David Bario, Amy Stevens and Will Dunham

Our Standards: The Thomson Reuters Trust Principles.
