
Explainer: Anthropic’s case against the government: what the AI company says happened

By Akash Sriram
March 9, 2026 8:27 PM UTC


Anthropic logo is seen in this illustration taken May 20, 2024. REUTERS/Dado Ruvic/Illustration/File Photo


March 9 – Anthropic sued the U.S. government on Monday, escalating a dispute the AI company frames as retaliation for refusing to remove safety limits on its Claude model.

The Amazon-backed company said it was willing to work with the military. Just not on any terms.


It has also filed a related case in the U.S. Court of Appeals for the D.C. Circuit challenging a separate legal authority the government invoked.


The following account is based on allegations made by Anthropic in its lawsuit.

WHAT ANTHROPIC SAYS THE DISPUTE IS ABOUT


Anthropic said it spent years building Claude into the government’s most widely deployed frontier AI model, including on classified military networks, developing a specialized “Claude Gov” version and loosening many of its standard restrictions to accommodate national security work.

The conflict began in the fall of 2025 during negotiations over the Pentagon’s GenAI.mil platform, when the Department of Defense demanded Anthropic abandon its usage policy entirely and allow Claude to be used for, in the government’s words, “all lawful uses”.


Anthropic said it largely agreed, except on two points it considered non-negotiable: it would not allow Claude to be used for lethal autonomous warfare without human oversight or for mass surveillance of Americans.

The company said Claude has not been tested for those uses and cannot perform them safely. It also offered to help transition the work to another provider if no agreement could be reached.

Pentagon officials have offered a different account of how the dispute began. The department’s chief technology officer said publicly that tensions escalated after a U.S. raid in Venezuela, when an Anthropic executive called a counterpart at Palantir to ask whether Claude had been used in the operation.

That account does not appear in Anthropic’s complaint.

FROM ULTIMATUM TO ALL-OUT BAN


Secretary of Defense Pete Hegseth met with Anthropic CEO Dario Amodei on February 24, presenting an ultimatum: comply within four days or face one of two punishments – compulsion under the Defense Production Act, or expulsion from the defense supply chain as a “national security risk”.

Amodei rejected the demand publicly on February 26. The next day, before the 5:01 p.m. Eastern deadline expired, President Donald Trump posted a directive on Truth Social ordering every federal agency to immediately cease all use of Anthropic’s technology.

In the social media post, the president characterized Anthropic as a “RADICAL LEFT, WOKE COMPANY”.

Hours later, Hegseth announced on X that Anthropic was a “Supply-Chain Risk to National Security” and that no military contractor or supplier could do commercial business with the company.

Agencies fell in line quickly. The General Services Administration terminated Anthropic’s government-wide contract. Treasury, State, and the Federal Housing Finance Agency publicly cut ties. The Anthropic complaint alleges the Pentagon launched a major air attack on Iran using Anthropic’s tools hours after the ban.

White House spokeswoman Liz Huston said the administration would not allow a company to “jeopardize our national security by dictating how the greatest and most powerful military in the world operates,” adding that U.S. forces would “never be held hostage by the ideological whims of Big Tech leaders” and would follow the Constitution, “not any woke AI company’s terms of service”.

WHY ANTHROPIC DECIDED TO SUE


Anthropic argues the supply chain designation has no factual basis. The company points to its FedRAMP authorization, active security clearances, and years of government praise, including from Hegseth, who called Claude’s capabilities “exquisite” at the February 24 meeting.

Two senior Pentagon officials subsequently told reporters there was “no evidence of supply-chain risk” and that the designation was “ideologically driven”.

Anthropic raises five legal claims, arguing the actions violated the Administrative Procedure Act, the First Amendment and the Fifth Amendment, exceeded the president’s statutory authority, and ran afoul of the APA’s prohibition on unauthorized agency sanctions.

Reporting by Akash Sriram in Bengaluru

Our Standards: The Thomson Reuters Trust Principles.
