US judge blocks Pentagon’s Anthropic blacklisting for now

March 26, 2026 11:10 PM UTC / Reuters

By Jack Queen

The Pentagon logo is seen behind the podium in the briefing room at the Pentagon in Arlington, Virginia, U.S., January 8, 2020. REUTERS/Al Drago/File Photo

Summary:
  • Anthropic alleges US Defense secretary overstepped his authority
  • Pentagon says national security risk designation is lawful
  • Judge Lin’s ruling is not final, case still pending

March 26 (Reuters) - A U.S. judge on Thursday temporarily blocked the Pentagon’s blacklisting of Anthropic, the latest turn in the Claude maker’s high-stakes fight with the military over AI safety on the battlefield.

Anthropic’s lawsuit in California federal court alleges that Defense Secretary Pete Hegseth overstepped his authority when he designated Anthropic a national security supply-chain risk, a label the government can apply to companies that expose military systems to potential infiltration or sabotage by adversaries.

Anthropic alleged the government violated its right to free speech under the First Amendment of the Constitution by retaliating against it for its views on AI safety. The company said it was not given a chance to dispute the designation, in violation of its Fifth Amendment right to due process.

U.S. District Judge Rita Lin, an appointee of former Democratic President Joe Biden, agreed with the company in a 43-page ruling, but said it would not take effect for seven days to give the administration a chance to appeal.

Hegseth’s unprecedented move, which followed Anthropic’s refusal to allow the military to use AI chatbot Claude for U.S. surveillance or autonomous weapons, blocked Anthropic from certain military contracts. Anthropic executives have said it could cost the company billions of dollars in lost business and reputational harm.

Anthropic says that AI models are not reliable enough to be safely used in autonomous weapons and that it opposes domestic surveillance as a violation of rights, but the Pentagon says private companies should not be able to constrain military action.

In Thursday’s ruling, Lin said the administration’s actions did not appear to be directed at the government’s stated national security interests, but rather to punish Anthropic.

“The record supports an inference that Anthropic is being punished for criticizing the government’s contracting position in the press,” Lin wrote.

“Punishing Anthropic for bringing public scrutiny to the government’s contracting position is classic illegal First Amendment retaliation,” the judge added.

Anthropic spokesperson Danielle Cohen said the company was pleased with the decision.

“While this case was necessary to protect Anthropic, our customers, and our partners, our focus remains on working productively with the government to ensure all Americans benefit from safe, reliable AI,” Cohen said in a statement.

Anthropic’s designation marked the first time a U.S. company has been publicly named a supply-chain risk under an obscure government-procurement statute aimed at protecting military systems from foreign sabotage.

Anthropic’s March 9 lawsuit says the decision was unlawful, unsupported by facts and inconsistent with the military’s past praise of Claude.

The Justice Department countered that Anthropic’s refusal to lift the restrictions could cause uncertainty in the Pentagon over how it could use Claude and risk disabling military systems during operations, according to a court filing.

The government said the designation stemmed from Anthropic’s refusal to accept contractual terms, not its views on AI safety.

Anthropic has a second lawsuit pending in Washington, D.C., over a separate Pentagon supply-chain risk designation that could lead to its exclusion from civilian government contracts.

Reporting by Jack Queen in New York and Kanishka Singh in Washington; additional reporting by Andrew Chung; editing by Noeleen Walder, Matthew Lewis and Stephen Coates

Our Standards: The Thomson Reuters Trust Principles.
