US court declines to block Pentagon’s Anthropic blacklisting for now
2026-04-08T21:54:35.488Z / Reuters
By Jack Queen
April 8, 2026 9:54 PM UTC Updated 15 mins ago
Anthropic logo is seen in this illustration taken May 20, 2024. REUTERS/Dado Ruvic/Illustration/File Photo
- Summary
- DC appeals court denies Anthropic’s request to pause Pentagon supply-chain risk label
- Anthropic claims blacklisting is retaliation for its views on AI safety
- Justice Department says decision based on contract terms, Claude AI usage restrictions
NEW YORK, April 8 (Reuters) – A Washington, D.C., federal appeals court on Wednesday declined to block the Pentagon’s national security blacklisting of AI company Anthropic for now, a win for the Trump administration that comes after another appeals court came to the opposite conclusion in a separate legal challenge by Anthropic.
Anthropic, developer of the popular Claude AI assistant, alleges that Defense Secretary Pete Hegseth overstepped his authority when he designated the company a national security supply-chain risk over its refusal to remove certain usage guardrails on its products. The label blocks Anthropic from Pentagon contracts and could trigger a government-wide blacklisting.
Anthropic executives have said the designation could cost the company billions of dollars in lost business and cause reputational harm.
A panel of judges of the U.S. Court of Appeals for the District of Columbia Circuit denied Anthropic’s bid to pause the designation while the case plays out. The decision is not a final ruling.
An Anthropic spokeswoman said in a statement following Wednesday’s ruling that the company is confident the court will ultimately agree the supply-chain risk designation is unlawful.
Acting Attorney General Todd Blanche hailed the ruling as a victory for military readiness in a social media post Wednesday.
“Military authority and operational control belong to the Commander-in-Chief and Department of War, not a tech company,” Blanche said, using Trump’s new name for the Defense Department.
The lawsuit is one of two Anthropic filed over Hegseth’s unprecedented move, which came after Anthropic refused to allow the military to use AI chatbot Claude for U.S. surveillance or autonomous weapons due to safety and ethics concerns.
Hegseth issued orders designating Anthropic under two different laws, and Anthropic is challenging each of them separately.
A California federal judge blocked one of the orders on March 26, saying the Pentagon appeared to have unlawfully retaliated against Anthropic for its views on AI safety.
The designation marked the first time a U.S. company has been publicly labeled a supply-chain risk under obscure government-procurement statutes aimed at protecting military systems from enemy sabotage or infiltration.
In its lawsuits, Anthropic says the government violated its right to free speech under the First Amendment by retaliating against it for its views on AI safety. The company also says it was not given a chance to dispute the designation, in violation of its Fifth Amendment right to due process.
The lawsuits say the designations were unlawful, unsupported by facts and inconsistent with the military’s past praise of Claude.
The Justice Department says that Anthropic’s refusal to lift the restrictions could cause uncertainty in the Pentagon over how it could use Claude and risk disabling military systems during operations, according to a court filing.
The government said its decision stemmed from Anthropic’s refusal to accept contractual terms, not its views on AI safety.
The D.C. case concerns a law that could lead to the blacklist widening to the broader civilian government following an interagency review process.
The California case deals with a narrower statute that excludes Anthropic from Pentagon contracts related to military information systems.
Reporting by Jack Queen in New York; Editing by Noeleen Walder, Cynthia Osterman and Lincoln Feast.
Our Standards: The Thomson Reuters Trust Principles.