Pentagon gives AI firm ultimatum: lift military limits by Friday or lose $200M deal

War Secretary Pete Hegseth delivered an ultimatum during a Tuesday meeting over Claude model’s military use limitations in $200M contract

By Morgan Phillips
Fox News

Published February 24, 2026 6:09pm EST

The Pentagon has given artificial intelligence firm Anthropic until Friday to lift restrictions on how its Claude AI system can be used by the military, warning it could cancel a $200 million contract or take other punitive steps if the company refuses, according to multiple sources familiar with the discussions.

The skirmish broke out after the Pentagon claimed Anthropic had asked whether its product was used in the January military operation to capture Venezuelan leader Nicolás Maduro, in a way that suggested the company might not approve if it had been. The Pentagon insists AI companies must allow their products to be utilized for all lawful military use cases — without company oversight or approval.

Anthropic says its red lines are the use of its products for fully autonomous weapons or for mass surveillance of Americans.

War Secretary Pete Hegseth delivered an ultimatum during a Tuesday meeting at the Pentagon with Anthropic CEO Dario Amodei, even as Hegseth praised the company’s technology and said the department wants to continue working with the firm, sources said.

Hegseth told Amodei that if the company did not allow Claude to be used for all lawful purposes, it could face termination of its Pentagon contract, designation as a supply chain risk — potentially limiting its ability to work with defense vendors — or possible invocation of the Defense Production Act to compel access to the technology, according to sources familiar with the meeting.

Claude is currently the only advanced, commercial AI model of its kind operating inside the Pentagon’s classified networks, under a $200 million contract awarded in summer 2025, significantly raising the stakes of the dispute.

Pentagon officials argue the Department of Defense cannot depend on a private company that maintains categorical restrictions on certain uses of its technology, even if those uses are lawful. During the meeting, Hegseth compared the situation to being told the military could not use a specific aircraft for a mission, according to a source familiar with the exchange.

The dispute represents an early test of who controls the guardrails on advanced AI inside U.S. defense systems — private companies or the Pentagon. The outcome could shape how the military partners with leading AI developers as it moves to integrate more powerful machine learning tools into national security operations.

Anthropic, which has branded itself as a safety-oriented AI company, has said its policies are meant to reduce the risk of misuse as advanced AI systems become more powerful.

The U.S. military reportedly used Anthropic’s AI tool Claude during the operation that captured Venezuelan leader Nicolás Maduro. (Kurt “CyberGuy” Knutsson)

During the meeting, Amodei walked through those restrictions and argued they would not interfere with lawful, legitimate War Department operations, according to a source familiar with the meeting.

A senior Pentagon official claimed its position “has nothing to do with mass surveillance or autonomous targeting” because “there’s always a human involved and the department always follows the law.”

Even as tensions rose, officials on both sides indicated that fully autonomous weapons are not currently contemplated under the department’s lawful use framework, suggesting the clash is as much about control as about battlefield applications.

War Secretary Pete Hegseth is named in the lawsuit, along with other defendants. (Julia Demaree Nikhinson/AP)


During Tuesday’s meeting, Hegseth explicitly referenced potential use of the Defense Production Act, termination of Anthropic’s existing contract and the possibility of designating the company a supply chain risk if it does not agree to allow its products to be used for all lawful purposes, sources said.

Such steps reflect two very different forms of federal leverage.

A supply chain risk designation could restrict Anthropic’s ability to work with federal vendors and contractors by signaling the company poses reliability or governance concerns, while invoking the Defense Production Act would represent a rare attempt to use national security authorities to compel access to frontier AI systems deemed critical to defense needs.

Terminating the contract would carry consequences beyond ending a vendor relationship. Because Claude is currently embedded inside the Pentagon’s classified networks in a $200 million agreement, cancellation could disrupt existing workflows and require the department to transition sensitive systems to an alternative provider.

The Pentagon is reviewing Anthropic, led by Dario Amodei, above, as a “supply chain risk.” (Priyanshu Singh/Reuters)


Pentagon officials also said Elon Musk’s Grok AI chatbot has agreed to allow its products to be used for all lawful purposes, including potential integration into classified systems, and that other frontier AI firms are “close” to similar arrangements. Grok did not immediately respond to a request for comment.

Anthropic, in a statement attributed to a company spokesperson, said: “Anthropic CEO Dario Amodei met with Secretary Hegseth at the Pentagon this morning. During the conversation, Dario expressed appreciation for the Department’s work and thanked the Secretary for his service. We continued good-faith conversations about our usage policy to ensure Anthropic can continue to support the government’s national security mission in line with what our models can reliably and responsibly do.”
