
Hegseth wants Pentagon to dump Anthropic’s Claude, but military users say it’s not so easy

March 19, 2026 10:02 AM UTC / Reuters

By Mike Stone, Alexandra Alper and Raphael Satter

The Pentagon is seen from the air in Washington, U.S., March 3, 2022. REUTERS/Joshua Roberts

  • Summary
  • Companies
  • Pentagon staff reluctant to abandon Anthropic’s AI tools
  • Anthropic’s Claude deemed superior to alternatives
  • Trump administration orders phase-out due to supply-chain risk concerns

March 19 – Pentagon staffers, former officials and IT contractors who work closely with the U.S. military say they are reluctant to give up Anthropic’s AI tools, which they view as superior to alternatives, despite orders to remove them.

After a dispute between Anthropic and the Pentagon over guardrails for how the military could use its artificial intelligence tools, Defense Secretary Pete Hegseth designated the company a supply-chain risk on March 3, barring its use by the Pentagon and its contractors following a six-month phase-out.

But the move is running into resistance, with some military users dragging their feet and others preparing to revert to Anthropic’s platform in anticipation of the dispute being resolved.

“Career IT people at DoD hate this move because they had finally gotten operators comfortable using AI,” said one IT contractor. “They think it’s stupid.” The contractor said Anthropic’s Claude AI model “is the best,” while xAI’s Grok often produced inconsistent answers to the same query.

RECERTIFYING SYSTEMS COULD TAKE MONTHS

The complaints suggest uprooting Anthropic from the Pentagon’s networks will be neither quick nor painless. One contractor said recertifying systems that run on Anthropic’s products for military use could take months.

Some Pentagon officials, staff and contractors spoke anonymously because they were not authorized to speak publicly.

The Defense Department, Anthropic and xAI did not respond to requests for comment.

AI tools have become essential for the U.S. military, which uses them for tasks ranging from targeting weapons and helping plan operations to handling classified material and analyzing information.

Anthropic announced a $200 million defense contract in July 2025 and quickly became embedded in the military’s workflow. Claude became the first AI model approved to operate on classified military networks, and officials familiar with its use said adoption was strong. Within the federal government, Anthropic’s models were widely viewed as more capable than rival offerings.

Reuters has previously reported that the Pentagon used Claude tools to support U.S. military operations during the conflict with Iran, and sources said the technology remains in use despite the blacklisting. One expert described that as “the clearest signal” of how highly the Pentagon values the tool.

“It’s a substantial cost to replace those models with alternatives,” said Joe Saunders, CEO of government contractor RunSafe Security. Saunders added that those alternative systems would have to go through a long recertification process before use on classified or military networks.

In the case of an existing system being replaced with a new one, certification could take 12 to 18 months, he said.

“It’s not just costly, it’s a loss of productivity,” added Saunders, who helped the military incorporate AI chatbots.

Orders to stop using Claude are filtering through the Pentagon. One official said staff are complying because “no one wants to end their career over this,” but described the shift as wasteful.

Tasks previously handled by Claude, such as querying large datasets for information, are in some cases now being done manually with tools such as Microsoft Excel, the official said. Anthropic’s Claude Code tool was widely used within the Pentagon to write software code, several of the people said.

Losing that tool has left developers frustrated, another senior official said, while adding that they should not rely on a single tool.

TOUGH TRANSITION

Removing Claude will be a major undertaking.

For example, Palantir’s Maven Smart System – a software platform that supplies militaries with intelligence analysis and weapons targeting – uses multiple prompts and workflows that were built using Anthropic’s Claude Code, according to two people familiar with the matter. Palantir, which holds Maven-related contracts with the Defense Department and other U.S. national security agencies that have a potential value of more than $1 billion, will have to replace Claude with another AI model and rebuild parts of its software, one of the sources said.

Some staff are “slow-rolling” their replacement of Claude because they are actively using it to create workflows, which are series of automated tasks, a Pentagon technologist said.

Developers are frustrated because shifting to new AI agents would mean losing the agents they created to sift through vast amounts of data.

The Defense Department has ordered contractors, including major defense firms, to assess and report their reliance on Anthropic products and to begin winding them down. Officials and contractors say they now face a strategic question: whether to pivot quickly to OpenAI, Google or xAI, or to unwind Anthropic in a way that allows for a rapid return if the Pentagon reinstates it.

One chief information officer at a federal agency said it plans to slow-roll the phase-out, betting that the government and Anthropic will reach an agreement before the six-month deadline.

“What we are seeing play out here is the tension of adoption, both inside the Pentagon as well as the political level,” said Roger Zakheim, director of the Ronald Reagan Presidential Foundation and Institute.

Reporting by Mike Stone, Alexandra Alper and Raphael Satter in Washington; Additional reporting by David Jeans in New York; Editing by Chris Sanders, Rod Nickel

Our Standards: The Thomson Reuters Trust Principles.
