Internal Pentagon memo orders military commanders to remove Anthropic AI technology from key systems

March 10, 2026 / 7:08 PM EDT / CBS News

The Defense Department has officially notified senior leadership figures throughout the U.S. military that they must remove Anthropic’s artificial intelligence products from their systems within 180 days, according to an internal memorandum obtained by CBS News.

The memo was dated March 6, a day after the Pentagon formally designated Anthropic a supply chain risk. It was distributed to senior leaders on Monday, alleging Anthropic’s AI “presents an unacceptable supply chain risk for use in all [Department of War] systems and networks.”

The document, signed by Defense Department Chief Information Officer Kirsten Davies, represents the latest salvo in an escalating feud between the Trump Administration and Anthropic. The notice sheds light on the wide-ranging steps military commanders will need to take to remove Anthropic AI from key national security systems, including those for nuclear weapons, ballistic missile defense and cyber warfare.

It also demanded that any other company doing business with the Pentagon must stop using all Anthropic products on work related to Defense Department contracts within 180 days.

In the memo, Davies warned that adversaries "can exploit vulnerabilities" in the Pentagon's daily operations, and that such exploitation could pose "potential catastrophic risks to the warfighter." Davies said she is the only one who can grant an exception.

“Exemptions will only be considered for mission-critical activities directly supporting national security operations where no viable alternative exists, and the requesting Component must submit a comprehensive risk mitigation plan for approval,” she wrote.

A senior Pentagon official confirmed the memo’s authenticity.

Anthropic did not immediately respond to a request for comment.


The federal government’s action is said to be unprecedented — the first time an American company has been designated a supply chain risk. During President Trump’s first term, the government took similar action to restrict foreign-based companies like Chinese telecommunications giant Huawei.

It comes after an impasse over Anthropic’s request for two “red lines” that would explicitly prevent the U.S. military from using its Claude model to conduct mass surveillance on Americans or power fully autonomous weapons.

“We believe that crossing those lines is contrary to American values, and we wanted to stand up for American values,” Anthropic CEO Dario Amodei told CBS News.

The Pentagon previously said it wanted to be able to use Claude for "all lawful purposes," without restrictions, arguing that the uses of AI that Anthropic is concerned about are already prohibited. Claude is currently being used by the U.S. military in the war on Iran, according to sources familiar with the military's use of AI.

Anthropic is currently the only AI company whose models are deployed on the Pentagon’s classified systems. After talks between the two sides broke down last month, one of Anthropic’s largest rivals — ChatGPT creator OpenAI — said it had signed a deal with the Pentagon.

On Monday, Anthropic filed two lawsuits against the federal government, alleging that Pentagon officials’ decision to deem the company a supply chain risk amounted to illegal retaliation.

“The Constitution does not allow the government to wield its enormous power to punish a company for its protected speech,” the company said in the lawsuit. “No federal statute authorizes the actions taken here.”

White House spokesperson Liz Huston responded to the lawsuit by saying President Trump “will never allow a radical left, woke company to jeopardize our national security by dictating how the greatest and most powerful military in the world operates.”

A source directly familiar with Claude's military capabilities told CBS News that the main task Claude is undertaking for the military is sifting through large volumes of intelligence reports: synthesizing patterns, summarizing findings, and surfacing relevant information faster than a human analyst could.

“The military is now processing roughly a thousand potential targets a day and striking the majority of them, with turnaround time for the next strike potentially under four hours,” said retired Navy Admiral Mark Montgomery, now a senior director at the Foundation for Defense of Democracies. “A human is still in the loop, but AI is doing the work that used to take days of analysis — and doing it at a scale no previous campaign has matched.”
