Trump directs US agencies to toss Anthropic’s AI as Pentagon calls startup a supply risk

February 27, 2026 8:56 PM UTC / Reuters

By Andrea Shalal, Jeffrey Dastin and Ryan Patrick Jones

U.S. President Donald Trump walks to depart from the White House, ahead of his trip to Corpus Christi, Texas, in Washington, D.C., U.S., February 27, 2026. REUTERS/Evelyn Hockstein

  • Summary
  • Companies
  • Pentagon labels Anthropic a supply-chain risk, affecting defense contractors
  • Trump threatens further action if Anthropic is not helpful
  • President’s post orders six-month phaseout of Anthropic technology across government

WASHINGTON, Feb 27 (Reuters) – U.S. President Donald Trump said on Friday he is directing every federal agency to stop work with artificial intelligence lab Anthropic while the Pentagon declared it a supply-chain risk, capping a weeks-long fight over technology guardrails in an apparent blow to the startup’s business.

In a post on Truth Social, Trump said, “I am directing EVERY Federal Agency in the United States Government to IMMEDIATELY CEASE all use of Anthropic’s technology. We don’t need it, we don’t want it, and will not do business with them again!”

He added there would be a six-month phaseout for the Defense Department and other agencies that use the company’s products.

Meanwhile, the Pentagon’s supply-chain risk designation, typically reserved for companies in adversary nations, means that defense contractors could be barred from deploying Anthropic’s AI as part of work for the Pentagon. The defense industrial base includes tens of thousands of contractors, including major public companies.

The actions were pegged to a Friday deadline that the Pentagon set to resolve an escalating feud with San Francisco-based Anthropic, over concerns about how the military could use AI at war.

Spokespeople for Anthropic, which won a $200 million ceiling Pentagon contract last year, did not immediately respond to a request for comment.

Trump’s announcement stopped short of the Pentagon’s threat that it could invoke the Defense Production Act to require Anthropic’s compliance. But the U.S. president vowed further action if Anthropic did not cooperate going forward.

Trump warned he would use “the Full Power of the Presidency to make them comply, with major civil and criminal consequences to follow” if Anthropic did not help with the phaseout of its technology.

WEAPONS, SURVEILLANCE CONCERNS

The setback comes as AI leader Anthropic raced to win a fierce competition selling novel technology to businesses and government, particularly for national security, ahead of its widely expected initial public offering. The company has said it has not finalized an IPO decision.

At the same time, the battle over technological guardrails had raised concerns that the Department of Defense would follow U.S. law but little other constraint when deploying AI for national-security missions, regardless of safety or ethics service terms embraced by the technology’s developers.

Anthropic had sought guarantees that its AI would not be used for fully autonomous weapons or for mass domestic surveillance – applications in which the Pentagon has said it had no interest.

Anthropic was the first frontier AI lab to put its models on classified networks via cloud provider Amazon.com and the first to build customized models for national security customers, the startup has said.

Its product Claude is in use across the intelligence community and armed services.

U.S. Senator Mark Warner, a Democrat and vice chairman of the Select Committee on Intelligence, criticized the action taken by Trump, a Republican.

“The president’s directive to halt the use of a leading American AI company across the federal government, combined with inflammatory rhetoric attacking that company, raises serious concerns about whether national security decisions are being driven by careful analysis or political considerations.”

The conflict is the latest eruption in a saga that dates back at least to 2018. That year, employees at Alphabet’s Google protested the Pentagon’s use of the company’s AI to analyze drone footage, straining relations between Silicon Valley and Washington. A rapprochement ensued, with companies including Amazon and Microsoft jousting for defense business, and still more CEOs pledging cooperation last year with the Trump administration.

But theoretical “killer robots” have remained a concern held by human-rights and technology activists. At the same time, Ukraine and Gaza have become theaters for increasingly automated systems on the battlefield.

Reporting by Jeffrey Dastin in San Francisco, Ryan Patrick Jones in Toronto, Andrea Shalal in Washington and Ismail Shakil in Ottawa; Writing by Daphne Psaledakis; Editing by Caitlin Webber and Matthew Lewis

Our Standards: The Thomson Reuters Trust Principles.
