Hegseth declares Anthropic a supply chain risk, restricting military contractors from doing business with AI giant

February 27, 2026 / 10:46 PM EST / CBS News

Defense Secretary Pete Hegseth deemed artificial intelligence firm Anthropic a “supply chain risk to national security” on Friday, following days of increasingly heated public conflict over the company’s effort to place guardrails on the Pentagon’s use of its technology.

Hegseth declared on X that effective immediately, “no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic.” The decision could have a wide-ranging impact, given the sheer number of companies that contract with the Pentagon.

“America’s warfighters will never be held hostage by the ideological whims of Big Tech. This decision is final,” Hegseth wrote.

President Trump announced earlier Friday that all federal agencies must “immediately” stop using Anthropic, though the Defense Department and certain other agencies can continue using its AI technology for up to six months while transitioning to other services.

Anthropic vowed in a statement Friday to “challenge any supply chain risk designation in court,” calling the move “legally unsound” and warning it would set a “dangerous precedent for any American company that negotiates with the government.” The company wrote that Hegseth doesn’t have the legal authority to ban military contractors from doing business with Anthropic, since a risk designation would only apply to contractors’ work with the Pentagon.

“Designating Anthropic as a supply chain risk would be an unprecedented action—one historically reserved for US adversaries, never before publicly applied to an American company,” Anthropic said.

In a social media post Friday night, OpenAI CEO Sam Altman said his company “reached an agreement with the Department of War to deploy our models in their classified network.”

“Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems. The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement,” Altman wrote, adding that OpenAI is asking the Defense Department “to offer these same terms to all AI companies, which in our opinion we think everyone should be willing to accept.”

The decision to cut off Anthropic came after a dispute with the Pentagon that highlighted sweeping disagreements about the role of AI in national security and the potential risks that the powerful technology could pose.

The company — which is the only AI firm whose model is deployed on the Pentagon’s classified networks — has sought guardrails that prevent its technology from being used to conduct mass surveillance of Americans or carry out military operations without human approval. But the Pentagon insisted any deal should allow use of Anthropic’s Claude model for “all lawful purposes.”


The Pentagon had given Anthropic a deadline of Friday at 5:01 p.m. to either reach an agreement or lose out on its lucrative contracts with the military.

The military’s position is that it’s already illegal for the Pentagon to conduct mass surveillance of Americans, and internal policies restrict the military from using fully autonomous weapons. As talks between the two sides broke down this week, Pentagon officials have publicly accused the company of seeking to impose its own views onto the military.

Hegseth called Anthropic “sanctimonious” and arrogant on Friday, and accused it of trying to “strong-arm the United States military into submission.”

“Their true objective is unmistakable: to seize veto power over the operational decisions of the United States military. That is unacceptable,” Hegseth alleged.

But Anthropic CEO Dario Amodei has argued that guardrails are necessary because Claude is not reliable enough to power fully autonomous weapons, and because powerful AI models could raise serious privacy concerns. He says the company understands that military decisions are made by the Pentagon and has never tried to limit the use of its technology “in an ad hoc manner.”

“However, in a narrow set of cases, we believe AI can undermine, rather than defend, democratic values,” Amodei said in a statement Thursday. “Some uses are also simply outside the bounds of what today’s technology can safely and reliably do.”

Amodei has been outspoken for years about the potential risks posed by unchecked AI technology, and has backed calls for safety and transparency regulations.

The company held firm to its position late Friday, writing: “No amount of intimidation or punishment from the Department of War will change our position on mass domestic surveillance or fully autonomous weapons.”

“We are deeply saddened by these developments,” Anthropic said. “As the first frontier AI company to deploy models in the US government’s classified networks, Anthropic has supported American warfighters since June 2024 and has every intention of continuing to do so.”

On Thursday, the eve of the military’s deadline to reach a deal, the Pentagon’s chief technology officer, Emil Michael, told CBS News that the Pentagon had made concessions, offering written acknowledgements of the federal laws and internal military policies that restrict mass surveillance and autonomous weapons.

“At some level, you have to trust your military to do the right thing,” said Michael, who also noted, “We’ll never say that we’re not going to be able to defend ourselves in writing to a company.”

Anthropic called that offer inadequate. A company spokesperson said the new language was “paired with legalese that would allow those safeguards to be disregarded at will.”
