
What’s behind the Anthropic-Pentagon feud

February 25, 2026 / 6:33 PM EST / CBS News

Washington — The Pentagon gave Anthropic an ultimatum this week: Give the U.S. military unrestricted use of its AI technology or face a ban from all government contracts.

At the center of the issue is a question of who controls how artificial intelligence models are used: the Pentagon or the company’s CEO.

The Pentagon’s AI contracts


The Pentagon awarded Anthropic a $200 million contract in July to develop AI capabilities that would advance U.S. national security.

Anthropic’s rivals, including OpenAI, Google and xAI, were also awarded $200 million contracts by the Pentagon last year.

Anthropic is currently the only AI company to have its model deployed on the Pentagon’s classified networks, through a partnership with data analytics giant Palantir.

A senior Pentagon official told CBS News that Grok, which is owned by Elon Musk’s xAI, is on board with being used in a classified setting, and other AI companies are close.

The Pentagon announced last month that it’s looking to accelerate its uses of AI, saying the technology could help the military “rapidly convert intelligence data” and “make our Warfighters more lethal and efficient.”

Clash over the guardrails


The standoff between the Pentagon and Anthropic was reportedly set off by the U.S. military’s use of its technology, known as Claude, during the operation to capture former Venezuela President Nicolás Maduro in January.

Anthropic has repeatedly asked the Pentagon to agree to certain guardrails, among them a restriction on using Claude to conduct mass surveillance of Americans, sources told CBS News.

And the company also wants to ensure Claude is not used by the Pentagon for final targeting decisions in military operations without any human involvement, one source familiar with the matter said. Claude is not immune from hallucinations and not reliable enough to avoid potentially lethal mistakes, like unintended escalation or mission failure without human judgment, the source said.

When asked for comment, a senior Pentagon official said: “This has nothing to do with mass surveillance and autonomous weapons being used. The Pentagon has only given out lawful orders.”

Pentagon officials have expressed concerns to Anthropic that the company’s guardrails could stand in the way of critical actions, such as responding to an intercontinental ballistic missile launched toward the United States.

Any company-imposed restrictions “could create a dynamic where we start using them and get used to how those models work, and when it comes that we need to use it in an urgent situation, we’re prevented from using it,” Emil Michael, the undersecretary of defense for research, said at an event in February.

On the question of who is liable, the military or the AI company, when AI used to strike or kill military targets makes a mistake, a defense official said legality is the Pentagon’s responsibility as the end user.

What top leaders are saying


Anthropic CEO Dario Amodei has been vocal in expressing his concerns about the potential dangers of AI and has centered the company’s brand around safety and transparency.

In a lengthy essay last month, Amodei warned of the potential for abuse of the technologies, writing that “a powerful AI looking across billions of conversations from millions of people could gauge public sentiment, detect pockets of disloyalty forming, and stamp them out before they grow.”

“Democracies normally have safeguards that prevent their military and intelligence apparatus from being turned inwards against their own population, but because AI tools require so few people to operate, there is potential for them to circumvent these safeguards and the norms that support them. It is also worth noting that some of these safeguards are already gradually eroding in some democracies,” he wrote.

Amodei has long backed what he describes as “sensible AI regulation,” including rules that would require AI companies to be transparent about the risks posed by their models and any steps taken to mitigate them.

The Trump administration, meanwhile, has favored a lighter touch, and has argued that stringent AI regulations could stifle innovation and make it harder for the American AI industry to compete. The administration has sought to block what it calls “excessive” state-level regulations. At one point last year, venture capitalist and White House AI and crypto adviser David Sacks accused Anthropic of “fear-mongering” and suggested its interest in AI regulations is self-serving.

In a January speech, Defense Secretary Pete Hegseth derided what he views as “social justice infusions that constrain and confuse our employment of this technology.”

“We will not employ AI models that won’t allow you to fight wars,” Hegseth declared. “We will judge AI models on this standard alone; factually accurate, mission relevant, without ideological constraints that limit lawful military applications. Department of War AI will not be woke. It will work for us. We’re building war-ready weapons and systems, not chatbots for an Ivy League faculty lounge.”

What’s next in the Anthropic v. Pentagon saga


Hegseth gave Anthropic until Friday, Feb. 28, to agree to give the U.S. military unrestricted use of its technology or risk being blacklisted, sources familiar with the situation told CBS News.

Pentagon officials are considering invoking the Defense Production Act to compel Anthropic to comply on national security grounds.

Or, if an agreement can’t be reached, defense officials have discussed declaring the company a “supply chain risk” to push it out of government, according to the sources.

https://www.cbsnews.com/video/pentagon-at-odds-with-tech-company-anthropic-over-ai-model/
