How the military is using AI in war

March 18, 2026 / 11:05 AM EDT / CBS News

With Anthropic’s AI systems being ushered out of the Pentagon, a battle is brewing among other major artificial intelligence firms looking to capitalize on this potentially lucrative opening and shape the way AI is integrated into America’s military defense.

Earlier this month, the Pentagon called for Anthropic’s AI technology to be removed from military operations within six months — the result of an escalating feud between the company’s chief executive and the Trump administration. An internal Pentagon memo hinted that Anthropic’s artificial intelligence was being used in key areas of national security, including nuclear weapons, ballistic missile defense and cyber warfare.

Sources familiar with the U.S. military’s use of artificial intelligence tell CBS News that AI programs — including one created by Anthropic, which the Trump administration has deemed a supply chain risk — are likely being deployed as part of the U.S. operation against Iran.

While the Pentagon has not said exactly how AI tools are being deployed, CBS News spoke with several experts with knowledge of military operations who described the likely scenarios.

“The military is now processing roughly a thousand potential targets a day and striking the majority of them, with turnaround time for the next strike potentially under four hours,” said retired Navy Admiral Mark Montgomery, senior director of the Foundation for Defense of Democracies’ Center on Cyber and Technology Innovation. “A human is still in the loop, but AI is doing the work that used to take days of analysis — and doing it at a scale no previous campaign has matched.”

How AI is used by the military

The Pentagon uses AI in the ways many consumers do — to summarize and distill lots of information at once. According to former Pentagon officials, by analyzing documents, video, and images coming in from the battlefield, AI can help the military war-game out scenarios to minimize casualties and determine which weapons can be most effective.

“We’re living through a military revolution driven by the digital revolution,” said CBS News national security analyst Aaron McLean. “Today’s revolution is driven by the explosion of data: cameras everywhere, smartphones, connected cars. The battlefield is now flooded with information in ways that were unimaginable a generation ago.”

With so much data available, AI has become instrumental in contextualizing it for military personnel at a speed far beyond traditional human analysis.

“There’s now far more data than any room of analysts could process on timelines that matter. AI algorithms sift through it to build targeting packages, assign strike assets and assess damage — nearly instantly,” McLean said.

“The Israel missile defense example makes this visceral: when hundreds of drones and missiles are inbound over a few hours, no human team can decide in real time which ones to intercept, with what, and when. That’s what AI is doing.”

So far, Anthropic’s large language model, Claude, is the only large-scale AI system that’s been operational on the Defense Department’s classified systems.

AI is also used for other administrative functions like research, policy development and procurement, according to Josh Gruenbaum, the commissioner of the Federal Acquisition Service, a government agency which helps decide which goods and services to use.

“Our goal has been, and remains, to help agencies become comfortable using this technology and turbocharging output and efficiencies for the American taxpayer, while maintaining an evenhanded approach that welcomes American innovators who strengthen agency missions and enable the lawful deployment of these tools by government without inappropriate impediment,” Gruenbaum told CBS News.

How AI works with physical weapons

AI doesn’t exist in a vacuum on the battlefield — there is still plenty of human oversight and physical technology, from aircraft carriers to drones, much of it built by legacy defense contractors like Northrop Grumman, Boeing and Lockheed Martin. The large language models that power AI are not flying the planes or firing the missiles, but they are being used to do a lot of analysis before those things are done.

According to Montgomery, this advancement has compressed operation time from days to hours.

“It’s an important enabler in the military’s ability to rapidly plan and execute war fights,” Montgomery told CBS News, emphasizing that there are still humans in the process, but that AI is used to help plan potential strikes.

A source directly familiar with the military capabilities of Anthropic’s Claude AI told CBS News the main task Claude is doing is sifting through large amounts of intelligence reports, like synthesizing patterns, summarizing findings and surfacing relevant information faster than a human analyst could.

The targeting process remains human-driven, the source said. While Anthropic’s U.S. Government Usage Policy does allow the Defense Department to use Claude for analyzing foreign intelligence, the terms of use require humans to make any decisions on military targets.

CBS News has not been able to independently verify whether Claude systems were used in a Feb. 28 strike that hit a girls’ school in Iran for which the U.S. was likely responsible.

AI is a significant boost to operations, but war could still be fought without it. More traditional legacy contractors still make the vast majority of weapons, according to Montgomery.

“This war is being fought by weapons, 98% by weapons provided by the traditional primes, and they’re doing very well,” Montgomery said. He added that you could fight a war without AI, but it would be “less desirable.” “It definitely is playing a role that will probably only grow campaign after campaign after campaign,” he said.

Big tech’s role in the military — and what’s changing

In July, the Pentagon signed a $200 million contract with the artificial intelligence company Anthropic to integrate Claude into Pentagon systems. That contract has since been canceled following a dispute between the Pentagon and Anthropic’s leaders about who should have final say in setting restrictions on how Claude is used by the military.

Now, the company is suing the federal government, alleging retaliation. “The Constitution does not allow the government to wield its enormous power to punish a company for its protected speech. No federal statute authorizes the actions taken here,” Anthropic said in the lawsuit.

Microsoft and workers from OpenAI and Google have filed amicus briefs in support of Anthropic’s lawsuit.

The Pentagon has a six-month off-ramp period to remove Anthropic’s products from its systems, and is still using them in Iran, despite the supply chain risk designation.

Meantime, other companies are getting in on the action. Google announced in a blog post on Tuesday that it is rolling out AI agents for non-classified military uses. On the heels of Anthropic’s fallout with the Defense Department in late February, Sam Altman, CEO of Anthropic rival OpenAI, posted on X about using the ChatGPT maker’s artificial intelligence models in the Pentagon’s classified network. The company then posted about language in their deal with the Pentagon honoring what they refer to as their three red lines on using AI: autonomous lethal weapons, mass surveillance of Americans, and high-stakes automated decisions.
