The Pentagon keeps promising to follow the law when using AI, but what are the limits?
By Sean Lyngaas, CNN
Published May 7, 2026, 5:00 AM ET
Secretary of War Pete Hegseth looks on during a press briefing at the Pentagon on May 5, in Arlington, Virginia.
Chip Somodevilla/Getty Images
The Iran war has seen the US military use AI more than any conflict before, drawing on vast amounts of data — from satellites, signals intelligence and elsewhere — piped into software programs made by contractors like Palantir.
AI tools like Anthropic’s Claude have sifted through the data far quicker than any human could to flag potential targets to strike for commanders, according to multiple sources familiar with US operations.
The ubiquity of AI tools in war has raised questions about whether those tools are contributing to errors on the battlefield. Some congressional Democrats have pushed the Pentagon to answer questions about whether AI may have been partially at fault for a US strike in February that hit an Iranian elementary school and, according to Iranian state media, killed at least 168 children. But what are the limits on the military’s use of AI?
Defense Secretary Pete Hegseth has emphasized that humans at the Pentagon, not AI agents, make the ultimate call on who to kill in war.
“We follow the law and humans make decisions,” Hegseth told the Senate Armed Services Committee last week. “AI is not making lethal decisions.”
Pentagon spokespeople have likewise said repeatedly that the military’s use of AI follows the law.
But other than specifying that commanders are responsible for lethal targeting decisions and their consequences, the law does not place explicit limits on where AI can be used in the so-called kill chain. The speed with which AI helps commanders make those lethal decisions is raising new questions of when and how often a human needs to be involved in the process, legal experts told CNN.
The lack of restrictions has led to some very public debates about the ethics of AI in warfare. The Pentagon is in a messy legal battle with a leading American AI firm, Anthropic, after that company insisted on some limitations on how its technology might be used, with Hegseth calling the company’s CEO an “ideological lunatic” over the demand.
“The story is ultimately one of how fast you choose to — or can afford not to — run with scissors,” said Gary Corn, a former deputy legal counsel in the Office of the Chairman of the Joint Chiefs of Staff. “And we see that the approach presently is, ‘We’re going to sprint as fast as we can with scissors.’ That’s the core of the Anthropic fight.”
US Air Force Colonel John Boyd coined the phrase “OODA loop” (observe, orient, decide, act) to describe the iterative windows in battle when commanders have to make decisions. Much of the legal framework for the use of AI stems from pre-existing law that’s tied to who is responsible when those decisions are made.
“AI is exponentially increasing” the speed at which commanders and their support staff will have to navigate OODA loops in battle, said Cory Simpson, a former legal adviser to US Special Operations Command.
In war, those who get through that loop the quickest have an advantage.
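As a rough mental model, the loop can be pictured as four functions called in sequence, over and over. The sketch below is purely illustrative; every name in it is invented for this example and does not refer to any real military system.

```python
# A minimal sketch of Boyd's OODA loop as a repeating decision cycle.
# All names here are invented for illustration; none refer to a real system.

def observe(feeds):
    """Gather raw inputs (satellite imagery, signals intercepts, field reports)."""
    return list(feeds)

def orient(observations):
    """Fuse the observations into a single picture of the situation."""
    return "; ".join(observations)

def decide(picture):
    """Choose a course of action based on the fused picture."""
    return f"course of action given: {picture}"

def act(decision):
    """Execute the decision; acting changes the situation being observed."""
    print(f"executing {decision}")

def ooda_loop(feeds, cycles=1):
    # The side that completes this cycle fastest gains the advantage Boyd described.
    for _ in range(cycles):
        act(decide(orient(observe(feeds))))

ooda_loop(["satellite pass", "signals intercept"])
```

In this framing, the article’s point about speed is about how fast the whole cycle can be run end to end; AI compresses the observe and orient steps in particular.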
In a video posted to X by Palantir in March, Cameron Stanley, the Pentagon’s chief digital and AI officer, praised how Palantir’s Maven Smart System software has transformed US military targeting. He demonstrated how the software, which he said is deployed “across the entire Department [of Defense],” can identify potential military targets and move them into a “workflow” for military leaders to consider.
“This is revolutionary,” Stanley said. “We were having this done in about eight or nine systems, where humans were literally moving detections left and right in order to get to our desired end state, in this case, actually closing a kill chain.”
Rapid technological advancements mean that autonomous weapons systems can be wired to try to avoid civilians. But the technology is not ready to weigh the moral calculus of how much civilian collateral damage is acceptable in war, and experts say that judgment should never be handed over to AI in the first place. The US also faces potential adversaries that place much less emphasis on avoiding civilian casualties.
“The biggest concerns … are with the predictability and control over a capability that you put into operation,” said Corn, who is now an adjunct professor at American University’s Washington College of Law, referring to autonomous systems, including drones, that can operate without human involvement. “You have to have a confidence level that the system is going to operate within the bounds of what the law allows in targeting.”
What the law and Pentagon policy say
The law of armed conflict and international humanitarian law dictate that military commanders are responsible for minimizing, to the extent feasible, civilian casualties in war, regardless of the technology used to kill people. The commanders draw on counsel from judge advocates, attorneys embedded in commands across the military.
In 2023, as adoption of AI was expanding across the defense industry, the Pentagon issued a directive for military personnel on how to handle the technology. “Autonomous and semi-autonomous weapon systems will be designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force,” the directive says.
Another set of Pentagon guidelines, issued in the first Trump administration in 2020, used the same phrase, “appropriate levels of judgment,” to describe how officials can use AI.
The 2023 directive is still in effect, but it leaves open to interpretation what constitutes “appropriate” human judgment.
“The Department maintains in [the 2023 directive] that a human operator has always been in the loop when using autonomous capabilities,” a Pentagon official said in a statement when CNN asked about the latest legal guidance for using AI in war. “The responsibility for the lawful use of any AI tool rests with the human operator and the chain of command, not within the software itself.”
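The gate the official describes can be pictured as a design in which software may only recommend, while authorization requires an explicit, attributable human decision. The sketch below is a hypothetical illustration of that principle, not a description of any actual Pentagon system; all types and fields are invented for this example.

```python
# Hypothetical sketch of a human-in-the-loop gate: software may recommend,
# but only an explicit, attributable human decision can authorize action.
# Nothing here describes an actual Pentagon system.
from dataclasses import dataclass

@dataclass(frozen=True)
class Recommendation:
    target_id: str
    confidence: float   # the model's own score, not a legal judgment
    evidence: tuple     # provenance a human reviewer must be able to inspect

@dataclass(frozen=True)
class HumanDecision:
    approver: str       # responsibility attaches to a person, not the software
    approved: bool
    rationale: str

def authorize(rec: Recommendation, decision: HumanDecision) -> bool:
    """A recommendation is actionable only with explicit human approval."""
    if not rec.evidence:
        # A reviewer cannot exercise judgment over intelligence of unknown origin.
        raise ValueError("recommendation lacks inspectable provenance")
    return decision.approved
```

The evidence field matters as much as the approval flag: as the experts quoted later in this piece note, a commander or adviser has to know where the intelligence came from before relying on it.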
Simpson, the former Special Operations Command legal adviser, said the need for legal experts at every stage in the process, from buying a weapon to firing it, is only going to grow.
“As much as [AI] is changing the application of weapons in warfare, it is going to change the professions behind them in how they need to train differently and think about processes differently,” Simpson said.
In the late 2000s and early 2010s, the pace of US military operations in Afghanistan was somewhat limited by the ability to gather and analyze data to find potential targets, according to retired Gen. Michael “Erik” Kurilla.
Over the next decade and a half, data analytics, and later AI, allowed the US military to dramatically increase the number of strikes it could conduct against adversaries, Kurilla said last month at Vanderbilt University’s Institute of National Security.
With more data came the need for more humans to review and approve all of the potential targets and carry out missions to strike them.
AI “gives you decision advantage, taking tens of thousands and hundreds of thousands of data points to bring them to you in a more coherent fashion,” said Kurilla, who oversaw the US military’s 2025 bombing campaign against Iran.
A year later, the AI-supported “kill chain” that Kurilla helped build out has again been at work over Iran.
“At [US Central Command], we built a system that allowed us to dynamically prosecute over a thousand targets every 24 hours, with the capacity to do even more. Brad Cooper is using that same system today against Iran and improving it every day,” Kurilla said, referring to his successor at Central Command.
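For a sense of scale, a thousand targets every 24 hours works out to one target roughly every minute and a half, around the clock, which is the cadence any human review step would have to sustain. A quick back-of-the-envelope check:

```python
# Back-of-the-envelope: the review cadence implied by Kurilla's figure.
targets_per_day = 1000
seconds_per_day = 24 * 60 * 60
print(seconds_per_day / targets_per_day)  # 86.4 seconds per target, around the clock
```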
Targeting mistakes the US has made in the Iran war, including the US airstrike that hit the elementary school, are renewing scrutiny of how AI might be used by the military. It is not yet clear if AI played any role in the error of striking the school. The Pentagon is investigating the incident.
Corn said such an investigation would seek to answer the question: “Was it reasonable or unreasonable to rely on the intelligence, and by extension any AI system that may have been used and the output?”
Somewhere along the line, bad information was likely fed to the commander who approved the strike. And whether intelligence is curated by AI or not, the commander (or their advisers) has to know where it comes from.
“The AI is only as good as the data it can draw on — no different than humans are only as good as the data they can draw on,” Corn said.
CNN’s Zachary Cohen contributed to this report.
https://x.com/PalantirTech/status/2032142543022960980/video/1