Biden judge freezes Trump administration’s move against AI firm, fueling battle over security authority

March 27, 2026 2:19pm EDT / Fox News

A federal judge’s decision to block the Trump administration from banning AI firm Anthropic from Department of War use is igniting a debate over whether the ruling pushes courts into national security decision-making.

The ruling, issued late Thursday by U.S. District Judge Rita Lin, a Biden appointee to the Northern District of California, pauses the administration’s broader effort to bar the company while the case proceeds, though it does not explicitly require the Pentagon to use Anthropic. The judge also gave the government one week to appeal.

Under Secretary of War Emil Michael wrote on X that the ruling contained “dozens of factual errors” and was issued “during a time of conflict,” arguing it “seeks to upend the (president’s) role as Commander in Chief” and disrupt the department’s ability to conduct military operations.

Michael said the administration views Anthropic as still designated a supply chain risk pending appeal, signaling officials are disputing the scope and effect of the court’s injunction.

Lin said the Pentagon’s move to designate Anthropic as a national security risk was “likely both contrary to law and arbitrary and capricious.”

“Nothing in the governing statute supports the Orwellian notion that an American company may be branded a potential adversary and saboteur of the U.S. for expressing disagreement with the government,” Lin said.

“Can a judge order the Department of War to use a vendor that is a security risk? No, but also yes? Judge Lin (Biden N.D. California) tries to stop President Trump/Secretary Hegseth from banning Anthropic. But acknowledges they can choose not to use it?” one X user, Eric Wess, wrote on the social media platform.

Others described the ruling as “pure judicial activism” and accused the judge of interfering in a national security decision.

But supporters of the decision — including a bipartisan group of nearly 150 retired federal and state judges — say the administration overstepped, warning the Pentagon’s use of a “supply chain risk” designation appeared improperly applied and could chill free speech and legitimate business activity.

In a March 3 letter, the Pentagon had notified Anthropic it would be designated a supply chain risk to national security. That designation ordered that no contractor, supplier or partner doing business with the United States military may conduct commercial activity with Anthropic.

The legal fight follows a broader dispute between the Pentagon and Anthropic over how the company’s AI system, Claude, can be used in military operations. Claude is the only commercial AI system approved for classified use.

War Secretary Pete Hegseth had warned Anthropic it would face termination of its $200 million contract, awarded in July 2025, or be designated a supply chain risk if it did not allow its AI platform to be approved for all lawful uses.

Anthropic insisted it would not allow Claude to be used for fully autonomous weapons or mass surveillance of Americans.

Pentagon officials say such uses already are not permitted, emphasizing that humans remain in the loop for lethal decisions and that the military does not conduct domestic surveillance, but maintain that private companies cannot dictate how their systems are used in lawful operations.

Lin pointed to the breadth of the measures — including a government-wide ban and contractor restrictions — saying they did not appear “tailored to the stated national security concern” and instead “look(ed) like an attempt to cripple Anthropic.”

Anthropic welcomed the decision, saying in a statement: “We’re grateful to the court for moving swiftly, and pleased they agree Anthropic is likely to succeed on the merits.”

Hegseth described CEO Dario Amodei and Anthropic as a “master class in arrogance” and a “textbook case of how not to do business with the United States Government” in a Feb. 27 post on X.

OpenAI has emerged as a key alternative, securing a Pentagon deal to deploy its models on classified systems as tensions with Anthropic escalated.

Still, Anthropic has not been fully displaced — its Claude system remains deeply embedded in military workflows, and replacing it would take time.

https://www.foxnews.com/video/6390647244112
