
OpenAI details layered protections in US defense department pact

February 28, 2026 11:05 PM UTC / Reuters

OpenAI logo is seen in this illustration taken May 20, 2024. REUTERS/Dado Ruvic/Illustration/File Photo

  • Summary
  • OpenAI’s deal with Pentagon includes additional safeguards
  • OpenAI’s contract enforces three red lines for AI use
  • Agreement prohibits AI use for autonomous weapons systems
  • OpenAI says it opposes labeling Anthropic as a supply-chain risk

Feb 28 (Reuters) – OpenAI said on Saturday that the agreement it struck a day earlier with the Pentagon to deploy technology on the U.S. defense department’s classified network includes additional safeguards to protect its use cases.

U.S. President Donald Trump on Friday directed the government to stop working with Anthropic, and the Pentagon said it would declare the startup a supply-chain risk, dealing a major blow to the artificial intelligence lab after a showdown about technology guardrails. Anthropic said it would challenge any risk designation in court.



Soon after, rival OpenAI, which is backed by Microsoft, Amazon, SoftBank and others, announced its own deal late on Friday.

“We think our agreement has more guardrails than any previous agreement for classified AI deployments, including Anthropic’s,” OpenAI said on Saturday.

The AI firm said that the contract with the Department of Defense, which the Trump administration has renamed the Department of War, enforces three red lines: OpenAI technology cannot be used for mass domestic surveillance, to direct autonomous weapons systems, or for any high-stakes automated decisions.


“In our agreement, we protect our red lines through a more expansive, multi-layered approach. We retain full discretion over our safety stack, we deploy via cloud, cleared OpenAI personnel are in the loop, and we have strong contractual protections,” OpenAI said.

The Pentagon signed agreements worth up to $200 million each with major AI labs in the past year, including Anthropic, OpenAI and Google. The Pentagon is seeking to preserve all flexibility in defense and not be limited by warnings from the technology’s creators against powering weapons with unreliable AI.

OpenAI cautioned that any breach of its contract by the U.S. government could trigger a termination, though it added, “We don’t expect that to happen.”

The company also said rival Anthropic should not be labeled a “supply-chain risk,” noting, “We have made our position on this clear to the government.”



Reporting by Mrinmay Dey in Mexico City and Ananya Palyekar in Bangalore; Editing by Cynthia Osterman and Andrea Ricci

Our Standards: The Thomson Reuters Trust Principles.
