White House meets AI firm Anthropic amid political tensions, Pentagon dispute
Anthropic’s Mythos Preview model has uncovered thousands of previously unknown security flaws in critical systems
By Morgan Phillips Fox News
Published April 17, 2026 4:10pm EDT
Anthropic’s new AI model raises alarms over safety, cybersecurity concerns
OthersideAI co-founder and CEO Matt Shumer joins ‘The Sunday Briefing’ to discuss Anthropic’s new AI model, Mythos, and growing concerns over its advanced capabilities and potential cybersecurity risks.
One month after President Donald Trump ordered a government-wide halt on artificial intelligence firm Anthropic’s technology following a clash with the Pentagon, the company’s CEO is back at the White House for high-level talks — as officials reconsider whether a system they sidelined over national security and political concerns may be too important to ignore.
A source familiar with the meeting told Fox News that White House chief of staff Susie Wiles met with Anthropic CEO Dario Amodei on Friday.
Anthropic’s new artificial intelligence model, Mythos Preview, is considered so advanced that the company has restricted its release, limiting access to a small group of partners over concerns about potential misuse.
The meeting signals a rapid reversal inside the Trump administration, as officials weigh whether a system previously flagged as a national security risk could also be critical to defending U.S. infrastructure — exposing a growing internal tension over how to handle powerful AI tools with both defensive and offensive potential.
“Anthropic CEO Dario Amodei today met with senior administration officials for a productive discussion on how Anthropic and the U.S. government can work together on key shared priorities such as cybersecurity, America’s lead in the AI race, and AI safety. The meeting reflected Anthropic’s ongoing commitment to engaging with the U.S. government on the development of responsible AI. We are grateful for their time and are looking forward to continuing these discussions,” an Anthropic spokesperson told Fox News Digital.
The talks come despite a recent clash inside the Trump administration, as officials reconsider a company the Pentagon flagged as a supply chain risk. Its ties to former Biden officials and past criticism of Trump by its CEO have added a political dimension to the debate over whether its technology should return to government use.
A source familiar with the meeting told Fox News that White House chief of staff Susie Wiles met with Anthropic CEO Dario Amodei on Friday. (Chance Yeh/Getty Images for HubSpot)
The model's potential, and the risks that come with it, have already triggered tensions inside the U.S. government.
Pentagon clash, legal fight and reversal put Anthropic back in play
The meeting comes after a sharp break between Anthropic and the Pentagon earlier in 2026.
Defense Secretary Pete Hegseth designated the company a national security “supply chain risk,” effectively cutting it out of military systems and barring contractors from using its technology.
Anthropic is now challenging the designation in court, having filed multiple lawsuits against the Pentagon and other federal agencies that argue the "supply chain risk" label is unlawful and retaliatory.
The designation, which effectively bars contractors from using Anthropic’s technology and has been compared to measures typically reserved for foreign adversaries, already has faced conflicting rulings in federal court, with one judge temporarily blocking parts of the policy while an appeals court declined to halt its enforcement. The legal fight is ongoing, leaving contractors and agencies navigating uncertainty over whether and how Anthropic’s systems can be used.
The move followed a dispute over how the Pentagon could use Anthropic’s AI.
The company declined to grant open-ended authorization for “all lawful purposes,” instead insisting its systems not be used for mass domestic surveillance or fully autonomous weapons. While Pentagon officials said they do not rely on AI for either purpose, they rejected being constrained by a private company’s restrictions.
Trump then directed federal agencies to stop using Anthropic’s models altogether, escalating the standoff beyond the Defense Department into a government-wide halt.
Now, just weeks later, the company is back in high-level talks with the White House as officials weigh whether its new Mythos system — despite the earlier ban — could shift the balance of cyber defense and attack.
Political ties and past criticism may complicate White House talks
The dispute also has taken on a political dimension.
Amodei previously has drawn attention for his criticism of Trump, at one point likening him to a “feudal warlord” in a pre-2024-election Facebook post, according to a Wall Street Journal report.
In an internal message posted on Anthropic’s Slack platform and later leaked to The Information, Amodei suggested the Trump administration’s dispute with the company was driven in part by its refusal to offer what he described as “dictator-style praise.”
The message, written during a rapid escalation of tensions in early March, later was cited by the Wall Street Journal and other outlets. Amodei subsequently apologized for the tone, saying the post did not reflect his considered views.
When asked about Anthropic’s governance, hiring and broader political ties, a White House official said the administration “continues to proactively engage across government and industry to protect the United States and Americans,” including “working with frontier AI labs to ensure their models help secure critical software vulnerabilities.”
The official added that “any new technology that would potentially be used or deployed by the federal government requires a technical period of evaluation for fidelity and security,” and said “the collective effort of all involved will ultimately benefit industry, and our country, as a whole.”
Amodei previously has drawn attention for his criticism of Trump, at one point likening him to a "feudal warlord" in a pre-2024-election Facebook post, according to a Wall Street Journal report. (Patrick Sison/AP Photo)
Beyond the immediate dispute, the company’s broader ties to Washington also have drawn attention.
Anthropic’s governance structure has also drawn attention as the administration weighs closer engagement. The company is overseen in part by an independent “Long-Term Benefit Trust,” an unusual mechanism designed to give nonfinancial stakeholders influence over corporate decisions.
The trust holds special voting shares that allow it to appoint and eventually control a majority of the company’s board, with members drawn from national security, public policy and global development backgrounds.
Current trustees include Clinton Health Access Initiative CEO Neil Buddy Shah, Carnegie Endowment president Mariano-Florentino Cuéllar, a Democrat who was appointed to the California Supreme Court by former Gov. Jerry Brown in 2014, and Center for a New American Security CEO Richard Fontaine — who advised John McCain’s 2008 presidential campaign. The group is a mix of policy and national security leaders that underscores the company’s deep ties to Washington and global policy circles.
Anthropic’s backers also have placed it at the center of overlapping tech, policy and political networks.
Early funding for the company included investments from figures such as Facebook co-founder Dustin Moskovitz and former Google CEO Eric Schmidt, both longtime Democratic donors, and a major early investment from Sam Bankman-Fried’s FTX.
At the same time, the company has since attracted a broad range of major institutional investors — including Amazon, Google and Microsoft — reflecting its growing role in the global AI race and complicating efforts to characterize it along purely political lines.
The company also has brought several former Biden administration officials into key policy roles, further embedding Anthropic in Washington's AI policy ecosystem. Among them is Tarun Chhabra, a former National Security Council official who now leads the company's national security policy work, as well as other advisers and staff with experience shaping federal AI and technology strategy.
Anthropic also has sought to build ties across party lines as it expands its presence in Washington.
The company employs policy staff with Republican backgrounds, including legislative analyst Benjamin Merkel and lobbyist Mary Croghan, and in February added Chris Liddell — a former deputy White House chief of staff under Trump — to its board. It has contributed $20 million to Public First Action, a bipartisan group that backs candidates from both parties who support AI regulation.
A federal judge's decision to block the Trump administration from banning AI firm Anthropic from Department of War use is igniting a debate over whether the ruling pushes courts into national security decision-making. (Samyukta Lakshmi/Bloomberg via Getty Images; Eugene Hoshiko/Pool/Reuters)
The company has also faced criticism from within the Trump administration.
White House AI adviser David Sacks has accused Anthropic of pursuing a “regulatory capture” strategy, arguing the firm is using concerns about AI safety to push rules that could benefit its own position while slowing competitors.
Anthropic has pushed back on those claims, saying its approach reflects genuine concerns about the risks posed by advanced AI systems.
New AI system could reshape cyber warfare, raising alarms inside US government
The new technology could help developers identify and fix long-standing security flaws, but it could also give hackers a powerful new tool to target U.S. businesses and government systems.
“Given the rate of AI progress, it will not be long before such capabilities proliferate, potentially beyond actors who are committed to deploying them safely,” Anthropic said in its announcement. “The fallout — for economies, public safety, and national security — could be severe.”
Anthropic has not released Mythos publicly, instead limiting access through a program called Project Glasswing, where a select group of companies use the model to scan critical systems for vulnerabilities.
Pages from the Anthropic website and the company's logos are displayed on a computer screen in New York on Thursday, Feb. 26, 2026. (Patrick Sison/AP Photo)
The company says the system has already uncovered thousands of previously unknown flaws — some decades old — underscoring both its defensive value and the risk it could be used to accelerate cyberattacks if the technology spreads.
Fox Business’ Edward Lawrence contributed to this report.