The Trump administration has long taken a hands-off stance toward artificial intelligence. Now, prompted by cybersecurity concerns over Anthropic's powerful AI model Mythos, it is weighing a government review mechanism for AI models before their public release, signaling a major shift in U.S. AI regulatory policy.
White House AI policy pivots sharply: from laissez-faire to regulation
The New York Times reported, citing U.S. officials, that President Trump is considering an executive order to create an AI working group of senior technology-industry figures and government officials, tasked with developing government review procedures to be carried out before new AI models are made public.
Last week, the White House held talks on the plan with senior executives from Anthropic, Google, OpenAI, and other companies. According to people familiar with the matter, the review mechanism may draw on the UK's existing model, with multiple government agencies jointly verifying that AI models meet specific safety standards.
The shift contrasts sharply with Trump's earlier openness. Last July he said publicly, "We have to let this baby grow and thrive; we can't stop it with stupid regulations," and shortly after taking office he scrapped Biden-era rules requiring AI developers to conduct safety assessments. Within just a few months, however, the policy direction has reversed.
Anthropic's Claude Mythos, deemed too dangerous to release, alarms the White House
One factor believed to be behind the White House's change of heart is Claude Mythos, the AI model Anthropic disclosed last month. Anthropic warned that Mythos is so adept at identifying software security vulnerabilities that it could exploit weaknesses in "all major operating systems and web browsers," potentially enabling large-scale cyberattacks, which ultimately led the company to withhold the model from public release.
The White House is on high alert, worried that a catastrophic AI-driven cyberattack would carry severe political consequences for the government. Some officials are going further, evaluating a "government first access" mechanism under which federal agencies would gain access to a model before its public release, so that its military and intelligence value could be put to use by the Pentagon.
A thorny legal dispute between Anthropic and the Pentagon
The policy adjustment is also entangled with a thorny legal dispute between the government and the company. In February of this year, Anthropic and the Pentagon entered negotiations over a $200 million contract, but Anthropic refused the Trump administration's demands. The Department of Defense then designated Anthropic a "national security supply chain risk," and Anthropic sued to block the move.
Last month, White House Chief of Staff Susie Wiles and Treasury Secretary Bessent met with Anthropic CEO Dario Amodei at the White House to discuss restoring the government's use of Anthropic technology. Afterward, both sides described the meeting as "productive."
As the AI arms race accelerates, the regulatory framework has yet to take shape
In March this year, the Trump administration named 13 technology executives, including Meta CEO Mark Zuckerberg, Nvidia CEO Jensen Huang, and Oracle founder Larry Ellison, to a new White House AI advisory committee. It also released a legislative framework calling on Congress to create a single nationwide AI policy to replace the patchwork of state-level rules.
Last week, the Pentagon also announced agreements with OpenAI, Google, Nvidia, SpaceX, Microsoft, Amazon, and Reflection to deploy those companies' AI tools on the Department of Defense's classified networks, with the aims of strengthening data analysis, improving battlefield situational awareness, and supporting soldiers' decision-making in complex combat environments.
The move shows that, however regulatory policy evolves, the U.S. government's push to expand military applications of AI is already set. How to balance encouraging innovation against controlling risk, however, remains the hardest question facing the White House.
This article, Anthropic Mythos is too powerful! The White House plans to require that new AI models pass a government safety review before release, first appeared on Chain News ABMedia.