Economy May 6, 2026 10:05 AM

White House Weighs Executive Order to Require Pre-Release Testing of AI Models

Administration officials consider a framework modeled on FDA-style approvals after Anthropic's Mythos raised cybersecurity concerns

By Sofia Navarro

The White House is considering an executive order that would mandate testing for artificial intelligence models before they are released to the public, National Economic Council Director Kevin Hassett said. The contemplated framework would impose pre-release evaluations similar in spirit to Food and Drug Administration drug approvals and could apply broadly across AI developers. The move follows disclosures about Anthropic's Mythos model and a recent expansion of a Commerce Department voluntary testing program in which several major tech companies participate.

Key Points

  • The White House is considering an executive order to require pre-release testing of AI models, according to National Economic Council Director Kevin Hassett.
  • Anthropic's Mythos model - reported to identify network vulnerabilities - prompted heightened attention; access to Mythos is limited to select large tech and financial firms, and federal agencies are seeking access for testing.
  • The Commerce Department has expanded a voluntary AI testing program, with Google, Microsoft and xAI agreeing to provide access for capability and security assessments; OpenAI and Anthropic already participate.

The White House is reviewing the possibility of issuing an executive order that would establish a formal vetting process for newly developed artificial intelligence models, National Economic Council Director Kevin Hassett said on Wednesday.

Under the proposed framework, AI systems would be required to undergo testing prior to being released publicly - a process Hassett compared to the way the Food and Drug Administration evaluates new drugs. He described the idea during remarks to Fox Business.

"We have scrambled an all of government effort and all the private sector to coordinate and make sure that before this model is released out into the wild, that it's been tested left and right, to make sure that it doesn't cause any harm to the American businesses or the American government," Hassett said.
Hassett framed the initiative as an interagency and public-private coordination effort intended to identify risks and limit potential harm to businesses and government systems. He indicated that the vehicle for such coordination could be an executive order that mandates testing prior to public deployment of AI models.

The immediate impetus for the initiative includes Anthropic PBC's disclosure that its Mythos model can detect network vulnerabilities, a capability that the company said could pose cybersecurity risks. Anthropic has limited access to Mythos, allowing only select large technology and financial firms to use the model. Hassett confirmed that the Trump administration has pursued access to Mythos for federal agencies to test government systems.

White House Chief of Staff Susie Wiles and other senior administration officials held a meeting last month with Anthropic CEO Dario Amodei, during which Mythos was among the topics discussed, Hassett said.

Hassett suggested that required testing under any executive order would "really quite likely" extend to all AI firms. He characterized Mythos as the first high-profile example prompting consideration of wider safeguards, saying, "I think that, that Mythos is the first of them, but it's incumbent on us to build a system."
The precise contours and scope of any mandatory testing requirements remain unclear. Such measures would mark a change from President Donald Trump's prior emphasis on a lighter regulatory approach to artificial intelligence.

Separately, the Commerce Department recently expanded a voluntary AI testing program. Under that program - which is managed by the department's Center for AI Standards and Innovation - Alphabet Inc.'s Google, Microsoft Corp. and xAI agreed to let the government assess their models for capabilities and security improvements. OpenAI and Anthropic were already participants in the program.

How broadly any formal testing mandate would be applied, what standards would govern evaluations, and how enforcement would be structured are outstanding questions that the administration has not yet resolved, based on Hassett's statements.

Risks

  • Cybersecurity risks associated with advanced AI models - highlighted by Anthropic's disclosure about Mythos - could affect technology firms, financial institutions and government systems.
  • Regulatory uncertainty - the scope and enforcement of any mandatory pre-release testing remain undefined, creating uncertainty for AI developers and investors in the tech sector.
  • Potential policy shift - a move toward required testing would represent a departure from prior emphasis on minimal AI regulation, which could impact compliance costs and product deployment timelines for AI companies.
