Short Answer

Both the model and the market expect Trump to order a pre-release federal review of AI models before January 1, 2027, with no compelling evidence of mispricing.

1. Executive Verdict

  • Trump's previous executive orders prioritized deregulation and voluntary agreements.
  • The White House is reportedly drafting an Executive Order for AI model vetting.
  • National security events, such as cyberattacks, could compel an AI executive order.
  • Advisors reportedly proposed mandatory AI model vetting, similar to FDA approval.
  • Voluntary CAISI agreements enable AI model security testing without enforcement.
  • Market confidence in a federal AI review order appears low.

Who Wins and Why

Outcome Market Model Why
Before Jun 1, 2026 18.0% 19.3% Trump's previous executive orders consistently emphasized deregulation over mandatory federal reviews for AI.
Before Jul 1, 2026 38.0% 36.5% Voluntary agreements have been pursued instead of mandatory measures under Trump's policy approach.
Before Jan 1, 2027 55.0% 53.4% The White House has denied reports of a mandatory executive order for pre-release AI model review.
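The "no compelling evidence of mispricing" claim can be made concrete by comparing the model's probabilities against the market's for each bucket. The sketch below (probabilities copied from the table above; the 5-point threshold is an arbitrary illustrative choice, not part of the report's methodology) flags any bucket where the divergence would suggest an edge:

```python
# Hypothetical mispricing check: flag outcome buckets where the model's
# probability diverges from the market's by more than a chosen threshold.
# Bucket names and probabilities come from the table above.

BUCKETS = {
    "Before Jun 1, 2026": {"market": 0.180, "model": 0.193},
    "Before Jul 1, 2026": {"market": 0.380, "model": 0.365},
    "Before Jan 1, 2027": {"market": 0.550, "model": 0.534},
}

def mispriced(buckets, threshold=0.05):
    """Return {bucket: model - market} for divergences above the threshold."""
    return {
        name: round(p["model"] - p["market"], 3)
        for name, p in buckets.items()
        if abs(p["model"] - p["market"]) > threshold
    }

print(mispriced(BUCKETS))  # → {} : no bucket diverges by 5+ points
```

With the largest model-market gap at 1.6 points, nothing clears even a modest threshold, which is consistent with the verdict above.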

Current Context

The White House is considering new federal reviews for high-risk AI models. On May 4, 2026, The New York Times reported that the White House was discussing an executive order to establish an AI working group tasked with reviewing artificial intelligence models before their public release [^][^]. This initiative, which involved briefings with executives from companies such as Anthropic, Google, and OpenAI, would focus primarily on high-risk AI applications to address cybersecurity vulnerabilities and national security concerns [^][^]. Such a move could represent a departure from previous deregulatory positions [^][^]. Concurrently, on May 5, 2026, voluntary agreements under the Center for AI Safety and Innovation (CAISI) were announced with Google DeepMind, Microsoft, and xAI for pre-deployment testing of their models [^].
Uncertainty surrounds a formal order despite ongoing White House discussions. As of May 9, 2026, no executive order regarding pre-release federal review of AI models has been signed [^]. Statements differ: on May 6, Kevin Hassett indicated that an order was being studied and might be signed within two weeks, while the White House said only that President Trump would make an announcement [^]. Prior executive orders, including one in December 2025, have prioritized deregulation and emphasized federal preemption over state-level AI regulations [^][^]. Prediction markets reflect this uncertainty: as of May 5, Polymarket showed a 21% probability of "YES" on a formal order by May 31, 2026, while Lines.com indicated 32.5% for the same event [^][^].

2. Market Behavior & Price Dynamics

Historical Price (Probability)

This prediction market has exhibited a sideways trading pattern, with the probability fluctuating within a narrow range of 17.0% to 25.0%. The market opened at 17.0% and is currently trading at 18.0%, indicating minimal net change over the period. The most significant price movement was an early spike to a high of 25.0% on May 7. This increase appears to be a direct reaction to reports from May 4 that the White House was considering an executive order for pre-release reviews of high-risk AI models. However, the market did not sustain this higher probability, quickly returning to the 18.0% level.
Volume patterns suggest that initial conviction behind the price spike was weak. The move to 25.0% occurred on zero volume, while the first significant trading activity, totaling over 300 contracts, materialized as the price fell back to 18.0% on May 9. This indicates that traders were more willing to transact at the lower probability, effectively rejecting the higher price. The overall trading volume of over 6,500 contracts suggests moderate but not overwhelming interest in the market. The price action has established a clear resistance level at the 25.0% peak and a support level around the 17.0% opening price.
Overall, the chart suggests that market sentiment is skeptical about the likelihood of a pre-release review order. Despite news that such a policy is under consideration, the market has consistently priced the probability below 25%. The failure to hold the price spike and the subsequent consolidation near the lows of the range imply that traders do not believe the reported discussions will necessarily lead to the specific executive action required for this market to resolve as 'YES'. The stable, low-probability trading range indicates a consensus that such a move remains unlikely.

3. Significant Price Movements

Notable price changes detected in the chart, along with research into what caused each movement.

📉 May 06, 2026: 18.0pp drop

Price decreased from 66.0% to 48.0%

Outcome: Before Jan 1, 2027

What happened: The market's 18.0 percentage point drop on May 06, 2026, was primarily driven by traditional news reports regarding the Commerce Department's Center for AI Safety and Innovation (CAISI). On May 05, 2026, CAISI signed voluntary agreements with Google DeepMind, Microsoft, and xAI for pre-public-release evaluations of their AI models [^][^][^]. This news, which appeared to lead the price move, likely reduced the perceived need for, and thus the probability of, President Trump issuing a separate federal executive order mandating such reviews. Based on the provided research, social media activity appears to have mostly amplified this traditional news rather than driving the move itself.

📈 May 05, 2026: 29.0pp spike

Price increased from 37.0% to 66.0%

Outcome: Before Jan 1, 2027

What happened: The primary driver of the prediction market price spike on May 05, 2026, was a confluence of traditional news reports signaling increased likelihood of federal pre-release review of AI models. On May 5, Politico reported the White House was mulling tighter controls and considering a "16-page EO" for pre-deployment vetting [^]. Concurrently, it was announced the Trump administration would commence voluntary pre-deployment evaluations of models from Google DeepMind, Microsoft, and xAI [^][^]. This indicated concrete steps towards federal oversight, directly addressing the market's premise. Social media activity was not a primary driver based on the provided information.

4. Market Data

View on Kalshi →

Contract Snapshot

A "Yes" resolution occurs if Donald Trump issues a specific type of executive action (e.g., executive order, presidential memorandum) establishing or directing a federal review process for AI models before their public release, by December 31, 2026, 11:59 PM EST. This action must be signed by the President, explicitly address the topic with legal/policy effect, and be publicly documented by the White House or Federal Register; actions by cabinet members or incidental mentions do not qualify. If no such qualifying action occurs by the deadline, the market resolves to "No". The market will close early if the executive action is issued.

Available Contracts

Market options and current pricing

Outcome bucket Yes (price) No (price) Last trade probability
Before Jun 1, 2026 $0.19 $0.82 18%
Before Jul 1, 2026 $0.39 $0.62 38%
Before Jan 1, 2027 $0.55 $0.46 55%
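As a rough illustration (not part of the market data feed), the quoted prices above can be converted to implied probabilities. Assuming each contract pays $1 on the stated outcome, a Yes price of $0.55 implies roughly 55%; since each row's Yes and No prices sum to slightly more than $1.00, the excess is the book's overround, which normalizing removes:

```python
# Sketch: derive implied probabilities and the overround (the book's
# margin) from the quoted Yes/No prices in the table above, assuming
# each contract pays $1 if its side resolves true.

contracts = [
    ("Before Jun 1, 2026", 0.19, 0.82),
    ("Before Jul 1, 2026", 0.39, 0.62),
    ("Before Jan 1, 2027", 0.55, 0.46),
]

implied = {}
for name, yes_price, no_price in contracts:
    total = yes_price + no_price              # > 1.00 when the book takes a margin
    overround = total - 1.0
    implied[name] = round(yes_price / total, 3)  # Yes probability, vig removed
    print(f"{name}: overround {overround:.1%}, implied Yes {implied[name]:.1%}")
```

Each bucket here carries about a 1% overround, so the normalized Yes probabilities (18.8%, 38.6%, 54.5%) sit just below the raw prices and close to the last-trade column.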

Market Discussion

Reports on May 4, 2026, indicated the White House was considering an Executive Order for a federal working group to review AI models before public release, a potential policy shift reportedly prompted by cybersecurity concerns related to Anthropic's Mythos model [^][^][^][^][^]. Conversely, the White House called these reports 'speculation,' stating it is preparing an AI security order that will augment voluntary information sharing rather than mandate pre-release reviews, in line with its deregulation-focused Executive Orders from 2025 [^][^][^][^][^]. This uncertainty is reflected in prediction markets: Polymarket odds for such an order by May 31, 2026, stood at 21% 'Yes,' while Lines.com showed 32.5% [^][^][^].

5. What Specific National Security Event Could Compel a Trump AI Executive Order Before July 2026?

CISA Document Upload Incident: July 2025 [^][^]
AI in Critical Infrastructure Attack: January 2026 [^][^][^]
Pentagon AI Model Removal Order: March 2026 [^]
AI-related national security incidents involve data leaks, cyberattacks, and supply chain risks. Before July 2026, several incidents emerged highlighting the misuse of AI for sensitive government information, its involvement in critical infrastructure attacks, and broader concerns regarding the exploitation of national AI capabilities. In July 2025, a CISA acting director reportedly uploaded "for official use only" documents to a public instance [^][^]. Following this, in January 2026, hackers reportedly used an AI model for 75% of commands to attack a Mexican water utility, as part of a larger campaign that stole over 150GB of data from government agencies [^][^][^]. Further demonstrating these vulnerabilities, the Pentagon issued a memo in March 2026, ordering the removal of an AI model from nuclear, missile defense, and cyber warfare systems due to supply chain risks [^].
Geopolitical tensions and military applications intensified AI national security concerns. In April 2026, the Trump administration accused China of industrial-scale extraction of US AI capabilities, vowing a crackdown [^][^][^]. Concurrently, the US reportedly utilized AI models and drones in lethal operations against Iran during February-March 2026 [^][^]. These varied events, collectively underscoring national security threats from AI, led to the White House reportedly exploring an executive order in May 2026. This potential order aimed to vet AI models before their public release and included studying security reviews [^][^][^][^]. This consideration followed earlier voluntary pacts with companies for safety testing, indicating an evolving strategy for managing AI's security implications [^][^].

6. How Does Trump's Previous Deregulatory Stance, as Seen in the December 2025 AI Orders, Conflict With a Pre-Release Review Policy?

EO 14179 Date & Action: January 23, 2025, revoked Biden-era AI regulations [^][^]
December 2025 EO Directives: Directed AG to challenge unlawful state AI laws, sought congressional preemption [^][^][^]
May 2026 Reports & Response: White House considered pre-release vetting of AI models; reports dismissed [^][^][^][^][^]
Trump's early AI orders prioritized deregulation for U.S. leadership. His previous deregulatory stance, highlighted by Executive Order 14179 on January 23, 2025, and another Executive Order on December 11, 2025, sought to eliminate barriers to American AI leadership [^][^]. These orders promoted state preemption, directed the Attorney General to challenge state AI laws deemed unlawful, and aimed for congressional preemption of state regulations to establish a national AI policy framework [^][^][^]. This emphasis was designed to ensure American leadership without imposing federal pre-release review on private AI models, instead focusing on federal procurement of unbiased AI and accelerating infrastructure development [^][^].
Reports suggested a policy shift, but the White House dismissed them. In contrast to this deregulatory approach, May 2026 reports from The New York Times and Forbes indicated that the White House was considering an executive order for the pre-release vetting of AI models [^][^][^]. This included potential voluntary access agreements with major companies such as Google, Microsoft, and xAI [^][^][^]. AllThingsGeek characterized this potential policy as "morphing into centralized control" [^]. However, the White House subsequently dismissed these reports concerning a federal pre-release review policy [^][^].

7. How Do the Voluntary CAISI Agreements With Google and Microsoft Compare to a Potential Mandatory Executive Order in Scope and Enforcement?

CAISI Agreements Signed: May 2026 [^][^][^]
Evaluations Completed: Over 40 [^][^]
Chance of Executive Order by May 31, 2026: 21-32% [^][^]
The voluntary CAISI agreements enable AI model security testing without enforcement. Established in May 2026 with companies including Google DeepMind, Microsoft, and xAI, these agreements facilitate security testing for unreleased state-of-the-art AI models [^][^][^]. Renegotiated under the Trump administration, these deals have led to over 40 evaluations [^][^][^][^][^]. However, the agreements are voluntary and lack enforcement power, meaning CAISI cannot delay or block model releases, nor can it unilaterally publish its findings [^].
In contrast, a mandatory Executive Order would represent a significant policy shift. Reports circulated on May 4, 2026, indicating White House discussions about a potential mandatory Executive Order (EO) concerning AI oversight procedures for an AI working group [^][^]. While prediction markets showed a 21-32% chance of such an EO by May 31, 2026, the White House denied these reports [^][^]. Historically, the Trump administration's AI policy has been largely deregulatory, issuing EOs on infrastructure, anti-woke AI, and state preemption without requiring prior mandatory reviews [^][^][^]. Therefore, a mandatory Executive Order for pre-release federal review would introduce binding enforcement mechanisms, a departure from the voluntary nature of the CAISI framework [^][^][^][^][^][^].

8. What Are the Competing Arguments from Advisors like Kevin Hassett Versus Tech CEOs on the Necessity of an AI Vetting Order?

Proposed AI Safety Review: AI models to undergo a safety process "like an FDA drug" before release, addressing Mythos vulnerabilities (Kevin Hassett, May 6, 2026) [^][^]
Industry View on Vetting: Critics call vetting a "terrible idea" and express general industry concern over regulation (Daniel Castro of ITIF, Adam Thierer) [^]
Voluntary Pre-deployment Pacts: CAISI has voluntary agreements with OpenAI, Anthropic, Google DeepMind, Microsoft, and xAI (since 2024) [^][^]
Kevin Hassett, a White House advisor, proposed mandatory AI model vetting similar to FDA drug approval. On May 6, 2026, Hassett proposed an executive order that would require AI models to undergo a safety process "like an FDA drug" before release, specifically to address the Mythos vulnerabilities [^][^].
The AI industry generally opposes mandatory vetting, citing innovation concerns. Critics such as Daniel Castro of ITIF and Adam Thierer have labeled mandatory vetting a "terrible idea," arguing it would hamper AI development and reverse the administration's deregulatory course [^]. In contrast, CAISI has pursued a different approach through voluntary pre-deployment agreements with major AI companies, including OpenAI, Anthropic, Google DeepMind, Microsoft, and xAI, which have been in place since 2024 [^][^].

9. What Do the Implied Probabilities from Polymarket and Lines.com Reveal About Market Confidence in an Order Before June 1?

Polymarket Yes probability: 21% (down to 18-20% in social mentions) [^][^]
Lines.com Yes probability: 32.5% [^]
Time remaining (as of May 9, 2026): Approximately three weeks [^]
Market confidence in an AI model review order by June 1 is low. Implied probabilities from betting platforms indicate a low likelihood of a federal order establishing a pre-release AI model review process by June 1, 2026. Polymarket currently shows a 21% "Yes" probability, reported as dipping to 18-20% in social discussions [^][^]. Similarly, Lines.com reports a 32.5% "Yes" probability, implying a 67.5% chance that no such order will be issued [^].
A formal review process is unlikely by the deadline. A "Yes" resolution for this market requires formal executive action or legislation to establish a pre-release AI model review process by May 31, 11:59 PM ET [^][^][^]. As of May 9, 2026, with approximately three weeks remaining before the deadline, no such order has been issued [^]. This low market confidence aligns with President Trump's known deregulatory stance and the White House's retraction of previous AI review reports [^][^]. Although temporary spikes in odds occurred following recent New York Times and Bloomberg reports of discussions, the overall market sentiment has remained skeptical regarding the issuance of an order [^][^].

10. What Could Change the Odds

Key Catalysts

The White House is currently drafting an Executive Order (EO) for FDA-style pre-release vetting of frontier AI models, with a possible signing by late May 2026 [^][^][^]. This development follows the announcement on May 5, 2026, of voluntary pre-release evaluation agreements with Google DeepMind, Microsoft, and xAI [^][^][^], which expand upon earlier deals with OpenAI and Anthropic [^].
This potential policy shift towards mandatory review is reportedly driven by concerns about AI national security risks, exemplified by cyber exploits involving Anthropic's Mythos model [^][^]. It contrasts with prior deregulatory EOs, such as the one issued December 11, 2025, that aimed to preempt state laws [^][^][^][^]. As of May 9, 2026, no mandatory federal review process exists [^]; previous Executive Orders, including EO 14179 on January 23, 2025, focused on removing barriers and on state preemption [^][^]. Market probabilities for "Trump orders federal review by May 31, 2026" stand at 21% Yes, on roughly $35K of Polymarket volume [^][^].

Key Dates & Catalysts

  • Expiration: June 08, 2026
  • Closes: January 01, 2027

11. Decision-Flipping Events

  • Trigger: The White House is currently drafting an Executive Order (EO) for FDA-style pre-release vetting of frontier AI models, with a possible signing by late May 2026 [^][^][^].
  • Trigger: This development follows the announcement on May 5, 2026, of voluntary pre-release evaluation agreements with Google DeepMind, Microsoft, and xAI [^][^][^], which expand upon earlier deals with OpenAI and Anthropic [^].
  • Trigger: This potential policy shift towards mandatory review is reportedly driven by concerns regarding AI national security risks, exemplified by the Anthropic Mythos cyber exploits [^][^].
  • Trigger: This contrasts with prior deregulatory EOs, such as one on December 11, 2025, that aimed to preempt state laws [^][^][^][^].

12. Historical Resolutions

No historical resolution data available for this series.