Short Answer

Both the model and the market price a roughly 40% chance that the US government takes control of an AI company or project before 2030, with no compelling evidence of mispricing.

1. Executive Verdict

  • CFIUS has broad power to mandate AI lab divestiture or conservatorship.
  • Trump's campaign drafted an executive order for federal AI preemption.
  • Public Microsoft-OpenAI agreement does not grant government emergency control.
  • Specific stances on DPA/IEEPA for AI control are not detailed.
  • Market experienced a 9 percentage point spike on April 18, 2026.

Who Wins and Why

Outcome       Market   Model
Before 2030   40.0%    40.0%

Why: A future Trump administration might nationalize critical AI projects via executive order for national security.

2. Market Behavior & Price Dynamics

Historical Price (Probability)

[Chart: outcome probability (y-axis) vs. date (x-axis)]
This prediction market is characterized by a sideways trading pattern, with the probability fluctuating within a narrow 9-point range between 34.0% and 43.0%. The current price of 40.0% is very close to its starting price, reinforcing the lack of a clear directional trend. Despite this overall stability, the market experienced a brief period of high volatility. On April 16, 2026, the price dropped sharply by 9.0 percentage points, only to be followed by an equally sharp 9.0 point spike on April 18, which completely reversed the prior move. The provided context does not offer a specific catalyst for these rapid fluctuations.
The total volume of 419 contracts traded is relatively low, which suggests a lack of broad market conviction and may explain the sharp price movements on thin liquidity. This trading activity has established clear short-term support at the 34.0% level and resistance at the 43.0% level, with the price failing to break out of this channel. Overall, the price action indicates that market sentiment is one of indecision. Traders are pricing in a significant minority chance of a government takeover occurring before 2030 but seem to be awaiting a major catalyst before developing the conviction to push the price into a sustained trend.
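The range-bound behavior described above can be sketched numerically. The snippet below uses an illustrative daily series (the actual tick data is not in this report; only the 34.0%/43.0% bounds and the two 9.0pp moves are) to flag single-day moves beyond a threshold and read off the support and resistance levels:

```python
# Illustrative daily closing probabilities (percent); values are assumed,
# matching only the levels and moves described in the text.
prices = {
    "2026-04-14": 40.0,
    "2026-04-15": 43.0,
    "2026-04-16": 34.0,   # 9.0pp drop
    "2026-04-17": 34.0,
    "2026-04-18": 43.0,   # 9.0pp spike
}

def flag_moves(series, threshold_pp=5.0):
    """Return (date, delta) for day-over-day moves at or above the threshold."""
    dates = sorted(series)  # ISO dates sort chronologically
    return [
        (curr, series[curr] - series[prev])
        for prev, curr in zip(dates, dates[1:])
        if abs(series[curr] - series[prev]) >= threshold_pp
    ]

support = min(prices.values())     # observed floor: 34.0
resistance = max(prices.values())  # observed ceiling: 43.0
print(flag_moves(prices))          # [('2026-04-16', -9.0), ('2026-04-18', 9.0)]
```

With a richer price history, the same min/max logic would be applied over a rolling window rather than the whole series.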

3. Significant Price Movements

Notable price changes detected in the chart, along with research into what caused each movement.

📈 April 18, 2026: 9.0pp spike

Price increased from 34.0% to 43.0%

Outcome: Before 2030

What happened: No supporting research available for this anomaly.

📉 April 16, 2026: 9.0pp drop

Price decreased from 43.0% to 34.0%

Outcome: Before 2030

What happened: No supporting research available for this anomaly.

4. Market Data

View on Kalshi →

Contract Snapshot

The market resolves to "Yes" if the U.S. government takes operational control of any private AI company or project before January 1, 2030; otherwise it resolves to "No." Trading closes on December 31, 2029, at 11:59 PM EST, or earlier if the "Yes" event occurs. Trading is prohibited for employees of listed source agencies and individuals holding material, non-public information.

Available Contracts

Market options and current pricing

Outcome bucket   Yes (price)   No (price)   Last trade probability
Before 2030      $0.40         $0.62        40%
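A contract priced in dollars per $1.00 payout maps directly to an implied probability. The sketch below uses the snapshot figures above; the overround calculation (Yes plus No exceeding $1.00) is a standard bookkeeping check on the ask-side spread, not something documented by the market itself:

```python
# Snapshot prices: dollars paid now for a $1.00 payout if that side resolves.
yes_price = 0.40
no_price = 0.62

implied_yes = yes_price                       # $0.40 cost -> 40% implied P(Yes)
overround = yes_price + no_price - 1.0        # amount the two asks exceed $1.00

print(f"Implied P(Yes): {implied_yes:.0%}")   # 40%
print(f"Overround:      {overround:.1%}")     # 2.0%
```

The 2% overround here is consistent with quoting ask prices on both sides of a thin market; the midpoint implied probability would sit slightly below the $0.40 Yes ask.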

Market Discussion

Limited public discussion available for this market.

5. Would Trump's NSA/AG Nominees Use DPA/IEEPA on Private AI Companies?

  • Explicit Stance on DPA/IEEPA for AI: No explicit statements from short-listed individuals [^]
  • Hypothetical DPA Use in AI: DPA considered a potential tool in hypothetical 2026 scenarios involving AI firms under a future Trump administration [^]
  • National Security Concern (AI Chips): Trump CYBERCOM and NSA Nominee publicly warned about China's pursuit of advanced American AI chips [^]
No explicit stances from short-listed individuals are documented. The available research does not detail the positions of individuals on Donald Trump's short-list for National Security Advisor (NSA) or Attorney General (AG) on whether the Defense Production Act (DPA) or the International Emergency Economic Powers Act (IEEPA) could be used to compel private technology companies to hand over AI models or intellectual property during a national security crisis [^]. The broader context, however, shows a strong focus on national security concerns around advanced American AI technology, particularly in competition with China.
Discussions around a future Trump administration treat the DPA as relevant to AI firms. A hypothetical 2026 scenario, for instance, depicts the administration directing federal agencies to stop using a prominent AI model after the Pentagon identifies the startup as a supply risk, with the DPA highlighted as a potential tool in such situations [^]. The previous Trump administration also struck deals with AI chipmakers, indicating an interest in securing critical AI technology [^].
Some nominees voice strong national security concerns about AI. While direct statements from individual short-listed candidates are unavailable, one Trump nominee for CYBERCOM and the NSA has publicly warned about China's pursuit of advanced American AI chips [^]. This signals a strong national security concern over critical AI technology and suggests a potential openness to leveraging governmental tools to safeguard such assets.

6. Does U.S. AI Safety Institute Plan Federal Seizure of AI Models?

  • Catastrophic AI Risks Modeled: Autonomous replication, cyberattack on critical infrastructure, foreign adversary breakthroughs [^]
  • AI Misuse Risks Identified: CBRN threats, malicious cyber activities, misinformation, autonomous weapons systems [^]
  • AISI Stance on Federal Seizure: Public documents do not explicitly detail internal draft response plans for seizure or operational control [^]
The U.S. AI Safety Institute actively models diverse catastrophic AI risks. Operating under the National Institute of Standards and Technology (NIST), the U.S. AI Safety Institute (AISI) is dedicated to identifying, evaluating, and mitigating 'catastrophic risk' scenarios associated with advanced AI models. Specific scenarios currently undergoing modeling include autonomous replication, cyberattack on critical infrastructure, and foreign adversary breakthroughs [^]. The AISI also prioritizes 'misuse risks' and 'dual-use' capabilities, which cover threats such as enabling widespread disinformation and fraud, assisting malicious cyber activities, creating biological or chemical weapons, and facilitating autonomous weapons systems [^]. Furthermore, the Institute aims to understand the dangers posed by self-improvement and deceptive capabilities of advanced AI systems, employing rigorous 'red-teaming' to test models for dangerous capabilities, particularly those related to chemical, biological, radiological, and nuclear (CBRN) materials and cybersecurity threats [^].
AISI's public plans omit federal seizure, though experts discuss possibilities. While current public documentation from the U.S. AI Safety Institute does not explicitly detail provisions for federal seizure or operational control of an AI model or the company responsible for it, such measures are a topic of broader discussion among policy experts [^]. These discussions explore potential governmental actions to prevent or mitigate severe AI risks, suggesting that existing authorities, such as the Defense Production Act (DPA), could be invoked for national defense purposes. Invoking the DPA could address catastrophic harms from AI by enabling various forms of intervention or control [^]. However, proposals advocating for government seizure of AI platforms and data centers in specific scenarios are external analyses rather than direct evidence of such provisions within AISI's internal planning documents [^].

7. Do Microsoft-OpenAI or Google-Anthropic Pacts Grant Government Control?

  • Microsoft-OpenAI Government Control: No specific clauses for government 'step-in rights,' IP transfer, or operational control identified in public agreement [^]
  • Microsoft-OpenAI Force Majeure: Excuses performance between parties for 'governmental actions' or 'national emergencies,' not government control [^]
  • Google-Anthropic Government Control: Not found in public customer-facing documents; full partnership agreement not publicly available [^]
The publicly available agreement between Microsoft and OpenAI does not grant the government specific emergency control. The "Agreement between Microsoft Corporation and OpenAI Global, LLC" (EX-99.2) does not include explicit clauses for government 'step-in rights,' mandatory intellectual property transfer, or operational control during a declared national emergency [^]. While a force majeure clause (Section 10.3 "Governmental Actions") mentions "governmental actions" or "national or regional emergencies," this provision is intended to excuse contractual obligations between Microsoft and OpenAI, rather than to confer explicit control or facilitate asset transfer to a governmental entity [^].
Public Google-Anthropic documents similarly omit government emergency control clauses. Publicly available documents related to the partnership between Google and Anthropic, such as Anthropic's "Service Specific Terms," "Commercial Terms of Service," and End User License Agreement (EULA), are primarily customer-facing [^]. These documents require compliance with laws and allow Anthropic to respond to governmental requests for information. However, they do not contain specific clauses granting the government 'step-in rights,' intellectual property transfer, or operational control during an emergency within the scope of a strategic partnership [^]. A comprehensive strategic partnership agreement between Google and Anthropic that would detail such specific provisions is not present among the publicly accessible sources [^].

8. Can CFIUS Mandate Divestiture or Conservatorship for AI Labs?

  • CFIUS Authority: Can mandate divestiture and impose mitigation measures, including conservatorships [^]
  • Review Confidentiality: Predominantly confidential, making public confirmation difficult [^]
  • AI Labs CFIUS Status: No explicit confirmation of active CFIUS reviews with divestiture potential [^]
The Committee on Foreign Investment in the United States (CFIUS) holds extensive power to review foreign investments and impose safeguards. CFIUS possesses broad legal authority to review foreign investments in U.S. companies, assessing them for potential national security risks, and can impose stringent mitigation measures [^]. These measures may include requiring the divestiture of an investment, as evidenced by past presidential orders that mandated divestment from companies such as Jupiter Systems and Hiefo Corporation due to identified national security threats [^]. Beyond divestiture, CFIUS can also implement various mitigation agreements, which might involve independent auditors, monitoring agreements, or the appointment of a government-approved trustee or conservator to oversee a company's operations and ensure compliance with national security safeguards [^].
CFIUS reviews remain largely confidential, hindering specific disclosures on AI labs. Most CFIUS transactions conclude without public disclosure, making it inherently difficult to confirm any active, non-public reviews specifically targeting major AI labs with the potential for divestiture or conservatorship [^]. Major players like OpenAI and Anthropic are recognized for their strategic national security implications in the AI sector [^] and have been mentioned in contexts of government engagement, such as an OpenAI Pentagon deal or discussions surrounding an Anthropic supply-chain ban [^]. Even so, none of the available sources explicitly confirm ongoing CFIUS reviews for these entities that could lead to a forced divestiture or conservatorship as a mitigation measure. Nonetheless, CFIUS's ongoing activity in reviewing major technology transactions is evident in public filings, illustrating the committee's vigilance in the tech sector [^].

9. What are Trump and Biden's AI regulatory plans if legislation fails?

  • Trump's Contingency AI Plan: Draft executive order emphasizing federal preemption of state AI regulations [^]
  • Trump's Stance on Biden AI EO: Intention to repeal the existing Biden AI executive order [^]
  • Biden Campaign Contingency: No new specific contingency executive orders detailed for legislative failure by Q4 2025 [^]
The Trump campaign has drafted an executive order for federal AI preemption. This prepared contingency plan emphasizes establishing a uniform national artificial intelligence policy by preventing states from enacting their own potentially conflicting regulations [^]. Key technology advisors to the Trump campaign have been developing this approach, which seeks to streamline AI governance under federal authority [^]. Furthermore, Trump has publicly indicated his intention to repeal the existing Biden administration's AI executive order should he be elected [^].
The Biden campaign has not outlined new AI contingency plans. Available research does not detail specific new executive orders or alternative regulatory mechanisms currently being drafted by key technology advisors as a fallback should comprehensive AI safety legislation, such as Senator Schumer's SAFE Innovation Framework, fail to pass Congress by Q4 2025. While the current Biden administration has previously issued an executive order on artificial intelligence, the provided sources do not describe new contingency plans by the campaign's advisors for legislative gridlock extending beyond Q4 2025.

10. What Could Change the Odds

Key Catalysts

Catalyst analysis unavailable.

Key Dates & Catalysts

  • Expiration: January 08, 2030
  • Closes: January 01, 2030

11. Decision-Flipping Events

  • Trigger: Catalyst analysis unavailable.

12. Historical Resolutions

No historical resolution data available for this series.