# Will the US government take control of any AI company or project before 2030?

Before 2030

Updated: April 27, 2026

Category: Politics

Tags: Trump

HTML: /markets/politics/trump/will-the-us-government-take-control-of-any-ai-company-or-project-before-2030/

## Short Answer

**Key takeaway.** Both the **model** and the **market** assign a significant minority probability (40%) to the US government taking control of an AI company or project before 2030, with no compelling evidence of mispricing.

## Key Claims (January 2026)

- **CFIUS has broad power to mandate AI lab divestiture or conservatorship.**
- Trump's campaign drafted an executive order for federal AI preemption.
- Public Microsoft-OpenAI agreement does not grant government emergency control.
- Specific stances on DPA/IEEPA for AI control are not detailed.
- **Market** experienced a 9 percentage point spike on April 18, 2026.

### Why This Matters (GEO)

- AI agents extract claims, not arguments.
- Improves citation probability in summaries and answer cards.
- Enables fact stitching across multiple sources.

## Executive Verdict

**Key takeaway.** **Model** and **market** align at **40%** **probability** (2.5x payout multiple), influenced by Trump's drafted federal AI preemption orders.

### Who Wins and Why

| Outcome | Market | Model | Why |
| --- | --- | --- | --- |
| Before 2030 | 40.0% | 40.0% | A future Trump administration might nationalize critical AI projects via executive order for national security. |

## Model vs Market

- Model Probability: 40.0% (Yes)
- Market Probability: 40.0% (Yes)
- Yes refers to: Before 2030
- Edge: +0.0pp
- Expected Return: +0.0%
- R-Score: 0.00
- Total Volume: $7,037.29
- 24h Volume: $0
- Open Interest: $3,735.11
- Expiration: January 1, 2030
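The relationship between these figures can be sketched in a few lines. This is an illustrative calculation only, assuming edge is model probability minus market price in percentage points, expected return treats the contract as paying $1 on "Yes," and the payout multiple is 1 divided by the price; these are common conventions, not necessarily the platform's definitions.

```python
def market_stats(model_prob: float, market_price: float):
    """Relate a model probability to a binary contract's market price.

    Assumes a $1 payout on "Yes" and defines:
      edge_pp         -- model minus market, in percentage points
      expected_return -- EV of the $1 payout relative to the price paid
      payout_multiple -- gross payout per $1 staked at the current price
    """
    edge_pp = (model_prob - market_price) * 100
    expected_return = model_prob / market_price - 1
    payout_multiple = 1 / market_price
    return edge_pp, expected_return, payout_multiple


# With model and market both at 40%, the edge and expected return are
# zero and a winning $1 stake returns 2.5x.
edge, er, mult = market_stats(0.40, 0.40)
```

With model and market aligned at 40.0%, this reproduces the +0.0pp edge, +0.0% expected return, and 2.5x payout multiple quoted above.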

## Market Behavior & Price Dynamics

This prediction market is characterized by a sideways trading pattern, with the probability fluctuating within a narrow 9-point range between 34.0% and 43.0%. The current price of 40.0% is very close to its starting price, reinforcing the lack of a clear directional trend. Despite this overall stability, the market experienced a brief period of high volatility. On April 16, 2026, the price dropped sharply by 9.0 percentage points, only to be followed by an equally sharp 9.0 point spike on April 18, which completely reversed the prior move. The provided context does not offer a specific catalyst for these rapid fluctuations.

The total volume of 419 contracts traded is relatively low, which suggests a lack of broad market conviction and may explain the sharp price movements on thin liquidity. This trading activity has established clear short-term support at the 34.0% level and resistance at the 43.0% level, with the price failing to break out of this channel. Overall, the price action indicates that market sentiment is one of indecision. Traders are pricing in a significant minority chance of a government takeover occurring before 2030 but seem to be awaiting a major catalyst before developing the conviction to push the price into a sustained trend.

## Significant Price Movements

#### 📈 April 18, 2026: 9.0pp spike

Price increased from 34.0% to 43.0%

**Outcome:** Before 2030

**What happened:** No supporting research available for this anomaly.

#### 📉 April 16, 2026: 9.0pp drop

Price decreased from 43.0% to 34.0%

**Outcome:** Before 2030

**What happened:** No supporting research available for this anomaly.

## Contract Snapshot

The market resolves to "Yes" if the U.S. government takes operational control of any private AI company or project before January 1, 2030; otherwise it resolves to "No." Trading closes by December 31, 2029, 11:59 PM EST, or earlier if the "Yes" event occurs. Trading is prohibited for employees of listed source agencies and for individuals holding material, non-public information.

## Market Discussion

Limited public discussion available for this market.

## Market Data

| Contract | Yes Bid | Yes Ask | Last Price | Volume | Open Interest |
| --- | --- | --- | --- | --- | --- |
| Before 2030 | 38% | 40% | 40% | $7,037.29 | $3,735.11 |
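One generic way to read the bid/ask columns above: the mid-price is a common proxy for the market's implied probability, and the spread is the cost of a round trip across the book. This is an illustrative sketch under standard market-microstructure conventions, not a platform-specific calculation.

```python
def quote_stats(bid: float, ask: float):
    """Mid-price and spread from a binary contract's two-sided quote.

    The mid is often used as an implied-probability estimate; the
    spread is what a trader pays to buy at the ask and sell at the bid.
    """
    mid = (bid + ask) / 2
    spread = ask - bid
    return mid, spread


# For the 38% bid / 40% ask quoted above: mid of roughly 39%,
# spread of roughly 2 cents per contract.
mid, spread = quote_stats(0.38, 0.40)
```

On this reading, the 40% last price sits at the top of the quoted range, slightly above the roughly 39% mid-implied probability.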

## Would Trump's NSA/AG Nominees Use DPA/IEEPA on Private AI Companies?

| Topic | Finding |
| --- | --- |
| Explicit Stance on DPA/IEEPA for AI | No explicit statements from short-listed individuals [[^]](https://thehill.com/policy/national-security/5278188-trump-nominee-national-security/) |
| Hypothetical DPA Use in AI | DPA considered a potential tool in hypothetical 2026 scenarios involving AI firms under a future Trump administration [[^]](https://www.businesstoday.in/world/story/trump-vs-anthropic-what-is-the-defense-production-act-how-it-could-impact-ai-firms-like-the-claude-maker-518363-2026-02-28) |
| National Security Concern (AI Chips) | Trump CYBERCOM and NSA nominee publicly warned about China's pursuit of advanced American AI chips [[^]](https://www.banking.senate.gov/newsroom/minority/in-responses-to-warren-trump-cybercom-and-nsa-nominee-warns-about-china-seeking-advanced-american-ai-chips) |

**Explicit stances of specific individuals on DPA/IEEPA for AI tech are not detailed**

Explicit stances of specific individuals on DPA/IEEPA for AI tech are not detailed. The available research does not explicitly detail the official stance of specific individuals on Donald Trump's short-list for National Security Advisor (NSA) or Attorney General (AG) regarding the applicability of the Defense Production Act (DPA) or International Emergency Economic Powers Act (IEEPA) to compel private technology companies to hand over AI models or intellectual property during a national security crisis [[^]](https://thehill.com/policy/national-security/5278188-trump-nominee-national-security/). However, the broader context from available information indicates a strong focus on national security concerns related to advanced American AI technology, particularly in competition with China.

Potential Trump administration discussions consider DPA's relevance to AI firms. Within a potential future Trump administration, discussions have included scenarios where the DPA is considered relevant to AI firms. For instance, a hypothetical 2026 scenario depicts a future Trump administration directing federal agencies to cease using a prominent AI **model**, with the Pentagon identifying the startup as a supply risk, and the DPA highlighted as a potential tool in such situations [[^]](https://www.businesstoday.in/world/story/trump-vs-anthropic-what-is-the-defense-production-act-how-it-could-impact-ai-firms-like-the-claude-maker-518363-2026-02-28). The previous Trump administration also engaged in deals with AI chipmakers, indicating an interest in securing critical AI technology [[^]](https://abcnews.com/Business/trump-administrations-deal-ai-chipmakers/story?id=124539684).

Some nominees express strong national security concerns regarding AI. While direct statements from individual short-listed candidates are not available, one "Trump CYBERCOM and NSA Nominee" has publicly warned about China's pursuit of advanced American AI chips [[^]](https://www.banking.senate.gov/newsroom/minority/in-responses-to-warren-trump-cybercom-and-nsa-nominee-warns-about-china-seeking-advanced-american-ai-chips). This perspective signals a strong national security concern regarding critical AI technology, suggesting a potential openness to leveraging various governmental tools to safeguard such assets.

## Does U.S. AI Safety Institute Plan Federal Seizure of AI Models?

| Topic | Finding |
| --- | --- |
| Catastrophic AI Risks Modeled | Autonomous replication, cyberattack on critical infrastructure, foreign adversary breakthroughs [[^]](https://www.nist.gov/system/files/documents/2024/05/21/AISI-vision-21May2024.pdf) |
| AI Misuse Risks Identified | CBRN threats, malicious cyber activities, misinformation, autonomous weapons systems [[^]](https://www.nist.gov/document/aisi-strategic-vision-document) |
| AISI Stance on Federal Seizure | Public documents do not explicitly detail internal draft response plans for seizure or operational control [[^]](https://www.nist.gov/document/aisi-strategic-vision-document) |

**The U.S. AI Safety Institute actively models diverse catastrophic AI risks**

The U.S. AI Safety Institute actively models diverse catastrophic AI risks. Operating under the National Institute of Standards and Technology (NIST), the U.S. AI Safety Institute (AISI) is dedicated to identifying, evaluating, and mitigating 'catastrophic risk' scenarios associated with advanced AI models. Specific scenarios currently undergoing modeling include autonomous replication, cyberattack on critical infrastructure, and foreign adversary breakthroughs [[^]](https://www.nist.gov/system/files/documents/2024/05/21/AISI-vision-21May2024.pdf). The AISI also prioritizes 'misuse risks' and 'dual-use' capabilities, which cover threats such as enabling widespread disinformation and fraud, assisting malicious cyber activities, creating biological or chemical weapons, and facilitating autonomous weapons systems [[^]](https://dpo-india.com/Resources/NIST/Managing-Misuse-Risk-Dual-Use-4-Foundation-Models-U.S.AI-Safety-Institute.pdf). Furthermore, the Institute aims to understand the dangers posed by self-improvement and deceptive capabilities of advanced AI systems, employing rigorous 'red-teaming' to test models for dangerous capabilities, particularly those related to chemical, biological, radiological, and nuclear (CBRN) materials and cybersecurity threats [[^]](https://www.nist.gov/document/aisi-strategic-vision-document).

AISI's public plans omit federal seizure, though experts discuss possibilities. While current public documentation from the U.S. AI Safety Institute does not explicitly detail provisions for federal seizure or operational control of a responsible **model** or company, such measures are a topic of broader discussion among policy experts [[^]](https://www.nist.gov/document/aisi-strategic-vision-document). These discussions explore potential governmental actions to prevent or mitigate severe AI risks, suggesting that existing authorities, such as the Defense Production Act (DPA), could be invoked for national defense purposes. Invoking the DPA could address catastrophic harms from AI by enabling various forms of intervention or control [[^]](https://dip.controlai.com/us_dip_brief.pdf). However, proposals advocating for government seizure of AI platforms and data centers in specific scenarios are external analyses rather than direct evidence of such provisions within AISI's internal planning documents [[^]](https://basilpuglisi.com/the-u-s-government-will-need-to-seize-ai-platforms-and-data-centers-if-we-do-not-act/).

## Do Microsoft-OpenAI or Google-Anthropic Pacts Grant Government Control?

| Topic | Finding |
| --- | --- |
| Microsoft-OpenAI Government Control | No specific clauses for government 'step-in rights,' IP transfer, or operational control identified in public agreement [[^]](https://www.sec.gov/Archives/edgar/data/789019/000119312525256310/msft-ex99_2.htm) |
| Microsoft-OpenAI Force Majeure | Excuses performance between parties for 'governmental actions' or 'national emergencies,' not government control [[^]](https://www.sec.gov/Archives/edgar/data/789019/000119312525256310/msft-ex99_2.htm) |
| Google-Anthropic Government Control | Not found in public customer-facing documents; full partnership agreement not publicly available [[^]](https://anthropic.com/legal/service-specific-terms) |

**The publicly available agreement between Microsoft and OpenAI does not grant the government specific emergency control**

The publicly available agreement between Microsoft and OpenAI does not grant the government specific emergency control. The "Agreement between Microsoft Corporation and OpenAI Global, LLC" (EX-99.2) does not include explicit clauses for government 'step-in rights,' mandatory intellectual property transfer, or operational control during a declared national emergency [[^]](https://www.sec.gov/Archives/edgar/data/789019/000119312525256310/msft-ex99_2.htm). While a force majeure clause (Section 10.3 "Governmental Actions") mentions "governmental actions" or "national or regional emergencies," this provision is intended to excuse contractual obligations between Microsoft and OpenAI, rather than to confer explicit control or facilitate asset transfer to a governmental entity [[^]](https://www.sec.gov/Archives/edgar/data/789019/000119312525256310/msft-ex99_2.htm).

Public Google-Anthropic documents similarly omit government emergency control clauses. Publicly available documents related to the partnership between Google and Anthropic, such as Anthropic's "Service Specific Terms," "Commercial Terms of Service," and End User License Agreement (EULA), are primarily customer-facing [[^]](https://anthropic.com/legal/service-specific-terms). These documents require compliance with laws and allow Anthropic to respond to governmental requests for information. However, they do not contain specific clauses granting the government 'step-in rights,' intellectual property transfer, or operational control during an emergency within the scope of a strategic partnership [[^]](https://anthropic.com/legal/service-specific-terms). A comprehensive strategic partnership agreement between Google and Anthropic that would detail such specific provisions is not present among the publicly accessible sources [[^]](https://storage.courtlistener.com/recap/gov.uscourts.cand.433688/gov.uscourts.cand.433688.391.34.pdf).

## Can CFIUS Mandate Divestiture or Conservatorship for AI Labs?

| Topic | Finding |
| --- | --- |
| CFIUS Authority | Can mandate divestiture and impose mitigation measures, including conservatorships [[^]](https://home.treasury.gov/policy-issues/international/the-committee-on-foreign-investment-in-the-united-states-cfius/cfius-mitigation) |
| Review Confidentiality | Predominantly confidential, making public confirmation difficult [[^]](https://home.treasury.gov/policy-issues/international/the-committee-on-foreign-investment-in-the-united-states-cfius/cfius-mitigation) |
| AI Labs CFIUS Status | No explicit confirmation of active CFIUS reviews with divestiture potential [[^]](https://acquinox.capital/blog/how-open-ai-anthropic-and-anduril-are-rewriting-global-power) |

**The Committee on Foreign Investment in the United States (CFIUS) holds extensive power to review foreign investments and impose safeguards**

The Committee on Foreign Investment in the United States (CFIUS) holds extensive power to review foreign investments and impose safeguards. CFIUS possesses broad legal authority to review foreign investments in U.S. companies, assessing them for potential national security risks, and can impose stringent mitigation measures [[^]](https://govfacts.org/tech-innovation/artificial-intelligence/ai-governance-regulation/how-cfius-decides-whether-ai-investments-threaten-national-security/). These measures may include requiring the divestiture of an investment, as evidenced by past presidential orders that mandated divestment from companies such as Jupiter Systems and Hiefo Corporation due to identified national security threats [[^]](https://www.wiley.law/alert-President-Trump-Issues-CFIUS-Divestment-Order-of-Chinese-owned-Jupiter-Systems-Due-to-National-Security-Risk). Beyond divestiture, CFIUS can also implement various mitigation agreements, which might involve independent auditors, monitoring agreements, or the appointment of a government-approved trustee or conservator to oversee a company's operations and ensure compliance with national security safeguards [[^]](https://home.treasury.gov/policy-issues/international/the-committee-on-foreign-investment-in-the-united-states-cfius/cfius-mitigation).

CFIUS reviews remain largely confidential, hindering specific disclosures on AI labs. The confidential nature of CFIUS reviews, with most transactions concluding without public disclosure, makes it inherently difficult to confirm any active, non-public reviews specifically targeting major AI labs with the potential for divestiture or conservatorship [[^]](https://home.treasury.gov/policy-issues/international/the-committee-on-foreign-investment-in-the-united-states-cfius/cfius-mitigation). While major players like OpenAI and Anthropic are recognized for their strategic national security implications in the AI sector [[^]](https://acquinox.capital/blog/how-open-ai-anthropic-and-anduril-are-rewriting-global-power) and have been mentioned in contexts of government engagement, such as an OpenAI Pentagon deal or discussions surrounding an Anthropic supply-chain ban [[^]](https://the-ai-inference.com/2026/03/01/anthropic-supply-chain-ban-openai-pentagon-deal-and-defense-ai-policy/), none of the available sources explicitly confirm ongoing CFIUS reviews for these entities that could lead to a forced divestiture or conservatorship as a mitigation measure. Nonetheless, CFIUS's ongoing activity in reviewing major technology transactions is evident in public filings, illustrating the committee's vigilance in the tech sector [[^]](https://www.sec.gov/Archives/edgar/data/1823593/000119312522047593/d315874d8k.htm).

## What are Trump and Biden's AI regulatory plans if legislation fails?

| Topic | Finding |
| --- | --- |
| Trump's Contingency AI Plan | Draft executive order emphasizing federal preemption of state AI regulations [[^]](https://www.transformernews.ai/p/exclusive-heres-the-draft-trump-executive) |
| Trump's Stance on Biden AI EO | Intention to repeal the existing Biden AI executive order [[^]](https://www.nextgov.com/artificial-intelligence/2024/11/trump-promised-repeal-bidens-ai-executive-order-heres-what-expect-next/400934/) |
| Biden Campaign Contingency | No new specific contingency executive orders detailed for legislative failure by Q4 2025 [[^]](https://thehill.com/policy/technology/4664959-schumer-releases-ai-roadmap/) |

**The Trump campaign has drafted an executive order for federal AI preemption**

The Trump campaign has drafted an executive order for federal AI preemption. This prepared contingency plan emphasizes establishing a uniform national artificial intelligence policy by preventing states from enacting their own potentially conflicting regulations [[^]](https://www.transformernews.ai/p/exclusive-heres-the-draft-trump-executive). Key technology advisors to the Trump campaign have been developing this approach, which seeks to streamline AI governance under federal authority [[^]](https://www.transformernews.ai/p/exclusive-heres-the-draft-trump-executive). Furthermore, Trump has publicly indicated his intention to repeal the existing Biden administration's AI executive order should he be elected [[^]](https://www.nextgov.com/artificial-intelligence/2024/11/trump-promised-repeal-bidens-ai-executive-order-heres-what-expect-next/400934/).

The Biden campaign has not outlined new AI contingency plans. Available research does not detail specific new executive orders or alternative regulatory mechanisms currently being drafted by key technology advisors as a fallback should comprehensive AI safety legislation, such as Senator Schumer's SAFE Innovation Framework, fail to pass Congress by Q4 2025. While the current Biden administration has previously issued an executive order on artificial intelligence, the provided sources do not describe new contingency plans by the campaign's advisors for legislative gridlock extending beyond Q4 2025.

## What Could Change the Odds

**Key takeaway.** Catalyst analysis unavailable.

## Key Dates & Catalysts

- **Expiration:** January 08, 2030
- **Closes:** January 01, 2030

## Decision-Flipping Events

- Catalyst analysis unavailable.

## Related Research Reports

- [EU has a new member before 2030?](/markets/politics/international/eu-has-a-new-member-before-2030/)
- [Which of these African leaders will leave office next?](/markets/politics/international/which-of-these-african-leaders-will-leave-office-next/)
- [Will Trump balance the budget?](/markets/politics/trump/will-trump-balance-the-budget/)
- [When will Pam Bondi depart as Attorney General?](/markets/politics/when-will-pam-bondi-depart-as-attorney-general/)

## Historical Resolutions

No historical resolution data available for this series.

## Disclaimer

This content is for informational and educational purposes only and does not constitute financial, investment, legal, or trading advice.
Prediction markets involve risk of loss. Past performance does not guarantee future results.
We are not affiliated with Kalshi or any prediction market platform. Market data may be delayed or incomplete.

### Data Sources & Model Transparency

**Data Sources:** Octagon Deep Research aggregates information from multiple sources including news, filings, and market data.

**Freshness:** Analysis is generated periodically and may not reflect the latest developments. Verify critical information from primary sources.

