# When will any company achieve AGI?

Corporate announcement

Updated: April 5, 2026

Category: Companies

Tags: Product launches

HTML: /markets/companies/product-launches/when-will-any-company-achieve-agi/

## Short Answer

**Key takeaway.** Both the **model** (55.2%) and the **market** (62.0%) assign better-than-even odds that some company announces AGI before January 1, 2031, with no compelling evidence of mispricing.

## Key Claims (January 2026)

- Well-funded "dark horse" entities are actively pursuing explicit AGI goals.
- Significant talent migration increases the likelihood of earlier AGI announcements.
- UAE organizations are making strides toward public AGI demonstrations.
- Performance data for Gemini 2 and GPT-5 remains publicly unavailable.
- Major AI frameworks mandate extensive safety evaluations before public deployment.
- OpenAI and Anthropic prioritize reputational risk over first-mover AGI advantage.

### Why This Matters (GEO)

- AI agents extract claims, not arguments.
- Improves citation probability in summaries and answer cards.
- Enables fact stitching across multiple sources.

## Executive Verdict

**Key takeaway.** The Octagon **model** (5.5%) sits 0.5pp below the 6-cent **market** price on the nearest deadline, even though dark horse AGI entities raise the likelihood of an early announcement.

### Who Wins and Why

| Outcome | Market | Model | Why |
| --- | --- | --- | --- |
| Before Jul 1, 2026 | 6.0% | 5.5% | Market higher by 0.5pp |
| Before Oct 1, 2026 | 15.0% | 13.4% | Market higher by 1.6pp |
| Before Jan 1, 2027 | 21.0% | 18.5% | Market higher by 2.5pp |

## Model vs Market

| Outcome | Market Probability | Octagon Model Probability |
| --- | --- | --- |
| Before Jul 1, 2026 | 6.0% | 5.5% |
| Before Oct 1, 2026 | 15.0% | 13.4% |
| Before Jan 1, 2027 | 21.0% | 18.5% |
| Before Apr 1, 2027 | 23.0% | 20.2% |
| Before Jul 1, 2027 | 41.0% | 35.6% |
| Before Oct 1, 2027 | 37.0% | 36.5% |
| Before Jan 1, 2028 | 38.0% | 37.5% |
| Before Apr 1, 2028 | 49.0% | 42.7% |
| Before Jul 1, 2028 | 51.0% | 44.6% |
| Before Oct 1, 2028 | 50.0% | 45.5% |
| Before Jan 1, 2029 | 53.0% | 46.5% |
| Before Jan 1, 2030 | 55.0% | 48.3% |
| Before Jan 1, 2031 | 62.0% | 55.2% |

- Expiration: January 1, 2031
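The comparison above can be checked mechanically. The sketch below (data copied from the table; function names are illustrative, not part of any published tooling) computes the market-minus-model spread for each outcome and verifies a basic consistency property: the cumulative probability of "AGI before date X" should never decrease at a later deadline.

```python
# Sketch: compare market vs. model probabilities from the table above and
# sanity-check that cumulative "before date X" probabilities are non-decreasing
# (a later deadline can never be less likely than an earlier one).

rows = [
    # (outcome, market %, model %)
    ("Before Jul 1, 2026", 6.0, 5.5),
    ("Before Oct 1, 2026", 15.0, 13.4),
    ("Before Jan 1, 2027", 21.0, 18.5),
    ("Before Apr 1, 2027", 23.0, 20.2),
    ("Before Jul 1, 2027", 41.0, 35.6),
    ("Before Oct 1, 2027", 37.0, 36.5),
    ("Before Jan 1, 2028", 38.0, 37.5),
    ("Before Apr 1, 2028", 49.0, 42.7),
    ("Before Jul 1, 2028", 51.0, 44.6),
    ("Before Oct 1, 2028", 50.0, 45.5),
    ("Before Jan 1, 2029", 53.0, 46.5),
    ("Before Jan 1, 2030", 55.0, 48.3),
    ("Before Jan 1, 2031", 62.0, 55.2),
]

def spreads(rows):
    """Market-minus-model spread in percentage points per outcome."""
    return {name: round(market - model, 1) for name, market, model in rows}

def monotonic_violations(rows, col):
    """Adjacent deadline pairs where the cumulative probability decreases.

    col=0 checks the market column, col=1 the model column.
    """
    out = []
    for (prev_name, *prev), (name, *cur) in zip(rows, rows[1:]):
        if cur[col] < prev[col]:
            out.append((prev_name, name))
    return out

print(spreads(rows))
print(monotonic_violations(rows, 0))  # market column
print(monotonic_violations(rows, 1))  # model column
```

Run against the table, the model column is monotone, but the market column dips twice (Jul to Oct 2027, Jul to Oct 2028), which is more consistent with stale or thin quotes than with traders genuinely believing a later deadline is less likely.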

## Market Behavior & Price Dynamics

This market, which predicts a corporate AGI announcement, is characterized by a sideways long-term trend, though it has experienced significant short-term volatility. The probability has fluctuated widely between 1.0% and 48.0%, but currently sits at 6.0%, slightly below its starting price of 8.0%. Recent activity has been particularly turbulent, with the price spiking 9.0 percentage points to 14.0% on March 28, only to be followed by two sharp drops: a 9.0pp fall on April 1 and another 8.0pp decrease on April 5. As no specific news or external context has been provided, the direct causes for these rapid shifts in sentiment are not apparent from the data alone. Such movements often reflect reactions to un-cited news, rumors, or significant trades within the market itself.

The trading volume provides additional insight into market conviction. While the total volume is substantial at 4,931 contracts, the sample data indicates that some of the most significant recent price changes occurred on very low or zero volume. This pattern suggests that the market may be illiquid at times, allowing small trades to have an outsized impact on the price. This can indicate a lack of broad consensus or conviction behind the sharp moves. From a technical perspective, the 5-8% range has acted as a support level, while the 14-16% area has recently served as resistance. The current price of 6.0% suggests that traders, on average, assign a low probability to a corporate AGI announcement within the market's timeframe, with sentiment remaining skeptical despite brief periods of speculative optimism.
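The pattern described above, large price moves printing on little or no volume, can be made concrete with a small sketch. The daily series below is illustrative only, loosely shaped around the March/April moves cited in the text; it is not the market's actual tape.

```python
# Sketch: flag large daily price moves that occur on thin volume, the
# illiquidity pattern discussed above. The series is hypothetical, loosely
# based on the March/April moves described in the text, not real trade data.

daily = [
    # (date, closing price %, contracts traded)
    ("2026-03-27", 5.0, 120),
    ("2026-03-28", 14.0, 15),   # 9pp spike on thin volume
    ("2026-03-31", 16.0, 40),
    ("2026-04-01", 7.0, 0),     # 9pp drop on zero volume
    ("2026-04-04", 14.0, 60),
    ("2026-04-05", 6.0, 10),    # 8pp drop on thin volume
]

def thin_volume_moves(daily, min_move_pp=5.0, max_volume=25):
    """Days where |close - prior close| >= min_move_pp but volume <= max_volume."""
    flags = []
    for (_, prev_close, _), (date, close, vol) in zip(daily, daily[1:]):
        move = close - prev_close
        if abs(move) >= min_move_pp and vol <= max_volume:
            flags.append((date, round(move, 1), vol))
    return flags

print(thin_volume_moves(daily))
```

When most of a market's sharp moves land in this flagged set, the price path says more about order-book depth than about changing consensus, which supports the caution above about reading the recent swings as sentiment shifts.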

## Significant Price Movements

### Outcome: Before Jul 1, 2026

#### 📉 April 05, 2026: 8.0pp drop

Price decreased from 14.0% to 6.0%

**What happened:** No supporting research available for this anomaly.

#### 📉 April 01, 2026: 9.0pp drop

Price decreased from 16.0% to 7.0%

**What happened:** No supporting research available for this anomaly.

#### 📈 March 28, 2026: 9.0pp spike

Price increased from 5.0% to 14.0%

**What happened:** No supporting research available for this anomaly.

### Outcome: Before Jan 1, 2027

#### 📈 March 25, 2026: 12.0pp spike

Price increased from 18.0% to 30.0%

**What happened:** No supporting research available for this anomaly.

## Contract Snapshot

This market resolves to "Yes" if any company (public or private, verifiable by major business news sources) officially announces it has "achieved, attained, reached, or developed Artificial General Intelligence (AGI)" after the market's issuance and before January 1, 2029, as reported by specified news sources. The announcement must explicitly use "AGI," not "AGI-like" or similar terms. If no such qualifying announcement is made by December 31, 2028, the market resolves to "No." The market will close and expire early if a qualifying announcement occurs, otherwise it closes on December 31, 2028, at 11:59 pm EST.

## Market Discussion

Traders are debating the likelihood of a company officially announcing AGI by late 2028, with market probabilities for deadlines around that time hovering near 50%. A key argument for "Yes" centers on the market rules, which only require an explicit *official announcement* of achieving AGI, regardless of its true technical feasibility. Conversely, some express skepticism, hinting that such an announcement might be a strategic move or a PR stunt rather than a definitive scientific breakthrough.

## Market Data

| Contract | Yes Bid | Yes Ask | Last Price | Volume | Open Interest |
| --- | --- | --- | --- | --- | --- |
| Before Jul 1, 2026 | 6% | 10% | 6% | $6,008 | $2,963 |
| Before Oct 1, 2026 | 13% | 16% | 15% | $2,536 | $1,941 |
| Before Jan 1, 2027 | 21% | 28% | 21% | $2,368 | $1,693 |
| Before Apr 1, 2027 | 22% | 24% | 23% | $1,454 | $1,261 |
| Before Jul 1, 2027 | 36% | 41% | 41% | $1,408 | $1,360 |
| Before Oct 1, 2027 | 37% | 42% | 37% | $684 | $627 |
| Before Jan 1, 2028 | 38% | 44% | 38% | $1,234 | $1,181 |
| Before Apr 1, 2028 | 42% | 46% | 49% | $1,345 | $1,287 |
| Before Jul 1, 2028 | 43% | 52% | 51% | $566 | $544 |
| Before Oct 1, 2028 | 48% | 57% | 50% | $216 | $174 |
| Before Jan 1, 2029 | 51% | 56% | 53% | $1,022 | $883 |
| Before Jan 1, 2030 | 56% | 63% | 55% | $1,459 | $1,358 |
| Before Jan 1, 2031 | 61% | 66% | 62% | $1,878 | $1,716 |
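The bid/ask columns above carry their own liquidity signal. A minimal sketch (using a subset of the table's rows; field names are illustrative) derives mid-prices and spreads, and checks whether the last trade printed inside the current bid-ask band; a last price outside the band usually means a stale print.

```python
# Sketch: derive mid-prices and bid-ask spreads from the Market Data table
# above (subset of rows). Wide spreads are one sign of the thin liquidity
# noted earlier; a last price outside the bid-ask band suggests a stale quote.

book = [
    # (contract, yes bid %, yes ask %, last price %)
    ("Before Jul 1, 2026", 6, 10, 6),
    ("Before Jan 1, 2027", 21, 28, 21),
    ("Before Apr 1, 2028", 42, 46, 49),
    ("Before Jan 1, 2031", 61, 66, 62),
]

def quote_stats(book):
    """Mid, spread, and whether the last trade sits inside the current band."""
    stats = []
    for name, bid, ask, last in book:
        stats.append({
            "contract": name,
            "mid": (bid + ask) / 2,
            "spread_pp": ask - bid,
            "last_inside_band": bid <= last <= ask,
        })
    return stats

for s in quote_stats(book):
    print(s)
```

In the full table, spreads of 4 to 9 percentage points and several last prices outside the band (e.g. Before Apr 1, 2028 last at 49% against a 42-46% quote) reinforce the illiquidity reading in the Market Behavior section.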

## Are Gemini 2, GPT-5, and AGI Thresholds Publicly Disclosed?

| Item | Finding |
| --- | --- |
| Gemini 2 Berkeley AGIS-5 Performance | No specific private performance results found [[^]](https://storage.googleapis.com/deepmind-media/gemini/gemini_v2_5_report.pdf), [[^]](https://officechai.com/ai/gemini-3-deep-think-benchmarks-arc-agi/), [[^]](https://www.llmrumors.com/news/gemini-3-1-pro-google-reclaims-benchmark-crown) |
| GPT-5 Berkeley AGIS-5 Performance | No performance results on any benchmark, nor mention of Berkeley AGIS-5 [[^]](http://openai.com/index/introducing-gpt-5/) |
| AGI Declaration Thresholds | Not specified in responsible AGI development frameworks [[^]](https://deepmind.google/blog/taking-a-responsible-path-to-agi/), [[^]](https://cdn.openai.com/pdf/18a02b5d-6b67-4cec-ab64-68cdfbddebcd/preparedness-framework-v2.pdf) |

**Specific performance data for 'Gemini 2' and 'GPT-5' remains unavailable**

Research indicates that private performance results for Google DeepMind's 'Gemini 2' and OpenAI's 'GPT-5' on the unpublished Berkeley AGIS-5 benchmark suite are not accessible. While performance details have been made available for other models, such as Google DeepMind's Gemini 1.5 Pro [[^]](https://storage.googleapis.com/deepmind-media/gemini/gemini_v2_5_report.pdf), Gemini 3 Deep Think [[^]](https://officechai.com/ai/gemini-3-deep-think-benchmarks-arc-agi/), and Gemini 3.1 Pro [[^]](https://www.llmrumors.com/news/gemini-3-1-pro-google-reclaims-benchmark-crown) on various benchmarks, no information was found regarding 'Gemini 2' or its performance on 'Berkeley AGIS-5'. Similarly, despite the introduction of 'GPT-5' [[^]](http://openai.com/index/introducing-gpt-5/), the available materials do not contain any performance results for 'GPT-5' on any benchmark, including 'Berkeley AGIS-5'.

AGI declaration thresholds by Google DeepMind and OpenAI are undisclosed. The internal thresholds for a preliminary AGI declaration, as established by the safety and ethics boards of Google DeepMind and OpenAI, are not provided in the research sources. Documents from Google DeepMind [[^]](https://deepmind.google/blog/taking-a-responsible-path-to-agi/) and OpenAI [[^]](https://cdn.openai.com/pdf/18a02b5d-6b67-4cec-ab64-68cdfbddebcd/preparedness-framework-v2.pdf) articulate their commitment to responsible AGI development and comprehensive preparedness frameworks. However, these frameworks outline strategic approaches to evaluating risks and ensuring safety, rather than specifying concrete, quantitative performance thresholds that would trigger an AGI declaration.

## Are Dark Horse Entities Quietly Pursuing AGI Development?

| Item | Figure |
| --- | --- |
| UAE AI Supercomputer Capacity | 8,000 petaFLOPS [[^]](https://www.arabianbusiness.com/business/technology/uae-to-deploy-8-exaflop-ai-supercomputer-in-india-in-sovereign-infrastructure-push) |
| G42 AI Agents Target | One billion by 2026 [[^]](https://www.khaleejtimes.com/business/tech/g42-ceo-peng-xiao-says-over-7000-workers-and-100-cranes-are-building-stargate-project-in-abu-dhabi) |
| Safe Superintelligence (SSI) Valuation | $32 billion [[^]](https://markets.financialcontent.com/stocks/article/tokenring-2026-1-27-the-32-billion-stealth-bet-ilya-sutzkevers-safe-superintelligence-and-the-future-of-agi) |

**UAE entities are making significant strides toward public AGI demonstrations**

The United Arab Emirates, through organizations like the Technology Innovation Institute (TII) and G42, has strategically invested in advanced artificial intelligence, with explicit goals related to Artificial General Intelligence (AGI). TII has publicly committed to "Building a better foundation for AGI" [[^]](https://files-prod.tii.ae/2025-05/V7%20AI%20%20Building%20a%20better%20foundation%20for%20AGI%20%281%29_compressed.pdf) and demonstrates its competitive ambition with open-source Falcon AI models [[^]](https://www.computerweekly.com/news/366638759/UAEs-TII-challenges-big-tech-dominance-with-open-source-Falcon-AI-models). This ambition is supported by massive computational power, including plans to deploy an AI supercomputer in India with a capacity equivalent to 8,000 petaFLOPS [[^]](https://www.arabianbusiness.com/business/technology/uae-to-deploy-8-exaflop-ai-supercomputer-in-india-in-sovereign-infrastructure-push). Furthermore, Abu Dhabi-based G42 is engaged in large-scale infrastructure projects such as "Stargate" and aims to develop one billion AI agents by 2026, involving over 7,000 workers [[^]](https://www.khaleejtimes.com/business/tech/g42-ceo-peng-xiao-says-over-7000-workers-and-100-cranes-are-building-stargate-project-in-abu-dhabi).

Stealth startups, led by prominent AI figures, are emerging as serious AGI contenders. Beyond national initiatives, new ventures led by influential individuals in AI are attracting substantial investment and talent. Ilya Sutskever, formerly chief scientist at OpenAI, co-founded Safe Superintelligence (SSI) with the explicit mission to achieve AGI [[^]](https://markets.financialcontent.com/stocks/article/tokenring-2026-1-27-the-32-billion-stealth-bet-ilya-sutzkevers-safe-superintelligence-and-the-future-of-agi). This venture has garnered considerable investor interest, reportedly valued at $32 billion [[^]](https://markets.financialcontent.com/stocks/article/tokenring-2026-1-27-the-32-billion-stealth-bet-ilya-sutzkevers-safe-superintelligence-and-the-future-of-agi). Another significant entrant is a $6.2 billion AI venture backed by Jeff Bezos, which has strategically acquired an agentic computing startup, signaling a focused pursuit of advanced AI capabilities [[^]](https://www.techbuzz.ai/articles/bezos-6-2b-ai-venture-quietly-acquires-agentic-computing-startup). These new entities are also actively recruiting top talent from established labs; for instance, numerous startups founded by alumni of leading AI companies are now dedicated to pushing the boundaries of AI development [[^]](https://techcrunch.com/2026/02/20/the-openai-mafia-15-of-the-most-notable-startups-founded-by-alumni/).

Substantial evidence indicates dark horse entities are aggressively pursuing AGI development. The combined effect of significant compute cluster acquisitions, explicit AGI-focused research and development roadmaps, and notable talent migration to these non-major US lab entities provides strong evidence of their intent and capability to pursue AGI. While specific timelines remain speculative, the sheer scale of investment and the caliber of talent involved suggest these entities are aggressively positioning themselves to be leading contenders in the race for AGI, potentially preceding public demonstrations from major US laboratories.

## Do AI Frameworks Mandate Audits Prior To Public Announcements?

| Framework Provision | Timing |
| --- | --- |
| Meta External Red-Teaming | Before widespread release of models [[^]](https://about.fb.com/news/2025/02/meta-approach-frontier-ai) |
| Meta Information Sharing | Prior to public deployment [[^]](https://ai.meta.com/static-resource/meta-frontier-ai-framework/?amp=) |
| Microsoft Expert Evaluation | Prior to and post-deployment [[^]](https://cdn-dynmedia-1.microsoft.com/is/content/microsoftcorp/microsoft/msc/documents/presentations/CSR/Frontier-Governance-Framework.pdf) |

**Meta's framework emphasizes robust safety evaluations and red-teaming prior to deployment**

Meta's Frontier AI Framework stresses extensive internal and external red-teaming, primarily before widespread release or public deployment of advanced models [[^]](https://ai.meta.com/static-resource/meta-frontier-ai-framework/?amp=). Third-party red teaming and evaluations are considered crucial for identifying and mitigating risks, especially before broad dissemination. Meta commits to sharing relevant information with policymakers and the public prior to public deployment and will pause training or not deploy models if novel capabilities present severe, unmitigable risks [[^]](https://ai.meta.com/static-resource/meta-frontier-ai-framework/?amp=). The framework's focus is on ensuring safety measures are in place for deployment, rather than for the announcement of a capability milestone.

Microsoft also mandates rigorous safety before broad **model** availability. Its Frontier Governance Framework requires rigorous safety evaluations and testing, along with external and independent expert evaluation and red teaming, both prior to and post-deployment of a frontier **model** [[^]](https://cdn-dynmedia-1.microsoft.com/is/content/microsoftcorp/microsoft/msc/documents/presentations/CSR/Frontier-Governance-Framework.pdf). Microsoft explicitly states that these rigorous evaluations will be conducted before a frontier **model** is made broadly available to mitigate potential risks. Like Meta, Microsoft's framework prioritizes safeguarding users and the public before a **model** is broadly released or deployed, particularly for models with potentially severe risks [[^]](https://cdn-dynmedia-1.microsoft.com/is/content/microsoftcorp/microsoft/msc/documents/presentations/CSR/Frontier-Governance-Framework.pdf).

Neither framework explicitly requires pre-announcement external red-teaming completion. Therefore, neither Meta nor Microsoft's frameworks explicitly mandate the successful completion of a multi-month external red-teaming period prior to a public announcement of a significant AI achievement. Instead, both frameworks emphasize that these comprehensive safety evaluations and red-teaming efforts must be completed, and their findings mitigated, before the broad deployment, widespread release, or public availability of their advanced AI models [[^]](https://ai.meta.com/static-resource/meta-frontier-ai-framework/?amp=).

## Are TSMC 2nm Delays Impacting Nvidia's Rubin GPU Production?

| Item | Finding |
| --- | --- |
| Rubin GPU Process Node | TSMC 3nm process node [[^]](https://en.wikipedia.org/wiki/Rubin_(microarchitecture)) |
| Rubin Production Status | On track, 'in fab', or in full production [[^]](https://www.tomshardware.com/tech-industry/artificial-intelligence/nvidias-rubin-gpu-and-vera-cpu-data-center-ai-platforms-begin-tape-out-both-chips-in-fab-and-on-track-for-2026) |
| Rubin Platform Availability | On track for 2026 data center AI platforms [[^]](https://www.tomshardware.com/tech-industry/artificial-intelligence/nvidias-rubin-gpu-and-vera-cpu-data-center-ai-platforms-begin-tape-out-both-chips-in-fab-and-on-track-for-2026) |

**No direct evidence indicates unannounced 2nm delays affecting Nvidia's Rubin GPUs**

Nvidia's 'Rubin' platform GPUs are widely expected to utilize TSMC's 3nm process node for their fabrication, rather than the 2nm node [[^]](https://en.wikipedia.org/wiki/Rubin_(microarchitecture)). Reports confirm that the Rubin GPU and Vera CPU have successfully taped out and are currently 'in fab' at TSMC, maintaining the schedule for data center AI platforms in 2026 [[^]](https://www.tomshardware.com/tech-industry/artificial-intelligence/nvidias-rubin-gpu-and-vera-cpu-data-center-ai-platforms-begin-tape-out-both-chips-in-fab-and-on-track-for-2026). Furthermore, some sources indicate that Nvidia's Rubin AI chips have either entered full production 'well ahead of schedule' [[^]](https://wccftech.com/nvidia-rubin-ai-chips-enter-full-production-well-ahead-of-schedule/) or are currently in full production [[^]](https://introl.com/blog/nvidia-rubin-full-production-ces-2026-ai-infrastructure).

Broader discussions regarding TSMC's 2nm process do not directly impact Rubin. There have been general discussions concerning potential supply chain uncertainty and capacity shortages impacting TSMC's 2nm process, including an anticipated shift in wafer production capacity [[^]](https://streetstocker.com/tsmc-2nm-capacity-constraints-2026/). However, these broader concerns are not specifically linked to unannounced delays that would directly affect the 'Rubin' platform, primarily because the 'Rubin' microarchitecture is projected to rely on the 3nm node instead of the 2nm node.

## What Factors Influence AGI Claims by OpenAI and Anthropic?

| Item | Finding |
| --- | --- |
| OpenAI Primary Duty | Humanity, with commitment to long-term safety and avoiding harmful uses of AGI [[^]](https://openai.com/charter) |
| Anthropic AGI Claim Policy | Responsible Scaling Policy (RSP) with strict monitoring and evaluation requirements for AGI stages [[^]](https://assets.anthropic.com/m/24a47b00f10301cd/original/Anthropic-Responsible-Scaling-Policy-2024-10-15.pdf) |
| AGI Claim Threshold Driver | Reputational risk mitigation and safety, rather than first-mover advantage [[^]](https://storage.courtlistener.com/recap/gov.uscourts.cand.433688/gov.uscourts.cand.433688.77.3_1.pdf) |

**Both OpenAI and Anthropic prioritize reputational risk mitigation over first-mover advantage**

The prevailing game-theoretic stance at both companies regarding the timing of an Artificial General Intelligence (AGI) claim leans significantly towards "reputational risk mitigation," suggesting a much higher, more defensible threshold for announcement. This approach is driven by concerns for safety and responsible development, rather than a "first-mover" advantage to capture market share or talent. OpenAI's strategy, outlined in its Charter and public communications, stresses ensuring AGI benefits all humanity and explicitly warns against a "competitive race dynamic" [[^]](https://openai.com/charter). The company defines its primary fiduciary duty to be to humanity, committing to long-term safety research and avoiding AGI applications that harm humanity or concentrate power unduly [[^]](https://openai.com/charter).

Anthropic structures its AGI claims through a rigorous Responsible Scaling Policy. Anthropic's approach is even more explicitly structured around "reputational risk mitigation" via its Responsible Scaling Policy (RSP) [[^]](https://assets.anthropic.com/m/24a47b00f10301cd/original/Anthropic-Responsible-Scaling-Policy-2024-10-15.pdf). The RSP delineates specific "AGI stages," such as AGI-0 through AGI-4 [[^]](https://assets.anthropic.com/m/24a47b00f10301cd/original/Anthropic-Responsible-Scaling-Policy-2024-10-15.pdf). AGI-1 is characterized by models significantly outperforming current frontier models, while AGI-4 denotes models vastly more capable than humans across nearly all economically valuable tasks [[^]](https://assets.anthropic.com/m/24a47b00f10301cd/original/Anthropic-Responsible-Scaling-Policy-2024-10-15.pdf). Crucially, the policy mandates strict monitoring, evaluation, and mitigation requirements for each stage before systems reaching or exceeding that capability are deployed [[^]](https://assets.anthropic.com/m/24a47b00f10301cd/original/Anthropic-Responsible-Scaling-Policy-2024-10-15.pdf).

Both companies share a strong commitment to safe, beneficial AGI development. Their publicly available strategy documents consistently indicate a shared emphasis on prioritizing safety, mitigating catastrophic and existential risks, and ensuring beneficial outcomes for humanity. This deep commitment to responsible development and deployment establishes a higher benchmark for an AGI claim. Their strategy is rooted in comprehensive evaluations and robust risk management, rather than driven by the mere ambition of being the first to make such an announcement.

## What Could Change the Odds

**Key takeaway.** Catalyst analysis unavailable.

## Key Dates & Catalysts

- **Expiration:** July 08, 2026
- **Closes:** January 01, 2031

## Decision-Flipping Events

- Catalyst analysis unavailable.

## Related Research Reports

- [When will Apple announce foldable iPhone?](/markets/companies/product-launches/when-will-apple-announce-foldable-iphone/)
- [Jamie Dimon leaves JPMorgan Chase?](/markets/companies/ceos/jamie-dimon-leaves-jpmorgan-chase/)
- [When will Tim Cook leave Apple?](/markets/companies/ceos/when-will-tim-cook-leave-apple/)
- [Will Paramount acquire Warner Bros?](/markets/companies/will-paramount-acquire-warner-bros/)

## Historical Resolutions

No historical resolution data available for this series.

## Disclaimer

This content is for informational and educational purposes only and does not constitute financial, investment, legal, or trading advice.
Prediction markets involve risk of loss. Past performance does not guarantee future results.
We are not affiliated with Kalshi or any prediction market platform. Market data may be delayed or incomplete.

### Data Sources & Model Transparency

**Data Sources:** Octagon Deep Research aggregates information from multiple sources including news, filings, and market data.

**Freshness:** Analysis is generated periodically and may not reflect the latest developments. Verify critical information from primary sources.

