Short Answer

Both the model and the market lean toward some company officially announcing AGI before January 1, 2031, with no compelling evidence of mispricing across the contract buckets.

1. Executive Verdict

  • Well-funded "dark horse" entities are actively pursuing explicit AGI goals.
  • Significant talent migration increases the likelihood of earlier AGI announcements.
  • UAE organizations are making strides toward public AGI demonstrations.
  • Performance data for Gemini 2 and GPT-5 remains publicly unavailable.
  • Major AI frameworks mandate extensive safety evaluations before public deployment.
  • OpenAI and Anthropic prioritize reputational risk mitigation over first-mover AGI advantage.

Who Wins and Why

Outcome              Market   Model   Why
Before Jul 1, 2026    6.0%     5.5%   Market higher by 0.5pp
Before Oct 1, 2026   15.0%    13.4%   Market higher by 1.6pp
Before Jan 1, 2027   21.0%    18.5%   Market higher by 2.5pp
Before Apr 1, 2027   23.0%    20.2%   Market higher by 2.8pp
Before Jul 1, 2027   41.0%    35.6%   Market higher by 5.4pp
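The "Why" column is simply the percentage-point gap between the two estimates; a minimal sketch reproducing it from the table values:

```python
# Market price vs. model probability, in percentage points (values from the table above).
buckets = {
    "Before Jul 1, 2026": (6.0, 5.5),
    "Before Oct 1, 2026": (15.0, 13.4),
    "Before Jan 1, 2027": (21.0, 18.5),
    "Before Apr 1, 2027": (23.0, 20.2),
    "Before Jul 1, 2027": (41.0, 35.6),
}

for name, (market, model) in buckets.items():
    edge = round(market - model, 1)  # positive => market prices the outcome above the model
    side = "Market" if edge > 0 else "Model"
    print(f"{name}: {side} higher by {abs(edge):.1f}pp")
```

A consistently positive gap across every bucket, as here, points to a uniform lean rather than a bucket-specific dislocation.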

2. Market Behavior & Price Dynamics

Historical Price (Probability)

[Chart: outcome probability over time]
This market, which tracks the probability of a corporate AGI announcement, shows a sideways long-term trend punctuated by sharp short-term volatility. The probability has swung between 1.0% and 48.0% and currently sits at 6.0%, slightly below its opening price of 8.0%. Recent activity has been particularly turbulent: the price spiked 9.0 percentage points to 14.0% on March 28, then fell 9.0pp on April 1 and another 8.0pp on April 5. No specific news or external context accompanies the data, so the direct causes of these swings are not apparent; such moves often reflect unreported news, rumors, or large trades within the market itself.
The trading volume provides additional insight into market conviction. While the total volume is substantial at 4,931 contracts, the sample data indicates that some of the most significant recent price changes occurred on very low or zero volume. This pattern suggests that the market may be illiquid at times, allowing small trades to have an outsized impact on the price. This can indicate a lack of broad consensus or conviction behind the sharp moves. From a technical perspective, the 5-8% range has acted as a support level, while the 14-16% area has recently served as resistance. The current price of 6.0% suggests that traders, on average, assign a low probability to a corporate AGI announcement within the market's timeframe, with sentiment remaining skeptical despite brief periods of speculative optimism.

3. Significant Price Movements

Notable price changes detected in the chart, along with research into what caused each movement.

Outcome: Before Jul 1, 2026

📉 April 05, 2026: 8.0pp drop

Price decreased from 14.0% to 6.0%

What happened: No supporting research available for this anomaly.

📉 April 01, 2026: 9.0pp drop

Price decreased from 16.0% to 7.0%

What happened: No supporting research available for this anomaly.

📈 March 28, 2026: 9.0pp spike

Price increased from 5.0% to 14.0%

What happened: No supporting research available for this anomaly.

Outcome: Before Jan 1, 2027

📈 March 25, 2026: 12.0pp spike

Price increased from 18.0% to 30.0%

What happened: No supporting research available for this anomaly.
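The move detection behind these flags (day-over-day changes of several percentage points) can be sketched as follows; the series and the 8pp threshold are illustrative values echoing the moves reported above, not actual market data:

```python
# Flag day-over-day probability changes at or above a threshold (in percentage points).
def detect_anomalies(prices, threshold_pp=8.0):
    """prices: list of (date_string, probability_percent), in chronological order."""
    flagged = []
    for (d0, p0), (d1, p1) in zip(prices, prices[1:]):
        change = p1 - p0
        if abs(change) >= threshold_pp:
            direction = "spike" if change > 0 else "drop"
            flagged.append((d1, direction, abs(change)))
    return flagged

# Illustrative series mirroring the movements described in this section.
series = [
    ("2026-03-27", 5.0), ("2026-03-28", 14.0),
    ("2026-03-31", 16.0), ("2026-04-01", 7.0),
    ("2026-04-04", 14.0), ("2026-04-05", 6.0),
]
print(detect_anomalies(series))
```

With these inputs the function flags the same three events listed above: the March 28 spike and the April 1 and April 5 drops.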

4. Market Data

View on Kalshi →

Contract Snapshot

This market resolves to "Yes" if any company (public or private, verifiable by major business news sources) officially announces it has "achieved, attained, reached, or developed Artificial General Intelligence (AGI)" after the market's issuance and before January 1, 2029, as reported by specified news sources. The announcement must explicitly use "AGI," not "AGI-like" or similar terms. If no such qualifying announcement is made by December 31, 2028, the market resolves to "No." The market will close and expire early if a qualifying announcement occurs, otherwise it closes on December 31, 2028, at 11:59 pm EST.
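The resolution rule hinges on exact wording. A toy keyword check, assuming a simple regex over the announcement text (the verb list mirrors the rules quoted above; a real resolution reviewer would of course judge context, not just keywords):

```python
import re

# Qualifying verbs from the resolution rules; the announcement must pair one with
# an explicit "AGI" claim, and "AGI-like" (or similar hedged terms) does not count.
QUALIFYING = re.compile(
    r"\b(achieved|attained|reached|developed)\b.*\bAGI\b(?!-like)",
    re.IGNORECASE,
)

def qualifies(announcement: str) -> bool:
    return bool(QUALIFYING.search(announcement))

print(qualifies("We have achieved AGI."))            # True
print(qualifies("Our model shows AGI-like skills"))  # False
```

The negative lookahead `(?!-like)` is what encodes the rule that "AGI-like" claims do not resolve the market.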

Available Contracts

Market options and current pricing

Outcome bucket        Yes (price)   No (price)   Last trade probability
Before Jul 1, 2026    $0.10         $0.94        6%
Before Oct 1, 2026    $0.16         $0.87        15%
Before Jan 1, 2027    $0.28         $0.79        21%
Before Apr 1, 2027    $0.24         $0.78        23%
Before Jul 1, 2027    $0.41         $0.64        41%
Before Oct 1, 2027    $0.42         $0.63        37%
Before Jan 1, 2028    $0.44         $0.62        38%
Before Apr 1, 2028    $0.46         $0.58        49%
Before Jul 1, 2028    $0.52         $0.57        51%
Before Oct 1, 2028    $0.57         $0.52        50%
Before Jan 1, 2029    $0.56         $0.49        53%
Before Jan 1, 2030    $0.63         $0.44        55%
Before Jan 1, 2031    $0.66         $0.39        62%
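Two quick consistency checks on this table can be scripted: cumulative "Before X" buckets should never show a lower probability than an earlier bucket, and Yes + No quotes summing above $1.00 reflect the bid-ask spread rather than free money. Values are copied from the table; last-trade probabilities can be stale on an illiquid book, so a non-monotonic reading is not by itself an arbitrage opportunity:

```python
# Rows copied from the table above: (bucket, yes_price, no_price, last_trade_prob_%).
table = [
    ("Before Jul 1, 2026", 0.10, 0.94, 6),
    ("Before Oct 1, 2026", 0.16, 0.87, 15),
    ("Before Jan 1, 2027", 0.28, 0.79, 21),
    ("Before Apr 1, 2027", 0.24, 0.78, 23),
    ("Before Jul 1, 2027", 0.41, 0.64, 41),
    ("Before Oct 1, 2027", 0.42, 0.63, 37),
    ("Before Jan 1, 2028", 0.44, 0.62, 38),
    ("Before Apr 1, 2028", 0.46, 0.58, 49),
    ("Before Jul 1, 2028", 0.52, 0.57, 51),
    ("Before Oct 1, 2028", 0.57, 0.52, 50),
    ("Before Jan 1, 2029", 0.56, 0.49, 53),
    ("Before Jan 1, 2030", 0.63, 0.44, 55),
    ("Before Jan 1, 2031", 0.66, 0.39, 62),
]

# Check 1: cumulative buckets should have non-decreasing probabilities.
for (n0, *_, p0), (n1, *_, p1) in zip(table, table[1:]):
    if p1 < p0:
        print(f"Non-monotonic: {n0} ({p0}%) -> {n1} ({p1}%)")

# Check 2: Yes + No above $1.00 is the quoted spread, not an arbitrage.
spreads = [round(yes + no - 1.00, 2) for _, yes, no, _ in table]
print("Quoted spreads range:", min(spreads), "to", max(spreads))
```

On these numbers the script flags two non-monotonic pairs (Jul to Oct 2027 and Jul to Oct 2028), consistent with stale last-trade prints on thin volume rather than a genuine mispricing.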

Market Discussion

Traders are debating the likelihood of a company officially announcing AGI by late 2028, with market probabilities for deadlines around that time hovering near 50%. A key argument for "Yes" centers on the market rules, which only require an explicit official announcement of achieving AGI, regardless of its true technical feasibility. Conversely, some express skepticism, hinting that such an announcement might be a strategic move or a PR stunt rather than a definitive scientific breakthrough.

5. Are Gemini 2, GPT-5, and AGI Thresholds Publicly Disclosed?

Gemini 2 Berkeley AGIS-5 Performance: No specific private performance results found [^], [^], [^]
GPT-5 Berkeley AGIS-5 Performance: No performance results on any benchmark, nor mention of Berkeley AGIS-5 [^]
AGI Declaration Thresholds: Not specified in responsible AGI development frameworks [^], [^]
Specific performance data for 'Gemini 2' and 'GPT-5' remains unavailable. Research indicates that private performance results for Google DeepMind's 'Gemini 2' and OpenAI's 'GPT-5' on the unpublished Berkeley AGIS-5 benchmark suite are not accessible. While performance details have been made available for other models, such as Google DeepMind's Gemini 1.5 Pro [^], Gemini 3 Deep Think [^], and Gemini 3.1 Pro [^] on various benchmarks, no information was found regarding 'Gemini 2' or its performance on 'Berkeley AGIS-5'. Similarly, despite the introduction of 'GPT-5' [^], the available materials do not contain any performance results for 'GPT-5' on any benchmark, including 'Berkeley AGIS-5'.
AGI declaration thresholds by Google DeepMind and OpenAI are undisclosed. The internal thresholds for a preliminary AGI declaration, as established by the safety and ethics boards of Google DeepMind and OpenAI, are not provided in the research sources. Documents from Google DeepMind [^] and OpenAI [^] articulate their commitment to responsible AGI development and comprehensive preparedness frameworks. However, these frameworks outline strategic approaches to evaluating risks and ensuring safety, rather than specifying concrete, quantitative performance thresholds that would trigger an AGI declaration.

6. Are Dark Horse Entities Quietly Pursuing AGI Development?

UAE AI Supercomputer Capacity: 8,000 petaFLOPS [^]
G42 AI Agents Target: One billion by 2026 [^]
Safe Superintelligence (SSI) Valuation: $32 billion [^]
UAE entities are making significant strides toward public AGI demonstrations. The United Arab Emirates, through organizations like the Technology Innovation Institute (TII) and G42, has strategically invested in advanced artificial intelligence, with explicit goals related to Artificial General Intelligence (AGI). TII has publicly committed to "Building a better foundation for AGI" [^] and demonstrates its competitive ambition with open-source Falcon AI models [^]. This ambition is supported by massive computational power, including plans to deploy an AI supercomputer in India with a capacity equivalent to 8,000 petaFLOPS [^]. Furthermore, Abu Dhabi-based G42 is engaged in large-scale infrastructure projects such as "Stargate" and aims to develop one billion AI agents by 2026, involving over 7,000 workers [^].
Stealth startups, led by prominent AI figures, are emerging as serious AGI contenders. Beyond national initiatives, new ventures led by influential individuals in AI are attracting substantial investment and talent. Ilya Sutskever, formerly chief scientist at OpenAI, co-founded Safe Superintelligence (SSI) with the explicit mission to achieve AGI [^]. The venture has drawn considerable investor interest and is reportedly valued at $32 billion [^]. Another significant entrant is a $6.2 billion AI venture backed by Jeff Bezos, which has strategically acquired an agentic computing startup, signaling a focused pursuit of advanced AI capabilities [^]. These new entities are also actively recruiting top talent from established labs; numerous startups founded by alumni of leading AI companies are now dedicated to pushing the boundaries of AI development [^].
Substantial evidence indicates dark horse entities are aggressively pursuing AGI development. The combined effect of significant compute cluster acquisitions, explicit AGI-focused research and development roadmaps, and notable talent migration to these non-major US lab entities provides strong evidence of their intent and capability to pursue AGI. While specific timelines remain speculative, the sheer scale of investment and the caliber of talent involved suggest these entities are aggressively positioning themselves to be leading contenders in the race for AGI, potentially preceding public demonstrations from major US laboratories.

7. Do AI Frameworks Mandate Audits Prior To Public Announcements?

Meta External Red-Teaming: Before widespread release of models [^]
Meta Information Sharing: Prior to public deployment [^]
Microsoft Expert Evaluation: Prior to and post-deployment [^]
Meta's framework emphasizes robust safety evaluations and red-teaming prior to deployment. Its Frontier AI Framework stresses extensive internal and external red-teaming, primarily before widespread release or public deployment of advanced models [^]. Third-party red teaming and evaluations are considered crucial for identifying and mitigating risks, especially before broad dissemination. Meta commits to sharing relevant information with policymakers and the public prior to public deployment and will pause training or not deploy models if novel capabilities present severe, unmitigable risks [^]. The framework's focus is on ensuring safety measures are in place for deployment, rather than for the announcement of a capability milestone.
Microsoft also mandates rigorous safety before broad model availability. Its Frontier Governance Framework requires rigorous safety evaluations and testing, along with external and independent expert evaluation and red teaming, both prior to and post-deployment of a frontier model [^]. Microsoft explicitly states that these rigorous evaluations will be conducted before a frontier model is made broadly available to mitigate potential risks. Like Meta, Microsoft's framework prioritizes safeguarding users and the public before a model is broadly released or deployed, particularly for models with potentially severe risks [^].
Neither framework explicitly requires pre-announcement external red-teaming completion. Therefore, neither Meta nor Microsoft's frameworks explicitly mandate the successful completion of a multi-month external red-teaming period prior to a public announcement of a significant AI achievement. Instead, both frameworks emphasize that these comprehensive safety evaluations and red-teaming efforts must be completed, and their findings mitigated, before the broad deployment, widespread release, or public availability of their advanced AI models [^].

8. Are TSMC 2nm Delays Impacting Nvidia's Rubin GPU Production?

Rubin GPU Process Node: TSMC 3nm process node [^]
Rubin Production Status: On track, 'in fab', or in full production [^]
Rubin Platform Availability: On track for 2026 data center AI platforms [^]
No direct evidence indicates unannounced 2nm delays affecting Nvidia's Rubin GPUs. Nvidia's 'Rubin' platform GPUs are widely expected to utilize TSMC's 3nm process node for their fabrication, rather than the 2nm node [^]. Reports confirm that the Rubin GPU and Vera CPU have successfully taped out and are currently 'in fab' at TSMC, maintaining the schedule for data center AI platforms in 2026 [^]. Furthermore, some sources indicate that Nvidia's Rubin AI chips have either entered full production 'well ahead of schedule' [^] or are currently in full production [^].
Broader discussions regarding TSMC's 2nm process do not directly impact Rubin. There have been general discussions concerning potential supply chain uncertainty and capacity shortages impacting TSMC's 2nm process, including an anticipated shift in wafer production capacity [^]. However, these broader concerns are not specifically linked to unannounced delays that would directly affect the 'Rubin' platform, primarily because the 'Rubin' microarchitecture is projected to rely on the 3nm node instead of the 2nm node.

9. What Factors Influence AGI Claims by OpenAI and Anthropic?

OpenAI Primary Duty: Humanity, with commitment to long-term safety and avoiding harmful uses of AGI [^]
Anthropic AGI Claim Policy: Responsible Scaling Policy (RSP) with strict monitoring and evaluation requirements for AGI stages [^]
AGI Claim Threshold Driver: Reputational risk mitigation and safety, rather than first-mover advantage [^]
Both OpenAI and Anthropic prioritize reputational risk mitigation over first-mover advantage. The prevailing game-theoretic stance at both companies regarding the timing of an Artificial General Intelligence (AGI) claim leans significantly towards "reputational risk mitigation," implying a much higher, more defensible threshold for announcement. This approach is driven by concerns for safety and responsible development rather than a "first-mover" advantage in capturing market share or talent. OpenAI's strategy, outlined in its Charter and public communications, stresses ensuring AGI benefits all humanity and explicitly warns against a "competitive race dynamic" [^]. The company states that its primary fiduciary duty is to humanity, committing to long-term safety research and to avoiding AGI applications that harm humanity or unduly concentrate power [^].
Anthropic structures its AGI claims through a rigorous Responsible Scaling Policy. Anthropic's approach is even more explicitly structured around "reputational risk mitigation" via its Responsible Scaling Policy (RSP) [^]. The RSP delineates specific "AGI stages," such as AGI-0 through AGI-4 [^]. AGI-1 is characterized by models significantly outperforming current frontier models, while AGI-4 denotes models vastly more capable than humans across nearly all economically valuable tasks [^]. Crucially, the policy mandates strict monitoring, evaluation, and mitigation requirements for each stage before systems reaching or exceeding that capability are deployed [^].
Both companies share a strong commitment to safe, beneficial AGI development. Their publicly available strategy documents consistently indicate a shared emphasis on prioritizing safety, mitigating catastrophic and existential risks, and ensuring beneficial outcomes for humanity. This deep commitment to responsible development and deployment establishes a higher benchmark for an AGI claim. Their strategy is rooted in comprehensive evaluations and robust risk management, rather than driven by the mere ambition of being the first to make such an announcement.

10. What Could Change the Odds

Key Catalysts

Catalyst analysis unavailable.

Key Dates & Catalysts

  • Expiration: July 08, 2026
  • Closes: January 01, 2031

11. Decision-Flipping Events

  • Trigger: Catalyst analysis unavailable.

12. Historical Resolutions

No historical resolution data available for this series.