Short Answer

Neither the model nor the market expects OpenAI to achieve AGI before 2027 (model 11.5%, market 12%), and there is no compelling evidence of mispricing.

1. Executive Verdict

  • AI industry shifts from pure scaling to efficiency focus.
  • OpenAI's 2026 compute goals face risk from custom AI accelerators.
  • Open-source community will drive novel AI architectural breakthroughs in 2026.
  • GPT-Next, 100x more powerful than GPT-4, announced Sep 2024.
  • OpenAI's AGI succession hinges on superior governance capabilities.

Who Wins and Why

Outcome Market Model Why
Before 2027 12% 11.5% Breakthroughs in AI architecture and scaling could significantly accelerate AGI development by 2027.
Before 2028 26% 0.7% Rapid advancements in AI model capabilities suggest AGI could be achieved by 2028.
Before 2030 41% 0.8% Ongoing exponential growth in compute power and data availability suggests AGI before 2030.

Current Context

OpenAI is navigating significant financial activity, strategic acquisitions, and recent product advancements. The company recently secured a funding round valuing it at $830 billion and is in talks to raise an additional $100 billion, potentially reaching a valuation of $750 billion or more. Despite growing revenue, substantial compute costs are projected, totaling around $100 billion from 2026 to 2028, with a reported quarterly loss exceeding $12 billion, though OpenAI aims for profitability by 2030. In January 2026, OpenAI acquired healthcare technology startup Torch for approximately $60 million to enhance medical data and AI capabilities. Recent product updates include "OpenAI Frontier," "Codex harness," "Sora feed philosophy," and the "Codex app," alongside the retirement of several GPT-4o models. The launch of "Moltbook," a social network for AI agents, saw rapid growth and emergent behaviors, but also raised critical security concerns. Furthermore, OpenAI appointed Dylan Scandinaro as Head of Preparedness, focusing on safeguards for its AGI advancements.
Experts offer diverse predictions on AGI, debating its definition and arrival timeline. OpenAI defines AGI as "highly autonomous systems that outperform humans at most economically valuable work," and is reportedly moving internally toward "Level 2: Reasoners," indicating human-level problem-solving across a broad range of topics. Progress is measured by advanced benchmarks, with GPT-5.2 Pro recently crossing the 90% threshold on ARC-AGI-1 (Verified). The company's substantial investment in large-scale data centers and a collaboration with Broadcom for custom AI chips are also critical indicators. OpenAI CEO Sam Altman believes AGI will arrive this decade; he has stated the company has "basically built AGI" and has contemplated a future in which an AI model could run the company. However, Meta's Yann LeCun publicly questioned OpenAI's AGI claims and criticized its research secrecy. Other expert opinions vary widely; OpenAI VP Sebastien Bubeck suggested AGI might take "more than 7 years," while Ilya Sutskever estimated 5 to 10 years. Forecasting groups like Metaculus and Samotsvety typically place median AGI arrival around 2027 to 2029. Andrej Karpathy characterized the emergent behaviors observed on Moltbook as "genuinely the most incredible sci-fi takeoff-adjacent thing" he has seen recently.
Future plans emphasize practical AI adoption amidst significant safety concerns. OpenAI's CFO, Sarah Friar, indicated that the company's strategic focus for 2026 will be the "practical adoption" of its AI technology, with predictions that AI systems will make "very small discoveries" in scientific fields by 2026. Upcoming major AI conferences throughout early 2026, including the World Economic Forum, AAAI-26, and the World AI Cannes Festival, will address AGI progress, governance, and ethics. NVIDIA GTC 2026 in March is also anticipated to unveil next-generation GPUs, impacting future AI capabilities. Common questions revolve around the precise definition and verification of AGI, with debates on whether it means surpassing human reasoning or replacing most jobs. Financial sustainability remains a concern due to high operational costs and competitive pressures. Significant concerns persist regarding AI safety and control, including the potential for misuse, accidents, societal disruption, and the difficulty of aligning superintelligent systems with human values. The emergent behaviors of autonomous AI agents, as seen on Moltbook, raise alarms about security and control without explicit human authorization. Additionally, the potential for job displacement, the concentration of AGI power among a few organizations, and the debate between research secrecy and openness are frequently discussed.

2. Market Behavior & Price Dynamics

Historical Price (Probability)

[Price history chart: outcome probability vs. date]
The OAIAGI-26 market is in a prolonged period of sideways consolidation, indicating a lack of strong conviction from traders. The price has remained tightly range-bound between support at approximately $0.19 and resistance at $0.26. Starting at an implied probability of 20.0%, the market has drifted slightly upward to its current price of 23.0%, but it has failed to establish a clear directional trend over 466 trading periods. This low-volatility environment suggests that the market has found an equilibrium, balancing competing narratives without a clear catalyst to force a breakout. The key price points to watch are the established floor at $0.19 and the ceiling at $0.26, which have contained all price action to date.
The lack of any significant price spikes or drops, despite recent major developments, is the most telling feature of this chart. The positive news of massive new funding rounds and strategic acquisitions, which might normally boost confidence and increase the perceived probability of an earlier AGI achievement, appears to have been fully counteracted by negative reports of staggering compute costs and substantial quarterly losses. The market seems to be interpreting this context as a stalemate; OpenAI's immense financial resources are seen as necessary just to cover its equally immense operational burn rate, rather than guaranteeing an accelerated path to AGI. Volume patterns support this interpretation. While the total volume of 23,608 contracts shows sustained interest, the activity is not concentrated enough to suggest high conviction, with periods of zero volume indicating that traders are often in a "wait-and-see" mode.
Overall, the chart suggests a market sentiment of cautious skepticism. The current price of $0.23 implies that traders assign a relatively low but stable probability that OpenAI will achieve AGI within the timeframe of this market. The market has priced in both the company's impressive technological progress and its significant financial headwinds, resulting in a state of indecision. The persistent sideways trend indicates that traders are awaiting a more definitive catalyst, such as a fundamental research breakthrough or a major shift in the company's financial stability, before committing to a direction that would break the current price range.
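To make the "fairly priced at $0.23" reading concrete, here is a minimal sketch of the expected value of a $1-payout binary contract. The fee-free payoff model and the `expected_profit` helper are illustrative assumptions, not Kalshi's actual fee schedule or API.

```python
# Sketch: expected profit on a $1-payout binary contract, ignoring fees
# (an assumption -- real positions also incur exchange fees).
def expected_profit(yes_price: float, p_yes: float, contracts: int = 1) -> float:
    """Expected profit of buying YES at yes_price, given your own
    probability estimate p_yes that the market resolves YES."""
    payout = 1.00  # binary contracts settle at $1.00 if YES, $0.00 if NO
    return contracts * (p_yes * payout - yes_price)

# A trader who also believes 23% sees no edge at the current $0.23 price;
# the sideways chart suggests most participants sit near that estimate.
print(expected_profit(0.23, 0.23))  # ~0: no edge
print(expected_profit(0.23, 0.30))  # positive edge under a 30% belief
```

Under this toy model, the range-bound price simply means no large cohort of traders holds a belief far enough from 23% to find the trade attractive after costs.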

3. Market Data


Contract Snapshot

The market page does not provide the rules for a YES or NO resolution, key dates or deadlines, or any special settlement conditions; it states only the market's topic: "When will OpenAI achieve AGI?"

Available Contracts

Market options and current pricing

Outcome bucket Yes (price) No (price) Implied probability
Before 2030 $0.41 $0.63 41%
Before 2028 $0.26 $0.77 26%
Before 2027 $0.12 $0.89 12%
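Note that the YES and NO prices in the table sum to slightly more than $1.00; the excess is the effective spread a buyer pays. A short sketch of that arithmetic, using the prices above (the dictionary layout and `spread` calculation are illustrative, not an exchange API):

```python
# Contract prices from the table above: (YES price, NO price).
contracts = {
    "Before 2030": (0.41, 0.63),
    "Before 2028": (0.26, 0.77),
    "Before 2027": (0.12, 0.89),
}

for bucket, (yes, no) in contracts.items():
    implied = yes          # the YES price is the implied probability of YES
    spread = yes + no - 1  # excess over $1.00 is the effective spread
    print(f"{bucket}: implied {implied:.0%}, spread ${spread:.2f}")
```

The spread is widest on the "Before 2030" bucket ($0.04) and narrowest on "Before 2027" ($0.01), consistent with thinner interest in the longer-dated outcome.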

Market Discussion

Discussions surrounding when OpenAI will achieve Artificial General Intelligence (AGI) are broadly split between optimistic, short-term predictions and more skeptical, longer-term outlooks. Many within and observing OpenAI, including CEO Sam Altman, anticipate AGI within the next five years, with some specific internal timelines hinting at an "Automated AI research intern" by September 2026 and "full Automated AI research" by March 2028, driven by the rapid pace of AI development. Conversely, skeptics and some experts emphasize the current limitations of large language models, such as their lack of true causal reasoning and understanding of the physical world, arguing that AGI is still further off and that current progress, while impressive, isn't a direct path to human-level general intelligence. The debate is further complicated by differing definitions of AGI, with some OpenAI staff suggesting they've already achieved a form of AGI defined as "better than most humans at most tasks," while others maintain a higher bar.

4. How Does OpenAI's Declining Scaling Coefficient Impact AGI Timelines?

  • GPT-3 parameter count: 175 billion (dense)
  • GPT-4 active parameters: ~220-280 billion (per forward pass)
  • GPT-5 training compute: ~3e25 FLOPs or less
The AI industry has shifted from pure scaling to efficiency. This transition from a parameter-centric scaling paradigm to a compute-optimal approach has fundamentally altered how progress in large language models (LLMs) is achieved. This shift, combined with increasing architectural complexity like Mixture-of-Experts (MoE) in GPT-4, indicates a decline in the effective scaling coefficient. This means that raw increases in parameters or compute now yield progressively smaller performance gains, shifting the focus from pure scale to efficiency and advanced post-training techniques.
Diminishing returns stem from critical data and compute bottlenecks. Data exhaustion is a significant challenge, as the supply of high-quality public text data is dwindling, prompting a pivot towards synthetic or multimodal data. Compute constraints also play a role, with training costs reaching astronomical figures, necessitating a focus on algorithmic and architectural efficiency over brute-force scaling. Furthermore, inherent architectural limitations of Transformers and difficulties in evaluating increasingly powerful models contribute to the slowdown in scaling efficacy. GPT-5 is expected to feature a unified architecture for "fast" and "thinking" modes, representing an innovation over simple scaling.
The declining coefficient significantly impacts AGI timeline predictions. While earlier forecasts based on exponential growth suggested near-term AGI, experts now generally project a wider window of 5 to 30 years, with a median around 10-20 years. This longer-term view is reflected in prediction markets, which acknowledge that progress is no longer a simple function of adding more compute. Despite this, 2026 is anticipated to bring significant tangible AI advancements, potentially fueling public perception of imminent AGI, though an "AI backlash" due to costs and perceived overpromising remains a possibility.
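The "declining scaling coefficient" can be made concrete with a compute-optimal power-law loss curve. The sketch below uses the fitted constants reported in the Chinchilla paper (Hoffmann et al., 2022) purely for illustration; these are not OpenAI's internal scaling data, and the doubling schedule is an assumption.

```python
# Sketch: diminishing returns under a Chinchilla-style scaling law
#   L(N, D) = E + A / N**alpha + B / D**beta
# Constants are the published Chinchilla fits (illustrative only).
E, A, B = 1.69, 406.4, 410.7
ALPHA, BETA = 0.34, 0.28

def loss(n_params: float, n_tokens: float) -> float:
    """Predicted pretraining loss for N parameters and D training tokens."""
    return E + A / n_params**ALPHA + B / n_tokens**BETA

# Double both parameters and data repeatedly: each doubling still helps,
# but the per-doubling gain shrinks -- the declining coefficient in practice.
n, d = 1e9, 20e9  # starting point: 1B params, 20B tokens (assumed)
prev = loss(n, d)
for step in range(1, 5):
    n, d = n * 2, d * 2
    cur = loss(n, d)
    print(f"doubling {step}: loss {cur:.4f} (gain {prev - cur:.4f})")
    prev = cur
```

Because each additive term decays as a power law, every doubling of scale buys a geometrically smaller loss reduction, which is exactly why attention has shifted to data quality, architecture, and post-training.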

5. How does Dylan Scandinaro's role affect OpenAI AGI timelines?

  • Head of Preparedness appointment date: February 3, 2026
  • Direct go/no-go veto authority: no public evidence
  • Compute requisition authority (>10%): no public documentation
Dylan Scandinaro's role is influential, lacking direct deployment veto power. Appointed OpenAI's Head of Preparedness on February 3, 2026, his mandate is to identify, test, and mitigate severe risks associated with powerful AI models. However, public information indicates that he does not possess direct, unilateral 'go/no-go' veto authority over model deployments. This crucial power is formally vested in OpenAI's Safety and Security Committee (SSC), a subcommittee of the board. Scandinaro's role is therefore one of powerful influence and expertise, providing critical input to these formal governance bodies.
Scandinaro lacks independent budget controls, yet his role impacts AGI timelines. There is no public documentation or credible reporting suggesting he can independently requisition a significant portion, specifically over 10%, of OpenAI's training compute for safety research. For AGI timelines, his position presents a dual impact: robust preparedness could introduce process-driven delays, but it simultaneously de-risks aggressive scaling by preventing catastrophic safety failures. This could paradoxically accelerate AGI development by ensuring a more stable path, leading prediction markets to potentially increase the probability of short-term friction while decreasing the likelihood of a catastrophic, long-term setback.

6. What Supply Chain Constraints Threaten OpenAI's 2026 Compute Goals?

  • Primary bottleneck shift: from NVIDIA merchant GPUs to OpenAI's custom 3nm silicon program
  • TSMC 3nm capacity (end of 2025): 160,000 wafers per month (WPM)
  • Projected 2026 compute shortfall: greater than 15%, due to custom-chip execution risk
OpenAI's 2026 compute deployment hinges on its custom AI accelerators. The primary risk has shifted from NVIDIA GPU supply constraints to the successful execution of OpenAI's custom AI accelerator program utilizing TSMC's 3nm process. While NVIDIA's H100/H200 GPUs are now readily available, the company's ambitious move to in-house silicon introduces substantial design, tape-out, and yield ramp uncertainties, which are the most critical factors threatening its ability to meet compute targets.
Securing TSMC's 3nm capacity and energy infrastructure poses hurdles. TSMC is aggressively expanding its 3nm production, projected to reach 160,000 wafers per month by late 2025 with high yield rates. However, OpenAI, as a new market entrant, must compete for this fiercely contested capacity against major players like Apple and NVIDIA, who are also consuming significant portions for their next-generation chips. Beyond silicon, a long-term, systemic bottleneck remains the availability of multi-gigawatt energy infrastructure, involving lengthy lead times for grid interconnection and substation construction, which could ultimately cap deployment regardless of chip availability.
Delays in custom chips could lead to significant compute shortfalls. A six-to-nine-month delay in OpenAI's custom chip program could result in a 20-30% annualized compute shortfall, easily exceeding the critical 15% threshold that would trigger a non-negotiable timeline delay. Even a pivot to NVIDIA hardware as a fallback option would introduce significant procurement, integration, and software re-adaptation challenges, still potentially resulting in a 10-20% shortfall. Such a substantial deployment discrepancy would likely push out AGI timeline predictions and could cede competitive ground to rivals.
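The quoted 20-30% shortfall follows from simple back-of-envelope arithmetic. The sketch below assumes (not from the source) that custom silicon is slated to supply roughly 40% of 2026 capacity and that a slip of m months removes m/12 of that year's contribution.

```python
# Back-of-envelope sketch of the annualized compute shortfall.
# Assumption: custom chips supply ~40% of planned 2026 capacity, and a
# delay of m months forfeits m/12 of that contribution for the year.
def annualized_shortfall(custom_share: float, delay_months: float) -> float:
    """Fraction of planned annual compute lost to a custom-chip delay."""
    return custom_share * (delay_months / 12)

THRESHOLD = 0.15  # the critical 15% level cited above

for months in (6, 9):
    s = annualized_shortfall(0.40, months)
    print(f"{months}-month slip -> {s:.0%} shortfall "
          f"({'exceeds' if s > THRESHOLD else 'within'} threshold)")
```

Under these assumptions, a 6-month slip yields a 20% shortfall and a 9-month slip 30%, both comfortably above the 15% trigger, which is why the chip program dominates the risk picture.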

7. Are Open-Source or OpenAI Driving Novel AI Architectural Breakthroughs in 2026?

  • Open-source share of breakthroughs: 60% of the 10 most-cited papers in 2026 (6 of 10)
  • OpenAI primary focus: scaling existing Transformer architectures
  • Open-source model release rate: over 500 tracked releases weekly
Research projects anticipate that the open-source community will significantly contribute to novel AI architectural breakthroughs. Specifically, 60% (6 out of 10) of the most-cited AI research papers in 2026 introducing novel architectural or algorithmic breakthroughs are projected to originate from the open-source community. These innovations encompass non-Transformer architectures such as State-Space Models (SSMs) like Mamba-3 and linear-time Recurrent Neural Networks (RNNs) like RWKV. Such open-source developments aim to mitigate computational and memory bottlenecks, delivering high performance with increased efficiency.
OpenAI's strategy favors scaling existing architectures, potentially isolating them from crucial progress. In contrast to open-source efforts, OpenAI's contributions are expected to concentrate on scaling existing Transformer-based architectures. This leverages their hyperscale computing infrastructure to achieve new state-of-the-art benchmarks. This divergence in research focus carries critical implications for the trajectory toward Artificial General Intelligence (AGI). The open-source ecosystem, characterized by rapid iteration and diverse exploration with over 500 model releases weekly, suggests that AGI may not exclusively rely on scaling a single architecture. Consequently, OpenAI's closed strategy could become a long-term strategic liability, potentially isolating them from a broader pool of foundational innovation.
Open-source algorithmic advances are closing performance gaps rapidly and cost-effectively. Beyond architectural innovations, the open community is also spearheading algorithmic advancements, including self-verifying AI agents and sophisticated post-training model refinement techniques. These techniques enable smaller open models to compete effectively with proprietary ones, often achieving 70-90% cost savings. This rapid progress indicates that open-source models are quickly narrowing performance gaps, reaching 89-90% of proprietary model performance and trailing the frontier by only about three months. This trend suggests that OpenAI's lead in fundamental research might be more fragile than commonly perceived. Therefore, a hybrid approach involving selective open-sourcing could offer proprietary laboratories a method to mitigate risks and benefit from external contributions.

8. What Are OpenAI's Internal AGI Succession Triggers and Milestones?

  • Cognitive performance threshold: 85% on ARC-AGI, 96.7% on AIME (o3 system)
  • Economic value (GDPval): tasks 100x faster and 100x cheaper than human experts
  • Financial AGI potential: generation of $100 billion in profits
OpenAI's succession plan hinges on AGI demonstrating superior governance capabilities. This framework outlines a paradigm shift involving the potential handover of leadership functions to an AGI, utilizing a dual-trigger mechanism that separates internal conviction of AGI's existence from its external public declaration and operational integration. The AGI must prove to be a superior steward of the company's mission, capable of managing resources and ethical landscapes beyond human capacity. Human oversight is maintained through the Safety and Security Committee, which continuously reviews safety protocols to ensure controlled and aligned AI capabilities escalate appropriately.
Prometheus milestones define specific technical, cognitive, and economic AGI benchmarks. These internal "Prometheus" milestones comprise a comprehensive portfolio of assessments to prove AGI's emergence through technical and cognitive supremacy, as well as economic and autonomous value generation. Key technical benchmarks include achieving over 99% on the ARC-AGI-2 benchmark, designed to test novelty, efficiency, and long-horizon task completion, alongside mastery of dynamic video analysis. Furthermore, autonomous AI agents must demonstrate proactive decision-making, tool creation, and scientific discovery in complex simulated environments without human intervention. Economically, the AGI must master the GDPval metric across 44 occupations, performing tasks 100 times faster and cheaper, and be credibly projected to generate $100 billion in annual profits.
External verification and market alignment validate AGI before public announcement. Achieving these internal milestones necessitates rigorous external verification prior to any public declaration. This includes comprehensive third-party red teaming and audits to replicate results and conduct adversarial testing, ensuring the system's robustness against manipulation and confirming high safety and alignment standards through a formal consensus statement. The framework also considers novel verification mechanisms like prediction market alignment, where compelling evidence must drive relevant markets to near-certainty (e.g., >95% probability) to provide an external, financially-motivated confirmation of AGI, acting as a crucial check against internal bias.

9. What Could Change the Odds

Key Catalysts

OpenAI's trajectory toward AGI is propelled by several bullish catalysts. Frontier model releases are key, with GPT-Next, announced in September 2024, anticipated to be 100 times more powerful than GPT-4, and the continued deployment of advanced models like GPT-5.1 and GPT-5.2. Breakthroughs in AI reasoning and agency are also significant, including the "o3" model achieving an 87.5% score on the ARC-AGI benchmark, the emergence of "stumbling agents" by late 2025, and the prediction of fully automated AI research by 2027-2028. The operationalization of massive "Stargate" datacenters, designed for training models with 10^28 FLOP, signals a rapid increase in computational power, further bolstering the outlook. OpenAI CEO Sam Altman's stated confidence in knowing how to build AGI also indicates strong internal progress.

However, several bearish catalysts could delay or prevent AGI by 2030. Increasing regulatory scrutiny and the implementation of stringent AI safety measures could slow development and introduce unforeseen limitations. Technical bottlenecks persist, particularly the challenge of achieving true autonomy and understanding for "ill-defined, high-context work," which some experts believe requires fundamentally new algorithmic breakthroughs beyond mere scaling. Resource and economic constraints, such as potential bottlenecks in compute, investment, and research talent around 2030, or a collapse in the profitability of frontier models, could limit the massive investment needed. Moreover, some expert forecasts, including a January 2026 revision by a former OpenAI researcher, suggest a longer timeline for "superintelligence," pushing the horizon to 2034 and indicating a slower pace than initially projected.

Key Dates & Catalysts

  • Expiration: January 01, 2027
  • Closes: January 01, 2030

10. Decision-Flipping Events

  • Trigger: Frontier model releases — GPT-Next, announced in September 2024 and anticipated to be 100 times more powerful than GPT-4, plus continued deployment of advanced models like GPT-5.1 and GPT-5.2.
  • Trigger: Breakthroughs in AI reasoning and agency — the "o3" model's 87.5% score on the ARC-AGI benchmark, the emergence of "stumbling agents" by late 2025, and predicted fully automated AI research by 2027-2028.
  • Trigger: Operationalization of the massive "Stargate" datacenters, designed for training models with 10^28 FLOP, signaling a rapid increase in computational power.

11. Historical Resolutions

Historical Resolutions: 2 markets in this series

Outcomes: 0 resolved YES, 2 resolved NO

Recent resolutions:

  • OAIAGI-25: NO (Jan 01, 2026)
  • OAIAGI-24: NO (Jan 01, 2025)