Short Answer

Both the model and the market expect the NVIDIA H100 SXM compute price to exceed the 'Price to Beat' of $1.7717 on April 10, 2026, with no compelling evidence of mispricing.

1. Executive Verdict

  • NVIDIA Blackwell and Rubin GPUs face overwhelming hyperscaler and cloud demand.
  • Flagship AI models will significantly increase training compute requirements by late 2025.
  • Cloud providers' custom AI chips will significantly boost compute capacity by 2025.
  • Power infrastructure remains a critical bottleneck for AI data center expansion.
  • NVIDIA Rubin GPUs are anticipated for public unveiling at CES 2026.

Who Wins and Why

Outcome                 Market   Model   Why
Price to Beat: 1.7717   54.0%    59.9%   Sustained high demand for AI compute and limited H100 supply will keep prices elevated.
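
To make the model-versus-market gap concrete, here is a minimal expected-value sketch. It assumes the 54% last-trade probability as the effective cost of a $1-payout Yes contract; the quoted Yes/No prices in section 4 imply wider spreads, so this is illustrative arithmetic, not a trading analysis.

```python
# Minimal EV sketch for a binary contract that pays $1.00 on YES.
# Assumption: the last-trade probability (54%) is the effective cost
# of a Yes contract; order-book quotes imply wider spreads.
model_prob = 0.599   # model probability from the verdict table
market_prob = 0.540  # market (last-trade) probability

edge = model_prob - market_prob           # +5.9 percentage points
ev_yes = model_prob * 1.00 - market_prob  # expected profit per contract

print(f"Model edge over market: {edge:+.1%}")
print(f"EV of one Yes contract at last trade: {ev_yes:+.3f} dollars")
```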

2. Market Behavior & Price Dynamics

Historical Price (Probability)

This market, which speculates on the future price of NVIDIA H100 SXM compute, has undergone a dramatic and rapid repricing since its inception. The probability opened at an extremely low 2.0%, indicating a strong consensus against the price threshold being exceeded. Around April 6, 2026, the market saw an explosive 42.0-percentage-point spike (from 3.0% to 45.0%) and has since climbed to its current probability of 54.0%. This represents a complete reversal in sentiment, from a near-certain "No" to a majority "Yes", in less than a week. The initial low of 2.0% acts as a historical support level, while the current high of 54.0% is the immediate resistance the market is testing.
The cause of this significant move is not apparent from the available context. A repricing this sharp typically corresponds to a major news event or a shift in fundamental analysis, but no specific catalyst has been identified. Total trading volume is 160 contracts, which is light. The absence of recorded volume on the sampled dates, despite the large price swings, suggests the moves were driven by a small number of high-conviction traders rather than broad market participation. Such low liquidity tends to amplify volatility and sharp price changes.
Overall, the chart indicates a powerful and sudden shift in market sentiment: traders moved rapidly from viewing a price increase as highly unlikely to viewing it as the more probable outcome, though the low overall volume tempers the conviction behind the move. The market now prices a greater-than-50% chance that H100 compute prices will exceed the threshold by the resolution date, reflecting a significant change in expectations about supply, demand, or other drivers of AI hardware costs.

3. Significant Price Movements

Notable price changes detected in the chart, along with research into what caused each movement.

📈 April 06, 2026: 42.0pp spike

Price increased from 3.0% to 45.0%

Outcome: Price to Beat: 1.7717

What happened: No supporting research available for this anomaly.

4. Market Data

View on Kalshi →

Contract Snapshot

This market resolves to YES if the NVIDIA H100 SXM compute price per hour is above $1.7717 on April 10, 2026; otherwise, it resolves to NO. The outcome is verified from Ornn (dashboard.ornnai.com) based on the value at the market close on April 10, 2026, 5:00 PM EDT. Values are rounded to two decimal places, and revisions to the underlying data after expiration will not be accounted for. If no data is available by the expiration date, the market will resolve to NO.
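
As a minimal sketch of the resolution rule just described, the logic reduces to a threshold check with a NO fallback when no data is available. Whether Kalshi rounds the underlying value before or after the comparison is an assumption here, not confirmed by the contract text.

```python
PRICE_TO_BEAT = 1.7717  # threshold from the contract terms

def resolve(observed_price):
    """Sketch of the stated rule: YES if the Ornn H100 SXM price at the
    April 10, 2026 close exceeds the threshold; NO if not, or if no data.
    Rounding before the comparison is an assumption, not confirmed."""
    if observed_price is None:        # no data by expiration -> NO
        return "NO"
    value = round(observed_price, 2)  # contract rounds to two decimals
    return "YES" if value > PRICE_TO_BEAT else "NO"

print(resolve(1.78))   # YES: 1.78 > 1.7717
print(resolve(1.77))   # NO:  1.77 < 1.7717
print(resolve(None))   # NO:  no-data fallback
```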

Available Contracts

Market options and current pricing

Outcome bucket          Yes (price)   No (price)   Last trade probability
Price to Beat: 1.7717   $0.89         $0.95        54%

Market Discussion

Limited public discussion available for this market.

5. What is the expected NVIDIA Blackwell and Rubin GPU supply-demand gap?

  • Blackwell and GB200 GPUs: Sold out through mid-2026, 3.6 million unit backlog [^]
  • Total NVIDIA GPU Orders: Estimated $1 trillion for Blackwell and Rubin architectures [^]
  • TSMC CoWoS Capacity Target: 130,000 wafers monthly by late 2026 [^]
NVIDIA's Blackwell and Rubin GPUs face overwhelming hyperscaler and cloud demand. Blackwell B200 and GB200 units are sold out through mid-2026, leaving a backlog of 3.6 million units [^]. Underscoring this demand, NVIDIA has secured approximately $1 trillion in orders for its next-generation Blackwell and Rubin GPUs [^]. Reports indicate that the unprecedented scale of demand for Blackwell B300 has "completely broken" existing data center planning models [^].
TSMC's CoWoS capacity will significantly lag GPU demand in Q1 2026. CoWoS advanced packaging is a critical bottleneck for these complex GPUs, and despite TSMC's substantial capacity expansion, with NVIDIA dominating orders through 2027 [^], supply is expected to fall considerably short. TSMC aims to reach a monthly CoWoS output of 130,000 wafers by late 2026 [^] and specifically targets 150,000 CoWoS wafers for the Rubin architecture [^], but the capacity available in Q1 2026 will be considerably lower than these targets as the ramp-up progresses. Even with increased production, the existing backlog and continued robust orders for Blackwell and Rubin architectures mean supply is projected to fall short of escalating demand in Q1 2026; a rough sense of the Q1 shortfall relative to the ramp target is sketched below.
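
To illustrate why Q1 2026 capacity sits well below the late-2026 target, here is a simple linear-ramp sketch. The starting capacity and ramp shape are illustrative assumptions, not sourced figures; only the 130,000-wafer end point comes from the text above.

```python
# Illustrative linear ramp toward TSMC's late-2026 CoWoS target.
# ASSUMPTIONS (not sourced): ~75,000 wafers/month entering 2026 and a
# straight-line 12-month ramp; only the 130,000 target is from the text.
START = 75_000    # wafers/month, Jan 2026 (assumed)
TARGET = 130_000  # wafers/month, late 2026 (sourced)
MONTHS = 12

for month in (1, 2, 3):  # Q1 2026
    capacity = START + (TARGET - START) * month / MONTHS
    print(f"2026-{month:02d}: ~{capacity:,.0f} wafers/month "
          f"({capacity / TARGET:.0%} of the late-2026 target)")
```

Under these assumptions, Q1 2026 output would sit in the 60-70% range of the stated late-2026 target, consistent with the shortfall described above.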

6. How Will AI Model Compute Requirements Evolve by 2025?

  • GPT-6 Training FLOPs Increase: 20 to 100 times more than predecessors [^]
  • GPT-5 Estimated Training FLOPs: 3-5e24 FLOPs [^]
  • Predominant AI Workload Shift: Towards inference [^]
Flagship foundation models are expected to require significantly more training compute by late 2025. While an intermediate model like GPT-4.5 was estimated to require around 1e25 FLOPs, GPT-5 may have used less, approximately 3-5e24 FLOPs [^]. This reduction is considered an anomaly, however, with subsequent models such as GPT-6 projected to demand 20 to 100 times more training FLOPs than their predecessors [^]. Concurrently, Meta's Llama 4 family is expected to feature models with substantial parameter counts to support advanced multimodal capabilities, signaling continued intensive training efforts [^]. Overall, demand for AI training compute is predicted to increase rather than decrease [^].
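
Taking the sourced figures at face value, the implied GPT-6 training budget is straightforward arithmetic. The sketch below treats GPT-5's estimated 3-5e24 FLOPs as the base, which is an interpretive assumption about which "predecessors" the multiplier applies to.

```python
# Implied GPT-6 training compute: the sourced GPT-5 estimate (3-5e24
# FLOPs) times the sourced 20-100x growth factor. Treating GPT-5 as the
# relevant predecessor is an interpretive assumption.
gpt5_low, gpt5_high = 3e24, 5e24
mult_low, mult_high = 20, 100

print(f"Implied GPT-6 range: {gpt5_low * mult_low:.0e} "
      f"to {gpt5_high * mult_high:.0e} FLOPs")
# -> 6e+25 to 5e+26 FLOPs, i.e. roughly 6x to 50x GPT-4.5's ~1e25 estimate
```
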
A clear structural shift favors inference over initial large-scale training, a trend termed the 'post-training revolution.' A significant portion of a model's capabilities and optimization now occurs after the initial pre-training phase, through advanced techniques such as fine-tuning, alignment, and reinforcement learning from human feedback (RLHF) [^]. Analysis by McKinsey corroborates this shift, confirming that while training large foundation models remains compute-intensive, the predominant share of AI compute workloads is transitioning towards inference [^]. Hyperscalers are consequently adapting their strategies to meet this growing inference demand, which often necessitates distinct hardware and software optimizations compared to those used for training [^].

7. How Will Cloud Providers' Custom AI Chips Impact Capacity by 2025?

  • AWS Trainium 2 New AI Training Capacity: 25-35% by end of 2025 [^]
  • Google Cloud TPUs New External Customer AI Capacity: 35-45% by end of 2025 [^]
  • Microsoft Azure Maia 200 New Total AI Accelerator Capacity: 15-25% by end of 2025 [^]
Major cloud providers anticipate significant in-house AI accelerator contributions by 2025. By the end of 2025, AWS projects its Trainium 2 chips to fulfill 25-35% of its new AI training capacity, even as the company continues substantial investment in NVIDIA GPUs to meet overall demand and expand AWS capacity [^]. Google Cloud, leveraging its extensive investment in TPUs, aims for these in-house chips to cover an estimated 35-45% of new external customer-facing AI accelerator capacity. The company also anticipates TPUs potentially reaching 40-50% of new AI capacity in its own cloud offerings by the end of 2025 [^].
Microsoft Azure prioritizes in-house silicon for AI inference workloads. Its custom Maia 200 silicon is primarily targeted for inference, expected to contribute 20-30% of new AI inference capacity by late 2025 [^]. More broadly, Maia 200 is anticipated to fulfill 15-25% of Azure's new total AI accelerator capacity by the end of 2025, though the company's AI training capacity is projected to continue its heavy reliance on externally sourced NVIDIA GPUs [^].
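
For a rough side-by-side of the three providers, the midpoints of the quoted ranges can be tabulated. The ranges come from the text above; the midpoint summary is illustrative arithmetic only.

```python
# Midpoints of the quoted end-of-2025 in-house accelerator share ranges.
# Ranges are from the text; the midpoint summary is illustrative only.
ranges = {
    "AWS Trainium 2 (new AI training capacity)": (25, 35),
    "Google Cloud TPUs (new external customer AI capacity)": (35, 45),
    "Azure Maia 200 (new total AI accelerator capacity)": (15, 25),
}

for name, (lo, hi) in ranges.items():
    print(f"{name}: {lo}-{hi}% (midpoint {(lo + hi) / 2:.0f}%)")
```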

8. How Do Power Constraints Impact AI Data Center Growth?

  • New Power Delivery Delay: Two to five years [^]
  • US Data Center Build Delays/Cancellations: Approximately half [^]
  • Oracle/OpenAI Project Delay: From 2027 to 2028 [^]
New power infrastructure is a critical bottleneck for AI data center expansion. The rapid growth of AI infrastructure is significantly constrained by the availability of new utility-scale power, leading to substantial delays in power delivery. New data center projects across major US markets are experiencing wait times of two to five years for essential power hookups [^]. This creates a de facto shortfall in accessible grid power relative to projected consumption, particularly evident in hubs like Northern Virginia, where data center demand in Loudoun County alone is forecast to exceed 3,000 MW by 2030 [^].
Power infrastructure deficits cause significant data center project delays and cancellations. Approximately half of planned US data center builds have been either delayed or canceled, with similar issues potentially affecting up to half of the world’s data centers this year [^]. A notable example includes Oracle delaying certain data center projects intended for OpenAI, which pushes their full operational status from an anticipated 2027 to 2028 [^]. These delays are primarily attributed to a lack of sufficient power infrastructure and difficulties in securing necessary components for development [^].

9. When Will NVIDIA Rubin Benchmarks Be Released Relative to Cloud Pricing?

  • Rubin Architecture Full Production: Early January 2026 (CES 2026) [^]
  • MLPerf Inference v6.0 Benchmarks: April 10, 2026 [^]
  • Cloud Provider H100 Pricing Set: Q1 2026 (ending March 31, 2026) [^]
NVIDIA's Rubin GPUs are anticipated for public unveiling at CES 2026. The Rubin-architecture GPUs are expected to enter full production and be publicly unveiled at the event, which is typically held in early January [^]. Initial details about the Rubin platform, particularly its advanced capabilities for AI inference, are anticipated there [^]. Independent performance benchmarks, such as those from MLPerf, generally follow such a production announcement; MLCommons has officially scheduled the release of MLPerf Inference v6.0 benchmark results for April 10, 2026 [^]. While NVIDIA has already submitted Blackwell Ultra benchmarks in this round [^], comprehensive independent benchmarks for Rubin-architecture GPUs would align with this MLPerf release or subsequent rounds.
Independent Rubin benchmarks will follow cloud H100 pricing decisions. A critical timing implication arises when comparing the MLPerf release with cloud provider pricing schedules. Major cloud providers are projected to finalize their pricing for H100-era compute during Q1 2026, a period concluding on March 31, 2026 [^]. Given that the MLPerf Inference v6.0 results will become available on April 10, 2026, these independent performance benchmarks for Rubin-architecture GPUs will be released after cloud providers have made their Q1 2026 pricing decisions [^]. This sequence indicates that initial cloud pricing for H100-era compute will be set before detailed, independent performance metrics for Rubin are widely accessible, potentially influencing market dynamics without the full benchmark data.
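
The timing argument reduces to ordering three dates. All are taken from the text above, except the exact CES day, which is approximated to early January as an assumption.

```python
from datetime import date

# Dates from the text; the CES unveiling day is approximated (assumed).
events = {
    "Rubin unveiling, CES 2026 (approx.)": date(2026, 1, 6),
    "Q1 2026 H100 pricing window closes": date(2026, 3, 31),
    "MLPerf Inference v6.0 results": date(2026, 4, 10),
}

for name, when in sorted(events.items(), key=lambda kv: kv[1]):
    print(f"{when.isoformat()}  {name}")

gap = (events["MLPerf Inference v6.0 results"]
       - events["Q1 2026 H100 pricing window closes"])
print(f"Benchmarks land {gap.days} days after Q1 pricing is set.")
```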

10. What Could Change the Odds

Key Catalysts

Catalyst analysis unavailable.

Key Dates & Catalysts

  • Strike Date: April 10, 2026
  • Expiration: April 17, 2026
  • Closes: April 10, 2026

11. Decision-Flipping Events

  • Trigger: Catalyst analysis unavailable.

12. Historical Resolutions

Historical Resolutions: 2 markets in this series

Outcomes: 2 resolved YES, 0 resolved NO

Recent resolutions:

  • KXH100W-26APR03-1.761: YES (Apr 03, 2026)
  • KXH100W-26MAR27-1.6992: YES (Mar 27, 2026)