Short Answer

Both the model and the market expect the NVIDIA H100 SXM compute price to be above 1.7617 by April 17, 2026, with no compelling evidence of mispricing.

1. Executive Verdict

  • Blackwell B200 volume production commenced in Q1 2026.
  • Leading AI companies project substantial FY2025 capital expenditures.
  • AMD MI300X shows competitive performance against NVIDIA H100.
  • H100 rental prices surged over 150% from Q2 2023 to Q1 2024.
  • The market experienced a significant upward price spike on April 12, 2026.

Who Wins and Why

Outcome | Market | Model | Why
--- | --- | --- | ---
Price to Beat: 1.7617 | 57.0% | 64.6% | Sustained high demand for AI accelerators will support elevated H100 compute prices.
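The gap between the model's 64.6% and the market's quoted Yes price implies a small positive expected value for a Yes position, before fees and spread. A minimal sketch of that arithmetic (the helper function is ours, purely illustrative, using the $0.56 Yes price quoted later in this report):

```python
def expected_value(p_yes: float, yes_price: float, payout: float = 1.0) -> float:
    """Expected profit per contract from buying Yes at yes_price,
    given a subjective probability p_yes of a $1.00 payout."""
    return p_yes * payout - yes_price

# Model probability 64.6% vs. the $0.56 Yes quote from this report:
edge = expected_value(0.646, 0.56)
print(f"Model-implied edge per contract: ${edge:.3f}")  # about $0.086 before fees
```

This is not trading advice; the edge vanishes if the model probability is miscalibrated or if fees exceed the gap.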

2. Market Behavior & Price Dynamics

Historical Price (Probability)

[Chart: outcome probability over time]
This analysis covers the prediction market for the price of NVIDIA H100 SXM compute by April 17, 2026. The market has experienced a significant and recent upward trend. After opening and holding steady at a 44.0% probability, the price saw a sharp 13 percentage point spike on April 12, 2026, reaching its current level of 57.0%. This represents the entire price range of the market to date. The initial 44.0% level acted as a clear support base before this breakout. The cause of this sudden price jump is not apparent from the provided context, suggesting the move could be in reaction to external news or a significant shift in trader positioning not detailed here.
The total trading volume of 410 contracts indicates a moderate level of market participation. The sample data shows very low volume immediately preceding the price spike, which can sometimes imply that a price move occurred on limited conviction or that a single large trade was able to shift the market significantly. The sharp move from 44.0% to 57.0% reflects a decisive shift in market sentiment. The probability has now crossed the 50% threshold, indicating that traders, on average, believe it is more likely than not that the H100 compute price will be higher by the resolution date. The new high of 57.0% now establishes a key resistance level for the market to test going forward.

3. Significant Price Movements

Notable price changes detected in the chart, along with research into what caused each movement.

📈 April 12, 2026: 13.0pp spike

Price increased from 44.0% to 57.0%

Outcome: Price to Beat: 1.7617

What happened: No supporting research available for this anomaly.

4. Market Data

View on Kalshi →

Contract Snapshot

This market resolves to Yes if the value of H100 SXM compute per hour, as reported by Ornn, is above 1.7617 on April 17, 2026; otherwise, it resolves to No. The market officially closes on April 17, 2026, at 5:00 PM EDT, with projected payouts by 5:30 PM EDT. Settlement data is sourced from Ornn's "USD" iteration, and any revisions made after the expiration date will not be considered. If no data is available by the expiration date, the market will also resolve to No.
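The resolution rule above is mechanical enough to express directly. A minimal, illustrative sketch (function name and types are ours, not Kalshi's or Ornn's) of the YES/NO logic, including the no-data fallback:

```python
from typing import Optional

STRIKE = 1.7617  # the "price to beat" from the contract terms

def resolve(reported_value: Optional[float], strike: float = STRIKE) -> str:
    """Apply the stated rule: YES only if Ornn's reported USD value is
    strictly above the strike on the resolution date; missing data -> NO."""
    if reported_value is None:
        return "NO"  # no data available by the expiration date
    return "YES" if reported_value > strike else "NO"

print(resolve(1.80))    # a value above the strike resolves YES
print(resolve(1.7617))  # exactly at the strike is not "above", so NO
print(resolve(None))    # no data by expiration also resolves NO
```

Note the strict inequality: a reported value exactly at 1.7617 resolves No.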

Available Contracts

Market options and current pricing

Outcome bucket | Yes (price) | No (price) | Last trade probability
--- | --- | --- | ---
Price to Beat: 1.7617 | $0.56 | $0.45 | 57%
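For readers reconciling the quotes with the 57% figure: the Yes and No prices each imply a probability, and their sum over $1.00 is the market's overround. A small illustrative helper (our own, applied to the prices in the table above):

```python
def implied_probability(yes_price: float, no_price: float) -> tuple[float, float]:
    """Midpoint implied probability of YES, and the overround (vig)
    baked into a Yes/No quote pair where each side pays $1.00."""
    p_from_yes = yes_price        # buying Yes at $0.56 implies 56%
    p_from_no = 1.0 - no_price    # buying No at $0.45 implies 55% for YES
    midpoint = (p_from_yes + p_from_no) / 2
    overround = yes_price + no_price - 1.0
    return midpoint, overround

mid, vig = implied_probability(0.56, 0.45)
print(f"Midpoint implied YES probability: {mid:.1%}, overround: ${vig:.2f}")
```

The midpoint (~55.5%) sits slightly below the 57% last-trade probability, a normal gap between quotes and the most recent print.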

Market Discussion

Limited public discussion available for this market.

5. How Does NVIDIA Blackwell Production Impact AI Hardware Supply?

Blackwell Volume Production Start: Q1 2026 [^]
Q1 2026 Production Volumes: Not publicly disclosed [^]
AI Compute Demand Projection: Expected to exceed supply well into 2026 [^]
NVIDIA Blackwell B200 chips commenced volume production in Q1 2026. The next-generation B200 and GB200 chips officially entered volume production that quarter, marking a significant milestone for the AI industry [^]. While precise Q1 2026 production volumes for the Blackwell platform remain proprietary and have not been publicly disclosed, analysts project a substantial ramp-up throughout the year [^]. Demand for Blackwell platforms, including variants like Blackwell Ultra, is described as "unprecedented" and significantly outpaces initial supply [^].
Early Blackwell B200 shipments prioritize foundational AI and major cloud providers. Initial customer allocations for Q1 2026 are heavily weighted towards foundational AI research and large-scale data center deployments [^]. Major cloud service providers, including AWS, Azure, Google Cloud, and Oracle Cloud Infrastructure, are at the forefront of receiving these early, limited Blackwell units [^]. Contrary to the premise that a faster-than-expected Blackwell rollout could create a supply glut of older H100 hardware, the available research indicates a continued significant shortage of H100 GPUs [^].
Blackwell's debut will not resolve existing AI compute supply constraints. Analysts do not anticipate the initial Blackwell rollout to immediately resolve current supply issues or saturate the market [^]. The "insatiable demand for AI acceleration" is expected to absorb both H100 and early Blackwell units, preventing a significant price drop for older hardware [^]. Industry projections suggest that the demand for high-end AI compute will continue to exceed supply well into 2026, which is likely to keep H100 prices relatively stable or potentially even increasing [^].

6. How Much Are Top AI Companies Investing in Infrastructure in FY2025?

Meta FY2025 Capex: $35-40 billion [^]
Alphabet FY2025 Capex: $75 billion [^]
Amazon FY2025 Capex: $100 billion [^]
Leading AI spenders project substantial FY2025 capital expenditures driven by AI demand. Meta anticipates total capital expenditures between $35 billion and $40 billion, with a significant allocation towards servers, including AI hardware, and data centers to support its AI research and product development [^]. Alphabet plans to invest approximately $75 billion in capital expenditures, focusing on infrastructure such as data centers and servers for its cloud computing and AI initiatives [^]. Amazon projects the largest capital outlay at $100 billion for 2025, dedicating a considerable sum, equivalent to nearly a year of AWS revenue, to AI investments in data centers and infrastructure for large language models and generative AI technologies [^]. Microsoft also expects an acceleration in its capital expenditures, citing robust demand for its cloud and AI services, with substantial allocations directed to data centers and AI technologies [^].
Public guidance indicates a strong shift towards next-generation AI hardware. Microsoft has specifically identified the NVIDIA GB300 NVL72, part of the next-generation Blackwell platform, as a key component of its Azure AI infrastructure strategy [^]. This aligns with industry trends, as NVIDIA's Blackwell platform is reportedly the fastest ramping compute engine in the company's history, indicating high demand and rapid adoption [^]. However, other companies like Meta, Alphabet, and Amazon provide general guidance on "AI hardware" or "AI infrastructure" expenditures, without detailing a precise breakdown between current-generation H100s or next-generation B200 or MI300X purchases [^]. The collective increase in capital expenditure across these leading companies suggests an overall expansion of their AI compute capacity. This expansion will likely encompass the continued deployment of H100s as next-generation hardware scales up, rather than an immediate and complete pivot that would lead to a collapse in the residual value of H100s.
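Taken together, the disclosed FY2025 figures above already imply well over $200 billion in capex from just three companies (Microsoft's total is cited as accelerating but not quantified here). A quick aggregation of the numbers as cited, in USD billions:

```python
# FY2025 capex guidance as cited above, in USD billions (low, high of range).
capex_guidance = {
    "Meta": (35, 40),
    "Alphabet": (75, 75),
    "Amazon": (100, 100),
}

low_total = sum(lo for lo, _ in capex_guidance.values())
high_total = sum(hi for _, hi in capex_guidance.values())
print(f"Disclosed FY2025 capex across the three: ${low_total}-{high_total}B")
```

Even the low end of this range dwarfs most prior-year AI infrastructure budgets, which is the demand backdrop supporting H100 prices.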

7. How Do AMD MI300X and NVIDIA GPUs Compare in AI Performance?

AMD MI300X vs. NVIDIA H100: Roughly in line with H100 performance [^]
NVIDIA Blackwell performance gain: Up to 2.6x higher over prior generations [^]
MLPerf Training v5.1 results: NVIDIA won every benchmark [^]
AMD MI300X performance shows competitiveness with NVIDIA H100 in benchmarks. Initial MLPerf Training submissions for AMD's Instinct MI300X, including v5.0 and v5.1, demonstrated performance that was "roughly in line with Nvidia H100 performance," showing competitive results in specific tests [^]. AMD officially expanded its AI momentum with its first MLPerf Training submission featuring the MI300X [^], and continued with detailed submissions in v5.1, showcasing its capabilities across models like LLAMA2 70B and GPT-3 175B using ROCm 6.1 [^].
NVIDIA's Blackwell architecture established a significant performance lead in benchmarks. The company's latest submissions, particularly with its Blackwell architecture, set a high bar, with the Blackwell GPU delivering up to 2.6x higher performance compared to prior generations in MLPerf Training v5.0 [^]. Subsequent MLPerf Training v5.1 results further solidified NVIDIA's position, with the company announcing it "Wins Every MLPerf Training v5.1 Benchmark," as confirmed by MLCommons [^]. This indicates that while AMD's MI300X is a significant competitor to the H100, especially regarding raw hardware capabilities [^], NVIDIA's newer chips and robust software ecosystem continue to maintain a performance lead in large-scale training benchmarks [^].
A definitive price/performance win for AMD is not yet established. Regarding total cost of ownership (TCO) and price/performance, a direct "clear price/performance win for AMD" based solely on current MLPerf training results against NVIDIA's top offerings (like Blackwell) is not definitively established. While the MI300X offers a compelling alternative to the H100, particularly considering its competitive raw performance [^], the ongoing presence of the "CUDA Moat" [^]—referring to NVIDIA's mature software ecosystem—suggests that NVIDIA's solutions often achieve better overall efficiency and ease of deployment. These factors can influence TCO beyond raw hardware speed, and industry analyses, such as buyer's guides comparing H100 and MI300X, factor in these broader elements alongside price to provide a complete picture [^].

8. How will NVIDIA Blackwell GPUs impact H100 prices and depreciation?

H100 Rental Price Surge (6 months): Nearly 40% [^]
Blackwell Training Performance Boost: 4x-5x higher than H100 [^]
Blackwell TCO Advantage: 3x-4x better for training workloads vs. H100 [^]
H100 compute rental prices surged nearly 40% in six months and more than 150% from Q2 2023 to Q1 2024 [^]. However, the emergence of NVIDIA Blackwell B200 and GB200 NVL72 systems marks a substantial leap in performance and energy efficiency: these systems deliver 4x-5x higher training performance, with the GB200 NVL72 consuming 25% less power than comparable H100 systems [^].
Blackwell's energy efficiency promises significant Total Cost of Ownership improvements over H100. Blackwell systems are approximately 5x-6x more energy-efficient per training FLOP than H100 systems, which is crucial for power-sensitive AI deployments [^]. Furthermore, Blackwell offers a compelling 3x-4x better Total Cost of Ownership (TCO) for training workloads [^]. Given these advantages, Blackwell is expected to accelerate the depreciation and price decline of H100 compute, potentially faster than anticipated, before April 2026 [^].
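The cited ~5x-6x efficiency-per-FLOP figure is roughly what the performance and power numbers above imply. A sanity-check sketch (our own arithmetic, assuming the 4x-5x training gain and the 25% power reduction apply to the same workload):

```python
def energy_efficiency_ratio(perf_gain: float, power_ratio: float) -> float:
    """Relative training FLOPs per joule vs. H100:
    (performance gain) / (fraction of H100 power consumed)."""
    return perf_gain / power_ratio

low = energy_efficiency_ratio(4.0, 0.75)   # 4x perf at 75% power
high = energy_efficiency_ratio(5.0, 0.75)  # 5x perf at 75% power
print(f"Implied efficiency gain: {low:.1f}x to {high:.1f}x per training FLOP")
```

The implied ~5.3x-6.7x range brackets the 5x-6x figure reported in the research, so the claims are at least internally consistent.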

9. What are the Q1 2026 NVIDIA H100 pricing tiers?

AWS H100 1-Year Reserved Instance: ~$3.125-$3.75 per H100 per hour [^]
Google Cloud H100 3-Year CUC: ~$1.88-$2.25 per H100 per hour [^]
Azure H100 3-Year Reserved Instance: ~$2.49 per H100 per hour [^]
The projected Q1 2026 pricing for H100 reserved instances sets the market baseline. The anticipated Q1 2026 pricing tiers for NVIDIA H100 compute on 1-year and 3-year reserved instances or committed use contracts (CUCs) from AWS, Google Cloud, and Azure are expected to establish the fundamental value for H100. These long-term commitments consistently provide substantial discounts compared to on-demand rates, with prices typically decreasing further for longer contract durations [^].
AWS and Google Cloud offer competitive H100 pricing for long-term commitments. For AWS, 1-year reserved instances for H100 are projected to be in the range of $3.00 to $4.50 per hour per H100 [^], with specific p5.48xlarge instance pricing translating to approximately $3.125-$3.75 per H100 per hour [^]. Google Cloud's A3 series instances offer 1-year CUCs for a single H100 at about $3.34 per hour [^]. Three-year CUCs can reduce costs to around $2.51 per hour [^], with larger instances potentially achieving rates of $1.88-$2.25 per H100 per hour for 3-year terms [^].
Azure's H100 reserved instances provide substantial long-term savings. Microsoft Azure's ND H100 v5 series instances deliver H100 compute, with a 1-year reserved instance for an 8-GPU instance priced at approximately $26.80 per hour, which equates to $3.35 per H100 per hour [^]. A 3-year reserved instance on Azure significantly lowers this cost to $19.90 per hour for the same 8-GPU instance, or roughly $2.49 per H100 per hour [^]. Overall, Azure's 1-year and 3-year reservations are broadly estimated between $3.20 and $4.80 per hour per H100 [^].
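The per-GPU figures above are simple divisions of the 8-GPU instance prices. To make that arithmetic explicit (instance prices as cited above; the helper function is ours):

```python
def per_gpu_hourly(instance_hourly: float, gpus_per_instance: int = 8) -> float:
    """Hourly cost per H100, given an instance price and its GPU count."""
    return instance_hourly / gpus_per_instance

# Azure ND H100 v5 (8x H100), reserved pricing as cited above:
one_year = per_gpu_hourly(26.80)    # 1-year reservation
three_year = per_gpu_hourly(19.90)  # 3-year reservation
print(f"Azure 1-yr: ${one_year:.2f}/GPU-hr, 3-yr: ${three_year:.2f}/GPU-hr")
```

Note that all of these reserved rates sit above the 1.7617 strike, which is part of the bullish case for this market.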

10. What Could Change the Odds

Key Catalysts

Catalyst analysis unavailable.

Key Dates & Catalysts

  • Strike Date: April 17, 2026
  • Expiration: April 24, 2026
  • Closes: April 17, 2026

11. Decision-Flipping Events

  • Trigger: Catalyst analysis unavailable.

12. Historical Resolutions

Historical Resolutions: 3 markets in this series

Outcomes: 2 resolved YES, 1 resolved NO

Recent resolutions:

  • KXH100W-26APR10-1.7717: NO (Apr 10, 2026)
  • KXH100W-26APR03-1.761: YES (Apr 03, 2026)
  • KXH100W-26MAR27-1.6992: YES (Mar 27, 2026)
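The three prior resolutions give a crude empirical base rate to compare against the market's 57% and the model's 64.6%. A hedged sketch (tickers and outcomes are from the list above; three observations carry little statistical weight):

```python
# Prior resolutions in this market series, as listed above.
resolutions = {
    "KXH100W-26APR10-1.7717": "NO",
    "KXH100W-26APR03-1.761": "YES",
    "KXH100W-26MAR27-1.6992": "YES",
}

base_rate = sum(r == "YES" for r in resolutions.values()) / len(resolutions)
print(f"Series base rate over {len(resolutions)} markets: {base_rate:.1%}")
```

The ~67% historical YES rate is directionally consistent with the model's 64.6%, though the strikes differ week to week, so the comparison is loose.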