Short Answer

The model sees a potential mispricing in the Above $1.78 contract: the model assigns 98.7% versus the market's 85.0%, reflecting the view that persistent demand and supply constraints for AI chips will likely keep H100 compute prices stable through mid-2026.

1. Executive Verdict

  • Insatiable demand for AI chips is likely to persist through mid-2026.
  • Ongoing supply chain constraints affect H100 and Blackwell GPUs.
  • NVIDIA H100 is projected to retain significant market dominance.
  • Major cloud providers anticipate substantial AI infrastructure investments.
  • TSMC's CoWoS packaging capacity is reportedly booked into H1 2026.
  • Blackwell's launch and H100 inference costs may soften prices.

Who Wins and Why

Outcome Market Model Why
Above $2.17 47.0% 39.3% Insatiable AI chip demand and ongoing supply constraints are expected to maintain H100 compute prices.
Above $1.78 85.0% 98.7% Insatiable AI chip demand and ongoing supply constraints are expected to maintain H100 compute prices.
Above $2.10 98.0% 98.7% Insatiable AI chip demand and ongoing supply constraints are expected to maintain H100 compute prices.
Above $1.90 85.0% 98.7% Insatiable AI chip demand and ongoing supply constraints are expected to maintain H100 compute prices.
Above $2.15 98.0% 98.7% Insatiable AI chip demand and ongoing supply constraints are expected to maintain H100 compute prices.
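The model-versus-market gaps in the table above can be turned into a rough per-contract edge. A minimal sketch, assuming the market probability approximates the Yes price and ignoring fees and slippage:

```python
# Sketch: per-contract expected edge when buying "Yes" at the market-implied
# price, assuming the model probability is correct and fills are frictionless.
contracts = {
    "Above $2.17": (0.470, 0.393),  # (market prob, model prob)
    "Above $1.78": (0.850, 0.987),
    "Above $2.10": (0.980, 0.987),
    "Above $1.90": (0.850, 0.987),
    "Above $2.15": (0.980, 0.987),
}

for name, (market, model) in contracts.items():
    # Buying Yes at `market` pays $1 with probability `model`.
    edge = model - market
    side = "buy Yes" if edge > 0 else "buy No"
    print(f"{name}: edge {edge:+.3f} per $1 contract ({side})")
```

Under these assumptions the Above $1.78 contract shows the widest positive gap (+13.7 cents per $1), while Above $2.17 is the only bucket where the model is below the market.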

Current Context

Prediction market forecasts NVIDIA H100 SXM compute at around $1.80/hour. A snapshot from May 4, 2026, of the prediction market for the price of NVIDIA H100 SXM compute by May 31, 2026, shows strike levels clustered around $1.78–$1.82 per hour [^]. The market contract specifies that "H100 SXM compute per hour" is defined as reported by Ornn (dashboard.ornnai.com) [^]. Settlement for this event will be based on the value at the expiration window, rounded to two decimal places, using Ornn’s USD index as the official settlement basis [^].
Market's compute price aligns with rental costs, not hardware purchase prices. Independent pricing guides, which do not use Ornn for settlement, estimate the NVIDIA H100 SXM GPU's market price to be approximately $32,000, contrasting with an MSRP of $30,000 [^]. These same guides suggest that cloud rental rates for H100 SXM compute typically start at around $2.10 per hour [^]. This comparison indicates that the prediction market’s per-hour compute index, at roughly $1.80 per hour, reflects the cost of renting or utilizing compute services rather than the outright purchase price of the hardware [^].
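The rent-versus-buy gap in the figures above can be made concrete with a back-of-envelope break-even. A minimal sketch using the cited numbers (~$32,000 purchase price, ~$2.10/hr rental), ignoring power, hosting, and depreciation:

```python
# Rough break-even: hours of cloud rental at $2.10/hr that equal the
# ~$32,000 outright purchase price cited for an H100 SXM above.
# Ignores power, hosting, financing, and depreciation costs.
purchase_price = 32_000.0   # USD, approximate market price from pricing guides
rental_rate = 2.10          # USD per GPU-hour, cited cloud rental floor

break_even_hours = purchase_price / rental_rate
break_even_years = break_even_hours / (24 * 365)
print(f"Break-even: {break_even_hours:,.0f} hours "
      f"(~{break_even_years:.1f} years of 24/7 use)")
```

At these assumed figures, roughly 15,000 rental hours (about 1.7 years of continuous use) equal the purchase price, which is why the per-hour index and the hardware price are distinct quantities.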

2. Market Behavior & Price Dynamics

Historical Price (Probability)

[Chart: outcome probability over time]
This prediction market has displayed a significant upward trend, moving from an initial 0.0% to a current probability of 85.0%. The price action has been marked by extreme volatility. A notable event occurred on May 05, 2026, when the probability experienced a sharp 50.0 percentage point drop from 50.0% to 0.0%. This dramatic decline appears to be linked to market awareness of the H100's higher inference costs relative to newer Blackwell architectures. Following this drop, the market quickly rebounded, reaching a high of 95.0% before settling near its current level, suggesting a rapid reassessment of the initial negative sentiment.
The total trading volume of 283 contracts across the market's history indicates moderate activity. Key price points have emerged from the recent volatility. The 0.0% level has acted as a temporary floor, while the recent peak of 95.0% may now serve as a resistance level. The current price of 85.0% suggests the market is finding a new support zone. Overall, despite the brief but severe downturn, market sentiment is strongly bullish. The high probability indicates a strong belief among participants that the price of NVIDIA H100 SXM compute will exceed the $1.78 per hour threshold by the resolution date.

3. Significant Price Movements

Notable price changes detected in the chart, along with research into what caused each movement.

Outcome: Above $1.82

📉 May 06, 2026: 34.0pp drop

Price decreased from 77.0% to 43.0%

What happened: The provided research contains no specific social media activity, news announcement, or market structure event corresponding to the 34.0 percentage point drop on May 06, 2026. General expectations for NVIDIA H100 SXM compute prices in 2026 include competitive hourly rates potentially near or below $2.00 for certain offerings or long-term reservations [^][^][^], and the introduction of newer NVIDIA GPUs like Blackwell in Q1 2026 could eventually lead to H100 price stabilization or discounts [^][^][^], but no specific event is tied to the movement's date. No related social media narrative could be identified in the provided sources.

Outcome: Above $1.78

📉 May 05, 2026: 50.0pp drop

Price decreased from 50.0% to 0.0%

What happened: The primary driver for the 50.0 percentage point drop in the prediction market for NVIDIA H100 SXM compute by May 31, 2026, appears to be market awareness of the H100's higher cost for inference operations compared to the newer Blackwell architectures [^]. The introduction of the Blackwell (B200/B300) generation, which began shipping in 2025-2026 with lead times extending into mid-2026, is expected to gradually depress H100 prices due to its improved performance and energy efficiency [^][^][^][^]. This reflects a shift in market value as more advanced and efficient alternatives become available, particularly impacting long-term rental rate predictions for H100s. Based on the provided research, social media was not identified as a primary driver for this specific price movement.

4. Market Data

View on Kalshi →

Contract Snapshot

The market resolves to "Yes" if the NVIDIA H100 SXM compute per hour is above $2.17 on May 31, 2026; otherwise, it resolves to "No." The outcome is verified by Ornn's reported "USD" value, rounded to two decimal places, and revisions made after the market closes on May 31, 2026, at 11:59pm EDT will not be accounted for. If no data is available by the expiration date, the market resolves to "No," with payout projected for June 1, 2026.
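The resolution rule described above (round Ornn's reported USD value to two decimal places, compare against the strike, resolve "No" when no data is available) can be sketched as a small function. The rounding step matters: values just above the strike can round down to it and fail the "above" test.

```python
# Sketch of the contract's resolution logic as described: round the reported
# index value to two decimal places, then require it to be strictly above
# the strike. Missing data resolves "No" per the contract terms.
def resolve(reported_value, strike):
    if reported_value is None:        # no data by the expiration date
        return "No"
    rounded = round(reported_value, 2)
    return "Yes" if rounded > strike else "No"

print(resolve(2.176, 2.17))  # rounds to 2.18, above the strike -> "Yes"
print(resolve(2.174, 2.17))  # rounds to 2.17, not above the strike -> "No"
print(resolve(None, 2.17))   # no data -> "No"
```

A reported value of $2.174 would thus resolve the $2.17 strike "No" even though the raw value exceeds it, because rounding happens before the comparison.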

Available Contracts

Market options and current pricing

Outcome bucket Yes (price) No (price) Last trade probability
Above $1.79 $0.99 $0.12 99%
Above $1.80 $0.99 $0.12 99%
Above $1.81 $0.99 $0.15 99%
Above $1.82 $0.99 $0.15 99%
Above $1.84 $0.99 $0.15 99%
Above $1.85 $0.99 $0.15 99%
Above $1.86 $0.99 $0.15 99%
Above $1.87 $0.99 $0.15 99%
Above $1.88 $0.99 $0.15 99%
Above $1.89 $0.98 $0.15 99%
Above $1.91 $0.99 $0.15 99%
Above $1.92 $0.99 $0.15 99%
Above $1.93 $0.99 $0.15 99%
Above $1.94 $0.99 $0.15 99%
Above $1.95 $0.99 $0.15 99%
Above $1.96 $0.99 $0.15 99%
Above $1.97 $0.99 $0.15 99%
Above $1.98 $0.99 $0.15 99%
Above $1.99 $0.99 $0.15 99%
Above $2.01 $0.99 $0.15 99%
Above $2.03 $0.99 $0.15 99%
Above $2.04 $0.99 $0.15 99%
Above $2.05 $0.99 $0.15 99%
Above $2.06 $0.99 $0.15 99%
Above $2.07 $0.99 $0.15 99%
Above $2.08 $0.99 $0.14 99%
Above $2.14 $0.99 $0.15 99%
Above $2.16 $0.99 $0.15 99%
Above $2.02 $0.99 $0.15 98%
Above $2.10 $0.98 $0.15 98%
Above $2.15 $0.99 $0.14 98%
Above $2.09 $0.98 $0.15 86%
Above $1.78 $1.00 $0.20 85%
Above $1.90 $0.98 $0.15 85%
Above $2.11 $0.99 $1.00 85%
Above $2.12 $0.98 $0.40 85%
Above $2.13 $0.98 $1.00 85%
Above $2.00 $0.98 $0.15 75%
Above $1.83 $0.99 $0.15 50%
Above $2.17 $0.96 $0.52 47%
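One way to sanity-check the ladder above: the last-trade probability of "Above $X" should be non-increasing as the strike X rises, since a higher threshold can never be more likely. A small sketch over a subset of the quoted strikes flags pairs that violate this:

```python
# Sanity check on the strike ladder: for a coherent book, the probability of
# "Above $X" should be non-increasing as X rises. Flag strike pairs where a
# higher strike shows a higher last-trade probability (subset of the table
# above; probabilities as decimals).
ladder = {
    1.78: 0.85, 1.79: 0.99, 1.83: 0.50, 1.84: 0.99,
    2.00: 0.75, 2.01: 0.99, 2.16: 0.99, 2.17: 0.47,
}

strikes = sorted(ladder)
for lo, hi in zip(strikes, strikes[1:]):
    if ladder[hi] > ladder[lo]:
        print(f"Inconsistent: Above ${hi:.2f} at {ladder[hi]:.0%} "
              f"> Above ${lo:.2f} at {ladder[lo]:.0%}")
```

Several adjacent pairs in the table fail this check (e.g., Above $1.83 last traded at 50% while Above $1.84 sits at 99%), which usually reflects stale last trades in thin buckets rather than a genuine arbitrage.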

Market Discussion

The prediction market for NVIDIA H100 SXM compute by May 31, 2026, settles based on an H100 SXM compute-per-hour USD index (OCPI), with strikes visible around $1.78–$1.82 per hour [^]. One firm's public commentary suggests upward pricing pressure, as its H100 1-year GPU rental contract price index rose from about $1.70/hr/GPU in October 2025 to about $2.35/hr/GPU by March 2026 [^]. A third-party pricing guide also cites H100 SXM cloud rental starting at approximately $2.10/hr [^].

5. How will the launch and market adoption of NVIDIA's Blackwell GPUs influence the rental price of H100 compute through mid-2026?

  • Blackwell Architecture Launch: March 18, 2024 (GTC 2024) [^][^][^]
  • H100 Rental Price Range: $1.38 to $10.00 per GPU-hour [^][^]
  • Projected AI Infrastructure Spending 2026: Nearly $700 billion [^]
NVIDIA officially launched the Blackwell architecture on March 18, 2024, at GTC 2024, with products like the B200 and GB200 becoming available from partners later in 2024 and through 2025 [^] [^] [^] . Blackwell GPUs offer significant performance advantages over the H100, being up to 57% faster for model training and potentially up to 10 times cheaper to run when self-hosted [^]. The increased supply of H100s throughout 2025, coupled with the arrival of Blackwell, led to a stabilization or slight softening of H100 prices from their late 2024 peaks [^].
AI chip demand continues to outpace supply into 2026. Despite the introduction of the more advanced Blackwell generation, the demand for AI chips continues to outpace supply in 2026, making chip production a binding constraint on the overall AI compute buildout [^]. Major technology companies are projected to spend nearly $700 billion on capital expenditures in 2026, with a significant portion dedicated to AI infrastructure [^]. This robust demand, combined with persistent supply chain constraints for both H100 and Blackwell, indicates that H100s are expected to retain significant value and utility.
H100 rental prices are expected to remain substantial through mid-2026. Current rental prices for NVIDIA H100 SXM compute typically range between $2.00 and $10.00 per GPU-hour, with more competitive cloud providers offering H100 80GB GPUs as low as $1.38 to $2.40 per GPU-hour [^][^]. Given the substantial and ongoing demand for AI compute, along with persistent supply chain constraints affecting both H100 and Blackwell GPUs, H100s are anticipated to maintain their significant value and utility through mid-2026 [^][^][^][^][^].

6. What is the methodology of the Ornn AI compute index, and what does its historical data reveal about H100 SXM price stability?

  • OCPI Basis: Real trades and live traded spot prices for H100 and H100 SXM GPUs [^][^]
  • Settlement Logic: Averaged values over the contract duration for futures and swaps [^][^]
  • H100 SXM Historical Data: Complete historical price table not available for numerical stability analysis [^][^][^]
The Ornn AI compute index tracks real-time GPU pricing. The Ornn AI compute index (OCPI) provides a reference for pricing and settling compute derivatives, built upon real trades and live spot prices for essential GPU models, including H100 and H100 SXM [^][^]. The settlement logic for futures and swaps employs averaged values over the contract’s duration, reflecting the instant consumption characteristic of GPU compute, which is likened to electricity [^][^].
H100 SXM historical price data is not fully available. The accessible research does not contain a complete OCPI-H100 SXM historical price table [^][^][^]. This data limitation prevents the numerical calculation of variance, volatility, or a definitive stability score for the period leading up to May 4, 2026 [^][^][^]. Therefore, any conclusions regarding price stability from this specific dataset must be qualitative rather than quantitative [^][^][^]. A Kalshi market resolution, which depends on whether the H100 SXM compute per hour value exceeds a particular strike, is noted as being verified by Ornn [^].
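The averaged-settlement logic described above can be sketched simply: settle at the mean of the index's prints over the contract window, mirroring how power contracts average spot electricity prices. The daily values below are illustrative, not real OCPI data.

```python
# Sketch of averaged settlement as described for OCPI futures/swaps: settle
# at the mean of the index's daily prints over the contract window, rounded
# to two decimals. The prints below are illustrative, not real OCPI data.
daily_prints = [1.79, 1.81, 1.80, 1.82, 1.78, 1.80, 1.83]  # USD/hr, hypothetical

settlement = round(sum(daily_prints) / len(daily_prints), 2)
print(f"Averaged settlement: ${settlement:.2f}/hr")
```

Averaging over the window dampens the effect of any single-day spike or dip, which is the stated rationale for treating GPU compute like instantly consumed electricity.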

7. How do AMD's MI300X and Intel's Gaudi 3 compare to the NVIDIA H100 on key price-performance benchmarks and cloud market share heading into 2026?

  • NVIDIA H100 Market Share (2026): Approximately 80% by revenue [^]
  • AMD MI300X Memory: 192GB of HBM3 memory [^]
  • Intel Gaudi 3 Price vs H100: About half the cost of an H100 [^]
NVIDIA's H100 maintains significant market dominance and high performance. Heading into 2026, the H100 is projected to command approximately 80% of the AI accelerator market by revenue, with NVIDIA's data center revenue reaching $193.7 billion in FY2026 [^]. The H100 delivers 1,979 TFLOPS in FP16 and nearly 4,000 TFLOPS in FP8, featuring 80GB of memory and a memory bandwidth of 3.35 TB/s [^]. A single H100 card typically costs between $25,000 and $40,000 for direct purchase [^][^][^][^][^][^][^][^][^].
AMD's MI300X offers a compelling value, especially for inference. It stands out with its 192GB of HBM3 memory and a superior memory bandwidth of 5.3 TB/s, making it particularly effective for large language model inference tasks [^][^][^][^]. This capacity enables single-GPU inference for models exceeding 100 billion parameters and can lead to 10-40% better inference performance and 40% lower latency in memory-bound applications [^][^][^][^]. While its theoretical FP16 performance is 1,310 TFLOPS, its real-world performance is approximately 45% of this peak due to software maturity, contrasting with NVIDIA's approximately 93% efficiency [^][^]. The MI300X is priced more affordably, estimated at $10,000 to $15,000 [^][^][^], and AMD's Instinct GPU line is projected to capture 5-7% market share with revenues of $7-8 billion in FY2026 [^].
Intel's Gaudi 3 offers a cost-effective option with limited acceptance. This accelerator provides 1,800 TFLOPS in BF16/FP8 and a memory bandwidth of 3.67 TB/s [^]. It is positioned at about half the cost of an H100, although optimizing its performance may necessitate additional engineering resources [^]. Gaudi 3 accelerators became available for production workloads on IBM Cloud in Q1 2025 [^]. Despite its competitive pricing, the Gaudi 3 has experienced limited overall market acceptance [^][^]. Furthermore, custom silicon developed by hyperscalers, such as Google's TPUs and AWS's Trainium, poses a growing competitive threat that is larger and expanding faster than AMD's challenge to NVIDIA [^].
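The cited throughput, efficiency, and price figures can be combined into a rough FP16-throughput-per-dollar comparison. A back-of-envelope sketch; the price midpoints are assumptions taken from the ranges above, and realized efficiency uses the quoted ~93% (H100) and ~45% (MI300X) figures:

```python
# Back-of-envelope effective FP16 TFLOPS per dollar, using the figures cited
# above. Price midpoints and efficiency scaling are rough assumptions.
accelerators = {
    # name: (peak FP16 TFLOPS, realized efficiency, assumed price midpoint USD)
    "H100":   (1979, 0.93, 32_500),   # midpoint of the $25k-$40k range
    "MI300X": (1310, 0.45, 12_500),   # midpoint of the $10k-$15k estimate
}

for name, (peak, eff, price) in accelerators.items():
    realized = peak * eff
    print(f"{name}: ~{realized:,.0f} effective TFLOPS, "
          f"{realized / price * 1000:.1f} TFLOPS per $1k")
```

On this crude math the H100's software-maturity advantage roughly offsets the MI300X's price advantage for compute-bound work, though the comparison ignores the MI300X's memory-capacity edge for inference.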

8. What are the H100 procurement plans and capital expenditure forecasts of major cloud providers like AWS, Azure, and GCP for the 2025-2026 timeframe?

  • Projected 2026 US Cloud/AI Capex: $660 billion to $690 billion [^][^]
  • Amazon 2026 Capex: ~$200 billion [^][^][^]
  • H100 Rental Price (March 2026): ~$2.35 per hour (for one-year contracts) [^][^]
Major cloud providers anticipate substantial capital expenditure increases for AI infrastructure. Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP) are planning significant capital expenditure (capex) boosts for 2025 and 2026, primarily to fund AI infrastructure, including NVIDIA H100 GPUs and next-generation accelerators [^][^][^][^][^]. This surge is driven by an "insatiable AI demand" that continues to outpace available supply [^][^]. The five largest U.S. cloud and AI infrastructure providers are projected to collectively spend between $660 billion and $690 billion on capex in 2026, nearly doubling 2025 levels, with AI capex potentially approaching $1 trillion per year when including other major players [^][^]. Specific plans include Amazon investing approximately $200 billion in 2026 [^][^][^], Alphabet (Google) forecasting $175 billion to $185 billion in 2026 [^][^][^][^][^], and Microsoft raising its 2026 capex guidance to approximately $190 billion, with two-thirds allocated to GPUs and CPUs for Azure and Copilot services [^][^].
Cloud providers actively secure GPUs amid high demand and elevated prices. Major cloud players are actively building out GPU clusters and developing custom AI chips to meet the booming demand for AI workloads [^]. NVIDIA's Blackwell GPU production is reportedly committed to hyperscalers through mid-2026, and existing H100 inventory is being acquired faster than it can be replenished, indicating high procurement [^]. As of early to mid-2026, the price of NVIDIA H100 SXM compute remains elevated due to sustained high demand, persistent supply chain bottlenecks, and rising component costs [^][^][^][^][^]. H100 rental prices have surged by nearly 40% since October 2025, reaching approximately $2.35 per hour by March 2026 for one-year contracts [^][^]. Despite Blackwell GPUs beginning volume production in Q1 2026 with initial supply heavily committed to hyperscalers, the scarcity and strong demand for both new and existing generations are keeping H100 prices high [^].
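The "nearly 40%" surge cited above is easy to verify from the two price points given ($1.70/hr in October 2025, $2.35/hr in March 2026):

```python
# Check of the cited rental-price move for one-year H100 contracts:
# ~$1.70/hr (Oct 2025) to ~$2.35/hr (Mar 2026), per the commentary above.
start, end = 1.70, 2.35   # USD per GPU-hour

pct_change = (end - start) / start * 100
print(f"Change: +{pct_change:.1f}% over roughly five months")
```

The computed +38.2% matches the "nearly 40%" characterization in the sources.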

9. What are the latest industry projections for TSMC's CoWoS packaging capacity through H1 2026, and how could this impact NVIDIA's H100 supply?

  • CoWoS Capacity Booking: Fully booked through 2025 and well into 2026 [^][^][^][^]
  • Target CoWoS Capacity (End 2026): 120,000 to 130,000 wafers per month [^][^][^]
  • NVIDIA's CoWoS Consumption Share: Approximately 60% of TSMC's capacity through 2027 [^][^][^][^]
TSMC plans significant CoWoS capacity expansion through H1 2026. TSMC's CoWoS packaging capacity is reported to be fully booked through 2025 and well into 2026, serving as a critical bottleneck for high-performance AI accelerators [^][^][^][^]. As of late 2025, the monthly CoWoS capacity was approximately 75,000 to 80,000 wafers. TSMC aims for a substantial increase to 120,000 to 130,000 wafers per month by the end of 2026, primarily through optimizing existing facilities [^][^][^]. To further alleviate capacity constraints, TSMC also plans to outsource an estimated 240,000–270,000 wafers annually in 2026 to partners such as Amkor and SPIL [^]. Despite these aggressive expansion efforts, the increased CoWoS capacity is considered insufficient to meet the surging global demand for AI chips [^].
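The capacity figures above imply a substantial but bounded ramp. A quick arithmetic sketch using range midpoints (midpointing is an assumption; the sources give ranges only):

```python
# Rough CoWoS capacity arithmetic from the figures above, using midpoints
# of the quoted ranges (a simplifying assumption).
late_2025_monthly = (75_000 + 80_000) / 2      # wafers/month, late 2025
end_2026_monthly = (120_000 + 130_000) / 2     # wafers/month, end-2026 target
outsourced_annual = (240_000 + 270_000) / 2    # wafers/year to Amkor/SPIL

growth = (end_2026_monthly - late_2025_monthly) / late_2025_monthly * 100
print(f"In-house monthly capacity growth: ~{growth:.0f}%")
print(f"Outsourcing adds ~{outsourced_annual / 12:,.0f} wafers/month on average")
```

Even a ~60% in-house ramp plus roughly 21,000 outsourced wafers per month is, per the sources, still insufficient against surging AI chip demand.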
NVIDIA dominates CoWoS allocation, facing H100 supply constraints. NVIDIA, a dominant consumer, is estimated to secure approximately 60% of TSMC's CoWoS capacity through 2027 [^][^][^][^]. The company has confirmed that ongoing limitations in component supply, such as HBM memory, combined with an oversubscribed CoWoS assembly capacity, pose short-term challenges for H100 production through at least mid-2026 [^].
CoWoS scarcity impacts market entry and H100 GPU pricing. The scarcity of CoWoS capacity has also created a significant market entry barrier for many aspiring participants in the AI sector [^][^]. While H100 rental rates fluctuated, dipping in late 2025 before recovering by early 2026, the direct purchase cost for a single H100 GPU ranges from around $25,000 to over $40,000 for SXM models [^][^][^][^]. However, market forecasts anticipate price stabilization in 2026, with potential discounts as newer GPU releases, such as the H200 and B100, become more widely available [^][^][^].

10. What Could Change the Odds

Key Catalysts

Prediction markets provide insight into expectations for H100 SXM compute-per-hour by May 31, 2026. Robinhood’s contract, which resolves based on Ornn’s reported value for H100 SXM compute per hour using the “USD” iteration of the index, lists “Above” strikes clustered around roughly $1.78–$1.82 per hour [^]. The resolution mechanism specifies that revisions after expiration will not be counted [^][^]. This measure differs from the H100 SXM outright purchase price of ~$32k (MSRP $30k) and cloud rental starting around $2.10/hr [^]. Prediction market activity in the sector, such as Kalshi users wagering on GPU compute prices based on Ornn’s platform where H100s were noted at ~$1.70, indicates these markets track an Ornn-derived compute-per-hour index [^].
Energy constraints and demand growth represent significant catalysts that could influence future compute pricing expectations. Morgan Stanley’s 2026 AI energy outlook highlights that training AI creates higher and more variable pressure on the grid, emphasizing growing energy constraints and the development of on-site/off-grid solutions as a value-creation theme that can influence compute pricing expectations [^]. Complementing this, CMC Markets’ 2026 Energy Outlook identifies electricity constraints, including data-center electricity demand growth and generation infrastructure limitations, as a key bottleneck to AI expansion [^].

Key Dates & Catalysts

  • Strike Date: June 01, 2026
  • Expiration: June 08, 2026
  • Closes: June 01, 2026

11. Decision-Flipping Events

  • Trigger: Prediction markets provide insight into expectations for H100 SXM compute-per-hour by May 31, 2026.
  • Trigger: Robinhood’s contract, which resolves based on Ornn’s reported value for H100 SXM compute per hour using the “USD” iteration of the index, lists “Above” strikes clustered around roughly $1.78–$1.82 per hour [^].
  • Trigger: The resolution mechanism specifies that revisions after expiration will not be counted [^][^].
  • Trigger: This measure differs from the H100 SXM outright purchase price of ~$32k (MSRP $30k) and cloud rental starting around $2.10/hr [^].

12. Historical Resolutions

Historical Resolutions: 20 markets in this series

Outcomes: 20 resolved YES, 0 resolved NO

Recent resolutions:

  • KXH100MON-26APR30-1.950: YES (May 01, 2026)
  • KXH100MON-26APR30-1.940: YES (May 01, 2026)
  • KXH100MON-26APR30-1.930: YES (May 01, 2026)
  • KXH100MON-26APR30-1.920: YES (May 01, 2026)
  • KXH100MON-26APR30-1.910: YES (May 01, 2026)