# Price of NVIDIA H100 SXM compute by May 31, 2026?

05/04/26

Updated: May 9, 2026

Category: Science and Technology

Tags: Energy, AI

HTML: /markets/science-and-technology/energy/price-of-nvidia-h100-sxm-compute-by-may-31-2026/

## Short Answer

**Key takeaway.** The model sees potential mispricing in the Above **$1.78** contract: **98.7%** model probability vs **85.0%** market, suggesting persistent demand and supply constraints for AI chips will likely keep H100 compute prices stable through mid-2026.

## Key Claims (January 2026)

- **Insatiable demand for AI chips is likely to persist through mid-2026.**
- Ongoing supply chain constraints affect H100 and Blackwell GPUs.
- NVIDIA H100 is projected to retain significant market dominance.
- Major cloud providers anticipate substantial AI infrastructure investments.
- TSMC's CoWoS packaging capacity is reportedly booked into H1 2026.
- Blackwell's launch and H100 inference costs may soften prices.

### Why This Matters (GEO)

- AI agents extract claims, not arguments.
- Improves citation probability in summaries and answer cards.
- Enables fact stitching across multiple sources.

## Executive Verdict

**Key takeaway.** The model's **98.7%** probability for H100 price stability offers roughly a 1.2x payout against the 85c market price, citing insatiable demand and supply constraints.
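
The payout arithmetic behind that takeaway can be sketched as follows; the 98.7% and 85c figures come from the tables in this report, while the function name is ours:

```python
# Expected value of a $1 binary "Yes" contract bought at the market price,
# evaluated under the model's probability of a "Yes" resolution.
def expected_value(model_prob: float, market_price: float) -> float:
    """Expected payout per dollar staked: model_prob chance of a $1 payout."""
    return model_prob / market_price

# 98.7% model probability against an 85c market price
ev = expected_value(0.987, 0.85)
print(f"{ev:.2f}x expected return per dollar staked")  # roughly 1.16x
```

The gross payout if the contract resolves "Yes" is 1/0.85, about 1.18x; weighting by the model's 98.7% gives the expected return.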

### Who Wins and Why

| Outcome | Market | Model | Why |
| --- | --- | --- | --- |
| Above $2.17 | 47.0% | 39.3% | Insatiable AI chip demand and ongoing supply constraints are expected to maintain H100 compute prices. |
| Above $1.78 | 85.0% | 98.7% | Insatiable AI chip demand and ongoing supply constraints are expected to maintain H100 compute prices. |
| Above $2.10 | 98.0% | 98.7% | Insatiable AI chip demand and ongoing supply constraints are expected to maintain H100 compute prices. |

## Model vs Market

| Outcome | Market Probability | Octagon Model Probability |
| --- | --- | --- |
| Above $2.17 | 47.0% | 39.3% |
| Above $1.78 | 85.0% | 98.7% |
| Above $2.10 | 98.0% | 98.7% |
| Above $1.90 | 85.0% | 98.7% |
| Above $2.15 | 98.0% | 98.7% |
| Above $2.12 | 85.0% | 98.7% |
| Above $2.00 | 75.0% | 98.7% |
| Above $2.02 | 98.0% | 98.7% |
| Above $2.13 | 85.0% | 98.7% |
| Above $2.09 | 86.0% | 98.7% |
| Above $1.79 | 99.0% | 98.7% |
| Above $1.80 | 99.0% | 98.7% |
| Above $1.81 | 99.0% | 98.7% |
| Above $1.82 | 99.0% | 98.7% |
| Above $1.84 | 99.0% | 98.7% |
| Above $1.85 | 99.0% | 98.7% |
| Above $1.86 | 99.0% | 98.7% |
| Above $1.87 | 99.0% | 98.7% |
| Above $1.88 | 99.0% | 98.7% |
| Above $1.89 | 99.0% | 98.7% |
| Above $1.91 | 99.0% | 98.7% |
| Above $1.92 | 99.0% | 98.7% |
| Above $1.93 | 99.0% | 98.7% |
| Above $1.94 | 99.0% | 98.7% |
| Above $1.95 | 99.0% | 98.7% |
| Above $1.96 | 99.0% | 98.7% |
| Above $1.97 | 99.0% | 98.7% |
| Above $1.98 | 99.0% | 98.7% |
| Above $1.99 | 99.0% | 98.7% |
| Above $2.01 | 99.0% | 98.7% |
| Above $2.03 | 99.0% | 98.7% |
| Above $2.04 | 99.0% | 98.7% |
| Above $2.05 | 99.0% | 98.7% |
| Above $2.06 | 99.0% | 98.7% |
| Above $2.07 | 99.0% | 98.7% |
| Above $2.08 | 99.0% | 98.7% |
| Above $2.14 | 99.0% | 98.7% |
| Above $2.16 | 99.0% | 98.7% |
| Above $1.83 | 50.0% | 98.7% |
| Above $2.11 | 85.0% | 98.7% |

- Expiration: June 1, 2026

## Market Behavior & Price Dynamics

This prediction market has displayed a significant upward trend, moving from an initial 0.0% to a current probability of 85.0%. The price action has been marked by extreme volatility. A notable event occurred on May 05, 2026, when the probability experienced a sharp 50.0 percentage point drop from 50.0% to 0.0%. This dramatic decline appears to be linked to market awareness of the H100's higher inference costs relative to newer Blackwell architectures. Following this drop, the market quickly rebounded, reaching a high of 95.0% before settling near its current level, suggesting a rapid reassessment of the initial negative sentiment.

The total trading volume of 283 contracts across the market's history indicates moderate activity. Key price points have emerged from the recent volatility. The 0.0% level has acted as a temporary floor, while the recent peak of 95.0% may now serve as a resistance level. The current price of 85.0% suggests the market is finding a new support zone. Overall, despite the brief but severe downturn, the market sentiment is strongly bullish. The high probability indicates a strong belief among participants that the price of NVIDIA H100 SXM compute will meet or exceed the $1.780 per hour threshold by the resolution date.

## Significant Price Movements

### Outcome: Above $1.82

#### 📉 May 06, 2026: 34.0pp drop

Price decreased from 77.0% to 43.0%

**What happened:** The provided research does not identify specific social media activity, news announcements, or market structure events corresponding to the 34.0 percentage point drop on May 06, 2026. General expectations for NVIDIA H100 SXM compute prices in 2026 include competitive hourly rates near or below $2.00 for certain offerings or long-term reservations [[^]](https://www.gmicloud.ai/en/blog/nvidia-h100-gpu-pricing-2026-rent-vs-buy-cost-analysis)[[^]](https://www.reddit.com/r/deeplearning/comments/1qnjwdq/cloud_gpu_prices_vary_up_to_138x_for_h100s_i/)[[^]](https://www.hyperstack.cloud/nvidia-h100-sxm), and the introduction of newer NVIDIA GPUs such as Blackwell in Q1 2026 could eventually lead to H100 price stabilization or discounts [[^]](https://jarvislabs.ai/blog/h100-price)[[^]](https://www.spheron.network/blog/gpu-shortage-2026/)[[^]](https://www.silicondata.com/blog/gpu-pricing-trends-2026-what-to-expect-in-the-year-ahead), but no specific event is tied to the movement's date. No related social media activity or narrative could be identified from the provided sources.

### Outcome: Above $1.78

#### 📉 May 05, 2026: 50.0pp drop

Price decreased from 50.0% to 0.0%

**What happened:** The primary driver for the 50.0 percentage point drop in the prediction market for NVIDIA H100 SXM compute by May 31, 2026, appears to be market awareness of the H100's higher cost for inference operations compared to the newer Blackwell architectures [[^]](https://www.gurufocus.com/news/8640991/nvidias-nvda-h100-gpu-price-plummets-as-tech-giants-face-challenges?mobile=true). The introduction of the Blackwell (B200/B300) generation, which began shipping in 2025-2026 with lead times extending into mid-2026, is expected to gradually depress H100 prices due to its improved performance and energy efficiency [[^]](https://intuitionlabs.ai/articles/nvidia-ai-gpu-pricing-guide)[[^]](https://global-scale.io/ai-infrastructure-market-analysis-nvidia-h100-and-h200-gpus/)[[^]](https://cyfuture.ai/kb/gpu/nvidia-h100-gpu-price-2026-buying-options)[[^]](https://www.itiger.com/news/1185188896). This reflects a shift in market value as more advanced and efficient alternatives become available, particularly impacting long-term rental rate predictions for H100s. Based on the provided research, social media was not identified as a primary driver for this specific price movement.

## Contract Snapshot

The market resolves to "Yes" if the NVIDIA H100 SXM compute per hour is above $2.17 on May 31, 2026; otherwise, it resolves to "No." The outcome is verified by Ornn's reported "USD" value, rounded to two decimal places, and revisions made after the market closes on May 31, 2026, at 11:59pm EDT will not be accounted for. If no data is available by the expiration date, the market resolves to "No," with payout projected for June 1, 2026.
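
The resolution rule above can be sketched as a small function; the names and signature are ours, not the market's actual settlement code:

```python
def resolves_yes(reported_usd, strike=2.17):
    """Resolution sketch: "Yes" only if Ornn's reported USD value,
    rounded to two decimal places, is above the strike.
    Missing data by expiration resolves the market to "No".
    """
    if reported_usd is None:
        return False  # no data available by the expiration date -> "No"
    return round(reported_usd, 2) > strike

print(resolves_yes(2.20))   # True: 2.20 is above the $2.17 strike
print(resolves_yes(2.17))   # False: must be strictly above the strike
print(resolves_yes(None))   # False: no data by expiration
```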

## Market Discussion

The prediction market for NVIDIA H100 SXM compute by May 31, 2026, settles based on an H100 SXM compute-per-hour USD index (OCPI), with strikes visible around $1.78–$1.82 per hour [[^]](https://robinhood.com/us/en/prediction-markets/technology/events/price-of-nvidia-h100-sxm-compute-by-may-31-2026-may-04-2026/). One firm's public commentary suggests upward pricing pressure, as its H100 1-year GPU rental contract price index rose from about $1.70/hr/GPU in October 2025 to about $2.35/hr/GPU by March 2026 [[^]](https://newsletter.semianalysis.com/p/the-great-gpu-shortage-rental-capacity). A third-party pricing guide also cites H100 SXM cloud rental starting at approximately $2.10/hr [[^]](https://gpucost.org/gpu/h100-sxm).

## Market Data

| Contract | Yes Bid | Yes Ask | Last Price | Volume | Open Interest |
| --- | --- | --- | --- | --- | --- |
| Above $1.78 | 80% | 100% | 85% | $283.01 | $264.93 |
| Above $1.79 | 88% | 99% | 99% | $4 | $2 |
| Above $1.80 | 88% | 99% | 99% | $4 | $2 |
| Above $1.81 | 85% | 99% | 99% | $4 | $2 |
| Above $1.82 | 85% | 99% | 99% | $4 | $2 |
| Above $1.83 | 85% | 99% | 50% | $1 | $1 |
| Above $1.84 | 85% | 99% | 99% | $4 | $2 |
| Above $1.85 | 85% | 99% | 99% | $4 | $2 |
| Above $1.86 | 85% | 99% | 99% | $4 | $2 |
| Above $1.87 | 85% | 99% | 99% | $4 | $2 |
| Above $1.88 | 85% | 99% | 99% | $4 | $2 |
| Above $1.89 | 85% | 98% | 99% | $4 | $2 |
| Above $1.90 | 85% | 98% | 85% | $133.89 | $133.89 |
| Above $1.91 | 85% | 99% | 99% | $4 | $2 |
| Above $1.92 | 85% | 99% | 99% | $4 | $2 |
| Above $1.93 | 85% | 99% | 99% | $4 | $2 |
| Above $1.94 | 85% | 99% | 99% | $4 | $2 |
| Above $1.95 | 85% | 99% | 99% | $4 | $2 |
| Above $1.96 | 85% | 99% | 99% | $4 | $2 |
| Above $1.97 | 85% | 99% | 99% | $4 | $2 |
| Above $1.98 | 85% | 99% | 99% | $4 | $2 |
| Above $1.99 | 85% | 99% | 99% | $4 | $2 |
| Above $2.00 | 85% | 98% | 75% | $13 | $13 |
| Above $2.01 | 85% | 99% | 99% | $4 | $2 |
| Above $2.02 | 85% | 99% | 98% | $6 | $2 |
| Above $2.03 | 85% | 99% | 99% | $4 | $2 |
| Above $2.04 | 85% | 99% | 99% | $4 | $2 |
| Above $2.05 | 85% | 99% | 99% | $4 | $2 |
| Above $2.06 | 85% | 99% | 99% | $4 | $2 |
| Above $2.07 | 85% | 99% | 99% | $4 | $2 |
| Above $2.08 | 86% | 99% | 99% | $4 | $2 |
| Above $2.09 | 85% | 98% | 86% | $5 | $3 |
| Above $2.10 | 85% | 98% | 98% | $153.26 | $93.45 |
| Above $2.11 | 0% | 99% | 85% | $1 | $1 |
| Above $2.12 | 60% | 98% | 85% | $14 | $13 |
| Above $2.13 | 0% | 98% | 85% | $6 | $4 |
| Above $2.14 | 85% | 99% | 99% | $2 | $1 |
| Above $2.15 | 86% | 99% | 98% | $19 | $17 |
| Above $2.16 | 85% | 99% | 99% | $2 | $1 |
| Above $2.17 | 48% | 96% | 47% | $300.97 | $151.46 |
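
One common way to read a book like the one above is the bid/ask midpoint as an implied probability; this is a sketch using two rows from the table, and the function name is ours:

```python
def mid_probability(yes_bid_pct: float, yes_ask_pct: float) -> float:
    """Midpoint of the Yes bid/ask spread, expressed as a probability."""
    return (yes_bid_pct + yes_ask_pct) / 2 / 100

# "Above $1.78" row: 80% bid / 100% ask -> implied ~90%
print(mid_probability(80, 100))  # 0.9
# "Above $2.17" row: 48% bid / 96% ask -> wide spread, implied ~72%
print(mid_probability(48, 96))   # 0.72
```

Wide spreads like the $2.17 row's 48/96 make the midpoint a weak signal; the last trade (47%) and volume matter more there.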

## How will the launch and market adoption of NVIDIA's Blackwell GPUs influence the rental price of H100 compute through mid-2026?

| Key Fact | Detail |
| --- | --- |
| Blackwell Architecture Launch | March 18, 2024 (GTC 2024) [[^]](https://www.nexgencloud.com/blog/performance-benchmarks/nvidia-blackwell-gpus-architecture-features-specs)[[^]](https://nvidianews.nvidia.com/news/nvidia-blackwell-platform-arrives-to-power-a-new-era-of-computing)[[^]](https://en.wikipedia.org/wiki/Blackwell_(microarchitecture)) |
| H100 Rental Price Range | $1.38 to $10.00 per GPU-hour [[^]](https://www.gmicloud.ai/blog/2025-cost-of-renting-or-uying-nvidia-h100-gpus-for-data-centers)[[^]](https://www.thundercompute.com/blog/nvidia-h100-pricing) |
| Projected AI Infrastructure Spending 2026 | Nearly $700 billion [[^]](https://www.cnas.org/publications/reports/american-ai-companies-cant-get-enough-chips) |

NVIDIA officially launched the Blackwell architecture on March 18, 2024, at GTC 2024, with products like the B200 and GB200 becoming available from partners later in 2024 and through 2025 [[^]](https://www.nexgencloud.com/blog/performance-benchmarks/nvidia-blackwell-gpus-architecture-features-specs)[[^]](https://nvidianews.nvidia.com/news/nvidia-blackwell-platform-arrives-to-power-a-new-era-of-computing)[[^]](https://en.wikipedia.org/wiki/Blackwell_(microarchitecture)). Blackwell GPUs offer significant performance advantages over the H100, being up to **57%** faster for model training and potentially up to 10 times cheaper to run when self-hosted [[^]](https://www.lightly.ai/blog/nvidia-b200-vs-h100). The increased supply of H100s throughout 2025, coupled with the arrival of Blackwell, led to a stabilization or slight softening of H100 prices from their late 2024 peaks [[^]](https://neuralrack.ai/blog/unsustainable-cloud-gpu-rental-costs-2026-jan-10-2026).

AI chip demand continues to outpace supply into 2026. Despite the introduction of the more advanced Blackwell generation, the demand for AI chips continues to outpace supply in 2026, making chip production a binding constraint on the overall AI compute buildout [[^]](https://www.cnas.org/publications/reports/american-ai-companies-cant-get-enough-chips). Major technology companies are projected to spend nearly **$700** billion on capital expenditures in 2026, with a significant portion dedicated to AI infrastructure [[^]](https://www.cnas.org/publications/reports/american-ai-companies-cant-get-enough-chips). This robust demand, combined with persistent supply chain constraints for both H100 and Blackwell, indicates that H100s are expected to retain significant value and utility.

H100 rental prices are expected to remain substantial through mid-2026. Current rental prices for NVIDIA H100 SXM compute typically range between **$2.00** and **$10.00** per GPU-hour, with more competitive cloud providers offering H100 80GB GPUs as low as **$1.38** to **$2.40** per GPU-hour [[^]](https://www.gmicloud.ai/blog/2025-cost-of-renting-or-uying-nvidia-h100-gpus-for-data-centers)[[^]](https://www.thundercompute.com/blog/nvidia-h100-pricing). Given the substantial and ongoing demand for AI compute, along with persistent supply chain constraints affecting both H100 and Blackwell GPUs, H100s are anticipated to maintain their significant value and utility through mid-2026 [[^]](https://www.cnas.org/publications/reports/american-ai-companies-cant-get-enough-chips)[[^]](https://www.cudocompute.com/blog/nvidia-gpu-upgrade-planning)[[^]](https://www.patsnap.com/resources/blog/articles/nvidia-gpu-architecture-roadmap-cuda-to-blackwell/)[[^]](https://www.gmicloud.ai/blog/2025-cost-of-renting-or-uying-nvidia-h100-gpus-for-data-centers)[[^]](https://www.gmicloud.ai/en/blog/nvidia-h100-gpu-pricing-2026-rent-vs-buy-cost-analysis).

## What is the methodology of the Ornn AI compute index, and what does its historical data reveal about H100 SXM price stability?

| Key Fact | Detail |
| --- | --- |
| OCPI Basis | Real trades and live traded spot prices for H100 and H100 SXM GPUs [[^]](https://ornn.trade/docs/ornn_index.pdf)[[^]](https://www.prnewswire.com/news-releases/ornn-compute-price-index-added-to-bloomberg-terminal-302732184.html) |
| Settlement Logic | Averaged values over the contract duration for futures and swaps [[^]](https://davefriedman.substack.com/p/how-to-control-your-ai-compute-budget)[[^]](https://theinnermostloop.substack.com/p/the-first-tradable-compute-price) |
| H100 SXM Historical Data | Complete historical price table not available for numerical stability analysis [[^]](https://ornn.trade/docs/ornn_index.pdf)[[^]](https://www.prnewswire.com/news-releases/ornn-compute-price-index-added-to-bloomberg-terminal-302732184.html)[[^]](https://kalshi.com/markets/kxh100mon/h100-monthly-price/kxh100mon-26may31) |

**The Ornn AI compute index tracks real-time GPU pricing**

The Ornn AI compute index (OCPI) provides a reference for pricing and settling compute derivatives, built upon real trades and live spot prices for essential GPU models, including H100 and H100 SXM [[^]](https://ornn.trade/docs/ornn_index.pdf)[[^]](https://www.prnewswire.com/news-releases/ornn-compute-price-index-added-to-bloomberg-terminal-302732184.html). The settlement logic for futures and swaps employs averaged values over the contract’s duration, reflecting the instant-consumption character of GPU compute, which is often likened to electricity [[^]](https://davefriedman.substack.com/p/how-to-control-your-ai-compute-budget)[[^]](https://theinnermostloop.substack.com/p/the-first-tradable-compute-price).
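
Averaged settlement of the kind described can be sketched as follows; the daily prices here are hypothetical, and the function name is ours:

```python
def settle(daily_prices: list[float]) -> float:
    """Settlement value: the average of the index over the contract window."""
    return sum(daily_prices) / len(daily_prices)

# Hypothetical daily H100 SXM $/hr index readings over a short window
prices = [2.10, 2.15, 2.05, 2.20, 2.10]
print(round(settle(prices), 2))  # 2.12
```

Averaging over the window dampens single-day spikes, which is why the settlement style suits a spot-traded, instantly consumed resource.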

H100 SXM historical price data is not fully available. The accessible research does not contain a complete OCPI-H100 SXM historical price table [[^]](https://ornn.trade/docs/ornn_index.pdf)[[^]](https://www.prnewswire.com/news-releases/ornn-compute-price-index-added-to-bloomberg-terminal-302732184.html)[[^]](https://kalshi.com/markets/kxh100mon/h100-monthly-price/kxh100mon-26may31). This data limitation prevents the numerical calculation of variance, volatility, or a definitive stability score for the period leading up to May 4, 2026 [[^]](https://ornn.trade/docs/ornn_index.pdf)[[^]](https://www.prnewswire.com/news-releases/ornn-compute-price-index-added-to-bloomberg-terminal-302732184.html)[[^]](https://kalshi.com/markets/kxh100mon/h100-monthly-price/kxh100mon-26may31). Therefore, any conclusions regarding price stability from this specific dataset must be qualitative rather than quantitative [[^]](https://ornn.trade/docs/ornn_index.pdf)[[^]](https://www.prnewswire.com/news-releases/ornn-compute-price-index-added-to-bloomberg-terminal-302732184.html)[[^]](https://kalshi.com/markets/kxh100mon/h100-monthly-price/kxh100mon-26may31). A Kalshi market resolution, which depends on whether the H100 SXM compute per hour value exceeds a particular strike, is noted as being verified by Ornn [[^]](https://kalshi.com/markets/kxh100mon/h100-monthly-price/kxh100mon-26may31).

## How do AMD's MI300X and Intel's Gaudi 3 compare to the NVIDIA H100 on key price-performance benchmarks and cloud market share heading into 2026?

| Key Fact | Detail |
| --- | --- |
| NVIDIA H100 Market Share (2026) | Approximately 80% by revenue [[^]](https://siliconanalysts.com/analysis/amd-vs-nvidia-ai-gpu-market-share-2026) |
| AMD MI300X Memory | 192GB of HBM3 memory [[^]](https://www.hostrunway.com/blog/nvidia-h100-vs-amd-mi300x-vs-intel-gaudi3-best-gpu-for-ai-training-llm-inference/) |
| Intel Gaudi 3 Price vs H100 | About half the cost of an H100 [[^]](https://www.hostrunway.com/blog/nvidia-h100-vs-amd-mi300x-vs-intel-gaudi3-best-gpu-for-ai-training-llm-inference/) |

**NVIDIA's H100 maintains significant market dominance and high performance**

Heading into 2026, the H100 is projected to command approximately **80%** of the AI accelerator market by revenue, with NVIDIA's data center revenue reaching **$193.7** billion in FY2026 [[^]](https://siliconanalysts.com/analysis/amd-vs-nvidia-ai-gpu-market-share-2026). The H100 delivers 1,979 TFLOPS in FP16 and nearly 4,000 TFLOPS in FP8, featuring 80GB of memory and a memory bandwidth of 3.35 TB/s [[^]](https://www.hostrunway.com/blog/nvidia-h100-vs-amd-mi300x-vs-intel-gaudi3-best-gpu-for-ai-training-llm-inference/). A single H100 card typically costs between **$25,000** and **$40,000** for direct purchase [[^]](https://siliconanalysts.com/analysis/amd-vs-nvidia-ai-gpu-market-share-2026)[[^]](https://www.gmicloud.ai/en/blog/nvidia-h100-gpu-pricing-2026-rent-vs-buy-cost-analysis)[[^]](https://computeprices.com/gpus/h100)[[^]](https://cyfuture.ai/kb/gpu/how-much-does-the-h100-gpu-cost-in-2026)[[^]](https://jarvislabs.ai/blog/h100-price)[[^]](https://www.thundercompute.com/blog/ai-gpu-rental-market-trends)[[^]](https://www.thundercompute.com/blog/nvidia-h100-pricing)[[^]](https://neuralrack.ai/blog/unsustainable-cloud-gpu-rental-costs-2026-jan-10-2026)[[^]](https://intuitionlabs.ai/pdfs/nvidia-ai-gpu-prices-h100-27k-40k-h200-315k-8-gpu-cost-guide.pdf).
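
Those purchase and rental figures imply a simple rent-vs-buy breakeven; this is a back-of-envelope sketch with our own function name, and it ignores power, hosting, financing, and depreciation:

```python
def breakeven_hours(purchase_usd: float, rental_usd_per_hr: float) -> float:
    """Hours of rental whose cumulative cost equals the purchase price."""
    return purchase_usd / rental_usd_per_hr

# Mid-range card price vs the reported ~$2.35/hr one-year contract rate
hours = breakeven_hours(30_000, 2.35)
print(f"{hours:,.0f} hours (~{hours / 8760:.1f} years of 24/7 use)")
```

At roughly 13,000 hours, a card bought outright pays for itself in under a year and a half of continuous use at that rental rate, which is one reason rental prices track hardware prices so closely.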

AMD's MI300X offers a compelling value, especially for inference. It stands out with its 192GB of HBM3 memory and a superior memory bandwidth of 5.3 TB/s, making it particularly effective for large language model inference tasks [[^]](https://www.hostrunway.com/blog/nvidia-h100-vs-amd-mi300x-vs-intel-gaudi3-best-gpu-for-ai-training-llm-inference/)[[^]](https://introl.com/blog/amd-mi300x-vs-nvidia-h100-breaking-cuda-monopoly)[[^]](https://www.clarifai.com/blog/mi300x-vs-h100)[[^]](https://getdeploying.com/gpus/amd-mi300x-vs-nvidia-h100). This capacity enables single-GPU inference for models exceeding 100 billion parameters and can lead to 10-**40%** better inference performance and **40%** lower latency in memory-bound applications [[^]](https://www.hostrunway.com/blog/nvidia-h100-vs-amd-mi300x-vs-intel-gaudi3-best-gpu-for-ai-training-llm-inference/)[[^]](https://introl.com/blog/amd-mi300x-vs-nvidia-h100-breaking-cuda-monopoly)[[^]](https://www.clarifai.com/blog/mi300x-vs-h100)[[^]](https://getdeploying.com/gpus/amd-mi300x-vs-nvidia-h100). While its theoretical FP16 performance is 1,310 TFLOPS, its real-world performance is approximately **45%** of this peak due to software maturity, contrasting with NVIDIA's approximately **93%** efficiency [[^]](https://www.hostrunway.com/blog/nvidia-h100-vs-amd-mi300x-vs-intel-gaudi3-best-gpu-for-ai-training-llm-inference/)[[^]](https://siliconanalysts.com/analysis/amd-vs-nvidia-ai-gpu-market-share-2026). The MI300X is priced more affordably, estimated at **$10,000** to **$15,000** [[^]](https://siliconanalysts.com/analysis/amd-vs-nvidia-ai-gpu-market-share-2026)[[^]](https://introl.com/blog/amd-mi300x-vs-nvidia-h100-breaking-cuda-monopoly)[[^]](https://getdeploying.com/gpus/amd-mi300x-vs-nvidia-h100), and AMD's Instinct GPU line is projected to capture 5-**7%** market share with revenues of **$7**-8 billion in FY2026 [[^]](https://siliconanalysts.com/analysis/amd-vs-nvidia-ai-gpu-market-share-2026).

Intel's Gaudi 3 offers a cost-effective option with limited acceptance. This accelerator provides 1,800 TFLOPS in BF16/FP8 and a memory bandwidth of 3.67 TB/s [[^]](https://www.hostrunway.com/blog/nvidia-h100-vs-amd-mi300x-vs-intel-gaudi3-best-gpu-for-ai-training-llm-inference/). It is positioned at about half the cost of an H100, although optimizing its performance may necessitate additional engineering resources [[^]](https://www.hostrunway.com/blog/nvidia-h100-vs-amd-mi300x-vs-intel-gaudi3-best-gpu-for-ai-training-llm-inference/). Gaudi 3 accelerators became available for production workloads on IBM Cloud in Q1 2025 [[^]](https://newsroom.ibm.com/blog-intel-and-ibm-announce-the-availability-of-intel-gaudi-3-ai-accelerators-on-ibm-cloud). Despite its competitive pricing, the Gaudi 3 has experienced limited overall market acceptance [[^]](https://www.mexc.com/news/1075336)[[^]](https://www.openpr.com/news/4504542/ai-inference-chip-market-accelerates-alongside-the-broader-ai). Furthermore, custom silicon developed by hyperscalers, such as Google's TPUs and AWS's Trainium, poses a growing competitive threat that is larger and expanding faster than AMD's challenge to NVIDIA [[^]](https://siliconanalysts.com/analysis/amd-vs-nvidia-ai-gpu-market-share-2026).

## What are the H100 procurement plans and capital expenditure forecasts of major cloud providers like AWS, Azure, and GCP for the 2025-2026 timeframe?

| Key Fact | Detail |
| --- | --- |
| Projected 2026 US Cloud/AI Capex | $660 billion - $690 billion (2026) [[^]](https://futurumgroup.com/insights/ai-capex-2026-the-690b-infrastructure-sprint/)[[^]](https://www.forbes.com/sites/rscottraynovich/2026/03/10/inside-the-top-private-infra-companies-taking-advantage-of-the-ai-boom/) |
| Amazon 2026 Capex | ~$200 billion [[^]](https://futurumgroup.com/insights/ai-capex-2026-the-690b-infrastructure-sprint/)[[^]](https://www.industrialinfo.com/news/article/amazon-boosts-2026-capex-to-200-billion-amid-data-center-surge--353118)[[^]](https://www.constellationr.com/insights/news/amazon-sees-200-billion-capex-ahead-aws-sales-surge-24) |
| H100 Rental Price (March 2026) | ~$2.35 per hour (for one-year contracts) [[^]](https://www.kavout.com/market-lens/why-are-nvidia-h100-gpu-rental-prices-surging-by-40)[[^]](https://www.itiger.com/news/1185188896) |

**Major cloud providers anticipate substantial capital expenditure increases for AI infrastructure**

Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP) are planning significant capital expenditure (capex) increases for 2025 and 2026, primarily to fund AI infrastructure, including NVIDIA H100 GPUs and next-generation accelerators [[^]](https://futurumgroup.com/insights/ai-capex-2026-the-690b-infrastructure-sprint/)[[^]](https://finance.biggo.com/news/-_FL_Z0BLfE1EzqPT8WO)[[^]](https://mlq.ai/news/googles-massive-2026-capex-forecast-dwarfs-wall-street-predictions/)[[^]](https://www.cloudcomputing-news.net/news/ai-demand-pushes-companies-to-invest-billions-in-cloud-infrastructure/)[[^]](https://www.reddit.com/r/amazonemployees/comments/1qxt5jt/amazon_plans_200b_in_capex_by_2026_what_does_this/). This surge is driven by "insatiable AI demand" that continues to outpace available supply [[^]](https://finance.biggo.com/news/-_FL_Z0BLfE1EzqPT8WO)[[^]](https://www.kavout.com/market-lens/why-are-nvidia-h100-gpu-rental-prices-surging-by-40). The five largest U.S. cloud and AI infrastructure providers are projected to collectively spend between **$660** billion and **$690** billion on capex in 2026, nearly doubling 2025 levels, with AI capex potentially approaching **$1** trillion per year when including other major players [[^]](https://futurumgroup.com/insights/ai-capex-2026-the-690b-infrastructure-sprint/)[[^]](https://www.forbes.com/sites/rscottraynovich/2026/03/10/inside-the-top-private-infra-companies-taking-advantage-of-the-ai-boom/).
Specific plans include Amazon investing approximately **$200** billion in 2026 [[^]](https://futurumgroup.com/insights/ai-capex-2026-the-690b-infrastructure-sprint/)[[^]](https://www.industrialinfo.com/news/article/amazon-boosts-2026-capex-to-200-billion-amid-data-center-surge--353118)[[^]](https://www.constellationr.com/insights/news/amazon-sees-200-billion-capex-ahead-aws-sales-surge-24), Alphabet (Google) forecasting **$175** billion to **$185** billion in 2026 [[^]](https://futurumgroup.com/insights/ai-capex-2026-the-690b-infrastructure-sprint/)[[^]](https://mlq.ai/news/googles-massive-2026-capex-forecast-dwarfs-wall-street-predictions/)[[^]](https://www.trendforce.com/news/2026/02/05/news-google-reportedly-to-nearly-double-2026-capex-as-cloud-revenue-jumps-nearly-48/)[[^]](https://www.futuriom.com/articles/news/can-google-keep-its-balance-with-gigantic-capex/2026/02)[[^]](https://thetechcapital.com/googles-parent-to-nearly-double-capex-in-2026-as-data-centre-buildout-intensifies/), and Microsoft raising its 2026 capex guidance to approximately **$190** billion, with two-thirds allocated to GPUs and CPUs for Azure and Copilot services [[^]](https://www.heygotrade.com/en/blog/microsoft-azure-grows-39-msft-core-holding-2026/)[[^]](https://www.insiderfinance.io/news/microsoft-earnings-preview-azure-and-ai).

Cloud providers are securing GPUs amid high demand and elevated prices. Major cloud players are building out GPU clusters and developing custom AI chips to meet the booming demand for AI workloads [[^]](https://finance.biggo.com/news/-_FL_Z0BLfE1EzqPT8WO). NVIDIA's Blackwell GPU production is reportedly committed to hyperscalers through mid-2026, and existing H100 inventory is being acquired faster than it can be replenished, indicating high procurement [[^]](https://www.gpuloans.com/blog/why-blackwell-didnt-ease-h100-market). As of early to mid-2026, the price of NVIDIA H100 SXM compute remains elevated due to sustained high demand, persistent supply chain bottlenecks, and rising component costs [[^]](https://jarvislabs.ai/blog/h100-price)[[^]](https://www.gpuloans.com/blog/why-blackwell-didnt-ease-h100-market)[[^]](https://intuitionlabs.ai/pdfs/nvidia-ai-gpu-prices-h100-27k-40k-h200-315k-8-gpu-cost-guide.pdf)[[^]](https://www.kavout.com/market-lens/why-are-nvidia-h100-gpu-rental-prices-surging-by-40)[[^]](https://www.itiger.com/news/1185188896). H100 rental prices have surged by nearly **40%** since October 2025, reaching approximately **$2.35** per hour by March 2026 for one-year contracts [[^]](https://www.kavout.com/market-lens/why-are-nvidia-h100-gpu-rental-prices-surging-by-40)[[^]](https://www.itiger.com/news/1185188896). Despite Blackwell GPUs beginning volume production in Q1 2026 with initial supply heavily committed to hyperscalers, the scarcity and strong demand for both new and existing generations are keeping H100 prices high [[^]](https://www.gpuloans.com/blog/why-blackwell-didnt-ease-h100-market).

## What are the latest industry projections for TSMC's CoWoS packaging capacity through H1 2026, and how could this impact NVIDIA's H100 supply?

| Key Fact | Detail |
| --- | --- |
| CoWoS Capacity Booking | Fully booked through 2025 and well into 2026 [[^]](https://www.vamsitalkstech.com/ai/the-gpu-supply-chain-crisis-what-every-enterprise-cio-must-know-in-2026/)[[^]](https://info.fusionww.com/blog/inside-the-ai-bottleneck-cowos-hbm-and-2-3nm-capacity-constraints-through-2027)[[^]](https://globalsemiresearch.substack.com/p/tsmcs-cowos-capacity-scaling-up-outsourcing)[[^]](https://markets.financialcontent.com/stocks/article/tokenring-2026-1-1-the-great-packaging-pivot-how-tsmc-is-doubling-cowos-capacity-to-break-the-ai-supply-bottleneck-through-2026) |
| Target CoWoS Capacity (End 2026) | 120,000 to 130,000 wafers per month [[^]](https://www.vamsitalkstech.com/ai/the-gpu-supply-chain-crisis-what-every-enterprise-cio-must-know-in-2026/)[[^]](https://globalsemiresearch.substack.com/p/tsmcs-cowos-capacity-scaling-up-outsourcing)[[^]](https://markets.financialcontent.com/stocks/article/tokenring-2026-1-1-the-great-packaging-pivot-how-tsmc-is-doubling-cowos-capacity-to-break-the-ai-supply-bottleneck-through-2026) |
| NVIDIA's CoWoS Consumption Share | Approximately 60% of TSMC's capacity through 2027 [[^]](https://www.vamsitalkstech.com/ai/the-gpu-supply-chain-crisis-what-every-enterprise-cio-must-know-in-2026/)[[^]](https://medium.com/@joe_62117/the-ai-chip-wars-is-nvidias-moat-under-siege-ad65d4027241)[[^]](https://www.astutegroup.com/news/industrial/advanced-packaging-demand-soars-nvidia-secures-60-of-cowos-capacity/)[[^]](https://eu.36kr.com/en/p/3580962946874242) |

**TSMC plans significant CoWoS capacity expansion through H1 2026**

TSMC's CoWoS packaging capacity is reported to be fully booked through 2025 and well into 2026, serving as a critical bottleneck for high-performance AI accelerators [[^]](https://www.vamsitalkstech.com/ai/the-gpu-supply-chain-crisis-what-every-enterprise-cio-must-know-in-2026/)[[^]](https://info.fusionww.com/blog/inside-the-ai-bottleneck-cowos-hbm-and-2-3nm-capacity-constraints-through-2027)[[^]](https://globalsemiresearch.substack.com/p/tsmcs-cowos-capacity-scaling-up-outsourcing)[[^]](https://markets.financialcontent.com/stocks/article/tokenring-2026-1-1-the-great-packaging-pivot-how-tsmc-is-doubling-cowos-capacity-to-break-the-ai-supply-bottleneck-through-2026). As of late 2025, the monthly CoWoS capacity was approximately 75,000 to 80,000 wafers. TSMC aims for a substantial increase to 120,000 to 130,000 wafers per month by the end of 2026, primarily through optimizing existing facilities [[^]](https://www.vamsitalkstech.com/ai/the-gpu-supply-chain-crisis-what-every-enterprise-cio-must-know-in-2026/)[[^]](https://globalsemiresearch.substack.com/p/tsmcs-cowos-capacity-scaling-up-outsourcing)[[^]](https://markets.financialcontent.com/stocks/article/tokenring-2026-1-1-the-great-packaging-pivot-how-tsmc-is-doubling-cowos-capacity-to-break-the-ai-supply-bottleneck-through-2026). To further alleviate capacity constraints, TSMC also plans to outsource an estimated 240,000–270,000 wafers annually in 2026 to partners such as Amkor and SPIL [[^]](https://globalsemiresearch.substack.com/p/tsmcs-cowos-capacity-scaling-up-outsourcing). Despite these aggressive expansion efforts, the increased CoWoS capacity is considered insufficient to meet the surging global demand for AI chips [[^]](https://www.vamsitalkstech.com/ai/the-gpu-supply-chain-crisis-what-every-enterprise-cio-must-know-in-2026/).
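
The reported expansion works out to roughly a 50-75% capacity increase; this is our own arithmetic from the wafer figures cited above:

```python
# Reported CoWoS wafer-per-month capacity: late 2025 vs the end-2026 target
late_2025 = (75_000, 80_000)
end_2026 = (120_000, 130_000)

low_growth = end_2026[0] / late_2025[1] - 1   # conservative pairing of bounds
high_growth = end_2026[1] / late_2025[0] - 1  # optimistic pairing of bounds
print(f"{low_growth:.0%} to {high_growth:.0%} increase")  # 50% to 73% increase
```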

NVIDIA dominates CoWoS allocation, facing H100 supply constraints. NVIDIA, a dominant consumer, is estimated to secure approximately **60%** of TSMC's CoWoS capacity through 2027 [[^]](https://www.vamsitalkstech.com/ai/the-gpu-supply-chain-crisis-what-every-enterprise-cio-must-know-in-2026/)[[^]](https://medium.com/@joe_62117/the-ai-chip-wars-is-nvidias-moat-under-siege-ad65d4027241)[[^]](https://www.astutegroup.com/news/industrial/advanced-packaging-demand-soars-nvidia-secures-60-of-cowos-capacity/)[[^]](https://eu.36kr.com/en/p/3580962946874242). The company has confirmed that ongoing limitations in component supply, such as HBM memory, combined with an oversubscribed CoWoS assembly capacity, pose short-term challenges for H100 production through at least mid-2026 [[^]](https://info.fusionww.com/blog/inside-the-ai-bottleneck-cowos-hbm-and-2-3nm-capacity-constraints-through-2027).

CoWoS scarcity impacts **market** entry and H100 GPU pricing. The scarcity of CoWoS capacity has also created a significant **market** entry barrier for many aspiring participants in the AI sector [[^]](https://www.vamsitalkstech.com/ai/the-gpu-supply-chain-crisis-what-every-enterprise-cio-must-know-in-2026/)[[^]](https://eu.36kr.com/en/p/3580962946874242). While H100 rental rates fluctuated, dipping in late 2025 before recovering by early 2026, the direct purchase cost for a single H100 GPU ranges from around **$25,000** to over **$40,000** for SXM models [[^]](https://medium.com/@joe_62117/the-ai-chip-wars-is-nvidias-moat-under-siege-ad65d4027241)[[^]](https://jarvislabs.ai/blog/h100-price)[[^]](https://www.szwecent.com/how-to-secure-the-best-nvidia-h100-price-a-bulk-buying-guide-for-data-centers-in-2026/)[[^]](https://www.gmicloud.ai/en/blog/nvidia-h100-gpu-pricing-2026-rent-vs-buy-cost-analysis). However, **market** forecasts anticipate price stabilization in 2026, with potential discounts as newer GPU releases, such as the H200 and B100, become more widely available [[^]](https://jarvislabs.ai/blog/h100-price)[[^]](https://www.szwecent.com/how-to-secure-the-best-nvidia-h100-price-a-bulk-buying-guide-for-data-centers-in-2026/)[[^]](https://www.gmicloud.ai/en/blog/nvidia-h100-gpu-pricing-2026-rent-vs-buy-cost-analysis).
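The purchase-versus-rental gap above implies a simple breakeven calculation. The sketch below uses the report's cited price points and ignores power, hosting, financing, and depreciation, so it understates the true cost of ownership; it is illustrative only.

```python
# Rough rent-vs-buy breakeven for an H100 SXM, using the price points above.
# Ignores power, hosting, financing, and depreciation -- illustrative only.

RENTAL_RATE = 2.10                       # USD per GPU-hour (cloud rental floor)
PURCHASE_LOW, PURCHASE_HIGH = 25_000, 40_000  # USD outright purchase range

def breakeven_hours(purchase_price: float, hourly_rate: float) -> float:
    """Hours of continuous rental whose cost equals the purchase price."""
    return purchase_price / hourly_rate

for price in (PURCHASE_LOW, PURCHASE_HIGH):
    hours = breakeven_hours(price, RENTAL_RATE)
    print(f"${price:,}: {hours:,.0f} h (~{hours / 8760:.1f} years continuous)")
```

At these figures, buying pays for itself only after roughly 1.4 to 2.2 years of continuous utilization, which is one reason sustained rental demand tracks the hardware supply picture so closely.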

## What Could Change the Odds

**Prediction markets provide insight into expectations for H100 SXM compute-per-hour by May 31, 2026.** Robinhood’s contract, which resolves based on Ornn’s reported value for H100 SXM compute per hour using the “USD” iteration of the index, lists “Above” strikes clustered around roughly **$1.78**–**$1.82** per hour [[^]](https://robinhood.com/us/en/prediction-markets/technology/events/price-of-nvidia-h100-sxm-compute-by-may-31-2026-may-04-2026/). The resolution mechanism specifies that revisions after expiration will not be counted [[^]](https://robinhood.com/us/en/prediction-markets/technology/events/price-of-nvidia-h100-sxm-compute-by-may-31-2026-may-04-2026/)[[^]](https://www.solflare.com/prediction/science-and-technology/event/KXH100MON-26MAR31/). This measure differs from the H100 SXM outright purchase price of ~**$32**k (MSRP **$30**k) and cloud rental starting around **$2.10**/hr [[^]](https://gpucost.org/gpu/h100-sxm). Prediction **market** activity in the sector, such as Kalshi users wagering on GPU compute prices based on Ornn’s platform where H100s were noted at ~**$1.70**, indicates these markets track an Ornn-derived compute-per-hour index [[^]](https://www.datacenterdynamics.com/en/news/kalshi-users-able-to-gamble-on-nvidia-gpu-compute-prices-based-on-ornns-compute-derivatives-platform/).
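The gap between the model and market probabilities quoted in this report can be turned into a simple expected-value figure. The sketch below uses the "Above $1.78" numbers from the tables above (85.0% market, 98.7% model); it assumes a $1-payout binary contract priced at its market probability, and is not trading advice.

```python
# Expected-value sketch for the "Above $1.78" contract, using the model
# and market probabilities quoted in this report. Illustrative only.

market_price = 0.85   # YES price implied by the 85.0% market probability
model_prob = 0.987    # Octagon model probability for YES

ev_per_dollar = model_prob / market_price  # expected payout multiple on cost
edge = model_prob - market_price           # expected profit per $1 contract

print(f"Expected payout multiple: {ev_per_dollar:.2f}x")
print(f"Edge per contract:        ${edge:.3f}")
```

Under these assumptions the expected payout is about 1.16x the price paid, with an edge of roughly $0.14 per contract, before fees and ignoring the model's own uncertainty.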

**Energy constraints and demand growth represent significant catalysts that could influence future compute pricing expectations.** Morgan Stanley’s 2026 AI energy outlook highlights that training AI creates higher and more variable pressure on the grid, emphasizing growing energy constraints and the development of on-site/off-grid solutions as a value-creation theme that can influence compute pricing expectations [[^]](https://www.morganstanley.com/insights/articles/powering-ai-energy-market-outlook-2026). Complementing this, CMC Markets’ 2026 Energy Outlook identifies electricity constraints, including data-center electricity demand growth and generation infrastructure limitations, as a key bottleneck to AI expansion [[^]](https://www.cmcmarkets.com/en-nz/analysis/2026-energy-outlook).

## Key Dates & Catalysts

- **Strike Date:** June 01, 2026
- **Expiration:** June 08, 2026
- **Closes:** June 01, 2026

## Decision-Flipping Events

- Prediction markets provide insight into expectations for H100 SXM compute-per-hour by May 31, 2026.
- Robinhood’s contract, which resolves based on Ornn’s reported value for H100 SXM compute per hour using the “USD” iteration of the index, lists “Above” strikes clustered around roughly **$1.78**–**$1.82** per hour.
- The resolution mechanism specifies that revisions after expiration will not be counted.
- This measure differs from the H100 SXM outright purchase price of ~**$32**k (MSRP **$30**k) and cloud rental starting around **$2.10**/hr.

## Related Research Reports

- [AI capability growth before July?](/markets/science-and-technology/ai/ai-capability-growth-before-july/)
- [Will the U.S. confirm that aliens exist before 2027?](/markets/science-and-technology/trump/will-the-u-s-confirm-that-aliens-exist-before-2027/)
- [What will the average number of measles cases be during Trump's term?](/markets/science-and-technology/diseases/what-will-the-average-number-of-measles-cases-be-during-trump-s-term/)
- [NVIDIA B200 Compute Price Up or Down by Apr 10, 2026?](/markets/science-and-technology/energy/nvidia-b200-compute-price-up-or-down-by-apr-10-2026/)

## Historical Resolutions

**Historical Resolutions:** 20 markets in this series

**Outcomes:** 20 resolved YES, 0 resolved NO

**Recent resolutions:**

- KXH100MON-26APR30-1.950: YES (May 01, 2026)
- KXH100MON-26APR30-1.940: YES (May 01, 2026)
- KXH100MON-26APR30-1.930: YES (May 01, 2026)
- KXH100MON-26APR30-1.920: YES (May 01, 2026)
- KXH100MON-26APR30-1.910: YES (May 01, 2026)

## Disclaimer

This content is for informational and educational purposes only and does not constitute financial, investment, legal, or trading advice.
Prediction markets involve risk of loss. Past performance does not guarantee future results.
We are not affiliated with Kalshi or any prediction market platform. Market data may be delayed or incomplete.

### Data Sources & Model Transparency

**Data Sources:** Octagon Deep Research aggregates information from multiple sources including news, filings, and market data.

**Freshness:** Analysis is generated periodically and may not reflect the latest developments. Verify critical information from primary sources.

## Attribution Policy

When quoting, summarizing, or reproducing Octagon AI content, attribute it to Octagon AI and link to the Octagon source URL: https://octagonai.co/markets/science-and-technology/energy/price-of-nvidia-h100-sxm-compute-by-may-31-2026
If a specific page was used, cite that page rather than only the site homepage.
