Compare Cloud GPU Costs
Real-time pricing across AWS, GCP, Azure, Lambda, and CoreWeave. Find the best $/hr for your workload.
- **Cheapest H100/hr:** $4.76 (CoreWeave, per GPU)
- **Best Spot Deal:** $5.50/hr (AWS L40S 48GB)
- **Providers Tracked:** 5 (major cloud platforms)
- **GPU Types:** 10+ (B200, H200, H100, MI300X, TPU, ...)
| Provider | GPU | Instance | VRAM | On-Demand $/hr | Spot $/hr | 1Y Reserved $/hr | Best For |
|---|---|---|---|---|---|---|---|
| CoreWeave | L40S 48GB | l40s-48gb | 48 GB | $1.50 | N/A | N/A | Per-GPU L40S pricing |
| CoreWeave | A100 80GB | a100-sxm-80gb | 80 GB | $2.21 | N/A | N/A | Per-GPU A100 pricing |
| CoreWeave | MI300X 192GB | mi300x-sxm-192gb | 192 GB | $4.10 | N/A | N/A | Per-GPU MI300X pricing |
| CoreWeave | H100 80GB | h100-sxm-80gb | 80 GB | $4.76 | N/A | N/A | Per-GPU H100 pricing |
| GCP | TPU v5e | ct5e-minitpu-8t | 128 GB HBM | $4.80 | N/A | $3.02 | Cost-effective JAX Inference |
| CoreWeave | H200 141GB | h200-sxm-141gb | 141 GB | $5.20 | N/A | N/A | Per-GPU H200 pricing |
| CoreWeave | B200 192GB | b200-sxm-192gb | 192 GB | $6.50 | N/A | N/A | Per-GPU B200 pricing |
| CoreWeave | B300 288GB | b300-sxm-288gb | 288 GB | $8.50 | N/A | N/A | Per-GPU B300 pricing |
| Lambda | L40S 48GB | gpu_8x_l40s | 384 GB (8×48) | $12.00 | N/A | N/A | Cost-Effective Inference & Rendering |
| GCP | TPU v6e | ct6e-standard-8t | 256 GB HBM | $12.50 | N/A | $7.88 | Cost-Efficient JAX Training & Inference |
| GCP | TPU v4 | ct4p-lowtpu-4t | 128 GB HBM | $12.80 | N/A | $7.68 | JAX Training |
| Lambda | A100 80GB | gpu_8x_a100_80gb_sxm4 | 640 GB (8×80) | $14.32 | N/A | N/A | Training (best value A100) |
| AWS | L40S 48GB | g7.48xlarge | 384 GB (8×48) | $16.00 | $5.50 (66% off) | $10.00 | Enterprise Inference & Video |
| GCP | TPU v5p | ct5p-hightpu-4t | 384 GB HBM | $21.10 | N/A | $13.29 | JAX/TPU-native training |
| Lambda | MI300X 192GB | gpu_8x_mi300x | 1536 GB (8×192) | $24.50 | N/A | N/A | High-VRAM Training |
| Lambda | H100 80GB | gpu_8x_h100_sxm5 | 640 GB (8×80) | $27.60 | N/A | N/A | Training (best value H100) |
| GCP | TPU v7 | ct7p-hightpu-4t | 384 GB HBM3e | $28.50 | N/A | $18.50 | Next-Gen JAX/TPU Frontier Training |
| Lambda | H200 141GB | gpu_8x_h200_sxm5 | 1128 GB (8×141) | $30.00 | N/A | N/A | LLM Inference (best value) |
| Azure | A100 80GB | ND96amsr_A100_v4 | 640 GB (8×80) | $32.77 | $9.83 (70% off) | $20.43 | Training & fine-tuning |
| Azure | MI250X 128GB | NDm_MI250X_v4 | 512 GB (4×128) | $36.00 | $10.80 (70% off) | $22.00 | HPC & Scientific Computing |
| GCP | A100 80GB | a2-ultragpu-8g | 640 GB (8×80) | $40.22 | $12.07 (70% off) | $25.34 | Training & fine-tuning |
| Azure | MI300A 128GB | ND_MI300A_v5 | 512 GB (4×128) | $45.00 | $13.50 (70% off) | $28.00 | Unified Memory HPC |
| Lambda | B300 288GB | gpu_8x_b300_sxm | 2304 GB (8×288) | $52.00 | N/A | N/A | Cheapest B300 (per-node) |
| Azure | MI300X 192GB | ND_MI300X_v5 | 1536 GB (8×192) | $92.50 | $27.75 (70% off) | $58.10 | High-memory LLM training |
| Azure | MI325X 288GB | ND_MI325X_v5 | 2304 GB (8×288) | $98.00 | $29.40 (70% off) | $62.00 | Extreme VRAM LLM Training |
| AWS | H100 80GB | p5.48xlarge | 640 GB (8×80) | $98.32 | $35.50 (64% off) | $62.12 | Large-scale training |
| Azure | H100 80GB | ND96isr_H100_v5 | 640 GB (8×80) | $98.32 | $29.50 (70% off) | $60.96 | Large-scale training |
| GCP | H100 80GB | a3-highgpu-8g | 640 GB (8×80) | $98.35 | $29.51 (70% off) | $61.64 | Large-scale training |
| AWS | H200 141GB | p5e.48xlarge | 1128 GB (8×141) | $104.00 | $38.00 (63% off) | $68.00 | Optimized LLM Inference |
| GCP | H200 141GB | a3-megagpu-8g | 1128 GB (8×141) | $105.00 | $31.00 (70% off) | $68.00 | Optimized LLM Inference |
| Azure | MI355X 288GB | ND_MI355X_v6 | 2304 GB (8×288) | $108.00 | $32.40 (70% off) | $70.00 | CDNA 4 Frontier Training |
| GCP | B200 192GB | a4-highgpu-8g | 1536 GB (8×192) | $110.00 | $33.00 (70% off) | $72.00 | Next-Gen Frontier Training |
| Azure | B200 192GB | ND_B200_v6 | 1536 GB (8×192) | $112.00 | $33.60 (70% off) | $73.00 | Next-Gen Frontier Training |
| AWS | B200 192GB | p6.48xlarge | 1536 GB (8×192) | $115.00 | $42.00 (63% off) | $75.00 | Next-Gen Frontier Training |
| GCP | B300 288GB | a5-ultragpu-8g | 2304 GB (8×288) | $142.00 | $42.60 (70% off) | $92.00 | Frontier Model Training (Blackwell Ultra) |
| Azure | B300 288GB | ND_B300_v7 | 2304 GB (8×288) | $145.00 | $43.50 (70% off) | $94.00 | Frontier Model Training (Blackwell Ultra) |
| AWS | B300 288GB | p7.48xlarge | 2304 GB (8×288) | $148.00 | $52.00 (65% off) | $96.00 | Frontier Model Training (Blackwell Ultra) |
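The table mixes per-GPU pricing (CoreWeave) with per-node pricing (8-GPU instances on AWS, GCP, Azure, and Lambda), so a direct $/hr comparison is misleading. A quick sketch, using the on-demand H100 figures from the table above, normalizes everything to $/GPU-hr. Note that a node-priced offer can work out cheaper per GPU even though it must be rented as a full node:

```python
# Normalize the table's mixed per-GPU and per-node on-demand prices
# to $/GPU-hr for an apples-to-apples H100 comparison.
offers = [
    # (provider, on_demand_usd_hr, gpus_per_instance)
    ("CoreWeave", 4.76,  1),  # priced per GPU
    ("Lambda",    27.60, 8),  # priced per 8-GPU node
    ("AWS",       98.32, 8),
    ("Azure",     98.32, 8),
    ("GCP",       98.35, 8),
]

# Sort by effective per-GPU rate, cheapest first.
for provider, price, n_gpus in sorted(offers, key=lambda o: o[1] / o[2]):
    print(f"{provider:10s} H100 80GB: ${price / n_gpus:.2f}/GPU-hr")
```

On these numbers, Lambda's 8-GPU node works out to $3.45/GPU-hr, below CoreWeave's $4.76 single-GPU rate, which is why the table flags it as the best-value H100 for training; CoreWeave remains the cheapest way to rent fewer than eight H100s at once.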
Use Spot for Training
Save 60-70% on training runs with checkpointing. GCP Spot offers up to 70% discount on A100/H100 instances.
Best for: Fault-tolerant training with checkpoints
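The spot math can be sketched with the GCP H100 rates from the table; the 72-hour job length and the 5% wall-clock overhead for preemption recovery are illustrative assumptions, not measured figures:

```python
# Spot vs. on-demand cost for a checkpointed training run.
on_demand_rate = 98.35        # GCP a3-highgpu-8g on-demand, $/hr (table above)
spot_rate = on_demand_rate * 0.30   # 70% spot discount
job_hours = 72                # assumed job length
restart_overhead = 1.05       # assumed 5% extra wall-clock lost to preemptions

on_demand_cost = on_demand_rate * job_hours
spot_cost = spot_rate * job_hours * restart_overhead

print(f"on-demand: ${on_demand_cost:,.2f}")
print(f"spot:      ${spot_cost:,.2f} ({1 - spot_cost / on_demand_cost:.1%} saved)")
```

Even after paying for preemption restarts, the run still costs roughly a third of the on-demand price, which is where the 60-70% savings figure comes from.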
Reserved for Inference
1-year commitments save 35-40% for always-on inference endpoints. Azure and AWS offer the deepest reserved discounts.
Best for: Production inference workloads
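A useful way to frame the reserved decision is break-even utilization: the commitment wins once the endpoint is busy more than the ratio of reserved to on-demand rates. A minimal sketch using the AWS H100 rates from the table:

```python
# Break-even utilization for a 1-year reserved commitment vs. on-demand.
on_demand = 98.32   # AWS p5.48xlarge on-demand, $/hr (table above)
reserved = 62.12    # 1Y reserved, $/hr

break_even = reserved / on_demand
print(f"reserved wins above {break_even:.0%} utilization "
      f"({1 - break_even:.0%} saved at 100% utilization)")
```

At roughly 63% utilization the two cost the same; an always-on production endpoint sits near 100%, so the full ~37% reserved discount is realized.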
Lambda/CoreWeave for Value
GPU cloud specialists offer 2-4x lower per-GPU pricing than hyperscalers, ideal for teams that don't need full cloud ecosystems.
Best for: Pure GPU compute without cloud services
Prices are approximate and vary by region and availability. Pricing reflects Q1 2026 estimates — always verify with provider pricing pages before procurement.