
Instinct MI355X vs Hopper H100 SXM5

Complete side-by-side comparison of specs, performance, memory, power efficiency, and pricing.

AMD Instinct MI355X: 96 spec wins
NVIDIA Hopper H100 SXM5: 73 spec wins

Detailed Specifications

| Spec | Instinct MI355X | Hopper H100 SXM5 |
| --- | --- | --- |
| Architecture | CDNA 4 | Hopper |
| Memory | 288 GB HBM3e | 80 GB HBM3 |
| Memory Bandwidth | 8,000 GB/s | 3,350 GB/s |
| FP16 TFLOPS | 2,400 | 1,979 |
| FP8 TFLOPS | 4,625 | 3,958 |
| BF16 TFLOPS | 2,400 | 1,979 |
| INT8 TOPS | 4,625 | 3,958 |
| TDP | 1400 W | 700 W |
| Interconnect | Infinity Fabric 4.0 (896 GB/s) | NVLink 4.0 (900 GB/s) |
| Perf Score | 96 | 73 |
| Ecosystem | ROCm | CUDA |
| Est. Price | $30,000 | $25,000 |

Instinct MI355X — Best For

LLM Training · Frontier AI · HPC

Hopper H100 SXM5 — Best For

LLM Training · HPC

Who Should Choose Each GPU?

Choose Instinct MI355X if you…

  • Need more VRAM (288GB vs 80GB) for large-model inference
  • Prioritize raw FP8 throughput (4,625 vs 3,958 TFLOPS)
  • Run LLM training, frontier AI, or HPC workloads
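The VRAM point above can be made concrete with a back-of-envelope fit check against the two capacities in the spec table. The bytes-per-parameter value and the ~20% activation/KV-cache overhead factor below are illustrative assumptions, not vendor figures:

```python
# Rough VRAM-fit check for serving a dense LLM on a single GPU.
# Capacities (288 GB vs 80 GB) come from the spec table; the overhead
# factor is an assumption covering activations and KV cache.

def fits_in_vram(params_b: float, bytes_per_param: float,
                 vram_gb: float, overhead: float = 1.2) -> bool:
    """True if weights (plus ~20% overhead) fit in VRAM."""
    weights_gb = params_b * bytes_per_param  # 1B params * 1 byte = 1 GB
    return weights_gb * overhead <= vram_gb

# A hypothetical 180B-parameter model served in FP8 (1 byte/param):
print(fits_in_vram(180, 1.0, 288))  # MI355X: True
print(fits_in_vram(180, 1.0, 80))   # H100:   False
```

By this estimate, the same FP8 model that fits comfortably on the MI355X would need multi-GPU sharding or heavier quantization on the H100.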

Choose Hopper H100 SXM5 if you…

  • Need maximum CUDA/TensorRT/vLLM ecosystem compatibility
  • Have power-constrained data centers (700W vs 1400W TDP)
  • Are working with a tighter CapEx budget (lower list price)
  • Run LLM training or HPC workloads

Verdict

The Instinct MI355X and Hopper H100 SXM5 target different priorities. The Instinct MI355X's 288GB of HBM3e gives it a clear edge for large-model inference where fitting the full model in VRAM eliminates quantization overhead. For training throughput, the Instinct MI355X's 4,625 FP8 TFLOPS outpaces the Hopper H100 SXM5's 3,958 TFLOPS. Teams already invested in the NVIDIA/CUDA ecosystem will have less friction with the Hopper H100 SXM5, while teams open to ROCm can benefit from the Instinct MI355X's advantages. Use our TCO Calculator to model the full 3-year cost difference for your specific utilization and power costs.
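The 3-year cost trade-off mentioned above can be sketched with a minimal hardware-plus-power model. The prices and TDPs are the ones listed in this comparison; the utilization, electricity rate, and PUE values are placeholder assumptions, and a real TCO model would add cooling, networking, hosting, and support:

```python
# Minimal 3-year TCO sketch (hardware + power only), using the listed
# prices and TDPs. Utilization, $/kWh, and PUE are assumptions.

def tco_3yr(price_usd: float, tdp_w: float, util: float = 0.7,
            usd_per_kwh: float = 0.10, pue: float = 1.3) -> float:
    hours = 3 * 365 * 24  # 26,280 hours over 3 years
    energy_kwh = (tdp_w / 1000) * util * pue * hours
    return price_usd + energy_kwh * usd_per_kwh

mi355x = tco_3yr(30_000, 1400)
h100 = tco_3yr(25_000, 700)
print(f"MI355X: ${mi355x:,.0f}  H100: ${h100:,.0f}")
```

Under these assumptions, power narrows the sticker-price gap only slightly; at higher electricity rates or PUE, the H100's lower TDP matters more.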

Instinct MI355X vs Hopper H100 SXM5: Common Questions

Which is faster, Instinct MI355X or Hopper H100 SXM5?

In FP8 throughput, the Instinct MI355X leads with 4,625 TFLOPS vs 3,958 TFLOPS. For LLM inference, memory capacity and bandwidth often matter more than raw TFLOPS — the Instinct MI355X has more VRAM (288GB).
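The claim that bandwidth often matters more than TFLOPS can be illustrated with a roofline-style estimate: in bandwidth-bound autoregressive decoding, each generated token streams the full weights once, so tokens/s is roughly bandwidth divided by model size. The bandwidths come from the spec table; the 70B FP8 model size, batch size 1, and the omission of KV-cache traffic are simplifying assumptions:

```python
# Back-of-envelope decode throughput for a memory-bandwidth-bound LLM:
# tokens/s ~= memory bandwidth / model size in bytes (batch size 1,
# ignoring KV-cache traffic and kernel overheads).

def decode_tokens_per_s(bandwidth_gb_s: float, model_gb: float) -> float:
    return bandwidth_gb_s / model_gb

model_gb = 70  # 70B params at 1 byte/param (FP8) -- illustrative
print(decode_tokens_per_s(8000, model_gb))  # MI355X upper bound
print(decode_tokens_per_s(3350, model_gb))  # H100 upper bound
```

The ~2.4x bandwidth gap translates directly into a ~2.4x ceiling on single-stream decode speed, independent of the FP8 TFLOPS difference.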

Is Instinct MI355X or Hopper H100 SXM5 better for LLM training?

For LLM training at scale, the Instinct MI355X has higher raw throughput. However, the choice also depends on your software stack: the Hopper H100 SXM5 offers CUDA compatibility with the widest framework support (PyTorch, JAX, TensorRT).

What is the price difference between Instinct MI355X and Hopper H100 SXM5?

The Instinct MI355X is estimated at $30,000 per unit and the Hopper H100 SXM5 at $25,000. Actual pricing varies by vendor, volume, and configuration. Check our Buy page for current reseller pricing.

Which GPU is more power efficient, Instinct MI355X or Hopper H100 SXM5?

The Hopper H100 SXM5 has a lower TDP (700W vs 1400W). Performance-per-watt depends on your workload — for FP8 inference, divide TFLOPS by TDP: Instinct MI355X = 3.3 TFLOPS/W vs Hopper H100 SXM5 = 5.7 TFLOPS/W.
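The arithmetic behind those two efficiency figures, using the dense FP8 TFLOPS and TDP values from the spec table:

```python
# Performance-per-watt as used in the FAQ: dense FP8 TFLOPS / TDP.

def tflops_per_watt(tflops: float, tdp_w: float) -> float:
    return tflops / tdp_w

print(round(tflops_per_watt(4625, 1400), 1))  # MI355X -> 3.3
print(round(tflops_per_watt(3958, 700), 1))   # H100   -> 5.7
```

Note this is a peak-spec ratio; measured performance-per-watt on a real workload can differ substantially from the datasheet division.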
