NVIDIA
Hopper H200 SXM
3.9K FP8 TFLOPS · 141 GB HBM3e · 700 W
Score: 74
LLM Inference · Large Models
Specifications
Architecture: Hopper
Memory: 141 GB HBM3e
Memory Bandwidth: 4,800 GB/s
FP16 TFLOPS: 1,979
FP8 TFLOPS: 3,958
BF16 TFLOPS: 1,979
INT8 TOPS: 3,958
TDP: 700 W
Interconnect: NVLink 4.0 (900 GB/s)
Ecosystem: CUDA
Generation: Current
Est. Price: $30,000
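A quick way to read the 141 GB figure for LLM inference: weights consume roughly one byte per parameter at FP8 and two at FP16. The sketch below is a rough memory-fit check; the model size and bytes-per-parameter figures are illustrative assumptions, not vendor numbers, and it ignores KV cache and activation overhead.

```python
# Rough memory-fit check for single-GPU LLM inference on an H200 (141 GB).
# Bytes-per-parameter values are assumptions for illustration.

HBM_GB = 141

def weights_gb(params_billions: float, bytes_per_param: float) -> float:
    """Memory needed for model weights alone, in GB (1B params * 1 byte ~= 1 GB)."""
    return params_billions * bytes_per_param

# A 70B-class model at FP8 needs ~70 GB of weights, leaving ~71 GB of
# headroom for KV cache and activations; at FP16 it needs ~140 GB,
# which technically fits but leaves almost no headroom.
fp8_70b = weights_gb(70, 1.0)
fp16_70b = weights_gb(70, 2.0)

print(f"70B @ FP8 : {fp8_70b:.0f} GB weights, fits: {fp8_70b < HBM_GB}")
print(f"70B @ FP16: {fp16_70b:.0f} GB weights, fits: {fp16_70b < HBM_GB}")
```

This is why FP8 support matters for the "Large Models" use case above: halving bytes per parameter roughly doubles the model size that fits in a single GPU's memory.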
Recommended Configuration
8× H200 SXM in DGX H200
Training Intelligence
◆ CUDA
◆ PyTorch
◆ TensorFlow
◆ JAX
◆ DeepSpeed
Training Time Estimates
LLaMA 70B (70B): ~8 days on 64 GPUs
GPT-3 175B (175B): ~8 days on 1,024 GPUs
Stable Diffusion XL (3.5B): ~14 hrs on 8 GPUs
Cloud cost: $8.50/hr
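Estimates like those above can be sanity-checked with the common ~6·N·D FLOPs approximation (N = parameters, D = training tokens). The sketch below is a back-of-envelope calculation under stated assumptions: a hypothetical 100B-token run, FP8 dense throughput of 1,979 TFLOPS per GPU, and 40% model FLOPs utilization (MFU); none of these figures come from the card itself, and real runs vary widely with token budget and efficiency.

```python
# Back-of-envelope training-time and cost estimate using the ~6*N*D FLOPs
# rule of thumb. Token budget, per-GPU throughput, and MFU are assumptions.

def train_days(params: float, tokens: float, n_gpus: int,
               peak_tflops: float, mfu: float) -> float:
    """Estimated wall-clock days to train a dense transformer."""
    total_flops = 6 * params * tokens            # ~6 FLOPs per param per token
    cluster_flops = n_gpus * peak_tflops * 1e12 * mfu  # sustained FLOP/s
    return total_flops / cluster_flops / 86_400  # seconds -> days

# 70B model, hypothetical 100B-token run on 64 GPUs at an assumed 40% MFU:
days = train_days(70e9, 100e9, 64, 1979, 0.40)

# Cloud cost at the card's $8.50/hr rate, assumed to be per GPU:
cost = days * 24 * 64 * 8.50

print(f"~{days:.1f} days, ~${cost:,.0f}")
```

With these assumptions the formula lands in the same ballpark as the "~8 days on 64 GPUs" row above; a larger token budget or lower MFU scales the time (and cost) linearly.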