NVIDIA

Hopper H100 SXM5

3.9K FP8 TFLOPS · 80GB HBM3 · 700W

Score: 73
LLM Training · HPC

Specifications

Architecture: Hopper
Memory: 80GB HBM3
Memory Bandwidth: 3,350 GB/s
FP16 TFLOPS: 1,979
FP8 TFLOPS: 3,958
BF16 TFLOPS: 1,979
INT8 TOPS: 3,958
TDP: 700W
Interconnect: NVLink 4.0 (900 GB/s)
Ecosystem: CUDA
Generation: Current
Est. Price: $25,000

Recommended Configuration

8× H100 in DGX H100 / HGX

Training Intelligence

CUDA
PyTorch
TensorFlow
JAX
DeepSpeed
Training Time Estimates
LLaMA 70B (70B params): ~9 days on 64 GPUs
GPT-3 175B (175B params): ~8 days on 1,024 GPUs
Stable Diffusion XL (3.5B params): ~16 hrs on 8 GPUs
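The estimates above can be sanity-checked with the common 6ND back-of-envelope rule (≈6 FLOPs per parameter per training token, divided by delivered cluster throughput). This is a rough sketch, not the method behind the card's figures; the token budget and the 35% model FLOPs utilization (MFU) below are illustrative assumptions, and real wall-clock time depends heavily on both:

```python
def train_days(params_b, tokens_b, gpus, tflops_per_gpu, mfu=0.35):
    """Estimate wall-clock training days via the ~6*N*D FLOPs rule.

    params_b and tokens_b are in billions; tflops_per_gpu is peak dense
    throughput (e.g. 1979 for H100 FP16); mfu is an assumed utilization.
    """
    total_flops = 6 * (params_b * 1e9) * (tokens_b * 1e9)
    sustained_flops_per_sec = gpus * tflops_per_gpu * 1e12 * mfu
    return total_flops / sustained_flops_per_sec / 86400

# e.g. a 70B model on an assumed 2T-token budget, 64 GPUs at FP16 peak:
# train_days(70, 2000, 64, 1979)
```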

Cloud cost: $6.98/hr
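Multiplying the hourly rate by GPU count and run length gives a rough budget. Assuming the $6.98/hr figure is per GPU (an assumption; the card does not say), a run costs out as:

```python
def run_cost_usd(days, gpus, rate_per_gpu_hr=6.98):
    """Total cloud cost for a multi-GPU run, assuming per-GPU hourly pricing."""
    return days * 24 * gpus * rate_per_gpu_hr

# e.g. the card's ~9-day, 64-GPU LLaMA 70B estimate:
# run_cost_usd(9, 64)  # roughly $96,500
```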
