NVIDIA

Ampere A100 SXM4

312 TFLOPS FP16 · 80GB HBM2e · 400W

Score: 61
Use cases: Training, Fine-tuning

Specifications

Architecture: Ampere
Memory: 80GB HBM2e
Memory Bandwidth: 2,039 GB/s
FP16 TFLOPS: 312 (624 with sparsity)
FP8 TFLOPS: not supported (FP8 was introduced with the Hopper generation)
BF16 TFLOPS: 312 (624 with sparsity)
INT8 TOPS: 624 (1,248 with sparsity)
TDP: 400W
Interconnect: NVLink 3.0 (600 GB/s)
Ecosystem: CUDA
Generation: Previous
Est. Price: $12,000
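The 80 GB of HBM2e bounds the largest model a single card can hold. A rough sizing sketch, using the common rule-of-thumb byte counts (an assumption here, not a vendor figure) for mixed-precision Adam training versus fp16 inference:

```python
# ~16 bytes/param for mixed-precision Adam training: fp16 weights (2)
# + fp16 grads (2) + fp32 master weights (4) + Adam moments (4 + 4).
# Activations and optimizer sharding are ignored, so this is an upper bound.
BYTES_PER_PARAM_TRAINING = 16
BYTES_PER_PARAM_INFERENCE = 2  # fp16 weights only

def max_params_billions(memory_gb: float, bytes_per_param: int) -> float:
    """Largest parameter count (in billions) that fits in `memory_gb`."""
    return memory_gb * 1e9 / bytes_per_param / 1e9

print(f"Training fit:  ~{max_params_billions(80, BYTES_PER_PARAM_TRAINING):.0f}B params")
print(f"Inference fit: ~{max_params_billions(80, BYTES_PER_PARAM_INFERENCE):.0f}B params")
```

By this estimate one card fully trains only a ~5B-parameter model, which is why the multi-GPU configurations below matter for the larger models.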

Recommended Configuration

8× A100 in DGX A100

Training Intelligence

Software stack: CUDA, PyTorch, TensorFlow, JAX, DeepSpeed
Training Time Estimates

LLaMA 70B (70B params): ~28 days on 64 GPUs
GPT-3 175B (175B params): ~34 days on 1,024 GPUs
Stable Diffusion XL (3.5B params): ~2.5 days on 8 GPUs
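Estimates like the ones above can be sketched with the common ~6·N·D FLOPs rule for transformer training. The token count and MFU (model FLOPs utilization) below are assumptions for illustration; real runs vary widely with parallelism strategy and interconnect:

```python
def training_days(params: float, tokens: float, num_gpus: int,
                  peak_tflops: float = 312.0, mfu: float = 0.4) -> float:
    """Estimate wall-clock training days from the ~6*N*D FLOPs rule.

    `peak_tflops` defaults to the A100's dense FP16 tensor throughput;
    `mfu` of 0.4 is an assumed utilization, not a measured figure.
    """
    total_flops = 6 * params * tokens          # forward + backward estimate
    cluster_flops_per_s = num_gpus * peak_tflops * 1e12 * mfu
    return total_flops / cluster_flops_per_s / 86_400

# e.g. a hypothetical 70B-parameter model on 1T tokens across 1,024 A100s:
print(f"~{training_days(70e9, 1e12, 1024):.0f} days")
```

Because the formula is linear in tokens, GPUs, and MFU, differing assumptions on any of those scale the result directly, which is why published estimates for the same model can disagree by large factors.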

Cloud cost: ~$3.67 per GPU-hour
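Multiplying that hourly rate out gives the total bill for a run. A minimal sketch, assuming the quoted rate is per GPU-hour and applying it to the 28-day, 64-GPU LLaMA 70B estimate above:

```python
def run_cost(days: float, num_gpus: int, hourly_rate: float = 3.67) -> float:
    """Total on-demand cost in USD; `hourly_rate` is per GPU-hour."""
    return days * 24 * num_gpus * hourly_rate

print(f"${run_cost(28, 64):,.0f}")
```

Reserved or spot pricing would lower this substantially; the point is only that multi-week cloud runs at this scale land in the six-figure range.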
