NVIDIA Tesla V100 SXM2 32GB

125 TFLOPS FP16 · 32GB HBM2 · 300W TDP

Score: 9
Budget ML Training · Classic Deep Learning · Legacy Pipelines

Specifications

Architecture: Volta (GV100)
Memory: 32GB HBM2
Memory Bandwidth: 900 GB/s
FP16 TFLOPS: 125
FP8 TFLOPS: — (not supported on Volta)
BF16 TFLOPS: — (not supported on Volta)
INT8 TOPS: 62
TDP: 300W
Interconnect: NVLink 2.0 (300 GB/s)
Ecosystem: CUDA
Generation: Previous
Est. Price: $3,000

Recommended Configuration

8× V100 SXM2 in DGX-1 or DGX-2

Training Intelligence

CUDA
PyTorch
TensorFlow
JAX
DeepSpeed
Training Time Estimates
LLaMA 7B (7B params): ~2 days on 8 GPUs
GPT-3 175B (175B params): ~38 days on 1024 GPUs
SDXL (3.5B params): ~5 days on 8 GPUs

Cloud cost: $0.80/hr
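Estimates like the ones above depend heavily on the assumed token budget and achieved hardware utilization. A minimal sketch of how such figures can be derived, using the common 6 × params × tokens FLOPs rule of thumb for dense transformers; the token count, MFU fraction, and per-GPU hourly rate in the example are illustrative assumptions, not figures taken from this page:

```python
def estimate_training(params_b, tokens_b, n_gpus,
                      tflops_per_gpu=125.0, mfu=0.35, usd_per_gpu_hr=0.80):
    """Rough dense-transformer training estimate.

    Uses the common approximation: total FLOPs ~= 6 * params * tokens.
    params_b and tokens_b are in billions; tflops_per_gpu is peak FP16
    throughput (125 for V100 SXM2); mfu (model FLOPs utilization) is an
    assumed fraction of peak that is actually sustained.
    Returns (days, total_cost_usd).
    """
    total_flops = 6 * (params_b * 1e9) * (tokens_b * 1e9)
    sustained_flops_per_sec = n_gpus * tflops_per_gpu * 1e12 * mfu
    seconds = total_flops / sustained_flops_per_sec
    days = seconds / 86400
    cost = (seconds / 3600) * n_gpus * usd_per_gpu_hr
    return days, cost

# Example: a 7B-parameter model over an assumed 1T-token budget on 8 GPUs
days, cost = estimate_training(params_b=7, tokens_b=1000, n_gpus=8)
```

Full-pretraining token budgets give far longer wall-clock times than short fine-tuning runs, which is why any single "training time" number needs its assumptions stated.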
