Tesla V100


The state-of-the-art datacenter GPU for training

GPU for the DATA CENTER

The NVIDIA Tesla V100 provides the ultimate performance for deep learning and the highest versatility for accelerated workloads such as HPC codes and analytics. DeepInsights offers the PCIe version of the Tesla V100 GPU.

V100 SPECS

- 5120 CUDA cores of the Volta generation (the figures in this list can be checked with the device-query sketch after it)
- 640 Tensor Cores
- 16 GB or 32 GB of HBM2 memory
- 7 TFLOPS of double-precision performance
- 112 TFLOPS of Tensor Core performance
- 250 W power consumption
- PCIe Gen3 interface with 32 GB/s bandwidth
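As an illustration only, and not vendor-provided code, the following minimal CUDA sketch queries the properties of the first visible GPU so the headline figures above (SM count, CUDA cores, memory size) can be verified on a live system. It assumes device 0 and uses the standard Volta figure of 64 FP32 cores per SM for compute capability 7.0.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Illustrative device-query sketch: prints the properties of GPU 0 so the
// V100 spec figures listed above can be checked on a running system.
int main() {
    cudaDeviceProp prop;
    if (cudaGetDeviceProperties(&prop, 0) != cudaSuccess) {
        fprintf(stderr, "No CUDA device found\n");
        return 1;
    }

    // Volta (compute capability 7.0) SMs carry 64 FP32 cores each,
    // so the V100's 80 SMs give the 5120 CUDA cores quoted above.
    int coresPerSM = (prop.major == 7 && prop.minor == 0) ? 64 : 0;

    printf("Device:             %s\n", prop.name);
    printf("Compute capability: %d.%d\n", prop.major, prop.minor);
    printf("SM count:           %d\n", prop.multiProcessorCount);
    if (coresPerSM)
        printf("CUDA cores:         %d\n", prop.multiProcessorCount * coresPerSM);
    printf("Global memory:      %.1f GB\n",
           prop.totalGlobalMem / (1024.0 * 1024.0 * 1024.0));
    return 0;
}
```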

The V100 Value Proposition

- 450+ GPU-accelerated HPC applications
- Support for every major deep learning framework
- Maximum Efficiency mode, which allows the V100 to run at up to 80% of peak performance at half the power consumption
- Independent thread scheduling, which enables finer-grain synchronization and improves GPU utilization (see the sketch after this list)
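To make the last point concrete, here is a minimal, hypothetical CUDA sketch (not DeepInsights or NVIDIA reference code) of the kind of fine-grained synchronization Volta's independent thread scheduling supports: threads within a warp diverge, do per-thread work, and then reconverge explicitly with __syncwarp() before exchanging data through shared memory. The kernel name and data sizes are illustrative assumptions.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Illustrative kernel: even and odd lanes take different branches, then an
// explicit warp-level sync reconverges them before a cross-lane read.
__global__ void divergentSum(const int* in, int* out) {
    __shared__ int buf[32];
    int lane = threadIdx.x & 31;

    // Divergent branch: even and odd lanes follow different code paths.
    if (lane % 2 == 0)
        buf[lane] = in[lane] * 2;
    else
        buf[lane] = in[lane] + 1;

    // Explicit warp-level reconvergence point: on Volta, lanes must
    // synchronize before reading values written by other lanes.
    __syncwarp();

    out[lane] = buf[lane] + buf[lane ^ 1];
}

int main() {
    int h_in[32], h_out[32];
    for (int i = 0; i < 32; ++i) h_in[i] = i;

    int *d_in, *d_out;
    cudaMalloc(&d_in, sizeof(h_in));
    cudaMalloc(&d_out, sizeof(h_out));
    cudaMemcpy(d_in, h_in, sizeof(h_in), cudaMemcpyHostToDevice);

    divergentSum<<<1, 32>>>(d_in, d_out);
    cudaMemcpy(h_out, d_out, sizeof(h_out), cudaMemcpyDeviceToHost);

    printf("out[0] = %d, out[1] = %d\n", h_out[0], h_out[1]);
    cudaFree(d_in);
    cudaFree(d_out);
    return 0;
}
```

On pre-Volta GPUs this pattern relied on implicit warp reconvergence; Volta's per-thread program counters make the explicit __syncwarp() both necessary and cheap, which is what allows the finer-grain synchronization mentioned above.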