NVIDIA® DGX SYSTEMS
NVIDIA Ampere Architecture
The NVIDIA A100 Tensor Core GPU delivers unprecedented acceleration at every scale for AI, data analytics, and high-performance computing (HPC) to tackle the world’s toughest computing challenges.
As the engine of the NVIDIA data center platform, A100 can efficiently scale to thousands of GPUs or, with NVIDIA Multi-Instance GPU (MIG) technology, be partitioned into as many as seven GPU instances to accelerate workloads of all sizes. Third-generation Tensor Cores accelerate every precision for diverse workloads, speeding time to insight and time to market.
The Most Powerful End-to-End AI and HPC Data Center Platform
Deep Learning Training
NVIDIA A100’s third-generation Tensor Cores with Tensor Float 32 (TF32) precision provide up to 20X higher performance over the prior generation with zero code changes, and an additional 2X boost with automatic mixed precision and FP16.
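The "zero code changes" claim rests on TF32 keeping FP32's 8-bit exponent range while shortening the mantissa to 10 bits. A minimal Python sketch of that narrowing (it truncates the mantissa for simplicity, whereas the hardware rounds to nearest, so this is an approximation, not NVIDIA's exact behavior):

```python
import struct

def round_to_tf32(x: float) -> float:
    """Approximate TF32 by truncating an FP32 value to a 10-bit mantissa."""
    # Reinterpret the value as IEEE-754 binary32 bits: 1 sign + 8 exponent + 23 mantissa.
    bits = struct.unpack("<I", struct.pack("<f", x))[0]
    # Zero the 13 low mantissa bits, leaving the 10 that TF32 keeps.
    bits &= ~0x1FFF
    return struct.unpack("<f", struct.pack("<I", bits))[0]

print(round_to_tf32(1.5))        # exactly representable in 10 mantissa bits -> 1.5
print(round_to_tf32(1.0000001))  # below TF32 resolution near 1.0 -> 1.0
```

The exponent field is untouched, which is why FP32 code keeps its dynamic range and runs unchanged; only the fractional resolution shrinks.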
Deep Learning Inference
A100 brings unprecedented versatility by accelerating a full range of precisions, from FP32 and FP16 down to INT8 and INT4.
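The integer precisions rely on quantization: mapping floating-point activations and weights onto a small integer range via a linear scale. A minimal Python sketch of symmetric INT8 quantization (the scale choice and scheme here are illustrative, not a specific NVIDIA API):

```python
def quantize_int8(values, scale):
    """Symmetric linear quantization: real value ~= scale * q, with q in [-127, 127]."""
    return [max(-127, min(127, round(v / scale))) for v in values]

def dequantize_int8(q, scale):
    """Recover approximate real values from INT8 codes."""
    return [scale * qi for qi in q]

vals = [0.1, -0.3, 0.5]
scale = max(abs(v) for v in vals) / 127  # map the largest magnitude to 127
q = quantize_int8(vals, scale)           # [25, -76, 127]
```

Each value is stored in 8 bits instead of 32, which is where the inference throughput and memory savings come from; INT4 follows the same idea with a [-7, 7] code range and coarser resolution.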
High Performance Computing
A100 introduces double-precision Tensor Cores, the biggest milestone since the introduction of double-precision computing in GPUs for HPC. This enables researchers to reduce a 10-hour double-precision simulation to just four hours.
High Performance Data Analytics
Accelerated servers with A100 deliver the needed compute power, along with 1.6 terabytes per second (TB/s) of memory bandwidth and scalability through third-generation NVLink and NVSwitch, to tackle the massive workloads of high-performance data analytics.
Enterprise-Ready Utilization
7X Higher Inference Throughput with Multi-Instance GPU (MIG). A100 with MIG maximizes the utilization of GPU-accelerated infrastructure like never before.
End-to-End AI
A100 is part of the complete NVIDIA data center solution that incorporates building blocks across hardware, networking, software, libraries, and optimized AI models and applications from NGC™.
NVIDIA DGX A100
NVIDIA DGX™ A100 is the universal system for all AI workloads—from analytics to training to inference. DGX A100 sets a new bar for compute density, packing 5 petaFLOPS of AI performance into a 6U form factor and replacing legacy compute infrastructure with a single, unified system.
DGX A100 also offers the unprecedented ability to deliver fine-grained allocation of computing power, using the Multi-Instance GPU capability in the NVIDIA A100 Tensor Core GPU, which enables administrators to assign resources that are right-sized for specific workloads.
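In practice, administrators carve up an A100 with the `nvidia-smi mig` commands. A sketch assuming an A100 40GB at GPU index 0, where profile ID 19 is the 1g.5gb slice (profile IDs vary by product, so confirm with `nvidia-smi mig -lgip` on your own system):

```shell
# Enable MIG mode on GPU 0 (may require draining workloads and a GPU reset)
sudo nvidia-smi -i 0 -mig 1

# List the GPU-instance profiles this GPU supports
nvidia-smi mig -lgip

# Create seven 1g.5gb GPU instances (profile ID 19 on A100 40GB),
# with -C also creating a compute instance inside each
sudo nvidia-smi mig -i 0 -cgi 19,19,19,19,19,19,19 -C

# Verify the resulting GPU instances
nvidia-smi mig -lgi
```

Each instance then appears to CUDA workloads as an isolated GPU with its own memory and compute slice, which is what allows right-sized allocation per workload.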
Register your interest to test-drive the NVIDIA DGX A100 at BIOS IT Labs
Every business needs to transform using artificial intelligence (AI), not only to survive but to thrive in challenging times.
Register your interest in a remote test drive (when available) by filling out the form.
BIOS ANNA A100
BIOS IT’s latest iteration of its Artificial Neural Network Accelerator (ANNA) series is built on Supermicro hardware and features the latest NVIDIA® A100 Tensor Core GPUs. The ANNA A100 is designed for the most demanding AI workloads and is optimized for the new HGX™ A100 4-GPU baseboard. With the newest versions of NVIDIA NVLink™ and NVIDIA NVSwitch™ technologies, these servers deliver up to 5 petaFLOPS of AI performance in a single system. The system supports PCIe Gen 4 for fast CPU-to-GPU communication and high-speed networking expansion cards.
NVIDIA DGX Systems
NVIDIA DGX™ systems are purpose-built to meet the demands of enterprise AI and data science, delivering the fastest start in AI development, effortless productivity, and revolutionary performance—for insights in hours instead of months.