NVIDIA DGX A100 Setting the Bar for Enterprise AI Infrastructure


NVIDIA DGX A100 Game-Changing Performance

The NVIDIA DGX A100 accelerated compute server delivers unprecedented performance for deep learning training and inference. Organizations can now deploy data-intensive deep learning frameworks with confidence: DGX A100 enables the cutting-edge DL/ML and AI innovation data scientists demand, with the dependability IT requires. It is available in POD designs such as AgilityFlexAI with the Spectrum Scale storage solution, and now ships with 80GB of HBM2e memory per GPU, delivering over 2 TB/s of memory bandwidth.

Essential Building Block of the AI Data Center

The Universal System for Every AI Workload

NVIDIA DGX A100 is the universal system for all AI infrastructure, from analytics to training to inference. It sets a new bar for compute density, packing 5 petaFLOPS of AI performance into a 6U form factor and replacing legacy infrastructure silos with one platform for every AI workload.

DGXperts: Integrated Access to AI Expertise

NVIDIA DGXperts are a global team of 16,000+ AI-fluent professionals who have built a wealth of experience over the last decade to help you maximize the value of your DGX investment.

Fastest Time To Solution

NVIDIA DGX A100 is the world’s first AI system built on the NVIDIA A100 Tensor Core GPU. Integrating eight A100 GPUs with up to 640GB of GPU memory, the system provides unprecedented acceleration and is fully optimized for NVIDIA CUDA-X software and the end-to-end NVIDIA data center solution stack.

NVIDIA A100 Tensor Core GPU

NVIDIA A100’s third-generation Tensor Cores with Tensor Float 32 (TF32) precision provide up to 20X higher performance over the prior generation with zero code changes, and an additional 2X boost with automatic mixed precision and FP16. When combined with third-generation NVIDIA® NVLink®, NVIDIA NVSwitch, PCIe Gen4, NVIDIA Mellanox InfiniBand, and the NVIDIA Magnum IO software SDK, it’s possible to scale to thousands of A100 GPUs. This means that large AI models like BERT can be trained in just 37 minutes on a cluster of 1,024 A100s, offering unprecedented performance and scalability.
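The "zero code changes" claim follows from TF32's format: it keeps FP32's 8-bit exponent range but rounds the 23-bit mantissa down to 10 bits (the same precision as FP16), so FP32 code runs unmodified while the Tensor Cores do the reduced-precision math internally. A minimal NumPy sketch of that mantissa rounding, assuming round-to-nearest (`round_to_tf32` is a hypothetical helper for illustration, not an NVIDIA API):

```python
import numpy as np

def round_to_tf32(x: float) -> float:
    # Simulate TF32 by rounding an FP32 value's 23-bit mantissa down to
    # TF32's 10 bits (round-to-nearest). Illustrative only: the real
    # rounding happens inside the A100's Tensor Cores.
    bits = int(np.float32(x).view(np.uint32))
    shift = 23 - 10                                  # mantissa bits discarded
    bits = ((bits + (1 << (shift - 1))) >> shift) << shift
    return float(np.uint32(bits & 0xFFFFFFFF).view(np.float32))

print(round_to_tf32(1.0 + 2**-10))  # representable in 10 mantissa bits: unchanged
print(round_to_tf32(1.0 + 2**-12))  # below TF32 precision: rounds to 1.0
```

Because the exponent field is untouched, TF32 preserves FP32's dynamic range, which is what lets frameworks enable it transparently for matrix math.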

Available in PCIe Gen4 or SXM4 form factors in our rackmount servers.

A Simpler and Faster Way to Tackle AI

Unmatched Data Center Scalability

NVIDIA DGX A100 features Mellanox ConnectX-6 VPI HDR InfiniBand/Ethernet network adapters with 500 gigabytes per second (GB/s) of peak bi-directional bandwidth. This is one of the many features that make DGX A100 the foundational building block for large AI clusters such as NVIDIA DGX SuperPOD, the enterprise blueprint for scalable AI infrastructure.

NVIDIA DGX SuperPOD Solution for Enterprise

NVIDIA DGX SuperPOD Solution for Enterprise incorporates the best practices and know-how gained from the world’s largest AI deployments, and is designed to solve the most challenging AI problems organizations face. For enterprises that need a trusted, turnkey approach to AI innovation at scale, we’ve taken our industry-leading reference architecture and wrapped it in a comprehensive solution and services offering. The result is a full-service experience that produces industry-proven results in weeks instead of months, with a white-glove implementation intelligently integrated with your business, so teams that need leadership-class infrastructure can deliver results sooner.
