NVIDIA-Certified Systems™ with A100 Tensor Core GPUs

NVIDIA-Certified Systems deliver the performance, programmability, and secure throughput that enterprise AI needs. They combine the computing power of GPUs based on the NVIDIA Ampere architecture with secure, high-speed NVIDIA Mellanox networking.

To earn certification, systems are tested across a broad range of workloads, from jobs that span multiple compute nodes to tasks that need only part of the power of a single GPU.
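The "part of the power of a single GPU" case is typically handled by the A100's Multi-Instance GPU (MIG) feature, which partitions one card into up to seven isolated instances. A minimal sketch of enabling MIG with `nvidia-smi` (requires root and an A100; the `3g.20gb` profile applies to the 40GB card, and other profile sizes are available):

```shell
# Enable MIG mode on GPU 0 (a GPU reset may be required afterward)
sudo nvidia-smi -i 0 -mig 1

# List the GPU instance profiles this card supports
sudo nvidia-smi mig -lgip

# Create two 3g.20gb GPU instances, each with a default compute instance
sudo nvidia-smi mig -cgi 3g.20gb,3g.20gb -C

# Verify the resulting MIG devices are visible
nvidia-smi -L
```

Each MIG instance then appears to frameworks and containers as its own GPU with dedicated memory and compute slices.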

The systems are optimized to run AI applications from the NGC catalog, NVIDIA’s hub for GPU-optimized applications.

The Applied Data Systems AgilityAI line of NVIDIA-Certified servers from Gigabyte and Supermicro offers best-in-class price and performance. These systems are validated for performance, functionality, scalability, and security, allowing data scientists, DevOps, and IT teams to easily deploy complete solutions for AI workloads from the NVIDIA NGC catalog. Combined with our high-performance, scalable ExtremeStor™ all-NVMe flash storage systems, they deliver the throughput required to get the most out of these platforms.
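Deploying an NGC workload on one of these systems typically takes just two commands (a sketch, assuming Docker with the NVIDIA Container Toolkit is installed; the container tag shown is illustrative, and current tags are listed in the NGC catalog):

```shell
# Pull a GPU-optimized framework container from the NGC catalog
docker pull nvcr.io/nvidia/pytorch:21.05-py3   # tag is an example

# Run it interactively with all GPUs visible to the container
docker run --gpus all -it --rm nvcr.io/nvidia/pytorch:21.05-py3
```

Because NVIDIA-Certified Systems are validated against these same containers, the images run without driver or framework tuning.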

Supermicro AS-2124GQ-NART and 4124GS-TNR

The Supermicro AS-2124GQ-NART offers exceptional performance using NVIDIA NVLink GPU interconnect technology along with PCIe Gen4. It is available with four NVIDIA A100 Tensor Core GPUs with either 40GB or 80GB of HBM2 memory. Two AMD Rome processors are supported, with up to 64 CPU cores each. Four PCIe Gen4 x16 slots accommodate four dual-port NVIDIA Mellanox ConnectX-6 200Gb InfiniBand or Ethernet adapters. Four Gen4 NVMe slots support the latest generation of U.2 drives for maximum performance, and 32 DIMM slots support up to 4TB of memory. The system is available with custom direct-to-chip liquid cooling.

The Supermicro 4124GS-TNR offers maximum GPU density and optimal performance with a direct-connect architecture utilizing full-bandwidth PCIe Gen4. It is available with up to eight NVIDIA A100 Tensor Core GPUs with either 40GB or 80GB of HBM2 memory. Two AMD Rome processors are supported, with up to 64 CPU cores each. One PCIe Gen4 x16 slot supports a dual-port NVIDIA Mellanox ConnectX-6 200Gb InfiniBand or Ethernet adapter. Four Gen4 NVMe slots support the latest generation of U.2 drives for maximum performance, and 32 DIMM slots support up to 4TB of memory. The system is available with custom direct-to-chip liquid cooling.

Gigabyte R282-Z96 and G242-Z11

The Gigabyte R282-Z96 offers a dense dual-socket AMD Rome platform with support for up to three NVIDIA A100 Tensor Core GPUs with either 40GB or 80GB of HBM2 memory. Two AMD Rome processors are supported, with up to 64 cores each. Four PCIe Gen4 x16 slots accommodate three double-width GPUs while leaving room for a dual-port NVIDIA Mellanox ConnectX-6 200Gb InfiniBand or Ethernet adapter. In addition, there are two OCP slots: one supporting OCP 3.0 Gen4 x16 and the other OCP 2.0 Gen3 x8. Four Gen4 NVMe slots support the latest generation of U.2 drives for maximum performance, and 32 DIMM slots support up to 4TB of memory. The system is available with custom direct-to-chip liquid cooling.

The Gigabyte G242-Z11 offers maximum GPU density and optimal performance with a direct-connect architecture utilizing full-bandwidth PCIe Gen4. It is available with up to four NVIDIA A100 Tensor Core GPUs with either 40GB or 80GB of HBM2 memory. One AMD Rome processor is supported, with up to 64 CPU cores (280W). Two PCIe Gen4 x16 slots accommodate up to two dual-port NVIDIA Mellanox ConnectX-6 200Gb InfiniBand or Ethernet adapters. Two Gen4 NVMe slots support the latest generation of U.2 drives for maximum performance, and eight DIMM slots support up to 1TB of memory. The system is available with custom direct-to-chip liquid cooling.

Gigabyte G482-Z54, G492-Z51

The Gigabyte G482-Z54 offers excellent GPU density and optimal performance with a direct-connect architecture utilizing full-bandwidth PCIe Gen4. It is available with up to eight NVIDIA A100 Tensor Core GPUs with either 40GB or 80GB of HBM2 memory. Two AMD Rome processors are supported, with up to 64 CPU cores each (280W). One PCIe Gen4 x16 slot supports a dual-port NVIDIA Mellanox ConnectX-6 200Gb InfiniBand or Ethernet adapter. Two Gen4 NVMe slots support the latest generation of U.2 drives for maximum performance, and 32 DIMM slots support up to 4TB of memory. The system is available with custom direct-to-chip liquid cooling.

The Gigabyte G492-Z51 offers up to ten full-height, full-length Gen4 expansion slots for GPU cards, FPGAs, and other accelerators. There are also three additional PCIe Gen4 x16 slots, two in the front of the chassis and one in the rear. Eight NVMe U.2 drive slots and four 3.5″ SATA drive slots are available for storage, and 32 DIMM slots support up to 4TB of memory.

All systems include high-efficiency redundant power supplies, onboard Ethernet, and a dedicated IPMI management port.

NVIDIA A100 Tensor Core GPU

The NVIDIA A100's third-generation Tensor Cores with Tensor Float 32 (TF32) precision provide up to 20X higher performance over the prior generation with zero code changes, plus an additional 2X boost with automatic mixed precision and FP16. When combined with third-generation NVIDIA® NVLink®, NVIDIA NVSwitch, PCIe Gen4, NVIDIA Mellanox InfiniBand, and the NVIDIA Magnum IO software SDK, it is possible to scale to thousands of A100 GPUs. This means large AI models like BERT can be trained in just 37 minutes on a cluster of 1,024 A100s, offering unprecedented performance and scalability.

The A100 is available in PCIe Gen4 or SXM4 form factors in our rackmount servers.

A Simpler and Faster Way to Tackle AI

Unmatched Data Center Scalability with NVIDIA Networking

Don't underestimate the role a proper network plays in these high-performance GPU servers. We recommend equipping all systems with NVIDIA Mellanox ConnectX-6 VPI HDR InfiniBand/Ethernet adapters, which offer up to 200 gigabits per second (Gb/s) of peak bidirectional bandwidth. We can help you select the right components when building out your GPU servers.
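Once the adapters are installed, link health and throughput are easy to sanity-check from the host. A sketch using the standard InfiniBand diagnostic tools (assumes the `infiniband-diags` and `perftest` packages are installed; the hostname is a placeholder):

```shell
# Show port state and rate for each ConnectX adapter;
# a healthy HDR link reports State: Active and Rate: 200
ibstat

# Point-to-point RDMA bandwidth test between two hosts
# (start the listening side on the remote node first: ib_write_bw)
ib_write_bw remote-node-hostname   # hostname is a placeholder
```

Running these checks before deploying multi-node training jobs catches cabling and negotiation problems that would otherwise show up as mysteriously slow scaling.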
