1-844-371-4949 info@applieddatasystems.com

AgilityAI GPU Servers

Custom Designed GPU Servers for HPC, Training and Inferencing

Accelerating Data Center Workloads with NVIDIA DGX A100

The Universal System for Every AI Workload

NVIDIA DGX A100 is the universal system for all AI infrastructure, from analytics to training to inference. It sets a new bar for compute density, packing 5 petaFLOPS of AI performance into a 6U form factor, replacing legacy infrastructure silos with one platform for every AI workload.

Fastest Time to Solution

NVIDIA DGX A100 is the world’s first AI system built on the NVIDIA A100 Tensor Core GPU. Integrating eight A100 GPUs, the system provides unprecedented acceleration and is fully optimized for NVIDIA CUDA-X software and the end-to-end NVIDIA data center solution stack.

Unmatched Data Center Scalability

NVIDIA DGX A100 features Mellanox ConnectX-6 VPI HDR InfiniBand/Ethernet network adapters with 450 gigabytes per second (GB/s) of peak bi-directional bandwidth. This is one of the many features that make DGX A100 the foundational building block for large AI clusters such as NVIDIA DGX SuperPOD, the enterprise blueprint for scalable AI infrastructure.
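As a back-of-the-envelope check, the 450 GB/s headline figure is consistent with nine HDR adapters at 200 Gbit/s each, counted in both directions. The per-adapter count and line rate here are assumptions based on the HDR generation, not figures stated above:

```python
# Hedged sanity check of the DGX A100 peak network bandwidth figure.
# Assumes nine ConnectX-6 HDR ports at 200 Gbit/s each (an assumption,
# not stated in the text above).
adapters = 9          # assumed HDR InfiniBand/Ethernet ports
line_rate_gbit = 200  # HDR line rate per port, Gbit/s

bidir_gbit = adapters * line_rate_gbit * 2  # count both directions
bidir_gbyte = bidir_gbit / 8                # convert Gbit/s to GB/s
print(bidir_gbyte)  # -> 450.0
```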

NVIDIA A100 Tensor Core GPU

THE WORLD’S FIRST AI SYSTEM BUILT ON NVIDIA A100

NVIDIA DGX A100 is the universal system for all AI workloads, offering unprecedented compute density, performance, and flexibility in the world’s first 5 petaFLOPS AI system. NVIDIA DGX A100 features the world’s most advanced accelerator, the NVIDIA A100 Tensor Core GPU, enabling enterprises to consolidate training, inference, and analytics into a unified, easy-to-deploy AI infrastructure that includes direct access to NVIDIA AI experts.

Major Components inside the NVIDIA DGX A100 System

At its core, the NVIDIA DGX A100 system leverages the NVIDIA A100 GPU, designed to efficiently accelerate both large, complex AI workloads and many smaller workloads, with enhancements and new features that increase performance over the V100 GPU. The A100 GPU incorporates 40 GB of high-bandwidth HBM2 memory and larger, faster caches, and is designed to reduce AI and HPC software and programming complexity.


IBM Power System AC922: Engineered to be the most powerful training platform

IBM Power System AC922

IBM POWER9 is the first processor with PCIe Gen4 (2x the bandwidth of PCIe Gen3). The Power AC922 combines PCIe Gen4 with other advanced I/O interconnects, including CAPI 2.0, OpenCAPI, and NVIDIA® NVLink™. Unlike x86-based servers, the Power AC922 uses NVLink for CPU-to-GPU connectivity, delivering up to 5.6x the data throughput for today’s data-intensive AI workloads. The Power AC922 supports up to six NVIDIA® Tesla® V100 GPUs (16GB or 32GB).


Key Features of IBM Power System AC922

  • Faster I/O – up to 5.6x more I/O bandwidth than x86 servers
  • PCIe Gen4 – 2x the bandwidth of PCIe Gen3
  • POWER9 processor – the latest POWER processor, designed for AI
  • Advanced GPUs – up to six NVIDIA® Tesla® V100 GPUs with NVLink
  • Coherence – share RAM across CPUs and GPUs
  • Scalability – built to scale from a single server to a supercomputer
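The "PCIe Gen4 – 2x the bandwidth of PCIe Gen3" claim follows directly from the signaling rates: Gen3 runs at 8 GT/s and Gen4 at 16 GT/s, both with 128b/130b encoding. A quick sketch of the resulting per-direction x16 throughput:

```python
# Per-direction PCIe throughput for an x16 link, computed from the
# published signaling rates (8 GT/s Gen3, 16 GT/s Gen4) and the
# 128b/130b encoding both generations use.
def x16_gbytes_per_s(gt_per_s: float) -> float:
    payload_bits = gt_per_s * 1e9 * 128 / 130  # usable bits/s per lane
    return payload_bits / 8 * 16 / 1e9         # bytes/s across 16 lanes, in GB/s

gen3 = x16_gbytes_per_s(8.0)   # ~15.75 GB/s
gen4 = x16_gbytes_per_s(16.0)  # ~31.51 GB/s
print(round(gen3, 2), round(gen4, 2), gen4 / gen3)  # ratio is 2.0
```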

AgilityAI RG408P-SA AMD PCIe Gen4 GPU Server

Key Features 

The Applied Data Systems RG408P-SA is a state-of-the-art PCIe Gen4 GPU server built on AMD EPYC Rome processors and available with up to eight GPUs. It comes pre-installed with Ubuntu, CUDA, cuDNN, TensorFlow, and PyTorch. Combine it with ExtremeStor, our high-speed parallel-file-system storage solution, for maximum performance.

• Up to eight GPUs with a direct-connect architecture for maximum performance

• Up to 8TB DDR4 ECC memory across 32x DIMM slots

• Supports 280W AMD EPYC Rome processors, up to 64 cores each

• Nine PCIe Gen4 x16 slots available (no PCIe switches)

• Up to 24 hot-swap 2.5″ drive slots; four NVMe plus four SATA standard

• Titanium-level (96% efficient) redundant 2+2 power supplies
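On a delivered system, the installed GPUs can be inventoried with `nvidia-smi`, which ships with the NVIDIA driver stack. A minimal sketch that degrades gracefully to an empty list on machines without NVIDIA drivers:

```python
# List installed NVIDIA GPUs via nvidia-smi (part of the driver stack).
# Returns an empty list when nvidia-smi is not present, so the sketch
# is safe to run on any machine.
import shutil
import subprocess

def list_gpus() -> list:
    if shutil.which("nvidia-smi") is None:
        return []
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=name,memory.total",
         "--format=csv,noheader"],
        capture_output=True, text=True, check=True,
    ).stdout
    # Each line looks like: "NVIDIA A100-PCIE-40GB, 40960 MiB"
    return [tuple(part.strip() for part in line.split(","))
            for line in out.strip().splitlines() if line]

for name, mem in list_gpus():
    print(f"{name}: {mem}")
```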

AgilityAI RG204SX-SA AMD SXM4 GPU Server

Key Features 

The Applied Data Systems RG204SX-SA is a state-of-the-art SXM4 GPU server built on AMD EPYC Rome processors and available with up to four SXM4 GPUs. It comes pre-installed with Ubuntu, CUDA, cuDNN, TensorFlow, and PyTorch. Combine it with ExtremeStor, our high-speed parallel-file-system storage solution, for maximum performance.

• Up to four NVIDIA A100 GPUs with high-speed SXM4 NVLink architecture for maximum performance

• Up to 8TB DDR4 ECC memory across 32x DIMM slots

• Supports 280W AMD EPYC Rome processors, up to 64 cores each

• Four PCIe Gen4 x16 slots available (no PCIe switches)

• Four hot-swap 2.5″ drive slots; four NVMe/SATA drives

• Titanium-level (96% efficient) 2x redundant power supplies
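Since both servers ship with PyTorch pre-installed, a one-line check confirms that the framework actually sees the GPUs. A minimal sketch that also works (returning `None`) on machines where PyTorch is not installed:

```python
# Report how many GPUs PyTorch can see; None if PyTorch is absent.
def torch_gpu_count():
    try:
        import torch  # pre-installed on the servers described above
    except ImportError:
        return None
    return torch.cuda.device_count() if torch.cuda.is_available() else 0

count = torch_gpu_count()
print("PyTorch-visible GPUs:", count)
```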


We’d love to work on your project. We perform an extensive analysis of your existing and future needs, then deliver a comprehensive solution architecture on a validated hardware and software build that ships fully integrated. Contact us today for an expert AI consultation!
