Configure your desired options and continue to checkout.
Base AI Acceleration
Target Scenarios: Medium-scale AI inference, real-time data processing
GPU: 8×NVIDIA Tesla A100 (80GB HBM2e, full NVLink interconnect)
CPU: 2×AMD EPYC 75F3 (32 cores/64 threads, 2.95GHz base)
Memory: 1TB DDR4 ECC REG (8-channel×2, 3200MHz)
Storage: 4×3.84TB NVMe SSD (PCIe 4.0, RAID 10)
Network: 100Gbps bandwidth, supports GPUDirect RDMA
Extras: Pre-installed NVIDIA Triton Inference Server with model repository integration
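The pre-installed Triton Inference Server loads models from a model repository. As a rough sketch of what that integration expects, a repository is a directory tree with one subdirectory per model, each containing a `config.pbtxt` and numbered version folders (the model name, platform, and tensor shapes below are illustrative assumptions, not part of this configuration):

```
model_repository/
└── resnet50/               # example model name
    ├── config.pbtxt        # model configuration
    └── 1/                  # version 1
        └── model.onnx      # model file for the chosen backend

# config.pbtxt (illustrative values)
name: "resnet50"
platform: "onnxruntime_onnx"
max_batch_size: 64
input [
  { name: "input",  data_type: TYPE_FP32, dims: [ 3, 224, 224 ] }
]
output [
  { name: "output", data_type: TYPE_FP32, dims: [ 1000 ] }
]
```

Triton is started with `--model-repository` pointing at this directory and serves each model over HTTP/gRPC; with the 100Gbps GPUDirect RDMA networking above, inference traffic can bypass host memory on supported fabrics.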