Configure

Configure your desired options and continue to checkout.

Base AI Acceleration

Target Scenarios: Medium-scale AI inference, real-time data processing

6-Server Cluster

GPU: 8×NVIDIA H100 (80GB HBM3, full NVLink interconnect)
CPU: 2×AMD EPYC 75F3 (32 cores/64 threads, 3.0GHz base)
Memory: 1TB DDR4 ECC REG (8 channels ×2 sockets, 3200MT/s)
Storage: 4×3.84TB NVMe SSD (PCIe 4.0, RAID 10)
Network: 100Gbps bandwidth, supports GPUDirect RDMA
Extras: Pre-installed NVIDIA Triton Inference Server with model repository integration
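The pre-installed Triton Inference Server loads models from a model repository on local storage. As a rough sketch of what that integration expects (model name, version folder, and backend file here are illustrative placeholders, not part of this offering's defaults), a minimal repository looks like:

```
model_repository/
└── resnet50/              # one directory per model (name is an example)
    ├── config.pbtxt       # model configuration: backend, inputs, outputs
    └── 1/                 # numeric version directory
        └── model.onnx     # model file for the configured backend
```

Triton is then pointed at the repository root (e.g. via its `--model-repository` flag) and serves every valid model it finds there.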

Additional Required Information
Please select the operating system
Install a firewall by default?
Please enter the host name
Have questions? Contact our sales team for assistance.

Order Summary