Eight GPU Large Language Model Server
Powerful 4U rackmount server supporting up to eight NVIDIA GPUs for training, fine-tuning, and inference with large language models.
Overview
- Eight GPU 4U server supporting NVIDIA RTX Ada, L40S, and H100 NVL graphics cards
- Up to 752GB of VRAM across eight GPUs
- Well suited to fp16 inference on models up to roughly 150B parameters, as well as fine-tuning smaller models (see the sizing sketch below the list)
- Requires four 200-240V power connections on separate circuits
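As a rough sizing sketch of why a ~150B parameter fp16 model fits in 752GB of VRAM: the bytes-per-parameter figure follows from fp16, while the overhead factor for KV cache and activations is an illustrative assumption rather than a measured value.

```python
# Rough VRAM estimate for fp16 inference; overhead factor is an illustrative assumption.
params_billion = 150      # model size in billions of parameters
bytes_per_param = 2       # fp16 stores each parameter in 2 bytes
overhead_factor = 1.3     # assumed allowance for KV cache, activations, and buffers

weights_gb = params_billion * bytes_per_param    # 300 GB of weights
total_gb = weights_gb * overhead_factor          # ~390 GB including overhead

vram_gb = 8 * 94          # eight H100 NVL GPUs at 94 GB each = 752 GB
print(f"Estimated requirement: ~{total_gb:.0f} GB vs. {vram_gb} GB available")
```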
Not sure what you need?
Tell us your situation and one of our experts will reply within 1 business day to help configure the right computer for your workflow. If you don’t see what you are looking for here, check out our other systems for more options.