Supermicro Grows AI Optimized Product Portfolio with a New Generation of Systems and Rack Architectures Featuring New NVIDIA Blackwell Architecture Solutions

Supermicro's current NVIDIA HGX™ H100/H200 8-GPU systems are drop-in ready for the NVIDIA HGX™ B100 8-GPU and upgradeable to support the B200, reducing time to delivery.

Supermicro, a Total IT Solution Provider for AI, Cloud, Storage, and 5G/Edge, is announcing new AI systems for large-scale generative AI featuring NVIDIA's next generation of data center products, including the NVIDIA GB200 Grace™ Blackwell Superchip and the NVIDIA B200 and B100 Tensor Core GPUs. Supermicro's current NVIDIA HGX™ H100/H200 8-GPU systems are drop-in ready for the NVIDIA HGX™ B100 8-GPU and upgradeable to support the B200, reducing time to delivery. Additionally, Supermicro will further strengthen its broad NVIDIA MGX™ systems lineup with new offerings featuring the NVIDIA GB200, including the NVIDIA GB200 NVL72, a complete rack-level solution with 72 NVIDIA Blackwell GPUs. Supermicro is also adding new systems to its lineup, including the 4U liquid-cooled NVIDIA HGX B200 8-GPU system.

“Our focus on building block architecture and rack-scale Total IT for AI has enabled us to design next-generation systems for the enhanced requirements of NVIDIA Blackwell architecture-based GPUs, such as our new 4U liquid-cooled NVIDIA HGX B200 8-GPU based system, as well as our fully integrated direct-to-chip liquid-cooled racks with NVIDIA GB200 NVL72,” said Charles Liang, president and CEO of Supermicro. “These new products are built upon Supermicro and NVIDIA’s proven HGX and MGX system architecture, optimizing for the new capabilities of NVIDIA Blackwell GPUs. Supermicro has the expertise to incorporate 1kW GPUs into a wide range of air-cooled and liquid-cooled systems, as well as the rack-scale production capacity of 5,000 racks/month, and anticipates being first-to-market in deploying full rack clusters featuring NVIDIA Blackwell GPUs.”


Supermicro’s direct-to-chip liquid cooling technology will accommodate the increased thermal design power (TDP) of the latest GPUs and deliver the full potential of the NVIDIA Blackwell GPUs. Supermicro’s HGX and MGX systems with NVIDIA Blackwell are the building blocks for the future of AI infrastructure and will deliver groundbreaking performance for multi-trillion-parameter AI training and real-time AI inference.

A wide range of GPU-optimized Supermicro systems will be ready for the NVIDIA Blackwell B200 and B100 Tensor Core GPU and validated for the latest NVIDIA AI Enterprise software, which adds support for NVIDIA NIM inference microservices. The Supermicro systems include:

  • NVIDIA HGX B100 8-GPU and HGX B200 8-GPU systems
  • 5U/4U PCIe GPU system with up to 10 GPUs
  • SuperBlade® with up to 20 B100 GPUs for 8U enclosures and up to 10 B100 GPUs in 6U enclosures
  • 2U Hyper with up to 3 B100 GPUs
  • Supermicro 2U x86 MGX systems with up to 4 B100 GPUs

For training massive foundational AI models, Supermicro is prepared to be first-to-market with NVIDIA HGX B200 8-GPU and HGX B100 8-GPU systems. These systems feature eight NVIDIA Blackwell GPUs connected via a high-speed fifth-generation NVIDIA NVLink interconnect at 1.8TB/s, double the bandwidth of the previous generation, with 1.5TB of total high-bandwidth memory, and will deliver 3X faster training results for LLMs, such as the GPT-MoE-1.8T model, compared to the NVIDIA Hopper architecture generation. These systems also feature advanced networking to scale to clusters, supporting both NVIDIA Quantum-2 InfiniBand and NVIDIA Spectrum-X Ethernet options with a 1:1 GPU-to-NIC ratio.
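As a rough sanity check on the figures quoted above, the following sketch works through the arithmetic they imply. The values and the `nics_for_cluster` helper are illustrative assumptions for this article, not a vendor sizing tool:

```python
# Illustrative sizing arithmetic for an NVIDIA HGX B200/B100 8-GPU node,
# using the figures quoted in the article (assumed, not a vendor spec sheet).
GPUS_PER_NODE = 8
NVLINK_BW_TB_S = 1.8      # per-GPU fifth-generation NVLink bandwidth (TB/s)
PREV_GEN_BW_TB_S = 0.9    # previous-generation NVLink (TB/s), for comparison
HBM_TOTAL_TB = 1.5        # total high-bandwidth memory per 8-GPU node

def nics_for_cluster(nodes: int, gpu_to_nic_ratio: int = 1) -> int:
    """NICs needed at the stated 1:1 GPU-to-NIC ratio (hypothetical helper)."""
    return nodes * GPUS_PER_NODE * gpu_to_nic_ratio

print(NVLINK_BW_TB_S / PREV_GEN_BW_TB_S)  # 2.0 -> bandwidth doubled
print(nics_for_cluster(64))               # 512 NICs for a 64-node cluster
```

At a 1:1 ratio, NIC count scales linearly with GPU count, which is why the networking fabric is sized per GPU rather than per node.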

“Supermicro continues to bring to market an amazing range of accelerated computing platform servers that are tuned for AI training and inference that can address any need in the market today,” said Kaustubh Sanghani, vice president of GPU product management at NVIDIA. “We work closely with Supermicro to bring the most optimized solutions to customers.”

For the most demanding LLM inference workloads, Supermicro is releasing several new MGX systems built with the NVIDIA GB200 Grace Blackwell Superchip, which combines an NVIDIA Grace CPU with two NVIDIA Blackwell GPUs. Supermicro’s NVIDIA MGX systems with GB200 will deliver a vast leap in performance for AI inference, with up to 30X speed-ups compared to the NVIDIA HGX H100. Supermicro and NVIDIA have developed a rack-scale solution with the NVIDIA GB200 NVL72, connecting 36 Grace CPUs and 72 Blackwell GPUs in a single rack. All 72 GPUs are interconnected with fifth-generation NVIDIA NVLink for GPU-to-GPU communication at 1.8TB/s. In addition, for inference workloads, Supermicro is announcing the ARS-221GL-NHIR, a 2U server based on the GH200 line of products, which will have two GH200 servers connected via a 900GB/s high-speed interconnect.
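The NVL72 topology described above follows directly from the superchip layout: one Grace CPU paired with two Blackwell GPUs, 36 superchips per rack. A minimal sketch of that arithmetic (the `racks_for_gpus` helper is a hypothetical illustration, not a Supermicro tool):

```python
import math

# Topology of the NVIDIA GB200 NVL72 rack as described in the article:
# each GB200 superchip pairs one Grace CPU with two Blackwell GPUs.
GPUS_PER_SUPERCHIP = 2
CPUS_PER_SUPERCHIP = 1
SUPERCHIPS_PER_RACK = 36

gpus_per_rack = SUPERCHIPS_PER_RACK * GPUS_PER_SUPERCHIP  # 72 GPUs
cpus_per_rack = SUPERCHIPS_PER_RACK * CPUS_PER_SUPERCHIP  # 36 CPUs

def racks_for_gpus(target_gpus: int) -> int:
    """Whole racks needed to reach a target GPU count (illustrative)."""
    return math.ceil(target_gpus / gpus_per_rack)

print(gpus_per_rack, cpus_per_rack)  # 72 36
print(racks_for_gpus(100))           # 2 racks to reach 100 GPUs
```

Because all 72 GPUs in a rack share the NVLink domain, capacity planning at this scale is done per rack rather than per server.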

Supermicro systems will also support the upcoming NVIDIA Quantum-X800 InfiniBand platform, consisting of the NVIDIA Quantum-X800 QM3400 switch and the SuperNIC800, and the NVIDIA Spectrum-X800 Ethernet platform, consisting of the NVIDIA Spectrum-X800 SN5600 switch and the SuperNIC800. Optimized for the NVIDIA Blackwell architecture, the NVIDIA Quantum-X800 and Spectrum-X800 will deliver the highest level of networking performance for AI infrastructures.


