
Supermicro expands global manufacturing footprint, increasing rack scale manufacturing capacity

Supermicro – a total IT solution manufacturer for AI, cloud, storage and 5G/Edge – is expanding its AI and HPC rack delivery capacity and its advanced liquid cooling solutions.

Worldwide, Supermicro’s full rack scale delivery capacity is growing across several state-of-the-art integration facilities in the US, Taiwan, the Netherlands and Malaysia. Further manufacturing expansion and additional locations are actively being considered to meet the increasing demand for Supermicro’s rack scale AI and HPC solution portfolio.
“With our global footprint, we can now deliver 5,000 racks per month to support substantial orders for fully integrated, liquid-cooled racks requiring up to 100kW per rack,” said Charles Liang, President and CEO, Supermicro. “We anticipate that up to 20% of new data centres will adopt liquid cooling solutions as CPUs and GPUs continue to heat up.
“Our leading rack scale solutions are in great demand with the development of AI technologies, an increasing part of data centres worldwide. Full rack scale and liquid cooling solutions should be considered early in the design and implementation process, which reduces delivery times and meets the urgent implementation requirements of AI and hyperscale data centres.”
Supermicro maintains an extensive inventory of ‘Golden SKUs’ to meet fast delivery times for global deployments. Large CSPs and enterprise data centres running the latest Generative AI applications will quickly benefit from reduced delivery times worldwide. Supermicro’s broad range of servers, from data centres to the Edge (IoT), can be seamlessly integrated, resulting in increased adoption and more engaged customers.
With the recent announcement of the MGX product line – and the NVIDIA GH200 Grace Hopper Superchip and the NVIDIA Grace CPU Superchip – Supermicro continues to expand its range of AI-optimised servers for the industry.
Combined with the existing product line incorporating the LLM-optimised NVIDIA HGX 8-GPU solutions and NVIDIA L40S and L4 offerings, together with Intel Data Center MAX GPUs, Intel Gaudi2 and the AMD Instinct MI series GPUs, Supermicro can address the entire range of AI training and AI inferencing applications. The Supermicro All-Flash storage servers with NVMe E1.S and E3.S storage systems accelerate data access for various AI training applications, resulting in faster execution times. For HPC applications, the Supermicro SuperBlade, with GPUs, reduces the execution time for high-end simulations with reduced power consumption.
Liquid cooling , when integrated into a data centre , can reduce the PUE by up to 50 % compared to existing industry averages . Reducing the power footprint and the resulting lower PUE in a data centre significantly lowers operating expenditures when running generative AI or HPC simulations .
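As a rough illustration of how PUE translates into running costs, the sketch below estimates the annual electricity bill for a single high-density rack at two assumed PUE values. The IT load, PUE figures and electricity price are placeholder assumptions chosen for illustration only; they are not Supermicro figures.

# Illustrative PUE arithmetic with assumed figures (not Supermicro data).
# PUE = total facility power / IT equipment power, so facility power = IT power * PUE.

HOURS_PER_YEAR = 8760

def annual_energy_cost(it_load_kw: float, pue: float, price_per_kwh: float) -> float:
    """Estimated annual electricity cost for a given IT load and PUE."""
    facility_kw = it_load_kw * pue  # total draw, including cooling and other overheads
    return facility_kw * HOURS_PER_YEAR * price_per_kwh

# Assumed example: one 100kW rack at US$0.10/kWh, air-cooled vs liquid-cooled PUE.
baseline = annual_energy_cost(it_load_kw=100, pue=1.60, price_per_kwh=0.10)
liquid = annual_energy_cost(it_load_kw=100, pue=1.15, price_per_kwh=0.10)

print(f"Air-cooled (PUE 1.60):    ${baseline:,.0f} per year")
print(f"Liquid-cooled (PUE 1.15): ${liquid:,.0f} per year")
print(f"Estimated saving:         ${baseline - liquid:,.0f} per year, per rack")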
With rack scale integration and deployment services from Supermicro, customers can start with proven reference designs for rapid installation while accounting for their unique business objectives. They can then work collaboratively with Supermicro-qualified experts to design optimised solutions for specific workloads.
Upon delivery, the racks only need to be connected to power, networking and the liquid cooling infrastructure, underscoring a seamless plug-and-play approach. Supermicro is committed to delivering full data centre IT solutions, including on-site delivery, deployment, integration and benchmarking, to achieve optimal operational efficiency.