Intelligent Data Centres Issue 78 | Page 57

THE ESCALATING THERMAL DEMANDS OF AI AND CLOUD COMPUTING ARE PUSHING CONVENTIONAL COOLING STRATEGIES TO THEIR LIMITS.
TALKING BUSINESS

Systems designed for high-performance computing and AI will consume enormous amounts of power, generating heat at levels that traditional cooling methods simply cannot manage.
Liquid cooling: the foundation for heat reuse
Liquid cooling has emerged as a cornerstone technology to address this challenge. Unlike air cooling, which disperses heat into the atmosphere, liquid cooling systems absorb heat directly at the source, often at the chip or board level, and carry it through a closed-loop system.
While not a new approach, liquid cooling has seen a resurgence driven by the needs of AI data centres and recent technology advancements, transforming it into a high-priority, fast-growing market. It offers several key advantages over traditional air-based systems, but the greatest benefit is efficiency. Liquids, particularly engineered or dielectric fluids, have a much higher heat capacity than air, enabling more efficient thermal transfer directly at the source. This reduces reliance on large-scale HVAC systems, lowering operational energy costs and allowing for more compact, efficient facility designs.
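As a rough illustration of that heat-capacity gap, the textbook sensible-heat equation Q = ρ · c_p · V̇ · ΔT can be used to compare how much heat the same volumetric flow of water and of air carries away. The property values below are round illustrative figures, not vendor data:

```python
# Back-of-the-envelope comparison of heat removed by a coolant stream,
# using the sensible-heat equation Q = rho * c_p * V_dot * delta_T.
# Property values are round textbook figures, not vendor data.

def heat_removal_watts(rho_kg_m3, cp_j_kgk, flow_m3_s, delta_t_k):
    """Sensible heat carried away by a coolant stream, in watts."""
    return rho_kg_m3 * cp_j_kgk * flow_m3_s * delta_t_k

FLOW = 0.001    # 1 litre per second, same volumetric flow for both fluids
DELTA_T = 10.0  # 10 K temperature rise across the heat source

q_air = heat_removal_watts(1.2, 1005.0, FLOW, DELTA_T)      # ~12 W
q_water = heat_removal_watts(997.0, 4186.0, FLOW, DELTA_T)  # ~41.7 kW

print(f"air:   {q_air:,.0f} W")
print(f"water: {q_water:,.0f} W")
print(f"water carries ~{q_water / q_air:,.0f}x more heat per litre")
```

At identical flow and temperature rise, water removes on the order of a few thousand times more heat than air, which is why liquid loops can be so much more compact than HVAC plant for the same thermal load.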
Although liquid cooling is becoming more prevalent, most data centres are still designed to support a broad range of workloads – not just AI. These facilities typically serve a mix of traditional IT, cloud services and high-performance computing workloads. As a result, there is increasing interest in hybrid cooling solutions, which combine the efficiency of liquid cooling where needed with lower-cost air cooling for less demanding systems. These hybrid approaches strike a practical balance between sustainability, performance and capital expenditure, allowing data centre operators to adapt to changing workloads.
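To make the hybrid idea concrete, here is a hypothetical planning sketch that maps a rack's design power density to a cooling tier. The thresholds and tier names are illustrative assumptions, not industry standards; real designs depend on facility, vendor and workload specifics:

```python
# Hypothetical planning helper: choose a cooling approach per rack from its
# design power density. Thresholds are illustrative assumptions only.

def cooling_tier(rack_kw: float) -> str:
    if rack_kw <= 20:
        return "air"               # conventional hot/cold-aisle air cooling
    if rack_kw <= 60:
        return "rear-door liquid"  # liquid-assisted heat exchanger on the rack
    return "direct-to-chip liquid"

racks = {"cloud-web": 8, "hpc-batch": 35, "ai-training": 120}
plan = {name: cooling_tier(kw) for name, kw in racks.items()}
print(plan)
# {'cloud-web': 'air', 'hpc-batch': 'rear-door liquid',
#  'ai-training': 'direct-to-chip liquid'}
```

The point of such a mapping is capital efficiency: liquid loops are provisioned only where rack density demands them, while lighter workloads stay on cheaper air cooling.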

Integrating cooling, power and compute design

To support next-generation workloads, particularly those driven by AI, data centres must integrate compute density, power delivery and thermal management into a cohesive system. Emerging technologies are enabling this shift by facilitating smarter, more responsive infrastructure designs.

One key development is power-aware thermal design, which allows data centres to better manage heat generated by high-performance components. AI workloads, especially those involving GPUs, can create unpredictable power surges and localised hot spots. Intelligent co-design of thermal and power systems enables engineers to anticipate these fluctuations, minimising inefficiencies and reducing stress on infrastructure equipment in and around the data centre.

The rise of modular, high-density rack systems is also transforming scalability. These racks are built to support power loads well beyond 125kW, with designs already in development to handle up to 1MW. Their modular nature allows operators to add capacity without a complete overhaul, while also enabling more effective heat capture and reuse across multiple stages of operation.

Direct-to-chip cooling and advanced power delivery technologies are pushing thermal efficiency to new levels. Precision systems, such as micro-convective cooling, remove heat directly from critical silicon components. When combined with innovations like vertical power delivery (VPD), these solutions reduce energy loss and maintain higher-quality waste heat.

The role of AI and machine learning

One of the more transformative forces in thermal and energy optimisation is machine learning. Predictive analytics can monitor energy usage and thermal loads at a granular level, forecast peak demand periods and shift loads to optimise thermal capture, as well as dynamically adjust cooling systems for efficiency and performance.
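The predictive pattern behind such machine-learning-driven cooling can be sketched in a few lines: forecast the next interval's thermal load from recent readings, then provision cooling capacity slightly ahead of the predicted demand. Everything here, from the simple exponentially weighted moving-average model to the headroom factor, is an illustrative assumption rather than any specific product's control logic:

```python
# Toy sketch of predictive cooling control: forecast the next interval's
# thermal load with an exponentially weighted moving average (EWMA), then
# scale cooling output ahead of the predicted demand. Real systems use far
# richer models and vendor-specific control loops; this only shows the idea.

def ewma_forecast(history, alpha=0.5):
    """One-step-ahead load forecast (kW) from recent readings."""
    forecast = history[0]
    for reading in history[1:]:
        forecast = alpha * reading + (1 - alpha) * forecast
    return forecast

def cooling_setpoint(predicted_kw, headroom=1.1, max_kw=150.0):
    """Provision cooling capacity slightly above the predicted load."""
    return min(predicted_kw * headroom, max_kw)

gpu_load_kw = [40, 55, 90, 130, 125]  # hypothetical per-interval readings
predicted = ewma_forecast(gpu_load_kw)
print(f"predicted load: {predicted:.1f} kW")
print(f"cooling setpoint: {cooling_setpoint(predicted):.1f} kW")
```

Acting on a forecast rather than the current reading lets the cooling plant ramp before a GPU power surge arrives, which is the efficiency gain the predictive approach is after.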
www.intelligentdatacentres.com 57