
Scientific data centres, which run largely GPU-driven applications such as Machine Learning, AI and high-performance analytics like cryptomining, are the areas of the industry typically transitioning towards liquid cooling. But for other workloads, such as cloud and most business applications, demand keeps growing yet air cooling still makes sense in terms of cost. The key is to look at this issue from a business perspective: what are we trying to accomplish with each data centre?
What's driving server power growth?
Up to around 2010, businesses utilised single-core processors, transitioning to multi-core processors once they became available. However, power consumption remained relatively flat with these dual- and quad-core processors. This enabled server manufacturers to design around lower airflow rates for cooling ITE, which resulted in better overall efficiency.
Around 2018, with the size of these processors continually shrinking, higher core-count processors became the norm. With these reaching their performance limits, the only way for compute-intensive applications to achieve new levels of performance is to increase power consumption. Server manufacturers have been packing as much as they can into servers but, because of CPU power consumption, data centres were in some cases having difficulty removing the heat with air cooling, creating a need for alternative cooling solutions, such as liquid.
Server manufacturers have also been increasing the temperature delta across servers for several years now, which again has been great for efficiency, since the higher the temperature delta, the less airflow is needed to remove a given heat load. However, server manufacturers are in turn reaching their limits here, leaving data centre operators having to increase airflow to cool high-density servers and keep up with rising power consumption.
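To make the airflow relationship concrete, the sensible-heat equation for air gives the volumetric flow needed to remove a given load at a given temperature delta. The following is a minimal Python sketch; the rack load and delta-T values are illustrative assumptions, not figures from this article:

    # Airflow needed to remove a sensible heat load from air-cooled ITE:
    #   flow (m^3/s) = power (W) / (air density * specific heat * delta-T)
    # Standard sea-level air properties are assumed below.

    AIR_DENSITY = 1.2         # kg/m^3, approximate at sea level, ~20 degC
    AIR_SPECIFIC_HEAT = 1005  # J/(kg*K)
    CFM_PER_M3S = 2118.88     # 1 m^3/s expressed in cubic feet per minute

    def required_airflow_cfm(load_kw: float, delta_t_k: float) -> float:
        """Volumetric airflow (CFM) to remove load_kw at a temperature rise of delta_t_k."""
        m3_per_s = (load_kw * 1000) / (AIR_DENSITY * AIR_SPECIFIC_HEAT * delta_t_k)
        return m3_per_s * CFM_PER_M3S

    # Doubling the server delta-T halves the airflow the fans must move:
    for delta_t in (10, 20):
        print(f"10 kW rack, delta-T {delta_t} K: {required_airflow_cfm(10, delta_t):,.0f} CFM")

Running this shows roughly 1,760 CFM at a 10 K delta against roughly 880 CFM at 20 K, which is why a wider temperature delta has been such an efficiency win.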
Additional options for air cooling
Thankfully, there are several approaches the industry is embracing to successfully cool power densities up to, and even greater than, 35kW per rack, often with traditional air cooling. These options start with deploying either cold or hot aisle containment. If no containment is used, rack densities should typically be no higher than 5kW per rack, with additional supply airflow needed to compensate for recirculated air and hot spots.
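As a rough illustration of why containment matters, the sketch below pads the supply airflow with a recirculation margin when there is no containment. The 25% margin and the 12 K delta-T are illustrative assumptions only, not figures from this article:

    AIR_DENSITY = 1.2         # kg/m^3
    AIR_SPECIFIC_HEAT = 1005  # J/(kg*K)
    CFM_PER_M3S = 2118.88

    def supply_airflow_cfm(load_kw: float, delta_t_k: float, recirc_margin: float = 0.0) -> float:
        """Supply airflow (CFM) for a rack, padded by a recirculation margin when uncontained."""
        base = (load_kw * 1000) / (AIR_DENSITY * AIR_SPECIFIC_HEAT * delta_t_k) * CFM_PER_M3S
        return base * (1 + recirc_margin)

    # Uncontained 5 kW rack with a 25% recirculation margin vs a contained 35 kW rack:
    print(f"5 kW, no containment: {supply_airflow_cfm(5, 12, recirc_margin=0.25):,.0f} CFM")
    print(f"35 kW, containment:   {supply_airflow_cfm(35, 12):,.0f} CFM")

Containment removes the recirculation penalty entirely, so every CFM delivered does useful cooling work, which is what makes 35kW-per-rack air cooling feasible at all.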
At some point, high-density servers and racks will also need to transition from air to liquid cooling, especially with CPUs and GPUs expected to exceed 500W per processor within the next few years. But this transition is not automatic and isn't going to be for everyone.
Liquid cooling is not going to be the ideal remedy for all future cooling requirements. Instead, the choice of liquid over air cooling depends on a variety of factors, including specific location, climate (temperature and humidity), power densities, workloads, efficiency, performance, heat reuse and the physical space available. This highlights the need for data centre stakeholders to take a holistic approach to cooling their critical systems. Moving forward, it will not and should not be a question of considering only air or only liquid cooling. Instead, the key is to understand the trade-offs of each cooling technology and deploy only what makes the most sense for the application.