
Together with Schneider Electric, Total Power Solutions was responsible for the precise design of an optimum solution to meet the data centre's needs, and for its integration into the existing infrastructure.
A major consideration was to minimise disruption to the data centre layout, keeping in place the Schneider Electric EcoStruxure Row Data Centre System (formerly called a Hot Aisle Containment Solution, or HACS). The containment solution is a valued component of the physical infrastructure, ensuring efficient thermal management of the IT equipment and maximising the efficiency of the cooling effort by minimising the mixing of the cooled supply air and the hot return, or exhaust, airstream.
The new cooling system provides a highly efficient, close-coupled approach which is particularly suited to high-density loads. Each InRow DX unit draws air directly from the hot aisle, taking advantage of the higher heat-transfer efficiency, and discharges room-temperature air directly in front of the load. Placing the units in the row yields 100% sensible capacity and significantly reduces the need for humidification.
Cooling efficiency is a critical requirement for operating a low-PUE data centre, but the most obvious benefit of the upgraded cooling system is the built-in resilience afforded by the 10 independent DX cooling units. No longer is there a single point of failure; there is now sufficient redundancy in the system that if one of the units fails, the others can pick up the slack and continue delivering cooling with no impairment of service to the computing equipment in the data centre.
“We calculated that we might just have managed with eight separate cooling units,” said Cannon, “but we wanted the additional resilience and fault tolerance that using 10 units gave us.” A further benefit of the new solution is its efficiency: the system is now sized according to the IT load, avoiding overcooling of the data centre, which both reduces energy use and improves its PUE.
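As a back-of-the-envelope illustration of that sizing decision, the short Python sketch below checks how much cooling capacity remains as units fail. Only the ten-installed, eight-required split comes from the article; the 100kW design load and the equal per-unit rating are assumptions made purely for illustration.

    # Illustrative resilience check: 10 equal DX units, any 8 of which
    # must be able to carry the full design heat load on their own.
    IT_LOAD_KW = 100          # assumed design heat load, not UCD's actual figure
    UNITS_INSTALLED = 10      # independent DX cooling units
    UNITS_REQUIRED = 8        # minimum units needed to carry the load

    unit_capacity_kw = IT_LOAD_KW / UNITS_REQUIRED   # 12.5 kW per unit

    def cooling_available(failed_units):
        # Capacity remaining once some units have dropped out
        return (UNITS_INSTALLED - failed_units) * unit_capacity_kw

    for failed in range(4):
        remaining = cooling_available(failed)
        status = "load covered" if remaining >= IT_LOAD_KW else "SHORTFALL"
        print(f"{failed} unit(s) failed: {remaining:.1f} kW available ({status})")

Under these assumptions the system tolerates two simultaneous unit failures before capacity dips below the design load, which is the extra fault tolerance Cannon describes.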
In addition, the new cooling system is scalable to meet the potential requirement to add further HPC clusters, or to accommodate innovations in IT such as the introduction of increasingly powerful but power-hungry CPUs and GPUs. “We designed the system to allow for the addition of four more cooling units if we need them in the future,” said Cannon. “All of the power and piping needed is already in place, so it will be a simple matter to scale up when that becomes necessary.”
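Extending the same illustrative arithmetic, four additional units at the assumed 12.5kW rating would lift installed capacity from 125kW to 175kW, or alternatively preserve the two-unit fault tolerance at a design load of up to 150kW.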
Implementation: Upgrading a live environment at UCD
It was essential that the data centre kept running as normal while the new system was installed, with no downtime. The IT department and Total Power Solutions adopted what Cannon calls a ‘Lego block’ approach: first consolidating some of the existing servers into fewer racks, then moving the new cooling elements into the freed-up space. The existing chilled-water system continued to function while the new DX-based system was installed, commissioned and tested. Finally, the obsolete cooling equipment was decommissioned and removed.
Although the project was implemented at the height of the COVID-19 pandemic, with all its restrictions on movement and its negative implications for global supply chains, it ran to schedule and the new equipment was installed and brought into service without any disruption to IT services at UCD.
Results: A cooling boost for assured IT services and space freed for increased student facilities
The new cooling equipment has resulted in an inherently more resilient data centre, with ample redundancy to ensure reliable ongoing delivery of all hosted IT services in the event that one of the cooling units fails. It has also freed up much valuable real estate that the university can deploy for other purposes.
As an example, the building housing the data centre is also home to an Applied Languages department. “They can be in the same building because the noise levels of the new DX system are so much lower than the chilled-water solution,” Cannon said. “That is clearly an important issue for that department, but the DX condensers on the roof are so quiet you can’t tell they’re there. It’s a much more efficient use of space.”
With greater virtualisation of servers, the overall power demand of the data centre has been dropping steadily. “We have gone down from a power rating of 300kW to less than 100kW over the past decade,” said Cannon. The Daedalus data centre now comprises 300 physical servers, while a total of 350 virtual servers are split across both data centres on campus.
To maximise efficiency, the university also uses EcoStruxure IT management software from Schneider Electric, backed up by a remote monitoring service that keeps an eye on all aspects of the data centre’s key infrastructure and alerts IT Services if any issues occur.
The increasing virtualisation has seen the data centre’s Power Usage Effectiveness (PUE) ratio drop steadily over the years. PUE, a well-understood metric for electrical efficiency, is the ratio of the facility’s total power consumption to the power used by the IT equipment alone; the closer the rating is to 1.0, the better. “Our initial indications are that we have managed to improve PUE from an average of 1.42 to 1.37,” said Cannon.
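To make the metric concrete, the short Python sketch below works through the arithmetic for the two quoted PUE figures. The 80kW IT load is an assumed value chosen only for illustration (the article reports that overall demand is now under 100kW), not a measured figure.

    # PUE = total facility power / IT equipment power
    it_power_kw = 80.0    # assumed IT draw; the article says demand is now below 100 kW

    for pue in (1.42, 1.37):
        total_kw = it_power_kw * pue           # total facility draw at this PUE
        overhead_kw = total_kw - it_power_kw   # cooling, UPS losses, lighting, etc.
        print(f"PUE {pue}: total {total_kw:.1f} kW, overhead {overhead_kw:.1f} kW")

At that assumed load, the improvement from 1.42 to 1.37 trims the continuous non-IT overhead from roughly 33.6kW to 29.6kW, a saving that compounds over every hour of operation.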
“However, we’re probably overcooling the data centre load currently, as the new cooling infrastructure settles. Once that’s happened, we’re confident that we can raise temperature set points in the space and optimise the environment in order to make the system more energy efficient, lower the PUE and get the benefit of lower cost of operations.”
The overall effects of installing the new cooling system are therefore: greater resilience and peace of mind; more efficient use of space for the benefit of the university’s main function of teaching; and greater efficiency of IT infrastructure, and consequently a more sustainable operation into the future.