Intelligent Data Centres Issue 24

INDUSTRY INTELLIGENCE POWERED BY THE DCA
Performance is different to efficiency, but the two often become closely linked – make a facility more efficient and then use the spare capacity you've created to deliver more. Whatever the intended meaning, there are two critical factors in delivering high performance: definition and operation.
Defining the wrong parameter often results in great effort and expense being invested in meeting a specific requirement which is later found to be either outdated or arbitrarily set.
Whether for a new facility or an existing one, investment in the definition stage will always pay off: once the facility is fully defined we are able to maximise its performance and, thanks to a good definition, we will know what that performance means.
For many years, and still today, many high-performing data centres consist of a number of carefully tuned systems, each looking after part of the whole and reacting to changes in demand, be it at the IT or facility level. These control systems work to balance the performance targets and constraints set out in the definition stage, so the importance of getting that right is clear. Advanced design tools such as CFD and advanced load placement algorithms offer a way to refine operation, but they are still based on the same definition and only provide information based on a snapshot in time.
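To make the idea of snapshot-based load placement concrete, here is a minimal Python sketch. The rack names, limits and the greedy headroom heuristic are illustrative assumptions, not a description of Sudlows' tooling: it simply places a new IT load on whichever rack has the most spare power and cooling headroom in a single snapshot of readings.

```python
# Illustrative only: greedy load placement over one snapshot of rack readings.
# Rack names, limits and the headroom heuristic are assumptions, not a real system.
from dataclasses import dataclass

@dataclass
class Rack:
    name: str
    power_limit_kw: float    # breakered power available to the rack
    cooling_limit_kw: float  # cooling capacity available to the rack
    power_draw_kw: float     # measured draw in this snapshot

    def headroom_kw(self) -> float:
        # Usable headroom is bounded by whichever constraint bites first
        return min(self.power_limit_kw, self.cooling_limit_kw) - self.power_draw_kw

def place_load(racks: list[Rack], new_load_kw: float) -> Rack | None:
    """Pick the rack with the most headroom that can absorb the new load."""
    candidates = [r for r in racks if r.headroom_kw() >= new_load_kw]
    if not candidates:
        return None  # nothing can take it within the defined constraints
    return max(candidates, key=lambda r: r.headroom_kw())

snapshot = [
    Rack("A01", power_limit_kw=10.0, cooling_limit_kw=9.0, power_draw_kw=6.5),
    Rack("A02", power_limit_kw=10.0, cooling_limit_kw=9.0, power_draw_kw=4.0),
    Rack("A03", power_limit_kw=10.0, cooling_limit_kw=7.0, power_draw_kw=6.8),
]
print(place_load(snapshot, new_load_kw=2.0).name)  # -> "A02"
```

The limitation noted above is visible even in this toy version: the decision is only as good as that one snapshot and the limits written into the definition.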
A data centre with a solid definition, well designed and with a modern deployment of sensors and controls, would still be a good example, delivering good figures in any number of KPIs chosen to be reported. That said, momentum is growing behind the adoption of more complex systems with a wider scope and some level of Machine Learning.
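As a simple illustration of the kind of KPI such a sensor and metering deployment might feed, here is PUE, assumed purely as an example since the article does not prescribe which KPIs to report. It is just total facility energy divided by IT energy over the same period.

```python
# Illustrative KPI calculation: Power Usage Effectiveness (PUE) from metered energy.
# The meter readings in the example call are made-up values.
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """PUE = total facility energy / IT equipment energy (the ideal is 1.0)."""
    if it_equipment_kwh <= 0:
        raise ValueError("IT energy must be positive")
    return total_facility_kwh / it_equipment_kwh

# e.g. 1,320 kWh drawn by the whole facility against 1,000 kWh of IT load
print(round(pue(1320.0, 1000.0), 2))  # -> 1.32
```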
Machine Learning can, in some cases, be overstated. At this time the level of adoption is limited and, within active deployments, there is a range of successes and failures.
The proven potential of Machine Learning systems should not be undervalued, though, especially when it comes to the final incremental improvements in efficiency and performance. It is in this area that Machine Learning offers an impartial, multi-skilled, constantly working and constantly watching team member – one who is aware of the goals and can predict how they are best achieved.
Limited to just improving the performance of the M&E, a developed ML system would soon find itself unchallenged, but fortunately the scope is much greater. Systems have expanded to consider long- and short-term reliability, to offer predictive advice on imminent faults and issues and, perhaps most importantly, to bridge disciplines, advising the IT and M&E systems based on the calculated impact of each on the other.
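As a toy sketch of what bridging disciplines can look like in practice, the advisor below only approves an IT-side change once its estimated knock-on effect on the cooling plant has been checked. Every figure, threshold and simplification here is an assumption made for illustration, not a description of any production system.

```python
# Illustrative cross-discipline check: before approving an IT-side change,
# estimate its knock-on effect on the cooling (M&E) plant.
# Treating all IT power as heat, and the 90% ceiling, are simplifying assumptions.

def advise_migration(added_it_kw: float,
                     chiller_thermal_load_kw: float,
                     chiller_capacity_kw: float,
                     max_utilisation: float = 0.90) -> str:
    """Advise on an IT load migration based on its calculated impact on the chillers."""
    # Nearly all IT electrical power ends up as heat the cooling plant must reject.
    projected_kw = chiller_thermal_load_kw + added_it_kw
    ceiling_kw = max_utilisation * chiller_capacity_kw
    if projected_kw > ceiling_kw:
        return f"defer: projected thermal load {projected_kw:.0f} kW exceeds the {ceiling_kw:.0f} kW ceiling"
    return f"proceed: projected thermal load {projected_kw:.0f} kW is within the {ceiling_kw:.0f} kW ceiling"

print(advise_migration(added_it_kw=120.0,
                       chiller_thermal_load_kw=1540.0,
                       chiller_capacity_kw=1800.0))  # -> defer
```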
The same tools which feed into leading design processes are now being integrated into the decision-making of the ML models. At Sudlows, for instance, our Simulation and Modelling team are integrating CFD and hydraulic system models so that algorithms can work with both observed historical data and continually recalculated simulation results for scenarios which, hopefully, we'll never experience – unnerving combinations of poor load placement, system failures, peak days and grid power interruptions.
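A minimal sketch of that idea, with the data, scenario names and scoring rule invented for illustration rather than taken from any real deployment, is to rank candidate operating settings against both the historical record and a set of simulated worst-case scenarios, keeping only the settings that survive the simulations.

```python
# Illustrative only: rank candidate supply-air setpoints using observed history
# plus simulated worst-case scenarios. All figures below are invented.

# Observed: average facility power (kW) seen historically at each setpoint.
observed_kw = {22.0: 1480.0, 24.0: 1445.0, 26.0: 1430.0}

# Simulated: worst rack inlet temperature (degrees C) recalculated by the models
# for each setpoint under scenarios we hope never to see.
simulated_worst_inlet_c = {
    22.0: {"chiller_failure": 29.0, "peak_day": 27.0, "grid_interruption": 30.0},
    24.0: {"chiller_failure": 31.0, "peak_day": 29.0, "grid_interruption": 32.0},
    26.0: {"chiller_failure": 34.0, "peak_day": 31.0, "grid_interruption": 35.0},
}

INLET_LIMIT_C = 32.0  # assumed allowable worst-case inlet temperature

def best_setpoint() -> float:
    """Lowest-energy setpoint whose simulated worst case stays within the limit."""
    safe = [sp for sp, runs in simulated_worst_inlet_c.items()
            if max(runs.values()) <= INLET_LIMIT_C]
    return min(safe, key=lambda sp: observed_kw[sp])

print(best_setpoint())  # -> 24.0: 26.0 saves energy but fails the failure scenarios
```

The observed data answers "what normally happens", while the continually recalculated simulations answer "what would happen if", which is exactly the pairing described above.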
There is a huge gap between the majority of the industry and the small few that are implementing such systems at scale and, given how slowly even basics such as aisle containment have been adopted, it may well be a long time before we see such systems in the majority of spaces, but we will get there eventually.
The key to squeezing every last drop of performance out of a facility might one day be a highly-refined Machine Learning system.
Zac Potts, Associate Director (Data Centre Design) – Sudlows