FEATURE
EFFICIENT COOLING HAS ALWAYS BEEN CRITICAL TO DATA CENTRE RESILIENCE AND TO ENERGY COST OPTIMISATION.
The dawn of a new decade is seeing many more IT projects and applications consuming greater power per unit area.
Power densities in the hyperscale era are rising. Some racks are now pulling 60 kW or more, and this trend will only continue with the growing demand for high performance computing (HPC) and for GPUs/IPUs supporting new technologies such as Artificial Intelligence. Power and cooling are therefore top priorities for data centre operators.
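To put a 60 kW rack in perspective, a rough air-cooling calculation shows the scale of the challenge. The sketch below uses the standard sensible-heat relation Q = ṁ·cp·ΔT; the 60 kW load comes from the figure above, while the 12 K inlet-to-outlet temperature rise is an assumed example value.

```python
# Back-of-the-envelope sketch: airflow needed to remove a rack's heat load
# with air cooling, using Q = m_dot * cp * dT. The 60 kW load is from the
# article; the 12 K temperature rise across the rack is an assumed example.

AIR_DENSITY = 1.2         # kg/m^3, air at roughly 20 C
AIR_SPECIFIC_HEAT = 1005  # J/(kg*K)

def required_airflow_m3_per_hour(heat_load_w: float, delta_t_k: float) -> float:
    """Volumetric airflow needed to carry away heat_load_w at a given
    inlet-to-outlet temperature rise delta_t_k."""
    mass_flow = heat_load_w / (AIR_SPECIFIC_HEAT * delta_t_k)  # kg/s
    return mass_flow / AIR_DENSITY * 3600                      # m^3/h

# A 60 kW rack with a 12 K temperature rise needs roughly 15,000 m^3/h of air.
print(round(required_airflow_m3_per_hour(60_000, 12)))
```

At that flow rate, conventional room-level air distribution is under real strain, which is why targeted and liquid-assisted approaches come up later in the article.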
However, while the availability of sufficient power, both now and in the future, is vital, demand is already stretching the power capacity and local electricity distribution infrastructure of many older data centres and those located in crowded metropolitan areas.
Putting super-efficient cooling and energy management systems in place is a must. For cooling there are various options: installing the very latest predictive systems, for example, or utilising nano-cooling technologies.
However, these may only be viable for new purpose-designed data centres rather than as retrofits in older ones. Harnessing climatically cooler locations which favour direct-air and evaporative techniques is another logical step, assuming such locations are viable in terms of accessibility, cost, security, power and connectivity.
Clearly, efficient cooling has always been critical to data centre resilience and to energy cost optimisation. But it now matters more than ever, even though next-generation servers are capable of operating at higher temperatures than previous generations.
HPC is a case in point. Would-be HPC customers are finding it challenging to locate colocation providers able to deliver suitable environments, especially when it comes to powering and cooling these highly dense and complex platforms.
Suitable colocation providers in the UK – and many parts of Europe – are few and far between. The cooling required demands bespoke build and engineering skills, as many colos are standardised and productised, and so unused to deploying the specialist technologies required.
HPC requires highly targeted cooling. Simple computer room air conditioning (CRAC) or free air cooling systems (such as swamp or adiabatic coolers) typically do not have the capabilities required. Furthermore, hot and cold aisle cooling systems are increasingly inadequate for addressing the heat created by larger HPC environments, which require specialised and often custom-built cooling systems and procedures.
Cooling and energy
management in practice
Fit-for-purpose data centre facilities are actually becoming greener and ever more efficient in spite of the rise in compute demands. However, best practice necessitates real-time analysis and monitoring to optimise cooling plant and maintain appropriate operating temperatures for IT assets, without compromising performance and uptime.
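The monitoring loop described above can be sketched very simply: poll rack-inlet temperatures and flag anything drifting outside an allowed envelope. The rack names and sensor feed below are hypothetical; the 18–27 °C band is the ASHRAE-recommended inlet range for standard (A1) IT equipment.

```python
# Minimal sketch of a real-time temperature check: compare live rack-inlet
# readings against an allowed envelope and flag racks out of range.
# Rack names and readings are made up; the 18-27 C band is the
# ASHRAE-recommended inlet range for standard (A1) class IT equipment.

RECOMMENDED_RANGE_C = (18.0, 27.0)

def out_of_range(readings: dict[str, float],
                 limits: tuple[float, float] = RECOMMENDED_RANGE_C) -> list[str]:
    """Return the racks whose inlet temperature falls outside the limits."""
    low, high = limits
    return [rack for rack, temp in sorted(readings.items())
            if temp < low or temp > high]

# Example polling snapshot (illustrative values):
snapshot = {"rack-a01": 22.5, "rack-a02": 28.4, "rack-b01": 17.1}
print(out_of_range(snapshot))  # rack-a02 is too hot, rack-b01 too cold
```

A production platform would of course trend these readings over time and drive setpoint adjustments automatically, rather than just raising alerts.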
Central to this, and to maximising overall data centre energy efficiency, are integrated energy monitoring and management platforms. An advanced system will deliver significant savings through reduced power costs and by minimising environmental impact.
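The headline figure such a platform tracks is Power Usage Effectiveness (PUE): total facility energy divided by IT equipment energy, where an ideal facility approaches 1.0 and cooling or power-distribution overhead pushes it higher. The figures in this sketch are illustrative only.

```python
# Power Usage Effectiveness (PUE) = total facility energy / IT equipment
# energy. An ideal facility approaches 1.0; cooling and power-distribution
# overhead push it higher. The energy figures below are illustrative only.

def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power Usage Effectiveness over a measurement interval."""
    if it_equipment_kwh <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kwh / it_equipment_kwh

# A site drawing 1,500 kWh against 1,000 kWh of IT load runs at PUE 1.5;
# trimming 200 kWh of cooling overhead would bring it down to 1.3.
print(pue(1500, 1000))  # 1.5
```

Tracking PUE continuously, rather than as an annual average, is what lets an advanced platform attribute savings to specific cooling interventions.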
Issue 14