INDUSTRY INTELLIGENCE POWERED BY THE DCA
DCA member EcoCooling has been working with RISE and Boden Business Agency on an EU Horizon 2020 project to build the most efficient data centre in the world. The facility has already demonstrated PUEs of below 1.02, which we believe is an incredible achievement.
EcoCooling’s report below provides interesting reading. The highly innovative modular building and cooling system was devised to be suitable for data centres of all sizes. By using these construction, cooling and operation techniques, smaller-scale operators will be able to match or better the cost and energy efficiencies of hyperscale data centres.
Unlocking the holy grail of
efficiency – Holistic management
of DC infrastructure achieves PUE
of 1.02 in H2020 project
Since 2017, DCA member EcoCooling
has been involved in an EU Horizon 2020
funded ground-breaking pan-European
research project to build and manage the
most efficient data centre in the world.
With partners H1 Systems (project
management), Fraunhofer IOSB (compute
load simulation), RISE (Swedish Institute
of Computer Science) and Boden Business
Agency (Regional Development Agency), a
500kW data centre has been constructed
using the very latest energy efficient
technologies and employing a highly
innovative holistic control system.
In this article we will provide an update on
the exciting results being achieved by the
Boden Type Data Centre 1 (BTDC-1) and
what we can expect from the project in
the future.
The project objective: To build and
research the world’s most energy
and cost-efficient data centre
The BTDC is in Sweden, where there is
an abundant supply of renewable and
clean hydro-electricity and a cold climate ideal for free cooling. BTDC-1 is made up of three separate research modules/pods of Open Compute/conventional IT, HPC and ASIC (Application Specific Integrated Circuit) equipment, and the EU’s target was to design a data centre with a PUE of less than 1.1 across all of these technologies. With only half of the project complete, the facility has already beaten this target, demonstrating PUEs of below 1.02.
We all recognise that PUE has limitations as a metric; however, in this article and for dissemination we will continue to use PUE as a comparative measure, as it is still widely understood.
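As a reminder, PUE is simply the ratio of total facility energy to the energy delivered to the IT equipment, so a PUE of 1.02 means only 2% of the facility’s energy goes to anything other than the IT load. A minimal illustration (the figures are round numbers of our own, not project measurements):

```python
def pue(total_facility_kw: float, it_load_kw: float) -> float:
    """Power Usage Effectiveness: total facility power divided by IT power."""
    return total_facility_kw / it_load_kw

# Illustrative round numbers, not BTDC-1 measurements:
print(pue(total_facility_kw=180.0, it_load_kw=100.0))  # 1.8  -> 80% overhead
print(pue(total_facility_kw=102.0, it_load_kw=100.0))  # 1.02 ->  2% overhead
```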
Exciting first results – Utilising
the most efficient cooling
system possible
At BTDC-1, one of the main economic
features is the use of EcoCooling’s direct
ventilation systems with optional adiabatic
(evaporative) cooling, which produces the cooling effect without requiring an expensive conventional refrigeration plant. This brings two facets to the solution at BTDC-1. Firstly, on very hot days, or on very cold, dry days, the ‘single box approach’ of EcoCoolers can switch to adiabatic mode and provide as much cooling or humidification as necessary to maintain the IT equipment’s environmental conditions within the ASHRAE ‘ideal’ envelope, 100% of the time.
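In principle this is a simple mode decision: run on filtered fresh air alone whenever ambient conditions allow, and bring in the adiabatic stage only when the air would otherwise be too warm, or too dry on cold winter days. A minimal sketch, with thresholds that are illustrative values of our own loosely based on the ASHRAE envelope, not BTDC-1 settings:

```python
def select_cooling_mode(ambient_temp_c: float, ambient_rh_pct: float,
                        supply_max_c: float = 27.0, rh_min_pct: float = 20.0) -> str:
    """Illustrative mode selection for a direct fresh-air cooler with an
    optional adiabatic (evaporative) stage. Thresholds are hypothetical."""
    if ambient_temp_c > supply_max_c:
        # Hot day: evaporating water into the airstream drops its temperature.
        return "adiabatic cooling"
    if ambient_rh_pct < rh_min_pct:
        # Very cold, dry day: the same wetted media adds the missing humidity.
        return "adiabatic humidification"
    # Otherwise filtered fresh air alone is enough; no refrigeration plant needed.
    return "fresh air only"

print(select_cooling_mode(32.0, 40.0))   # adiabatic cooling
print(select_cooling_mode(-15.0, 15.0))  # adiabatic humidification
print(select_cooling_mode(18.0, 45.0))   # fresh air only
```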
With the cooling and humidification approach I’ve just outlined, we were able to produce very exciting results. Instead of the commercial data centre norm of PUE 1.8, or 80% extra energy used for cooling, we have been achieving a PUE of less than 1.05, lower than the published values of some data centre operators using ‘single-purpose’ servers – but we’ve done it with General Purpose OCP servers. We’ve also achieved the same PUE using high-density ASIC servers.
This is an amazing development in the cost and carbon footprint reduction of data centres. Let’s quickly look at the economics of that applied to a typical 100kW medium-sized data centre: the annual cooling energy cost drops from £80,000 to a mere £5,000. That’s a £75,000 per year saving in an average 100kW medium-sized commercial data centre.
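Those figures are consistent with the overhead the two PUE values imply. As a rough check, assuming the non-IT overhead is essentially all cooling and an electricity price of about £0.11 per kWh (our assumption, not a project figure):

```python
# Rough check of the cooling-cost comparison for a 100kW IT load.
IT_LOAD_KW = 100.0
HOURS_PER_YEAR = 8760
PRICE_GBP_PER_KWH = 0.114  # assumed electricity price, not a project figure

def annual_overhead_cost(pue: float) -> float:
    """Annual cost of the non-IT (mostly cooling) energy implied by a PUE."""
    overhead_kw = IT_LOAD_KW * (pue - 1.0)
    return overhead_kw * HOURS_PER_YEAR * PRICE_GBP_PER_KWH

print(round(annual_overhead_cost(1.8)))   # ~80,000 GBP/year at PUE 1.8
print(round(annual_overhead_cost(1.05)))  # ~5,000 GBP/year at PUE 1.05
```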
Smashing 1.05 PUE – Direct linking
of server temperature to fan speed
What we did next has produced truly phenomenal results using simple
process controls. What has been
achieved here can be simply replicated
in conventional servers.
The ultra-efficient operation can only
be achieved if the mainstream server
manufacturers embrace these principles.
I believe this presents a real ‘wake-up’ call
to conventional server manufacturers – if
they are ever to get serious about total
cost of ownership and global data centre
energy usage.
You may know that within every server,
there are multiple temperature sensors
which feed into algorithms to control the
internal fans. Mainstream servers don’t
yet make this temperature information
available outside the server.
However, one of the three ‘pods’ within
BTDC-1 is kitted out with about 140kW
of Open-Compute servers. One of the
strengths of the partnership in this project is that average server temperature measurements have been made accessible to the cooling control system.
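In principle, the direct linking named in the heading above amounts to a simple control loop: compare the average server temperature with a setpoint and adjust the cooling fan speed accordingly. A minimal sketch, in which the setpoint, gain and interface functions are our own illustrative stand-ins rather than the project’s actual controller:

```python
# Minimal sketch: drive external cooling fan speed from the servers' own
# average temperature. Setpoint, gain and the two interface callables are
# illustrative stand-ins, not the BTDC-1 control system.

SETPOINT_C = 25.0   # target average server inlet temperature (illustrative)
GAIN = 8.0          # % fan speed per degree C of error (illustrative)

def control_step(read_avg_server_temp, set_cooling_fan_speed,
                 base_speed_pct: float = 30.0) -> float:
    """One pass of a proportional controller: warmer servers -> faster fans."""
    error_c = read_avg_server_temp() - SETPOINT_C
    speed_pct = max(10.0, min(100.0, base_speed_pct + GAIN * error_c))
    set_cooling_fan_speed(speed_pct)
    return speed_pct

if __name__ == "__main__":
    # Dummy interfaces so the sketch runs on its own.
    readings = iter([24.0, 26.5, 29.0])
    fan_state = {"speed_pct": 0.0}
    for _ in range(3):
        control_step(lambda: next(readings),
                     lambda pct: fan_state.update(speed_pct=pct))
        print(fan_state["speed_pct"])  # 22.0, 42.0, 62.0
```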