At EcoCooling, we have taken all of that temperature information into the cooling system’s process controllers (without needing any extra hardware). Normally, the processing and cooling systems are controlled separately, with inefficient time-lags and wasted energy. We have made them close-coupled, able to react to load changes in milliseconds rather than minutes.
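To make ‘close-coupled’ concrete, here is a minimal sketch of the idea, assuming a simple proportional loop: the servers’ own temperature readings drive the fan speed directly, with no separate management layer in between. The sensor and actuator calls here are hypothetical placeholders, not our actual controller interface.

```python
# A minimal sketch of close-coupled control, assuming a simple
# proportional loop. read_server_temps() and set_fan_speed() are
# hypothetical placeholders, not EcoCooling's actual interface.
import time

TARGET_INLET_C = 24.0   # assumed supply-air setpoint
GAIN = 0.08             # assumed proportional gain per degC of error

def read_server_temps() -> list[float]:
    """Placeholder: poll the inlet sensors the servers already have
    (e.g. via IPMI/Redfish), so no extra hardware is needed."""
    return [23.8, 24.3, 24.1]

def set_fan_speed(speed: float) -> None:
    """Placeholder actuator call to the cooling unit."""
    print(f"fan speed -> {speed:.0%}")

fan_speed = 0.5  # normalised 0..1
for _ in range(50):  # sub-second loop: milliseconds, not minutes
    hottest = max(read_server_temps())
    # Spin up when the hottest server runs warm, trim back when cool
    fan_speed += GAIN * (hottest - TARGET_INLET_C)
    fan_speed = min(1.0, max(0.1, fan_speed))
    set_fan_speed(fan_speed)
    time.sleep(0.1)
```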
As a result, we now have BTDC-1 ‘Pod 1’
operating with a PUE of not 1.8, not 1.05,
but 1.03.
The BTDC-1 project has demonstrated a robust, repeatable strategy for cutting the energy cost of cooling a 100kW data centre from £80,000 a year to just £3,000. That is a saving of £77,000 a year for a typical 100kW facility.
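For readers who want to check the arithmetic, a simple model reproduces these figures. The electricity tariff is our assumption here (roughly 11.4p/kWh makes the numbers line up); cooling energy is simply IT energy multiplied by the cooling overhead (pPUE − 1).

```python
# Back-of-envelope check of the cooling-cost figures. The tariff is
# an assumption (the article does not state one); ~11.4p/kWh makes
# the numbers line up.
IT_LOAD_KW = 100.0
HOURS_PER_YEAR = 8760
TARIFF_GBP_PER_KWH = 0.114  # assumed

def annual_cooling_cost(cooling_overhead: float) -> float:
    """Cooling energy = IT energy x (pPUE - 1)."""
    return IT_LOAD_KW * HOURS_PER_YEAR * cooling_overhead * TARIFF_GBP_PER_KWH

print(f"Legacy (PUE 1.8):  £{annual_cooling_cost(0.8):,.0f}")   # ~£80,000
print(f"BTDC-1 (PUE 1.03): £{annual_cooling_cost(0.03):,.0f}")  # ~£3,000
```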
Now consider the cost and environmental implications of this for the hundreds of new data centres anticipated to be rolled out to support 5G and ‘edge’ deployment.

Planning for the future – Automatically adjusting to changing loads

An integrated and dynamic approach to DC management is going to be essential as data centre energy-use patterns change. What do I mean? Well, most current-generation data centres (and indeed the servers within them) present a fairly constant energy load. That is because the typical server’s energy use only reduces from 100% when it is flat-out to 75% when it’s doing nothing.

At BTDC-1, we are also designing for two upcoming changes which are going to massively alter the way data centres need to operate. Firstly, the next
generations of servers will use far less
energy when not busy. So instead of
75% quiescent energy, we expect to see
this fall to 25%. This means the cooling system must continue to deliver a pPUE of 1.003 at very low loads. (It does.)
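To see why very low loads are the acid test, consider what a pPUE of 1.003 actually leaves the cooling system to work with. A quick sketch, taking pPUE as (IT power + cooling power) / IT power, with illustrative load points:

```python
# Why very low loads are the hard case: holding a cooling pPUE of
# 1.003 means the cooling system's own draw must shrink in step with
# the IT load. Here pPUE = (IT power + cooling power) / IT power;
# the load points are illustrative.
def cooling_budget_kw(it_load_kw: float, target_ppue: float) -> float:
    return it_load_kw * (target_ppue - 1.0)

for it_kw in (100.0, 75.0, 25.0):
    watts = cooling_budget_kw(it_kw, 1.003) * 1000
    print(f"IT load {it_kw:5.1f} kW -> cooling budget {watts:4.0f} W")
# At a 25kW quiescent load, the entire cooling system has ~75 W to play with.
```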
Also, BTDC-1 Pod 1 isn’t just sitting there idly drawing power – our colleagues from the project are using it to emulate a complete Smart City (including the massive processing load of driverless cars).

The processing load varies wildly, with massive loads during the commuter ‘rush hours’ on weekday mornings and afternoons, and then (comparatively) almost no activity in the middle of the night. So we can expect many DCs (and particularly the new breed of ‘dark’ edge DCs) to have wildly varying power and cooling load requirements.
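One standard piece of HVAC physics (not a BTDC-1 measurement) helps explain why a cooling system that tracks these swings saves so much: by the fan affinity laws, fan power scales roughly with the cube of fan speed, so running slowly through quiet periods is disproportionately cheap. A rough illustration, with an assumed fan rating:

```python
# Illustrative only: the fan affinity laws (standard HVAC physics,
# not a BTDC-1 figure) say fan power scales roughly with the cube of
# fan speed, so tracking a quiet night-time load is disproportionately
# cheap. The 3kW full-speed fan rating is an assumed figure.
FULL_SPEED_FAN_KW = 3.0  # assumed

def fan_power_kw(speed_fraction: float) -> float:
    return FULL_SPEED_FAN_KW * speed_fraction ** 3

for pct in (100, 70, 50, 30):
    print(f"{pct:3d}% airflow -> {fan_power_kw(pct / 100):.2f} kW fan power")
```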
Call to leading vendors

At BTDC-1, we have three research pods. Pod 2 is empty – waiting for one or more of the mainstream server manufacturers to step up to the ‘global data centre efficiency’ plate and get involved.
As a sneak peek of what’s to come in future project news, Pod 3 (ASIC) is now achieving a PUE of 1.004, using the same principles outlined in this article.
We are absolutely certain that if server
manufacturers work with the partners at
the BTDC-1 research project, we can help
them (and the entire data centre world)
to slash average cooling PUEs from 1.5
to 1.004.
The opportunity for EcoCooling to work with RISE (the Swedish Institute of Computer Science) and the German research institute
Fraunhofer has allowed us to provide
independent analysis and validation of
what can be achieved using direct fresh
air cooling.
The initial results are incredibly promising and, considering we are only halfway through the project, we are excited to see what additional efficiencies can be achieved. ◊