The original definition of Artificial Intelligence (AI) was the ability of computers to perform tasks that people usually did. However, as the technology continues to develop, it is apparent that there are many situations in which computers can collate up-to-the-minute data to provide analysis that no human could feasibly carry out. This isn't to say that computers can, or will, replace human jobs, but rather that they offer support and provide a better service, transforming the way in which tasks are carried out across business today.

Many companies are adjusting their business models to secure stakeholders' trust and safeguard long-term profitability, which means that in order to stay competitive, businesses are increasingly leaning on AI to automate and optimise key processes that can increase productivity and operational efficiency. The technology is already proving invaluable in heavily data-driven industries such as research and development or banking. AI now allows engineers to simulate a new vehicle in a matter of hours, or to review MRI scans in a fraction of the time a doctor would need – just two examples of the myriad applications AI can support. The results are amazing, but what is the cost?

Digitalisation has had a massive impact on our climate, with its acceleration driving ever-increasing demand for electricity and rising carbon emissions. Training just one computer to behave in a human-like way via Machine Learning (ML) on a GPT-3 Large Language Model can draw upwards of 12 MW of power and cost up to US$3 million – not to mention the huge amount of energy consumed over the course of the process. And if at any time the requirements change, the training must be repeated from scratch.
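To put those numbers in perspective, a rough back-of-envelope calculation shows how quickly a sustained multi-megawatt training run translates into an electricity bill. The power figure comes from the estimate above; the training duration and electricity price below are purely illustrative assumptions, not figures from atNorth or this article.

# Back-of-envelope estimate of the electricity consumed by a large ML training run.
# All inputs other than the 12 MW figure are illustrative assumptions.

power_draw_mw = 12      # sustained power draw of the training cluster (figure cited above)
training_days = 30      # assumed duration of one full training run
price_per_mwh = 100     # assumed electricity price in US$ per MWh

energy_mwh = power_draw_mw * 24 * training_days   # MW x hours = MWh
energy_cost = energy_mwh * price_per_mwh          # US$

print(f"Energy used: {energy_mwh:,.0f} MWh")           # 8,640 MWh on these assumptions
print(f"Electricity cost alone: ${energy_cost:,.0f}")  # roughly $864,000

Even on these illustrative assumptions, the electricity bill alone runs to hundreds of thousands of dollars per training run, before hardware, cooling and staffing are counted.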
With the increase in demand for these high-density workloads, businesses have had to invest heavily in computer hardware such as Graphics Processing Units (GPUs), which are necessary for the advanced calculations behind AI, Natural Language Processing (NLP), scientific simulations and risk analysis, among other things.
Most IT departments cannot support this significant infrastructure internally. For businesses to build their own GPU infrastructure to power AI activity could take 18 months – considerably delaying any product to market – and, while such workloads can be hosted in the public cloud, most companies would not deem this secure enough. Yet access to physical data centres that have the capability to handle High Performance Computing (HPC) may not be possible, especially if instant global data processing is required.
This is because most legacy data centres built for general-purpose computing have become outdated and are not capable of accommodating the GPU servers, rack density and storage capacity needed to manage these workloads efficiently. Additionally, the amount of energy required to upgrade these legacy data centres to HPC standards is vast – if every data centre upgraded to adhere to high-performance specifications, the world would experience a serious energy crisis.
The nature of these applications, in addition to the rising costs associated with using public cloud services and increased pressure on sustainability, has tasked many organisations with the challenge of finding new alternatives that ensure continuity for high-performance applications in a cost-effective and energy-efficient way. As businesses strive to remain competitive and the demand for this kind of technology increases, there must be a way of preventing the financial, technological and ecological fallout that otherwise seems likely.
Guy D'Hauwers, Global Director – HPC & AI, atNorth
Fortunately, there are steps that businesses can take to future-proof themselves. Firstly, look to utilise newer data centres that are built from the ground up with these high-density, high-performance applications in mind. This means that access to data ecosystems can be constructed within days, allowing businesses to get their product to market significantly faster. The data is protected by encrypted links that allow the data centre to act as an extension of the company's own storage infrastructure.
Additionally, GPU-as-a-Service (GPUaaS) technology means the offering is entirely scalable: as business requirements change, the infrastructure can be increased or decreased to match demand. This flexible approach is not only more sustainable for businesses whose requirements fluctuate but is more cost-effective too.
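As a minimal sketch of what that elasticity means in practice, the logic below sizes a GPU fleet to the length of the job queue. The provisioning calls are hypothetical stand-ins, not a real atNorth or vendor API.

# Illustrative autoscaling logic for a GPUaaS deployment.
# provision_gpus and release_gpus are placeholders for whatever API a provider actually exposes.

def provision_gpus(count: int) -> None:
    print(f"Requesting {count} additional GPU instance(s)")  # placeholder for a real API call

def release_gpus(count: int) -> None:
    print(f"Releasing {count} GPU instance(s)")              # placeholder for a real API call

def scale_gpu_fleet(current_gpus: int, queued_jobs: int, jobs_per_gpu: int = 4) -> int:
    """Return the target fleet size for current demand and issue the scaling calls."""
    target = max(1, -(-queued_jobs // jobs_per_gpu))  # ceiling division; keep at least one GPU warm
    if target > current_gpus:
        provision_gpus(target - current_gpus)          # scale up as demand grows
    elif target < current_gpus:
        release_gpus(current_gpus - target)            # scale down, and stop paying, when demand drops
    return target

# Example: demand spikes from 2 GPUs to 30 queued jobs, then falls back to 3 jobs.
fleet = scale_gpu_fleet(current_gpus=2, queued_jobs=30)     # scales up to 8 GPUs
fleet = scale_gpu_fleet(current_gpus=fleet, queued_jobs=3)  # scales back down to 1 GPU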
Another consideration is the environmental cost of HPC. Data centres built to cater to these AI workloads require powerful systems with significant cooling requirements, consuming large amounts of energy at considerable cost.
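One way to make that overhead concrete is the industry's standard Power Usage Effectiveness (PUE) metric, the ratio of total facility power to IT equipment power. The figures below are illustrative assumptions rather than data from any particular facility.

# PUE = total facility power / IT equipment power.
# Figures are illustrative assumptions, not atNorth or article data.

it_load_mw = 5.0    # assumed IT load of an AI-focused data hall
pue_legacy = 1.8    # indicative of an older, air-cooled facility
pue_modern = 1.2    # indicative of a newer facility designed for high-density cooling

facility_legacy = it_load_mw * pue_legacy   # 9.0 MW total draw
facility_modern = it_load_mw * pue_modern   # 6.0 MW total draw

print(f"Legacy facility: {facility_legacy:.1f} MW, of which {facility_legacy - it_load_mw:.1f} MW is overhead")
print(f"Modern facility: {facility_modern:.1f} MW, of which {facility_modern - it_load_mw:.1f} MW is overhead")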
Many businesses make pledges around their sustainability or carbon-neutral commitments, but these are often hard to implement and substantiate. The bulk of net-zero claims are based on buying from renewable energy suppliers that might use certificates such as guarantees of origin (GoOs) and power purchase agreements (PPAs). Unfortunately, some suppliers are