EXPERT OPINION

High-Performance Computing (HPC) is now a necessity for enterprises seeking to maintain a competitive edge. The question is not whether to invest in HPC, but how best to leverage it to achieve strategic business goals. However, the choice between housing HPC infrastructure on-premises and using colocation facilities can have a significant impact on costs, as well as on your organisation's agility and scalability.
While the public cloud as a delivery model may be a viable alternative or complementary option for certain HPC applications and standard workloads, many HPC applications won't be optimised for it. This brings the decision down to self-build on-premises, or renting your data centre space, critical infrastructure and engineering support services through colocation. The key focus areas to help with making the right decision will most probably include the following.
Space
The first consideration is whether you have sufficient space on premises to house a high-performance system. Given that these machines are many times more powerful than anything else in the corporate data centre, it's not just a matter of racking another box in the server room.
Efficient cooling
HPC rigs generate a significant amount of heat that needs to be managed carefully. Depending on the power of the system, existing cooling systems may be insufficient, and upgrading the cooling system to accommodate HPC requirements, and to remove the large volumes of heat generated, can be a costly and time-consuming process.

Liquid-cooled systems, which circulate and recirculate coolant within water-cooling loops, therefore become a necessity. These systems typically snake in and out of the building and are worth considering even if your HPC needs are relatively modest. Retrofitting them once a system is in operation can be an even more complicated, costly and time-consuming process that involves significant downtime.
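As a rough illustration of when air cooling stops being enough, almost all of the electrical power a rack draws is rejected as heat. The short Python sketch below compares per-rack heat loads against an assumed ceiling of around 20kW for a conventionally air-cooled rack; that ceiling is an illustrative planning figure rather than a measured limit, and the rack densities are those discussed under Abundant power below.

# Rough per-rack heat load versus an assumed air-cooling ceiling.
# The 20kW ceiling is an illustrative assumption; the rack densities are
# the 8kW, 12kW, 50kW and 100kW figures quoted in this article.

AIR_COOLING_CEILING_KW = 20.0  # assumed practical limit for an air-cooled rack

def cooling_gap_kw(rack_power_kw: float) -> float:
    """Heat that liquid cooling must remove, assuming power in roughly equals heat out."""
    return max(0.0, rack_power_kw - AIR_COOLING_CEILING_KW)

for rack_kw in (8, 12, 50, 100):
    gap = cooling_gap_kw(rack_kw)
    verdict = "air cooling may suffice" if gap == 0 else f"~{gap:.0f}kW falls to liquid cooling"
    print(f"{rack_kw:>3}kW rack: {verdict}")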
Abundant power
HPC systems, due to their high component density, necessitate specialised power and cooling provisions. While standard computing systems commonly operate at a power density of 5–8kW per rack, with high-density blade platforms running at approximately 12kW per rack, HPC systems can require significantly higher levels. We are seeing requirements for 50 and even 100kW per rack.

The energy demands of HPC systems are remarkable. To put it in perspective, a 40-teraflop system requires around 2MW of power, roughly the equivalent of a small data centre. As HPC systems become more advanced and power-hungry, the challenge of meeting their energy demands only increases.

The ultimate limitation for most on-premises and indeed colocation data centres will therefore be the availability of sufficient power. Delivering highly concentrated power to the rack in ever-smaller footprints is critical, as dense HPC equipment needs high power densities.
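To see how quickly these densities consume a site's power budget, a back-of-the-envelope calculation helps. The sketch below uses the per-rack figures above; the 20-rack deployment and the 1.3 facility overhead factor (PUE) are illustrative assumptions rather than figures from any particular site.

# Back-of-the-envelope power budgeting with the densities quoted above.
# The 20-rack deployment and the PUE of 1.3 are illustrative assumptions.

PUE = 1.3  # assumed facility overhead for cooling and power distribution

def facility_power_mw(racks: int, kw_per_rack: float, pue: float = PUE) -> float:
    """Total facility power in MW for a given rack count and per-rack density."""
    return racks * kw_per_rack * pue / 1000.0

for kw_per_rack in (8, 50, 100):
    print(f"20 racks at {kw_per_rack}kW/rack -> ~{facility_power_mw(20, kw_per_rack):.1f}MW from the grid")

print(f"2MW of IT load at 50kW/rack fits in roughly {2000 / 50:.0f} racks")

Even a single row of HPC racks can therefore approach the 2MW cited for a 40-teraflop system, which is why power availability, rather than floor space, tends to be the deciding constraint.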
Fibre connectivity and latency

The majority of colocation data centres have far higher levels of diverse fibre connectivity compared to on-premises campuses. Basic public connectivity solutions will generally not be sufficient for HPC systems.
Ensuring connectivity through multiple diverse connections from the facility is crucial, along with specialised connections to public clouds, especially in the case of hybrid cloud solutions. These bypass the public Internet to enable more consistent and secure interactions between the HPC platform and other workloads the organisation may be operating.
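On the latency side, simple propagation arithmetic offers a useful sanity check. The sketch below uses the common planning figure of roughly five microseconds per kilometre for light travelling through fibre; the distances are illustrative, and real circuits add routing and switching overhead on top of this physical floor.

# Rule-of-thumb round-trip propagation delay over a fibre path.
# ~5 microseconds per kilometre (one way) is the usual planning figure for
# light in glass; actual circuits add equipment and routing latency on top.

ONE_WAY_US_PER_KM = 5.0

def round_trip_ms(distance_km: float) -> float:
    """Best-case round-trip time in milliseconds for a fibre path of this length."""
    return 2 * distance_km * ONE_WAY_US_PER_KM / 1000.0

for km in (10, 100, 1000):  # metro, regional and long-haul distances (illustrative)
    print(f"{km:>4}km of fibre -> at least {round_trip_ms(km):.2f}ms round trip")

The physical distance between the HPC platform, its data sources and any cloud on-ramps therefore deserves as much scrutiny as the bandwidth of the links themselves.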
