FEATURE
It will be interesting to see how the designation of data centres as Critical National Infrastructure (CNI) by the UK government will affect all stages of their lifecycle: site location, construction and operation. The main changes will involve increased government oversight, reporting and potential audits, along with new standards specific to data centres to ensure they meet high security and operational benchmarks.
Alastair Waite, Senior Manager, Global Data Centre Market Development, CommScope
It's clear AI is affecting data centre construction, deployments and network architecture design in general. From a regulatory standpoint, power-hungry AI has made it harder to secure approval for new data centre builds; regulators are hyperconscious of the environmental footprint of data centres and their impact on local communities.
AI also continues to challenge day-to-day data centre design and architecture. For example, processing large AI workloads requires significantly higher connectivity between GPU servers, but power and heat constraints limit the number of servers that can be installed in each rack. As a result, each GPU server connects to a switch within its row or room, requiring far more inter-rack fibre cabling, running 400G and 800G connections, than was previously seen in cloud data centres.
However, this is problematic. AI and Machine Learning (ML) algorithms are highly sensitive to latency – similar to High-Performance Computing – meaning AI clusters need to keep GPU servers located close together, with most connections limited to 50 metres. That said, not all data centres can accommodate GPU racks as a single cluster. These racks easily require over 40kW of power, forcing traditionally cooled data centres to spread them out – something that was never a problem for traditional workloads, but which now collides with the short-reach connectivity AI clusters depend on.
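As a rough illustration of this power-versus-proximity trade-off, the Python sketch below estimates how many GPU servers fit in a 40kW rack and how long the resulting row of racks becomes relative to the 50-metre reach limit. The per-server power draw and rack pitch are illustrative assumptions, not figures from this article.

# Back-of-envelope sketch: rack power budget vs. GPU cluster footprint.
# The 40kW rack budget and 50m reach limit are the figures cited above;
# per-server power draw and rack pitch are illustrative assumptions.

RACK_POWER_KW = 40        # per-rack power budget cited above
SERVER_POWER_KW = 10      # assumed draw of one 8-GPU server (illustrative)
RACK_PITCH_M = 1.2        # assumed floor width per rack position (illustrative)
REACH_LIMIT_M = 50        # short-reach connection limit cited above


def cluster_footprint(num_servers: int) -> tuple[int, float]:
    """Return (racks needed, approximate row length in metres)."""
    servers_per_rack = max(1, RACK_POWER_KW // SERVER_POWER_KW)
    racks = -(-num_servers // servers_per_rack)  # ceiling division
    return racks, racks * RACK_PITCH_M


if __name__ == "__main__":
    for servers in (32, 128, 512):
        racks, length = cluster_footprint(servers)
        verdict = "within" if length <= REACH_LIMIT_M else "beyond"
        print(f"{servers} servers -> {racks} racks, ~{length:.0f}m of row "
              f"({verdict} the {REACH_LIMIT_M}m limit)")

Under these assumptions a cluster of a few hundred GPU servers already stretches well past the 50-metre mark, which is exactly why spreading racks across a traditionally cooled hall becomes a connectivity problem.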
Cabling innovations allow data centres to navigate these narrow, congested GPU server-to-switch pathways and the increased cabling complexity that comes with AI. Innovations like rollable ribbon fibre allow up to six 3,456-fibre cables to fit into a four-inch duct – double the density of traditional fibre cables – helping to keep GPU-enabled servers fed with the huge volumes of data they need to process Large Language Models (LLMs). Coupled with new high-density connector technologies such as the MPO-16 connector, network designs can deliver both high-density connectivity and support for mainstream IEEE high-speed roadmap rates up to 1.6Tb – essential for future-proofing networks in preparation for AI.
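To put the density claim into simple arithmetic, the short sketch below multiplies out the duct-fill figures quoted above; the conventional-cable baseline is an assumption used only to illustrate the roughly two-fold gain.

# Duct-fill arithmetic for rollable ribbon fibre, using the figures quoted above.
# The conventional-cable baseline is an illustrative assumption.

FIBRES_PER_CABLE = 3_456            # rollable ribbon cable size quoted above
ROLLABLE_CABLES_PER_DUCT = 6        # cables per four-inch duct quoted above
CONVENTIONAL_CABLES_PER_DUCT = 3    # assumed baseline (roughly half the fill)

rollable = FIBRES_PER_CABLE * ROLLABLE_CABLES_PER_DUCT
conventional = FIBRES_PER_CABLE * CONVENTIONAL_CABLES_PER_DUCT

print(f"Rollable ribbon: {rollable:,} fibres per four-inch duct")
print(f"Conventional:    {conventional:,} fibres per four-inch duct")
print(f"Density gain:    {rollable / conventional:.1f}x")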