EDITOR’S QUESTION
DUNCAN CLUBB, HEAD OF DIGITAL INFRASTRUCTURE ADVISORY AT CBRE
You’ve probably heard about ‘the Edge’; it is being mentioned all over the trade press and at all the conferences that used to be dedicated to data centres. But ask three people what the Edge is and you will get at least four answers. None of the definitions is necessarily wrong; they simply come from different viewpoints. Telecommunications companies have one view (at least), content delivery networks have another, cloud companies yet another and so on. The one common theme is putting IT resources nearer to where they get used.
The Edge is merely a location where you put some form of processing power – simply because you have to. This is an important distinction. We have spent the last few years centralising processing power in enterprise data centres and cloud services – it was supposed to be the most efficient way. So why are we looking at a distributed network of systems, with a grid of small data centres, apparently breaking the mould that was meant to represent best practice and deliver the most efficient IT infrastructure?
The reason is simple – distributed infrastructure allows you to deploy a new class of application: the Edge-native application.
Edge-native applications need a raft of special features that classic or cloud architectures cannot provide in all locations, including low network latency and high bandwidth.
Latency – the delay between transmitting and receiving data on a network – is highly sensitive to distance, so it follows that the nearer you can put processing power to its users, the lower the latency.
This is really important for some industrial or safety-critical apps, as well as AR/VR and gaming, but I would argue that it’s the bandwidth that is the most important factor at the Edge.
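As a rough back-of-the-envelope sketch (the figures below are my own assumptions, not measurements from any particular network), light in optical fibre covers roughly 200,000km per second, which puts a hard floor under round-trip latency:

# Rough sketch: propagation delay alone sets a floor on round-trip latency.
# Assumes light in fibre travels at roughly 200,000 km/s (about 2/3 of c)
# and ignores switching, queuing and server processing time.

FIBRE_SPEED_KM_PER_MS = 200.0  # kilometres covered per millisecond in fibre

def min_round_trip_ms(distance_km: float) -> float:
    return 2 * distance_km / FIBRE_SPEED_KM_PER_MS

for km in (5, 50, 500, 2000):
    print(f"{km:>5} km away -> at least {min_round_trip_ms(km):.2f} ms round trip")

On those assumptions, moving processing from a cloud region 2,000km away to an Edge site 5km away removes around 20ms of unavoidable delay before any queuing or processing time is even counted.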
Edge-native applications are still in their infancy, but one thing most have in common is that they generate or consume huge quantities of data.
Much of that data is in the form of high-definition video feeds and many Edge-native applications have been written to perform processing and analytics on multiple video streams.
The use of Machine Learning or AI systems to analyse these huge data feeds is creating new requirements for compute resources to be provided at the Edge so backhaul networks do not become clogged .
One example in the retail industry is the use of video and ML systems to automate shops so that they can be operated without cashiers or tills – it started with the Amazon shops, but now all the big retailers are moving this way too.
These shops can use many tens or hundreds of high-definition (4K) cameras, each of which needs 15–32Mbps of bandwidth.
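A quick sketch of the arithmetic (the camera counts are illustrative assumptions; the per-camera bit rates are the 15–32Mbps range quoted above) shows why that traffic is rarely backhauled raw:

# Back-of-the-envelope aggregate bandwidth for an automated shop.
# Camera counts are illustrative assumptions; per-camera bit rates use
# the 15-32 Mbps range for 4K streams quoted in the text.

def aggregate_gbps(cameras: int, mbps_per_camera: float) -> float:
    return cameras * mbps_per_camera / 1000.0

for cameras in (50, 100, 200):
    low, high = aggregate_gbps(cameras, 15), aggregate_gbps(cameras, 32)
    print(f"{cameras:>3} cameras: {low:.2f} - {high:.2f} Gbps of raw video")

Several gigabits per second of raw video per shop is an impractical backhaul load, which is why the analysis tends to run on site, with only results and metadata sent upstream.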
It’s not hard to see that data- or video-intensive applications are going to drive the need for Edge Compute – a distributed data centre model that will enable the next generation of apps by providing local processing power.
The good news for the data centre industry is that this will add to the enterprise and cloud core, not take away from it. If anything, Edge Compute will also increase demand at the core. ◊