DATA CENTRE PREDICTIONS
Phil White, CTO, Scale Computing
Meet Edge Computing in theory
In a world that is increasingly data-driven, much of that information is generated outside of the traditional data centre. This is Edge Computing: the processing of data outside the traditional data centre, typically at the edge of a network, on site.
Infrastructure at the edge, despite its small hardware footprint, can collect, process and reduce vast quantities of data before uploading it to a centralised data centre or the cloud.
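As a rough sketch of this collect-process-reduce pattern (in Python, with a hypothetical read_sensor() standing in for a real device and a print standing in for the actual upload), an edge node might boil thousands of raw readings down to a compact summary before anything crosses the network:

    import json
    import random
    import statistics

    def read_sensor():
        # Hypothetical stand-in for a real device reading
        return 20.0 + random.random() * 5

    def summarise(readings):
        # Reduce thousands of raw samples to a small payload
        return {
            "count": len(readings),
            "mean": round(statistics.mean(readings), 2),
            "min": round(min(readings), 2),
            "max": round(max(readings), 2),
        }

    # Collect and process locally; only the summary leaves the site.
    raw = [read_sensor() for _ in range(10_000)]
    print(json.dumps(summarise(raw)))  # in practice: send to the data centre or cloud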
Instead of sending data across long routes, this allows data to be processed and acted upon closer to the point of creation. Many use cases, such as self-driving cars, quick-service restaurants, grocery shops and industrial settings like energy plants and mines, have found Edge Computing key to their implementation.
This said, there are still improvements to be made in how effectively information captured at the edge is used. Since AI is still in its infancy, training its models requires an incredible amount of resources. For training purposes, Edge Computing is best suited to letting information and telemetry flow into the cloud for deep analysis; models trained in the cloud should then be deployed back to the edge. Cloud and data centres will always be the best resources for model creation.
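A minimal sketch of that loop, assuming scikit-learn and using a local file as a stand-in for a shared model store (the names and data here are illustrative, not a specific vendor workflow):

    import joblib
    from sklearn.linear_model import LogisticRegression

    # Cloud side: train on telemetry gathered from the edge (synthetic here).
    X = [[0.10], [0.35], [0.40], [0.80], [0.90], [0.05]]
    y = [0, 0, 0, 1, 1, 0]   # e.g. labels produced by deep analysis in the cloud
    model = LogisticRegression().fit(X, y)
    joblib.dump(model, "model.joblib")   # publish the trained model

    # Edge side: load the trained model and make decisions locally.
    edge_model = joblib.load("model.joblib")
    print(edge_model.predict([[0.87]]))  # no cloud round-trip needed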
And now in practice
Cerebras, a next-generation silicon chip company, has just introduced its new ‘Wafer Scale Engine’, which is designed specifically for the training of AI models.
With 1.2 trillion transistors and 400,000
processing cores, the new chip is
phenomenally fast. However, all of this
consumes a huge amount of power,
which means it isn’t viable for most
Edge deployments.
Organisations can create and better utilise data lakes by consolidating Edge Computing workloads using hyperconverged infrastructure (HCI). Once data is in a data lake, it’s available to all applications for analysis.
On top of this, Machine Learning can
provide new insights using shared data
from different devices and applications.
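As a toy illustration of that shared-data idea, the sketch below uses a local directory as a stand-in for a data lake: each edge site writes telemetry into a common layout, and any application, including Machine Learning jobs, can then read across all of it (paths and fields are hypothetical):

    import json
    from pathlib import Path

    lake = Path("datalake/telemetry")   # hypothetical data-lake prefix

    # Each edge site lands its records under a shared, predictable layout.
    for site, temp in [("store-01", 21.4), ("plant-07", 35.2), ("mine-03", 28.9)]:
        site_dir = lake / site
        site_dir.mkdir(parents=True, exist_ok=True)
        (site_dir / "latest.json").write_text(json.dumps({"site": site, "temp_c": temp}))

    # Any application can now scan the whole fleet’s data in one place.
    records = [json.loads(p.read_text()) for p in lake.glob("*/latest.json")]
    print(max(records, key=lambda r: r["temp_c"]))   # hottest site across the fleet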
HCI creates ease of use by combining servers, storage and networking in one box. This eliminates many of the configuration and networking challenges that come with Edge Computing.
Computing. Additionally, platforms can
integrate management for hundreds or
thousands of Edge devices in different
geographical locations all with different
types of networks and interfaces. These
allow for much of the complexity to be
avoided, which significantly reduces
operational expenses.
WITH THE HELP OF HCI AND EDGE COMPUTING, ORGANISATIONS CAN HARNESS AI TOOLS FOR SMARTER DECISION-MAKING.
How does AI benefit from HCI
and Edge Computing?
With the introduction of smart home
devices, wearable technology and self-
driving cars, AI is becoming much more
common and is only set to grow, with an estimated 80% of devices having some sort of AI feature by 2022.
Most AI technology relies on the cloud: it makes decisions based on data collected and stored in the cloud it is accessing. However, since the data has to travel to data centres and then back to the device, this introduces latency. Latency is especially problematic for technologies such as self-driving cars, which cannot wait for a data round-trip to know when to brake or how fast to travel.
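A quick back-of-the-envelope calculation makes the point; the latency figures below are illustrative assumptions, not measurements:

    speed_kmh = 100
    speed_ms = speed_kmh / 3.6   # roughly 27.8 metres per second

    # Distance the car covers while waiting for each kind of decision.
    for label, latency_s in [("cloud round-trip", 0.150), ("local edge inference", 0.010)]:
        print(f"{label}: {speed_ms * latency_s:.1f} m travelled at {speed_kmh} km/h")

At 100 km/h, a 150 ms round-trip to the cloud means the car travels over four metres before the answer arrives; local inference cuts that to well under half a metre.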
A key benefit of Edge Computing for AI is that the necessary data lives local to the device, which reduces latency. New data can be stored, accessed and then uploaded to the cloud when a connection is available, while the data itself resides at the edge of the device’s network. This greatly benefits AI devices, such as smartphones and self-driving cars, which don’t always have access to the cloud due to network availability or bandwidth but rely on data processing to make decisions.
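A minimal sketch of that store-locally, upload-when-connected behaviour (the connectivity check and upload call are hypothetical stand-ins):

    import random
    from collections import deque

    buffer = deque()   # local edge storage for records awaiting upload

    def cloud_reachable():
        return random.random() > 0.5   # hypothetical connectivity check

    def upload(record):
        print("uploaded:", record)     # stand-in for a real cloud API call

    for i in range(5):
        buffer.append({"seq": i, "value": round(random.random(), 3)})
        while buffer and cloud_reachable():
            upload(buffer.popleft())   # drain the backlog whenever the network allows
    # Anything still buffered stays safely at the edge until the next window.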