A year or so ago there were over 500 hyperscale data centres in existence worldwide, with a further 170 or more in the pipeline, according to Synergy Research. But the explosion in cloud adoption – public and private – is not only driving demand for hyperscale data centres. Fuelled by the IoT and the arrival of 5G, the ongoing decentralisation of the cloud is also contributing hugely to the growing shift towards distributed ‘edge’ computing.
Edge cloud environments are now pivotal in extending the cloud down to the local level. They enable much of the data processing, storage, control and management of local applications to take place much closer to users, machines and devices. This significantly improves latency and optimises application responsiveness, maximising enterprise productivity, efficiency, competitive advantage, and user and customer experience. Low latency also underpins the future availability and performance of 5G mobile network coverage; super-fast streaming video for content delivery providers; real-time cloud gaming; real-time AI and machine learning/deep learning decision making in industrial automation and medical environments; pinpoint control of driverless vehicles; and much more.
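The physics behind this is worth making concrete. Light in optical fibre travels at roughly 200,000 km/s (about two-thirds of its speed in a vacuum), which puts a hard floor under round-trip latency regardless of how fast the servers at either end are. A minimal sketch of that floor, using illustrative distances:

```python
# Propagation delay sets a hard floor on network latency: light in
# optical fibre covers roughly 200 km per millisecond, one way.
FIBRE_KM_PER_MS = 200.0

def min_round_trip_ms(distance_km: float) -> float:
    """Best-case round-trip time from fibre propagation alone.
    Real paths add routing, queuing and serialisation delay on top."""
    return 2 * distance_km / FIBRE_KM_PER_MS

# A hyperscale site 500 km away costs at least ~5 ms per round trip;
# an edge facility 20 km away brings that floor down to ~0.2 ms.
print(min_round_trip_ms(500))  # 5.0
print(min_round_trip_ms(20))   # 0.2
```

The distances above are hypothetical, but the ratio is the point: no amount of server-side optimisation can claw back propagation delay, which is why proximity matters for latency-sensitive workloads.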
“Fuelled by the IoT and 5G, the ongoing decentralisation of the cloud is contributing hugely to the growing shift towards distributed edge computing”
For lower latency and greater agility, data centres must be able to rapidly provision and scale compute and storage resources exactly where they are needed – but without risk of compromising IT security and resilience. At the same time, it is important to bear in mind that edge computing in edge data centres complements rather than competes against public cloud services.
Therefore, CIOs and developers reliant on the lowest latency possible must consider the best place to deploy and support new services, as well as rethink the network architecture. In doing so, large enterprises and SMEs, as well as cloud and telecoms service providers, will benefit from their data and applications being much closer to users and customers, with only less time-sensitive, non-mission-critical data being sent to the centralised public cloud for further analysis or archiving.
Apart from improved latency, keeping data local can significantly reduce the cost of backhauling it all to one or two large hyperscale data centres. High-volume data transmission costs can be enormous – autonomous vehicles being a case in point.
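A back-of-the-envelope estimate shows why. All figures in the sketch below are hypothetical assumptions for illustration – not vendor pricing – but autonomous test vehicles are often cited as producing terabytes of raw sensor data per vehicle per day, and an edge-first design backhauls only a filtered fraction of it:

```python
# Illustrative backhaul cost estimate. Every constant here is an
# assumption chosen for the example, not a real price or dataset.
TB_PER_VEHICLE_PER_DAY = 4     # assumed raw sensor output per vehicle
FLEET_SIZE = 100               # assumed number of vehicles
COST_PER_TB = 50.0             # assumed $/TB to ship to a distant hyperscale site
EDGE_FILTER_RATIO = 0.05       # assumed fraction still sent upstream after local processing

def monthly_backhaul_cost(upstream_fraction: float) -> float:
    """Cost of transmitting a month (30 days) of fleet data upstream."""
    tb_per_month = TB_PER_VEHICLE_PER_DAY * FLEET_SIZE * 30
    return tb_per_month * upstream_fraction * COST_PER_TB

centralised = monthly_backhaul_cost(1.0)                # everything backhauled
edge_first = monthly_backhaul_cost(EDGE_FILTER_RATIO)   # only summaries leave the edge
print(f"Centralised: ${centralised:,.0f}/month, edge-first: ${edge_first:,.0f}/month")
```

Under these assumed figures the edge-first design cuts the transit bill twentyfold; the exact numbers will vary, but the shape of the saving is the argument for processing close to the source.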
Engineering a hybrid cloud approach
In response to new and growing market requirements, a more regionalised edge data centre colocation solution has become necessary. This directly addresses the latency issues and data transit costs that typically occur with centralised cloud business models overly reliant on data centres in far-off locations – at the other end of the country or even further afield. Edge colocation facilities are purpose-designed to fill the considerable gaps between modular micro (unmanned) data centres – located at the very edge of the network, for example next to mobile cell towers, on factory floors and in hospital wards – and the centralised hyperscale ones.
However, optimising a best-of-both-worlds approach between public and local edge private clouds requires data centres strategically located close to regional internet exchanges, as well as diverse onsite carrier fibre connectivity. Hybrid architectures – combining public, private and perhaps on-premise legacy IT – will also be required, which often creates complex engineering challenges.
Application migrations will dictate the hybrid strategy, and one size does not fit all. Building the business case and doing the preparation work can be challenging: which applications will be placed in the edge data centre and which in the hyperscale data centre; how long it will take to migrate all the applications to the new infrastructure; what skills and experience are available within the IT department; whether any remaining on-premise legacy IT infrastructure needs to be accommodated; and what software is required for managing all environments within a hybrid implementation.
“Optimising a best-of-both-worlds approach between public and local edge private clouds requires data centres strategically located close to regional internet exchanges, as well as diverse onsite carrier fibre connectivity”
With the above in mind, the level of on-site engineering competence available at regional colocation sites will be very important. Connectivity directly into public cloud provider infrastructure via on-site gateways is another factor, along with the flexibility to carry out pre-production testing in the data centre to ensure everything works prior to launching.
The ‘need for speed’ – low-latency connectivity, greater bandwidth and the benefits of reduced data transit costs – must not be allowed to distract from the fundamentals of colocation: continuous 24/7 availability of data and storage systems through the provision of secure and resilient critical infrastructure.
It is wise to check physical and cyber security, forward power availability, the types of cooling systems used and overall energy efficiency (PUE) – use of 100 per cent renewably sourced power should be a given by now, but also look at how else a potential data centre provider is addressing sustainability. Finally, request proof of uptime service record, proven certifiable security and operational credentials, DR and business continuity contingencies, and the ability to provide end-to-end server migration and installation services.
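For readers unfamiliar with the metric: Power Usage Effectiveness is simply total facility energy divided by the energy delivered to IT equipment, with 1.0 as the theoretical ideal. A minimal sketch, using hypothetical figures:

```python
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power Usage Effectiveness: total facility energy divided by
    energy consumed by IT equipment. 1.0 is the theoretical ideal;
    everything above it is overhead such as cooling, power
    distribution losses and lighting."""
    if it_equipment_kwh <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kwh / it_equipment_kwh

# Hypothetical figures: a site drawing 1,300 kWh in total while its
# IT load consumes 1,000 kWh has a PUE of 1.3 -- i.e. 0.3 kWh of
# overhead for every kWh of useful IT work.
print(pue(1300, 1000))  # 1.3
```

The closer a provider's reported PUE is to 1.0, the less energy (and cost) is being spent on overhead per unit of IT load – which is why it belongs on any colocation due-diligence checklist alongside the sourcing of the power itself.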
Interested in hearing industry leaders discuss subjects like this and sharing their experiences and use-cases? The Data Centre Congress, 4th March 2021 is a free virtual event exploring the world of data centres. Learn more here and book your free ticket: https://datacentrecongress.com/