Edge Computing and its Vital Role in the 5G Economy
Edge computing is a way to provide services as close to the end user or device as possible in order to increase speed and reduce latency, and it is a linchpin of the value proposition of 5G.
What is mobile edge computing?
Mobile edge computing typically means providing cloud services (on demand, pay-as-you-go) close to the user or device, from within a mobile operator’s network.
Sometimes the benefit is simply lower latency and higher bandwidth. But other benefits include the ability to provide intimate data sharing between nearby mobile users, to integrate with mobile services, to provide local context about the physical world, and to make computationally sophisticated devices (e.g., augmented reality headsets) lighter and longer-lasting on battery.
The mobile edge can be described as an evolution of today’s cloud aimed at reducing latency and network congestion, but viewed in terms of the new applications it enables, it is much more. Most of the futuristic 5G services talked about today, such as mass adoption of self-driving vehicles, remote surgery, and the Industrial Internet of Things (IIoT), will depend on the availability of mobile edge computing.
Edge computing offers obvious technical benefits, according to CB Insights:
- Real-time or faster data processing and analysis: Data is processed closer to the source, not in an external data center or cloud, which reduces lag time.
- Lower costs: Enterprises spend less on data management solutions for local devices than for cloud and data center networks.
- Less network traffic: With an increasing number of IoT devices, data generation continues to rise at record rates, straining network bandwidth and creating data bottlenecks between devices and the cloud. Processing data at the edge keeps much of that traffic off the network.
- Increased application efficiency: With lower latency levels, applications can operate more efficiently and at faster speeds.
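As a rough illustration of why proximity lowers latency, the sketch below estimates round-trip propagation delay from physical distance alone, assuming signals in fiber travel at roughly two-thirds the speed of light (~200,000 km/s). The distances are illustrative assumptions, and real-world latency also includes routing, queuing, and processing time.

```python
# Rough round-trip propagation delay from distance alone.
# Assumes a signal speed in fiber of ~200,000 km/s (about 2/3 of c);
# the distances below are illustrative, not measured values.

FIBER_SPEED_KM_PER_S = 200_000

def round_trip_ms(distance_km: float) -> float:
    """Round-trip propagation delay in milliseconds for a one-way distance."""
    return 2 * distance_km / FIBER_SPEED_KM_PER_S * 1000

for label, km in [("edge node at cell tower", 10),
                  ("regional data center", 500),
                  ("distant cloud region", 2500)]:
    print(f"{label:>25}: {round_trip_ms(km):6.2f} ms")
```

Even this best-case physics puts a distant cloud region tens of milliseconds away round trip, while an edge node a few kilometers out sits well under a millisecond.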
Why does edge computing exist?
Today’s 4G networks can support roughly 4,000 devices per square mile. Networks providing 5G connectivity will dramatically increase that figure to roughly 3 million devices per square mile. To put that into perspective, New York City covers more than 300 square miles, which translates into nearly 1 billion devices needing connectivity in the Big Apple alone.
It no longer makes sense for all the data collected from those devices to be processed in a national or regional data center. Edge computing exists to bring the data center closer to where it’s needed, thus allowing for faster processing time.
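The device-density arithmetic above can be checked in a few lines; the 4G and 5G density figures and the New York City area are the ones quoted in the text.

```python
# Back-of-the-envelope check of the device-density figures quoted above.
DEVICES_PER_SQ_MI_4G = 4_000
DEVICES_PER_SQ_MI_5G = 3_000_000
NYC_AREA_SQ_MI = 300  # "more than 300 square miles"

nyc_5g_devices = DEVICES_PER_SQ_MI_5G * NYC_AREA_SQ_MI
density_increase = DEVICES_PER_SQ_MI_5G // DEVICES_PER_SQ_MI_4G

print(f"5G devices in NYC: {nyc_5g_devices:,}")   # 900,000,000 -- "nearly 1 billion"
print(f"Density increase:  {density_increase}x")  # 750x over 4G
```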
Consider one of the internet’s biggest time-wasters — videos of cats — as an example, as Jason Shepherd of Dell Technologies did at the Linux Foundation’s Embedded Linux Conference Europe.
"Cat videos explain the need for edge computing. If I post one of my videos online, and it starts to get hits, I have to cache it on more servers, way back in the cloud. If it goes viral, then I have to move that content as close to the subscribers that I can get it to. As a telco, or as Netflix or whatever, the closest I can get is at the cloud edge — at the bottom of my cell towers, these key points on the Internet. This is the concept of MEC, Multi-access Edge Computing — bringing content closer to subscribers. Well now, if I have billions of connected cat callers out there, I've completely flipped the paradigm, and instead of things trying to pull down, I've got all these devices trying to push up. That makes you have to push the compute even further down."
It’s a tongue-in-cheek example, and most consumers don’t need that kind of latency to share cat videos — streaming, in most cases, works just fine over 4G. But for many services, such as self-driving cars or virtual reality, incremental milliseconds of latency can make a world of difference.
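To make those incremental milliseconds concrete, consider how far a vehicle travels while waiting on a network round trip. The speeds and latency values below are illustrative assumptions, not measurements of any particular network.

```python
# Distance a vehicle travels while waiting on a network round trip.
# The speed and latency values are illustrative assumptions.

def distance_traveled_m(speed_kmh: float, latency_ms: float) -> float:
    """Meters traveled during the given latency at the given speed."""
    speed_m_per_ms = speed_kmh * 1000 / 3_600_000  # km/h -> m/ms
    return speed_m_per_ms * latency_ms

for latency in (5, 20, 100):  # ms: nearby edge, regional cloud, congested path
    print(f"{latency:3d} ms at 100 km/h -> "
          f"{distance_traveled_m(100, latency):.2f} m")
```

At highway speed, the difference between a 5 ms edge round trip and a 100 ms cloud round trip is a couple of meters of travel — the kind of margin that matters to a vehicle reacting to an obstacle.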
Where is the edge?
With a name like “the edge,” there is an intrinsic implication that we’re talking about the edge of “somewhere.” So where does the edge exist?
Because edge, in this case, refers to putting computing power as close to the end device as possible, the edge can be located essentially anywhere in a network architecture: in a central office, at the base of a cell tower, in a data center, or even on a customer’s physical premises.
No matter the location, the goal remains the same — to provide computing power as close to the device producing data as possible to increase speed and lower latency.
What business outcomes does edge computing deliver?
Edge computing delivers the variety of benefits identified above, but the one we haven’t mentioned yet may be the easiest sell — making money.
Service providers have invested trillions of dollars to build out 3G and 4G networks only to see over-the-top services such as Netflix or Amazon truly derive value from that connectivity. Service providers today and moving into the future will invest enormous sums into 5G networks and cannot afford to miss out like they did with 3G and 4G.
Edge computing allows for faster monetization of 5G investments by delivering next-gen applications to consumers and, more importantly, enterprise customers.
Tying it all together
Similar to 5G, edge computing is not a single technology, but a set of technologies being deployed in unison to achieve a business outcome. And to achieve the scale needed with edge deployments, white box hardware will undoubtedly play a part.
The disaggregated nature of 5G adds further complexity. White box hardware alongside open source software stacks provides significantly more innovation and development opportunities. But however simple that may sound, it is complicated in practice.
Disaggregating software from hardware allows service providers to realize cost savings by deploying original design manufacturer (ODM) equipment while leveraging the power of software to become a nimbler operator that can provide best-of-breed solutions tailored to industry verticals.
Service providers desire the value white box can deliver, but typically can’t commit to the labor-intensive processes needed to validate and deploy white box solutions effectively, nor are they able to dedicate the time or resources needed for ongoing support.
Deploying white box-based solutions means building something new and unknown. For service providers, it’s critical to have confidence that the solutions they deploy will work as intended once in the field. Operators need an experienced integrator that can oversee multi-vendor solutions, validate that design requirements are met, roll out the solution quickly at scale, optimize it on an ongoing basis, and coordinate technical support across vendors.