What is the Edge and why are we all talking about it?
Edge Computing Today
After the cloud came the Edge. We take a look at the Edge computing phenomenon and attempt to assess what all the fuss is about.
Edge computing is a popular term in the high-tech media, and, like many buzzwords that rise to claim a place in the limelight, the term "Edge" appears to have emerged fully formed before the industry settled on a clear definition. So what is Edge computing, and how does it differ from other approaches? How is Edge computing related to the Internet of Things (IoT) and other contemporary technologies? We decided it was time for a visit to the Edge.
Beyond the Cloud
For years now, large cloud providers have attempted to entice customers with the benefits of managing their data and infrastructure from a central, cloud-based location. For many companies, the cloud means abandoning their own on-premises data center and instead trusting their data to AWS, Azure, or another cloud company. The goal of the cloud is centralization. The data is all in one place, managed by experts with economies of scale. Security, fault tolerance, and other essential tasks are handled from the central facility. In many cases, all the data and processing power for an entire company might be in one or two locations, with branch offices accessing it from all over the world.
Edge computing is the exact opposite of this centralized cloud paradigm. The goal of Edge computing is to provide similar cloud-like services, but to move computing resources to the farthest edge of the environment – geographically close to where the data is gathered and accessed.
From CDNs to the Edge
The idea of placing computing resources at the edge of the network is nothing new. Around 20 years ago, the Web 2.0 era ushered in a new vision for the Internet. The volume of data skyrocketed as new services for social media, images, and video entered common usage, and the industry soon found that it could not develop higher-capacity hardware as fast as the market demanded it.
As early as the end of the 1990s, the idea of Content Delivery Networks (CDNs) was born. CDNs follow a very simple principle: Instead of keeping a movie on a single server in the USA, a company installs infrastructure closer to the customer and keeps a copy of the video near where the customer lives. Keeping the connection confined to a small region reduces latency, and it also simplifies the communication path, requiring fewer router hops and less overall traffic to deliver the data to the user.
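The CDN principle can be sketched as a simple cache sitting in front of a distant origin server. The sketch below is purely illustrative (the class and function names are our own, not any real CDN's API): the edge node answers repeat requests locally and only makes the expensive long-haul request on a cache miss.

```python
# Minimal sketch of the CDN caching principle: an edge node keeps
# local copies of content and only contacts the distant origin
# server on a cache miss. All names are illustrative.

class EdgeNode:
    def __init__(self, origin_fetch):
        self.origin_fetch = origin_fetch  # expensive, long-haul request
        self.cache = {}                   # local copies, close to the user
        self.origin_requests = 0

    def get(self, url):
        if url not in self.cache:         # cache miss: go back to the origin
            self.origin_requests += 1
            self.cache[url] = self.origin_fetch(url)
        return self.cache[url]            # cache hit: served locally

# A stand-in for the origin data center in another region
edge = EdgeNode(lambda url: f"content of {url}")

edge.get("/movies/example.mp4")  # first request travels to the origin
edge.get("/movies/example.mp4")  # second request is served from the edge
print(edge.origin_requests)      # prints 1: only one long-haul request
```

Real CDN caches add expiry, validation, and eviction policies on top of this idea, but the payoff is the same: most requests never leave the region.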
The original CDN systems were primarily designed to offer storage, but today's Edge environment requires a much more elaborate palette of services. New technologies such as robotics, the Internet of Things, remote sensing, and real-time monitoring generate large volumes of data, but they also require substantial computing power.
Home automation is a good example. Classic household appliances are replaced with state-of-the-art versions that provide Internet access and can be controlled remotely. Decisions occur beyond the device, and the results are transferred back to the device as simple commands. Voice-activated tools like Alexa introduce additional complications. When a user talks to Alexa, the device fields the command and sends the audio file to a server, where it is analyzed and interpreted. Alexa then receives the command back in machine-readable form so that it can initiate an action. The longer the distance between Alexa and the server that interprets the command, the more sluggishly Alexa behaves.
A home assistant turning up the thermostat can probably afford a little latency, but consider a robotics installation on a factory floor or a set of sensors that monitor environmental data and make complex decisions in real time. These scenarios would potentially benefit from some form of cloud-like consolidation of processing power, but the idea of sending every command and sensor reading to a massive server in another region of the country hundreds of miles away is severely limiting and, in some cases, totally unworkable. The Edge offers a framework for imagining how the same technology would work with many mini-data centers scattered wherever they are needed, instead of one massive data center serving a radius of a thousand miles.
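A back-of-envelope calculation shows why the distance matters. Assuming signals travel at roughly two-thirds the speed of light in optical fiber (about 200,000 km/s), the physics alone sets a floor on the round-trip time; real networks add routing, queuing, and processing delays on top of it:

```python
# Back-of-envelope propagation latency: why distance matters.
# Assumption: signals travel at roughly two-thirds the speed of
# light in fiber (~200,000 km/s). This is a lower bound; real
# round trips add routing, queuing, and processing delays.

SPEED_IN_FIBER_KM_S = 200_000  # approx. 2/3 of c

def round_trip_ms(distance_km):
    """Best-case round-trip time over fiber, in milliseconds."""
    return 2 * distance_km / SPEED_IN_FIBER_KM_S * 1000

print(f"Distant data center, 1600 km away: {round_trip_ms(1600):.1f} ms minimum")
print(f"Nearby Edge node, 10 km away:      {round_trip_ms(10):.2f} ms minimum")
```

Even in this idealized model, the distant data center costs 16 ms per round trip before any processing happens, while a nearby Edge node stays around a tenth of a millisecond; for a control loop that runs hundreds of times per second, only the latter is workable.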
An Edge Example: Autonomous Driving
All the major car manufacturers have been researching autonomous driving for years, and various Silicon Valley companies have also explored the possibilities of cars without drivers. Autonomous driving is a good example of why the experts believe Edge computing will figure so prominently in the future of IT. Clearly, it would not be practical for a central cloud infrastructure to process telemetry data and traffic information for a large region. If data from a car in North Dakota is first sent to the data center in New York, where it is evaluated and converted into instructions, which are then sent back to the car in North Dakota, the information would be out of date before it reaches the vehicle.
Letting autonomous vehicles crunch their own data does not appear to be a meaningful alternative. Evaluating all the sensor data of an autonomous vehicle requires considerable computing power, and it simply does not make sense to turn every car into a small roaming data center. Energy considerations also rule out extensive on-vehicle data processing: The car of the future will be electric, and every watt spent on computation is a watt not available for driving range.
Consequently, it will be necessary to provide cloud-like, off-vehicle data processing that is close enough to the vehicle to minimize latency – an ideal scenario for Edge computing. This solution could entail dividing a country into regions and then rolling out local islands for compute and storage. Ultimately, every car manufacturer will have to do their own thing, but similarities will undoubtedly exist between the designs.
The challenge will be to provide these small ad hoc mini-data centers wherever they are needed. Big cities would probably have to be divided into several areas – but, especially in city centers, comprehensive IT infrastructure is rarely available. Much of this infrastructure will need to be built from scratch or developed through complex sharing arrangements with existing companies. In the long run, the Edge will require lots of well-ventilated rooms scattered around the world, each with a server cabinet and a capable (possibly redundant) power supply.