The Hardware and Platform Requirements for Edge Computing

Suppose we do see edge computing.  What exactly do we end up seeing?  Is edge hosting just like cloud hosting, or does it perhaps tilt toward feature hosting or event processing?  If so, is the architecture needed, both hardware and software, likely to be different?  These are important questions for vendors, and no less important for the operators likely to be investing in edge deployment.  We have to answer them starting with the things we can count on most, then move from there to more speculative points.

One thing we know for sure is that the edge is unlikely to be just a small version of the cloud, based on exactly the same technologies.  The network edge isn’t a convenient place to build economy of scale because, by definition, each edge site serves only the users and devices connected nearby.  Since proximity to the user/application is the value axiom of edge computing, you clearly can’t expect somebody to backhaul traffic for a hundred miles to get to an edge data center.  The edge therefore cannot compete in resource efficiency with a deeper-hosted metro or regional cloud data center.

Event processing and caching are two application classes that fit an edge-host model.  Both could in theory be hosted on general-purpose servers, which means that standard Linux distros and most middleware could be used.  Caching and ad targeting are fairly traditional in other ways, resembling something like web hosting, and so I think standard container and VM software would likely be suitable.

When you move to event processing, the focus shifts.  Lambda-style or functional applications, microservices, and other event-handling components are typically very small units of code, so small that a VM would be wasted on one unless you ran something inside it to further subdivide the resources and schedule features efficiently.
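
To put the scale in perspective, here’s a minimal sketch (in Go, with hypothetical names and a hypothetical threshold) of the sort of self-contained event handler a functional platform would schedule.  A unit this small makes a dedicated VM obviously wasteful:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// TemperatureEvent is a hypothetical IoT event payload.
type TemperatureEvent struct {
	SensorID string  `json:"sensor_id"`
	Celsius  float64 `json:"celsius"`
}

// HandleEvent is the entire "application": parse an event, apply a
// threshold, and emit an alert decision.  A unit of code this small
// is what a functional platform schedules, not a full VM image.
func HandleEvent(raw []byte) (string, error) {
	var ev TemperatureEvent
	if err := json.Unmarshal(raw, &ev); err != nil {
		return "", err
	}
	if ev.Celsius > 80.0 { // hypothetical alarm threshold
		return fmt.Sprintf("ALERT %s: %.1fC", ev.SensorID, ev.Celsius), nil
	}
	return "", nil
}

func main() {
	msg, _ := HandleEvent([]byte(`{"sensor_id":"s1","celsius":91.2}`))
	fmt.Println(msg)
}
```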

The industry standard for microservice hosting is containers, and CoreOS (now Container Linux) is probably the most accepted of the container-focused server distros.  There’s a Lambda Linux designed to provide a pathway for logic off of Amazon’s Lambda service, but most operators would probably not want to tune their operating system down to something that specific.  A more general approach is represented by the Linux Foundation’s Akraino Edge Stack project, supported by Intel, Wind River, and a host of operators, including AT&T.  The key point for the project is the creation of an optimized platform for edge hosting, one that includes containers.

The problem with containers is scheduling.  Container deployment isn’t fast enough to be ideal for hosting very short-lived processes, and event-driven functions are typically expected to load on demand.  One way to improve event handling in container systems is to rely on a lot of memory and drop the idea that you have to load on demand, keeping event processes resident instead.  With that proviso, the Akraino approach seems the best overall path to a generalized software platform at the edge.
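
As a rough illustration of that memory-for-latency trade, here’s a hedged Go sketch of a “warm pool”: handler workers stay resident in memory and events are dispatched to them, so nothing pays a container cold-start penalty.  The names and pool size are illustrative, not anything Akraino specifies:

```go
package main

import (
	"fmt"
	"sync"
)

// event stands in for a real event payload that a container would
// otherwise have to load a handler image to process.
type event struct{ id int }

// warmPool keeps N workers resident in memory so events never pay a
// cold-start (container deployment) penalty.
func warmPool(workers int, events <-chan event) {
	var wg sync.WaitGroup
	for w := 0; w < workers; w++ {
		wg.Add(1)
		go func(worker int) { // pre-loaded, long-lived worker
			defer wg.Done()
			for ev := range events {
				// Handler logic is already in memory: no image
				// pull, no container start-up on this path.
				fmt.Printf("worker %d handled event %d\n", worker, ev.id)
			}
		}(w)
	}
	wg.Wait()
}

func main() {
	events := make(chan event)
	go func() {
		for i := 0; i < 5; i++ {
			events <- event{id: i}
		}
		close(events)
	}()
	warmPool(3, events)
}
```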

Another possibility, attractive if you can set aside the requirement for generalized software hosting, is to forget Linux in favor of an embedded OS (remember QNX?).  This strips out most of the general-purpose elements of an OS and focuses instead on creating a very short execution path for handling network connections and short-duration tasks.  The problem is that most embedded-control systems aren’t as flexible in what they can run.

Could we afford to have multiple hardware systems and software platforms at the edge?  It depends.  Edge hosting isn’t any different from regular cloud hosting in that economy of scale is an issue.  If you have five different edge platforms, each will have to be sized for its maximum design load, which wastes more capacity than if you had three platforms, or even one.  Plus, the more specialized your platform, the more difficult it will be to know how much resource you need there in the first place.
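
The sizing penalty is easy to see with made-up numbers.  Because the peaks of different workloads rarely coincide, the sum of the individual peaks always meets or exceeds the peak of the combined load; the sketch below (all load figures hypothetical) works out to 210 capacity units for separate platforms against 120 for a pooled one:

```go
package main

import "fmt"

func main() {
	// Hypothetical hourly load profiles (arbitrary units) for three
	// specialized edge platforms whose peaks occur at different hours.
	caching := []float64{10, 30, 80, 40}
	events := []float64{60, 20, 10, 30}
	adTarget := []float64{20, 70, 30, 10}

	peak := func(xs []float64) float64 {
		m := xs[0]
		for _, x := range xs[1:] {
			if x > m {
				m = x
			}
		}
		return m
	}

	// Separate platforms: each sized for its own maximum design load.
	separate := peak(caching) + peak(events) + peak(adTarget)

	// One shared platform: sized for the peak of the combined load.
	combined := make([]float64, len(caching))
	for i := range combined {
		combined[i] = caching[i] + events[i] + adTarget[i]
	}
	shared := peak(combined)

	fmt.Printf("separate sizing: %.0f units, pooled sizing: %.0f units\n",
		separate, shared)
	// With these numbers: separate = 80+60+70 = 210, pooled = 120.
}
```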

Rather than pick an embedded OS with totally foreign APIs and administration, it might be wise to opt for an embedded Linux distro.  Embedded Linux uses most of the standard POSIX APIs, which means it’s likely to support a wider range of applications out of the box.  Wind River has a family of Linux products that includes embedded systems, and their approach gives you a wider range of software you can run.  Red Hat also has an embedded-system version and toolkit that handle specialized edge requirements pretty well.  Hardware vendors who offer embedded Linux will normally use one of these two.

I think that edge computing on a Linux platform is a given, and it’s likely the edge version would be optimized at least for containers and perhaps also for fast-path, low-latency event handling.  This isn’t quite the same as network-optimized hardware and software, because event processing takes place after the message has been received.  Edge computing, even for hosting VNFs (an application that I think exploits edge computing where available but won’t provide a decisive driver), requires fast task switching and minimal delays in inter-process communications (IPC).
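
To make the IPC point concrete, here’s a rough micro-benchmark sketch comparing an in-process hand-off (a Go channel) with the same ping-pong over loopback TCP, a stand-in for socket-based IPC between separate processes.  The absolute numbers are machine-dependent; the gap between the two is what matters:

```go
package main

import (
	"fmt"
	"net"
	"time"
)

const rounds = 10000

// channelRoundTrips measures in-process hand-off: two goroutines
// bouncing a token over unbuffered channels.
func channelRoundTrips() time.Duration {
	ping, pong := make(chan struct{}), make(chan struct{})
	go func() {
		for range ping {
			pong <- struct{}{}
		}
	}()
	start := time.Now()
	for i := 0; i < rounds; i++ {
		ping <- struct{}{}
		<-pong
	}
	close(ping)
	return time.Since(start)
}

// tcpRoundTrips measures the same hand-off over loopback TCP.
func tcpRoundTrips() (time.Duration, error) {
	ln, err := net.Listen("tcp", "127.0.0.1:0")
	if err != nil {
		return 0, err
	}
	defer ln.Close()
	go func() { // echo server: read one byte, write it back
		conn, err := ln.Accept()
		if err != nil {
			return
		}
		defer conn.Close()
		buf := make([]byte, 1)
		for {
			if _, err := conn.Read(buf); err != nil {
				return
			}
			conn.Write(buf)
		}
	}()
	conn, err := net.Dial("tcp", ln.Addr().String())
	if err != nil {
		return 0, err
	}
	defer conn.Close()
	buf := []byte{0}
	start := time.Now()
	for i := 0; i < rounds; i++ {
		conn.Write(buf)
		conn.Read(buf)
	}
	return time.Since(start), nil
}

func main() {
	ch := channelRoundTrips()
	tcp, err := tcpRoundTrips()
	if err != nil {
		fmt.Println("tcp benchmark failed:", err)
		return
	}
	fmt.Printf("in-process: %v/round, loopback TCP: %v/round\n",
		ch/rounds, tcp/rounds)
}
```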

An optimized Linux-container model seems the most appropriate software model for edge hosting, given that video delivery, ad delivery, and personalization applications are more likely to drive early deployments than event processing and IoT.  The hardware would likely resemble a modified form of the current multi-core, multi-chip boxes available from players like Dell and HPE, but with a lot more memory to increase the number of containers that could be hosted, reduce process load times, and so forth.  We’d also want considerable fast storage, likely solid-state drives, to keep content flowing.  Network performance would also be important, given the need to source large numbers of video streams.

I wonder whether we might end up seeing what could be called “hybrid hosts,” where a multi-box, tightly coupled cluster of devices forms a logical (or even physical) server.  One device might be nothing more than a P4 flow switch based on a white box, another might be a specialized solid-state cache engine, and the last a compute platform.  How tightly coupled these elements could be depends on how fast any out-of-box connections could be made.  If really tight coupling were needed, a multi-chip box with elements for flow switching, caching, and computing might emerge.  This would be the true edge of the future.
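
Purely as a thought experiment, that coupling could be modeled as a three-stage pipeline: a flow switch classifies arriving requests, handing content requests to a cache engine and events to a compute element.  The toy Go sketch below uses channels as a stand-in for whatever inter-box fabric would actually join the devices; every name in it is hypothetical:

```go
package main

import "fmt"

// request models work arriving at the hybrid host: events go to the
// compute element, everything else goes to the cache engine.
type request struct {
	key     string
	isEvent bool
}

// flowSwitch plays the P4 white-box role: classify and route.
func flowSwitch(in <-chan request, toCache, toCompute chan<- request) {
	for r := range in {
		if r.isEvent {
			toCompute <- r
		} else {
			toCache <- r
		}
	}
	close(toCache)
	close(toCompute)
}

// cacheEngine plays the solid-state cache role.
func cacheEngine(in <-chan request, done chan<- string) {
	store := map[string]string{"video-42": "cached segment"} // hypothetical cache content
	for r := range in {
		if v, ok := store[r.key]; ok {
			done <- "cache hit: " + v
		} else {
			done <- "cache miss: " + r.key
		}
	}
}

// compute plays the general compute-platform role.
func compute(in <-chan request, done chan<- string) {
	for r := range in {
		done <- "processed event " + r.key
	}
}

func main() {
	in := make(chan request)
	toCache := make(chan request)
	toCompute := make(chan request)
	done := make(chan string)

	go flowSwitch(in, toCache, toCompute)
	go cacheEngine(toCache, done)
	go compute(toCompute, done)

	go func() {
		in <- request{key: "video-42"}
		in <- request{key: "sensor-7", isEvent: true}
		close(in)
	}()

	for i := 0; i < 2; i++ {
		fmt.Println(<-done)
	}
}
```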

How fast that unified model would emerge, or whether it would emerge at all, might well depend on the pace of growth of IoT.  Pure event-handling has relatively little need for flow switching, and it probably doesn’t have to hold as many event processes in memory as a video cache would have to hold in content form.  If IoT and video/advertising share the driver role, then multiple coupled boxes forming a virtual edge device is the more logical strategy.  The decision will have to be made by the IoT market, which so far has been way more hype than effective strategizing.