How Hybrid Cloud Thinking Can Lead Toward (or Away From) Edge Computing

We live in a polarized world, as a half-hour spent watching any news channel will show. It goes beyond politics, though. Even in technology, we tend to see things in an either/or way. Take the cloud and the data center. Many believe that the data center is the past and the cloud the future. Even “moderates” see the two as very different computing frameworks. Are they, though? Not so much, and where this polarizing tendency hurts us the most is in the way it frames our vision of data center evolution, and how that evolution shapes the future of edge computing.

The data center, historically, is both the place where corporate data is stored and the place where “core business” applications are run. These applications are primarily transactional in nature, meaning that they update and access that corporate data to reflect the operation of the business. In most cases, data centers are located close to centers of operation, places where many employees work. Think “corporate headquarters” or “regional center” and you get the idea.

The computer technologies associated with the data center have been evolving since the 1960s, when “mainframe” computing systems (like the IBM System/360) came along. Early data centers were based on a small number of giant systems that cost millions of dollars. These used “multiprogramming” to share compute power across a range of concurrently running applications. Over time, these were supplemented with or replaced by “minicomputers” that were often more application-specific, and later by racks of servers that formed a pool of resources. Generally, each new generation of computer technology was adopted for applications that were new, or for applications that had been modernized or had changed software vendors.

The cloud is the latest step in data center evolution, a step that offloads “applications” to a public resource pool that’s distributed over a wide geographic area. I put “applications” in quotes because what’s usually done today is to create a more agile front-end in the cloud to augment traditional transaction processing and database access still done primarily in the data center.

The cloud is an elastic tail connected to a high-inertia dog, at least in those cases where it front-ends traditional transactional applications. However, as I’ve noted, some applications have shifted to a more modern server-based hosting model, and many of these have evolved to use virtual machines, containers, etc. They still tend to be tied to the data center because, well, that’s where the data is. Even analytics and AI, which are often run on servers using modern virtualization technology, tend to run in the data center because of the data, and also because “headquarters” people usually run these applications and those people are co-located with the data center.

Some of the modern planning types have started to look at the virtualization-based elements of the data center as a kind of third layer, something between data center and cloud. It’s this new layer and its position that’s creating the potential for another shift in data center thinking, something designed to be closer to but not necessarily part of the cloud.

Enterprises have noted three factors that influence the design of this new boundary layer. The first is the scalability of data center components that interface to the scalable cloud. The second is classic cloudbursting from data center to cloud, whether to scale capacity or to replace failed elements. The third is the selective migration of some data elements toward, and perhaps eventually into, the cloud. Network operators have added a fourth of their own, though they’d be happy to share it with enterprises: the evolution to edge computing and the growth of distributed real-time applications. It’s nice that they’re willing to share, because this is the most important evolution of them all.
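To make the second factor concrete, here’s a minimal sketch of the kind of decision the boundary layer would make. The inputs (a PoolStatus record, a burst_threshold) are hypothetical stand-ins; a real orchestrator would work from far richer signals than these.

```python
# A minimal cloudbursting sketch, assuming hypothetical utilization
# and health inputs; real orchestrators expose much richer telemetry.
from dataclasses import dataclass

@dataclass
class PoolStatus:
    utilization: float   # 0.0..1.0 fraction of data-center capacity in use
    healthy: bool        # False if a data-center element has failed

def placement(status: PoolStatus, burst_threshold: float = 0.8) -> str:
    """Decide where the next unit of work should run."""
    if not status.healthy:
        return "cloud"   # replace failed data-center elements in the cloud
    if status.utilization >= burst_threshold:
        return "cloud"   # classic cloudburst: scale past local capacity
    return "data-center"

print(placement(PoolStatus(utilization=0.65, healthy=True)))  # data-center
print(placement(PoolStatus(utilization=0.92, healthy=True)))  # cloud
```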

If you think about it, both data center and cloud computing stem from a common compute model: a logically centralized application supporting a distributed user population. This application was originally purely central-transactional, has evolved to front-end-cloud, and is now developing that boundary layer. In parallel, though, we can now conceptualize applications that are real-time and naturally distributed. One current example is the shipping/warehousing application. Another is the metaverse.

The shipping/warehousing application is an example of how the drive to empower employees closer to their point of activity impacts computing policy. You can visualize an operation like this as a set of distribution facilities linked by transportation resources. For decades, this sort of operation was run from a data center, but as time passed and we started putting IoT elements out in the trucks and in the hands of delivery people, it became clear that a lot of what was going on was really local in nature. This is likely what led FedEx, for example, to say it was going to scrap mainframes and maybe even data centers.

There’s an evolving, if still somewhat imprecise, computing model associated with this application, one that’s not exactly cloud nor exactly data center. The model is characterized by hierarchy, by migration and caching, and, first and foremost, by binding to life.

Efficient operation of a shipping/warehousing company depends on creating and retaining a link between what’s effectively an abstract model of an operation and the real world. The company looks like a network, with the nodes being facilities and the trunks transportation. The binding between reality and the model has to reflect the extent to which the model has to intervene in live situations, so a requirement for edge computing evolves out of supporting situations that are more real-time in nature. However, the need to consider the system as a whole, and to reflect global judgments at any given local level, is constant, and is reflected in our next characterization.
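As a rough illustration of that model, here’s a minimal sketch in which the names (Facility, Trunk, on_scan_event) are hypothetical stand-ins for whatever a real system would use. The point is the binding to life: an IoT event in the field updates the abstract model directly.

```python
# A minimal sketch of the "company as a network" model, with
# hypothetical names and fields: facilities are nodes, transport
# links are trunks, and IoT events keep the model bound to reality.
from dataclasses import dataclass, field

@dataclass
class Facility:                 # a node: warehouse or distribution center
    name: str
    inventory: dict = field(default_factory=dict)

@dataclass
class Trunk:                    # an edge: a transportation link
    origin: str
    destination: str
    in_transit: list = field(default_factory=list)

# The abstract model of the operation...
nodes = {n: Facility(n) for n in ("central", "east", "west")}
trunks = [Trunk("central", "east"), Trunk("central", "west")]

# ...and the binding to life: a scan event updates the twin directly.
def on_scan_event(facility: str, sku: str, qty: int) -> None:
    inv = nodes[facility].inventory
    inv[sku] = inv.get(sku, 0) + qty

on_scan_event("east", "SKU-1001", 25)
print(nodes["east"].inventory)   # {'SKU-1001': 25}
```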

The value of hierarchy derives from both the life relationship and the way that insight is propagated. We organize workforces into teams, departments, etc. because that’s the most efficient way to get the organization working toward a common goal without requiring that each person involved understand the goal overall. Divide and conquer is a good adage. We could expect our boundary-layer features to be a critical step in the hierarchy, a step between highly agile and highly tactical cloud elements and more inertial transactional elements in the data center. By feeding status between the extremes, it lets them couple more efficiently.
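A minimal sketch of that status-feeding role might look like the following; the element names and metrics are hypothetical, and a real boundary layer would aggregate far more than health and load. The design point is that each level summarizes what’s below it, so the inertial core never has to track raw edge detail.

```python
# A minimal sketch of the boundary layer as a status aggregator,
# with hypothetical element names and metrics.
edge_status = {                      # tactical cloud/edge elements
    "truck-routing": {"healthy": True,  "load": 0.50},
    "dock-scanner":  {"healthy": True,  "load": 0.75},
    "label-printer": {"healthy": False, "load": 0.00},
}

def summarize(statuses: dict) -> dict:
    """Boundary-layer roll-up passed to the transactional core."""
    loads = [s["load"] for s in statuses.values() if s["healthy"]]
    return {
        "all_healthy": all(s["healthy"] for s in statuses.values()),
        "avg_load": sum(loads) / len(loads) if loads else 0.0,
        "failed": [k for k, s in statuses.items() if not s["healthy"]],
    }

print(summarize(edge_status))
# {'all_healthy': False, 'avg_load': 0.625, 'failed': ['label-printer']}
```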

Migration and caching is something we’ve learned from content delivery. The optimum location for a processing element depends on where the work it’s processing comes from and where its output is delivered. If either of those is variable, then we could expect the processing element to migrate in response. One way to make that possible is to assume that we would first host a process fairly far back, and then push it out closer to the work until we find that the next push is a step too far, creating, for example, too much total latency elsewhere. Warehousing often works this way in the modern age: you stock something centrally at first, and then as demand proves greater in one place or another, you start caching stock closer to those points, distributing the inventory. Same with processes, of course.
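Here’s a minimal sketch of that push-it-closer placement logic, with hypothetical hosting tiers and latency figures; a production placement engine would weigh cost and data gravity as well as latency.

```python
# A minimal sketch of "push it closer until the next push is a step
# too far," with hypothetical tiers and latencies. The process moves
# one tier toward its work source as long as the worst-case latency
# to every consumer stays within budget.

# Hypothetical hosting tiers, ordered from core outward.
TIERS = ["data-center", "metro-cloud", "edge-site"]

# Hypothetical round-trip latency (ms) from each tier to each consumer.
LATENCY = {
    "data-center": {"scanner": 60, "hq-analytics": 5},
    "metro-cloud": {"scanner": 20, "hq-analytics": 25},
    "edge-site":   {"scanner": 3,  "hq-analytics": 70},
}

def place(budget_ms: int) -> str:
    """Push outward until any consumer's latency would exceed budget."""
    location = TIERS[0]                 # start hosted far back
    for tier in TIERS[1:]:
        if max(LATENCY[tier].values()) > budget_ms:
            break                       # the next push is a step too far
        location = tier
    return location

print(place(budget_ms=40))              # metro-cloud
```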

The point here is that all of this is fundamental to edge computing, but also exploitable in more traditional models of the cloud and data center. The current model evolves with the creation of a boundary layer, and that layer takes responsibility for deciding what runs where. We are inevitably going to face boundary issues between cloud and data center, and if we address them as we should, we will create a software platform that can more easily adapt to the introduction of edge computing and the precise modeling, the digital twinning, of real-world systems. It’s worth a shot, don’t you think?