It All Comes Down to Resources

I blogged yesterday that virtualization is the key to operations automation, which in turn is the key to the future, the thing we need to be looking for in 2019.  The central vision of cloud and virtualization is the resource pool.  With resource pools, the trick is to make them as inclusive as you can in terms of resources, while limiting the integration needed to bind those resources to the virtualization’s hosting abstraction—VM, container, etc.  We’ve had a lot of advances in this space, but the problem is that they’re divided by technology and terminology even as they seem to be aligning in terms of mission.

SDxCentral did a nice piece at the end of the year on “composable infrastructure”, so that’s one term.  We’ve also had the Apache Mesos approach, “resource abstraction”.  The NFV ISG has a “virtual infrastructure manager”, Apstra has “intent-based” infrastructure, Juniper’s HTBASE Jute offers a “multi-cloud control and data fabric”…you get the picture.  Lacking a single clear term for what’s going on, we have a quest for differentiation that’s leading to a lot of confusion.

The biggest problem virtualization solves is that of resource diversity.  When a resource pool has diverse technologies in it, the software needed to commit, recommit, and connect resources has to accommodate those differences somehow.  Otherwise, every time you change the resource structure you have to change the software, something that’s usually called a “brittle” approach because it breaks easily.

Let’s forget for the moment what people call their stuff and work with some neutral definitions.  There are two basic ways to deal with a diverse resource pool.  One is harmonization by plugin, meaning that the deployment function has a general feature set per resource type, adapted to specific resources with a plugin.  A lot of software works this way, including OpenStack.  The other is harmonization by abstraction layer, meaning that there is a “layer” of functionality that presents resources in a consistent way to the higher-level functions (like orchestration) and maps the real resources to that abstraction.
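
To make the distinction concrete, here’s a minimal Python sketch; all of the class and function names are hypothetical, not taken from any real product.  In the plugin pattern the operations process still drives each deployment step and the plugin hides only API differences, while in the abstraction-layer pattern the caller asks for hosting in model terms and the layer decides how that request is realized.

```python
from abc import ABC, abstractmethod


# Harmonization by plugin: a generic "deploy compute" step, specialized per
# resource type by a plugin that speaks that resource's control API.
class ComputePlugin(ABC):
    @abstractmethod
    def create_instance(self, image: str, size: str) -> str:
        """Create a compute instance and return its identifier."""


class DataCenterPlugin(ComputePlugin):
    def create_instance(self, image: str, size: str) -> str:
        # A real plugin would call a hypervisor API here; stubbed for illustration.
        return f"dc-vm({image},{size})"


class PublicCloudPlugin(ComputePlugin):
    def create_instance(self, image: str, size: str) -> str:
        # A real plugin would call a cloud provider API here; stubbed for illustration.
        return f"cloud-vm({image},{size})"


def deploy_step(plugin: ComputePlugin, image: str, size: str) -> str:
    # The operations process still owns the "how"; the plugin hides only
    # the API differences between resource types.
    return plugin.create_instance(image, size)


# Harmonization by abstraction layer: the caller asks in model terms
# ("host this"); where it lands, and via which plugin, is the layer's call.
class ResourceLayer:
    def __init__(self, plugins):
        self._plugins = plugins

    def host(self, workload: str, image: str, size: str) -> str:
        # Placement policy lives inside the layer, not in the caller.
        target = "cloud" if workload.startswith("elastic-") else "dc"
        return self._plugins[target].create_instance(image, size)


if __name__ == "__main__":
    print(deploy_step(DataCenterPlugin(), "ubuntu-18.04", "small"))
    layer = ResourceLayer({"dc": DataCenterPlugin(), "cloud": PublicCloudPlugin()})
    print(layer.host("elastic-web", "ubuntu-18.04", "small"))
```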

The critical point often hidden in the discussion of these approaches is the relationship between the resources and the operations processes that act on the resources.  We have stuff we want to do, like deploy things or connect them, and these processes have to operate somehow on resources.  The two options for dealing with resource pools do that mapping differently.

Harmonization by plugin is the older approach to the problem of resource variability.  If you have a vision of the operations processes you want to apply to resources, then designing tools around those processes makes sense.  So does adapting the end result of each process (the “steps” or “actions”) to each specific resource control API.  The downside of this approach is that operations processes might vary depending on the specifics of the resource, and that’s even more likely when you cross over between different types of resources.  The cloud is not the same as the data center, and scaling within either isn’t the same as scaling across the boundary.

Where this tends to cut the most is when you look at virtualization in the broadest sense, where a “resource” is a distributed resource that has to be connected through networking, for example.  You want compute resources for a distributed application.  When you select a resource, you have to deploy onto it and then connect your workflow to/through it, so networking becomes a resource implicit to hosting.  In a harmonization-by-plugin approach, you can harmonize the deployment, but you still have to connect.  The single decision to host doesn’t carry any context about where the other components are located, so the process controlling the plugins has to make the connectivity decisions itself.
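
A hedged illustration of that last point, using invented plugin APIs: the controlling process has to enumerate and request every connection itself, because nothing about the hosting decision carries the connectivity with it.

```python
class StubComputePlugin:
    def create_instance(self, image: str, size: str) -> str:
        return f"instance({image},{size})"


class StubNetworkPlugin:
    def connect(self, a: str, b: str) -> None:
        print(f"explicit link: {a} <-> {b}")


def deploy_component(compute, network, image: str, peers: list) -> str:
    instance = compute.create_instance(image, "small")
    # Hosting is harmonized by the compute plugin, but connectivity is not
    # implicit: the process must know every peer and request each link.
    for peer in peers:
        network.connect(instance, peer)
    return instance


if __name__ == "__main__":
    deploy_component(StubComputePlugin(), StubNetworkPlugin(),
                     "app-image", peers=["db-service", "cache-service"])
```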

Another issue with the process-to-plugin approach is the likelihood that there are different levels of abstraction inherent in the APIs of the resources involved.  If you think of the traditional management hierarchy—element, network, service—you can see that as you move up the management ladder, you get a greater inherent abstraction of the resources.  Whatever level of abstraction you have at the resource control point has to be reflected in the process you design—if you have to manipulate individual devices, then the process has to be device-aware.  If you happen to have a service abstraction, the process that invokes it needs no knowledge of the devices.  When the abstraction of resources is variable, it’s hard to keep that from complicating the process-level design.
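
A small sketch of that point, with made-up control APIs: a process written against an element-level control point has to iterate over devices, while one written against a service-level abstraction makes a single call and needs no device knowledge at all.

```python
class DeviceApi:
    def configure(self, device: str, command: str) -> None:
        print(f"{device}: {command}")


class ServiceApi:
    def create_vpn(self, vpn_id: str, endpoints: list) -> None:
        print(f"service-level request: vpn {vpn_id} across {endpoints}")


def provision_vpn_element_level(api: DeviceApi, devices: list, vpn_id: str) -> None:
    # Element-level control point: the process must be device-aware.
    for device in devices:
        api.configure(device, f"add-vpn {vpn_id}")


def provision_vpn_service_level(api: ServiceApi, vpn_id: str, endpoints: list) -> None:
    # Service-level control point: one call, no device awareness needed.
    api.create_vpn(vpn_id, endpoints)


if __name__ == "__main__":
    provision_vpn_element_level(DeviceApi(), ["edge-1", "edge-2"], "vpn-42")
    provision_vpn_service_level(ServiceApi(), "vpn-42", ["site-a", "site-b"])
```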

Harmonization by abstraction is the opposite.  You commit a resource (the abstraction layer’s model of one), and everything associated with that resource commitment happens under the covers.  Where that resource is and how it’s connected are also under the covers, which means the abstraction-layer model can hide the implicit network connectivity needed.  The process at the hosting level doesn’t need to be aware of those details, so virtualization can extend across a pool of resources that creates a variety of complex connection issues without impacting the application that simply asks for a place to host something.
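
A minimal sketch of that alternative, with an invented API: the caller declares what it wants hosted and what that component interacts with, and the layer makes both the placement and the implied connections under the covers.

```python
class AbstractResourceLayer:
    def __init__(self):
        self._placements = {}

    def host(self, component: str, interacts_with=()) -> str:
        # Placement decision is hidden from the caller.
        node = f"node-for-{component}"
        self._placements[component] = node
        # Implicit connectivity: the layer wires the new component to the
        # components it was declared to interact with.
        for peer in interacts_with:
            if peer in self._placements:
                self._connect(node, self._placements[peer])
        return node

    def _connect(self, a: str, b: str) -> None:
        print(f"implicit link: {a} <-> {b}")


if __name__ == "__main__":
    layer = AbstractResourceLayer()
    layer.host("db")
    layer.host("app", interacts_with=["db"])  # one call: hosting plus connection
```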

However…that capability introduces another layer (which some vendors position against in their marketing).  It also means that if you want to make connections independent of hosting decisions, you need a way of harmonizing that task with whatever implicit connections are being set up in your abstraction layer.  That, to me, means it’s very difficult to do hosting or server abstraction without also abstracting the other interrelated resources like networking, databases, and so forth.

The database point is a superset of what I once called “Infrastructure Services”.  These are services available to applications using virtualization, in the same way that AWS or Azure “web services” are available to cloud applications.  They look a lot like a generalized set of application components, available to and shared by all.  Because they’re presumed to be generalized resources, these services have to be distributable to wherever you happen to put application components.  They’d need to be abstracted too, at least to the point where they could be discoverable.
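
A sketch of what “discoverable” might mean in practice (the catalog and its entries are purely illustrative): a component asks for a capability by name, not by location, so the shared service can live or be replicated wherever the components themselves end up.

```python
class ServiceCatalog:
    """Illustrative registry of shared infrastructure services."""

    def __init__(self):
        self._services = {}

    def publish(self, name: str, endpoint: str) -> None:
        self._services[name] = endpoint

    def discover(self, name: str) -> str:
        # Components ask for a capability, not a location.
        return self._services[name]


if __name__ == "__main__":
    catalog = ServiceCatalog()
    catalog.publish("object-store", "https://store.internal.example")
    catalog.publish("message-bus", "amqp://bus.internal.example")
    print(catalog.discover("object-store"))
```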

Nobody has thought much about infrastructure service or database abstraction, beyond Hadoop (which has kind of boomed and busted in terms of popularity) or the web-service stuff inherent in the cloud.  The big problem with service abstraction is that since services are created by software, they’re highly variable.  Some users have suggested mechanisms for database abstraction to me that seem workable at least on a limited basis, but none of these would be suitable for the abstraction of a broader set of services.

Cloud-specific, virtualization-specific applications (“cloud-native” in the modern terminology) depend on having everything abstracted and virtualized, because whatever isn’t constrains the agility that virtualization can provide in other areas.  Nail one element of an application’s resources in place, and however flexible you make the rest, the whole structure is nailed to the ground.

The lesson of that truth is clear: network, database, feature, server, and any other form of abstraction has to be addressed, meaning that resource abstraction in the most general sense is the only thing that can work in the long term.  That’s one reason why network abstraction, meaning network-as-a-service or NaaS, is so critical.  It’s also why we should focus our virtualization and cloud attention on the strategies that offer resource abstraction in complete form.

The primary benefit of generalized resource abstraction is that it allows a resource pool, however widely it’s distributed, to be represented as a single huge, expandable computer and, at the same time, a vast connection resource.  The services of the latter are harnessed to enable the former; all the distributed elements of hosting are effectively centralized through connectivity.  By doing this once, in an abstraction layer, all of the application and service lifecycle processes are immunized against the complexity and variability of “the cloud”.  You deploy something, and you deploy it the same way wherever it goes.  The cloud is no more complicated than a single server because, virtually speaking, that’s what it is.
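
One way to picture “deploy it the same way wherever it goes” is a request that names only model-level needs.  This sketch (all fields invented for illustration) contains nothing that identifies a site, cloud, or device, so the same request could be handed to any pool the abstraction layer fronts.

```python
# Nothing below names a region, host, subnet, or device; those bindings are
# the abstraction layer's job, not the deployment request's.
deployment_request = {
    "component": "checkout-service",
    "image": "registry.example/checkout:1.4",
    "resources": {"vcpu": 2, "memory_gb": 4},
    "connects_to": ["inventory-service", "payment-gateway"],
    "lifecycle": {"scale": {"min": 2, "max": 8}},
}

if __name__ == "__main__":
    print(deployment_request["component"], "->", deployment_request["connects_to"])
```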

We do have a number of initiatives aiming for this “total virtualization” approach, but nothing is quite there in terms of features or even just positioning.  The latter needs attention even more than the former, because without a strong explanation of where this is heading and a reason why users would need to go there, features won’t matter.

Users themselves are partly to blame for this state of affairs.  We’ve become accustomed to very tactical thinking in IT and networking because we’ve shifted the focus more and more toward simply doing the same things for less money.  Large-scale architectural changes tend to defer benefits to the long term while concentrating costs in the present.  Vendors hate this too, because it delays their quarterly quota fodder.  Finally, and obviously, it’s a lot harder to explain massive change than little tweaks, which means the political cover the media usually provides for senior management is limited.  Where the risk of decisions rises, decisiveness declines.

The good news, of course, is the very avalanche of partial but increasingly capable solutions we’re getting out of the cloud side of the industry.  We’re creeping up on a real strategy here, and while it will take longer than it might have had we realized from the first where we were heading, I think it’s obvious now that we’re going to get to the right place, and that’s important news for the coming year.