The notion of composable infrastructure or infrastructure abstraction is one of my favorites. I think it’s probably the most important piece of our overall virtualization puzzle, in fact, but it’s also something that could present serious problems. That’s particularly true given how fuzzy the concept is to most users.
At the end of last year, my modeling put enterprise literacy on composable/abstract infrastructure at less than 20% at the CIO level, meaning only about one CIO in five could state the value proposition and the basic technology elements correctly. That sort of thing obviously contributes to the classic “washing” problem, where vendors slap a composable coat of paint on just about anything. It also makes it difficult for prospective users to assess products and, most important, to assess their support of critical features.
The basic notion of abstraction in infrastructure is that you build a virtual hosting framework that applications and operations tools see as a single server or cluster. A layer of software then maps that abstraction to a pool of resources that should include servers, storage, databases, network connectivity, and so forth. The complexity associated with using this diverse set of resources, each of which could be implemented in multiple places by multiple vendors, is hidden in that new layer of software. If that layer works, then composable infrastructure works.
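To make the idea a bit more concrete, here’s a minimal sketch in Python of what that layer does, with a toy pool and invented names (Resource, AbstractionLayer, allocate) standing in for whatever a real product would expose. The only point is that the caller asks for capacity and gets an opaque handle back, while the choice among sites and vendors stays hidden inside the layer.

```python
from dataclasses import dataclass

@dataclass
class Resource:
    """One member of the pool: a server, storage array, database, etc."""
    name: str
    kind: str       # "server", "storage", "database", "network"
    site: str       # where it physically lives
    capacity: int   # free units of whatever this resource provides

class AbstractionLayer:
    """Presents a diverse pool as if it were a single server or cluster.

    Callers never see sites or vendors; they just ask for capacity of a
    given kind and get an opaque handle back.
    """
    def __init__(self, pool):
        self.pool = list(pool)

    def allocate(self, kind, units):
        # The complexity of choosing among sites and vendors lives here,
        # hidden from the caller.
        for r in self.pool:
            if r.kind == kind and r.capacity >= units:
                r.capacity -= units
                return r.name
        raise RuntimeError(f"no {kind} capacity left in the pool")

layer = AbstractionLayer([
    Resource("srv-akl-1", "server", "alaska", 64),
    Resource("srv-cpt-1", "server", "cape-town", 64),
])
print(layer.allocate("server", 16))  # caller can't tell which site it got
```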
And if that layer doesn’t work, you’re dead. There’s a real risk it might not work, because the complexity associated with a diverse resource pool doesn’t disappear in an abstract or composable infrastructure model; it just disappears from view. It’s still there, in that new layer. The implementation of the abstraction layer could mess things up, but even if it doesn’t, the layer hides resource issues that will bite users or defeat the goals of composability, issues that users might remediate if they realized what was going on.
The problems go back to what I’ve been calling “resource equivalence”. A resource pool is a resource pool only if the resources in it can be freely assigned from the group and deliver comparable performance and cost points (capital and operations). If that’s not true, some assignments will at first be merely a bit better or worse than others, and eventually some will end up being required or improper. That fragments the resource pool, so you have several pools instead of one.
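As a rough illustration of how non-equivalence fragments a pool, here’s a sketch that buckets resources by performance and cost bands. The metrics and thresholds are invented for the example, not measured values; the point is that once resources fall into different buckets, you no longer have one pool.

```python
from itertools import groupby

# Invented per-resource metrics; the bands are placeholders, not measurements.
resources = [
    {"name": "srv-akl-1", "latency_ms": 180, "cost_per_hour": 0.9},
    {"name": "srv-cpt-1", "latency_ms": 25,  "cost_per_hour": 1.1},
    {"name": "srv-cpt-2", "latency_ms": 30,  "cost_per_hour": 1.0},
]

def equivalence_class(r, latency_band=50, cost_band=0.5):
    """Bucket resources so members of a bucket are close enough in
    performance and cost to be freely interchangeable."""
    return (r["latency_ms"] // latency_band,
            r["cost_per_hour"] // cost_band)

keyed = sorted(resources, key=equivalence_class)
pools = [list(group) for _, group in groupby(keyed, key=equivalence_class)]

# If this prints more than one pool, the "single" pool has already fragmented.
print(f"{len(pools)} effective pool(s)")
for p in pools:
    print([r["name"] for r in p])
```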
When we abstract resources for composable infrastructure, we take responsibility for making all the resources in the pool equally available, not equally desirable. A data center in Alaska and one in Cape Town may be equipped to support the same hosting processes and run the same applications, but if the users of those applications are much closer to South Africa than to the Bering Sea, application performance will almost surely vary considerably depending on where you host things.
The easy answer to this sort of problem would be to say that you can create composable infrastructure only where there’s full resource equivalence in the pool. One problem with that is that not all applications are equally susceptible to non-equivalence. Another is that pools which fall short of equivalence don’t all fall short in the same way. A third is that most modern applications are componentized, and each component may have its own resource sensitivities.
A better answer is to give the abstraction layer both a measurement of the non-equivalence of the various distributed resources in the pool and a statement of the specific resource behavior each application or component requires. The layer could then use that combination to map between the virtual representation of a hosting resource set and its realization in the resource pool. It’s not a perfect solution, because it effectively segments the resource pool and raises the risk that some applications might not find any suitable resources. That risk could be addressed by maintaining an inventory of available resources and allowing a management system to query the abstraction layer to learn how much capacity is available for a given set of application resource behavior requirements.
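Here’s a sketch of what that could look like, again with hypothetical names and fields (PoolMember, Requirement, capacity_for) rather than any real product’s API: each component carries a behavior requirement, the layer matches it against measured resource behavior, and a management system can query remaining capacity for that requirement without ever seeing the mapping itself.

```python
from dataclasses import dataclass

@dataclass
class PoolMember:
    name: str
    max_latency_ms: int   # measured behavior of this pool member
    jurisdiction: str
    free_slots: int

@dataclass
class Requirement:
    """Per-component resource behavior requirement (hypothetical fields)."""
    max_latency_ms: int
    allowed_jurisdictions: set

def suitable(member, req):
    return (member.max_latency_ms <= req.max_latency_ms
            and member.jurisdiction in req.allowed_jurisdictions)

def place(pool, req):
    """Map a component's requirement onto the pool, or fail explicitly."""
    for m in pool:
        if m.free_slots > 0 and suitable(m, req):
            m.free_slots -= 1
            return m.name
    return None

def capacity_for(pool, req):
    """What a management system could query: how much headroom exists for
    this requirement set, without exposing the mapping itself."""
    return sum(m.free_slots for m in pool if suitable(m, req))

pool = [
    PoolMember("srv-eu-1", 40, "EU", 10),
    PoolMember("srv-us-1", 90, "US", 25),
]
req = Requirement(max_latency_ms=50, allowed_jurisdictions={"EU"})
print(capacity_for(pool, req))  # 10
print(place(pool, req))         # srv-eu-1
```

In this sketch, the capacity_for query is the visibility part; the mapping inside place is the complexity that stays hidden.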
The current answer, if we can call it that, is to try to reduce the impact of things that would make resources non-equivalent. The most significant of these is the network linkage that binds resources to the collective pool. If we had infinite capacity and zero delay in network paths, we could create uniform resource pools distributed everywhere. Data center interconnect (DCI) is the strongest element in any plan for a resource pool, and thus for any implementation of composable infrastructure.
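A small sketch of why DCI matters so much: whether a set of sites can be treated as one uniform pool comes down to whether the inter-site paths fit the application’s delay budget. The round-trip figures below are invented for illustration.

```python
# Invented inter-site round-trip times over the DCI, in milliseconds.
dci_rtt_ms = {
    ("nyc", "chicago"): 18,
    ("nyc", "frankfurt"): 85,
    ("chicago", "frankfurt"): 95,
}

def one_pool(sites, rtt, budget_ms):
    """Sites can be treated as a single uniform pool only if every pairwise
    DCI path fits the application's delay budget."""
    pairs = [(a, b) for i, a in enumerate(sites) for b in sites[i + 1:]]
    return all(rtt.get((a, b), rtt.get((b, a), float("inf"))) <= budget_ms
               for a, b in pairs)

print(one_pool(["nyc", "chicago"], dci_rtt_ms, budget_ms=25))     # True
print(one_pool(["nyc", "chicago", "frankfurt"], dci_rtt_ms, 25))  # False
```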
The current answer doesn’t fully address the problem, at least not if composable infrastructure is to be used in public cloud or telco cloud applications, because there are significant compliance and security issues associated with the location of resources. I’ve been involved in a number of operator-driven initiatives involving resource virtualization, and all of them had requirements for controlling hosting locations based on regulation (don’t put certain data here because it would give a government access to it or break local laws), availability (don’t put a backup resource in an area that shares power with the primary), or security (don’t put a resource here for applications that require highly secure connections).
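Those location rules are, in effect, just more placement constraints the abstraction layer has to apply. Here’s a hypothetical sketch of the three kinds of constraint from those operator projects; the sites and attributes are invented, and a real implementation would obviously be richer.

```python
from dataclasses import dataclass

@dataclass
class Site:
    name: str
    country: str
    power_grid: str
    secure_links: bool

SITES = [
    Site("dc-a", "DE", "grid-1", True),
    Site("dc-b", "DE", "grid-1", False),
    Site("dc-c", "US", "grid-2", True),
    Site("dc-d", "DE", "grid-2", True),
]

def eligible(site, banned_countries=(), avoid_grids=(), needs_secure=False):
    """Apply the three kinds of location constraint: regulatory (country),
    availability (power anti-affinity), and security (connection needs)."""
    if site.country in banned_countries:
        return False
    if site.power_grid in avoid_grids:
        return False
    if needs_secure and not site.secure_links:
        return False
    return True

# A backup that must not share power with the primary (on grid-1), must stay
# out of the US, and needs secure connectivity.
candidates = [s.name for s in SITES
              if eligible(s, banned_countries={"US"},
                          avoid_grids={"grid-1"}, needs_secure=True)]
print(candidates)  # ['dc-d']
```

Every constraint like this narrows the candidate set, which is the pool fragmentation problem all over again, driven by policy rather than performance.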
We have no shortage of indicators that many in the industry are aware of this problem. All of the bodies charged with cloud hosting or feature hosting have at least nibbled at the issue. The challenge is to converge on something. With a dozen possible abstraction layers offering a dozen possible solutions to the resource equivalence problem, we’d have nothing likely to induce operators to actually implement any of them. And with different solution sets for the cloud and for network operators (which is what we tend to have today), we cripple the solutions overall by limiting the problem set each one targets, which in turn limits how far any of them can develop.
You might fairly wonder whether asking the abstraction layer to handle resource selection across so many dimensions makes it so complicated that it’s impractical. It doesn’t, because it’s the requirements that are complicated, not the implementation. In my view, the alternative to having the abstraction layer handle this kind of precise resource selection is letting that task roll upward into the service model, which invites all sorts of errors in modeling the services. Better to have an abstraction layer that hides the complexity but not the visibility.