Most buzzwords these days are probably hype, and all of them contain at least a measure of it. The saddest part is that the hype often covers legitimate value points, and that may be what’s happening in the carrier cloud and edge computing world. We hear a lot about “hyperconvergence” and “composable infrastructure,” but we’ve heard very little about what makes either concept truly useful. And yes, both have genuine value.
It might seem that hyperconvergence, which is all about packing a lot of stuff into a single data center, and composable infrastructure, which is all about building virtual servers from agile hosted components, are totally different. In fact, the missions of the two technologies are converging, particularly for carrier cloud. When you host stuff on a pool of resources, you want good performance for everything you put there. “Good” means not introducing a lot of unnecessary latency in connecting things, not creating small, specialized, and inefficient resource pools to handle unusual hosting needs, and not creating management bottlenecks. Both of our target technology trends can help with all of these.
There have always been benefits to big data centers, relating to efficient use of real estate, heating and cooling, power, and even operations staff. A less-recognized benefit is that if you have a lot of servers in one place, you can probably connect the components of an application or service through fast local switching, reducing latency and improving performance. Operators I’ve talked with have been (like everyone else) weighing the benefits of concentrated resource pools against distributed ones, and they tell me that if you set aside edge-compute considerations for the moment, a large data center with highly concentrated hosting resources will offer better economics and better user-perceived performance.
Hyperconvergence aims at getting a lot of servers into a rack, which means a lot of servers into a given data center. It also tries to improve switching performance with efficient multi-tier switches and fabrics, which widens the performance gap between a single combined data center and one broken up into physically separate data centers connected by WAN links. The more we focus on the cloud, in any form, the more hyperconvergence matters. Even big OTT players like Facebook, who deploy their own data centers, like hyperconvergence.
In effect, the limiting factor in hyperconvergence is latency within the data center. The more efficiently connectivity inside the data center keeps connection latency down, the better the case for giant, hyperconverged data centers. If you reach a point of diminishing returns because of network connectivity issues, you top out on the benefits of data center concentration.
Latency isn’t just about server-to-server, though. Database resources, the storage systems that hold information for those components you deploy, are also a factor. Where services or applications rely on storage arrays, it’s very possible that the biggest argument in favor of resource concentration is that it normalizes the access time needed to interact with those arrays. If you distribute your data center, you either put your data store in one place or in several, and no matter which way you go, you’ve impacted resource pool efficiency and performance. A single array has to be somewhere, and any components that access that array from a different data center will pay a penalty. Multiple arrays, if the entire data set is accessed similarly by all components, will just change how the problem of latency is distributed. If you have specialized component placements to accommodate the fact that some components access different parts of your overall database, you limit the size of your resource pool because components have to be deployed with their data.
You do have to be wary in the database-as-a-connected-resource space. If you read and write at the detail-record level, latency is a killer, of course. Many database applications don’t do raw direct access to data at all, but instead issue queries that can be sent to a remote query engine, which returns only the results. Good application design can address database efficiency, but database latency still matters in transactional applications where every transaction means a database access and there are a lot of transactions being done.
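To make that concrete, here’s a minimal back-of-envelope sketch in Python. The round-trip figures (50 microseconds in-rack, 2 milliseconds across data centers over WAN links) are purely illustrative assumptions, not measurements, but they show how per-access latency multiplies when a transaction touches the database many times:

```python
# Back-of-envelope comparison of network latency per transaction when each
# transaction touches the database many times versus just once.
# All figures are illustrative assumptions, not measurements.

IN_RACK_RTT_MS = 0.05    # assumed round trip within a hyperconverged data center
CROSS_DC_RTT_MS = 2.0    # assumed round trip between separate data centers over WAN links

def transaction_latency_ms(accesses_per_txn: int, rtt_ms: float) -> float:
    """Network latency a single transaction accumulates from database round trips."""
    return accesses_per_txn * rtt_ms

for accesses in (1, 20, 200):
    local = transaction_latency_ms(accesses, IN_RACK_RTT_MS)
    remote = transaction_latency_ms(accesses, CROSS_DC_RTT_MS)
    print(f"{accesses:>3} accesses/txn: in-rack {local:6.2f} ms, cross-DC {remote:7.2f} ms")
```

At one access per transaction, the placement of the array barely matters; at two hundred, the cross-data-center penalty dwarfs any processing time, which is why record-level access patterns push you toward concentration.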
Other forms of resource specialization have their own potential impacts on resource pools and application performance. If a given component needs a different kind of resource than the typical deployment uses, you face a choice: establish a specialized resource pool for each specialized component-to-resource relationship, or make all the specialized resources generally available. The former means less efficiency; the latter raises the average cost of hosting.
That’s where composable infrastructure comes in. Suppose that storage, memory, even custom chips, were designed to be swapped in and out as needed from a pool of those specialized resources. Now you’d be able to compose servers to match requirements, which means you could provide specialized hosting anywhere within the range of your composability. That could radically improve hosting efficiency and performance within the domain where composition works, if it’s done right.
The challenge with composable infrastructure lies in the efficiency of the composition. What you’re doing is building a server from piece parts, and the critical question is the connection between those parts. If the pieces of your composed virtual server were connected using traditional networking, you’d be introducing network latency into resource accesses that a traditionally integrated server handles on an internal bus at very high speed. Composable infrastructure solutions should therefore be divided into a “networked” class that depends on local data center networking and a “bus” class that provides some sort of high-speed component interface.
Some resources, like memory, clearly cannot be connected using traditional networking; the impact on performance would be truly awful. With other resources, the question is how the resource is used. For example, if a GPU were used in some specialized calculation that had to be done a thousand times for a given event or transaction, traditional connectivity would probably introduce too much delay. If the calculation is done once per event/transaction, it could be fine.
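A rough sketch of that trade-off, again with per-call overhead figures that are assumed purely for illustration, shows why the networked/bus division matters most for chatty components:

```python
# Illustrative comparison of a composed GPU reached over conventional data center
# networking versus one attached through a local high-speed (bus-class) interface.
# The per-call latency figures are assumptions chosen only to show the scaling effect.

BUS_CLASS_LATENCY_MS = 0.005   # assumed per-call overhead for a bus-style fabric
NETWORKED_LATENCY_MS = 0.5     # assumed per-call overhead over traditional networking

def added_delay_ms(calls_per_event: int, per_call_ms: float) -> float:
    """Total connection overhead a single event/transaction accumulates."""
    return calls_per_event * per_call_ms

for calls in (1, 1000):
    print(f"{calls:>5} GPU call(s) per event: "
          f"bus-class {added_delay_ms(calls, BUS_CLASS_LATENCY_MS):7.3f} ms, "
          f"networked {added_delay_ms(calls, NETWORKED_LATENCY_MS):7.1f} ms")
```

Under these assumed numbers, one call per event adds half a millisecond over the network, which many applications can live with; a thousand calls adds half a second, which almost none can.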
There is value to composability as long as the connection efficiency of the solution matches the requirements of the applications/components involved. That’s a requirement that only the data center owner can enforce, or even assess. It’s far from clear at this point just what range of connection efficiencies might be presented in the market, and at what cost. Similarly, we don’t know whether application/service design could separate components that needed specialized resources from those that didn’t, to permit better management of specialized resources. And finally, we don’t know how the edge might fit into this.
Edge computing isn’t a universal benefit. An edge data center is almost certainly less efficient in terms of resource utilization than one deeper in the metro area, or even the region. The latency associated with a connection to a data center depends first and foremost on the number of network devices you transit; geography is a secondary issue, given that data propagates in fiber at roughly a hundred thousand miles per second. Applications vary in their sensitivity to latency, too. Autonomous vehicles seem a perfect example of why latency could be important, but even a fast-moving vehicle wouldn’t go far in the time it takes an event to cross a hundred miles of fiber. That trip takes about a millisecond, during which a vehicle moving at 60 mph (88 feet per second) travels roughly 0.088 feet. In any event, avoidance of nearby objects isn’t likely to be ceded to a network-connected entity, but rather built into car control locally.
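The arithmetic behind that example, using the same round numbers, is simple enough to check:

```python
# The round numbers behind the autonomous-vehicle example above.
FIBER_SPEED_MILES_PER_SEC = 100_000   # conservative round figure for propagation in fiber
DISTANCE_MILES = 100                  # event carried over a hundred miles of fiber
VEHICLE_SPEED_MPH = 60                # a fast-moving vehicle

propagation_sec = DISTANCE_MILES / FIBER_SPEED_MILES_PER_SEC      # 0.001 s, i.e. one millisecond
vehicle_feet_per_sec = VEHICLE_SPEED_MPH * 5280 / 3600            # 88 feet per second
distance_traveled_ft = vehicle_feet_per_sec * propagation_sec     # about 0.088 feet

print(f"Propagation delay: {propagation_sec * 1000:.1f} ms")
print(f"Vehicle travel in that time: {distance_traveled_ft:.3f} feet")
```

If anything, the round figure is conservative; light in fiber actually travels somewhat faster than a hundred thousand miles per second, which only strengthens the point.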
If there are few applications/components that have to be hosted at the edge, then hyperconvergence and composability might be unimportant to edge computing. Certainly at the edge, hyperconvergence is less a benefit for efficiency than for reducing real estate requirements, which would be a factor only if you needed a lot of servers in your edge data centers. Composability might still be a factor in edge computing despite the lower hosting requirements, though, because a small data center can’t fully utilize specialized server configurations; too few applications would likely be running.
Not everything needs to be hyperconverged or composed, and it’s likely that before these concepts become broadly useful, we’ll need to see more cloud commitment and greater use of specialized resources. It would also help to have an awareness of how component architectures and resource specialization intersect; applications should be divided into components based in part on how specialized resources are used within them. Still, while the timing of the market hype may be optimistic, the eventual reality of the benefits is surely there.