Are Vendors Responding to the “Lost” Carrier Cloud?

Large-scale data center deployment by operators depends on having large-scale drivers.  I’ve pointed out in past blogs (one earlier this week) that public cloud providers saw the lack of a sound carrier cloud strategy as an opportunity to address those drivers, and thus induce operators to outsource their carrier-cloud missions.  5G is an obvious target area, as my earlier blog said.

If carrier cloud is a hundred thousand data centers’ worth of business, it’s clear that vendors would like to see operators building their own clouds.  To make that happen, there would have to be a credible model for deployment, one that didn’t create the threat of vendor lock-in.  HPE may have decided to take the lead in generating one, because it’s announced its Open Distributed Infrastructure Management Platform, an alliance between HPE, the Linux Foundation, AMI, Apstra, IBM’s Red Hat, Intel, Tech Mahindra and World Wide Technology.

ODIMP (why do vendors pick such long product names?) isn’t actually a 5G-specific tool; it’s a carrier cloud deployment and management framework that’s intended to address the biggest potential risk of 5G core, which is that the complexity of a hosted-function service framework would overwhelm traditional operations.  Nokia, as I said in yesterday’s blog, has taken its own swipe at both the 5G space and service automation overall with its AVA Platform.  Is ODIMP a better strategy?  Is it a real solution to the 5G Core problem?  Let’s try to dig a bit and see.

The platform is based on DMTF Redfish, which is a set of specifications that define open, RESTful APIs for management of carrier cloud and other “converged, hybrid” infrastructure.  Redfish schemas are nice because they’re rather like intent models; an element represents a resource, a collection of resources, a service, etc.  While the first release focuses on servers, the goal is to cover the whole of the “software-defined data center” (SDDC) concept.
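To make “open, RESTful” concrete, here’s a minimal sketch of walking the standard Redfish service root to enumerate servers.  The endpoint address and credentials are placeholders I’ve invented for illustration, not anything from the ODIMP announcement; only the URI paths and property names come from the Redfish specifications themselves.

```python
# Minimal sketch: walking a standard Redfish service root to list servers.
# The endpoint address and credentials are placeholders, not part of ODIMP.
import requests

BASE = "https://bmc.example.net"      # hypothetical Redfish-compliant management endpoint
AUTH = ("operator", "password")       # placeholder credentials

def get(path):
    """GET a Redfish resource and return its JSON document."""
    # verify=False only because lab BMCs often use self-signed certificates
    resp = requests.get(BASE + path, auth=AUTH, verify=False, timeout=10)
    resp.raise_for_status()
    return resp.json()

service_root = get("/redfish/v1")                    # the standard Redfish service root
systems = get(service_root["Systems"]["@odata.id"])  # the ComputerSystemCollection

for member in systems["Members"]:
    system = get(member["@odata.id"])                # each member is one server resource
    print(system.get("Id"), system.get("PowerState"), system.get("Model"))
```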

Having an entire data center, or in fact a collection of data centers, abstracted into a set of schema/elements is a nice touch, something that would benefit any application or service that depended on hosting features on pooled resources, particularly if the pools were made up of edge and more centralized data centers.  This model lets an operator build up a data center from Redfish-compatible gear, then define its elements and structure, or define a structure and backfill it with gear.  Since everything conforms to the Redfish APIs, the applications that manipulate the SDDC are vendor-independent, so lock-in isn’t a worry.
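As an illustration of that abstraction idea (a toy model, not ODIMP code), here’s how a pool of edge and central sites might be represented as a tree of vendor-neutral elements that applications traverse without caring whose gear sits underneath.  None of the class or field names below come from the platform; they’re assumptions for the sketch.

```python
# Illustrative only: a toy element tree for pooled edge and central sites,
# in the spirit of the "element = resource or collection" abstraction.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Element:
    """One abstract element: a server, a site, or a pool of sites."""
    name: str
    kind: str                      # e.g. "server", "site", "pool"
    children: List["Element"] = field(default_factory=list)

    def find(self, kind: str) -> List["Element"]:
        """Return every descendant element of the given kind, vendor-neutrally."""
        found = [c for c in self.children if c.kind == kind]
        for c in self.children:
            found.extend(c.find(kind))
        return found

# Build the structure first, then "backfill" it with Redfish-compatible gear.
pool = Element("carrier-cloud", "pool", [
    Element("edge-site-01", "site", [Element("edge-srv-1", "server")]),
    Element("central-dc-01", "site", [Element("core-srv-1", "server"),
                                      Element("core-srv-2", "server")]),
])
print([s.name for s in pool.find("server")])   # applications see elements, not vendors
```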

The Resource Aggregator is perhaps the nicest feature of the platform.  This is what does the modeling work, and modeling is the underpinning of software-centric zero-touch service lifecycle automation.  It’s also the foundation of the TMF’s NGOSS Contract work, seminal in my view with regard to data-driven service management (as opposed to AI management).  The ODIMP Resource Aggregator is not, as some stories have stated, a tool specifically for enterprises (meaning non-service-provider buyers).  It’s HPE’s implementation (supported and augmented) of the ODIMP.  The models produced, as I’ve noted, structure and generalize by abstracting infrastructure, surely the right approach.
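For readers unfamiliar with the NGOSS Contract notion, here’s a hedged sketch of the data-driven idea: each model element carries a state/event table that maps lifecycle events to processes, so the model itself steers automation.  The states, events, and handler names below are invented for illustration; this is not the Resource Aggregator’s actual interface.

```python
# Sketch of data-driven (NGOSS-Contract-style) lifecycle handling: for each
# (state, event) pair the model names exactly one process to run.
from typing import Callable, Dict, Tuple

State, Event = str, str
Handler = Callable[[str], State]   # takes the element name, returns the next state

def deploy(element: str) -> State:
    print(f"deploying {element}")
    return "active"

def remediate(element: str) -> State:
    print(f"remediating {element}")
    return "active"

# The "contract": the table, not ad hoc code, decides what happens next.
STATE_EVENT_TABLE: Dict[Tuple[State, Event], Handler] = {
    ("ordered", "activate"): deploy,
    ("active", "fault"): remediate,
}

def handle(element: str, state: State, event: Event) -> State:
    """Zero-touch step: look up the handler from the model and apply it."""
    handler = STATE_EVENT_TABLE.get((state, event))
    return handler(element) if handler else state

print(handle("edge-srv-1", "ordered", "activate"))
```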

There’s a lot of good stuff here, but it’s important to note that the whole announcement is about a management framework for the data center infrastructure associated with hosted virtual functions of some sort, including those used in 5G.  It deals with the complexity of 5G and other “carrier-cloud” services by standardizing the SDDC framework that hosts stuff, but it doesn’t provide either the “stuff” that’s being hosted, or the specific applications that do the deployment and management.  Think of it as middleware, or think of it as analogous to Nokia’s AVA platform, whose “solutions” then include 5G.

Well, sort of.  It’s fair to say that there are overlaps between AVA and ODIMP, but what the latter does is sort-of-implicit in the former.  There’s baseline management intelligence built into AVA, and that is not a part of ODIMP.  For the two to be equivalent, you’d need to lay on a service management application that could do the zero-touch stuff you wanted and work with the Redfish-schema framework.  To be equivalent to the AVA 5G support, you’d need to add specific solution logic for 5G.  HPE has such stuff, of course, but that’s not what’s being announced.
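To show what “laying on” such an application might look like, here’s a rough sketch of a zero-touch loop that reads infrastructure state through Redfish-style APIs and feeds the kind of state/event table sketched above.  The URLs, credentials, and the health-to-event mapping are all assumptions for illustration, drawn from neither AVA nor ODIMP.

```python
# Rough sketch of the layering described above: Redfish-style polling at the
# bottom, a data-driven state/event handler on top.  Everything here is assumed.
import time
import requests

AUTH = ("operator", "password")   # placeholder credentials

def poll_health(system_url: str) -> str:
    """Translate a Redfish ComputerSystem's Status.Health into a lifecycle event."""
    doc = requests.get(system_url, auth=AUTH, verify=False, timeout=10).json()
    health = doc.get("Status", {}).get("Health", "OK")   # "OK", "Warning", or "Critical"
    return "heartbeat" if health == "OK" else "fault"

def zero_touch_loop(systems: dict, handle) -> None:
    """systems maps element name -> (Redfish URL, current state);
    handle(name, state, event) -> next state, as in the earlier table sketch."""
    while True:
        for name, (url, state) in systems.items():
            event = poll_health(url)
            systems[name] = (url, handle(name, state, event))
        time.sleep(30)   # a real implementation would subscribe to events, not poll
```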

I think, as I suggested at the start of this blog, that moves like this are a response to the growing risk that vendors like HPE see from carrier-cloud outsourcing to the public cloud.  Carrier cloud is an enormous investment (a hundred thousand data centers, remember, if all drivers are realized).  Furthermore, if the early justifications for carrier cloud are even a tiny bit sketchy, this isn’t the time (in global economic reality terms) to be taking a risk.  The more a data center can be positioned as software-defined and vendor-neutral, and the more compatible higher-layer service and 5G Core software are with it, the more palatable the build-out choice looks versus the “rent” choice for operators.

HPE has management tools; back in the early NFV days they had one of the few kits that recognized the idea of management by abstraction.  What I’d like to see HPE do is to frame ODIMP and the SDDC as a “hosting layer”, frame cloud tools as middleware, and frame their management tools as the ZTA layer—all in one document.  Right now, their stuff (like that of most vendors, frankly) suffers from dilution through microsegmentation.  If you break down even the most wonderful thing into small enough pieces, everything you look at seems to do nothing useful.

Some operators tell me that this problem arises from the vendors’ engagement model with them.  The CTO organizations, focused as they are on standards and initiatives like ODIMP, tend trees and not forests.  Most I’ve been involved with have shunned the notion of taking a top-down approach or addressing a systemic problem, in favor of making local progress.  That’s fine if everyone knows how to convert a series of turns into a route, but it’s a prescription for meandering if they don’t.

The biggest benefit I see from this is that it could unite the SDDC initiatives from the cloud with the hodgepodge of carrier-cloud-related initiatives.  We do need to think about creating infrastructure based on a strong abstraction-and-modeling approach, even if we use AI above it, or we risk too much difficulty adapting generalized software to the specifics of where we host it.  I’d still like to see either HPE or the ODIM people in the open-source project expand their presentation to give more overall service and infrastructure context.  Sometimes the devil isn’t in the details; it is the details.  Missions matter.