Could We Unify CORD and ECOMP to Accelerate Infrastructure Transformation?

If you like the idea of somehow creating a union between CORD and ECOMP, then the next obvious question is just where that union has to start.  The answer, in my view, isn’t in a place where both architectures contribute something that could be united, but where neither does enough and external unifying forces are essential.  That’s the notion of modeling: not resources, but functions.

In my last blog, I noted that integration depends on the ability to freely substitute different implementations of the same function without changing the service definitions or the management practices.  To make that happen, you need to have some Platonic shapes that define all the functions you intend to use in composing services…or even applications.  Each of these models then represents the “look and feel” of the function as seen from above.  The vendors who want to contribute those functions are responsible for building downward from the abstract model to make sure what they do fits seamlessly.
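
To make that concrete, here’s a minimal sketch of what one of those “Platonic shapes” might look like in code.  Everything in it is hypothetical (the class name, the methods, the vocabulary); the point is only that the model defines class-level behavior and says nothing about implementation:

    from abc import ABC, abstractmethod

    class RouterModel(ABC):
        """The abstract "router": the look and feel seen from above.

        Vendors build downward from this shape; nothing above it ever
        sees how routing is actually implemented.
        """

        @abstractmethod
        def deploy(self, parameters: dict) -> None:
            """Commit whatever resources this implementation needs."""

        @abstractmethod
        def connect(self, port: str, endpoint: str) -> None:
            """Attach an endpoint to a named port of the router."""

        @abstractmethod
        def status(self) -> str:
            """Report state in class-standard terms, not vendor terms."""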

The goal is to make a function’s “object” into a representation of that function throughout the service lifecycle.  You manipulate the function at the model level, and the manipulation is coupled downward into whatever implementation happens to be used.  That way, things that have to view or control a “router” don’t have to worry (or even know) whether it’s an instance of software, a physical device, or a whole system of routing features built either from SDN forwarding or by combining devices/software into a “network”.
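
Continuing that hypothetical sketch, the implementations below are freely substitutable precisely because everything upstream talks only to RouterModel; none of these class names come from CORD or ECOMP:

    class SoftwareRouter(RouterModel):
        """An instance of routing software, e.g. a hosted VNF."""
        def deploy(self, parameters): ...       # spin up the software instance
        def connect(self, port, endpoint): ...  # attach a virtual interface
        def status(self): return "up"

    class PhysicalRouter(RouterModel):
        """A real box, driven through its management interface."""
        def deploy(self, parameters): ...       # push a device configuration
        def connect(self, port, endpoint): ...  # provision a physical port
        def status(self): return "up"

    class SdnRouterNetwork(RouterModel):
        """Routing behavior composed from SDN forwarding rules."""
        def deploy(self, parameters): ...       # install flow rules on switches
        def connect(self, port, endpoint): ...
        def status(self): return "up"

    def audit(router: RouterModel) -> str:
        # Management code sees only the abstract "router".
        return router.status()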

The TMF really got a lot of this started back in the 2006-2010 timeframe, with two initiatives.  One was the “NGOSS Contract,” which proposed that events be steered to the appropriate lifecycle processes through the intermediary of the model service contract.  That approach was the first to make a contract (which the TMF modeled as a series of connected service elements) into a state/event machine.  The other was the Service Delivery Framework (SDF), which explicitly targeted the lifecycle management of services that consist of multiple functions/features.

To me, the union of these two concepts required the notion that each service element or model element (my “router”) be represented as an object that had properties determined by the class of feature it defined.  That object was then a little “engine” that had state/event properties and that translated standard class-based features (“a router does THIS”) into implementation-specific methods (“by doing THIS”).  A service was a structured assembly of these objects, and each service was processed by a lifecycle management software element that I called a “Service Factory”, a term the TMF briefly adopted.
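
A toy version of that engine, with all of the names and the tiny state/event table invented for illustration, might look like this:

    # Hypothetical state/event table: (state, event) -> (action, next state).
    LIFECYCLE = {
        ("ordered",  "activate"): ("deploy",  "active"),
        ("active",   "fault"):    ("repair",  "degraded"),
        ("degraded", "repaired"): ("noop",    "active"),
        ("active",   "teardown"): ("release", "terminated"),
    }

    class ModelElement:
        """One object in the service model, e.g. a router."""
        def __init__(self, name, implementation):
            self.name, self.state = name, "ordered"
            self.impl = implementation  # must offer deploy/repair/noop/release

        def handle(self, event):
            # Class-based feature ("a router does THIS")...
            action, self.state = LIFECYCLE[(self.state, event)]
            # ...translated into an implementation-specific method.
            getattr(self.impl, action)()

    class ServiceFactory:
        """Processes the lifecycle of a structured assembly of elements."""
        def __init__(self, elements):
            self.elements = elements

        def dispatch(self, target, event):
            # NGOSS-Contract style: the model itself steers the event.
            next(e for e in self.elements if e.name == target).handle(event)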

Service lifecycle management lives above the model.  It begins by instantiating a service model onto real infrastructure, making the connections between the “objects” that define the model and a service-instance-specific way of deploying or committing resources.  It never has to worry about implementation because it manipulates only the abstract vision (“router”).  That first step, deployment, is what binds the general object vision of available features (probably expressed as APIs) to the way each object is actually deployed in the referenced service.
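
Here’s a sketch of that binding step, with everything invented for illustration: a catalog resolves each abstract object in a service template to a concrete backend before any lifecycle event flows.

    class VendorAVnf:
        def deploy(self): print("spin up vendor A's router VNF")

    class WhiteBoxRouter:
        def deploy(self): print("configure the white-box device")

    # Hypothetical catalog: abstract class -> available backends.
    CATALOG = {"router": {"vendor-a-vnf": VendorAVnf, "white-box": WhiteBoxRouter}}

    def instantiate(template, choices):
        """Bind each abstract object to a backend for this service instance."""
        instances = []
        for obj in template["objects"]:
            backend = CATALOG[obj["class"]][choices[obj["name"]]]()
            backend.deploy()
            instances.append((obj["name"], backend))
        return instances

    # A two-router service, with a per-instance implementation choice.
    template = {"objects": [{"name": "edge", "class": "router"},
                            {"name": "core", "class": "router"}]}
    instantiate(template, {"edge": "vendor-a-vnf", "core": "white-box"})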

When a model is deployed, the abstract “model” has to be changed from a template that describes something into an instance that represents something.  There are two basic approaches to doing this.  One is to actually spawn a set of software objects that will then run to process service lifecycle events; in this approach, a service is a real software application made up of modules for the features.  The second approach is to use a general software tool that interprets the model as needed, meaning that the instance of a service model contains a set of references to software, not the software itself.  The references could be real pointers to software processes, or they could be a data model that would be passed to a generic software element.
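
In hypothetical code, the contrast looks like this: the first approach compiles the model into live software objects, while the second keeps the model as data and hands it to a generic engine.

    # Approach 1: spawn real software objects; the service IS the program.
    class RouterProcess:
        def on_event(self, event):
            print(f"router module handles {event}")

    service_as_software = [RouterProcess(), RouterProcess()]

    # Approach 2: the instance is data holding references, not software.
    service_as_data = {
        "elements": [
            {"class": "router",   "handler": "router_pkg.handle"},
            {"class": "firewall", "handler": "fw_pkg.handle"},
        ]
    }

    def generic_engine(model, event):
        for element in model["elements"]:
            # Resolve each reference and invoke it (resolution elided here).
            print(f"{element['class']}: would invoke {element['handler']} on {event}")

    generic_engine(service_as_data, "activate")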

CORD uses abstractions to represent things like the access network and service trunking, and there are also arguably standard models for resources.  The former are useful but not sufficient to model a service, because they don’t have the functional range needed to support all the service features.  The latter open the question of “standardization” below the service objects, which I’ll get to in a bit.

ECOMP also contributes elements.  It has the notion of a service model, though I’d argue it’s not as specific as the approach I’ve described, and it has the notion of service lifecycle management, again not as detailed.  Much of ECOMP’s detail is in the management and resource portion of the issue, again below the service model I’ve described.

If CORD describes the CO of the future and ECOMP describes the integration of elements, then the thing that would unite them in a logical sense is a complete statement of the service models that relate the processes of ECOMP to the resources of CORD.  To consider that, it’s now time to address the question of what happens underneath a service model.  Here we have three basic options to consider:

  1. We could use the same modeling approach below as we used for service models, so that the decomposition of a “router” object into a network of “router” objects would use the same tools (there’s a sketch of this after the list).
  2. We could use some other standardized modeling approach to describe how an “object of objects” is represented.
  3. We could let anything that works be used, forgoing standardization.
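
To make the first option concrete, here’s a hypothetical model fragment in which a “router” object decomposes into a network of “router” objects using exactly the same structure at both levels, so one set of tools can process any depth:

    # The same (class, properties, children) shape is used top to bottom.
    service_model = {
        "class": "router",              # the abstract object the service sees
        "properties": {"capacity": "10G"},
        "children": [                   # ...decomposed into a network of routers
            {"class": "router", "properties": {"site": "east"}, "children": []},
            {"class": "router", "properties": {"site": "west"}, "children": []},
        ],
    }

    def walk(node, depth=0):
        print("  " * depth + node["class"], node["properties"])
        for child in node["children"]:
            walk(child, depth + 1)

    walk(service_model)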

The best approach here, in my view, would depend on how many of the “other standardized modeling” approaches would actually be fielded in the market.  Below the service model, the mandate is to pick an implementation strategy and then connect it to the service model’s object-level APIs.  You could see the work of the NFV ISG and MANO living down here, and you could also see modeling options like TOSCA, the TMF SID, and YANG, and even more general data languages like XML or JSON.  The more options there are, the more difficult it would be to get a complete model from the underside of our highest-level service objects down to the resources that will have to be committed, because vendors would likely support only a few model choices: their own gladly, and everything else with great reluctance.
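
One way to picture the integration burden is as a registry of decomposers, one per lower-level modeling choice, plugged in beneath the object-level API; every choice a vendor declines to support is a gap in the chain.  The registry below is purely illustrative:

    # Hypothetical decomposers, one per lower-level modeling language.
    def decompose_tosca(blob): print("interpret TOSCA topology:", blob)
    def decompose_yang(blob):  print("interpret YANG data:", blob)

    DECOMPOSERS = {"tosca": decompose_tosca, "yang": decompose_yang}

    def decompose(object_name, model_type, blob):
        """Connect a service-model object to one implementation strategy."""
        handler = DECOMPOSERS.get(model_type)
        if handler is None:
            # A vendor that supports only its own model leaves this gap.
            raise NotImplementedError(f"no decomposer for {model_type}")
        handler(blob)

    decompose("router", "tosca", {"node_templates": {}})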

Clearly the last option leads to chaos in integration.  So does the second, unless we can define only a very limited set of alternative approaches.  That leaves us with the first option, which is to find a general modeling approach that would work top to bottom.  However, that approach fields about as many different choices as my second one did, and it then demands we pick one before we can go very far in modeling services.  Given all of this, what I’d suggest is that we focus on defining what must be standardized: the structure of those abstract functional objects like “router”.  From there, we’d have to let the market decide by adopting what works best.

It should be easy to unify CORD and ECOMP with service modeling, because both require it and even partially define it, yet neither seems firmly entrenched in a specific approach.  It’s also something the NFV ISG might be ideally positioned to provide, since the objects that need to be defined for the model all fall within the range of functions the ISG considers.  It could also be done through open-source activities (including CORD and ECOMP themselves), and it could be done by vendors.  Perhaps with all these options on the table, at least one could come to fruition.

There’s a lot at stake here.  Obviously, this could make both CORD and ECOMP much more broadly relevant.  It could also re-ignite the relevance of the NFV ISG, and it could help the TMF turn its ZOOM project into something other than a lifetime tenure for its members.  I also think that carrier cloud adoption could be accelerated significantly, perhaps by as much as two years, if something like this were done.  But make no mistake, carrier cloud is going to happen and result in a lot of new money in the IT world.  Once that’s clear (by 2020 certainly) I think there will be a rush to join in.  For some, it will be too late to reap the full benefits.