More on NFV Orchestration and Open Source

Carol Wilson of Light Reading did a nice piece on the operators’ mixed position on open source, quoting comments from the Light Reading NFV Carrier SDN event.  I’ve blogged about some of the points of the discussion, but I want to pick up the ones I hadn’t covered, or perhaps hadn’t covered fully.  The fall is the season of networking shows for the carrier space, so it always turns up interesting new things to review.

The opening point of the article is very important in my view; the four operators on the panel said they saw no value in waiting for standards, which they felt just slowed things down.  That’s the view I get from operators generally these days, though perspectives differ depending on which operator organization you’re talking to.  Most now realize that the market isn’t going to wait for standardization, which is why a path like open source is important.

Operators and standards processes originated at a time when operators planned to support a future demand source whose emergence was under their control.  Today, the only service that matters much is Internet/broadband, and for that service the OTTs are the only relevant driver.  That means operators risk falling dangerously behind the curve if they can’t respond to outside influences.  Standards groups have never worked nearly fast enough, and probably never will.

Where I disagree with the operators on the panel, and where at least a slight majority of operators I’ve talked with also disagree, is the notion that APIs and abstraction layers are enough to replace models.  There are situations where that can be true, but also situations where APIs and abstractions are a doorway into the wrong room.

I don’t really like the term “abstraction layer” unless it’s coupled with the basic truth that an abstraction layer abstracts something into something else.  The first “something” is the set of resources/infrastructure available to host or support services.  The “something else” is the set of abstract elements that get mapped onto that first set.  A virtual machine (VM) is an abstraction of a server.  The point is that you can’t have an abstraction layer without having the abstractions, which, if they aren’t models, are something that sure looks like models.

I agree with the comments DT made in its keynote at the event; the industry would benefit from defining the abstractions as models.  It goes back to the notion of an “intent model,” which is an abstraction of a set of functional behaviors, linked to a data model that defines the parameters and status variables.  A “router” might be an example of an intent-model abstraction, and if it were, the implementations that could claim to be a router would be those that could map to the abstraction.  Beyond that capability, nothing else would matter.  That seems to me to be a key to open implementations of networks.
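To make that concrete, here’s a minimal sketch of what a “router” intent model might look like in code; the class and field names are entirely my own invention, not drawn from any standard.  The point is simply that the abstraction is a contract of parameters and status variables, and anything claiming to be a router maps itself to that contract.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a "router" intent model: the abstraction is
# defined by its externally visible parameters and status variables,
# not by how any implementation realizes it.  All names are invented.

@dataclass
class RouterIntent:
    # Parameters a higher layer may set on any conforming "router"
    routing_protocol: str = "bgp"
    interface_count: int = 2
    throughput_mbps: int = 1000
    # Status variables any conforming implementation must report
    status: dict = field(default_factory=lambda: {
        "operational": False,
        "active_routes": 0,
    })

class RouterRealization:
    """Anything that claims to be a 'router' maps itself to the intent
    model; from above, nothing beyond this contract matters."""
    def apply(self, intent: RouterIntent) -> None:
        raise NotImplementedError
    def report(self) -> dict:
        raise NotImplementedError

# A physical router, a vRouter VNF, or a white-box device would each
# provide its own RouterRealization; all are interchangeable above.
```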

The alternative to having a set of common abstract models like “router” is to let everyone create their own abstraction, “router 1” through some arbitrary “router n.”  If we do that, then any vendor could create a proprietary abstraction for “router,” and none of them would be assured of interoperating.  Differences in the properties of the abstraction would mean differences in how it could be realized from below, and used from above, in a service.  That means everything would be vendor-specific and brittle.
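A deliberately contrived sketch shows why.  Assume two vendors each publish their own “router” model (the shapes below are invented); anything written against one simply breaks against the other:

```python
from dataclasses import dataclass

# Contrived illustration: two vendor-proprietary "router" abstractions
# with incompatible shapes.  Field names are invented.

@dataclass
class VendorARouter:
    protocol: str
    ports: int

@dataclass
class VendorBRouter:
    routing_stack: str            # same concept, different name
    interface_descriptors: list   # same concept, different structure

def provision(router: VendorARouter) -> str:
    # Service logic written against Vendor A's abstraction...
    return f"{router.protocol} router with {router.ports} ports"

print(provision(VendorARouter("bgp", 4)))        # works
# provision(VendorBRouter("bgp", ["ge-0/0/0"]))  # fails: no .protocol field
```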

What I’m really not comfortable with is the notion that API specifications solve anything.  An API exposes the features of a component.  The nature of the API depends on what the features are, what data is associated with them, and to a degree on what the architecture of the two components the API links is supposed to be.  At one extreme, you could say that every intent-model abstraction has its own API, which gets us to the point of having to explicitly code for model relationships.  At the other extreme (the right one, in my view), you could say that there is really only one API, the “intent-API,” which passes the data elements as a payload.  That allows a general event-and-model-driven relationship.
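Here’s a hedged sketch of that one-API idea, with hypothetical names throughout: a single generic operation that dispatches on the model type and the event, with the model’s data elements riding along as a payload.

```python
import json

# Illustrative "intent-API": one generic entry point for every model,
# with the data elements carried as a payload and the trigger expressed
# as an event.  Names and fields here are hypothetical.

def router_setup(payload: dict) -> dict:
    # Only the handler knows what "router" parameters mean; the API
    # itself is model-neutral.
    return {"status": "active", "routes": payload.get("initial_routes", 0)}

def unknown(payload: dict) -> dict:
    return {"status": "error", "reason": "no handler for model/event"}

HANDLERS = {("router", "setup"): router_setup}

def intent_api(model_type: str, event: str, payload: dict) -> dict:
    """The single API: dispatch on model and event rather than coding
    a distinct interface for every abstraction."""
    return HANDLERS.get((model_type, event), unknown)(payload)

print(json.dumps(intent_api("router", "setup", {"initial_routes": 12})))
```

Adding a new abstraction then means registering handlers for it, not coding a new interface into every component that touches the model.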

What most operators mean by “API” is a high-level interface, not to service orchestration or composition or lifecycle management, but to a service portal that is then linked downward to that detailed stuff.  Portal APIs don’t compose, they expose.  If they expose a lousy design, then the best of them is still a lousy API.  You worry about exposure once you have an effective underlying service lifecycle architecture to expose.

Another comment I can’t agree with is that MANO’s issues arise from the fact that it has to “interact with so many different pieces and layers, there is no open source or traditional vendor solution that tackles that, it’s the nature of the beast….”  MANO’s problems are far deeper, and different, in my view.  In fact, its largest problem is that it was scoped to interact in one very simple and limited way: as a means of creating the functional equivalent of a device from the virtual functions that make it up.  That means that everything involved in composing services as a whole is out of scope.  You can’t push service architecture up from below; you have to impose it from above.

OSSs, with their migration issues and complexity, really aren’t the problem either.  They were designed to interact with something very close to a set of service/device abstractions, so a good model-based service lifecycle management approach would be able to mesh with OSS/BSS relatively easily.  The fact that we’re expecting OSS/BSS to change or adapt to MANO shows that we’re pushing spaghetti uphill again.  We already know how the OSS/BSS looks at services; we just have to model how resources are harnessed to create them, and we have a nice meet-in-the-middle.
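One way to picture that meet-in-the-middle, again with invented names: the service-side model is a tree of abstract elements the OSS/BSS already understands, and the resource-side bindings tie each abstraction to whatever realizes it, so either side can change without dragging the other along.

```python
# Sketch of the "meet-in-the-middle": the service-side model is what
# the OSS/BSS already understands; the resource-side bindings are the
# only thing that changes as infrastructure changes.  Names invented.

SERVICE_MODEL = {
    "service": "business-vpn",
    "elements": [
        {"abstraction": "router", "site": "hq"},
        {"abstraction": "router", "site": "branch"},
        {"abstraction": "tunnel", "between": ["hq", "branch"]},
    ],
}

# Resource-side: each abstraction maps to whatever realizes it today.
BINDINGS = {
    "router": "vrouter-vnf",   # could equally be a legacy device
    "tunnel": "sdn-overlay",
}

def realize(model: dict, bindings: dict) -> list:
    """Walk the service model the OSS/BSS sees and bind each abstract
    element to a concrete resource recipe."""
    return [(e["abstraction"], bindings[e["abstraction"]])
            for e in model["elements"]]

print(realize(SERVICE_MODEL, BINDINGS))
# [('router', 'vrouter-vnf'), ('router', 'vrouter-vnf'), ('tunnel', 'sdn-overlay')]
```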

The point here is that yes, standards are a waste of time.  So is open source if it’s not done right, meaning if it’s not based on intent modeling and event-driven systems.  Right now we don’t have any example of open source that is, and so we don’t have anything that I think will solve the problems of service lifecycle automation.  Could we get there?  Sure, and in my view fairly quickly.  I think the whole situation could be defined correctly in six months if it were approached the right way.  I tried to lay out the best model in my ExperiaSphere project, and I still believe that’s the way to do it.

All the smoke around the NFV issues we’re now seeing is a good sign; you can’t fix a problem you don’t recognize and accept.  It’s also true that accepting a problem doesn’t fix it.  Open source is a software project strategy and a business model, not a software architecture, and you can’t have good software without a good software architecture.  In fact, without a good architecture, any project is a boat anchor, not a stepping-stone, because it tends to freeze thinking more and more as it goes on.  We’re rapidly running out of time to get this right at the open-source level, which is why I think a vendor will ultimately get to the right answer first.