How Do We Define Software-Defined Network Models?

If networks are truly software-defined, what defines the software that defines them?  This is not only the pivotal question in the SDN and NFV space, but perhaps the pivotal question in the evolution of networks.  We knew how to build open, interoperable networks using fixed devices like switches and routers, but it’s increasingly clear that those old methods won’t work in the new age of virtualization and software.  What will?  There are a variety of answers out there, but it may be that none of them is really complete.

The classic network solution is the formal standards process, which we have seen for both NFV and SDN.  The big question with the standards approach is “What do you standardize?”  SDN focused on standardizing a protocol, OpenFlow, presuming that anything that supported the protocol would thereby be open and interoperable.  NFV originally said it wasn’t going to write standards at all, but simply select from those already available.  That approach isn’t being followed IMHO, and arguably what NFV did do was standardize a framework, an architecture, that identified “interfaces” it then proposed to standardize.
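To make the protocol-centric approach concrete, here’s a minimal sketch of the kind of match/action flow rule OpenFlow standardizes.  This is purely illustrative Python, not a real OpenFlow library, and the field names are simplified for readability.

```python
# Illustrative sketch only: a simplified match/action flow rule of the kind
# OpenFlow standardizes.  Field names are chosen for readability, not taken
# from the actual OpenFlow wire protocol.
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class FlowRule:
    priority: int           # higher-priority rules match first
    match: Dict[str, str]   # header fields to match (e.g., destination IP)
    actions: List[str]      # what the switch does with matching packets

# A controller "programs" a switch by pushing rules like this one.  The open,
# standardized part is the protocol that carries the rule between controller
# and switch, not the software on either end of that connection.
rule = FlowRule(
    priority=100,
    match={"eth_type": "ipv4", "ipv4_dst": "10.0.0.5"},
    actions=["output:2"],
)

if __name__ == "__main__":
    print(rule)
```

The point is that openness stops at that protocol boundary; everything above and below it is left to the implementations.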

I don’t think that either SDN or NFV represents a successful application of formal standards to the software-defined world.  You could kick around the reasons why that’s true, but I think the root cause is that software design doesn’t work the way formal standards processes do.  You really need to start software at the top, with your benefits and the pathway you propose to follow in achieving them.  Both SDN and NFV defined their models before they had identified the critical benefits and the critical steps that would be needed to secure them.

A second approach, this one from the software side, is the open-source model.  Open-source software is about community development: projects staffed by volunteers who contribute their efforts, aimed at producing a result that’s open for all to use without payment.  It worked with Linux, so why not here?

I’m a fan of open source, but it has its limitations, the primary one being that the success of a project depends on the right software architecture, and it’s hard to say where that architectural vision comes from.  For Linux, it came from one man, and his inspiration was a running operating system (UNIX) mired in commercial debates and competing versions.  But for SDN and NFV there’s not just one division but a set of divisions on the open-source side, and they make things even more complicated.

One obvious division is among competing projects that share the same goal.  Both SDN and NFV already have that, and even if all the projects are open, they are different, and so they threaten interoperability by creating competing software models as the targets for integration and deployment.  What works with one will probably not work with the others, at least not without special integration work.

Another division is the conflict between end-game and evolutionary-path approaches.  We have projects like CORD (Central Office Re-architected as a Datacenter) that define the future end-state of software-driven networking, and others, like the various NFV MANO projects, that define stepping stones toward that future.  It’s not clear to many (including me) just what future network model all the MANO projects would generate, and it’s not clear what actionable steps toward CORD would look like.

All of this uncertainty is troubling at best, but it’s intolerable if you want operators to commit to a big-budget transformation.  Add to this the fact that the important work being done today is de facto committed to one of these (flawed) approaches, and you can see that we could have a big problem.  It may even be too big to solve for current software-driven initiatives, but at the least we should try to lay out the right approach so future initiatives will have a better shot at success.  If we can then retrofit the right future answer onto current work, we have at least a rational pathway forward.  If we can’t make the retrofit work, then we’ll have to accept that current initiatives are not likely to be fully successful.

Software projects have to start with an architecture, because you build software by successively decomposing missions and goals into functions and processes.  The architecture can’t be a “functional” one in the sense of the ETSI End-to-End model, because it has to describe the organization of the software.  ETSI ended up producing a high-level software architecture, perhaps without intending to, because you can’t translate a functional model into software any other way.  A software expert would not have designed NFV that way, and the problem cannot be fixed by tighter interface descriptions and the like; the software design based on the model isn’t optimal, period.  SDN has a similar problem, but the OpenDaylight work there, combined with the fact that SDN is a much more contained strategy, has largely redeemed the SDN approach.  Still, the fact that there has to be an “optical” version of the spec demonstrates that the approach was wrong; the right design would have covered any path type without needing extensions.
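To illustrate the distinction I’m drawing (this is my own sketch, not anything taken from the ETSI documents, and the component names are invented), compare a functional model, which is really a labeled block diagram, with a software decomposition that defines components, responsibilities, and the interfaces that fall out of them:

```python
# My own illustration, not drawn from any spec: a functional model versus a
# software architecture.
from abc import ABC, abstractmethod
from typing import List

# A functional model is essentially a labeled block diagram: names of things,
# not software you can build.
functional_model = ["Orchestrator", "VNF Manager", "Virtualized Infrastructure Manager"]

# A software architecture decomposes the mission into components with explicit
# responsibilities and interfaces, which is what you actually implement.
class DeploymentTarget(ABC):
    """Anything that can host a deployed function: a VM, a container, a device."""
    @abstractmethod
    def deploy(self, package: str) -> str: ...

class ServiceOrchestrator:
    """Top-down: mission (service order) to functions (packages) to processes."""
    def __init__(self, target: DeploymentTarget):
        self.target = target

    def instantiate(self, packages: List[str]) -> List[str]:
        # The interfaces emerge from the decomposition; they aren't the start.
        return [self.target.deploy(p) for p in packages]

class ContainerHost(DeploymentTarget):
    def deploy(self, package: str) -> str:
        return f"{package} running in a container"

if __name__ == "__main__":
    print(ServiceOrchestrator(ContainerHost()).instantiate(["vFirewall", "vRouter"]))
```

The second form forces you to decide how the software is organized, which is exactly the decision a functional diagram defers.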

Standards, in the traditional network sense, aren’t going to generate software architectures.  That requires software architects, who are rarely involved in formal standards processes.  It would certainly be possible to target the creation of a software architecture as part of a standards process, though, and we should do that for any future software-defined activities.  We didn’t do it with SDN and NFV, and it’s exceptionally difficult to retrofit a new architecture onto an existing software project.  That means open-source software would have to evolve in an optimal direction on its own, based on recognized issues and opportunities.  Which, of course, could take time.

We may have to let nature take its course now with SDN and NFV, but in my view it’s time to admit that we can’t fit the right model onto the current specifications, and that in any case we’re past the point where standards and specifications will help us.  Once we have an implementation model, we need to pursue it.  If we have several, we need to let market conditions weed them out.  That means that the current SDN and NFV standards groups shouldn’t drive the bus at all, but rather should undertake specific and limited missions to harmonize the multiplicity of approaches being taken by open source.

Specs and standards guide vendor implementations, and it’s clear that in the case of SDN and NFV we are not going to get implementations that fully address operators’ benefit goals.  We have to start with things that do, and in my own view there is only one that does: AT&T’s ECOMP, now merged with OPEN-O into the ONAP project in the Linux Foundation.  ECOMP provides the total-orchestration model that the ETSI spec and the other MANO implementations lack.  That’s true not only for NFV but also for SDN.
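To show what I mean by “total orchestration” (a conceptual sketch in Python, not ECOMP or ONAP code; the class and method names are my own inventions), the essential point is that a single service model drives both hosted functions and network connectivity, rather than leaving NFV deployment and SDN control to separate silos:

```python
# Conceptual sketch of total orchestration; not ECOMP/ONAP code, and the names
# here are invented for illustration.  One service model drives both the hosted
# functions (the NFV side) and the connectivity (the SDN side).
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class ServiceModel:
    name: str
    hosted_functions: List[str]         # e.g., a virtual firewall
    connections: List[Tuple[str, str]]  # endpoint pairs to be connected

class NfvDomain:
    def deploy(self, function: str) -> str:
        return f"deployed {function}"

class SdnDomain:
    def connect(self, a: str, b: str) -> str:
        return f"connected {a} to {b}"

class TotalOrchestrator:
    """Walks one service model and dispatches work to every resource domain."""
    def __init__(self, nfv: NfvDomain, sdn: SdnDomain):
        self.nfv, self.sdn = nfv, sdn

    def realize(self, model: ServiceModel) -> List[str]:
        steps = [self.nfv.deploy(f) for f in model.hosted_functions]
        steps += [self.sdn.connect(a, b) for a, b in model.connections]
        return steps

if __name__ == "__main__":
    svc = ServiceModel("vpn-with-firewall",
                       hosted_functions=["vFirewall"],
                       connections=[("site-A", "site-B")])
    print(TotalOrchestrator(NfvDomain(), SdnDomain()).realize(svc))
```

A per-domain MANO stack can do the first loop; what has been missing is the single model and single process that does both.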

It’s time for a change here, and what we need to change to is the new ONAP platform.  The best role for ETSI would be to map its work to ONAP and facilitate the convergence of the MANO alternatives with it.  The best role for the ONF would be to do the same with SDN.  Then we need to get off the notion that traditional standards can ever successfully drive software virtualization projects.