Carol Wilson raised an interesting point in an article in Light Reading on SDN and NFV—that of collaboration. I’m happy that she found the approach CloudNFV has taken to collaboration and openness credible, but I think some more general comments on the topic would be helpful to those who want to assess how “open” a given SDN or NFV approach is, and even whether they should care much about openness at all.
An important starting point for the topic is that the network devices and servers that make up “infrastructure” for operators are going to have to be deployed in a multi-vendor, interoperable way, period. Nothing that doesn’t embrace the principle of open resources is likely to be acceptable to network operators. However, “open” in this context is generally taken to mean that there exists a standards-defined subset of device features which a credible deployment can exercise across vendor boundaries. We know how this works for network hardware, but it’s more complicated when you bring software control or even software creation of services into the picture.
If you step up to the next level, I believe there are three possibilities: an “open” environment, an “accommodating” environment, and a “proprietary” environment. I think everyone will understand that “proprietary” means that primary resource control would operate only for a single vendor. Vendor “X” does an SDN or NFV implementation that works fine with its own gear, but because the interfaces are licensed it won’t work, and can’t be made to work, with equipment from other vendors. Today with software layers, “proprietary” interfaces are usually private because they are licensed rather than exposed.
The difference between “open” and “accommodating” is a bit more subtle. To the extent that there are recognized standards that define the interfaces exercised for primary resource control, that’s clearly an “open” environment because anyone could implement the interfaces. I’d also argue that any environment where the interfaces are published and can be developed to without restriction is “open”, even if that framework isn’t sanctioned, but some will disagree with this point. The problem, they’d point out, is that if every vendor defined their own “open” interfaces it would be extremely unlikely that all vendors would support all choices, and the purpose of openness is to facilitate buyers’ interchanging components.
This is where “accommodating” comes in. If in our resource control process for SDN or NFV we define a set of interfaces that are completely specified and where coding guidance is provided to implement them, this interface set is certainly “open”. If we provide a mechanism for people to link into an SDN or NFV process but don’t define a specific interface, we’re accommodating to vendors. An example of this would be a framework to allow vendors to pull resource status from a global i2aex-like repository and deliver it in any format they like. There is no specific “interface” to open up here, but there is a mechanism to accommodate all interfaces.
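To make the “accommodating” idea concrete, here’s a minimal sketch in Python, with all names invented for illustration: a central repository holds raw, format-neutral resource status, and each consumer supplies its own adapter to render that status however it likes. The repository itself defines no wire interface; it only accommodates them.

```python
import json
import xml.etree.ElementTree as ET

class ResourceRepository:
    """Hypothetical i2aex-like store: raw status records keyed by resource name."""
    def __init__(self):
        self._status = {}

    def update(self, resource, **attrs):
        # Merge new attributes into the resource's status record.
        self._status.setdefault(resource, {}).update(attrs)

    def query(self, resource):
        # Return format-neutral data; adapters decide the presentation.
        return dict(self._status[resource])

def to_json(record):
    """One vendor's choice of delivery format: JSON."""
    return json.dumps(record, sort_keys=True)

def to_xml(record):
    """Another vendor's choice: the same status rendered as XML."""
    root = ET.Element("resource")
    for key, value in sorted(record.items()):
        ET.SubElement(root, key).text = str(value)
    return ET.tostring(root, encoding="unicode")

repo = ResourceRepository()
repo.update("optical-1", state="up", utilization=42)

print(to_json(repo.query("optical-1")))
print(to_xml(repo.query("optical-1")))
```

Neither adapter is privileged; a third vendor could add its own renderer without touching the repository, which is the sense in which the mechanism accommodates all interfaces without specifying one.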
Let’s look at this through an example. In theory, one could propose to control opaque TDM or optical flows using OpenFlow, and in fact there are a number of suggestions out there on how to accomplish that. IMHO it’s a dumb idea because packet-forwarding rule protocols don’t make sense where you’re not dealing with packets. Suppose that instead we created a simple topology description language (we have several: NED, NET, NML, Candela…) and we expressed a new route in such a language, using some simple XML schema. We have a data model but no interface at all.
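As a sketch of what such a data model might look like, here’s a route expressed in a made-up XML schema (element and attribute names are invented for the example, not drawn from any of the languages above). The point is that this is pure data: nothing about the model implies an interface or protocol.

```python
import xml.etree.ElementTree as ET

def build_route(route_id, hops):
    """Describe a new route as an XML element tree: one segment per hop."""
    route = ET.Element("route", id=route_id)
    for node_a, node_b in hops:
        ET.SubElement(route, "segment", a=node_a, b=node_b)
    return route

# A two-hop optical route from NodeA to NodeC via NodeB.
route = build_route("lambda-route-1", [("NodeA", "NodeB"), ("NodeB", "NodeC")])
print(ET.tostring(route, encoding="unicode"))
```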
Now suppose we support passing the data from the equivalent of a “northbound application” to the equivalent of the OpenFlow controller, where it’s decoded into the necessary commands to alter optical/TDM paths. If we specify an interface for that exchange that’s fully described and has no licensed components, it’s an “open” interface. If we express no specific interface at all but just say that the data model can be used to support one, we have an “accommodating” interface.
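The receiving side of that exchange can be sketched too, again with invented names: the controller-equivalent takes the route data model (here as an XML string, though any transport could carry it) and decodes it into the low-level commands that would alter the optical/TDM paths. The command syntax is hypothetical.

```python
import xml.etree.ElementTree as ET

def decode_route(xml_text):
    """Translate a route data model into per-segment path-setup commands."""
    route = ET.fromstring(xml_text)
    commands = []
    for seg in route.findall("segment"):
        # One hypothetical provisioning command per segment of the route.
        commands.append(
            "SET-PATH {} {} -> {}".format(route.get("id"), seg.get("a"), seg.get("b"))
        )
    return commands

# The data model as it might arrive from a "northbound application".
model = ('<route id="lambda-route-1">'
         '<segment a="NodeA" b="NodeB"/><segment a="NodeB" b="NodeC"/></route>')
for cmd in decode_route(model):
    print(cmd)
```

If the exchange carrying `model` were fully specified and unencumbered, it would be an “open” interface; if only the data model were published, the arrangement would be “accommodating”.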
My point here is that we need to be thinking about software processes in software terms. I think that “open” interfaces in software are those that can be implemented freely, using accepted techniques for structuring information (XML, for example) and transporting information through networks (TCP, HTTP, XMPP, AMQP, whatever). I think “standard” interfaces are important as basic definitions of functional exchange, but hard definitions lock in fixed structures. In the current state of SDN and NFV it may be that flexibility and agility are more important.
We likely have the standards we need for both SDN and NFV interfaces in place, because we have standards that can be used to carry the needed information already defined—multiple definitions in most cases, in fact. Where we have to worry about openness is in how providers of SDN or NFV actually expose this stuff, and it comes down not so much to what they implement but what they permit, what they “accommodate”. I think that for SDN and NFV there are two simple principles we should adopt. First, the information/data model used to hold resource and service order information should be accommodating to any convenient interface, which means it should not have any proprietary restrictions on accessing the data itself. Second, the interfaces that are exposed should be fully published and support development without licensing restrictions.
This doesn’t mean that functionality that creates a data model or an interface can’t be commercial, but it does mean that a completely open process for accessing the data and the exposed interfaces is provided. That’s “open” in my book.