The world of SDN continues to evolve, and as is usually the case, many of the evolutions have real utility. The challenge remains conceptualizing a flexible new network framework that exploits what SDN can do and, at an even more basic level, provides a framework by which the different SDN models can be assessed.
One of the most potentially useful announcements came from HP, who say they want to build an SDN ecosystem by providing an SDN app store and a developer environment complete with a toolkit and a validation simulator. This is built on top of HP’s SDN controller, of course, but it’s arguably the first framework designed to promote a true SDN ecosystem.
I don’t have access to the SDK for this yet; the URL provided for the developer center is broken until November when the tools arrive. As a result I can’t say what the inherent strengths and limitations of the framework are. Obviously it’s disappointing that the program doesn’t have the pieces in place, but it’s not unreasonable. I do think that HP should publish at least the API guide in the open, though. People need time to assess the tools and their potential before they commit to an ecosystem, particularly one as potentially complex as SDN.
The challenge with any developer ecosystem is the level at which the developers are expected to function. OpenFlow, as I’ve said before, is simply a protocol to manipulate switch forwarding tables. To presume that developers would be building services by pushing per-switch commands directly is to presume anarchy, so HP has to be providing higher-level functionality that lets programmers manipulate routes or flows and not tables and switches. Even there, a basic challenge of SDN is that applications that can manipulate switches, even indirectly, can create serious security and instability flaws. Logically there have to be two levels of SDN: one that lets applications control basic connectivity on a “domain” basis, and another that lets infrastructure providers manage QoS, availability, and so forth. How that gets done is critical to any ecosystem, IMHO, and I’d sure like to see HP document their model here.
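To make the two-level idea concrete, here’s a minimal sketch of how the separation might look in code. Everything here is my own illustration, not HP’s API or any real controller: the class and method names are assumptions, and a real controller would compile routes into per-switch forwarding entries rather than just recording them.

```python
# Hypothetical sketch of a two-level SDN control model: applications
# request connectivity within their domain, while only the trusted
# infrastructure layer may touch QoS or device-level state. All names
# are illustrative assumptions, not any vendor's actual API.

class InfrastructureController:
    """Trusted layer: owns device state, QoS, and availability policy."""
    def __init__(self):
        self.flows = []

    def install_route(self, src, dst, qos_class):
        # A real controller would push per-switch forwarding entries
        # here; this sketch just records the route it would install.
        self.flows.append((src, dst, qos_class))


class ApplicationController:
    """Untrusted layer: apps manipulate routes and flows, never switches."""
    def __init__(self, infra, domain_hosts):
        self._infra = infra
        self._domain = set(domain_hosts)

    def connect(self, src, dst):
        # Enforce the domain boundary before delegating downward, so an
        # application cannot reach outside its assigned connectivity scope.
        if src not in self._domain or dst not in self._domain:
            raise PermissionError("endpoint outside application domain")
        # QoS is decided by infrastructure policy, not by the application.
        self._infra.install_route(src, dst, qos_class="best-effort")


infra = InfrastructureController()
app = ApplicationController(infra, domain_hosts={"10.0.0.1", "10.0.0.2"})
app.connect("10.0.0.1", "10.0.0.2")  # allowed: both endpoints in the domain
```

The point of the split is that the application layer can only express intent (“connect A and B”), while anything that could destabilize the shared infrastructure stays behind the lower interface.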
Another SDN development is Version 1.4 of OpenFlow, which enhances the flexibility of OpenFlow considerably but also raises some questions. The new version has features so different from those of the previous version that it will be essential for switches and controllers to know whether they’re running the same version. That sort of change is always hard to make because “old” software rarely prepares for “new” functionality. It’s also virtually certain that some of the features of the new OpenFlow will have to be exposed via changes in the controller APIs, which means that applications that run on top of controllers may also have to be changed. This collides with the notion of building ecosystems, since nothing aggravates a developer like having the platform change underneath.
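The “do we run the same thing” problem is handled in OpenFlow by the HELLO exchange: each side advertises what it supports (newer versions add a version bitmap), and the session runs at the highest version both sides share, or fails. A rough sketch of that logic, using the wire-encoding version numbers (0x04 for OpenFlow 1.3, 0x05 for 1.4):

```python
# Minimal sketch of OpenFlow-style version negotiation. Real devices
# exchange HELLO messages and, on a mismatch with no common version,
# answer with an error and drop the session; this models only the
# selection logic, not the wire protocol.

def negotiate(controller_versions, switch_versions):
    """Return the highest version both sides support, or None."""
    common = set(controller_versions) & set(switch_versions)
    if not common:
        return None  # no shared version: session cannot be established
    return max(common)


# A 1.4-capable controller talking to a 1.3-only switch falls back:
assert negotiate({0x04, 0x05}, {0x04}) == 0x04
# No shared version means no session at all:
assert negotiate({0x05}, {0x01}) is None
```

Falling back keeps the session alive, but it also means the controller (and everything above it) must cope with a switch that lacks the 1.4 features, which is exactly where the API-change pain lands on developers.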
Still, it’s pretty obvious that SDN is growing up. Not surprisingly, players like Cisco and rival Huawei are promoting more SDN-ready technology, perhaps even starting to build things that go beyond exploiting SDN in a limited way toward actually depending on it to fully access features and capabilities. We’re also hearing about SDN layers a bit, but in what I think is an unfortunate context. We hear about “data center”, “carrier”, or “transport” SDN, and I think that this division blurs some pretty significant boundaries and issues.
At the top of the network, where applications live, the notion of software-defining networking is fairly logical. What you want to do is to allow for the creation of new service models (connectivity control based on something other than legacy L2/L3 principles; see my blog yesterday) and at the same time support the notion of multi-tenancy since applications are for users and there’s a load of users to support. As you get deeper, though, you are now supporting not an application but a community. It’s always been my view that something like OpenFlow, designed for specific forwarding control, gets more risky as you go down the stack. Further, at some point, you’re really dealing with routes at the transport level, even TDM or optical paths that don’t expose packet headers and aren’t forwarded as packets but as a unit. Here we have both a technical and a functional/strategic disconnect with classic OpenFlow.
The OSI model has layers, and I suspect that the SDN model will need them for the same reason, which is that you have to divide the mission of networking up into functional zones to accommodate the difference between network services as applications see them, and network services as seen by the various devices that actually move information. We’re not there yet on what the layers might be, and arguably there’s a real value in “flattening” the OSI layers down to something more manageable in number and more logical in mission. We aren’t going to harmonize these goals if we never have real discussions on the topic, though, and we’re not having them now.
We also need to understand how SDN and NFV relate, and how both SDN and NFV relate to the cloud. If operators are going to host a bunch of centralized SDN functionality or a bunch of virtual functions, it seems to me that they’d elect to use proven cloud technology to do that. How does proven cloud technology get applied, though? SDN supports service models that cloud networking frameworks like OpenStack’s Neutron don’t support, because SDN in theory supports any arbitrary connection model. How do we use the cloud to distribute “centralized” SDN control so it’s reliable and can be exercised across a global network? How does NFV work in supporting both SDN centralized technology and its own function mission, but in the cloud? Can it also deploy cloud app components, and build services from both apps and network functions? There are a lot of questions to consider here, and a lot of opportunity for those who can answer them correctly.
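The “arbitrary connection model” point is worth a small illustration. A legacy L2/L3 model of the kind Neutron exposes says that any two hosts on the same network can talk; an SDN model can instead make connectivity exactly the set of flows you permit. The classes below are my own toy abstractions, not OpenStack APIs:

```python
# Sketch contrasting a legacy subnet connectivity model with an
# explicit-flow SDN model. Both classes are illustrative assumptions.

class SubnetModel:
    """Legacy L2/L3 model: same network implies full connectivity."""
    def __init__(self, networks):
        # networks: dict mapping network name -> set of host names
        self._host_net = {h: n for n, hosts in networks.items() for h in hosts}

    def can_talk(self, a, b):
        return self._host_net.get(a) == self._host_net.get(b)


class ExplicitFlowModel:
    """SDN model: connectivity is exactly the set of permitted flows."""
    def __init__(self, allowed_pairs):
        self._allowed = {frozenset(p) for p in allowed_pairs}

    def can_talk(self, a, b):
        return frozenset((a, b)) in self._allowed


subnet = SubnetModel({"tenant-net": {"web", "db", "worker"}})
flows = ExplicitFlowModel([("web", "db")])

# The subnet model cannot express "web may reach db, but worker may not":
assert subnet.can_talk("worker", "db")       # implicitly allowed
assert not flows.can_talk("worker", "db")    # explicitly excluded
assert flows.can_talk("web", "db")
```

The gap between those two models is one concrete way of framing the question of how cloud tooling gets applied to SDN: the tooling we have describes the first model, and the services SDN promises live in the second.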