The SDN game never ends, possibly because nobody wants to diss a good market hype wave while it still has momentum, and possibly because there’s still room to do something useful given the rather vague utility of some current strategies. In any case, PLUMgrid has joined the fray with an offering they call “Virtual Network Infrastructure”, a model that among other things drives home some changes in the SDN market model.
At a high level, we’ve always had “overlay SDN” and “infrastructure SDN”, meaning a division of SDN models between a connectivity-management mission (overlay) and a traffic-management mission (infrastructure). A third model, sort of, can be created using virtual switches (OVS), and you can combine virtual switching and tunnel overlays to create a more flexible and agile data center. All these models have to be mapped in some way to the cloud, so I think the most useful way of looking at PLUMgrid is from an OpenStack Quantum (now called “Neutron”) perspective.
Neutron builds networks by converting “models” into connectivity and functionality. For example, the classic model of a Neutron network that hosts application elements is a VLAN combined with functions like DHCP and a default gateway. What PLUMgrid has done is translate Neutron fairly directly into connectivity and functionality, using a combination of network tunnels and hosted virtual functions. Virtual switching and routing are provided where the model has explicit bridging/routing functions; otherwise there is only connectivity plus higher-layer functions. In some ways the connectivity model is similar to Nicira’s, and OpenFlow is not required because forwarding is implicit in the implementation of the model; connections are tunnel-supported rather than switching-derived. There’s a rich developer kit provided with PLUMgrid, as well as management interfaces for most popular cloud platforms.
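To make that concrete, here’s a minimal sketch of the classic Neutron model just described (a tenant network, a DHCP-enabled subnet, and a router supplying the default gateway) expressed with today’s openstacksdk. The cloud name “mycloud”, the external network name “public”, and the addressing are assumptions for illustration; PLUMgrid’s plugin would realize the same model with tunnels and hosted virtual functions rather than VLANs.

```python
# Sketch: the "classic" Neutron model -- network + DHCP subnet + gateway router.
import openstack

conn = openstack.connect(cloud="mycloud")  # clouds.yaml entry (assumed)

# Logical network the application elements attach to.
net = conn.network.create_network(name="app-net")

# Subnet with DHCP enabled; Neutron (or the plugin behind it) supplies the
# DHCP and gateway functions the model implies.
subnet = conn.network.create_subnet(
    network_id=net.id,
    ip_version=4,
    cidr="10.10.0.0/24",
    gateway_ip="10.10.0.1",
    enable_dhcp=True,
)

# Router gives the network its default gateway to the outside world.
ext = conn.network.find_network("public")  # external network (assumed to exist)
router = conn.network.create_router(
    name="app-router",
    external_gateway_info={"network_id": ext.id},
)
conn.network.add_interface_to_router(router, subnet_id=subnet.id)
```

The point is that the model says nothing about how it gets realized underneath, and that’s exactly the hook PLUMgrid exploits.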
So how does this relate to other SDN models? Arguably, PLUMgrid is a more mature and perhaps better-thought-out version of Nicira. It’s distinctly cloud-data-center in its targeting, it’s (at least so far) an in-center model and not an end-to-end model, and it’s a true overlay SDN rather than a virtual-switch hybrid. It doesn’t attempt to align the overlay SDN vision with physical infrastructure so much as simply use whatever infrastructure is available. That means it can run over anything, even a combination of things like Ethernet and IP, which makes it easy to build hybrid networks that extend from the data center into a public cloud. Where some virtual SDN models are more “physical” (Alcatel-Lucent/Nuage, Juniper/Contrail, Brocade/Vyatta), PLUMgrid is solidly virtual.
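To make the “runs over anything” point concrete, here’s a minimal sketch of how a tunnel overlay rides on plain IP, using Open vSwitch as a convenient tunnel endpoint and driving it from Python. The bridge and port names, the remote address, and the VNI are illustrative assumptions (not PLUMgrid’s actual mechanism), and it assumes a host with Open vSwitch installed and root privileges.

```python
# Sketch: virtual switch plus VXLAN tunnel -- the underlay only sees IP packets.
import subprocess

def sh(*args: str) -> None:
    """Run an ovs-vsctl command and fail loudly if it errors."""
    subprocess.run(args, check=True)

def build_overlay_port(bridge: str, port: str, remote_ip: str, vni: int) -> None:
    # Create (or reuse) the integration bridge that local VMs attach to.
    sh("ovs-vsctl", "--may-exist", "add-br", bridge)
    # Add a VXLAN tunnel port; traffic to the remote hypervisor is
    # encapsulated here, so the physical network carries ordinary IP.
    sh("ovs-vsctl", "--may-exist", "add-port", bridge, port,
       "--", "set", "interface", port, "type=vxlan",
       f"options:remote_ip={remote_ip}", f"options:key={vni}")

if __name__ == "__main__":
    build_overlay_port("br-int", "vxlan0", "192.0.2.12", 5001)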
What PLUMgrid makes very clear, I think, is that there is a lot of potential value to visualizing SDN as a two-layer process. At the top there’s virtual/overlay networking that has to be very agile and flexible to conform to software needs. Below that, there’s physical or infrastructure SDN, where software control is likely exercised more at the policy level than by managing specific connections. Separating these functions is good for startups because it keeps them out of the hardware business, and it lets them focus on the cloud.
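As a purely conceptual toy (no vendor’s API implied), the two-layer split might be pictured like this: the overlay layer churns through per-connection changes at software speed, while the only thing it asks of the infrastructure layer is coarse, policy-level treatment. All class, object, and segment names here are invented for illustration.

```python
# Toy model of the overlay/infrastructure split described above.
from dataclasses import dataclass

@dataclass
class Policy:
    traffic_class: str      # e.g. "best-effort" or "low-latency"
    bandwidth_mbps: int     # aggregate expectation, not per-flow setup

class InfrastructureSDN:
    """Physical/underlay side: accepts policies, not per-connection state."""
    def apply_policy(self, segment: str, policy: Policy) -> None:
        print(f"[underlay] {segment}: class={policy.traffic_class}, "
              f"bw={policy.bandwidth_mbps} Mbps")

class OverlaySDN:
    """Virtual side: agile per-connection tunnels over whatever sits below."""
    def __init__(self, underlay: InfrastructureSDN) -> None:
        self.underlay = underlay
        self.tunnels: list[tuple[str, str]] = []

    def connect(self, a: str, b: str) -> None:
        # The overlay changes constantly; the underlay is untouched here.
        self.tunnels.append((a, b))

    def request_policy(self, segment: str, policy: Policy) -> None:
        # The only coupling point: a coarse, policy-level ask of the underlay.
        self.underlay.apply_policy(segment, policy)

overlay = OverlaySDN(InfrastructureSDN())
overlay.connect("vm-web-1", "vm-db-1")
overlay.request_policy("dc-fabric", Policy("best-effort", 1000))
```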
The two specific questions PLUMgrid raises are “Can you really make the cloud network a ship in the night relative to other traffic?” and “Can you build a useful network that doesn’t really extend end to end?” I think the answer to both is “Under some conditions!” but I also think the questions are related. If you confine the mission of overlay SDN to the data center, then the cost of presuming ample connectivity and capacity is limited and might even be offset by the management simplicity of a fabric model. Thus, overlay data-center SDN can be traffic-insensitive. As soon as you start transiting the WAN, though, you have to consider SLAs and QoS and all that stuff.
This is a good step for SDN because it makes clear that we really have a number of SDN models, each with its own optimum missions. We’ll likely have situations where any given model will do as well as any other, and situations where some models will do better, so buyers will need to align model and mission very carefully. Nobody in the SDN vendor space is likely to work very hard to make that easy, but some of this mission-utility picture may emerge from competition among vendors, particularly between overlay and infrastructure SDN providers. Infrastructure SDN can easily be made end-to-end and is typically justified in large part by traffic-management capability, so it stands to reason that the relationship between overlay SDN and traffic will emerge out of competition with the infrastructure side of the family. In any case, I think a debate here would be good for SDN, and it might even create a future where we’re less concerned about “OpenFlow” and centralism on the infrastructure side. That would sidestep the problem the SDN industry has wanted to dodge from the first: the northbound application elements that create the service model on top of OpenFlow.