Assessing the REAL Risks to SDN

There's been some talk recently about the risks to SDN, with the focus being that vendors will push their own proprietary visions and create something that isn't interoperable.  There's a principle in risk management that says it's not helpful to attempt to mitigate risks that fall below the level of risk you're already accepting.  That's a good guideline for SDN adoption, I think, because when you apply it you find that what seemed risky before isn't so risky.  You also find that there are major risks you hadn't considered.

Let's start with the most frequently discussed risk: that SDN becomes another proprietary pond.  The concept of "openness" is best applied to interfaces and protocols, meaning that an "open" architecture is one that relies on published/standardized interfaces and protocols.  Is SDN at risk of being less open?  It's hard to see how.  Networks already expose a bewildering array of interfaces and protocols, and many of them are non-standard today.  For example, while most of the protocols used in IP networks are standardized and thus open and interoperable, most of the INTERFACES are management APIs that are proprietary.

Will SDN interfaces be different?  Vendors today provide proprietary interfaces to their own devices because their devices are built to be different—to be differentiated.  Whether device management in an SDN world can be different will depend on whether SDN devices are standardized.  What I’ve called the “centralized” model of SDN could use OpenFlow to control forwarding, but there are no adequate device management standards for an OpenFlow device today.  If you’re running OpenFlow today you’re probably managing the devices with the same vendor-specific tools, because you’re probably implementing OpenFlow on traditional switches and routers.  SDN doesn’t add to risk here; it’s the same risk.
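To make that boundary concrete, here's a minimal sketch, purely illustrative and assuming the open-source Ryu controller framework with OpenFlow 1.3 (the class and rule are my own example).  It shows what OpenFlow does standardize, forwarding behavior, while the things it doesn't cover stay with vendor-specific management tools.

```python
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3


class FlowOnlyController(app_manager.RyuApp):
    """Sketch: OpenFlow lets a controller program forwarding, nothing more."""
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def switch_features_handler(self, ev):
        dp = ev.msg.datapath
        ofp, parser = dp.ofproto, dp.ofproto_parser

        # Standardized part: install a table-miss rule that sends unmatched
        # packets to the controller.
        match = parser.OFPMatch()
        actions = [parser.OFPActionOutput(ofp.OFPP_CONTROLLER,
                                          ofp.OFPCML_NO_BUFFER)]
        inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS, actions)]
        dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=0,
                                      match=match, instructions=inst))

        # NOT covered by OpenFlow: firmware upgrades, interface provisioning,
        # inventory, environmental telemetry. Those still come from whatever
        # vendor-specific management interface the box already had.
```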

You could argue (and some will) that SDN creates a new level of interfaces, the interfaces designed for software control.  Even there I disagree.  There are two relevant sources of software control for networking: the Quantum interface of OpenStack and the generalized set of DevOps tools used as part of application lifecycle management in general, and increasingly for the cloud.  Quantum has a two-level plug-in model, where a general tool is augmented with a set of vendor-specific interfaces and logic below.  We have support for this model now, it works for non-SDN networks, and it would work for SDN too.
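As a simplified sketch of that two-level pattern (the class and method names here are hypothetical illustrations, not the actual Quantum code), the cloud stack codes to a generic networking contract and a vendor plug-in supplies the device-facing logic underneath:

```python
from abc import ABC, abstractmethod


class NetworkPlugin(ABC):
    """Generic upper level: what the cloud stack asks of any network."""

    @abstractmethod
    def create_network(self, tenant_id: str, name: str) -> str: ...

    @abstractmethod
    def attach_port(self, network_id: str, vm_id: str) -> None: ...


class VendorXPlugin(NetworkPlugin):
    """Vendor-specific lower level: translates generic calls into whatever
    proprietary API or protocol Vendor X's gear actually speaks."""

    def create_network(self, tenant_id: str, name: str) -> str:
        # e.g. allocate a VLAN or overlay segment via the vendor's controller
        return f"vendorx-net-{tenant_id}-{name}"

    def attach_port(self, network_id: str, vm_id: str) -> None:
        # e.g. push a port binding through the vendor's management API
        print(f"binding {vm_id} to {network_id} via Vendor X API")


# The cloud stack only ever sees NetworkPlugin; swapping vendors, or swapping
# in an SDN controller, doesn't change the upper-level interface.
plugin: NetworkPlugin = VendorXPlugin()
net = plugin.create_network("tenant-42", "web-tier")
plugin.attach_port(net, "vm-0001")
```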

At the protocol level there is some risk of Balkanization, arising from the fact that consensus processes like standardization inevitably lag market requirements these days.  In part that's because vendors probably work to rig the process for delay so they can differentiate at will, but I've worked enough with standards groups to know that delay is institutional there; you don't need deliberate obstruction.  OpenFlow doesn't address management.  Given that, vendors will HAVE to find their own solutions, because it's unrealistic to assume buyers will wait until standards are completely mature (most won't even live that long!).

So are there no risks?  There are risks aplenty, but they're at a higher level than people talk about, and they're potentially more serious and insidious than any of the ones typically listed.

The primary risk is MISSION INCOMPATIBILITY WITH IMPLEMENTATIONS.  SDN is an architecture, and like all architectures it balances benefits against costs and trade-offs.  Cars and trucks have common components but different missions, and if you get one when you needed the other, no amount of standardized parts is going to take you back to the right choice.  I've pointed out that there are three widely accepted "SDN models": the virtual-network-overlay Nicira model, the purist/centralized OpenFlow model, and the distributed/evolutionary IETF-favored model.  All of these produce what buyers call SDN, but none of them produce the same capability set and trade-offs the others do.  What's the point of standardizing at the protocol/interface level when the models aren't equivalent in how the architecture is applied?

The second real SDN risk is in LACK OF CONFORMANCE WITH CLOUD EVOLUTION.  The cloud is what everyone agrees is driving SDN, but what does that mean in terms of where SDN is going?  It means we need to know where the cloud is going.  We have two trends in “the cloud” that are critical:

  • Applications of the future will be written on the presumption of hybrid cloud hosting.  But how?  What is it that makes an application a native cloud app?  If we don't know that, then we don't know whether SDN is tracking cloud progress, and if SDN isn't tracking cloud progress, it's disconnecting from its primary driver.  The cloud is the focus of most of today's SDN attention, so we need to know where the cloud is going to know where SDN has to take us.
  • Services of the future will be built on network functions virtualization.  Like it or not, intentionally or not, network operators are building in NFV a model of distributed functionality, a "service layer" that defines network services in terms of basic devices and complex hosted/orchestrated elements.  Some operators (and vendors) are already looking at how orchestrated feature requirements influence connections and traffic flows; a simple sketch of that relationship follows this list.
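Here's that sketch.  It's a hypothetical service descriptor, not any operator's actual NFV model; the point is simply that once a service is defined as a mix of devices and hosted features, the orchestrator's placement decisions themselves generate the connection and flow requirements the SDN layer has to satisfy.

```python
# A hypothetical NFV-style service description: devices plus hosted features,
# and the chain they must be connected in. Where each feature is placed
# determines which traffic flows the SDN layer must be asked to set up.
service = {
    "name": "business-internet",
    "features": [
        {"id": "cpe", "type": "device", "site": "customer-edge"},
        {"id": "firewall", "type": "hosted", "site": "metro-dc-1"},
        {"id": "nat", "type": "hosted", "site": "metro-dc-1"},
    ],
    "chain": ["cpe", "firewall", "nat"],  # required traffic path
}


def flows_needed(svc: dict) -> list[tuple[str, str]]:
    """Derive the inter-site flows the SDN layer must provision,
    given where the orchestrator has placed each feature."""
    site = {f["id"]: f["site"] for f in svc["features"]}
    flows = []
    for a, b in zip(svc["chain"], svc["chain"][1:]):
        if site[a] != site[b]:  # crossing sites implies a wide-area flow
            flows.append((site[a], site[b]))
    return flows


print(flows_needed(service))  # [('customer-edge', 'metro-dc-1')]
```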

If vendors articulated a full SDN architecture, something that covered all of the functional zones of SDN I've outlined in my SDN tutorials on YouTube, then we could determine from their positioning just how they address these risks.  Absent such a vision, we have problems with SDN at a higher layer, not down where the focus of compatibility and interoperability is today.  We should be holding vendors accountable for addressing these functional zones in detail, and we are not.  THAT's the big risk of SDN.

 
