One of the biggest issues I have with companies’ positioning is that they postulate a totally revolutionary impact for something that differs in no significant sense from the status quo. If a new technology is going to change the world of networking, don’t you think it should do something significantly different? Perhaps the “revolutionary technology” most impacted by this problem is NFV (and I’ve ranted about that before), but SDN has the same problem, and with perhaps less reason. It would be easy to make claims of an SDN revolution credible, even though most of them aren’t credible today.
Packet networking is all about packet forwarding. You can’t connect users if you can’t get them the traffic that’s addressed to them. In a very simple sense, Layer 2 and Layer 3 network technologies (Ethernet and IP) manage packet forwarding via three processes. The first is addressing, which appends a header onto a packet to identify where it’s supposed to go. The second is route determination, which uses a series of “adaptive discovery” exchanges among devices to determine both the connection topology of the network’s devices and the location of addressed users. The third is the forwarding process itself: how does a route get enforced by the collective forwarding behavior of the devices?
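To make that third process concrete, here’s a minimal Python sketch of per-device forwarding driven by a table of destination prefixes. The table entries and port names are invented for illustration; in a real network, route determination (OSPF, BGP, and so on) is what populates that table.

```python
# A minimal sketch (not any vendor's implementation) of the third process:
# per-device forwarding driven by a table of destination prefixes.
import ipaddress

# Hypothetical forwarding table: destination prefix -> output port.
# In a real router this table is populated by route determination;
# here it is simply hard-coded for illustration.
FORWARDING_TABLE = {
    ipaddress.ip_network("10.1.0.0/16"): "port-1",
    ipaddress.ip_network("10.2.0.0/16"): "port-2",
    ipaddress.ip_network("0.0.0.0/0"): "port-uplink",   # default route
}

def forward(dest_ip: str) -> str:
    """Longest-prefix match: pick the most specific entry that covers dest_ip."""
    dest = ipaddress.ip_address(dest_ip)
    matches = [(net, port) for net, port in FORWARDING_TABLE.items() if dest in net]
    best_net, best_port = max(matches, key=lambda m: m[0].prefixlen)
    return best_port

print(forward("10.2.33.7"))   # -> port-2
print(forward("192.0.2.1"))   # -> port-uplink (default)
```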
My opening principle says that SDN has to do something different in this process to make a difference in the market. The difference can’t be in addressing, or nothing designed for today’s network services would work with the new ones, which means the difference has to be in the forwarding process and/or route determination.
OpenFlow proposes to eliminate adaptive routing behavior by replacing it with centralized control of forwarding on a per-device basis. The devices’ forwarding tables are updated not as a result of adaptive discovery but by explicit commands from the SDN Controller. Two models of device-to-controller relationship are possible. In one, the controller has a master plan for routes and simply installs the correct forwarding entries according to that plan. The devices get all they need from the Controller when the network (or a device) is commissioned. The second model is a “stimulus” model where a device that receives a packet for which it has no forwarding instructions queries the SDN Controller for a “mother-may-I”.
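Here’s a small sketch of those two models, using invented class and message names rather than real OpenFlow messages (which would be FLOW_MOD and PACKET_IN, with much richer match structures). The point is only to show the difference between pushing a master plan at commissioning time and answering “mother-may-I” queries one at a time.

```python
# A hedged sketch of the two device-to-controller models.
# Class names and messages are hypothetical placeholders.

class Controller:
    """Holds the 'master plan': which destination goes out which port on each switch."""
    def __init__(self, plan):
        self.plan = plan  # {switch_id: {dest: out_port}}

    def install_all(self, switch):
        # Proactive model: push every rule when the device is commissioned.
        for dest, port in self.plan[switch.switch_id].items():
            switch.flow_table[dest] = port

    def packet_in(self, switch_id, dest):
        # Reactive ("mother-may-I") model: answer one query at a time.
        return self.plan[switch_id].get(dest)


class Switch:
    def __init__(self, switch_id, controller):
        self.switch_id = switch_id
        self.controller = controller
        self.flow_table = {}

    def handle_packet(self, dest):
        if dest not in self.flow_table:
            # No forwarding instruction: ask the controller (reactive model).
            port = self.controller.packet_in(self.switch_id, dest)
            if port is None:
                return "drop"
            self.flow_table[dest] = port   # cache the answer
        return self.flow_table[dest]


plan = {"s1": {"hostA": "port-1", "hostB": "port-2"}}
ctl = Controller(plan)
s1 = Switch("s1", ctl)

# Reactive: the first packet triggers a controller query, later ones hit the cache.
print(s1.handle_packet("hostB"))   # -> port-2 (after packet_in)
# Proactive: install everything up front instead.
ctl.install_all(s1)
```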
It is possible to eliminate adaptive behavior through this process. An SDN Controller can define failure modes and quickly install rules to restructure routes around something that’s gone bad. Security could also be better in this situation, because you could hypothesize a device that passes every user request for packet handling to a controller for validation and instructions. That would mean no connectivity to anything exists until the controller validates the relationship being requested, which could be a pretty significant behavioral twist in itself.
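A sketch of that default-deny idea, assuming a hypothetical whitelist of permitted source/destination pairs; the whitelist and message format are illustrative assumptions, not an OpenFlow feature.

```python
# A sketch of default-deny connectivity: nothing is forwarded until the
# controller validates the requested relationship.

PERMITTED_FLOWS = {("userA", "mail-server"), ("userA", "web-proxy")}

def validate_and_instruct(src, dst):
    """Reactive controller logic: permit only pre-authorized pairs."""
    if (src, dst) in PERMITTED_FLOWS:
        return {"action": "forward", "src": src, "dst": dst}
    # Anything not explicitly validated is dropped at the first hop.
    return {"action": "drop", "src": src, "dst": dst}

print(validate_and_instruct("userA", "mail-server"))  # forwarded
print(validate_and_instruct("userB", "mail-server"))  # dropped: never validated
```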
The difficulty that an SDN revolution based on the central model brings is the classic difficulty of central models, which is the performance and availability of the controller. If the controller goes south, you have the network frozen in time. If the controller is overwhelmed with requests, you have a network whose connectivity lags more and more behind current demands. Logically you’d need to establish practical control zones in SDN and federate controller zones to divide up responsibility. There are a bunch of ways this could be done, and some advocate pressing protocols like BGP into service. I advocate defining the ideal solution to the problem and then seeing if current protocols like BGP can serve. If not, you do something new.
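Here’s a rough sketch of what I mean by control zones: each zone controller owns the routes inside its own zone, and a thin federation layer decides only which zone a destination lives in and where the hand-off happens. The zone names and hand-off scheme are assumptions; this is the inter-domain role some would press BGP into serving.

```python
# A sketch of federated control zones. Each controller owns its zone;
# the federation layer only resolves which zone a destination belongs to.

ZONE_OF = {"hostA": "zone-east", "hostB": "zone-west"}

class ZoneController:
    def __init__(self, zone):
        self.zone = zone

    def route_within_zone(self, dest):
        return f"{self.zone}: local path to {dest}"

class Federation:
    def __init__(self, controllers):
        self.controllers = controllers   # {zone_name: ZoneController}

    def route(self, src_zone, dest):
        dest_zone = ZONE_OF[dest]
        if dest_zone == src_zone:
            return self.controllers[src_zone].route_within_zone(dest)
        # Inter-zone: hand off at the zone boundary; the remote controller
        # owns the rest of the path (the role BGP plays between domains today).
        return (f"{src_zone}: exit toward {dest_zone}, then "
                + self.controllers[dest_zone].route_within_zone(dest))

fed = Federation({z: ZoneController(z) for z in ("zone-east", "zone-west")})
print(fed.route("zone-east", "hostB"))
```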
The packet forwarding element of SDN is where the real potential value lies. Even today, with SDN (in my view, gratuitously) limited to MAC/IP/port address recognition, you can envision forwarding structures that don’t map to any classic IP or Ethernet service. Some of them could be very useful.
Example: the security angle I just mentioned. Suppose we designed an SDN network as a set of three layers: edge, metro, core. Suppose we divided all of these layers into control zones that made each zone look like a kind of “virtual OpenFlow switch”. In the metro and core, we’d focus on providing stable performance and availability between any metro zone and any other, either directly or via the core. In the edge zone we’d focus on mapping user flows to forwarding rules for the traffic we wanted to carry, meaning explicit connectivity where permitted. The metro and core layers would be operated in preconfigured-route mode and the edge in stimulus mode. All of this is within the capabilities of OpenFlow today.
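A declarative sketch of that three-layer design, with invented zone names; the point is simply that both modes are expressible with OpenFlow as it stands.

```python
# A sketch of the three-layer design: which layers run in preconfigured-route
# mode and which in stimulus mode. Zone names are invented for illustration.

TOPOLOGY = {
    "edge":  {"mode": "stimulus",      "zones": ["edge-1", "edge-2"]},
    "metro": {"mode": "preconfigured", "zones": ["metro-1", "metro-2"]},
    "core":  {"mode": "preconfigured", "zones": ["core-1"]},
}

def handle_unmatched_packet(layer):
    """What a zone does with a packet that matches no installed rule."""
    if TOPOLOGY[layer]["mode"] == "stimulus":
        return "send PACKET_IN to controller for validation"
    # Preconfigured layers should never see unmatched traffic; drop it.
    return "drop"

for layer in TOPOLOGY:
    print(layer, "->", handle_unmatched_packet(layer))
```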
Another thing we could do with OpenFlow today is reconfigure networks in response to traffic, either based on time of day or on traffic data made available to the controller. OpenFlow networks aren’t bound by Ethernet’s old bridging restrictions or by router adjacencies in IP; you can have as many paths as you like and engineer every packet’s path if that’s what’s needed (obviously that would take a lot of controller horsepower, but the point is that it could be done).
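As a sketch, a controller could hold two preinstalled path sets and switch between them based on the clock or on a load reading pushed up from the network; the thresholds and path names below are assumptions for illustration.

```python
# A sketch of traffic- or time-driven reconfiguration: the controller picks a
# different preinstalled path set depending on the hour or on a load reading.
from datetime import datetime

PATH_SETS = {
    "business-hours": {"hostA->hostB": ["s1", "s4", "s7"]},        # short path
    "overnight":      {"hostA->hostB": ["s1", "s2", "s3", "s7"]},  # longer, cheaper path
}

def active_path_set(now=None, link_load=0.0):
    now = now or datetime.now()
    if link_load > 0.8:                      # traffic data pushed to the controller
        return PATH_SETS["overnight"]        # spill onto the longer path
    return PATH_SETS["business-hours"] if 8 <= now.hour < 20 else PATH_SETS["overnight"]

print(active_path_set(link_load=0.9))
```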
With some simple extensions we could do a lot more with OpenFlow SDN, and a bunch of these have already been proposed. Three very logical ones are support for more general DPI-based matching of flows to rules, enhancements to what can be done when a match occurs (especially packet tagging in the rule itself), and the use of “wild-card” specifications for matching. If you had these capabilities you could do a lot that standard networks don’t do well, or at all.
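Here’s what those three extensions might look like as a rule format, with a wildcard token, a payload-prefix (DPI-style) match, and a tagging action. The format is invented; real OpenFlow matching stops at the L2-L4 headers.

```python
# A sketch of the three extensions: deeper (payload) matching, richer actions
# such as tagging, and wildcarded fields. The rule format is hypothetical.

WILDCARD = "*"

RULES = [
    {"match": {"dst": WILDCARD, "port": 80, "payload": b"GET /video"},
     "actions": ["tag:bulk", "forward:port-3"]},
    {"match": {"dst": "10.1.0.5", "port": WILDCARD, "payload": WILDCARD},
     "actions": ["tag:trusted", "forward:port-1"]},
]

def field_matches(rule_val, pkt_val):
    if rule_val == WILDCARD:
        return True
    if isinstance(rule_val, bytes):            # DPI-style prefix match on payload
        return pkt_val.startswith(rule_val)
    return rule_val == pkt_val

def apply_rules(packet):
    for rule in RULES:
        if all(field_matches(v, packet[k]) for k, v in rule["match"].items()):
            return rule["actions"]
    return ["drop"]                            # default-deny when nothing matches

pkt = {"dst": "10.9.9.9", "port": 80, "payload": b"GET /video/clip.mp4"}
print(apply_rules(pkt))                        # -> ['tag:bulk', 'forward:port-3']
```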
One thing is intrinsic load-balancing. You could, at any point on any route, initiate a fork to divide traffic. That would let you “stripe” loads across multiple trunks (subject as always to the question of dealing with out-of-order arrivals). You could prioritize traffic based on deeper content issues, diving below the port level. You could implement one of the IP schemes for location/address separation. You could mingle L2/L3 header information, including addresses, to manage handling, and treat traffic differently depending not only on where it’s going but on where it came from. You could authenticate packets and tag them to reduce spoofing.
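As one sketch of the striping idea, hashing on the flow rather than on each packet keeps a given conversation on a single trunk, which is one way to sidestep the out-of-order problem; the trunk names are placeholders.

```python
# A sketch of "intrinsic load-balancing": a forwarding fork that stripes
# traffic across several trunks, choosing per flow rather than per packet.
import hashlib

TRUNKS = ["trunk-1", "trunk-2", "trunk-3"]

def pick_trunk(src, dst, src_port, dst_port):
    """Stable per-flow choice: the same 4-tuple always lands on the same trunk."""
    key = f"{src}:{src_port}->{dst}:{dst_port}".encode()
    h = int.from_bytes(hashlib.sha256(key).digest()[:4], "big")
    return TRUNKS[h % len(TRUNKS)]

print(pick_trunk("10.0.0.1", "10.2.0.9", 40512, 443))
print(pick_trunk("10.0.0.2", "10.2.0.9", 40513, 443))
```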
The point here is that there is no reason why an IP or Ethernet service has to behave traditionally, other than that the technology offers no practical alternative. What OpenFlow SDN could provide is totally elastic match-and-process rule handling. We could build routers the way we build software: by defining processes, defining triggers, and initiating the former based on the latter. And because the mechanism would be protocol-independent, it would never be obsolete. This is what OpenFlow and SDN should be, could be.
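That “match-and-process” vision reduces to something like this: triggers (match predicates) bound to processes (handlers), with no protocol baked into the engine itself. The predicates and handlers below are placeholders, not a proposed standard.

```python
# A sketch of "match-and-process" as software: an ordered list of
# (trigger, process) pairs standing in for a router's forwarding logic.

def is_multicast(pkt):        return pkt.get("dst", "").startswith("239.")
def is_expired(pkt):          return pkt.get("ttl", 64) <= 1

def replicate(pkt):           return f"replicate {pkt['dst']} to group members"
def send_time_exceeded(pkt):  return f"return ICMP time-exceeded to {pkt['src']}"
def default_forward(pkt):     return f"forward toward {pkt['dst']}"

# The "router" is just the pipeline definition; swap pairs to change behavior.
PIPELINE = [
    (is_expired, send_time_exceeded),
    (is_multicast, replicate),
    (lambda pkt: True, default_forward),
]

def handle(pkt):
    for trigger, process in PIPELINE:
        if trigger(pkt):
            return process(pkt)

print(handle({"src": "10.0.0.1", "dst": "239.1.1.1", "ttl": 12}))
print(handle({"src": "10.0.0.1", "dst": "10.2.0.9", "ttl": 1}))
```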
Why isn’t it? Some university researchers have proposed most if not all of the extensions I’ve mentioned here, and many of the applications. The challenge is turning all of this into product, and there the problem may be that the vendors aren’t interested in true revolution. VCs who used to fund stuff that was revolutionary now want to fund stuff that’s claimed to be revolutionary but doesn’t generate much cost, change, or risk—only “flips” of the startups themselves. I think the ONF should face up to the challenge of SDN revolution, but given who sponsors/funds the body that may be an unrealistic expectation on my part. If it is, we may wait a while for SDN to live up to its potential.