More Network Tea Leaves to Read

Guess what?  Optical companies are hot again.  Ciena reported better-than-expected earnings, Alcatel-Lucent says that people are pushing it for delivery of its new 400G stuff…it seems like the days of fiber transport slumps are gone.  They may be, and for two kind-of-orthogonal reasons.

If you think about the comments I got from EU network operators, related in yesterday’s blog, it won’t surprise you to hear that Wall Street is getting more excited about vendors who offer fiber optic transport.  In a future where the value of the network is really the value of the cloud, the role of the network is to deliver stuff cheaply.  That favors a network where more is spent on creating capacity than on managing connectivity.  Thus, there’s a demand-side motive for valuing fiber capacity.

Related to this is that the Street isn’t valuing vendor defenses against things like OTN.  Recent research notes have suggested that products that tie optics to routing in order to compete with a pure optical core are likely to underperform.  Some of that data comes from network operators, who say they want a future IP network core that is just a big optical mesh.  It’s this desired melding of optical core and agile, feature-aware edge that seems to be driving Ericsson’s vision of the router, a vision I’ve noted operators like.

The other driver for optics is the whole OpenFlow/SDN thing.  The problem with optics in a traditional OSI-modeled network is that the layers are supposed to be independent of each other, each offering a fixed service to the layer above and consuming service from the layer below.  That puts optics in the position of PVC pipe.  The SDN model, which allows a higher control process to push forwarding rules down onto devices, could support a more integrated vision of optical and electrical layer cooperation.
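To make that “push forwarding rules down” idea concrete, here’s a minimal Python sketch of a central control process installing a rule on a device.  Everything here is hypothetical (the Controller, Device, and FlowRule names aren’t any vendor’s or controller’s API); the point is that an action steering traffic onto an optical wavelength is exactly the kind of cross-layer instruction strict OSI layering resists.

```python
# Minimal sketch (hypothetical names, not any real controller API): a central
# control process pushing a forwarding rule down onto a device, OpenFlow-style.

from dataclasses import dataclass, field


@dataclass
class FlowRule:
    match: dict            # header fields to match, e.g. {"ipv4_dst": "10.0.0.0/24"}
    action: str            # what to do with matching traffic
    priority: int = 100


class Device:
    """Stand-in for a switch, router, or optical add/drop element."""

    def __init__(self, name: str):
        self.name = name
        self.flow_table = []

    def install(self, rule: FlowRule) -> None:
        # In a real deployment this would be an OpenFlow flow-mod or an
        # equivalent southbound message; here we just record the rule.
        self.flow_table.append(rule)
        print(f"{self.name}: installed {rule}")


class Controller:
    """The 'higher control process' that owns the forwarding decisions."""

    def push(self, device: Device, rule: FlowRule) -> None:
        device.install(rule)


if __name__ == "__main__":
    ctl = Controller()
    edge = Device("edge-router-1")
    # The action crosses layers: electrical forwarding onto an optical wavelength.
    ctl.push(edge, FlowRule(match={"ipv4_dst": "10.0.0.0/24"},
                            action="forward:optical-wavelength-7"))
```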

Speaking of SDNs, Cisco made a presentation to the financial industry on its SDN strategy.  Cisco’s own SDN definition (according to the presentation) “complements” the standard SDN definition, which Cisco says is primarily about decoupling the control and data planes of the network.  Actually, SDNs are about centralizing the “network intelligence and state”, as Cisco’s own citation of the standard SDN definition shows.  Cisco’s complementary definition is “a customizable framework to harness the entire value of the intelligent network offering openness, programmability and abstraction across multiple layers in an evolutionary manner. It offers a choice of protocols, industry standards, use-case based deployment models and integration experiences while laying the foundation for a dynamic feedback loop of user, session or application analytics through policy programming.”

Forgive me, Cisco, but I’m having a hard time pulling an SDN definition out of this, or understanding how this complements one.  But Cisco does have some valid points (if not cogent definitions) here if you dig a bit.  They talk about “evolution” of the SDN model, and that raises the question of how one actually evolves given the large installed base of network devices.  We could displace gear if there were a significant benefit to offset the cost, but the mere availability of cheaper technology doesn’t justify displacing gear that’s already been purchased.  Further, it’s not clear just how far SDN can go in networks.

My view is that the SDN model can be visualized as a pair of pyramids joined at the apex.  The top one is the collection of information resources on application connectivity needs, concentrating toward a point where this totality of knowledge can be centralized.  Absent a central collection of connectivity policies you can’t have the centralized “network intelligence and state” the definition demands.  From that point you move to the lower pyramid, which represents the mechanisms for issuing forwarding instructions based on those central policies.  OpenFlow is one protocol for doing that, but there are others that could be used, ranging from “provisioning” paths with something like MPLS (or one of its derivatives) via path computation elements, to simply policy-managing the session and admission-control aspects of something like IMS.
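As a thought experiment, here’s a small Python sketch of those two pyramids, with all names invented for illustration: a central store collects application connectivity policies (the upper pyramid), and pluggable emitters turn them into forwarding instructions through whichever mechanism fits, whether that’s OpenFlow, MPLS/PCE provisioning, or IMS-style session policy (the lower pyramid).

```python
# Sketch of the two-pyramid idea (all names hypothetical): the upper pyramid
# funnels application connectivity policies into one central store; the lower
# pyramid fans them back out through whatever control mechanism is available.

class PolicyStore:
    """Apex of the pyramids: the centralized 'network intelligence and state'."""

    def __init__(self):
        self.policies = []

    def add(self, policy: dict) -> None:
        self.policies.append(policy)


class OpenFlowEmitter:
    """Lower-pyramid option 1: impose forwarding rules directly."""
    def apply(self, policy: dict) -> None:
        print(f"flow-mod for {policy['app']}: match {policy['flows']}")


class PCEEmitter:
    """Lower-pyramid option 2: provision paths (MPLS-like) via path computation."""
    def apply(self, policy: dict) -> None:
        print(f"provision path for {policy['app']} within {policy['latency_ms']} ms")


class IMSPolicyEmitter:
    """Lower-pyramid option 3: policy-manage sessions and admission control."""
    def apply(self, policy: dict) -> None:
        print(f"admit/deny sessions for {policy['app']} per declared policy")


def realize(store: PolicyStore, emitters) -> None:
    """Issue forwarding instructions from the central policies; each emitter
    illustrates one possible lower-pyramid mechanism."""
    for policy in store.policies:
        for emitter in emitters:
            emitter.apply(policy)


if __name__ == "__main__":
    store = PolicyStore()
    store.add({"app": "video-cdn", "flows": {"dst_port": 443}, "latency_ms": 40})
    realize(store, [OpenFlowEmitter(), PCEEmitter(), IMSPolicyEmitter()])
```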

Cisco may have that kind of flexibility in mind.  The key slide in the presentation shows three circles, labeled (from top left, clockwise) “Policy”, “Analytics”, and “Network” (bottom).  A “Programmability” arrow goes from “Network” to “Policy”, an “Intelligence” arrow from “Network” to “Analytics”, and an “Orchestration” arrow from “Analytics” to “Policy”.  The insight here is that SDNs do need to know the state of the network to control forwarding effectively, and operators do want to mine value from their network investment.  The problem is that the slide doesn’t reflect any input from the applications.  You need to know what an application’s connection needs are to create a forwarding map; the only other option is to learn them dynamically, which isn’t very different from routing.  But if “Programmability” means imposing forwarding rules, and if we had “Policy” input from the applications in some way, the picture would be pretty nice.
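Here’s what that missing application input might buy you, in the simplest possible terms: if applications declare their connection needs (“Policy”) and analytics supply current network state, a controller can compute a forwarding map instead of having to learn it dynamically.  This is purely an illustrative sketch of the idea, not anything taken from Cisco’s slide.

```python
# Hypothetical sketch: combine application-declared connection needs ("Policy")
# with observed network state ("Analytics") to produce a forwarding map that
# "Programmability" could then impose on the devices.

def build_forwarding_map(app_needs: dict, network_state: dict) -> dict:
    """Pick a path for each application using declared needs and current load."""
    fwd_map = {}
    for app, need in app_needs.items():
        # Candidate paths with enough free capacity for this application.
        candidates = [p for p in network_state
                      if network_state[p]["capacity_free"] >= need["bandwidth"]]
        # Prefer the least-utilized candidate; None if nothing fits.
        fwd_map[app] = (min(candidates, key=lambda p: network_state[p]["utilization"])
                        if candidates else None)
    return fwd_map


if __name__ == "__main__":
    app_needs = {"voice": {"bandwidth": 1}, "backup": {"bandwidth": 40}}
    network_state = {
        "path-A": {"capacity_free": 10, "utilization": 0.2},
        "path-B": {"capacity_free": 100, "utilization": 0.6},
    }
    # voice lands on the lightly loaded path-A; backup needs path-B's capacity.
    print(build_forwarding_map(app_needs, network_state))
```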

One thing seems very clear, not only from Cisco’s presentation but from Google’s SDN example and from other work I’ve heard about (largely research): the real trick of SDN is getting a central source of intelligence and state.  Controlling hardware from that point is not rocket science.  And yet we don’t hear much about this in the OpenFlow or SDN dialog so far.  We have ways to accommodate gathering the data (Big Switch offers vertical APIs, for example) but we’re still grappling with where the data comes from in the first place.  Google uses processed forwarding information captured at the IP edge to SDN-manage routes.  I’ve proposed that DevOps templates/containers could provide the information.  Both of these are needed, and more, and that’s what I’d love to see Cisco address, or somebody else address.  Preferably both.
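To show what I mean by the DevOps route, here’s a hypothetical sketch of an application deployment template that declares its connectivity needs, and a registration step that publishes them into a central store.  The template format and field names are invented for illustration; the point is simply that this information has to originate on the application side somewhere before any central intelligence can exist.

```python
# Hypothetical sketch: a DevOps-style deployment template carries a
# "connectivity" section, and a registration step publishes those declared
# needs into the central intelligence-and-state repository.

import json

# Illustrative template fragment (invented format, not a real DevOps tool's schema).
APP_TEMPLATE = """
{
  "application": "order-entry",
  "components": ["web-tier", "db-tier"],
  "connectivity": {
    "web-tier->db-tier": {"bandwidth_mbps": 50, "max_latency_ms": 10},
    "internet->web-tier": {"bandwidth_mbps": 200}
  }
}
"""


def register_connectivity(template_json: str, central_store: list) -> None:
    """Push the template's declared connection needs into the central store."""
    template = json.loads(template_json)
    for link, needs in template["connectivity"].items():
        central_store.append({"app": template["application"], "link": link, **needs})


if __name__ == "__main__":
    store = []   # stand-in for the centralized intelligence-and-state repository
    register_connectivity(APP_TEMPLATE, store)
    print(json.dumps(store, indent=2))
```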
