AT&T made some news by announcing its latest Supplier Domain Program, based on the goal of developing a transformative SDN/NFV architecture for the network. The company is pledging to begin buying under the program late this year and into 2014, but it has not issued any capex updates. That tells me they expect the program to have only a modest impact on equipment selection in the next year or so, since savings are a primary goal of SDN/NFV and none are being projected.
I think that Domain 2.0 is more a shot across the bow of vendors than a serious commitment to either SDN or NFV. The fact is that we don’t have any mature notion of how to build a large-scale SDN/NFV network and in particular how we’d manage one. The big impediment is that network vendors simply don’t want to cooperate, and that’s what I think AT&T is trying to address.
What is interesting is that AT&T is biasing its justification for the new program toward better service agility and other revenue-side points. I don’t believe for a minute that this shows a lack of interest in cost savings (despite the fact that capex plans aren’t being changed for 2014), but I do think that AT&T has, like other operators, come to realize that the value of SDN and NFV has to come in large part from boosting the “R” in ROI and not just cutting the “I”. That also makes the initiative more palatable to vendors, who otherwise might see a good part of their own revenues vanish.
And surprise, surprise, just as this is happening, Cisco is introducing its Network Convergence System, a kind of silicon-up approach to SDN and NFV that is aimed at transforming the network in the very way AT&T says it wants. NCS is a three-part marriage: a custom chip, promised enhancements to control-plane handling and control-to-traffic integration, and improved management coupling down to the chip level.
This sort of vertical integration from the chip to the skies isn’t new; Juniper made the same claim when it announced its own new chips a couple of years ago. The problem was that Juniper diluted the benefit claims in its fuzzy and rudderless QFabric launch, the first application of the new chip. However, I think the lessons of both announcements are the same: first, you have to think about the network of the future very differently, and second, that difference is going to be hard to square with our current conception of how networks are built.
Let’s start at the top. Agility in service creation isn’t helpful if the features of the services you’re creating are invisible to the user. Service features that are visible have to be built either on top of the current network connection services or by transforming the connection model itself. NFV is the logical way to do the first, and SDN could be made to do the second. But so far NFV isn’t claiming to create new features, only to host old ones that were once part of appliances. SDN, so far, is either about sticking an OpenFlow pipe up in the air and hoping a service model lands on top of it, or about using software to make an OpenFlow network do Ethernet switching or IP routing. How fast we can stick both feet back into the network quicksand of the past isn’t a good measure of the value of agility.
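To make that last point concrete, here’s a minimal, purely illustrative sketch of what an SDN controller’s packet-in logic amounts to when all it does is re-create MAC learning. The names are hypothetical and it isn’t tied to any real controller API; the point is that the resulting behavior is exactly what a legacy Ethernet switch already provides, so this path adds no new connection model.

```python
# Illustrative sketch only: a toy "controller" whose packet-in logic just
# re-creates MAC learning, the way a conventional Ethernet switch would.
# All names here are hypothetical; no real controller API is being used.

FLOOD = "flood"

class L2LearningApp:
    def __init__(self):
        self.mac_table = {}   # learned MAC -> switch port
        self.flow_table = []  # "installed" (dst_mac -> out_port) rules

    def packet_in(self, in_port, src_mac, dst_mac):
        """Handle a packet the switch couldn't match against a flow rule."""
        # Learn where the source lives, just as a legacy switch would.
        self.mac_table[src_mac] = in_port

        # If we know the destination, install a flow and forward; else flood.
        if dst_mac in self.mac_table:
            out_port = self.mac_table[dst_mac]
            self.flow_table.append((dst_mac, out_port))
            return out_port
        return FLOOD

if __name__ == "__main__":
    app = L2LearningApp()
    print(app.packet_in(1, "aa:aa", "bb:bb"))  # flood: bb:bb not yet learned
    print(app.packet_in(2, "bb:bb", "aa:aa"))  # 1: aa:aa was learned on port 1
```

Nothing here is wrong, exactly; it’s just that software has reproduced what hardware already did, which is the quicksand I’m talking about.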
I’ve tried, in the CloudNFV initiative for which I’m the Chief Architect, to make a clear point that the virtual functions of NFV have to be both network features and cloud application components. They can’t be things that have to be laboriously written to a new set of standards, nor can they be simply stuff that reluctant vendors have been forced (via things like AT&T’s Domain 2.0) to unbundle into software form. If we want software to be the king of the network’s future value, then we have to take software architecture for networking seriously. That means drawing value from current tools, providing a platform for creating new stuff, and supporting the major trends in the market.
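As a way to picture that principle (my own toy sketch, not CloudNFV’s actual design, with every name hypothetical): if a virtual function is just an ordinary cloud component that happens to expose network behavior, then the same lifecycle that deploys any cloud application can deploy it, and no NFV-only programming standard is needed.

```python
# A sketch of the principle, not CloudNFV's actual design: a virtual function
# is just a cloud application component that also exposes network behavior.
# All class and method names here are hypothetical.

from abc import ABC, abstractmethod

class CloudComponent(ABC):
    """Anything a cloud orchestrator can deploy and connect."""

    @abstractmethod
    def deploy(self, host: str) -> None: ...

    @abstractmethod
    def connect(self, endpoint: str) -> None: ...

class FirewallVNF(CloudComponent):
    """A 'network feature' packaged as an ordinary cloud component."""

    def __init__(self):
        self.rules = []

    def deploy(self, host: str) -> None:
        print(f"firewall instance placed on {host}")

    def connect(self, endpoint: str) -> None:
        print(f"firewall in-line with {endpoint}")

    def allow(self, cidr: str) -> None:
        self.rules.append(cidr)  # the network-feature side of the component

class RecommendationApp(CloudComponent):
    """An ordinary application component; same lifecycle, no NFV-only standard."""

    def deploy(self, host: str) -> None:
        print(f"app instance placed on {host}")

    def connect(self, endpoint: str) -> None:
        print(f"app reachable at {endpoint}")

if __name__ == "__main__":
    for component in (FirewallVNF(), RecommendationApp()):
        component.deploy("cloud-host-1")  # one orchestration path for both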
One of those trends is the mobile point-of-activity empowerment I keep talking about. With agile services, we can build something the market needs a bit faster than before. What I’m recommending goes further: extemporaneous services, services that self-compose around the requests of users. We are not going to capitalize on the new value of networking by continuing to treat a “service” as a long-planned evolution of features designed to exploit some predicted “something.” How do we know the “something” is even valuable, and how would a user come to depend on that “something” if nobody offered it? We have to be able to define software using the same agile principles that we expect to apply in designing networks.
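Here’s a deliberately simplified sketch of what “self-composing” could mean; the catalog, tags, and feature names are all hypothetical, and the only point is that the service is assembled at request time rather than planned in advance.

```python
# A toy illustration (hypothetical names throughout) of "extemporaneous"
# composition: a service is assembled on demand from catalogued features
# that match the user's request, rather than being pre-planned.

from dataclasses import dataclass, field

@dataclass
class Feature:
    name: str
    tags: set

CATALOG = [
    Feature("local-caching", {"video", "mobile"}),
    Feature("turn-by-turn-overlay", {"navigation", "mobile"}),
    Feature("ad-hoc-conference", {"collaboration"}),
]

@dataclass
class ExtemporaneousService:
    user: str
    features: list = field(default_factory=list)

def compose(user: str, request_tags: set) -> ExtemporaneousService:
    """Assemble a service around a request, using whatever features fit."""
    service = ExtemporaneousService(user)
    service.features = [f for f in CATALOG if f.tags & request_tags]
    return service

if __name__ == "__main__":
    s = compose("alice", {"mobile", "navigation"})
    print([f.name for f in s.features])  # composed at request time, not planned
```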
That’s the service-model side. SDN and NFV have to be coupled, in that NFV has to be able to define the services it expects SDN to create. Those services shouldn’t, and can’t, be limited simply to IP and Ethernet, because if those are enough then we don’t have any real new value for SDN to address. How should connectivity work in an “extemporaneous” service? How do we secure and manage something that’s here one minute and gone, or different, the next? Those are the questions that AT&T’s Domain 2.0 implies we must answer, and the questions that Cisco’s NCS is implied to answer. But implication isn’t the same as proof, and we have yet to see a single extemporaneous application of the network from Cisco, or an architecture convincingly framed to support such a thing. Can Cisco do it? I think so, but I want them to step up and be explicit on the “how” part. So, I’m sure, does AT&T.
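To show what “NFV defining services it expects SDN to create” might look like beyond IP and Ethernet, here is a hedged sketch; it isn’t drawn from any standard, Domain 2.0 document, or Cisco product, and every identifier is hypothetical. The idea is simply that the orchestration side names a connection model, endpoints, and a lifetime, and the SDN side is responsible for realizing it and tearing it down.

```python
# A sketch (not drawn from any standard or product) of how NFV might hand SDN
# a connection model that isn't just "Ethernet" or "IP": the request names the
# model, its endpoints, and its expected lifetime, and SDN decides how to
# realize it. All identifiers here are hypothetical.

from dataclasses import dataclass
from typing import List

@dataclass
class ConnectionRequest:
    model: str              # e.g. "chain", "tree", "any-to-any"
    endpoints: List[str]    # virtual functions or user attachment points
    lifetime_seconds: int   # extemporaneous services come and go

class SdnControllerStub:
    """Stands in for whatever actually programs the forwarding plane."""

    def realize(self, req: ConnectionRequest) -> str:
        # A real controller would compute paths and install forwarding state;
        # here we simply acknowledge the requested model.
        return (f"built a {req.model} over {len(req.endpoints)} endpoints "
                f"for {req.lifetime_seconds}s")

if __name__ == "__main__":
    req = ConnectionRequest("chain", ["user-42", "vFirewall", "vCache"], 600)
    print(SdnControllerStub().realize(req))
```

Whether the answer looks anything like this is exactly the “how” I want Cisco to be explicit about.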
And that’s the hope here. If AT&T did drive Cisco to claim more agility is at the core of NCS, then can any network vendor ignore the same issues? I don’t think network vendors will willingly move into the future unless buyers threaten to push them out of the present, which may be what AT&T is doing. Other operators, I think, will follow. So will other vendors.