What Does Domain 2.0 Have to Do to Succeed?

I’ve looked at the vendor side of network transformation in a couple of recent blogs, focusing on interpreting Wall Street views of how the seller side of our industry will fare.  It may be fitting that my blog for the last day of 2013 focuses on what’s arguably the clearest buyer-side statement that something needs to change for 2014 and beyond.  AT&T’s Domain 2.0 model is a bold attempt to gather information (it’s an RFI) about the next generation of network equipment and how well it will fit into a lower-capex model.  Bold, yes.  Achievable, optimal?  We’ll see, but remember this is just my analysis!

Insiders tell me that Domain 2.0 is aimed at creating a much more agile model of infrastructure (yes, we’ve heard that term before) as well as a model that can contain both opex and capex.  While some Street research (the MKM report cited in Light Reading for example) focuses on the capex impact of the AT&T initiative for the obvious reason that’s what moves network equipment stocks, the story is much broader.  My inside person says it’s about the cloud as a platform for services, next-gen operations practices to not only stabilize but drive down opex even as service complexity rises, and optimum use of cloud resources to host network features, including support for both SDN and NFV.  You can see from this list of goals that AT&T is looking way beyond white-box switches.

Another thing that insiders say is that Domain 2.0 recognizes that where regulatory issues aren’t driving a different model, the smart approach is to spend more proportionally on bandwidth and less proportionally on bandwidth optimization.  That’s why Ciena makes out better and Cisco likely makes out worse.  Networks are built increasingly from edge devices with optical onramp capability, coupled with agile optics.

Where SDN comes into the mix in the WAN is in providing a mechanism for creating and sustaining this model, which is sort of what some people mean when they say "flattening the network".  It's not as much about eliminating OSI layers as it is about eliminating physical devices so that the total complexity of the network is reduced.  According to these sources, Cisco isn't the only router vendor at risk—everyone who's not a pure-play agile-optics vendor might have to look over their shoulder often.

Data center networking is also on the agenda, mostly because the new cloud-and-NFV model demands a lot of network agility in the data center.  There will be, obviously, a major increase in the amount of v-switching consumed, but it's not yet clear whether this is all incremental to the current data center switching infrastructure or simply a result of increased virtualization (which uses vSwitch technology, obviously).  However, my sources say that they are very interested in low-cost data center switching models based on SDN.

It seems likely to me that a combination of an SDN-metro strategy based on the optics-plus-light-edge model and an SDN data center strategy would be self-reinforcing.  Absent one or the other of these, it's harder to see how a complete SDN transition could occur.  To me, that means it will be hard for a smaller vendor with a limited portfolio to get both right.  Could a white-box player?  My sources in AT&T say that they'd love white boxes from giants like IBM or HP or Intel or Dell, but they're skeptical about whether smaller players would be credible in as critical a mission.  They are even more skeptical about whether smaller players might be able to field credible SDN software.  A giant IT player is the best answer, so they say.

The role of NFV here is harder to define.  If you presume “cloud” is a goal and “SDN” is a goal, then you either have to make NFV a fusion of these things to gather enough critical executive attention, or you have to say that NFV is really going to be about something totally different from the cloud/SDN combination.  It’s not clear to me where AT&T sits on this topic, but it’s possible that they see NFV as the path toward gaining that next-gen operations model we talked about.

NFV does define a Management and Orchestration (MANO) function.  It's tempting to say that this activity could become the framework of our new-age operations vision.  The challenge here is that next-gen operations is not the ETSI NFV ISG's mandate.  It is possible that working through a strategy to operationalize virtual-function-based services could create a framework with broader capabilities, but it would require a significant shift in ISG policy.  The ISG, to ensure it gets its own work done, has been reluctant to step outside virtual functions into the broader area, and next-gen operations demands a complete edge-to-edge model, not just a model of virtual functions.

Might our model come from the TMF?  Support for that view inside AT&T is divided at best, which mirrors the views we've gotten from Tier Ones globally in our surveys.  The problem here, I think, is less that the TMF doesn't have an approach (I happen to think that GB942 is as close to the Rosetta Stone of next-gen management as you'll find anywhere) as that TMF material doesn't explain that approach particularly well.  TMF material seems aimed more at the old-line TMF types, and the front lines of the NGN push inside AT&T or elsewhere lack representation from this group for obvious reasons.  NGN isn't about tradition.

The future of network operations could be derived from NFV activities, or from TMF’s, or from something that embodies both, or neither.  Here again, it would be my expectation that advances in operations practices would have to come out of integration activity associated with lab trials and proof-of-concept (TMF Catalyst) testing.  As a fallen software guy, I believe you can develop software only from software architectures, and standards and specs from either the NFV ISG or the TMF aren’t architectures.  I also think this has to be viewed as a top-down problem; all virtualization including “management virtualization” has to start with an abstract model (of a service, end-to-end, in this case) and move downward to how that model is linked with the real world to drive real operations work.  The biggest advance we could see in next-gen networking for 2014 would come if Domain 2.0 identifies such a model.
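To make the top-down point concrete, here's a minimal sketch of what I mean by "start with an abstract model and move downward."  All the names here (the `ServiceElement` class, the resolver, the device and VNF labels) are my own hypothetical illustrations, not anything from the NFV ISG, the TMF, or Domain 2.0; the point is only that the abstract service model stays the same whether a leaf element binds to a legacy box or an NFV-hosted function.

```python
# Hypothetical sketch: an abstract end-to-end service model is defined
# first, then "bound" downward onto whatever real resources realize it.
from dataclasses import dataclass, field

@dataclass
class ServiceElement:
    """One node in the abstract service model, e.g. 'Access' or 'Firewall'."""
    name: str
    children: list = field(default_factory=list)
    binding: str = ""  # filled in only when mapped to a real resource

def bind(element, resolver):
    """Walk the model top-down; ask the resolver how each leaf element
    maps onto real infrastructure (a device, a hosted virtual function)."""
    if element.children:
        for child in element.children:
            bind(child, resolver)
    else:
        element.binding = resolver(element.name)

# The same abstract model can bind to legacy gear or to virtual functions
# without the model itself changing -- only the resolver differs.
service = ServiceElement("BusinessInternet", [
    ServiceElement("Access"),
    ServiceElement("Firewall"),
])
bind(service, lambda name: {"Access": "edge-router-7",
                            "Firewall": "vFW-instance-42"}[name])
```

Swap the lambda for one that returns physical appliances and the service model above is untouched; that decoupling of the abstraction from its realization is the whole argument.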

I wish you all a happy and prosperous New Year, I’m grateful for your support in reading my blog, and I hope to continue to earn your interest in 2014.

