Last week we had some interesting news on AT&T’s Domain 2.0 program and some announcements in the SDN and NFV space. As is often the case, there’s an interesting juxtaposition between these events that sheds some light on the evolution of the next-gen network. In particular, it raises the question of whether either operators or vendors have got this whole “domain” thing right.
Domain 2.0 is one of those mixed-blessing things. It’s good that AT&T (or any operator) recognizes that it’s critical to look for a systematic way of building the next generation of networks. AT&T has also picked some good thinkers in its Domain 2.0 partners (I particularly like Brocade and Metaswitch), and its current-infrastructure suppliers are represented there as well. You need both the future and the present to talk evolution, after all. The part that’s less good is that Domain 2.0 seems a bit confused to me, and also to some AT&T people who have sent me comments. The problem? Again, it seems to be the old “bottom-up-versus-top-down” issue.
There is a strong temptation in networking to address change incrementally, and if you think in terms of incremental investment, then incremental change is logical. The issue is that “incremental change” can turn into the classic problem of trying to cross the US by making random turns at intersections. Each turn may be optimal based on what you can see, but you never see the destination. Domains without a common goal end up being silos.
What Domain 2.0 or any operator evolution plan has to do is begin with some sense of the goal. We all know that we’re talking about adding a cloud layer to networking. For five years, operators have made it clear that whatever else happens, they’re committed to evolving toward hosting stuff in the cloud.
The cloud, in the present, is a means of entering the IT services market. NFV also makes it a way of hosting network features in a more agile and elastic manner. So we can say that our cloud layer of the future will have some overlap with the network layer of the future.
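To make “agile and elastic” concrete: the point of hosting a network feature rather than buying it as an appliance is that capacity becomes a scheduling decision. Here’s a minimal Python sketch of that scale-out logic; the class, names, and thresholds are all invented for illustration and don’t reflect any real NFV orchestrator’s API.

```python
# Toy model of NFV-style elasticity: a network feature (say, a virtual
# firewall) is hosted as replaceable instances instead of a fixed box.
# All names here are illustrative; no real orchestrator API is implied.

from dataclasses import dataclass, field

@dataclass
class HostedFeature:
    name: str
    target_util: float = 0.7              # utilization ceiling we aim to stay under
    instances: list = field(default_factory=list)

    def scale(self, offered_load: float, capacity_per_instance: float):
        """Add or remove instances so utilization stays near the target."""
        needed = max(1, round(offered_load / (capacity_per_instance * self.target_util)))
        while len(self.instances) < needed:
            self.instances.append(f"{self.name}-vm{len(self.instances)}")
        del self.instances[needed:]        # scale back in when load drops
        return self.instances

fw = HostedFeature("vfirewall")
print(fw.scale(offered_load=9.0, capacity_per_instance=2.0))   # scales out to 6 VMs
print(fw.scale(offered_load=2.0, capacity_per_instance=2.0))   # shrinks back to 1
```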
Networking, in the sense most people think of it (Ethernet and IP devices), is caught between two worlds, change-wise. On the one hand, operators are very interested in getting more from lower-layer technology like agile optics. They’d like to see core networking and even metro networking handled more through agile optical pipes. By extension, they’d like to create an electrical superstructure on top of optics that can do whatever happens to be 1) needed by services and 2) not yet fully efficient if implemented in pure optical terms. Logically, SDN could create this superstructure.
At the top of the current IP/Ethernet world we have increased interest in SDN as well, mostly to secure two specific benefits: centralized control of forwarding paths, which eliminates the current adaptive route discovery and (to some) its disorder, and improved traffic engineering. Most operators also believe that if these are handled right, they can reduce operations costs. That reduction, they think, would come from creating a more “engineered” version of Level 2 and 3 to support services. Thus, current Ethernet and IP devices would be increasingly relegated to on-ramp functions at the user edge or at the service edge.
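For readers who want the mechanics behind “centralized control of forwarding paths”: the controller holds the whole topology, computes each path itself, and pushes static entries to the switches, so no device runs adaptive route discovery. Here’s a toy sketch in plain Python; the topology and rule format are made up for illustration, not any real controller’s API.

```python
# Minimal sketch of centralized path control: the controller knows the whole
# topology, computes a path, and emits per-switch forwarding entries.

from collections import deque

TOPOLOGY = {            # switch -> {neighbor: outgoing port}
    "A": {"B": 1, "C": 2},
    "B": {"A": 1, "D": 2},
    "C": {"A": 1, "D": 2},
    "D": {"B": 1, "C": 2},
}

def shortest_path(src, dst):
    """Breadth-first search; with link weights this would be Dijkstra."""
    seen, queue = {src}, deque([[src]])
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nbr in TOPOLOGY[path[-1]]:
            if nbr not in seen:
                seen.add(nbr)
                queue.append(path + [nbr])

def flow_entries(src, dst):
    """Turn a computed path into static (switch, match, out_port) rules."""
    path = shortest_path(src, dst)
    return [(hop, f"dst={dst}", TOPOLOGY[hop][nxt])
            for hop, nxt in zip(path, path[1:])]

for rule in flow_entries("A", "D"):
    print(rule)   # ('A', 'dst=D', 1) then ('B', 'dst=D', 2)
```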
At the service level, it’s clear that you can use SDN principles to build more efficient networks to offer Carrier Ethernet, and it’s very likely that you could build IP VPNs better with SDN as well. The issue here is more on the management side: the bigger you make an SDN network, the more you have to consider how well central control can be made to work and how you’d manage the mesh of devices. Remember, you need connections to manage stuff.
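A quick back-of-envelope run shows why that management question grows with the mesh. If a controller owns one forwarding path per ordered pair of edge sites, state grows as O(N²); the hop count below is an assumed figure, not a measurement.

```python
# Why central control gets harder as an SDN service mesh grows: the
# controller owns one forwarding path per ordered pair of edge sites.

AVG_HOPS = 4                            # assumed average path length through the core

for n_sites in (10, 100, 1000):
    paths = n_sites * (n_sites - 1)     # ordered site pairs, O(N^2)
    entries = paths * AVG_HOPS          # flow entries the controller must own
    print(f"{n_sites:>5} sites -> {paths:>9,} paths, {entries:>10,} flow entries")
```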
All of this new stuff has to be handled with great efficiency and agility, say the operators. We have to produce what one operator called a “third way” of management, one that somehow bonds network and IT management into managing “resources” and “abstractions” and the way they come together to create applications and services. Arguably, Domain 2.0 should start with the cloud layer, the agile optical layer, and the cloud/network intersection created by SDN and NFV. To that, it should add very agile and efficient operations processes, cutting across all these layers and bridging current technology to the ultimate model of infrastructure. What bothers me is that I don’t get the sense that’s how it works, nor do I get the sense that such a goal has driven which vendors get invited in.
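As a sketch of what that “third way” might look like in data-model terms, here’s a toy Python rendering in which a service is composed of abstractions, each bound to whatever resource (network or IT) can realize it. Every class and name here is invented to illustrate the idea, not drawn from any operator’s or vendor’s actual model.

```python
# Toy model of the "third way": services are built from abstractions,
# and abstractions are bound to whichever resources can realize them.

from dataclasses import dataclass

@dataclass
class Resource:
    name: str
    kind: str                 # "network" (a path, a port) or "it" (a VM, a host)

@dataclass
class Abstraction:
    name: str
    required_kind: str

    def bind(self, pool):
        """Grab any resource of the right kind; real logic would be policy-driven."""
        chosen = next(r for r in pool if r.kind == self.required_kind)
        pool.remove(chosen)
        return (self.name, chosen.name)

pool = [Resource("vm-42", "it"), Resource("optical-path-7", "network")]
service = [Abstraction("hosted-firewall", "it"),
           Abstraction("site-to-site-pipe", "network")]

# One service lifecycle spanning two management worlds; the service
# model doesn't care which world a resource came from.
print([a.bind(pool) for a in service])
```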
Last week, Ciena (a Domain 2.0 partner) announced a pay-as-you-earn NFV strategy, and IMHO the approach has both merit and issues. Even if Ciena resolves the issue side (which I think would be relatively easy to do), the big question is why the company would bother with a strategy way up at the service/VNF level when its own equipment is down below Level 2. The transformation Ciena could support best is the one at the optical/electrical boundary. Could there be an NFV or SDN mission there? Darn straight, so why not chase that one?
If opportunity isn’t a good enough reason for Ciena to tie its own strengths into an SDN/NFV approach, we have another: competition. HP announced enhancements to its own NFV program, starting with a new version of its Director software, moving to a hosted version of IMS/EPC, and then on to a managed API program with components offered in VNF form. It would appear that HP is aiming at creating an agile service layer in part by building a strong developer framework. Given that HP is a cloud company that already sells servers and strong development tools, this sort of thing is highly credible.
It’s hard for any vendor to build a top-level NFV strategy (and VNFs are part of one) without real influence in hosting and the cloud. It’s equally hard to tie NFV to the network without strong service-layer networking applications, applications that would likely evolve out of Level 2/3 behavior rather than out of optical networking. I think there are strong things that optical players like Ciena or Infinera could do with both SDN and NFV, but they’d be different from what a natural service-layer leader would do.
Domain 2.0 may lack high-level vision, but its lower-level fragmentation proves something important: implementation of a next-gen model is going to start in different places and engage different vendors in different ways. As things evolve, they’ll converge. In the meantime, vendors will need to play to their own strengths to maximize their influence on the evolution of their part of the network, while keeping in mind the operator’s longer-term goals, even when the operator hasn’t articulated them clearly, or recognized them fully.