NFV: What We Virtualize Matters

Transformation in telecom means investment in something not currently being invested in, but it probably doesn’t mean “new” money.  Most operators set capital budgets based on return on infrastructure, which means the sum of services a network supports determines how much you spend to sustain or change it.  One reason mobile networking is so important to vendors is that mobile revenues and profits are far better than wireline’s, so there’s more money to spend on the infrastructure involved in mobile service delivery.

Despite mobile services’ exalted position as revenue king, none of the major operators believes mobile can sustain current profit levels given the competitive pressure.  As a result, mobile services have been a target for “modernization”, meaning the identification of new technologies or equipment that can deliver more for less.  We’ve had a number of NFV proof-of-concept announcements built around mobility—IMS and, most recently (from NSN), EPC.  NFV is a kind of poster child for modernization of network architectures, so it’s productive to look at these to see what we might learn about NFV and modernization in general.

One thing that jumps out of a mobile/NFV assessment is that there’s no single model of an NFV service.  When you create a service using NFV, you deploy assets on servers that provide some or all of the service’s functionality.  It’s common to think of these services, and their assets, on a per-customer basis, but clearly nobody is going to deploy a private IMS/EPC for everyone who makes a mobile call or sends an SMS.  Some services are per-customer and some are shared (in CloudNFV we call the latter “infrastructure services”).
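To make the distinction concrete, here is a minimal sketch of the two deployment models—the class and method names are hypothetical illustrations, not from CloudNFV or any NFV specification:

```python
# Hypothetical sketch: one deployer handling both NFV service models.
# Per-customer services get an instance per subscriber; "infrastructure
# services" (like IMS/EPC) get a single shared instance everyone uses.

class Deployer:
    def __init__(self):
        self._shared = {}          # one instance per infrastructure service
        self._per_customer = {}    # one instance per (service, customer) pair

    def deploy(self, service, customer=None, shared=False):
        if shared:
            # e.g. EPC: every caller attaches to the same deployed instance
            return self._shared.setdefault(service, f"{service}-shared")
        # e.g. a virtual firewall: each customer gets a dedicated instance
        key = (service, customer)
        return self._per_customer.setdefault(key, f"{service}-{customer}")

d = Deployer()
# Two subscribers share one EPC deployment...
assert d.deploy("EPC", customer="alice", shared=True) == \
       d.deploy("EPC", customer="bob", shared=True)
# ...but each gets their own per-customer service instance.
assert d.deploy("vFirewall", customer="alice") != \
       d.deploy("vFirewall", customer="bob")
```

The point of the sketch is simply that an NFV orchestrator has to support both lifecycles; neither model alone covers the service range.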

This range of services illustrates another interesting point: some services have a relatively long deployment life and others are transient.  An infrastructure service is an example of the former and a complex telepresence conference an example of the latter.  Obviously, something that has to live for years is something whose “deployment” is less an issue than its “maintenance”, while something as transient as a phone call has to be deployed efficiently but can almost be ignored once active—if it fails, the user redials.
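That lifetime distinction can be sketched as an orchestration policy—the policy names below are hypothetical, purely to illustrate the trade-off:

```python
# Hypothetical sketch: service lifetime drives where the orchestrator
# spends its effort -- long-lived services justify in-place repair,
# transient ones just get torn down (the user redials).

POLICIES = {
    "infrastructure": {"lifetime": "years",   "on_failure": "repair"},
    "transient":      {"lifetime": "minutes", "on_failure": "abandon"},
}

def handle_failure(service_class):
    if POLICIES[service_class]["on_failure"] == "repair":
        return "redeploy the failed component and preserve service state"
    return "tear the service down; the user simply redials"
```

Nothing here is standardized; it just captures why “maintenance” dominates the cost of one class of service and “deployment” dominates the other.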

If we look at things that are logically infrastructure services—IMS and EPC—we see another interesting point.  Most infrastructure services are a cooperative community of devices/elements, bound through standardized interfaces so that operators can avoid vendor lock-in.  When we think about virtualizing these services, we have to ask a critical question: do we virtualize each of the current service elements, or do we virtualize some set of cooperative subsystems?  Look at a diagram of IMS and you see what looks all too much like a complex organizational chart.  However, most of the relationships the diagram shows are internal—somebody using IMS or EPC from the outside would never see them.  They’re artifacts not of the service but of the service architecture.  So do we perpetuate the “old” architecture, which may have evolved to reflect the strengths and limitations of appliances, by virtualizing it?  Or do we start from scratch and build a “black box” of virtual elements?

In the IMS world, we can see an example of this in Metaswitch’s Project Clearwater IMS implementation.  Project Clearwater builds IMS by replicating its features, not its elements, which means the optimum use of the technology isn’t constrained by the limitations of physical devices.  I think something like that is even more important when you look at EPC.  EPC is made up of elements (MME, SGW, PGW…you get the picture) that represent real devices.  If we virtualize them that way, we’re creating an architecture that might fit the general notion of NFV (we’re hosting each element) but flies in the face of the SDN notion of centralization.  Why have central control of IP/Ethernet routing but distributed, adaptive mobility management?
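The per-device versus black-box choice can be sketched as two decompositions of the same service.  This is a simplified illustration, not a 3GPP-conformant model—the component names in the black-box version are hypothetical:

```python
# Illustrative sketch: the same EPC behavior decomposed two ways.
# Virtualizing per device carries the internal 3GPP reference points
# into the virtual world; a black-box decomposition keeps only the
# interfaces the outside world actually touches.

per_device = {
    "components": ["MME", "SGW", "PGW"],
    # internal reference points inherited from the appliance world
    "internal_interfaces": ["S11 (MME-SGW)", "S5/S8 (SGW-PGW)"],
    "external_interfaces": ["S1-MME", "S1-U", "SGi"],
}

black_box = {
    # features, not boxes: each can be scaled and placed independently
    "components": ["session-control", "user-plane-forwarding"],
    "internal_interfaces": [],  # implementation detail, nothing mandated
    "external_interfaces": ["S1-MME", "S1-U", "SGi"],
}

# Both present the same face to the rest of the network...
assert per_device["external_interfaces"] == black_box["external_interfaces"]
# ...but only one drags the old internal structure into the new world.
assert len(per_device["internal_interfaces"]) > 0
assert len(black_box["internal_interfaces"]) == 0
```

The external interfaces are what the service contract requires; everything inside the box is where a virtual implementation is free to do better than the appliances did.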

So at this point you might wonder whether all the PoC activity around mobility reflects these points, and the problem is that we don’t really know for sure.  My interpretation of the press releases from the vendors involved (most recently NSN) is that the virtualization is taking place at the device level.  Every IMS and EPC element in the 3GPP diagram is reproduced in the virtual world, so all the limitations of the architecture used to provide IMS or EPC are sustained in the “new” world.  You can argue that this is a value in transitioning from hard devices to virtual EPC or IMS, but I think you must then ask whether you’ve facilitated the transition to an end-game you’ve devalued.  Can we really modernize networks by creating virtual versions of all the current stuff, versions that continue to demand the same interfaces and protocol exchanges as before?  Frankly, I don’t think there’s a chance in the world that’s workable.

So here’s my challenge or offer to vendors.  If you are doing a virtual implementation of some telecom network function that takes a black-box approach rather than rehashing real boxes, I want to hear from you.  I’ll review it, write about it, and hopefully get you some publicity about your differentiation.  I think this is a critically important point to get covered.

Conversely, if you’re virtualizing boxes I want you to tell me how much of the value proposition for the network of the future you’re really going to capture through that approach, if you want me to say nice things.  I also want to know how your box approach manages to dodge the contradiction with SDN goals.  We have one network and SDN and NFV will have to live in that network harmoniously.
