How Operators are Preparing NFV Plans for their Fall Pre-Budget Review

The consensus among network operators who provide either wireline or wireless broadband is that revenue per bit will fall below cost per bit by mid-2017.  Given the time it takes to make any significant changes in service offerings, operations practices, or capital infrastructure programs, something remedial would have to begin next year to be effective.

In mid-September of each year, operators embark on a technology planning cycle.  It’s not a universal process, nor is it always formalized, but it’s widespread and usually completed by mid-November.  The goal is to figure out what technology decisions will drive capital programs for the following year.  That which wins in this cycle has a good chance of getting into field trial and even deployment in 2016.

It’s not surprising that operators are taking stock now, in preparation for the work to come.  Nor is it surprising that NFV is a big question to be addressed, or that NFV’s potential to improve profits by widening the revenue/cost-per-bit gap is perhaps the largest technology question-mark.

My opening qualifier “who provide either wireline or wireless broadband” is important here.  More specialized operators like the managed service providers (MSPs), cloud providers (CSPs), or those who offer multi-channel broadcast video are in a bit better shape.  Interestingly, one of the most obvious success points for NFV is the MSP space, so let’s start looking at NFV’s potential with its successes.

An MSP adds value to connection services by introducing a strong feature-management model.  Most connection services are consumed within the context of private WAN deployment of some sort, and there’s more to a private WAN than a few pipes.  Over the last two decades, the cost of acquiring and sustaining the skills needed for that ancillary technology, and the cost of managing the private WAN overall, have grown as a component of TCO.  Today businesses spend almost twice as much supporting their network as buying equipment for it.  MSPs target that trend.

NFV does too, or at least the “service chaining” or “virtual CPE” portion does.  Connection services are built into private WANs by adding inline technologies like firewalls, application accelerators, encryption, and so forth, and by adding hosted elements like DNS and DHCP.  The MSP model of NFV vCPE is to supply those capabilities by hosting them on an agile edge device.  That means that you deploy a superbox with each customer and then mine additional revenue potential by filling it with features that you load from a central repository.  It’s a good, even great, model.
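
To make the model concrete, here is a minimal Python sketch of the vCPE idea as described above: a central feature catalog the operator curates, and a per-customer edge “superbox” that loads a chain of features from it.  Every class, feature name, and fee here is hypothetical; a real deployment would involve an NFV orchestrator and actual VNF images, not Python objects.

```python
# Illustrative sketch only: the vCPE concept of filling a per-customer
# edge device with features pulled from a central repository.
from dataclasses import dataclass, field

@dataclass
class VirtualFunction:
    name: str           # e.g. "firewall", "dns-dhcp", "app-accelerator"
    version: str
    monthly_fee: float  # hypothetical per-feature revenue

class FeatureCatalog:
    """Central repository the operator curates; edge devices pull from it."""
    def __init__(self):
        self._catalog = {}

    def publish(self, vf: VirtualFunction):
        self._catalog[vf.name] = vf

    def fetch(self, name: str) -> VirtualFunction:
        return self._catalog[name]

@dataclass
class EdgeDevice:
    """The 'superbox' deployed at a customer site."""
    customer: str
    service_chain: list = field(default_factory=list)

    def load(self, catalog: FeatureCatalog, names):
        # Features are applied in the order given, mimicking service chaining.
        self.service_chain = [catalog.fetch(n) for n in names]

    def monthly_revenue_potential(self) -> float:
        return sum(vf.monthly_fee for vf in self.service_chain)

# Usage: publish features centrally, then "fill the box" per customer.
catalog = FeatureCatalog()
for vf in (VirtualFunction("firewall", "2.1", 40.0),
           VirtualFunction("dns-dhcp", "1.3", 10.0),
           VirtualFunction("app-accelerator", "3.0", 55.0)):
    catalog.publish(vf)

branch = EdgeDevice(customer="branch-office-17")
branch.load(catalog, ["firewall", "app-accelerator", "dns-dhcp"])
print(branch.monthly_revenue_potential())  # 105.0
```

The point of the sketch is the revenue mechanic: the operator mines additional revenue per site simply by loading more features, without truck rolls or new hardware.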

This same model can be adopted by any big operator, including all the broadband ISPs, and in theory it could be applied to every customer.  There are issues with that theory, though—particularly for NFV’s broad acceptance:

  • vCPE delivers the most value where the cost of actual devices is high and their deployment is dynamic.  If the boxes are cheap, and if users tend to get all the same features at once and then never change them, vCPE doesn’t do much good.  Consumers and small businesses don’t fit the vCPE success model.
  • While NFV can be used to deploy functions into CPE, that mission dodges most of the broader NFV value propositions.  Managing that vCPE model isn’t much different from managing real boxes.  You don’t need the cloud, or really even function-to-function connectivity, to make it work.  There’s no economy of scale to consider.
  • vCPE has encouraged VNF providers to consider what operators overall say is an unrealistic revenue model: pay-as-you-go licensing.  MSPs like this approach because it lets them match expenses to revenue; with a CPE-hosted VNF model they don’t have to deploy much shared infrastructure, so VNF licenses would make up most of their first cost.  Other operators don’t like that model at all because it exposes them to what they believe to be higher long-term costs (a rough cost sketch follows this list).
  • The applications of vCPE that do work are a very small part of the revenue/cost-per-bit problem, and so even if you revolutionize these services for the appropriate customers, you don’t move the ball on profit trends very much.
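
To illustrate the licensing tension in the third bullet, here is a back-of-the-envelope comparison, with purely hypothetical numbers, of a one-time perpetual VNF license versus a pay-as-you-go fee over the life of a customer site.

```python
# Back-of-the-envelope comparison of two VNF licensing models for one
# customer site over a multi-year service life.  All figures are
# hypothetical and chosen only to show the shape of the trade-off.

perpetual_license = 600.0       # one-time VNF license per site (upfront)
pay_as_you_go_monthly = 25.0    # recurring VNF fee per site per month
service_revenue_monthly = 90.0  # what the customer pays for the feature

for months in (12, 24, 36, 60):
    payg_cost = pay_as_you_go_monthly * months
    revenue = service_revenue_monthly * months
    print(f"{months:>3} months: perpetual={perpetual_license:7.0f}  "
          f"pay-as-you-go={payg_cost:7.0f}  revenue={revenue:7.0f}")

# With these numbers the recurring model costs less through roughly the
# first two years and more thereafter: the MSP avoids first cost and
# matches expense to revenue, while a large operator that keeps the
# service running for years ends up paying more in total.
```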

What does move the ball?  The other most successful NFV application to date is mobile infrastructure.  Operators are already dependent on mobile services for profits, ARPU, and customer growth.  There’s more change taking place within the mobile network than anywhere else, and it’s easier to drive new technology into a network when you’re investing in it anyway.

Virtual mobile infrastructure involves virtualizing IMS (the core registration and service control technology), EPC (the metro traffic management and mobility management piece), and of course the radio access network.  We’ve seen announcements in all of these areas, from players like Alcatel-Lucent (vIMS), Ericsson (vEPC), and ASOCS (vRAN, in partnership with Intel).

There’s a lot of money, a lot of potential, in virtualizing mobile infrastructure.  The problem from an NFV perspective is that mobile services are multi-tenant, which means that you generally deploy them and then keep them running forever.  Yes, you need operational support for high availability and performance, but you are really building a cloud application in the mobile infrastructure space and not an NFV application.

Despite the lack of dynamism in virtual mobile infrastructure (vMI), the larger operators tend to accept it as the priority path to NFV.  That’s because vMI is large in scale, both in geographic and technology terms.  It touches enough that if you can make it work, you can infer a lot of other things will also work.  And because operationalization is a big part of a vMI story, that could lead to broad operations transformation.  Operators believe in that.

Here’s what operators say they are facing when they enter their fall planning cycle: we have proved that NFV works, in the sense that we have proved you can deploy and connect virtual functions to build pieces of services.  We have proved that NFV can be useful in vCPE and vMI, but we haven’t proved it’s necessary for either one.  Yet carriers have invested millions in NFV, and it’s a major focus of standards-writers and technologists.  There is a lot of good stuff there, sound technology and the potential for an arresting business case.  We just don’t know what that business case is yet.

The plethora of PoCs and trials isn’t comforting to CFOs because it raises the risk of having a plethora of implementations, the classic silo problem.  We have no universal model of NFV, or of any new and different future network.  It’s a risk to build toward the goal of a new infrastructure through diverse specialized service projects when you don’t know whether these projects will add up to a cohesive future-network vision.  It’s particularly risky when we don’t have any firm specifications to help realize service agility or operations efficiency benefits—when those are the benefits operators think are most credible.

What operators are even now asking is whether they can start investing in any aspect of NFV with the assurance that their investment will be protected if NFV does succeed in broadening its scope.  Will we have “NFV” in the future, or a bunch of NFV silos, each representing a service that works in isolation but can’t socialize?  This is the question I think is already dominating CFO/CEO planning, where one executive called it “suffering the death of a thousand benefits”.  It will shortly come to dominate the NFV proofs, tests, and trials because it’s the question that the fall technology planning cycle has to answer if the 2016 budgets are to cover expanded NFV activity.

I believe this question can be answered, and actually answered fairly easily and in the affirmative.  There are examples of effective NFV models broad enough to cover the required infrastructure and service critical masses.  There are examples of integration techniques that would address how to harmonize diverse services and even diverse NFV choices.  We don’t need to invent much here.  I believe that a full, responsive-to-business-needs, NFV infrastructure could be proved out in six months or less.  All we have to do is collect the pieces and organize them into the right framework.  Probably a dozen different vendors could take the lead in this.  The question for this fall, I hope, won’t be “will any?” but “which one?”