Virtual CPE for NFV: Models, Missions, and Potential

There is no doubt that virtual CPE is the most populist of the NFV applications.  There are more vendors who support it than any other NFV application, and more users who could potentially be touched by it.  vCPE is also arguably the most successful NFV application, measured by the number of operators who have actually adopted the model and offered services.  So how far can it go, and how would it get there?

All the popular vCPE applications are based on connection-point services, meaning services offered to users ancillary to their network connections.  Things like firewalls, NAT, IPsec, and similar higher-layer services are the common examples.  These services have the advantage of being almost universally in demand, meaning that in theory you could sell a vCPE service to anyone who used networking.  Today, they’re provided either through an access device like an Internet router, or in the form of discrete devices hooked to the access device.

While all vCPE applications involve hosting these connection-point functions rather than ossifying them in an appliance, they don’t all propose the same approach.  Two specific service models have emerged for vCPE.  One, which I’ll call the edge-hosted model, proposes to host the vCPE-enabling virtual network functions on devices close to or at the customer edge.  This group includes both vCPE vendors (Overture, RAD, etc.) and router vendors who offer boards for VNF hosting in their edge routers.  The other, the cloud-hosted model, would host VNFs in a shared and perhaps distributed resource pool some distance from the user connections.  That model is supported by server or server platform vendors and by NFV vendors with full solution stacks that include operations support and legacy device support.
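To make the distinction concrete, here is a minimal sketch, in Python, of how the same connection-point service might be described under either model.  The class and field names are purely illustrative assumptions, not drawn from any vendor's or ETSI's data model; the point is that only the hosting target differs.

```python
# Hypothetical sketch: one service definition, two hosting models.
from dataclasses import dataclass
from typing import List

@dataclass
class VCPEService:
    customer_id: str
    functions: List[str]    # e.g. firewall, NAT, IPsec
    hosting_model: str      # "edge" = on-premises box, "cloud" = shared pool
    hosting_site: str       # the edge device ID or the resource-pool name

edge_offer = VCPEService("acme-001", ["firewall", "nat", "ipsec"], "edge", "cpe-box-7741")
cloud_offer = VCPEService("acme-001", ["firewall", "nat", "ipsec"], "cloud", "metro-pool-east")

print(edge_offer)
print(cloud_offer)
```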

The edge-hosted model of vCPE can generate some compelling arguments in its favor.  Most notably, something almost always has to be on the customer premises to terminate a service.  For consumer networking, for example, the user is nearly certain to need a WiFi hub, and SMBs often need the same.  Even larger user sites will normally have to terminate the carrier service on something to provide a clean point of management hand-off.  Given that some box has to be there, why not make the box agile enough to support a variety of connection-point services by hosting them locally?  This approach seeks to magnify service agility and eliminate truck rolls to change or add premises appliances when user feature needs change.

For many CFOs, the next-most-compelling benefit of edge-hosted vCPE is the synchronized scaling of revenue and cost.  If you sell a customer an edge-hosted strategy, you send the customer an edge box.  You incur cost, but you have immediate revenue to offset it.  Cloud-hosting the same vCPE would mandate building a resource pool, which means that you’re fronting considerable capex before you earn a single dollar (operators call this first cost).  The more broadly you market the service to prospects, the more useful this first-cost control is, because the size of that initial resource pool is determined by the geographic breadth of the prospect base; you have to spread hosting points to be at least somewhat close to the users, or network cost and complexity will drive your business case into the dust.
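A rough back-of-envelope calculation shows why CFOs care.  Every number below is invented for illustration, but the shape of the result is the point: edge-hosting stays cash-positive at low penetration, while the pool's first cost only pays off at scale.

```python
# Illustrative first-cost arithmetic only; all figures are hypothetical.
# Edge-hosted: each sale incurs one box cost, immediately offset by that customer's revenue.
# Cloud-hosted: the resource pool is paid for up front, before any revenue arrives.

EDGE_BOX_COST = 400.0          # assumed cost of one agile edge device
POOL_FIRST_COST = 250_000.0    # assumed capex to seed a regional resource pool
MONTHLY_REVENUE = 60.0         # assumed per-customer service revenue
CLOUD_COST_PER_CUST = 10.0     # assumed monthly shared-hosting cost per customer

def cumulative_cash(customers: int, months: int, edge: bool) -> float:
    revenue = customers * MONTHLY_REVENUE * months
    if edge:
        return revenue - customers * EDGE_BOX_COST
    return revenue - POOL_FIRST_COST - customers * CLOUD_COST_PER_CUST * months

for n in (100, 1_000, 5_000):
    print(n, "customers after 12 months:",
          "edge", round(cumulative_cash(n, 12, edge=True)),
          "cloud", round(cumulative_cash(n, 12, edge=False)))
```

With 100 customers the edge approach is already in the black while the pool is deep underwater; by 5,000 customers the shared pool is clearly the cheaper path, which is exactly the tension between this paragraph and the shared-economy argument later on.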

The next plus for edge-hosting is that management is simplified considerably.  Both the customer and the customer-service people are used to managing real devices that perform the connection-point functions.  With an edge-hosted vCPE strategy, the platform software in the edge host can make the collection of functions look like a real device, and that’s simple to do.  There are no distributed, invisible, complicated network/hosting structures whose state must somehow be related to the functional state of the user’s connection-point service.  There’s no shared hosting to consider in SLAs.  All the stuff needed is in the same box, dedicated to the customer.
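Here is a sketch of what that roll-up might look like, assuming (hypothetically) that the edge platform simply collapses the status of its local functions into a single device-like status for the customer view.

```python
# Hypothetical sketch: no distributed state correlation, just a local roll-up
# of the functions hosted on the edge box into one "virtual device" state.

def device_status(function_states: dict) -> str:
    """Collapse per-function states on the edge box into one device-like status."""
    if all(state == "up" for state in function_states.values()):
        return "operational"
    if any(state == "down" for state in function_states.values()):
        return "degraded"
    return "unknown"

print(device_status({"firewall": "up", "nat": "up", "ipsec": "up"}))    # operational
print(device_status({"firewall": "up", "nat": "down", "ipsec": "up"}))  # degraded
```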

The final point in favor of edge-hosted vCPE, just now being raised, is that it considerably simplifies the process of deploying virtual functions.  Where does the customer’s VNF go?  On their edge device.  No complex policies, no weighing of hosting costs versus network connection costs.  There are twenty-page scholarly papers on how to decide where to put a VNF in a distributed resource pool.  What would implementing such a decision cost, and how would it impact the economies of shared resources?  Why not punt the question and put everything on the customer edge?
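For contrast, here is a toy version of the placement decision the edge model punts on.  The sites, costs, and weighting are invented for illustration and are nothing like a real placement algorithm, but they show the kind of trade-off a cloud-hosted model has to compute for every deployment.

```python
# Hypothetical sketch: weigh hosting cost against the network cost of hauling
# the customer's traffic to each candidate site, then pick the cheapest --
# or simply default to the customer edge and skip the whole exercise.

def place_vnf(candidates, edge_cost):
    """candidates: list of (site_name, hosting_cost, backhaul_cost) tuples."""
    best_site, best_cost = "customer-edge", edge_cost
    for site, hosting, backhaul in candidates:
        if hosting + backhaul < best_cost:
            best_site, best_cost = site, hosting + backhaul
    return best_site, best_cost

pool_sites = [("metro-dc-1", 4.0, 3.5), ("regional-dc-2", 2.5, 6.0)]
print(place_vnf(pool_sites, edge_cost=8.0))   # ('customer-edge' loses: metro-dc-1 at 7.5)
print(place_vnf(pool_sites, edge_cost=6.0))   # ('customer-edge', 6.0)
```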

The obvious argument against edge-hosted vCPE is the loss of shared-resource economies.  If we presume that network operators follow through on plans to create a large number of NFV data centers to host functions, the cost of hosting in these data centers would be lower than hosting on the customer premises.  In addition, the service could be made more resilient through resource substitution, which is difficult if your resource is hanging on the end of an access line.

According to operators, though, the big problem with edge hosting isn’t that it’s more expensive because you don’t share resources among users.  The big problem is that it’s totally non-differentiated.  You don’t even need NFV to do edge-hosted vCPE because you do little or nothing of the orchestration and optimization that the ETSI ISG is focused on.  Any credible vendor could offer edge-hosted vCPE by partnering with the VNF players, who, as we know, will partner with nearly anyone who’s vertical and not on life support.  Instant, infinite competition?  Who wants that?

This points to another problem, almost as profound.  It’s hard to see how basic edge-hosted vCPE leads anywhere.  If network functions virtualization has utility in a general sense, then it would make sense to pull through the critical elements of NFV early on, with your first successful applications.  How do you do that when your application doesn’t even need NFV?  And given that lack of pull-through, how do you ever get real NFV going?

Some of the smarter edge-hosted vCPE vendors recognize these issues and have addressed them, which can be done in two ways.  First, you could build real NFV “behind” your vCPE approach so that you could host interchangeably in the cloud.  That requires actually doing a complete NFV implementation; it’s what Overture does, and RAD announced an expansion of its own deployment and management elements just today.  Second, you could partner with somebody who offers complete NFV, which many of the edge-hosted vCPE players do.  Anything other than these approaches will leave at least some of the edge-hosting problems on the table.
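A minimal sketch of the “interchangeable hosting” idea, assuming a hypothetical orchestration entry point that can target either the customer edge or a shared pool from the same service definition:

```python
# Hypothetical sketch: the same request can land on the edge box or in the pool,
# so an operator can start edge-first and migrate functions as the pool builds out.

def host_function(function: str, prefer: str, pool_has_capacity: bool) -> str:
    # "edge" keeps first cost low at launch; "cloud" takes over as the pool fills out.
    if prefer == "cloud" and pool_has_capacity:
        return f"{function} -> shared cloud pool"
    return f"{function} -> customer edge box"

print(host_function("firewall", prefer="edge", pool_has_capacity=False))  # edge at launch
print(host_function("firewall", prefer="cloud", pool_has_capacity=True))  # migrate later
```

The code itself is trivial; the hard part, and the part that needs real NFV, is the orchestration and management machinery behind that single entry point.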

A hybrid approach of edge- and cloud-hosted vCPE is by far the best strategy, but that model hasn’t gotten the play I’d have expected.  The reason is sales traction.  Overture has a very credible NFV solution but has been a bit shy about promoting itself as a complete NFV source, and even though it has Tier One traction it’s not seen as being on the same level as an equipment giant.  The partnership players seem stalled on the question of how the sale is driven, and in what direction.  Many of the larger players who can make the overall NFV business case see edge-hosted vCPE as a useless or even dangerous excursion because they don’t make the edge gear themselves.

Overall, vCPE might be the only way that competing vendors can counter players like Alcatel-Lucent, who have an enormous VNF opportunity with mobile/content that they can ride into a large deployment.  Edge-hosted vCPE would let operators get started without a massive resource pool, and with proper orchestration and management elements it could at least be partially backed up or augmented with cloud hosting, even replaced as resource density in the cloud rises.  But it still depends on having some credible NFV end-game, and it’s still hard to deduce what even the best vendors think that is.