One of the ironies of NFV is that its greatest success may be coming from deployments that are actually not NFV at all. A part of this is due to normal market dynamics; you always try to pick the low apples first. Another part is due to the scope limitations I’ve blogged about before; holistic benefits demand a holistic solution, and standards-based NFV doesn’t cover enough ground yet. One interesting question is whether the current dynamic could help or hurt long-term NFV deployment.
Most of the publicized NFV service strategies are based on a CPE-hosted functions model, rather than the cloud-or-virtualization-hosted model that operators first envisioned and that the ISG is working to define. In the CPE-hosted model, a user is given a premises box that can be loaded with feature software from a central management system. This box then provides the “virtual functions” on demand; features can be updated to new versions, deleted when no longer needed, or augmented when conditions demand.
Operators tell me that the primary reason for this shift is the problem with “first cost” in NFV deployments. If you presume central hosting of VNFs in NFV you need something to host them on, and unless you want to hairpin traffic a considerable distance you’ll need those hosting points at least proximate to the points of user connection. For a network operator rolling out an NFV-based service over three or four or thirty or forty metro areas, this means an early commitment to multiple cloud data centers with enough servers to create suitable economy of scale. That cost will rack up immediately, and the operator will then have to wait while marketing and opportunity combine to create buyers, which is why it’s called “first cost”.
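The shape of the first-cost problem can be sketched in a few lines. The numbers below are purely hypothetical, chosen only to show why the two cost curves differ: central hosting pays for the full data center footprint before the first customer signs, while CPE hosting spends nothing until there is matching revenue.

```python
# Illustrative "first cost" comparison. All constants are assumptions for
# the sake of the example, not real operator economics.

DATA_CENTER_COST = 500_000   # assumed up-front cost per metro hosting point
METRO_AREAS = 4              # assumed initial service footprint
CPE_UNIT_COST = 1_500        # assumed cost of one CPE hosting box

def cumulative_cost_central(subscribers: int) -> int:
    """Central hosting: the whole data center build is paid up front,
    regardless of how many subscribers have actually signed on."""
    return DATA_CENTER_COST * METRO_AREAS

def cumulative_cost_cpe(subscribers: int) -> int:
    """CPE hosting: cost is incurred one box at a time, only as
    revenue-bearing customers are added."""
    return CPE_UNIT_COST * subscribers

if __name__ == "__main__":
    for subs in (0, 100, 1_000):
        print(subs, cumulative_cost_central(subs), cumulative_cost_cpe(subs))
```

At zero subscribers the central model has already sunk its entire build cost, which is exactly the gap operators have to carry while marketing catches up.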
When you use CPE hosting, you deploy a hosting point on the customer premises, and the cost is incurred only when you have compensating revenue. Costs continue to scale with revenues through the life of the service. CPE hosting also eliminates some of the problems associated with shared infrastructure. Security is easier to address because you don’t have multiple users sharing servers. There’s less chance of one user’s service impacting another’s, because no single server hosts components of both users’ services. Management is easier because there’s a real box that contains everything.
If you look a bit deeper into the CPE-first movement, you see it’s a reflection of those shifting value propositions for NFV. We started out saying that shared-server efficiencies would reduce capex, and clearly the CPE-driven approach wouldn’t have shared servers at all. That means the justification, the business case, has to come from reducing opex or improving the revenue flow. That’s good news for operators, because it suggests that the two NFV benefits that now seem most credible are in fact credible enough to carry the business case without any capex reduction at all.
CPE-first deployment really doesn’t need “NFV” at all, it needs only a management system to push software images into the CPE. We have other successes with “NFV” in IMS and EPC, but these are deployments of multi-tenant assets that are actually likely to look more like cloud computing mediated by NFV management than like the kind of per-user-and-service NFV everyone expects. The bad news for NFV, then, is that while the benefits are being proven we’re not really validating the full architecture.
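To make the point concrete, here is a minimal sketch of the kind of management system CPE-first deployment actually needs: a central registry that pushes named feature images to premises boxes and retires them on demand. The class and method names are hypothetical, purely illustrative; note there is no placement or resource-pool logic anywhere, because the hosting point is fixed by the customer’s own box.

```python
# Sketch of a CPE image-push management system (hypothetical names).
# There is deliberately no VNF placement, scaling, or pool-selection
# logic here: the CPE box IS the hosting point.

class CpeDevice:
    """A customer-premises box that can load and unload feature images."""
    def __init__(self, device_id: str):
        self.device_id = device_id
        self.features: dict[str, str] = {}  # feature name -> image version

    def load(self, feature: str, image_version: str) -> None:
        self.features[feature] = image_version

    def unload(self, feature: str) -> None:
        self.features.pop(feature, None)

class CpeManager:
    """Central management system: pushes software images to registered
    CPE devices and retires them when no longer needed."""
    def __init__(self):
        self.devices: dict[str, CpeDevice] = {}

    def register(self, device: CpeDevice) -> None:
        self.devices[device.device_id] = device

    def push(self, device_id: str, feature: str, version: str) -> None:
        self.devices[device_id].load(feature, version)

    def retire(self, device_id: str, feature: str) -> None:
        self.devices[device_id].unload(feature)
```

A service turn-up is then just `manager.push("cust-001", "firewall", "2.1")`, which is why this model works fine without the broader NFV architecture behind it.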
If you look at the NFV specifications, you see a significant amount of work put into the details of picking the right place to host stuff, into creating high levels of availability, and so forth. Tuning, in short. If NFV is either about CPE-hosted VNFs or largely persistent multi-tenant VNFs, then are these microtunings useful? We’re not making a resource pool selection in most cases. That suggests that we may be paying too much attention to optimizing something that’s not currently proving to be important at all.
The big question, of course, is whether we can get to NFV deployment in a more “as-we’ve imagined” sense from what’s actually happening. It’s a hard question to answer because you can look at the evolution two different ways.
If vendors who support either of the current strategies are capable of delivering centrally hosted NFV and supporting the standards, and if they are not tempted to under-commit to traditional NFV implementation points by the fact that they are making money doing something different, then everything we have going on could grease the skids for NFV progress. If those vendors focus on the limited needs of their early service successes, then the broader features of NFV may end up becoming at best “options” to be offered later or at an additional cost. We’d create a pseudo-NFV, or even (as some of my blog readers have suggested recently) a whole bunch of walled garden strategies because we don’t need the NFV standards much at all.
It seems to me that the answer to what happens in NFV evolution is going to come from the way that the NFV business case is made. The current successes, if you look at them in benefit-harnessing terms, are successful because they address a special case of something. The CPE-hosted approach, for example, addresses service agility in the context of business users whose “agility” needs focus on connection-point features and whose service value is high. The IMS/EPC examples address operations improvements in a multi-tenant service set that simplifies the operations changes needed, versus per-user-per-service deployments.
Open, multi-vendor, revolutionary NFV has to be more pervasive than our current applications. I think we’ve validated the notion that operations efficiencies and service agility can justify something that’s at least NFV-like if not a full NFV deployment. We have to go the rest of the way, which means that we have to be able to exploit the benefits of service agility and operations efficiency more broadly. IMHO, this is what all the NFV efforts should be directed at doing.
It’s an “enoughness” problem. It’s not that we can’t improve operations efficiency today, but that we can’t improve enough of it in scope terms, or improve it enough in terms of cost impact. In theory we could get there by expanding NFV’s scope into operations orchestration and legacy equipment, but I’m not sure we have time for that gradualism. We need a broader trigger for NFV opportunity, one that exposes benefits on a broad scale but can still be implemented at least somewhat gracefully. That trigger is probably IoT.
Something like IoT, truly and fully modeled as an NFV application and supported with a credible set of products, is the key to NFV’s broad success because such a service set would be a model for broad NFV deployment as well as a source of near-term drivers. So what we may be proving here is that a truly comprehensive IoT implementation is going to be the thing that gets NFV moving, that keeps it from becoming nothing more than a series of vague specifications to guide specialized deployments.
We have to do more with NFV to get more from it. We also have to get out of gardens or we’ll inevitably get walled in.