You can certainly tell from the media coverage that progress on NFV isn’t living up to press expectations. That’s not surprising on two fronts: first, press expectations border on an instant-gratification fetish that nothing could live up to, and second, the transformation of a three-trillion-dollar industry with average capital cycles of almost six years won’t happen overnight. The interesting thing is that many operators were just as surprised as the press has been at the slow progress. Knowing more about their perceptions might be a great step toward getting NFV going, so let’s look at the views and the issues behind them.
In my recent exchanges with network operator CFO organizations, I found that almost 90% said that NFV was progressing more slowly than they had hoped. That means that senior management in the operator space had really been committed to the idea that NFV could solve their declining profit-per-bit problems before the critical 2017 point when the figure falls low enough to compromise further investment. They’re now concerned it won’t meet their goals.
Second point: The same CFO organizations said that their perception was that NFV progress was slower now than in the first year NFV was launched (2013). That means that senior management doesn’t think that NFV is moving as fast as it was, which means that as an activity it’s not closing the gap to achieving its goals.
Third point: Even in the organizations that have been responsible for NFV development and testing, nearly three out of four say that progress has slowed and that they are less confident that “significant progress” is being made on building a broad benefit case.
Final point: Operators are now betting more on open-source software and operator-driven projects than on standards and products from vendors. Those CFO organizations said that they did not believe they would deploy NFV based on a vendor’s approach, but would instead be deploying a completely open solution. How many? One hundred percent. The number was almost the same for the technologists who had driven the process. Operators have a new horse to ride.
I’m obviously reporting a negative here, which many vendors (and some of my clients) won’t like. Some people who read my blog have contacted me to ask why I’m “against” NFV, which I find ironic because I’ve been working to make it succeed for longer than the ETSI ISG has even existed. Further, I’ve always said (and I’ll say again here and now) that I firmly believe a broad business case can be made for NFV deployment. I’ve even named six vendors who can make it with their own product sets. But you can’t fix a problem you refuse to acknowledge. I want to fix it, and so I want to acknowledge it.
The first problem was that the ETSI ISG process was an accommodation to regulatory barriers to operators working with each other to develop stuff. I’ve run into this before; in one case operator legal departments literally said they’d close down an activity because it would be viewed as regulatory collusion as it was being run. The collusion issue was fixed by absorption into another body (dominated by vendors) but the body never recovered its relevance. That also happened with NFV, run inside ETSI and eventually dominated by vendors.
The second problem was that standards in the traditional sense are a poor way to define what has to be a software structure. Software design principles are well established; after all, every business lives or dies on successful software. These principles have to be applied by a software design process, populated by software architects. That didn’t happen, and so we have what’s increasingly a detailed software design created indirectly and without any regard for what makes software open, agile, efficient, or even workable.
The third problem was that you can’t boil the ocean, and so early NFV work focused on two small issues—did the specific notion of deploying VNFs to create services work at the technical level, and could that be proved for a “case study”? Technical viability should never have been questioned at all, because we already had proof from commercial public cloud computing that it worked. Case studies are helpful, but only if they represent a microcosm of the broad targets and goal sets involved in the business case. There was never an attempt to define that broad business case, and so the case studies turned into proofs of concept that were totally service-specific. No single service can drive infrastructure change on a broad scale.
All of this is what’s generated the seemingly ever-expanding number of “open” or “open-source” initiatives. We have OPNFV, ONOS, OSM, OPEN-O, and operator initiatives like ECOMP from AT&T. In addition, nearly all the vendors who have NFV solutions say their stuff is at least open, and some say it’s open-source. The common thread here is that operators are demanding effective implementations, have lost faith that vendors will generate them on their own, and so are working through open-source to do what their legal departments wouldn’t let them do in a standards initiative.
The open-source approach is what should have been done from the first, because in theory it can be driven by software architecture and built to address the requirements first, in a top-down way. However, software design doesn’t always proceed as it should, and so even this latest initiative could fail to deliver what’s needed. What’s necessary to make that happen? That’s our current question.
The goal now, for the operators and for vendors who want NFV to succeed, is to create an open model for NFV implementation and back that model with open-source implementations. That model has to have these two specific elements:
- There must be an intent-model interface that identifies the relationship between the NFV MANO process and OSS/BSS/NMS, and another that defines the “Infrastructure Manager” relationship to MANO.
- There must be a Platform-as-a-Service (PaaS) API set that defines the “sandbox” in which all Virtual Network Functions (VNFs) run, and that provides linkage between VNFs and the rest of the NFV software.
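Neither of these interfaces is actually defined anywhere yet, so to make the idea concrete, here is a minimal and purely illustrative sketch. Every class, method, and parameter name below is my own assumption about what an intent-model interface might look like, not anything from the ETSI specs or any open-source project:

```python
from abc import ABC, abstractmethod

class IntentModel(ABC):
    """Hypothetical intent-model interface: the caller (OSS/BSS/NMS or
    MANO) states *what* state a service should be in, and never sees
    *how* the implementation underneath realizes it."""

    @abstractmethod
    def set_state(self, service_id: str, desired_state: dict) -> None:
        """Ask the underlying implementation to realize desired_state."""

    @abstractmethod
    def get_state(self, service_id: str) -> dict:
        """Report the service's current state back to the caller."""

class InMemoryIntentModel(IntentModel):
    """Toy realization for illustration only: it just records the
    requested state rather than deploying anything."""

    def __init__(self):
        self._services = {}

    def set_state(self, service_id: str, desired_state: dict) -> None:
        self._services[service_id] = dict(desired_state)

    def get_state(self, service_id: str) -> dict:
        return self._services.get(service_id, {"status": "unknown"})
```

The point of the sketch is the boundary, not the bodies: the same `IntentModel` surface could front an Infrastructure Manager, a vendor’s MANO stack, or a whole service, which is what makes the pieces interchangeable.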
There are three elements to NFV. One is the infrastructure on which stuff is deployed and connected, represented by an infrastructure manager (IM in my terms; VIM, for “virtual infrastructure manager,” in the ETSI ISG specs). One is the management and orchestration component itself, MANO, and one is the VNFs. The goal is to standardize the functionality of these three things and to control the way they connect among themselves and to the outside. This is critical in reducing integration issues and providing for open, multi-vendor implementations.
We can’t simply collect the ETSI material into a set of specs to define my three elements and their connections; the details don’t exist in the ETSI material. This puts anything that’s firmly bound to the ETSI model at risk of being incomplete. While an open-source implementation could expose and fix the problems, it’s not totally clear that any do (ONOS, CORD, and XOS among the open groups, or ECOMP for operators, seem most likely to be able to do what’s needed).
Vendors have to get behind this process too. They can do so by accepting the componentization I’ve noted, and by supporting the intent models and PaaS, by simply aligning their own implementations that way. Yes, it might end up being a pre-standards approach, but the kind of API-and-model structure I’ve noted can be transformed to a different API format without enormous difficulty—it’s done in software so often that there’s a process called an “Adapter Design Pattern” (and some slightly different but related ones too) to describe how it works. The vendors, then, could adapt to conform to the standards that emerged from the open-source effort. They could also still innovate in their own model if they wanted, providing they could prove the benefit and providing they still offered a standard approach.
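The Adapter pattern mentioned above is standard software practice, and it’s worth seeing how little it demands of a vendor. The sketch below is hypothetical throughout—the “vendor” API and the “standard” API are both invented for illustration, not taken from any real product or spec:

```python
class VendorOrchestrator:
    """Stand-in for a vendor's proprietary orchestration API
    (hypothetical)."""

    def provision(self, blueprint_xml: str) -> str:
        # Pretend to deploy from the vendor's native blueprint format
        # and return the vendor's own deployment identifier.
        return "vendor-id-123"

class StandardMano:
    """The open interface the industry converges on (also hypothetical)."""

    def deploy_service(self, service_model: dict) -> str:
        raise NotImplementedError

class VendorAdapter(StandardMano):
    """Adapter: presents the open interface, translates each call into
    the vendor's native one."""

    def __init__(self, vendor: VendorOrchestrator):
        self._vendor = vendor

    def deploy_service(self, service_model: dict) -> str:
        # Convert the open service model into the vendor's blueprint
        # format, then delegate to the proprietary API.
        blueprint = "<blueprint name='{}'/>".format(service_model["name"])
        return self._vendor.provision(blueprint)
```

If the open-source effort later changes the standard API, only the adapter layer has to change—the vendor’s own implementation underneath is untouched, which is why pre-standards alignment is a modest bet for vendors.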
This open approach isn’t essential in making the business case for NFV. In some respects, it’s an impediment because it will take time for any consensus process to work out an overall architecture that fits (in general) my proposed model. A single-vendor strategy could do that right now—six of them, in fact. The problem is that vendors have lost the initiative now, and even if they got smart in their positioning it’s not clear that they could present a proprietary strategy that had compelling benefits. They need an open model, a provable one. That’s something that even those six might struggle a bit with; I don’t have enough detail on about half of the six to say for sure that they could open theirs up in a satisfactory way. All of them will need some VNF PaaS tuning.
I think that it is totally within the capabilities of the various open-source organizations to solve the architecture-model problem and generate relevant specs and APIs, as well as reference implementations. It is similarly well within vendor capabilities to adopt a general architecture to promote openness—like the one I’ve described here—and to promise to conform to specific standards and APIs as they are defined. None of this would take very long, and if it were done by the end of the summer (totally feasible IMHO) then we’d remove nearly all the technical barriers to NFV deployment. Since I think applying the new structure to the business side would also be easy, we’d quickly be able to prove a business case.
Which is why I think this impasse is so totally stupid. How does this benefit anyone, other than perhaps a few vendors who believe that even if operators end up losing money on every service bit they carry they’ll sustain their spending or even grow it? A small team of dedicated people could do everything needed here, and we have thousands in the industry supposedly working on it. That makes no sense if people really want the problem solved.
My purpose here is to tell the truth as I see it, which is that we are threatening a very powerful and useful technology with extinction for no reason other than a stubborn refusal to face reality. NFV can work, and work well, if we’re determined to make that happen.