Is Tech Wilting Under Unrealistic Expectations?

There was a thoughtful piece on NFV in Light Reading last week, and it raises a lot of points that are becoming important as awareness of the carrier cloud opportunity grows in the industry.  The big point the article makes is one of unrealistic expectations, and surely in our hype-driven tech world, that’s a problem.  I do think that there’s a deeper problem lurking behind the hype screen, though.

There isn’t a single technology in our industry that doesn’t have unrealistic expectations associated with it.  I used to joke that in the media, everything had to be the single-handed savior of western civilization or the last bastion of international communism.  Anything less extreme was too hard to write about.  When something new like NFV comes along, it gets the full “savior” treatment, so we’ve surely over-hyped NFV, and I totally agree with the article’s conclusion in that sense.

The key part of the article opens with a comment that service lifecycle automation is becoming a big part of nearly all the network revolutions, then says “Still, I am hoping we don’t make the same mistake with automation as we did with NFV and saddle it with unrealistic implementation objectives. What I mean is the theme that NFV has not lived up to expectations is still making the rounds based on carrier implementation frustrations.”

This is where I think we have to transition from “expectations” in the hype sense to “expectations” in the goal sense.  NFV is like any technology; it depends on a business case for what it proposes to do.  There’s a lot wrong with living up to hype (like, it’s impossible), but living up to the goals set for a technology is never unrealistic.  Much of the hype surrounding NFV was never linked to any real business case, any specific goal of the NFV ISG.  However, the NFV ISG still has to propose a technology evolution that meets some business case, and it’s there that I see a problem that goes beyond over-hyping.

I totally agree that operators are frustrated with their NFV experience, but in my view the problem isn’t unrealistic implementation objectives in the “goals” sense.  The fact is that NFV didn’t have enough implementation objectives, which is why service lifecycle automation is now seen as so critical.

The very first US meeting of the NFV ISG was in the spring of 2013, and I attended that meeting.  I commented fairly often on what I saw as the important issues, but the one I was most concerned about was the lack of a true “end-to-end” vision.  Service management is critical to achieving any kind of operations efficiency, and the NFV ISG decided to rule it, and in fact the management of anything other than VNFs, out of scope.  That was the single worst decision in the whole NFV process, because it disconnected NFV from the service lifecycle automation framework that was essential not only to improve opex but to ensure that NFV complexity didn’t end up adding more cost than it saved.

NFV also has a problem architecturally.  The original architecture, a functional end-to-end description, encouraged a literal interpretation in early implementations, and the result was almost a batch system, a big software monster that was not only very complex but very brittle.  It offered little opportunity to define new services or functions through a simple data model, and that makes the architecture unrealistic.  We proposed to build NFV management and orchestration processes the way we built OSS/BSS systems a couple of decades ago, and that’s the wrong approach (the right one is a microservice-based, model-driven, state/event system).
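
To make that last point concrete, here’s a minimal sketch of what a model-driven, state/event lifecycle process might look like.  The states, events, and handler names here are my own illustrative assumptions, not anything drawn from the ETSI specs:

```python
# A minimal sketch of a model-driven, state/event service lifecycle handler.
# All states, events, and handler names are illustrative assumptions, not
# anything taken from the ETSI NFV specifications.

from enum import Enum, auto

class State(Enum):
    ORDERED = auto()
    DEPLOYING = auto()
    ACTIVE = auto()
    FAULT = auto()

class Event(Enum):
    ACTIVATE = auto()
    DEPLOY_DONE = auto()
    ERROR = auto()
    REPAIRED = auto()

def start_deploy(svc):
    print(f"deploying {svc['name']}")
    return State.DEPLOYING

def mark_active(svc):
    print(f"{svc['name']} is active")
    return State.ACTIVE

def open_fault(svc):
    print(f"fault on {svc['name']}, dispatching remediation")
    return State.FAULT

# Behavior attaches to (state, event) pairs; the service itself is just data.
TRANSITIONS = {
    (State.ORDERED, Event.ACTIVATE): start_deploy,
    (State.DEPLOYING, Event.DEPLOY_DONE): mark_active,
    (State.DEPLOYING, Event.ERROR): open_fault,
    (State.ACTIVE, Event.ERROR): open_fault,
    (State.FAULT, Event.REPAIRED): mark_active,
}

def handle(svc, event):
    handler = TRANSITIONS.get((svc["state"], event))
    if handler is None:
        return svc  # event not meaningful in this state; ignore it
    svc["state"] = handler(svc)
    return svc

service = {"name": "vpn-101", "state": State.ORDERED}
handle(service, Event.ACTIVATE)
handle(service, Event.DEPLOY_DONE)
```

The essential property is that the service model is just data and behavior attaches to state/event combinations, so adding a new service or function means adding a model, not rewriting a monolithic code path.  That’s exactly what the literal, batch-style implementations couldn’t do.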

That raises the next point in the piece I’d like to look at, which references the view that the issues with NFV are growing pains related to business practices and pricing.  “While this is not trivial, these growing pains indicate that NFV has achieved the technology maturity to be commercially deployed on a massive scale.”  Here I have to disagree totally.

First, absent a compellingly capable service lifecycle automation framework, it’s difficult to see how NFV could really deploy in any volume.  By the fall of 2013, six of the operators who originally launched the NFV ISG told me at a meeting that the original capex-reduction justification of NFV could not work because the cost reduction wasn’t enough.  “If I want a 25% reduction in capex, I’ll just beat Huawei up on price” was the statement (Huawei was there at the time, by the way).

Architecture is the second problem.  Because of the way NFV has been implemented, you would have to impose a data-driven, state/event model onto an implementation that was really focused in a different direction.  Nobody likes to go back and do something over, but we’re faced with a choice between doing NFV right, meaning optimally, and having it fall short.

I think the view that there are business practice and pricing problems associated with NFV stems from dogmatic clinging to the capex justification, a justification that persists because there’s no solution to the opex reduction challenge and because open integration doesn’t fall out of the NFV implementations.  There are a lot of people who seem to think that the market has to accommodate the technology choices.

Sure, if we could get vendors to give away software and support white-box devices with free integration, we could make it easier to substitute commodity technology for proprietary appliances.  But is it realistic for operators, who got into NFV to cut their spending on vendor technology, to then expect vendors to suck it up and make NFV work at their own financial expense?  Technology has to accommodate, has to optimize for, the market.

Early decisions were short-sighted in that area too.  I told the NFV ISG several times that if they wanted virtual network functions to be truly inexpensive, they needed to focus on open-source software as the basis for VNFs.  That would not only make VNFs “free” in open-source form, it would put pricing pressure on vendors who wanted to promote proprietary versions.  I also recommended “class” and “inheritance” modeling of VNFs as part of an intent-modeled, data-driven approach.  That way all “firewalls”, for example, could be deployed in the same way, further increasing competition (see the sketch below).  I don’t think this would have solved the problem that the lack of operations automation created for NFV, but it would have helped.
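
As an illustration of the class/inheritance idea, here’s a hedged sketch.  The Firewall class, its implementations, and every name in it are hypothetical, invented for this example rather than taken from any NFV specification:

```python
# A sketch of class/inheritance modeling for VNFs: any vendor's firewall is
# deployed through the same abstract "Firewall" class, so implementations
# compete inside a common intent model.  All names here are hypothetical.

from abc import ABC, abstractmethod

class VNF(ABC):
    """Abstract intent: what the function does, not how it's implemented."""
    @abstractmethod
    def deploy(self, host: str) -> None: ...

class Firewall(VNF):
    """Every firewall, open-source or proprietary, presents this interface."""
    def __init__(self, rules: list[str]):
        self.rules = rules

class IptablesFirewall(Firewall):
    """An open-source implementation of the Firewall class."""
    def deploy(self, host: str) -> None:
        print(f"pushing {len(self.rules)} iptables rules to {host}")

class VendorXFirewall(Firewall):
    """A hypothetical proprietary implementation; deployed identically."""
    def deploy(self, host: str) -> None:
        print(f"licensing and deploying VendorX image on {host}")

def deploy_firewall(fw: Firewall, host: str) -> None:
    # The orchestrator only sees the class, so implementations are
    # interchangeable and therefore price-competitive.
    fw.deploy(host)

deploy_firewall(IptablesFirewall(["allow tcp/443"]), "edge-host-1")
deploy_firewall(VendorXFirewall(["allow tcp/443"]), "edge-host-1")
```

The orchestration code never needs to know whose firewall it’s deploying, which is exactly the property that would have kept VNF pricing competitive.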

What I take to be the piece’s closing summary statement is “And let’s be honest, NFV, SDN and the cloud fabric they deliver are the future. Without this fabric, 5G wouldn’t be achievable in a few years. It’s unlikely the telecom industry would have designed and standardized a 5G next-gen core (NGC) and new radio (NR) in only two years without the foundations of NFV and SDN.”  It is absolutely true that carrier cloud is the future.  It is similarly true that SDN will play a role in connecting the data center elements of carrier cloud.  It is not true, at least based on my modeling, that NFV has any real role in driving carrier cloud deployment.  It’s a follower technology, something that could exploit carrier cloud if it were available, but not something that could justify it.

My model says that NFV could never hope to justify carrier cloud, even if you assume that somehow it could be linked into 5G deployment, as some propose.  NFV, for its own mysterious reasons, probably linked to taking an easily visualized path, focused almost from the start on the application of virtual CPE (vCPE).  It is very difficult to make vCPE useful beyond enterprise sites because of the price pressure of readily available small-site and home technology.  I can buy a superb WiFi router for a home or branch office for less than $300, and it would almost certainly last for at least five years.  That’s $60 per year, or five dollars a month.  Does anyone think we could deliver a vCPE solution for that?  And we need WiFi on the premises, not hosted in some remote cloud location, so the costliest element of that small-office/home device can’t be virtualized.
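
For the record, the back-of-envelope math behind that price ceiling works out as in this quick check (the figures are the ones I cited above, not market data):

```python
# Back-of-envelope check of the vCPE price ceiling cited above.
router_price = 300.0   # dollars, a good small-site WiFi router
lifespan_years = 5     # conservative useful life

yearly_cost = router_price / lifespan_years   # $60 per year
monthly_cost = yearly_cost / 12               # $5 per month
print(f"${yearly_cost:.0f}/year, ${monthly_cost:.0f}/month")
```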

I agree that 5G could be a driver for NFV, but here again we’ve got two problems.  First, 5G is a technology that needs a justifying business case in itself; the industry has no obligation to adopt a shred of it.  Second, we focused on the wrong kind of VNF for 5G.  vCPE is a per-customer, per-site technology, while anything that goes into a 5G network is going to be explicitly multi-user in nature.  You won’t deploy a VNF to make a call, so where’s the thinking about how multi-user elements of a service could be deployed and sustained with NFV?  Would a “VNF” for 5G really be a VNF at all, or would it simply be a cloud application that supports multiple users, as a web server does?  Surely the latter.

I completely agree with the sense of this piece; we’ve over-hyped NFV and expected too much from it, and we’re at risk of expecting too much from service lifecycle automation too.  However, I don’t think the problem is the goal as much as the mechanism.  Everything is going to be over-hyped in one sense, and we have to live with that reality of our industry.  Everything still has to make a rational business case for itself.  We fell short on NFV not because we had unrealistic, meaning unachievable, expectations, but because we failed to design an approach capable of achieving even the right and reasonable business case expectations.  Zero-touch service lifecycle automation and 5G have exactly the same problem, and the article is dead on in saying we need to fear that outcome.

What’s the real future of NFV?  I think that, like SDN, which will deploy significantly but not in the form originally defined by the ONF, we’ll find that most “NFV” stuff out there has nothing to do with the ETSI specs.  That’s not necessarily a bad thing; the market makes its own decisions, after all.  I do think that doing things right from the first would have taken us to a better place, and with less effort overall, than we’ll eventually reach with NFV.  Clearly zero-touch service lifecycle automation is going to follow the same path, and I’ve said often that 5G in NSA or millimeter-wave FTTN hybrid form will be the 5G most of us see.  Maybe these exercises in futility are necessary, but they just seem so wasteful.

We have been assuming that technologies are self-justifying and that everyone needs to get with the program and somehow make them work.  Not true.  Technologies aren’t even useful if they don’t present an optimum path to a set of goals that all the stakeholders can get behind, at least to the extent needed to play their role in the process.  We didn’t get that with NFV, we don’t seem (in my view) to be getting that with zero-touch service lifecycle automation, and we’re probably not getting it with 5G either.  There’s still time to make things better, but the lesson we need to learn is that optimality has to be designed in from the first.