Getting Beyond NFV Problem Recognition

The state of technologies like SDN and NFV is important, but it seems we can get to it only in little snippets or sound bites.  A couple of recent ones spoken at conferences come to mind.  First, AT&T commented that they wanted VNFs to be like “Legos” and not “snowflakes”, and then we had a comment from DT that you don’t want to “solve the biggest and most complex problems first.”  Like most statements, there are positives and negatives with both of these, and something to learn as well.

The AT&T comment reflects frustration with the fact that NFV’s virtual functions all seem to be one-offs, requiring almost customized integration to work.  That’s true, of course, but it should hardly be unexpected given that not only did NFV specifications not try to create a Lego model, they almost explicitly required snowflakes.  I bring this up not to rant on past problems, but to show that a course change of some consequence would be required to fix things.

What we need for VNFs (and should have had all along) is what I’ve called a “VNFPaaS”, meaning a set of APIs that represent the connection between VNFs and the NFV management and operations processes.  Yes, this set of APIs (being new) wouldn’t likely be supported by current VNF providers, but they’d provide a specific target for integration and would standardize the way VNFs are handled.  Over time, I think that vendors would be induced to self-integrate to the model.
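To make the idea concrete, here is a minimal sketch of what such a "VNFPaaS" surface might look like. Everything here is hypothetical illustration, not any real spec: the class and method names (`VnfLifecycle`, `deploy`, `heal`, `scale`, `status`) are invented to show the shape of the idea. The point is that the platform calls a standard set of APIs on every VNF, rather than each VNF embedding its own management logic.

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass


@dataclass
class VnfStatus:
    """Simple status record returned by every lifecycle call."""
    state: str          # e.g. "idle", "running", "failed"
    detail: str = ""


class VnfLifecycle(ABC):
    """Hypothetical VNFPaaS surface: the NFV platform calls these APIs.
    The VNF never reaches into NFV core elements or resources directly;
    all lifecycle interaction flows through this one contract."""

    @abstractmethod
    def deploy(self, params: dict) -> VnfStatus: ...

    @abstractmethod
    def heal(self) -> VnfStatus: ...

    @abstractmethod
    def scale(self, instances: int) -> VnfStatus: ...

    @abstractmethod
    def status(self) -> VnfStatus: ...


class DemoFirewallVnf(VnfLifecycle):
    """A toy VNF: any vendor implementing this surface is a 'Lego',
    because the platform handles every such VNF identically."""

    def __init__(self):
        self._state = VnfStatus("idle")
        self._instances = 0

    def deploy(self, params: dict) -> VnfStatus:
        self._instances = params.get("instances", 1)
        self._state = VnfStatus("running", f"{self._instances} instance(s)")
        return self._state

    def heal(self) -> VnfStatus:
        self._state = VnfStatus("running", "restarted")
        return self._state

    def scale(self, instances: int) -> VnfStatus:
        self._instances = instances
        self._state = VnfStatus("running", f"{instances} instance(s)")
        return self._state

    def status(self) -> VnfStatus:
        return self._state
```

With a contract like this, "self-integration" just means a vendor implementing four methods; the orchestration side never needs per-VNF customization.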

What we have instead is the notion of a VNF Manager that’s integrated with a virtual function and provides lifecycle services.  This model is IMHO not only difficult to turn into a Lego, it’s positively risky from a security and stability perspective.  If lifecycle management lives in the VNF itself, then the VNF has to be able to access NFV core elements and resources, which should never happen.  The approach ties details of NFV core implementation to VNF implementation, which in my view is why everything ends up being a snowflake.

An open, agile architecture for NFV always had three components—VNFs, infrastructure, and the control software.  The first of the three needed a very explicit definition of its relationship with the other two, and we didn’t get it.  We need to fix that now.

Snowflakes are also why the notion of not solving the biggest and most complex problems first is a real issue.  Yes, you don’t want to “boil the ocean” in a project, but you can’t ignore basic requirements because they’re hard to address without putting the whole solution at risk.  The architecture of a software system should reflect, as completely as possible, the overall mission.  If you don’t know what that mission is because you’ve deferred the big, complex problems, then you can end up with snowflakes, or worse.

What exactly is an NFV low apple?  You’d have to say based on current market attitude that it’s vCPE and the hosting of the associated functions on premises devices designed for that purpose.  There are a lot of benefits to this test-the-waters-of-NFV approach, not the least of which is the fact that the model avoids an enormous first-cost burden to the operator.  The problem is that, as it’s being implemented, the model really isn’t NFV at all.

There is no resource pool when you’re siting the VNFs in a CPE device.  The lifecycle management issues are minimal because you have no alternatives in terms of locating the function and no place to put it in the event of a failure.  You can’t scale in or out without violating the whole point of the premises-hosted vCPE model, because scaling would mean sending out another device to run in parallel.  Management issues are totally different because you have a real box that can become the management broker for the functions that are being hosted.

It’s also fair to say that the VNF snowflake problem is glossed over here, perhaps even caused here.  Nearly all the vendors who offer the CPE boxes have their own programs that integrate VNF partners.  That’s logical because the VNFs are really just applications running in a box.  Do these boxes have to provide a virtual infrastructure manager (VIM)?  Is it compatible with a cloud-hosted VIM?  Leaving aside the fact that we really don’t have a hard spec for the VIM overall, you can see that if vCPE hosting isn’t really even a hard-and-fast VIM-based approach, there’s little hope that we could avoid the flakes of falling snow.
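The compatibility question can be seen in a small sketch. This is an illustration under stated assumptions, not the ETSI VIM interface (which, as noted, lacks a hard spec): the names `Vim`, `allocate`, `release`, `CloudVim`, and `CpeVim` are all hypothetical. The point is that a cloud VIM can choose among pooled hosts while a CPE "VIM" has exactly one place to put anything, so unless both honor the same contract, every vCPE box becomes its own snowflake.

```python
from abc import ABC, abstractmethod


class Vim(ABC):
    """Hypothetical minimal VIM contract: the orchestrator asks for
    hosting capacity without knowing what sits behind the interface."""

    @abstractmethod
    def allocate(self, vnf_name: str) -> str: ...

    @abstractmethod
    def release(self, handle: str) -> None: ...


class CloudVim(Vim):
    """Backed by a shared resource pool: many hosts, dynamic placement."""

    def __init__(self, hosts):
        self._hosts = list(hosts)
        self._placements = {}  # handle -> host

    def allocate(self, vnf_name):
        # Pick the least-loaded host in the pool.
        host = min(self._hosts,
                   key=lambda h: sum(1 for v in self._placements.values() if v == h))
        handle = f"{vnf_name}@{host}"
        self._placements[handle] = host
        return handle

    def release(self, handle):
        self._placements.pop(handle, None)


class CpeVim(Vim):
    """Backed by a single premises box: no pool, no alternative placement,
    nowhere to redeploy on failure."""

    def __init__(self, box_id):
        self._box = box_id

    def allocate(self, vnf_name):
        return f"{vnf_name}@{self._box}"  # the only place it can go

    def release(self, handle):
        pass  # nothing to reclaim; the box is always "the pool"
```

If both hosting models presented this kind of common contract, the orchestrator above them could stay ignorant of which one it was using; as implemented today, they mostly don't.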

The other early NFV application, mobile infrastructure (IMS, EPC), has in a way the same problem from a different direction.  Some of the operators testing virtualized IMS/EPC admit that the implementations really look like a static configuration of hosted functions, without the dynamism that NFV presupposes.  If you think of a network of virtual routers, you can see that you could go two ways.  Way One is that you have computers in place to host software router instances.  Way Two is that you have a cloud resource pool in which the instances are dynamically allocated.  There’s a lot more potential in Way Two, but is the early applications’ attempt to avoid difficulty and complexity going to favor Way One?

For both the snowflake-avoiders and the difficulty/complexity-avoiders, we also have the specter of operations cost issues.  It’s hard to imagine how you could do efficient software automation of snowflake-based NFV; lifecycle tasks are embedded in VNFMs and their host VNFs, after all.  Does this then mean that all of the operations integration would also have to be customized by the resident VNFMs?  And surely operations automation is a major goal and a major complexity.  Can we continue to ignore it by assuming that dynamic virtual processes can be represented to OSS/BSS as static virtual devices?

I think we’re on the verge of doing with NFV what we have done with a host of other technical “revolutions”.  We start with a grandiose scope of goals and expectations.  We are stymied by the difficulty of defining and then justifying the models.  We accept a minimal step to test the waters, then we redefine “success” as the achievement of that step and forget our overall goals.  If that happens, then NFV can never touch enough services, customers, and infrastructure to have the impact those ten operators who launched it expected it to deliver.  Recognition of the problem is the first step in solving it, as they say.  It’s not the only step.

Ericsson’s Challenge: How They Got it Wrong, and How They’ll Need to Fix It

Everyone by now knows that Ericsson has issued a profit warning, and that many analysts and reporters are wondering whether Ericsson can survive in the long term.  I think it’s premature to call Ericsson a member of the walking dead, or even seriously wounded, but I also think that it might be helpful to look at the primary causes of Ericsson’s problems.  They’re not what you’ve read about.

It’s true that networking is commoditizing, and that operators are more and more concerned about keeping costs under control, since revenue per bit seems to be stuck in perpetual free-fall.  It’s also true that this puts pressure on sales and margins, and that it favors vendors who are known to be price leaders, like Huawei and ZTE.  But transformation favors those who can drive it, and there’s plenty of indication that operators are open to fairly radical changes in networking.  Commoditization, then, isn’t inevitable.

Ericsson, concerned about the commoditization of networking, made an early move to address the trend by focusing less on selling hardware and more on professional services.  In the near term, this focus would compel buyers to pay explicitly for things that other vendors might include with their hardware/software, but if hardware was heading for commoditization and software for open-source, it could be a darn smart move to shift to a service stance.  That could also play to the transformation interest among operators.

Which leads us to the first of those causal issues.  Ericsson anticipated a shift in the market that hasn’t happened as fast as expected.  SDN and NFV should have been poster children for a shift from traditional proprietary networking to commodity boxes and professional services, but both have been much more a media success than a real driver of change.  The truth is that Ericsson’s primary customers build networks much the same way now as they did five years ago.

Which tends to favor vendors with more equipment skin in the game.  Ericsson’s professional services numbers were decent; the drop came in their Networking unit, and was blamed on underperformance of wireless deals and (in a margin sense) too aggressive bidding for emerging-market deals.  The point is that Ericsson isn’t known as a broad equipment vendor, and that hurts you when buyers believe that those who offer more gear will also offer services at a better price.

It was incumbent on Ericsson to demonstrate they brought something to the table in SDN and NFV and transformation.  They have participated insightfully in the standards process for both areas, but transformation is more than standards; it’s using technology to solve profound business problems.  That should have been a big opportunity for Ericsson, and it hasn’t been.

Because of the oldest, biggest bugaboo for network vendors—marketing.  For most of network operator history, sales have been the ultimate example of “marketinglessness”.  Forget ads, web sites, branding.  You send your geeks to talk to the buyer’s geeks, you respond to RFPs that you’ve worked hard to wire in your favor, and you don’t expect to have to do a lot of creative singing and dancing.  The problem is that operators now recognize their challenges as being systemic rather than point-of-purchase in nature.  They don’t need a new box, they need a new approach.  That should have been perfect for Ericsson’s professional services slant, but like other Nordic networking giants, they just don’t know how to engage a broad (systemic) constituency.  It’s not sales there, it’s marketing.

Systemic positioning is especially critical for companies who don’t have highly visible product families whose names are household words.  It unifies what can otherwise be silos, and most important, it provides visibility for things that on their own don’t seem all that visible.  That such positioning can then tie in easily with buyer goals is another bonus, but that kind of success takes some serious effort and major insight.

One of my Ericsson friends told me years ago that to Ericsson, “marketing” meant “education”.  You told the buyer what they needed to know, which in this day and age is never going to work.  You have to inspire, not educate.  I think Ericsson figured that out this year, which is what led to their partnership with Cisco, the ultimate marketing machine.  The problem is that Cisco may be able to sing, but they aren’t necessarily going to sing Ericsson’s song.

Cisco as an incumbent equipment vendor isn’t particularly interested in either systemic revolutions in approach or technologies that obsolete current equipment.  Ericsson needs both those things to develop a strong professional services commitment.  I don’t think a Cisco deal is going to do either party a whole lot of good.

What should Ericsson have done, or more important, what should they do now?  The answer, I think, is simple.  The operators have a network transformation problem.  They need to forget the classic business of selling bits and sell something more directly useful, something that ties to buyer needs more explicitly.  Yes, some of that means going up the service stack, but it’s not the simplistic virtual-CPE junk that NFV has generated or the elastic-bandwidth model SDN advocates hype.  If you look at the architectures of operators like AT&T and Verizon, you see an attempt to model a new approach to services by framing a new model for infrastructure and service lifecycle management.  That’s what Ericsson should have come up with on its own, and should still work to support.

We have, in 5G, a transformation coming down the line in the very area where most capital dollars are going to be spent, and where change will be easier because you’re really adding new stuff and not just tweaking the old.  I’d argue that what 5G really represents is a model for mobile infrastructure transformation.  We need a similar model for wireline, and then we need technology elements that can support one or both of the transformations we’ve defined, elements like those of SDN and NFV.  I do not believe that either AT&T or Verizon have fully developed such a vision, much less defined an architecture to support it.  Could Ericsson?  Sure; they’re smart people.

Will they, though?  We spend a lot of time in this industry bemoaning the operators’ adherence to a Bell-Head culture, but what about the vendors?  Ossified buyers begat ossified sellers.  Can Ericsson recognize that if operators are going to do a different business, they’ll do business differently?  Forget education, concentrate on inspiring buyers to believe you have the answer and that you’re prepared to make it work.  It’s still not too late, but the signals that time is passing are now clearly visible.