I’ve noted in the past that it’s proven difficult to make a business case for NFV. Rather than address that point now, I propose to ask, “But what if one can be made?” Remember that while I’m unimpressed (to say the least) with efforts to paint a plausible picture to justify NFV deployment, I believe firmly that one could be drawn. In fact, at least three vendors, and possibly as many as five, could do that. In a competitive market, if one succeeds, others will jump in. Who gets the big bucks? Where will those bucks come from, in terms of classes of equipment? The last question is the easiest to answer, and from that answer the rest will likely flow.
No matter what proponents of CPE-hosting of VNFs say, NFV can’t succeed on that basis and in fact can’t generate a boatload of revenue in that space. Yes, we can expect to see customization of CPE to allow for hosting of features, remote management and updating, and even in some cases “offline” operation. That’s not where the majority of the money will be, though. It will help NFV bootstrap itself into the future, but the future of NFV is the cloud.
NFV would be, if fully successful, a source of a bit over 100 thousand data centers, supporting well over a million new servers. These will not be the traditional hyperscale cloud centers we hear about, though cloud data centers will surely be involved in NFV hosting and NFV principles will extend to influence operations practices in virtually all of them. What will characterize the NFV-specific data centers is distribution. Every metro area will have at least one, and probably an average of a dozen. This is the first distinguishing factor about NFV servers, the key to succeeding in the NFV server space.
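As a sanity check on those numbers, here’s a back-of-envelope sketch. The per-site server count is my own illustrative assumption, not a figure from the operators; it’s just the average implied by dividing one number by the other.

```python
# Back-of-envelope check of the deployment numbers above.
# Assumptions (illustrative, not sourced): ~100,000 distributed NFV
# data centers, averaging ~12 servers each -- small edge sites,
# not hyperscale cloud halls.
edge_data_centers = 100_000
avg_servers_per_site = 12  # assumed average per edge site

total_servers = edge_data_centers * avg_servers_per_site
print(f"{total_servers:,} servers")  # 1,200,000 -- "well over a million"
```

Even with a conservative dozen servers per site, the distributed model clears the million-server mark.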
The first and most numerous tier of NFV servers has to be placed proximate to the point of user attachment. That’s a point operators already agree on. If you try to haul traffic too far to connect virtual functions, you risk creating reliability problems in the connections alone, and you create a need for an expensive web of connectivity. Many operators expect to see a server for every central office and every cluster of wireless cells (generally, where SGWs might be located), and expect those servers to be connected by very fast fiber trunks so that intra-function communication is easy. For services with NFV in the data path, these trunks will become the traffic distribution elements of the future, so they’ll have to be both fast and reliable. So will the interfaces, and servers will have to be optimized to support a small number of very fast connections.
The NFV servers will be big, meaning that they’ll have a lot of CPUs/cores and a lot of memory. They’ll be designed for very high availability, and they’ll use operating system software that’s also designed for “carrier grade” operations. Yes, in theory you can redeploy alternative instances to achieve higher availability, but operators seem skeptical that this could substitute for high-availability servers; they see it as a way to supplement that feature.
Although there’s been a broad assumption that the servers would run VMs, the trend recently has been toward containers, for several reasons. First, many of the VNFs are per-user deployments and thus would probably not require an enormous amount of resources. Second, VNFs are deployed under tight control (or they should be) and so tenant isolation isn’t as critical as it might be in a public cloud. Finally, emerging NFV opportunities in areas like content and IoT are probably going to be based on “transient” applications loaded as needed and where needed. This dynamism is easier to support with containers.
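To make the “transient” point concrete, here’s a minimal Python sketch of per-user, load-on-demand VNF placement. Every name in it (`VnfOrchestrator`, `deploy`, `teardown`) is illustrative, not a real MANO or container API; a production system would be driving a container runtime at the points where this sketch just records instance names.

```python
# Minimal sketch of "transient" per-user VNF deployment with containers.
# Names here are hypothetical, not any real NFV MANO interface.
from dataclasses import dataclass, field

@dataclass
class VnfOrchestrator:
    # user_id -> list of lightweight container instances (one per VNF)
    active: dict = field(default_factory=dict)

    def deploy(self, user_id: str, vnf: str) -> str:
        """Spin up a per-user VNF container on demand.

        Containers make this cheap: startup is fast, and per-user
        instances don't need heavyweight VM-style isolation when
        deployment is under the operator's tight control.
        """
        instance = f"{vnf}-{user_id}"
        self.active.setdefault(user_id, []).append(instance)
        return instance

    def teardown(self, user_id: str) -> None:
        """Release all of a user's VNF containers when the service ends."""
        self.active.pop(user_id, None)

orch = VnfOrchestrator()
orch.deploy("alice", "vfirewall")
orch.deploy("alice", "vnat")
print(orch.active)   # {'alice': ['vfirewall-alice', 'vnat-alice']}
orch.teardown("alice")
print(orch.active)   # {}
```

The lifecycle shown here — create on attach, destroy on detach — is exactly the dynamism that’s awkward with full VMs and natural with containers.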
So who wins? The player everyone believes is most likely to benefit from NFV is Intel. Their chips are the foundation for nearly all the deployments, and the model of NFV I’m suggesting here would favor the larger chips over microserver technologies where Intel is less dominant. Intel’s Wind River Titanium Server is the most credible software framework for NFV. Intel is a sponsor of IO Visor, which I think will be a big factor in assuring foundation services for NFV. While I think Intel could still do more to promote the NFV business case, their support of NFV so far is obviously justified.
A tier up from Intel are the server vendors, and these divide into two groups—those who have foundation technology to build an NFV business case and those who have only infrastructure to exploit opportunities that develop elsewhere. Servers will be, by a long shot, the most-invested-in technology if NFV succeeds, which gives server vendors a seat at the head of the table in controlling deals. If there are deals to control, that is. HP is the only server vendor in that first group, and in fact the NFV vendor who is most likely to be capable of making a broad case for NFV with their current product line.
The fact that a server vendor could make the business case means to me that other server vendors’ positions with NFV are more problematic. If HP were to burst out with an astonishingly good positioning that included a solid operations story, they could win enough deals to look like a sure path to success for operators, in which case everyone else would have to catch up. In today’s NFV market, though, competitors would find it difficult to put together a great counter-story quickly, so a lot of momentum would be lost.
That defines the next level of winner, the “MANO player”. If you have an NFV solution that could form a key piece of an operations/legacy element orchestration story, a supplement to plain old OpenStack in other words, then you might get snapped up in an M&A wave by a server vendor who doesn’t have something of their own. However, the window on this is short. I think most NFV-driven M&A will be over by the end of 1H16.
VNF players are often seen as major winners, but I don’t think “major” will be the right word. It is very clear that operators prefer open strategies, which few VNFs support. I believe that operators also want either pure licensing or a “pay-as-you-go” arrangement that evolves into a fixed licensing deal. The VNF guys, by contrast, seem to think they can build a revenue stream with per-user, per-deployment fees. Given this mismatch, I think there will be only a few big VNF winners: firms who figure out how to make “VNF as a service” work to everyone’s advantage and who have magnet capabilities (mobile, content, collaboration) for which there are fewer credible open alternatives.
To me, this situation makes it clear that the most likely “winners” in NFV will be IT giants who have minimal exposure to traditional telco network equipment. They have so much to gain and so little to lose that their incentive for being a powerful mover will be hard to overcome. That said, every NFV player so far has managed to overcome a lot of incentive, and even to evade reality. That means a player with a powerful magnet concept like Alcatel-Lucent’s vIMS/Rapport or Oracle’s operations-driven NFV could still take the lead. We’ll have to see how things evolve.