Overture Tells a Complete NFV Story

I’ve been critical of the state of NFV in most of my posts, not because I’m opposed to it (I’m a big supporter) but because the criticism has been deserved.  There is an avalanche of NFV hype and nonsense out there, so much so that it’s rare to be able to say anything that’s not a criticism.  Which is why I’m happy today; I finally have something I can write favorable things about.

Overture Networks isn’t the most unlikely source of NFV insight you could find, but it’s certainly not central in the hearts and minds of the NFV aficionados.  They’re a small-ish Carrier Ethernet vendor who recently announced a CPE element that can host virtual functions.  There are probably a half-dozen of these, all of whom assert NFV credentials, but unlike the rest Overture has real substance.  In fact, they may have more substance than the NFV giants.

NFV has a lot of moving parts, but the functional heart of NFV is what the ETSI process calls “MANO”, or Management/Orchestration.  It’s MANO’s responsibility to deploy a service based on some form of instructions—call it a “model” or a “descriptor” or whatever.  When a service is ordered or when events dictate runtime changes to configuration of service components, it’s up to MANO to step in and orchestrate resources.  MANO is the most insightful contribution the NFV ISG has made, and MANO concepts are central to every one of NFV’s possible benefits.  Without MANO, NFV-wise, you have nothing.

The great majority of NFV stories around MANO boil down to having OpenStack support.  OpenStack isn’t even part of MANO in my view, it’s part of the Virtual Infrastructure Manager that rightfully belongs in NFV Infrastructure (NFVI).  You need something above the VIM to organize the end-to-end service setup, not just something to stick VNFs somewhere.  It will be a very long time before we have services that have no legacy elements in them (if ever) so you need some flexibility here.  Overture announced that last year with its Ensemble Service Orchestrator.  ESO is based on policies expressed in Drools and workflows in BPMN 2.0 (Business Process Model and Notation, an OMG specification).  These policies and workflows could be used to define services and service processes in detail, and at a higher level than OpenStack.  Overture, in fact, places OpenStack correctly as a VIM element in their presentation.
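To make the layering concrete, here’s a minimal sketch of what “something above the VIM” does: a service model that mixes virtual and legacy parts, walked by an orchestrator that hands each part to the right lower-level manager.  Every class and method name here is invented for illustration; none of it is Overture’s or OpenStack’s actual interface.

```python
# Minimal sketch of the layering described above: an end-to-end service model
# decomposed into parts, some handled by a VIM (e.g. OpenStack) and some by
# legacy-network provisioning.  All names are hypothetical.

SERVICE_MODEL = {
    "name": "business-vpn-with-firewall",
    "parts": [
        {"type": "vnf",    "image": "vFirewall-1.2", "site": "metro-east"},
        {"type": "legacy", "device": "ce-switch-17", "config": "evpl-200M"},
    ],
}

class HypotheticalVIM:
    """Stand-in for the Virtual Infrastructure Manager layer."""
    def deploy_vnf(self, image, site):
        print(f"VIM: booting {image} in {site}")
        return {"vm_id": f"vm-{image}", "site": site}

class HypotheticalNetworkController:
    """Stand-in for a generalized controller used for legacy elements."""
    def provision(self, device, config):
        print(f"Controller: pushing {config} to {device}")
        return {"device": device, "status": "configured"}

def orchestrate(model, vim, controller):
    """The layer above the VIM: walk the end-to-end model and dispatch
    each part to the right lower-level manager."""
    deployed = []
    for part in model["parts"]:
        if part["type"] == "vnf":
            deployed.append(vim.deploy_vnf(part["image"], part["site"]))
        else:
            deployed.append(controller.provision(part["device"], part["config"]))
    return deployed

if __name__ == "__main__":
    orchestrate(SERVICE_MODEL, HypotheticalVIM(), HypotheticalNetworkController())
```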

ESO gave Overture the distinction of being one of only three industry players that have met my tests for an NFV architecture that can effectively model services at both the functional and structural level.  They also announced a generalized network controller with ESO, so they could deploy on at least some legacy infrastructure.  However, they didn’t have a management story and so they’ve been a sort-of “Ohhhhh” provider rather than a “MAN…Oh” provider up to now.

“Up to now”, because they’ve now released a new capability that blends service definition and deployment with management.  It works in two parts: one is a set of advanced analytics applications, and the other (the Ensemble Service Intelligence piece) provides the framework that relates analytics results to resources and services and presents interfaces to management tools and other applications.

From Day One of NFV I’ve been an advocate of creating a repository intermediary between resources and “native” sources of management data and the management and operations tools and applications.  That’s at the heart of Overture’s approach.  ESI is a big-data repository populated by the totality of the resource MIBs (for devices, servers, platforms, and even VNFs) and also by service-to-resource relationships created during deployment.  They extend the basic repository notion with a series of management applications that derive additional intelligence through analytics.  Analytics also provides the service-to-resource correlation necessary to make service management explicit and not just implicit.
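A toy illustration of the repository idea, with an invented schema and thresholds, looks something like this: resource metrics and deployment-time service-to-resource bindings land in one store, and service state is derived by correlation rather than read from any one device.

```python
# Toy version of the repository approach described above.  The schema,
# metric names, and thresholds are illustrative assumptions only.

resource_metrics = {
    "server-42":   {"cpu_util": 0.91, "status": "up"},
    "vnf-fw-7":    {"cpu_util": 0.35, "status": "up"},
    "ce-switch-3": {"errors_per_min": 240, "status": "degraded"},
}

# Written at deployment time: which resources each service instance rides on.
service_bindings = {
    "svc-acme-vpn": ["vnf-fw-7", "ce-switch-3"],
    "svc-beta-vpn": ["server-42", "vnf-fw-7"],
}

def derive_service_state(service_id):
    """Make service management explicit: roll resource conditions up to the
    services that depend on them."""
    worst = "ok"
    for res in service_bindings[service_id]:
        m = resource_metrics.get(res, {})
        if m.get("status") == "down":
            return "down"
        if m.get("status") == "degraded" or m.get("errors_per_min", 0) > 100:
            worst = "degraded"
    return worst

for svc in service_bindings:
    print(svc, "->", derive_service_state(svc))
```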

In their presentation, Overture includes a service lifecycle process description that builds a service (through a nice drag-and-drop GUI) and then takes it through deployment and management, including autoscaling under load.  This is managed by one of those ESI applications, and the approach demonstrates the value of the repository/analytics approach to management integration.  It appears to me that ESI applications and management data could be used in conjunction with BPMN-described state-event workflows implementing state/event tables in Titan models.  That could allow Overture to integrate management and operations processes into the lifecycle, which would create event-driven management and operations, pretty much the holy grail of OSS/BSS/NMS.
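For readers who haven’t seen one, a state/event table of the kind I’m describing can be as simple as the sketch below; every state, event, and process name here is hypothetical, not Overture’s.

```python
# Minimal state/event table: each (state, event) pair names the lifecycle
# process to run and the next state to enter.  Purely illustrative.

STATE_EVENT_TABLE = {
    ("ordered",   "deploy_request"): ("deploying", "run_deployment_workflow"),
    ("deploying", "deploy_done"):    ("active",    "activate_billing"),
    ("active",    "overload"):       ("scaling",   "run_autoscale_workflow"),
    ("scaling",   "scale_done"):     ("active",    "notify_operations"),
    ("active",    "fault"):          ("degraded",  "open_trouble_ticket"),
}

def handle_event(state, event):
    """Event-driven lifecycle step: look up the transition, 'run' the process."""
    next_state, process = STATE_EVENT_TABLE.get((state, event), (state, None))
    if process:
        print(f"{state} + {event} -> run {process}, enter {next_state}")
    return next_state

state = "ordered"
for ev in ["deploy_request", "deploy_done", "overload", "scale_done"]:
    state = handle_event(state, ev)
```

The point of the table is that management and operations processes become handlers bound to transitions, which is what event-driven OSS/BSS/NMS amounts to.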

Overture also has a VNF ecosystem, and a trio of service-edge solutions ranging from an Overture-augmented server and a kind of “Dmarc-plus” Overture device to pure software.  Not surprisingly given Overture’s Carrier Ethernet positioning, they integrate these elements into NFV quite well, making the edge and the cloud both elements of NFVI and allowing VNFs to migrate from one place to the other as needed.  They have a decent number of VNFs available, more on the way.

There have been only three vendors who have shown me critical mass in an NFV platform—HP, IBM, and Overture (I think Alcatel-Lucent likely has a good solution but they’ve not provided collateral on the details so I can’t assess it fully).  Overture’s approach doesn’t have the legacy-network integration and OSS/BSS connection maturity offered by HP or the cloud affinity of IBM’s TOSCA-based approach.  But HP and IBM don’t have the same level of resource-to-service management coupling detail as Overture can provide.  What HP and IBM do have, though, is mass and buyer credibility.

Overture’s ESO/ESI adds up to an utterly fascinating NFV implementation, one so complete that you’d think it came from a network giant.  The fact that it doesn’t may be its only real limitation.  Overture has never seemed to push its “NFV strategy” as a general NFV strategy, preferring to see it as an extension of Carrier Ethernet.  They tell a great story to somebody who wants the whole story (their slide deck is fifty slides, the most information anyone has shared with me on NFV), but their positioning still seems to stop short of shouting MANO supremacy from the rooftops.  That I think would be out of their comfort zone.

That raises an interesting point because ordinarily a Carrier Ethernet packaging for VNFs and NFV tools would risk creating a silo, which everyone (including me) opposes.  In this case, you’d have to wonder whether instead of creating a silo, Overture is creating an on-ramp.  An operator with a strong Carrier Ethernet position and early opportunities to augment basic services with VNF-based security and other functional add-ons might conceivably start out with Overture’s ESO/ESI combination and their virtual endpoints and find out they could grow out of that position to broader services in which they’d never have seen Overture as a player.

Of course, an operator who doesn’t want to focus early field trials and deployment on Carrier Ethernet customers might find Overture a lot less appealing, and Overture might not be enthralled by the opportunities these non-Ethernet plays present either.  NFV is a consultative sell at best, and something has to pay for all that effort.  If at the end of the day the operator involved has little use for Overture hardware, will the pot be sweet enough for Overture to hang in?  So if ESO/ESI is an on-ramp, it’s not very well marked.

Somebody could always buy them, of course, and it’s also possible that somebody would step up to establish a partnership with Overture that flows value from small (Overture) to large rather than in the large-to-small direction represented by most of the VNF partnerships today.  Should we think of this as a MANO partnership?  The point is that there is really nothing out there quite like this.  I hate to paraphrase a line from a movie I didn’t even see, but these guys could be contenders.  With some work and positioning it could be a really great story in NFV overall, and in 2015 as I’ve said there are a lot of operators who need some of the stuff that ESO/ESI can provide.  At the least, this may inspire others to step up and tell an NFV story instead of erecting an NFV billboard pointing toward the nearest reporter.  Substance, by the second half of this year, is going to matter a lot, and substance we have here.

The “New TMF”, the “Old Ericsson”, and Kainotophobia

Out with the old, in with the new, as they say.  The TMF has a new head, Peter Sany, and he’s making statements that sound on target (see this interview in the New IP).  Ericsson struggled in revenues for the quarter as they’ve tried to contend with the capex trend in the carrier industry.  Transformations abound, which of course gives us lots of things to think about.  And apparently a lot of things to fear.  “Kainotophobia” is “fear of change” and it’s putting the kibosh on everyone’s transformation plans.

I’ve worked with the TMF in a lighthearted way for quite a while, and the body is certainly interesting.  The essential concept that John Reilly presented half-a-dozen years ago with “NGOSS Contract” was a factor in my own design for CloudNFV and ExperiaSphere, and the notions of “Customer-Facing” and “Resource-Facing” services were also ingredients in my model.  I’ve cited these TMF inspirations regularly and I put a lot of value on the thinking.  It proves to me that the TMF can do good stuff.

What it’s not been good at is what I think the market itself has proven unable to cope with, and that’s “revolution”.  Through the last ten years we’ve been seeing a fundamental deterioration of the business model of technology in general and of networking in particular.  I’ve cited some statistics on this—operators have seen opex climb from a quarter of TCO toward being two-thirds and heading north.  They’ve seen revenue per bit plummet at rates of 50% per year.  ARPU in nearly all market sectors is plateauing and in many markets the customer base is saturating.  We had all the evidence in the world for these shifts even ten years ago, but everyone ignored them.

It was seven or eight years ago that the issues really started to come to the fore.  Almost immediately operators started grousing that vendors were not supporting their transformation goals.  By 2008 every single operator in my survey was rating vendor support of transformation as “unsatisfactory”.  At about that same time, the TMF was absorbing another body that had been effectively focusing vendor and operator efforts to develop a different model of service operations.  The initiative (the IPsphere Forum) died in the TMF.

What was behind both vendor intransigence and TMF glaciation was trying to take an evolutionary view of revolution.  If you want to transform a technology or an industry you have to start with what you’re trying to get to, not where you are already.  If we want to revolutionize network infrastructure, paint a picture of what the ideal would look like.  Revolutions in opex have to start by defining the perfect system for automated operations.  From the goal, we can then follow tendrils of possibilities back toward the present and order approaches by cost and risk to pick one.

The TMF is a political body and an organism focused largely on its own survival, which is what vendors are as well.  They could have answered every single point that their new leader raised in that interview I cited as far back as 2008 and could have decisively addressed the points in 2013.  They didn’t do that not because they weren’t exposed to the right approach—they already had fielded the cornerstone strategies needed—but because they got tangled up in politics and weak leadership and shallow thinking.

Vendors are similarly tangled.  Do the big-name network vendors seriously think that operators would invest growing sums in infrastructure that was yielding diminishing ROI?  These are the guys who were cutting their own costs to make the Street happy.  If you want somebody to increase the “I” you have to increase the “R”.  As important as cost management is, it’s nothing more than a means of refining a positive revenue model.  Only benefit gains can ultimately fund profit growth and infrastructure investment.  Yet vendors have focused not on fixing their market problem but capitalizing on the symptoms.  No products to support transformation?  Ericsson’s answer was to focus on integration and professional services.  Well, there is no way that one-off solutions to ROI problems are going to cut it as far as the industry is concerned, Mr. Ericsson.  Products systematize solutions, lower investment overall, and improve profits.  Band-Aids simply blot up the blood.

We know today that services in the future will be highly personalized.  That has to be true because the consumer market is fad-driven and because mobile broadband has linked technology to moment-by-moment living and not to life planning.  We know that this means “agility” but what the heck does “agility” mean in a tangible way?  It means compositional architectures so it should start with very effective modeling.  Who talks about their service models in SDN or NFV?  It means event-driven processes but nobody talks about service states and state/event tables.  It means resource fabrics and process fabrics that combine to form the cloud, “information fields” created by things like IoT.  Where are the products for this?

If you read the stories of SDN and NFV and the cloud, you’d think we have already met and defeated the future.  The cloud is so entrenched in the media culture that people who are talking about private IT think that has to mean “private cloud” because after all the cloud has won.  Won, if winning is securing 2.4% of global IT spending.  SDN is transforming every company and every network and yet the actual processing power expended on SDN Controllers today is less than that of a single graphics arts firm.  NFV is on every operator’s lips but canny CFOs say that they can’t prove the business case with current activities.

The TMF can solve its problems in six months by simply starting at the top and defining what operations for the new age should look like, forgetting all the old-timers it would offend and all the conservative thinkers whose coffee breaks would be ruined.  Ericsson could buy some critical startups and assemble a complete SDN and NFV story that, when supplemented by professional services, would make them winners even over Huawei.  Every piece of the SDN and NFV pie that’s needed to meet current and future needs is out there already, waiting to be exploited.  Every design and plan needed for transformed operations and “service agility” has already been proposed by somebody, often by multiple people.

Does Ericsson want to fall forever behind?  Huawei is increasing revenues by 20% annually while competitors struggle just to stay even.  Does this sound like you’re winning the game you insist on playing, Dear Vendors?  The TMF has spent more time talking about its ZOOM transformation than would have been required to do a truly effective demonstration using running software.  Does this sound like “focusing on demonstrating progress and then driving consensus, not the other way around” as new TMF president Peter Sany said in the interview?

We are in the industry that has been responsible for more change than any other in modern times.  Why have we allowed ourselves to become so fearful of change now?  We should fear the status quo instead, because it’s not doing well for most of us.

There’s a Revolution In SDN (If We Can Dig it Out!)

One of the biggest issues I have with companies’ positioning is that they are postulating a totally revolutionary impact for something that differs in no significant sense from the status quo.  If a new technology is going to change the world of networking, don’t you think it should do something significantly different?  Perhaps the “revolutionary technology” that’s most impacted by this problem is NFV (and I’ve ranted on it there), but SDN has the same problem and perhaps with less reason.  It would be easy to make claims of an SDN revolution credible, even though most of the claims being made today aren’t.

Packet networking is all about packet forwarding.  You can’t connect users if you can’t get them the traffic that’s addressed to them.  In a very simple sense, Level 2 and 3 network technologies (Ethernet and IP) manage packet forwarding via three processes.  One is the process of addressing, which appends a header onto a packet to identify where it’s supposed to go.  The second is route determination, which uses a series of “adaptive discovery” exchanges among devices to determine both the connection topology of the network’s devices and the location of addressed users.  The third is the forwarding process itself—how does a route get enforced by the collective forwarding behavior of the devices?

My opening principle says that SDN has to do something different in this process to make a difference in the market.  The difference can’t be in addressing or nothing designed for the current network services would work with the new, which means that the difference has to be in the forwarding process and/or route determination.

OpenFlow proposes to eliminate adaptive routing behavior by replacing it with centralized control of forwarding on a per-device basis.  The devices’ forwarding tables are updated not as a result of adaptive discovery but by explicit commands from the SDN Controller.  Two models of device-to-controller relationship are possible.  In one, the controller has a master plan for routes and simply installs the correct forwarding entries according to that plan.  The devices get all they need from the Controller when the network (or a device) is commissioned.  The second model is a “stimulus” model where a device that receives a packet for which it has no forwarding instructions queries the SDN Controller for a “mother-may-I”.
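Here’s a toy simulation of the two models—not an OpenFlow implementation, just the control relationship: one switch gets its forwarding table pre-installed from the controller’s master plan, the other asks the controller on a table miss and caches the answer.

```python
# Toy model of the proactive ("master plan") and reactive ("mother-may-I")
# controller relationships described above.  Illustrative only.

class Switch:
    def __init__(self, controller=None):
        self.flow_table = {}          # destination address -> output port
        self.controller = controller  # set only in the reactive model

    def forward(self, dst):
        if dst in self.flow_table:
            return self.flow_table[dst]
        if self.controller:           # reactive: query the controller on a miss
            port = self.controller.packet_in(self, dst)
            self.flow_table[dst] = port
            return port
        return None                   # proactive model with no rule: no forwarding

class Controller:
    def __init__(self, route_plan):
        self.route_plan = route_plan  # the controller's master plan: dst -> port

    def preinstall(self, switch):     # proactive model: push all rules up front
        switch.flow_table.update(self.route_plan)

    def packet_in(self, switch, dst): # reactive model: answer per-packet queries
        return self.route_plan.get(dst, "drop")

plan = {"10.0.0.1": "port2", "10.0.0.2": "port3"}
ctl = Controller(plan)

proactive_switch = Switch()
ctl.preinstall(proactive_switch)
reactive_switch = Switch(controller=ctl)

print(proactive_switch.forward("10.0.0.1"))  # hits the pre-installed rule
print(reactive_switch.forward("10.0.0.2"))   # misses, asks the controller, caches
```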

It is possible to eliminate adaptive behavior through this process.  An SDN Controller can define failure modes and quickly install rules to restructure routes around something that’s gone bad.  It’s possible that security could be better in this situation because you could hypothesize a device that would pass user requests for any packet handling to a controller for validation and instructions, which would mean no connectivity to anything would exist until the controller validated the relationship being requested.  This could be a pretty significant behavioral twist in itself.

The difficulty that an SDN revolution based on the central model brings is the classic difficulty of central models, which is the performance and availability of the controller.  If the controller goes south, you have the network frozen in time.  If the controller is overwhelmed with requests, you have a network whose connectivity lags more and more behind current demands.  Logically you’d need to establish practical control zones in SDN and federate controller zones to divide up responsibility.  There are a bunch of ways this could be done, and some advocate pressing protocols like BGP into service.  I advocate defining the ideal solution to the problem and then seeing if current protocols like BGP can serve.  If not, you do something new.

The packet forwarding element of SDN is where the real potential value lies.  Even today, where SDN is (in my view, gratuitously) limited to MAC/IP/Port address recognition, you can envision forwarding structures that don’t map to a classic IP or Ethernet service today.  Some of them could be very useful.

Example—the security angle I just mentioned.  Suppose we designed an SDN network that was a set of three layers—edge, metro, core.  Suppose that we had all these layers divided into control zones that made each zone look like a kind of “virtual OpenFlow switch”.  In the metro and core, we’d be focusing on providing stable performance and availability between any metro zone and any other, either directly or via the core.  In the edge zone we’d focus on mapping user flows to forwarding rules for the traffic we wanted to carry—explicit connectivity where permitted.  The central two layers would be operated in preconfigured-route mode and the edge in stimulus mode.  All of this is within the capabilities of OpenFlow today.
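Expressed as data, with zone names, modes, and policies of my own invention, the layered design might look like the sketch below.

```python
# Sketch of the edge/metro/core zoning described above: each zone behaves
# like a "virtual OpenFlow switch" with its own control mode.  Illustrative
# layout only, not a standard.

ZONES = [
    {"name": "edge-east",  "layer": "edge",  "mode": "reactive",
     "policy": "map user flows to rules only where explicitly permitted"},
    {"name": "metro-east", "layer": "metro", "mode": "proactive",
     "policy": "preconfigured routes for stable performance and availability"},
    {"name": "core-1",     "layer": "core",  "mode": "proactive",
     "policy": "preconfigured inter-metro paths"},
]

def controller_mode(layer):
    """Edge zones run in stimulus mode; metro and core run preconfigured routes."""
    return "reactive" if layer == "edge" else "proactive"

for z in ZONES:
    assert z["mode"] == controller_mode(z["layer"])
    print(f'{z["name"]:11s} ({z["layer"]}): {z["mode"]} - {z["policy"]}')
```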

Another thing we could do with OpenFlow today is to reconfigure networks to respond to traffic, either based on time of day or on traffic data made available to the controller.  OpenFlow networks are not going to be impacted by old Ethernet bridging/route restrictions or router adjacencies in IP; you can have as many paths as you like and engineer every packet’s path if that is what’s needed (obviously that would take a lot of controller horsepower but the point is that it could be done).

With some simple extensions we could do a lot more with OpenFlow SDN, and a bunch of these have already been proposed.  Three very logical ones are support for more general DPI-based matching of flows to rules, enhancements to what can be done when a match occurs (especially packet tagging in the rule itself), and the use of “wild-card” specifications for matching.  If you had these capabilities you could do a lot that standard networks don’t do well, or at all.

One thing is intrinsic load-balancing.  You could at any point on any route initiate a fork to divide traffic.  That would let you “stripe” loads across multiple trunks (subject as always to the question of dealing with out-of-order arrivals).  You could prioritize traffic based on deeper content issues, diving below the port level.  You could implement one of the IP schemes for location/address separation.  You could mingle L2/L3 header information, including addresses, to manage handling, treating traffic differently depending not only on where it’s going but on where it came from.  You could authenticate packets and tag them to reduce spoofing.

The point here is that there is no reason why an IP or Ethernet service has to behave traditionally other than that the technology offers no practical alternative.  What OpenFlow SDN could provide is totally elastic match-and-process rule handling.  We could build routers like building software by defining processes, defining triggers, and initiating the former based on the latter.  And because the mechanism would be protocol-independent it would never be obsolete.  This is what OpenFlow and SDN should be, could be.
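A toy match-and-process engine makes the point: rules are just predicate/action pairs, so deeper-than-port matching, wildcards, tagging, and traffic forking are all additions to a rule list rather than new protocols.  None of this is standard OpenFlow; it’s a sketch of the idea.

```python
# Toy "match-and-process" rule engine along the lines sketched above.
# Rules carry arbitrary match predicates and arbitrary actions; all field
# names and actions are invented for illustration.

RULES = [
    # (predicate over a packet dict, action)
    (lambda p: p.get("app") == "video" and p.get("dst", "").startswith("10.1."),
     {"action": "fork", "ports": ["trunk1", "trunk2"]}),        # stripe the load
    (lambda p: p.get("authenticated") is True,
     {"action": "tag_and_forward", "tag": "trusted", "port": "port5"}),
    (lambda p: True,                                            # wildcard default
     {"action": "drop"}),
]

def process(packet):
    """Apply the first matching rule.  New behaviors are added by adding
    rules, not by defining new protocols."""
    for predicate, action in RULES:
        if predicate(packet):
            return action
    return {"action": "drop"}

print(process({"dst": "10.1.4.9", "app": "video"}))
print(process({"dst": "10.2.0.1", "authenticated": True}))
print(process({"dst": "192.168.0.1"}))
```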

Why isn’t it?  Some university researchers have proposed most if not all of the extensions I’ve mentioned here, and many of the applications.  The challenge is turning all of this into product, and there the problem may be that the vendors aren’t interested in true revolution.  VCs who used to fund stuff that was revolutionary now want to fund stuff that’s claimed to be revolutionary but doesn’t generate much cost, change, or risk—only “flips” of the startups themselves.  I think the ONF should face up to the challenge of SDN revolution, but given who sponsors/funds the body that may be an unrealistic expectation on my part.  If it is, we may wait a while for SDN to live up to its potential.

The Cloud, NFV, their Relationship, and Opportunity

Everyone who’s followed NFV knows that there is a relationship between NFV and the cloud.  Logically there would have to be, because public cloud services host applications on a per-tenant basis with tenant isolation for security and performance stability.  That’s what network features for individual customers need, so it would be totally illogical to assume a totally new multi-tenant approach would get invented when NFV came along.

The thing is, this simple justification would lead you to believe that not only was there a relationship between the cloud and NFV, the two were congruent.  As you’ll see, I think it’s very likely that NFV and the cloud will evolve into a common organism, but we’re not there yet.  That current separation is something that proponents of both NFV and the cloud need to minimize, and that a lot of NFV marketing is exploiting in a cynical and negative way.  Thus, we need to understand just what the relationship between the cloud and NFV is, and what the differences mean right now.

A good discussion of differences should start with similarities, if for no other reason than to prove that convergence of NFV and the cloud is not only inevitable, it’s already happening.  Cloud computing is a computing architecture that allows components/applications to be hosted on shared servers in a highly separated (multi-tenant) way.  The obvious advantage of this shared hosting is that the cost of the servers is amortized across more applications/users and so the per-user cost is less.  This is analogous to the “capex reduction” benefit of NFV.

The problem is that pooled, shared resources don’t keep getting more efficient as the pool gets infinitely large.  There’s a curve (the Erlang C “cumulative distribution” curve) showing that utilization efficiency grows quickly as the resource pool gets bigger, but the growth tapers off to a plateau; eventually further increases in the pool, even large ones, make little difference.  The biggest savings occur early on.  What that means is that enterprises with large data centers approach the efficiency of cloud providers, which means that public cloud services couldn’t save much in the way of capex.  Note that operators have quietly shifted away from a pure capex-driven NFV value proposition.
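If you want to see the flattening for yourself, here’s a small calculation (my own illustrative setup, using the standard Erlang B/C recursions) that holds the grade of service at a 2% probability of waiting and asks how high utilization can go as the pool grows.

```python
# Numerical illustration of the Erlang-C point above: diminishing returns
# from larger resource pools.  Target grade of service is an assumption.

def erlang_c(servers, load):
    """Probability an arrival must wait, via the Erlang-B recursion."""
    b = 1.0
    for k in range(1, servers + 1):
        b = load * b / (k + load * b)                 # Erlang B
    return servers * b / (servers - load * (1 - b))   # Erlang C

def max_utilization(servers, target=0.02):
    """Largest load/servers ratio that keeps P(wait) under the target."""
    lo, hi = 0.0, float(servers) - 1e-6
    for _ in range(60):                               # bisection on offered load
        mid = (lo + hi) / 2
        if erlang_c(servers, mid) <= target:
            lo = mid
        else:
            hi = mid
    return lo / servers

for n in [5, 10, 50, 100, 500, 1000]:
    print(f"{n:5d} servers -> max utilization {max_utilization(n):.2%}")
```

Run it and achievable utilization climbs steeply through the first hundred servers or so and then barely moves, which is exactly why a pure capex argument runs out of gas.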

Fortunately for cloud providers and NFV proponents there’s another force at work.  Most SMBs and even some enterprises have found that the cost of supporting IT infrastructure is growing faster than the cost of the infrastructure.  For many, TCO is already two-thirds opex to one-third capex.  By adopting a cloud model of IT (particularly SaaS, which outsources most support) a business can transfer operations to a player who can get access to skilled labor and use it efficiently.  If we look at the savings side of cloud computing’s benefits, opex reduction is now the most compelling story.  And the same is true of NFV.

Cost reduction only takes you so far, though.  You can’t justify major technology revolutions through cheapness because at some point the percentage reductions in expense that you can present won’t fund the technology changes any more.  ROI based on cost management always declines over time, so you need to have something else—new benefits.  For cloud computing, this means turning the cloud into a new-age application architecture that can do stuff that was never done with traditional IT.  Amazon’s growing repertoire of cloud services or my notion of point-of-activity empowerment are examples of a benefit/revenue-driven cloud.  For NFV, this is the “service agility” argument.

What’s the difference between NFV and the cloud, then?  The first answer is that because NFV targets large sophisticated buyers, it has to do a better job of harnessing benefits from its incremental costs or there will be no movement to NFV at all.  NFV is in many ways a kind of super-DevOps, an architecture to automate the processes of deployment and management to the point where every erg of possible extra cost has been wrung out, every inefficiency of utilization eliminated.  First and foremost, NFV is a cloud-optimizing architecture.

Because NFV addresses today (for its prospects) the problems all the cloud will face down the line, “cloud” approaches look a lot like NFV approaches if you just look at one application/service in one limited test.  Most of the NFV PoCs, for example, really look more like cloud hosting than dynamic, agile, flexible NFV.  This has allowed virtually every vendor who purports to have an NFV story to over-promote what’s really a cloud value proposition.  You can replace many custom network appliances not with virtual functions but with cloud components, and that’s particularly true for functions that have very long lives.  Where NFV becomes critical is when you have to do the deployments a lot, often, for short intervals.

NFV’s “service agility” benefit depends largely on evolving how services are built to generate more of this dynamism.  This point gets missed a lot, in no small part because vendors are deliberately vague about the details.  If we need to rethink service creation, we necessarily have to spend some time considering the new architecture.  It’s a lot easier to say that we’ll cut provisioning time from two months to two days, which is great for time-to-revenue, right?  But if the customer didn’t want the service in two days but had two months’ notice (as opening a new office would likely offer) we have less chance of any revenue gain.  If the customer has the service already we get nothing; you can’t accelerate revenue you’re already collecting.
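A back-of-the-envelope sketch of that argument, with purely illustrative numbers: billing can’t start until both the service is provisioned and the customer actually wants it, so acceleration only pays when the customer was waiting on us.

```python
# Illustrative arithmetic for the "service agility" point above.
# All dollar figures and day counts are made up for the example.

def accelerated_revenue(monthly_revenue, old_days, new_days, customer_lead_days):
    """Revenue pulled forward by faster provisioning.  Billing starts when
    both the customer is ready and the service is provisioned."""
    old_start = max(customer_lead_days, old_days)
    new_start = max(customer_lead_days, new_days)
    return monthly_revenue * (old_start - new_start) / 30.0

# New customer who wanted service immediately: the gain is real.
print(accelerated_revenue(5000, old_days=60, new_days=2, customer_lead_days=0))
# Customer opening an office with two months' notice: nothing is accelerated.
print(accelerated_revenue(5000, old_days=60, new_days=2, customer_lead_days=60))
```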

Here in facing the service dynamism issue, interestingly, the cloud may be leading NFV.  I believe that Amazon knows darn straight well that its cloud service future lies in being able to build cloud applications that are cloud applications, totally unsuitable for premises execution.  I also believe that we are seeing, in the mobile revolution, more and more situations where these new cloud applications could be a major benefit to users and a major revenue source.  That’s where my trillion dollars a year in incremental revenue for point-of-activity empowerment comes from.

NFV has led the cloud in recognizing that DevOps has to grow a lot to manage the scale of computing that cloud success would imply, and to manage the increasing dynamism that exploiting the cloud’s special characteristics would create.  But NFV has been totally unable to deal with the issue of how dynamism is realized, how application components that create dynamic experiences become service components.  The OTTs and the cloud developers are thinking more about that than the network operators and the NFV advocates.

Neither NFV nor the cloud can now succeed without the other.  Without NFV, the cloud’s growth will expand its cost of operations in a non-linear way until there’s no further benefit that can be realized.  Without the cloud and an understanding of the notion of cloud-specific services/applications, NFV will never realize a nickel from service agility and stall out when operations costs can’t be reduced further.  The question is which constituency—cloud/OTT or network operator—is going to get smart first and address the issues that the other constituency today is handling a lot better.  It may be that if the “cloud” wins, OTTs win and operators are forever consigned to public utility business models no matter how they’re regulated.  If NFV wins, then operators have a darn good chance of making the OTTs of today into the CLECs of tomorrow.

What Google’s MVNO Plans Could Mean for Operators

A number of independent rumor sources say that Google is finally going to make the MVNO move, striking reseller deals with Sprint and T-Mobile to become a Mobile Virtual Network Operator (MVNO).  This is what I thought Amazon should have done with its Fire phone, and what I still think would be a possible move for Apple.  It’s a move that promises to reposition Google in the network market, in the cloud, with advertisers, and with users.  One that threatens its traditional competitors and creates some new ones.

This isn’t likely to be a simple “make-more-money” play.  MVNOs are typically companies who offer lower-cost services with some restrictions or priority limits on access relative to the customers of their parent carriers.  Given that Sprint and T-Mobile are price leaders already it may be difficult for Google to discount much and still retain any profit margin at all.  That suggests that Google may have other plans to leverage the relationship through add-ons.  If not, if Google is looking at a “premium MVNO” that would charge more, they’ll still have to justify that extra cost somehow.

For the industry, this could be a radical move.  While other players like Amazon and Apple have not yet pulled the MVNO trigger, it’s likely that a move by Google could spur them to action.  Even if there’s no immediate response, the threat here is clear.  A major handset player (even an indirect one, via Android) and a cloud player and an ad giant becomes an MVNO?  A lot of carrier planners are going to get heartburn on that one, even if there were no other risks, which there are.

One thing at risk here is further disintermediation by handset vendors.  Most customers have as much or more loyalty to their devices as to their carriers.  Many mix service and device features up, in fact.  Operators have been increasingly concerned about the power of the device vendors, particularly Apple, in controlling service adoption and upgrades.  There was a lot of operator interest in Mozilla’s phone project, which was a lightweight platform intended to act as a portal to carrier-hosted features rather than be a device with a lot of local smarts.  It never took off, but it was an indicator that operators were taking handset vendors seriously, risk-wise.  They’ll surely be even more concerned now.

This is old news, of course, and what I believe will be the real risk here is at a higher level.  Mobile services, as I’ve pointed out before, are unique in that they reset the relationship between users and information by supporting what users are doing and not helping them plan.  You may research stuff online from your living room or den, but you make immediate purchase decisions from your phone—because it’s with you when you’re in a buying mode.  What I’ve called “point-of-activity empowerment” is a potential powerhouse new set of benefits, something that could drive almost a trillion dollars a year in new revenue.

With both Android and an MVNO position, and with content, ad, and cloud resources aplenty, Google could frame a bunch of new services targeting the mobile user.  Those services could help Google make a transition from being dependent on advertising (a two-thirds-of-a-trillion dollar space even if all forms of advertising are counted) to paid-for services that could bring in four trillion dollars or more in total.  They could also help operators monetize their infrastructure investment better, but not if Google gets the money.

The mobile/behavioral services tie in nicely with some of Google’s other interests, like self-driving cars.  These are gimmicks now but a lot of what would have to be behind the vehicles in the way of route knowledge and even IoT integration could be useful to human drivers and pedestrians.  There’s also a strong ad tie-in with integrating movement, social framework of the user, and their “intent” as expressed by questions or searches they launch.  All of this stuff could be the basis for a series of services, both to advertisers/retailers and to users.

A new giant MVNO like Google and the prospect for Amazon and Apple to follow suit generates a lot of potential changes in the mobile operator’s plans.  There are already examples being reported of MVNO grooming using SDN, and that would be more likely if big names like Google get into the game.  Even more radical changes could come in the IMS, EPC, and NFV areas.

Mobile service-layer technology has been overly complex, costly, and high touch.  Vendors like Metaswitch have already introduced lighter-weight technology for IMS that would be ideal for an MVNO, depending on the technical integration offered by the parent operators.  Google could base their service on a simpler stack.  Beyond these basics, Google would be likely to jump into a different voice and message model (think Google Voice and Gmail, perhaps, or Hangouts overall) and that would put pressure on operators to find a voice/SMS platform that’s more agile and cheaper.  If we find out that Google’s deal is for broadband/data only, we’ll know something very important—classical mobile voice and SMS is dead.

EPC is an issue because most of what EPC does is accommodate mobility and provide assured paths for premium services.  If Google takes a complete OTT voice and IM model, there’s nothing for EPC to do other than to follow users when they move from cell to cell.  Other location-independent routing approaches have been proposed; might Google try one?  At the least, we might be looking at a future where “Internet offload” offloads everything, which makes the concept a bit damp.

For NFV, this could be the goad that finally generates useful action or the straw that breaks the camel’s back.  Carrier standards processes have long missed the mark when applied to software-driven functionality of any sort, and the ETSI NFV work (with its first phase just completed and published) has a very powerful/revolutionary concept (MANO) that’s buried in a rigid and complex framework that doesn’t cover the full service spectrum in infrastructure terms and isn’t fully integrated with operations or applications.  Vendors at this point are certain to jump out to build around the edges of the spec to take advantage of the limited scope and to differentiate themselves.  In doing so they might propel NFV to a place where it could actually help operators build agile services—services to do what Google is likely now signaling it plans to do.

It’s my view that the Google move will propel SDN, NFV, application architectures for mobile empowerment, and a bunch of other things.  The propulsion will be focused on vendors, though, and not on the standards processes.  There is simply no time left now to diddle with consensus.  Once Google gets in the game, once Amazon and Apple are stimulated to accelerate their own MVNO positions, it’s every vendor for themselves.

The Cloud: Just Sayin’…

IBM reported its numbers which, in terms of revenue and guidance at least, were not happy.  I’ve talked about the opportunities IBM still has in some prior blogs, and speculated on some of the marketing factors and other decisions that may have led them to where they are.  What I’d like to focus on today is something a bit different, something that has impact not only on IBM but also on every other tech company that sells to business.  Call it the “politics of the cloud.”

IBM has for years been a master of the national account, the large-scale buyer whose IT activity justified a team dedicated to them.  This is a natural evolution from the early days of IT when the only accounts that could afford computers were large accounts.  I’ve argued that over the last decade, IBM has steadily divested itself of the hardware that had populist appeal, making itself more and more into the giant who targeted giants.  I think their current quarter is certainly consistent with that, but it’s not my story here.

You sell giant IT to giant companies, but most of all you sell it to professional IT types within those giant companies.  IBM’s reliance isn’t just on large buyers, it’s on technically sophisticated buyer organizations—CIOs with nice big staffs and budgets.  This is important because while most big companies aren’t very interested in becoming little, more and more of them appear interested in weaning away from those big IT organizations.  That’s where the cloud might come in.

If you slice and dice the numbers as you’re all aware I’m fond of doing, you come to the conclusion that business IT is a trillion dollar market annually, worldwide.  The most optimistic numbers I’ve seen for public cloud services peg the space at just south of thirty billion, which is less than 3%.  Further numbers-dicing brought me to the conclusion that the benefits of public cloud versus private IT in a pure economic sense would limit the public cloud market to about 24% of IT spending, roughly ten times what it is now.   The question that IBM may be raising is whether there are benefits that aren’t in the realm of pure economics, drivers beyond financial that dip into the realm of company politics.

Many, perhaps even most, of the big companies I’ve surveyed and worked with had considerable internal tension between IT and line departments.  One company I worked for went through regular cycles of decentralized versus centralized IT as the angst the line people felt overcame financial efficiency, then in turn was overcome by that efficiency down the line.  We’ve all heard stories about departments wanting IT to run itself as a profit center, competing with other outside options.  All of this made good media fodder but nothing really came of it.  Till the cloud.

The cloud, as a decentralized way of getting application support, has perhaps the first opportunity in the whole of the information age to unseat central IT.  Suppose that line departments played on that cloud characteristic and rolled their own per-department strategies?  Obviously you can’t have everyone doing their own thing to the exclusion of regulations or information integration, but you could darn sure reduce the role of IT in buying stuff and focus them on integration.

In this model, what happens to the IT giants like IBM who have bet on selling to the CIO and IT organizations, or at least have sold things that it takes centralized IT to consume?  Likely nothing good.  IBM saw good growth in its cloud services, but their annual run rate of $3.1 billion compares to over $22 billion in total revenue for this quarter alone.  First, how much of this is offsetting losses in sales of IT elements, and second how much of it depends on organized IT to buy it?

For the cloud overall, that’s the question.  We can’t get beyond about 24% penetration into IT spending for public cloud services unless we find drivers beyond the economic efficiencies that the IT organizations would recognize.  SMBs justify the cloud in large part through presumed reductions in technical support costs.  Might a larger company take the step of downsizing IT and end up in the same situation as an SMB—dependent on software services and not internal IT resources?

I don’t know the answer to that, frankly, and I’m not confident it would be possible to survey to find it out.  I also don’t know whether further development of cloud infrastructure by network operators and others (like Amazon) might create a new set of application services and big data analytics that would then tend to make mobile productivity support increasingly an exploitation of third-party services.  Might that trend bend further application development toward the cloud, and might that then combine with push-back against IT to build a larger trend?  I don’t know that either.

The mainframes of old were justified by two truths—you couldn’t buy a cheap small computer, and with IT serving the mission of capturing retrospective business data (remember data entry clerks?) it made sense to stick all the compute power in one place.  Cheaper systems begat distribution of IT resources, and even though we still have central data centers and repositories, we’re not done with the transformation created by distributing processing outward.  Workers with mobile devices, wearables…where does it stop?

Certainly, if there’s any validity to this position, SaaS is what matters, cloud-wise.  The migration of current applications to the cloud isn’t what a line department revolution would drive, and mobile worker productivity enhancement based on new software architectures clearly doesn’t require IaaS hosting of old stuff.  Amazon’s movement to “platform services” of cloud-resident software features and Microsoft’s and IBM’s enhancements of basic IaaS with PaaS features are probably valid stepping-stones, but perhaps more valuable to providers than to users themselves.  So a line-department cloud drive is going to require a different kind of marketing, one that stresses applications and not just platforms, ecosystems not products.

All of this is speculation, of course, and there are always counter-trends just as there are today and have been in all of IT’s history.  I don’t think that IBM will be killed by the cloud, that centralized IT will fall in some giant line-department coup.  I do think that the factors I’ve cited here will change how businesses use IT and how applications are linked to workers and their productivity.  As that happens, the basic rules of IT spending will change.  When they do, I have to admit that my 24% number will be harder to justify.  It might turn out to be right, but the value was derived by assessing relative cloud economics with current application models.  A major change to those models could create a major change in IT spending, cloud penetration, and vendor fame and fortune.

We are almost certainly underplaying the revolution we call “the cloud”, and in particular we’re discounting the impact that the cloud could have on the relationship between IT as a kind of universal staff function and the line organizations.  Nike and Reebok, as I used to say in my seminars, make sneakers not networks.  The purpose of business is not to consume IT, and many users in my survey complain that IT has become an end in itself.  You need information technology of course, like you need transportation, power, natural resources, capital, and labor.  The relationship among the last three of these has always been filled with tension.  Maybe that tension is expanding.  If it is, then we may see very different drivers emerge.

Different winners too.  The cloud is a unique partnership between hardware and software.   Of the two, it’s software that shapes the business value and the economic tradeoffs.  So can IBM or another “hardware vendor” build a software position strong enough to make hardware unimportant?  Is a cloud partnership inevitably one with a network operator or CSP and not an enterprise?  Can an IT vendor become a CSP by leveraging their current strength?  More questions I can’t answer, but that will likely be answered decisively by real-world company politics before the decade is out.

The NGN Bridge: Drivers, Trends, and Carpentry

You’ve probably noticed by now my enthusiasm for the metro space.  I think that enthusiasm is vindicated by the recent Street speculation on Verizon’s next-gen metro program, which the Street analysts say will go primarily to Ciena with a nod toward Cisco.  The thing is, there are other fundamental numbers in play that have been validating metro as a market for years now.  We’re just starting to see the results, and we’re not done.  Some of the trends weave through our ever-popular themes of NGN, SDN, and NFV.

If you look at BEA data you find that between 1990 and 2013 (the last posted year), growth in consumer spending on communications services has run about 30% ahead of growth in personal spending overall.  But what’s in this category tells the real story.  Telecom services spending has lagged overall spending growth by about 10% over that period, spending on postal/package services has fallen by more than half, and spending on Internet access has grown three hundred times faster than personal spending overall.

Despite what people may think, consumer Internet access is a metro service.  If this service is the only driver of increased ARPU to speak of in the whole of communications, which is what the data (from the Bureau of Economic Analysis) says, then the only area where we can expect to see much enthusiasm for additional investment is in the metro.

Here’s some other interesting stuff.  If we use the same 1990 baseline, growth in business spending on IT overall has been about 30% slower than investment in equipment overall.  For networking it’s been only about 17% slower, but slower nevertheless.  So what we’re saying is that the consumer is the real driver of networking, and likely the real driver of IT.  Consumers, we must note, do nothing in terms of site-to-site communication, they want experience delivery.  And with the drive to mobility, their experiences are increasingly local.

Profitable, mass-consumed, video is delivered from metro caches.  Popular web content in general is cached in a metro area.  Ads are cached, and as we start to see mobile-behavioral services emerge we’ll see that those services are fulfilled from cloud infrastructure that’s local.  NFV is going to host virtual functions proximate to the service edge in the metro, so NFV cloud resources will also be metro.  Are you getting the picture here?

The Street analysis of Verizon’s metro update spending is interesting because 1) it’s relevant to what is for sure the future of capex and 2) it demonstrates that metro is more about fiber transport than about IP.  Ciena is the projected big winner here, but in technology terms it’s optics that’s the big winner.  My model says that metro spending, which is now running about 40% optical, will shift through the rest of this decade to nearly 60% optical.  IT elements, which make up less than 5% of metro spending today (mostly caching/CDN, IMS, etc.) will grow to make up almost 17% of metro spending by 2020.  That means that all network gear other than optical, which today accounts for more than half of metro spending, will account for only 23% by 2020.

There’s plenty of precedent for a focus on experiences and hosting.  In our golden analysis period here (1990-2013), the telecom industry has lagged businesses overall in spending on fixed assets, but information processing services have seen six times the growth rate in investment.  This, with no significant contribution at that point from the cloud.  That means that we are generating more spending growth on infrastructure outside telecom.  To be sure, broadcasting and telecom has about four times the spending of the information processing services sector, but that’s obviously changing given the difference in growth rate.  In the last five years, in fact, the two sectors added about the same number of dollars in capital investment even though, as I said, information processing is a quarter the size of telecom/broadcasting.

The net trend is obvious.  The cloud is going to shift more and more information services to metro.  NFV is going to do the same, and as a result of this we’ll be seeing most data center communications become metro services.  Given that companies network sites and not people, and that sites are not increasing by any significant percentage, you can see that all of the “services” that have upside for the future are metro services.  And all of these metro services are bending, in infrastructure terms, toward fiber and IT.

The big vendors in the IT space should be in the cat-bird’s seat here, and certainly some of them (like HP) are.  The big network vendors need to have a fiber position or a server position to be well-set.  Many, including Alcatel-Lucent and Ciena and Infinera, have fiber and Cisco has both fiber and servers.  NSN, Ericsson, and Juniper have neither to speak of, so these guys are the ones who have to face the biggest transformation.  Ericsson has already signaled its intentions to rely more on integration and professional services, NSN is looking at a merger with Alcatel-Lucent according to the Street, and Juniper has wanted somebody to buy them for a long time.

Why has this sort of thing not received much attention?  We are looking at the future of networking through past-colored glasses.  We are presuming that success of the Internet means success of routing, that success of the cloud means success of traditional data center switches, and that network investment will be spread out over the globe and not focused on metro-sized pockets.  As a result, we’re missing a lot of the real trends and truths.

Our industry trends strongly suggest that carrier infrastructure is trending toward a polarization between a bottom fiber layer and a top layer consisting of virtual elements in the form of software, hosted on servers and sited in data centers close to the point of user attachment (the metro, of course).  It’s not likely in my view that we’ll see a lot of fiber guys getting into the server space, nor will we be seeing server kingpins launching fiber optic transport and agile optics.  But this polarization does put pressure on the optical people because software is almost infinitely differentiable and fiber is definitely not.  That suggests that an SDN and NFV strategy and software partnerships and relationships (of the type Ciena is attempting) may be absolutely critical for fiber players to sustain margins.

Virtualization is what develops the new-age binding between optics and IT elements, and that’s a software construct.  Anyone can be a software player; even optical giants could do prodigious work there if they wanted to.  So it is likely that virtualization software in the manifestations of both SDN and NFV will frame how the two polarized ends of future infrastructure will join, and who will do the carpentry.

It’s also worth noting that the statistics I’ve been citing suggest that SDN and NFV are not “driver technologies” forcing change, but rather are steps being taken to accommodate broader market sweeps.  If that’s the case (and I believe it is) then it further validates the view that SDN and NFV could be opportunities for vendors who face “fundamentals-driven disintermediation” to establish a new model that could survive into the post-2020 future.  Certainly it means that foot-dragging on SDN and NFV is more likely to do harm than good, because preserving the current market paradigm would mean reversing macro trends no vendor can hope to control.

To me, all of this means that software is critical for the current electrical-layer incumbents whether they see it or not.  Even though these guys are by my reckoning all sliding into a pit as the polarization between optics and IT develops further, they have a chance to redeem themselves (albeit at a lower level of sales) through software.  Even this is probably not a surprise; nearly all the network vendors have been promising a greater focus on software for years.  SDN and NFV are simply the latest technologies to represent this long-standing trend, and they probably won’t be the last.

They will be critically transformational, though.  Economics is what transforms networking in a force sense; technology only guides the vector through which the force is applied.  We have some positive outcomes still available, some cat-bird seats still available.  We just need to see who sits in them.

Tech Future: IBM’s in Trouble and Maybe You Are Too

It’s always interesting to listen to or read about what’s happening in the tech market.  You get the impression that the industry is a vast river that’s dragging everyone to a common destination.  We have systemic this and technology-trend that and it’s all pretty much relentless.  There are obviously systemic trends, and I’ve certainly talked a lot about them.  There are also individual trends, things that by accident or design have been propelling companies into different orbits than the market overall.  Fridays are a good time to talk about them.

Intel announced its numbers, and it had record revenues.  The fact that yesterday was a bad day for tech drove down shares more than the results raised them, but from an industry perspective it’s pretty obvious that microprocessors are doing well.  One reason is that there has not been the total shift away from PCs that’s been predicted since tablets came along.

I have an Android tablet, and just about a month ago I replaced it with an ultrabook that had the ability to fold into various configurations to make it more convenient to use.  One reason was that for me, the tablet could not really replace a PC because of the business use.  When I go on vacation, I have to presume that something might come up that would force me to do a bit of work.  No, it’s not that people contact me and ask me to (I’d just say “No!”) but that I might want to extend the trip and push it into a period when I’d already agreed to deliver something.  Anyway, I can use an ultrabook like a PC and also like a tablet so it makes more sense.

That’s what I think is happening in the PC world.  We are learning that the exciting revolution of the tablet is exciting but not necessarily compelling.  Many people, I think, are finding that a slightly different PC is really a better choice for them, and that’s making the tablet more a supplement to the PC than a replacement of it.  I’ve noted before that many will still make the tablet transition because all they really do is go online, but more people need PCs than we think.

IBM is about to announce its numbers, and the Street is lining up on the bearish side of the room for Big Blue, where sadly they also find me.  IBM has weathered more industry transitions than any other tech company and it would be sad to think it might now be in deep trouble, but all the indications are there.  IBM has been slowly shifting out of the more competitive x86 space, first by selling off PCs and then all its x86 servers.  This looked smart to many (including IBM, obviously) but it had an unexpected consequence.

Tech is commoditizing overall.  Apple consigned itself to second-class status as a PC company for decades because it had no acceptance in the business market.  Today, Apple is moving up despite its consumer focus, and IBM, king of business, is trying to ride its coat-tails.  The hidden truth there is that you can’t be a computer/software success selling only to big companies.  IBM killed its own brand by getting out of all the product areas that the masses could buy.  Now they find it difficult to deal with competition in the Fortune 500 when the Everybody Ten Million is safely in someone else’s camp.

HP is in the opposite situation.  The fact that PCs have outperformed has helped HP, who by some measures is gaining market share faster than anyone else.  HP is also dealing with the industry revolution in a saner way, not by jumping on the tablet bandwagon like it’s the only savior of western culture but by creating a PC/server business polarization that lets the company play both sides of the opportunity, or the same side from two different directions.

The point here is that exaggerated measures are often induced in response to exaggerated trends, and we all know that there’s nothing that can happen in tech these days that’s not going to be blown way out of proportion.  A company that avoids throwing itself out a tenth-story window to escape a puff of vapor, whether it’s smart enough to see the vapor for what it is or just lucky enough not to notice it, has a better future than one that jumps.

In networking, we are facing the same thing.  Everything that’s happening in networking today, and everything that’s happening in business IT as well, is driven by one common truth: we have largely used up the benefit case for more spending.  We have empowered workers as well as current IT architectures can, and we’ve connected people and information as well as anyone is willing to pay for.  If you want to justify more cost, you need more benefits.  This means that “jumping” as IBM has done is a bad idea, because there’s nowhere to land that’s any better.

What’s frustrating (to me, at least) is that there is no reason to believe we’ve run out of benefits.  My models show that businesses could spend nearly $400 billion per year on network/IT services if mobile empowerment of workers were harnessed optimally to improve productivity.  They also show that another $600 billion annually in consumer spending, largely on services, could accrue if we were to give mobile users the kind of things that even current trends show they want.

Cisco is widely seen as having led us into the IP age, but that’s not the case.  Cisco was blundering along in IP and IBM, who owned business networking, killed their own opportunity with an overweight vision of “SNA”.  Buyers fell into Cisco’s arms.  The key point, though, is that this seismic change came about first and foremost because we were having a revolution—one of connection cost.  A lot of new things were empowered by that revolution, including IP and distributed computing and the Internet.

We’re still having a cost revolution.  The cloud is a market response to the lifting of constraints on information delivery and process distribution that has arisen because of that same declining cost per bit that’s vexing the operators today.  The problem the cloud faces, and IBM and HP and Cisco and even Intel all face, is that we’re not seeing this for what it is.  If a resource becomes cheap, the best strategy is to exploit it more.  OTTs arose because operators didn’t deal with their own market opportunity, but the OTT model isn’t perfect either.  They’ve picked the low apples, things like ad-sponsored services where barriers to entry were limited and ROI was high.  The real tech of the future is what PCs were in their heyday and what mobile devices are now—a mass market.  What does a mass-market cloud-coupled IT world look like?  What software does it depend on, what platforms does it make valuable, what services does it both drive and then facilitate?

Answer that question and you’re the next Cisco.

More on the Savings or Benefits of NFV

My recent blog on NFV performance has generated a long thread of comments (for which I thank everyone who’s participated), and from the thread I see a point emerging that’s really important to NFV.  The point is one I’ll call scope of benefits.

Operators build networks to sell services from.  If you presume that the network of the future is based in part on hosted resources that substitute for network components, then the evolution to that future network will occur by adding in those hosted components, either to fulfill new opportunities or as an alternate way of fulfilling current ones.  If I want to sell a security managed service I need the components thereof, and I could get those by selling a purpose-built box on premises, a generalized premises box/host with associated software, or a hosted software package “in the cloud” or in an NFV resource pool.  NFV, early on, was based on the presumption that hosting higher-level functions like security on a COTS platform versus a custom appliance would lower costs.

I’ve made the point, and I made it in that blog, that operators now tell me that they think that NFV overall could have no more than about a 24% impact on capex, which was in the same range as they expected they could obtain from vendors in the form of discounts (as one operator puts it, by “beating up Huawei on price”).  In the LinkedIn comments for the blog a number of others pointed out that there were examples where capex savings were much higher—two thirds or even more.  The question is whether this means the 24% number is wrong, and if not what it does mean.

Obviously, operators say what they say and it’s not helpful to assume they’re wrong about their own NFV drivers, but I can’t defend their position directly because I don’t know how they’ve arrived at it.  However, I did my own modeling on this and came up with almost exactly the same number (25% with a margin of plus-or-minus two percent for simple substitution, up to 35% with roughly the same range of uncertainty if you incorporated assumptions about multiplication of devices to support horizontal scaling). That number I understand, and so I can relate how those 66% savings examples fit in this picture.  The answer is that scope-of-benefits thing.

Suppose you have a food truck and sell up-scale sandwiches.  Somebody comes along and tells you they have an automatic mayo creator that can make mayo at a third of the commercial cost.  Does that mean your costs fall by 66%?  No, only your mayo cost does.  The point here is that operators are going to impact capex overall in proportion to how much of total capex a given strategy can impact.  Security appliances represent less than 5% of capex for even the most committed operator in my survey, and across the board their contribution to capex wasn’t even high enough to reach statistical significance.  So if I cut capex for these gadgets to zero, you’d not notice the difference.
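To make that proportionality concrete, here’s a minimal back-of-the-envelope sketch in Python; the capex shares and per-category savings below are illustrative assumptions, not my survey figures:

# Scope-of-benefits arithmetic: the dent a strategy makes in TOTAL capex is the
# share of capex it touches times the savings it achieves within that share.
# All numbers here are illustrative assumptions, not survey data.

def total_capex_impact(share_of_capex, savings_within_share):
    return share_of_capex * savings_within_share

# A 66% saving on security appliances that are ~5% of capex:
print(total_capex_impact(0.05, 0.66))   # ~0.03, about a 3% dent in total capex

# A 25% saving applied across a large L2/L3 footprint (assume 60% of capex):
print(total_capex_impact(0.60, 0.25))   # 0.15, a 15% dent

Both “big” percentages are real; only the one applied to a big slice of the budget moves the total.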

If you want to make a big difference in capex you have to impact big areas of capex, most of which are actually not even NFV targets.  Virtual access lines?  Virtual fiber transport?  I don’t think so, nor is virtual radio for mobile very likely.  Yes, we can virtualize some functions of access or transport or radio, but we need real bits, real RAN.  Where we find opportunities for real capex reduction at a systemic level is in L2/L3 infrastructure.  It’s the highest layer that we see a lot of, and the lowest that we can reasonably expect to virtualize.  Every access device is L2/L3, as well as most aggregation components, points-of-presence, EPC, and so forth.

I’m not advocating that we replace everything at L2 and L3 with virtual devices, though.  The problem with that is that capex alone can’t be used as a measure of cost reduction.  We can only use total cost of ownership, and as I’ve said TCO is more and more opex.  The question that any strategy for capex substitution would have to address is whether opex could at the minimum be sustained at prior levels through the transition.  If not, some of the capex benefits would be lost to opex increases.  And since we have, at this moment, no hard information on how most NFV vendors propose to operationalize anything, we have great difficulty pinning down opex numbers.  That, my friends, is also something the operators tell me, and I know from talking to vendors that they’re telling most of the vendors that as well.
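The same arithmetic works for TCO; here’s an equally rough sketch (every number is an assumption, chosen only to show how a modest opex increase can eat a headline capex saving):

# TCO sketch: opex already dominates, so a 25% capex cut paired with a 10%
# opex increase leaves very little net saving.  All figures are assumed.
capex_before, opex_before = 100.0, 120.0
capex_after = capex_before * (1 - 0.25)   # 25% capex reduction
opex_after  = opex_before * (1 + 0.10)    # operations get 10% harder
print(capex_before + opex_before)         # 220.0 TCO before
print(capex_after + opex_after)           # 207.0 TCO after, only ~6% better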

One of the key points about opex is the silo issue.  We are looking at NFV one VNF at a time, which is fine if we want to prove the technical validity of a VNF-for-PNF substitution.  However, the whole IP convergence thing was driven by the realization that you can’t have service-specific infrastructure.  We can’t have VNF-specific NFV for the same reason.  There has to be a pool of resources, a pool of functional elements, a pool of diverse VNFs from which we draw features.  If there isn’t, then every new service starts from scratch operationally, services share resources and tools and practices inefficiently, and we end up with NFV costing money rather than saving it.

Service agility goes out the window with this situation too.  What good is it to say that NFV provides us the ability to launch future services quickly if we have VNFs that all require their own platforms and tools?  We need an architecture here, and if we want operators to spend money on that architecture we need to prove it as an architecture and not for one isolated VNF example.  There is no such thing as operations, or resource pools, in a vacuum.

Where we start is important too, but there is no pat answer.  We could pick a target like security and expect to sell it to Carrier Ethernet customers, for example.  But how many of them have security appliances already, things not written off?  Will they toss them just because security is offered as a service?  We could virtualize CPE like STBs, but at least some box is needed just to terminate the service, and the scale of replacing real CPE with a virtual element even in part would be daunting without convincing proof we could save money overall.  One operator told me their amortized annual capital cost of a home gateway was five bucks.  One service call would eat up twenty years of savings even if virtual CPE cost nothing at all.
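The gateway arithmetic is worth writing down because it’s brutal; the five-dollar figure is the operator’s, and the service-call cost below is my assumption:

# Break-even sketch for virtual CPE.  The $5/year amortized gateway cost is the
# operator-quoted figure; the service-call (truck-roll) cost is an assumption.
amortized_gateway_cost_per_year = 5.0
assumed_service_call_cost = 100.0

# Even if virtual CPE were free, one extra service call wipes out this many
# years of capital savings:
print(assumed_service_call_cost / amortized_gateway_cost_per_year)   # 20.0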

I said this before, and I want to repeat it here.  I believe in NFV, and I believe that every operator can make a business case for it.  That’s not the same thing as saying that I believe every business case that’s been presented, and operators are telling me they don’t believe those presented business cases either, at least not enough to bet a big trial on them.  So my point isn’t to forget NFV, it’s to forget the hype and face the real issues—they can all be resolved.

Do Vendors Now Risk Running Out of NFV Opportunities?

It’s about time for somebody to ask the question “Is NFV going mainstream?” so I may as well ask it here, then try to answer it.  To be sure, NFV deployment is microscopic at this stage but NFV interest is high and growing.  It’s obviously possible that this is a classic hype wave (remember the CLECs?) but there are some interesting signs that perhaps it’s more substantive.  The question isn’t whether NFV is here, but whether its arrival is now so widely accepted that vendors are positioning for NFV as a given, for themselves and for the industry.

Case in point:  Alcatel-Lucent has hired former Oracle software/operations guru Bhaskar Gorti, who takes over leadership of IP Platforms, which is where its NFV activity (CloudBand) reports.  This could be big as far as signaling Alcatel-Lucent’s NFV intentions because up to now CloudBand hasn’t really had much executive engagement.  Many in Alcatel-Lucent have considered it little more than a science project, in fact.  Gorti could put some executive muscle behind it in a political sense, and his operations background means he might also do something useful to link CloudBand to the operations efficiency mission of NFV, a mission that’s been a bit of an NFV weak point for Alcatel-Lucent up to now.

Conceptually, CloudBand has always been a contender in the NFV space.  Of all the network equipment vendors, Alcatel-Lucent is the only one that has really gotten decent reviews from operators in my surveys.  Obviously the company is credible as a supplier of NFV technology, but its big problem for many operators is that it’s seen as defending its networking turf more than working to advance NFV.  Which is why Gorti’s access to Alcatel-Lucent CEO Combes could be important.  He’s either there to make something of NFV or to put NFV to rest.

Why now?  I think the first reason is one I’ve blogged on before.  Operators say that 2015 is the make-or-break year, NFV-wise.  Either NFV gets into some really strong field trials that can prove a business case, or it loses credibility as a path to solving operator revenue/cost-per-bit problems.  Alcatel-Lucent needs leadership either way, and arguably they need OSS/BSS strength either way.  Operators are well aware of the fact that network complexity at L2/L3 is boosting opex as a percentage of TCO, to the point where many tell me that by 2020 there would be little chance that any rational projection of potential capex reduction would matter.

A second reason, perhaps, is HP.  Every incumbent in a given market knows that the first response to a revolutionary concept is to take root and become a tree in the hopes that it will blow over.  However, at some point when it doesn’t, you have to consider that losing market share to yourself through successor technology is better than losing it to somebody else.  HP has nothing to lose in network terms and it’s becoming increasingly aggressive in the NFV sector.  HP also links NFV to SDN and the cloud, and their story has a pretty strong operations bent to it.  If HP were to get real momentum with NFV they could be incumbent Alcatel-Lucent’s worst nightmare, a revolutionary that’s not investing their 401k in the networking industry.

Another interesting data point is that Light Reading is reporting that Cisco is going to say that software and the cloud are the keystones of its 2015 business strategy, due to be announced to an eager Wall Street and industry audience late this month.  Cisco poses a whole different dimension of threat for Alcatel-Lucent because they are both a server player (they displaced Oracle in the server market lineup recently) and obviously a network equipment giant.  Were Cisco to take a bold step in NFV, SDN, and the cloud they’d immediately raise the credibility of these issues and make any waffling in positioning by Alcatel-Lucent (which arguably has been happening) very risky.

Then there’s the increasing sentiment that service provider capex has nowhere to go but lower, at least if current service/infrastructure relationships continue as is.  Every operator has been drawing the Tale of Two Curves PowerPoint slides for a couple of years.  These show trends in revenue per bit and cost per bit, and in all the curves there’s a crossover in 2017 or so.  If operators can’t pull up the revenue line, drive down the cost line, or both, then the only outcome is to reduce infrastructure spending to curtail their losses.  Well, what’s going to accomplish that lofty pair of goals if not NFV?
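If you want to play with the crossover yourself, a toy version of the two-curve projection takes only a few lines; every number below is invented for illustration and is not operator data:

# Toy "Two Curves" projection: revenue per bit falls faster than cost per bit,
# and we look for the year the lines cross.  All figures are invented.

def crossover_year(rev0, rev_drop, cost0, cost_drop, start_year=2014):
    year, rev, cost = start_year, rev0, cost0
    while rev > cost and year < start_year + 20:
        year += 1
        rev -= rev_drop      # revenue per bit falling quickly
        cost -= cost_drop    # cost per bit falling more slowly
    return year

# With these assumed starting points and slopes the curves cross in 2017:
print(crossover_year(rev0=1.00, rev_drop=0.14, cost0=0.75, cost_drop=0.05))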

We have some “negative evidence” too.  SDx Central (even the name change here is a nod to NFV!) did a story on things not to look for in 2015—restoration of carrier capex was one.  Another was some sanity at Juniper.  Juniper is a company Alcatel-Lucent has to be concerned about in a different way, which is as the “there but for the grace” alternative for themselves.  Juniper doesn’t have servers.  Its prior executive team didn’t know the difference between SDN and NFV and some at least say that its CTO and vice-Chairman is conflicted on this whole software thing.  It’s changed CEOs twice in the last year and it’s getting downgraded by Street analysts because of capex trends.  Arguably Juniper had as good a set of assets to bring to the NFV party as anyone, but they stalled and dallied and never got up to dance.  The article, and many on the Street, think it’s too late.  How long might it be before it’s too late for Alcatel-Lucent?

It may already be too late in a traditional NFV sense, in which case the selection of Gorti may have been truly inspired.  We are now, in my view at least, at the point where if you don’t demonstrate a truly awesome grasp of the operationalization of not only NFV but SDN and legacy networking, you don’t get anything new to happen with carrier spending.  There’s not time to save them with new services, and those new services could never be profitable without a better operations platform than we can provide with current OSS/BSS technology.

We seem to be aligning NFV positioning with the only relevant market realities that could drive deployment.  That means that those late to the party may find it impossible to grasp one or more of the NFV benefits and make them their own.  The need to differentiate could drive later players to avoid opportunity in order to avoid competition.  That would be, as it always is, fatal.  So 2015 is more than the year when field trials have to work, it’s the year when vendor positioning has to work.