The Question of MWC: Can NFV Save Us From Neutrality?

At MWC, US FCC Chairman Wheeler tried to clarify (some would say “defend”) the Commission’s Neutrality Order.  At almost the same time Ciena released its quarterly numbers, which were light on the revenue line.  I think the combination of these two events defines the issues that network operators face globally.  I just wish they defined the best way to address them.  Maybe the third leg of the news stool can help with that; HP and Nokia are partnering on NFV.  Or perhaps the new EMC NFV initiative might mean something good.  Let’s see.

Ciena missed on their revenue line by just short of $30 million, about 5% less than expectations and down about a percent y/y.  This is pretty clear evidence that operators are not investing as much in basic transport, which suggests that they are holding back on network capacity build-out as they search for a way to align investment and profit better.  It’s not that there isn’t more traffic, but that the primary source of traffic—the Internet—doesn’t pay incrementally for those extra bits.

Operators obviously have two paths toward widening the revenue/cost-per-bit separation to improve profits.  One is to raise revenue and the other to lower costs, and it’s fair to say that things like the cloud, SDN, and NFV have all been aimed to some degree at both.  Goals are not the same as results, though.  On the revenue side, the problem is that operators tend to think of “new services” as being changes to the service of information connection and transport.  I think that the FCC and Ciena are demonstrating that there is very little to hope for in terms of new connection/transport revenue.

The previous neutrality order, sponsored by VC-turned-FCC-chairman Genachowski, Wheeler’s predecessor, was a big step in favor of OTT players over ISPs.  It had a stated intention of preserving the Internet charging model, meaning bill-and-keep, no settlement, no paid QoS.  It didn’t actually impose those conditions for fear of exceeding the Commission’s legal standing, but even its limited steps went too far for the DC Court of Appeals, which overturned it.  Wheeler had the opportunity to step toward “ISP sanity”, and in his early statements he seemed to favor a position where settlement and QoS might come along.  That hope was dashed, perhaps because of White House intervention.

We still don’t have the full text of the order, but it seems very clear from the press release that the FCC is going to use Title II to establish its standing to make changes, and then do everything that Genachowski wanted, and more.  The order will ban paid prioritization—as far as I can tell no matter who pays.  It will “regulate” interconnect, which seems likely to mean it will not only sustain bill-and-keep but also take a dim view of things like making Netflix pay for transport of video.  And the FCC also proposes to apply this to mobile.

The Internet is the baseline connectivity service worldwide.  More traffic flows through it than through everything else combined, and so you can’t hope to rebuild losses created with Internet services by subsidizing them from business IP or Ethernet.  If Wheeler’s position is what it appears to be, then profitable ISP operation isn’t possible for long, perhaps not even today.  Whether that will mean operators push more capex into independent content delivery mechanisms, which are so far exempt, remains to be seen, as does the technology that might be used.  Certainly there will be a near-term continued capex suppression impact while the whole thing is appealed.

To me, that’s the message of Ciena.  If operators knew that they could make pushing bits profitable they would sustain fiber transport investment first and foremost because that’s where bits are created.  They’ve instead focused on RAN, content delivery, and other things that aren’t bit transport at all and aren’t necessarily sustainable given neutrality regulatory trends.  Dodging low ROI may get harder and harder with mobile services subject to neutrality in the US and roaming premiums ending soon in the EU.

Does that leave us with cost reduction?  Is there revenue out there besides connection/transport?  Some sort of non-traditional connection/transport could help.  SDN might offer some options here but the work isn’t moving in that direction—it’s all about white boxes and lowering costs.  The cloud is a pure new revenue opportunity, but my contacts among operators suggest that they’re not all that good at exploiting cloud opportunity yet.  We’re left, I think, with NFV, which is why NFV has gotten so hot.

Up to now, NFV vendors have fallen into three categories.  One group has network expertise and functions and a collateral interest in sustaining the status quo.  Another has the resources to host stuff, but nothing much to host on it.  The third group has nothing at all but an appetite for PR, and this has sadly been the largest and most visible group.  Perhaps that’s now changing.

I’ve believed for some time that HP was one of the few vendors that actually had a full-spectrum NFV implementation that included operations integration and service lifecycle management.  They also have a partnership program, and now that program is expanding with the addition of Nokia.  Nokia has functionality and mobile expertise, but no respectable hosting capability and no MANO or OSS/BSS integration.

Nokia says IT and the telco world are merging more quickly than expected, which is true.  NFV is a big part of merging them, in fact.  Nokia wants to be the connected environment of the future, where virtualization can deliver the lower costs that operators need for profit and that users need to sustain their growing commitment to the Internet.  Nokia is strong in the mobile network, RAN and virtual IMS/EPC.  They’re essentially contributing VNFs to the picture, but VNFs in the mobile area which represents the last bastion of telco investment.  That could prove critical.

HP is strong in the infrastructure, cloud, and management areas, plus service-layer orchestration.  Their deal with Telefonica suggests that big operators see HP not as a kind of point-solution one-off NFV but as a credible platform partner.  That’s critical because wherever you start with NFV in VNF functional terms, you pretty much have to cover the waterfront in the end or you’ll fail to realize the benefits you need.

The two companies told a story to TelecomTV that made these basic points, though I think without making a compelling link to each company’s own contribution and importance.  Both were careful to blow kisses at open standards and to acknowledge that their pact, which includes integration and professional services to sell and support as a unit, isn’t exclusive.  This, I think, is attributable to the vast disorderly mass of NFV stuff going on.  Nobody wants to bet on a single approach, a single partnership.

That’s likely what gives EMC hope.  EMC has to be worried that almost everyone in the NFV world is treating OpenStack and NFV as synonymous.  Even though that’s not true, and even though the ETSI ISG itself is now seemingly accepting the notion of orchestration both above (what I’ve called “functional” orchestration) and within (“structural” orchestration) the Virtual Infrastructure Manager (VIM) where OpenStack lives, it’s worrying to EMC’s VMware unit.  Which obviously EMC wants to fix.

How far they’ll go here is hard to say.  I doubt that EMC/VMware are interested in doing a complete MANO with OSS/BSS integration, so they could create a VIM of their own to operate underneath this critical functional layer.  The fact that they’ve included Cyan as an early partner suggests this, but IMHO Cyan doesn’t match other NFV players like Alcatel-Lucent, HP, and Overture in terms of MANO and operations integration.  Without MANO, EMC can’t drive the NFV bus, only ride along, and everyone who has a good MANO is already committed to OpenStack.  EMC is also colliding with full-solution player Oracle, who presented their own NFV approach at MWC and targeted some of the same applications the HP/Nokia alliance targets.

My guess here is that EMC will be looking to other telco network vendors (such as Alcatel-Lucent or Ericsson) for partnering in a way similar to that presented by HP/Nokia.  I’d also guess that EMC’s NFV initiatives will put more pressure on Cisco to tell a comprehensive NFV story.  Here their risk is that “partnership” after the HP/Nokia deal will almost have to include much tighter sales/support integration, and a perfect partner for EMC will be hard to find.

Perfection in networking is going to be hard to find, in fact, and we are on a track to search—if not for perfection then at least for satisfaction.  NFV and the cloud could provide new revenue for operators, but there’s no incentive for them to subsidize an under-performing Internet investment with those revenues.  A decade ago, responsible ISPs were calling for changes in the business model because they saw this all coming.  Well, it may now be here.

CIOs See a New Cloud Model Emerging

In some recent chats with enterprise CIOs, I noticed that there were some who were thinking a bit differently about the cloud.  Their emerging views were aligned with a Microsoft Azure commercial on sequencing genomes, though it wasn’t possible to tell whether Microsoft influenced their thinking or they set the tone for Microsoft.  Whatever the case, there is at least a chance that buyers and sellers of cloud services may be approaching the presumptive goal of personalization and empowerment, but in a slightly different way.  This way can be related to a total-effort or “man-hour” vision of computing.

It’s long been common, if politically incorrect, to describe human effort in terms of “man-hours” (or man-days, or man-years), meaning the number of people needed times the length of time required.  If something requires a hundred man-hours, you could theoretically accomplish it with a single person in a hundred hours, a hundred people for an hour’s effort, or any other combination.  Many compute tasks, including our genome example, could similarly be seen as a “total-effort” processing quantum that could be reached by using more resources for a shorter time or vice versa.

Traditional computing limits choices in total-effort planning, because if you decide you need a hundred computers for an hour you have to ask what will pay the freight for them the remainder of the time.  One of the potentially profound consequences of the cloud, the one that’s highlighted by Microsoft and accepted by more and more CIOs, is that the cloud could make any resource/time commitment that adds up to the same number equivalent in a cost sense.  A cloud, with a pool of resources of great size, could just as easily commit one hundred computers for an hour as one computer for a hundred hours.  That facilitates a new way of looking at computing.
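
To make that arithmetic concrete, here’s a minimal sketch of the equivalence; the hourly instance price is an assumption chosen for illustration, not any provider’s actual rate.

```python
# A hypothetical illustration of the "total-effort" equivalence the cloud enables.
HOURLY_PRICE = 0.10  # assumed price per instance-hour, in dollars

def job_cost(instances, hours, price_per_instance_hour=HOURLY_PRICE):
    """Cost of a job that uses `instances` machines for `hours` each."""
    return instances * hours * price_per_instance_hour

print(job_cost(1, 100))   # one machine for a hundred hours   -> 10.0
print(job_cost(100, 1))   # a hundred machines for one hour   -> 10.0
print(job_cost(10, 10))   # any split in between              -> 10.0
# The 100-machine version finishes in one hour instead of a hundred,
# which is the tradeoff dedicated hardware can't offer.
```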

Most of you will probably recognize that we’re stepping over the threshold of what used to be called grid computing.  With the “grid” the idea was to make vast computational resources available for short periods, and what separated it from the “cloud” was that specific resource/time assumption.  When the cloud came along, it focused on hosting stuff that we’d already developed for discrete IT, which means that we accepted the traditional computing limitations on our total-effort tradeoffs.  One box rules them all, not a hundred—even when you have a hundred—because we built our applications for a limited number of boxes committed for long periods of time.

The reason why we abandoned the grid (besides the fact that it was old news to the media at some point) was that applications were not designed for the kind of parallelism that the grid demanded.  But early on, parallelism crept into the cloud.  Hadoop, an implementation of the MapReduce concept, is an example of parallelism applied to data access.  My CIO friends suggest that we may be moving toward accepting parallelism in computing overall, which is good considering that some initiatives are actually trying to exploit it.
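
For readers who want to see the pattern rather than the product, here’s a toy map/reduce word count; it sketches the parallel data-access idea only, using a local process pool to stand in for a cluster, and it is not Hadoop’s actual API.

```python
# A minimal, illustrative map/reduce-style word count.
from collections import Counter
from multiprocessing import Pool

def map_chunk(lines):
    """Map step: count words within one chunk of the input."""
    counts = Counter()
    for line in lines:
        counts.update(line.split())
    return counts

def reduce_counts(partials):
    """Reduce step: merge the per-chunk counts into one result."""
    total = Counter()
    for partial in partials:
        total.update(partial)
    return total

if __name__ == "__main__":
    lines = ["the grid was parallel", "the cloud was not", "the cloud can be"]
    chunks = [lines[i::4] for i in range(4)]   # split the data across 4 workers
    with Pool(4) as pool:
        partials = pool.map(map_chunk, chunks) # each chunk processed in parallel
    print(reduce_counts(partials))
```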

NFV is an example.  We hear all the time about “horizontal scaling”, which means that instances of a process can be spun up to share work when overall workloads increase, and be torn down when no longer needed.  Most companies who have worked to introduce it into NFV, notably Metaswitch, whose Project Clearwater is a scale-friendly implementation of IMS, understand that you can’t just take an application and make it (or its components) scalable.  You have to design for scaling.
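
Here’s a small sketch of what “design for scaling” means in practice, with invented names (this is not Clearwater code): because the workers hold no per-session state, scaling is nothing more than changing the number of instances attached to a shared queue.

```python
# Stateless workers sharing one queue: the essence of design-for-scaling.
import queue
import threading

REQUEST_QUEUE = queue.Queue()

def handle_request(request):
    # All state needed to process the request travels with the request itself
    # (or lives in a shared back-end store), never in the worker instance.
    return request.upper()

def worker(name):
    while True:
        request = REQUEST_QUEUE.get()
        if request is None:            # sentinel: tear this instance down
            break
        print(name, "handled", handle_request(request))
        REQUEST_QUEUE.task_done()

# "Horizontal scaling" is then just a matter of changing the worker count.
workers = [threading.Thread(target=worker, args=(f"instance-{i}",)) for i in range(3)]
for w in workers:
    w.start()
for request in ["register", "invite", "bye"]:
    REQUEST_QUEUE.put(request)
REQUEST_QUEUE.join()                   # wait for the in-flight work to finish
for _ in workers:
    REQUEST_QUEUE.put(None)            # scale the pool back down to zero
```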

Another related example is my personalization or point-of-activity empowerment thesis.  The idea is to convert “work” or “entertainment” from some linear sequence imposed by a software process into an event-driven series.  Each “event”, being more atomic in nature, could in theory be fielded by any given instance of a process.  While event-driven programming doesn’t mandate parallelism, it does tend to support it as a byproduct of the nature of an event and what has to be done with it.

It seems to me that we have converging interest in a new application/service/feature model.  Horizontal scaling and failover on one side and point-of-activity personalization on the other are combining with greater CIO interest in adopting a grid model of parallel processing.  All of this is enriching the cloud.

Which, as I’ve said before, needs enriching.  The cloud broke away from the grid because it could support current application models more easily, but at the very least it’s risky to assume that current application models are where we’re heading in the long term.  I think the three points on parallelism I’ve made here are sufficient to justify the view that we are heading toward event-driven, parallel computing and that the cloud’s real benefit in the long term comes from its inherent support for this capability.

I also think that this new model of the cloud, if it spreads to include “hosting” or “the Internet” (as it’s likely to do) is what generates a new model of networking.  Event-driven, personalized, parallel applications and services are highly episodic.  Yes, you can support them with IP-connectionless services, but the big question will be whether that’s the best way to do it.  If we assume that Internet use increasingly polarizes into mobile/behavioral activity on one side and long-term streaming on the other, then that alone could open the way to new models of connectivity.

I mentioned in a prior blog that mobile/behavioral services seem to favor the notion of a personal agent in the cloud operating on behalf of the user, fielding requests and offering responses.  This would internalize actual information-gathering and processing and mean that “access” was really only accessing your agent.  On the content side, it is very likely that today’s streaming model will evolve to something more on-demand-like, with a “guide” like many of the current streaming tools already provide to disintermediate users from “Internet” interfaces.  That would facilitate even radical changes in connectivity over time.

It’s really hard to say whether “facilitating” changes is the same as driving them.  What probably has to happen is that conventional switches and routers in either legacy device or software-hosted form would need to evolve their support for OpenFlow and through that support begin to integrate new forwarding features.  Over time, and in some areas like mobile/EPC replacement, it would be possible to build completely new device collections based on SDN forwarding.  If traditional traffic did indeed get subducted into the cloud core by personal agent relationships and CDN, then these could take over.

What remains to be seen is how “the cloud”, meaning cloud vendors, will respond to this.  Complicated marketing messages are always a risk because they confound the media’s desire for simple sound bites, but even buyers find it easier to sell cost reduction now than improved return on IT investment over time.  The best answers aren’t always the easiest, though, and the big question for the cloud IMHO is who will figure out how to make the cloud’s real story exciting enough to overcome a bit of complexity.

What’s “NFV” versus “Carrier Cloud?”

We had a number of “NFV announcements” at MWC and like many such announcements they illustrate the challenge of defining what “NFV” is.  Increasingly it seems to be the carrier cloud, and the questions that raises are “why?” and “will this contaminate NFV’s value proposition?”

NFV has always had three components.  One is the virtual network function pool (VNFs) that provide the hosted features that build up to become services.  Another is the resources (NFV infrastructure or NFVI) used to host and connect VNFs, and the last is the MANO or management/orchestration functions that deploy and sustain the VNFs and associated resources.  It’s not hard to see that VNFs are pretty much the same thing as application components, and that NFVI is pretty much like cloud infrastructure.  Thus, the difference between “cloud” and “NFV” is largely the MANO area.  If MANO is there, it’s NFV.  But that raises the first question, which is whether there are versions of MANO, and even whether it’s always useful.
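
A hedged sketch of that three-way split may help; the class and method names below are invented for clarity and are not ETSI-normative, but they show why VNFs look like application components, NFVI looks like cloud infrastructure, and MANO is the piece that deploys and sustains them.

```python
# An illustrative (not normative) model of the three NFV pieces named above.
from dataclasses import dataclass, field

@dataclass
class VNF:                 # a hosted feature, analogous to an application component
    name: str
    image: str             # the software package that implements the function

class NFVI:                # hosting/connection resources, analogous to cloud infrastructure
    def allocate_host(self, vnf):
        return f"server-for-{vnf.name}"

@dataclass
class MANO:                # deploys and *sustains* VNFs on NFVI; the piece that makes it "NFV"
    infrastructure: NFVI
    deployed: dict = field(default_factory=dict)

    def deploy_service(self, vnfs):
        for vnf in vnfs:
            self.deployed[vnf.name] = self.infrastructure.allocate_host(vnf)

    def heal(self, vnf_name):
        # lifecycle management: re-place a failed function without manual steps
        self.deployed[vnf_name] = self.infrastructure.allocate_host(VNF(vnf_name, "redeploy"))

mano = MANO(NFVI())
mano.deploy_service([VNF("firewall", "fw-image"), VNF("nat", "nat-image")])
mano.heal("firewall")
print(mano.deployed)
```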

Automated deployment and management is usually collectively referred to as “service automation”.  It is roughly aligned with what in the cloud would be called “application lifecycle management” or ALM.  This function starts with development and maintenance, moves through testing and validation, and ends with deployment.  In most applications, ALM is something that, while ongoing in one sense, is episodic.  You do it when there are new apps or changes, but generally ALM processes are seen as stopping short of the related process of sustaining the applications while they’re active.

MANO is a kind of bridge, something that binds the ALM processes of lifecycle management to the sustaining management processes.  This binding is where the value of MANO comes in, because if you assume that you don’t need it you’re tossing the primary differentiator of MANO.  So what makes the binding valuable?  The answer is “dynamism”.

If an application or service rarely changes over a period of months or years, then the value of automating the handling of changes is clearly lower.  Static services or applications could in fact be deployed manually, and the resource-to-component associations for management could be set manually as well.  This is actually what’s getting done in a lot of “NFV trials”, particularly where the focus is on long-lived multi-tenant services like mobile services.  It’s not that these can’t be NFV applications, but that they don’t exercise or prove out the primary MANO differentiator that is the primary NFV value proposition—that dynamic binding of ALM to sustaining management.

Applications/services become dynamic for one of two reasons.  First, they may be short-lived themselves.  You need service automation when something is going to live for days or hours not months or years.  Second, they may involve highly variable component-to-resource relationships, particularly where those relationships have to be exposed to service buyers to validate SLA performance.
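
A back-of-the-envelope illustration makes the point; every number below is an assumption rather than operator data, and the only message is that the value of service automation scales with the rate of lifecycle change.

```python
# Assumed costs, for illustration only: how much a lifecycle change costs with
# and without automation, and what that implies at different change rates.
MANUAL_COST_PER_CHANGE = 80.0      # assumed cost of a hand-driven lifecycle change, in dollars
AUTOMATED_COST_PER_CHANGE = 2.0    # assumed cost once service automation handles the same change

def annual_savings(changes_per_year):
    """Savings from automating lifecycle changes, per service, per year."""
    return changes_per_year * (MANUAL_COST_PER_CHANGE - AUTOMATED_COST_PER_CHANGE)

for label, changes in [("static multi-tenant service", 2),
                       ("monthly-reconfigured vCPE", 12),
                       ("short-lived/dynamic service", 500)]:
    print(f"{label:28s} -> ${annual_savings(changes):,.0f} per year")
```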

I reported that NFV is making better progress toward proving itself out at the business level, which is both good and important.  However, some of that progress is of the low-apple variety.  We’re picking applications that are actually more cloud-like, ones that represent those long-lived multi-tenant services that don’t actually require much ALM-to-management bridging.  Full-spectrum service automation is less stressed in these services, and so the case they make for NFV is narrow.

You can absolutely prove out an “NFV” implementation of mobile networking, mostly because early operator work with these services demonstrates that there is a capital cost savings from the transition from appliances to servers, and that operations costs and practices aren’t much impacted by the shift because the hosted version of the service looks in a management/complexity sense much like the appliance version did.  You can also prove out “NFV” implementations of virtual CPE for business users for much the same reason.  The services are actually rather static over the contract period, which is rarely less than a year.  Where dynamism is present, it’s often in the service-layer feature set (firewalls, NAT, etc.) and NFV principles here can reduce both capex and opex because they eliminate truck rolls.

There’s still a risk here, though, because we’ve provided only a limited validation of the full range of MANO “bindings”.  The fact that so many vendors present a “MANO” strategy that’s really OpenStack is a reflection of the fact that many multi-tenant NFV trials are really carrier cloud trials.  Is MANO more than that?  If it’s not then a lot of people have wasted years defining NFV.  If it is, then we have to be clear on what “more” means, and prove that an assertion of additional functionality is both true and presents value to justify its costs.

I think we have to get back to dynamism here.  If there is a value to “network services” that can not only be sustained in an evolving market but grow over time, that value has to be derived from personalization or specialization.  Since bits are neither personal nor special, the value has to come from higher layers.  Some of these valuable services may be, like content or mobility, based on multi-tenant behaviors of facilities and components, but it’s hard to see how we can depend on new services of this type arising.  What else of this nature is out there?  On the other hand, it’s easy to see that applications and features built on top of both mobile and content services could be very valuable.

I’ve talked before about mobile/behavioral symbiosis for both consumer and worker support.  It’s also easy to conceptualize additional services in the area of content delivery.  Finding not only videos but portions of videos is very useful in some businesses.  Hospitals who video grand rounds would love to be able to quickly locate a reference to a symptom or diagnosis, for example.  In the consumer space, sharing the viewing of a portion of a movie would be as valuable as sharing a YouTube stupid pet trick—maybe even more so depending on your pet-trick tolerance.

Building upward from multi-tenancy toward personalization/specialization is what MANO should and must provide for, and we have to ensure that we don’t get trapped in the orchard of low apples here.  If we do, then we risk having NFV run out of gas.

HP Grabs a Potential Lead in the Greatest IT Race

We’re obviously in a period of transformation for computing and networking, and it’s equally obvious that HP is intent on improving its position in the market through this transition.  They’ve made two recent announcements that illustrate what their strategy could be, and it could mean some interesting dynamics in both computing and networking over the next couple years.

The first step was HP’s acquisition of Aruba Networks, a leading player in the enterprise WiFi space.  If you’ve been following my blog for any period of time you know that I believe that mobile worker empowerment (point-of-activity empowerment) is critical for enterprises to take the next step in productivity enhancement.  That, in turn, is critical in driving an uptick in technology spending.  We’ve had three periods in the past when IT spending has grown faster than GDP, and each was driven by a new productivity dynamic.  This could be the next one in my view.

WiFi is important to this developing empowerment thesis for two reasons.  First, all workers who are potential targets for mobile empowerment spend some of their time on company premises where WiFi is available, and over 75% spend nearly all their mobile time on-prem.  That means that you can hit a large population of workers with a WiFi mobile productivity approach.  Second, the cost of WiFi empowerment is lower than empowerment via 4G, and you can always spread a WiFi strategy over 4G if it proves useful to do so.

Mobile productivity obviously means getting to the worker, so there has to be a communications ingredient, and that’s likely the basis for the HP move with Aruba.  It also requires a new computing model, something that turns applications from driving the worker through fixed workflows to responding to work events as they occur.  This new computing model is very suitable for cloud implementation, public or private, because it’s highly dynamic, highly agile.  Given that HP wants (like everyone else) to be a kingpin of the cloud, it makes sense to be able to link a cloud story with a WiFi/empowerment story.

The second announcement involves NFV, which may seem a strange bedfellow for worker WiFi and empowerment but isn’t.  I’ve commented before that my model says that optimum NFV deployment would create the largest single source of new data centers globally, and could create the largest source of new server deployments too (the model can’t be definitive on that point yet).  That’s certainly enough reason for a server giant like HP to want to have an NFV position.  Now, HP is announcing what’s perhaps the most critical win in the SDN/NFV space.

Telefonica has long been my candidate for the most innovative of the major network operators, and they’ve picked HP to build their UNICA infrastructure, the foundation for Telefonica’s future use of NFV and SDN.  I think Telefonica is the thought leader in this space, the operator who has the clearest vision of where NFV and SDN have to go in order for operators to justify that “optimum NFV deployment” that drives all that server opportunity.  They are very likely going to write the book on NFV.

And HP is now going to be taking dictation, and in fact perhaps being a contributing author.  HP is one of those NFV players who have a lot more substance than they have visibility with respect to NFV (as opposed to most NFV vendors who are all sizzle and no substance).  I’ve seen a lot of their material and it’s by far the most impressive documentation on NFV available from anyone.  That suggests HP has a lot more under the hood to write about, even if they haven’t learned to sing effectively yet.

There are dozens of NFV tests and trials underway, most of which are going to prove only that NFV can work, not that NFV can make a business case.  Operators are now realizing that and are working to build a better business case (as I’ve reported earlier) but many of the trials are simply not going to be effective in doing that.  The ones most likely to succeed are the ones sponsored by operators who understand the full potential of NFV and SDN and who are supporting tests of the business case and not just the technology.  Who, more than Telefonica as the NFV thought leader, can be expected to do that?

HP has just sat itself at the head of the NFV table because they’re linked with the operator initiative most likely to advance NFV’s business case.  And everyone in the operator community knows this in their heart; I told a big US Tier One two years ago that Telefonica was the most innovative operator and they didn’t even disagree.  So imagine the value of working with an operator of that stature on defining the best model to meet the NFV business case.  It just doesn’t get any better.

Well, maybe it does.  NFV is not a “new” technology, it’s an evolution of the cloud to add a dimension of management and dynamism to cloud infrastructure.  The lessons of NFV can be applied to cloud computing and thus can be applied to mobile productivity.  For network operators, cloud computing services that are aimed at the mobile worker’s productivity would be far more profitable and credible than those aimed elsewhere.  WiFi and 4G integration with dynamic applications, created and managed using NFV tools, could be the rest of that next-wave business case that could drive the next tech growth cycle.  With proper exploitation of Aruba and an interweaving of NFV tools honed in the Telefonica deal, HP could build a new compute model.

The operative word as always is “could”.  HP has thrown down a very large gauntlet here, one that broad IT rivals like Cisco, IBM, and even Oracle can hardly ignore.  They’ve also put NFV players like Alcatel-Lucent on notice.  They’ve made a lot of enemies who will be eager to point out any deficiencies in the HP vision.  And deficiencies in vision are in the eyes of the beholder in a very literal sense.  HP has, like many companies, failed to promote itself to its own full potential.  That may not have mattered in a race where all the runners are still milling around the starting line.  If you make your own start, clean and early, you darn sure want to make sure everyone in the stands knows you’re running, and leading.  The question for HP now is whether they can get that singing voice I’ve mentioned into order, and make themselves as known as they are good.

Ericsson’s Pre-MWC Announcement Sweep and NFV

It used to be that trade shows were the best place to make new announcements because the media was all there, presumably captivated by the potential of all those vendors and products.  Lately the opposite has been true; there are simply too many voices shouting to be heard and prospects get clickitis from even trying to follow the flood of URLs that emerge.  Ericsson is taking this to heart by making a big announcement (a series, in fact) ahead of MWC, and it’s worth seeing how it might tie into industry conditions, competition, and futures.

Ericsson is interesting among the major network vendors because it’s really a “network equipment vendor” only in the mobile space.  The company relies much more on a combination of OSS/BSS software (from its acquisition of Telcordia) and on professional services.  On one hand, that could be a plus in the NFV space because it’s not much at risk of having its hardware displaced and operations efficiency is the sweet spot for NFV.  On the other hand, NFV is a wide-scope initiative with much of the early PoC work focused on areas where Ericsson isn’t an incumbent.

Ericsson’s MWC pre-announcement seems aimed at shoring up its position, and one reason may be that at least some vendors (Oracle for one) are planning NFV announcements at MWC.  Why not rain on their parades a bit?  However, the most interesting thing about what Ericsson is doing is who it seems to be targeting, and why.  For more we have to look at what they’ve announced.

The first step in Ericsson’s positioning sweep is what they call Digital Telco Transformation, which is a combination of their two strategic priorities, OSS/BSS software and professional services.  The goal of this is offered by their head of consulting and integration:  “Only a holistic approach that reinvents the telco operating model can ensure operators avoid major business model disruptions and realize their digital telco vision.”  The key phrases are “holistic” and “reinvents the telco operating model”.

Because of their narrow equipment footprint, Ericsson is vulnerable to the death-of-a-thousand-cuts fragmentation of NFV momentum that all these PoCs create.  They need to focus on the problem at a higher level than equipment, which means they have to focus on operations as the unifier of all the NFV stuff going on.  “Holistic” means “across all services and equipment” to Ericsson, and “reinventing the telco operating model” means starting the transformation to NFV with an operations transformation.  The net is that operations is driving operators to the future in Ericsson’s view.

This operations focus is interesting because up to this year, operators were driving PoCs out of their science and technology activity with the CIOs largely on the sidelines.  That’s been changing this year as I’ve recently pointed out, and Ericsson is surely playing to that change and to this new and critical constituency.  One that, by the way, Ericsson is in a good position to leverage.  If operations efficiency is a key to making a broader business case for NFV (as I think it is), then the OSS/BSS people are the ones to work with.

Another thread in the same tapestry is the second Ericsson positioning step.  Expert Analytics 15.0 is aimed at tracking customer satisfaction through the service transitions NFV and other developments would create.  The tool creates a feedback loop from customer perception/satisfaction to service automation and change, and it also involves OSS/BSS and the CIO.  It gathers information from many sources and drives automated processes, which sure sounds like a broad strategy.

Thread three is the App Experience Optimization which aims at the app user, and whose goal is the optimization of the app experience and operator profits in tandem.  This is really a professional services tie-in, something that brings Ericsson’s integration and optimization expertise to bear.  It extends that notion of a broad process that involves a lot of stuff, and it ties things into mobile services more tightly.

The fourth and fifth elements are products, both aimed directly at mobile infrastructure.  One enhances Ericsson’s solutions in mobile backhaul routing and the other in CDN.  Ericsson is integrating these network product elements with the operations announcements in a new software release and through some customized portfolios.  This appears to me to be a second anchor—exploit operations, exploit mobility—to a new positioning that’s intended to do what I called “supersetting” in yesterday’s blog (referencing Cisco’s own SDN and NFV approach).  Ericsson likely believes that if you generalize the problem and involve operations, you can defeat point-PoC NFV initiatives from other vendors, all of whom are reluctant to be “holistic” on NFV at all for various reasons, largely tactical.

While Ericsson makes a few connections between their announcement-fest and NFV it’s pretty obvious that NFV isn’t the specific focus.  It sure seems to be waiting in the wings, though.  Pushing this out now, well ahead of MWC, might also tune reporters at the show to the importance of operations in NFV, which is at the minimum going to force those who have an operations story to accent that piece of their announcement, perhaps laying back on areas where Ericsson has nothing comparable.  Those who don’t have an ops story could even risk negative coverage.

It’s harder to say whether the Ericsson strategy will actually pay off in NFV traction.  While it is true that the CIO is an ever-more-important player in NFV, it’s also true that they haven’t exactly been the prime movers in NFV so far.  Operators are conflicted about whether operations needs to be leading the charge or changed out completely in favor of something else.

It’s also true that OSS/BSS vendors, including Ericsson, have yet to deliver on a real NFV story, though some have come close or promised it.  But of course if you’re Ericsson you may reason (as I believe Cisco has) that if the price of advancing NFV is advancing your own disintermediation, why bother?  By positioning their suite of announcements as operations enhancements and business model transformers, they’re elevating their story out of the NFV clutter.  They may also be elevating it out of NFV relevance, though.

The tangible weak point is the MANO thing.  Operators generally believe that service automation is about MANO, that NFV has introduced that key concept to the picture, and that MANO isn’t a part of OSS/BSS.  That’s what leads some operators to see a completely new operations picture orchestrated around MANO.  Some also see MANO creeping up into the OSS/BSS, and Ericsson doesn’t seem to be taking a position on that critical point.

It’s hard to see how Ericsson would expect to win over competitors who could actually deliver an NFV product unless Ericsson could deliver the heart and soul, at least, which is MANO.  Ericsson may be hoping that the OPNFV effort (open-source software for NFV, under the Linux Foundation) will create that software, or that it will at least marginalize those who attempt to provide an NFV product.  That might be true if 1) OPNFV gets something done quickly and 2) it’s comprehensive enough to deliver on the expected NFV benefit case.  It doesn’t seem likely either condition will be met this year.

Maybe I’m a conspiracy theorist here, but it seems to me like we’re seeing a lot of maneuvering around NFV right now, and I’m inclined to believe that it’s because operators are making progress with the business case.  If that’s true, then we’re approaching the point where lack of a coherent NFV position could be a real problem for vendors who want to sell to operators.  If that is true, then a lot of vendors who have been blowing kisses at NFV and engaging in vacuous partner programs (as opposed to programs where there is real value from a key player) are going to have to at least buy a ring and practice getting down on their knees to a strong partner.  It’s too late to build something at this point.

Can Cisco Win the SDN/NFV Race by Not Running?

One of the “in-passing” comments made by John Chambers in Cisco’s earnings call was this: “We are pulling away from our competitors and leading in both the SDN thought leadership and customer implementations.”  Interestingly there’s a strong element of truth in that statement despite the fact that Cisco is widely regarded as being anti-SDN, and that truth and the reason behind it may have important consequences beyond SDN.  Cisco raises the question of whether less is more in addressing “revolutionary” technology.

Most people think of SDN in terms of the Open Networking Foundation (ONF) and the OpenFlow protocol.  This combination is supposed to revolutionize networking and the vendor space by changing the focus from adaptive L2/L3 protocols and existing switch/router products to central control of forwarding and white-box devices.  Obviously vendors like Cisco (and all its competitors) are less than happy about that particular outcome, but many of the competitors have at least blown kisses at ONF/OpenFlow SDN because it might help them gain market share at Cisco’s expense.

From the first, Cisco’s SDN strategy has been one I’ve always described as supersetting.  You defend yourself against a market development by enveloping its goals in a larger goal set whose implementation you can control better.  Cisco came out with Application Centric Infrastructure (ACI) to do just that.  All network forwarding behavior is a black box to the users of the network, so Cisco embraced the visible consequences of SDN in ACI, which taps the same benefits and thus lessens incentive for users to migrate.  They even added concepts like a richer management and policy-based control framework, things lacking in formal SDN.  Finally, they made ACI an evolution of current network technology and not a centralized white-box-and-standards-geeks revolution.

What supersetting as a concept relies on is the fact that revolutionary change tends to expose users to costs faster than to benefits.  A user looking at their first deployment of purist OpenFlow SDN has to get a controller (new), learn to use it correctly (new operations practices), get white-box switches (from a new vendor), and then stick this whole thing inside a network that still largely works the old way.  Cisco says “Hey, suppose we could give you what this SDN revolution thing is supposed to deliver, but without any of that expensive novelty?”  You can see how that would be appealing.
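
For anyone who hasn’t looked inside the purist model, here’s a toy illustration of the controller-plus-white-box arrangement; it sketches the match/action concept only, not the OpenFlow protocol itself, and the class names are invented.

```python
# A toy model of centrally controlled forwarding: the controller computes rules,
# the white-box switch just applies them and punts misses back to the controller.
from dataclasses import dataclass

@dataclass(frozen=True)
class FlowRule:
    match_dst: str        # e.g. a destination prefix
    action: str           # e.g. "forward:port2" or "drop"

class WhiteBoxSwitch:
    def __init__(self, name):
        self.name = name
        self.table = []   # holds rules; the switch makes no routing decisions itself

    def install(self, rule):
        self.table.append(rule)

    def forward(self, dst):
        for rule in self.table:
            if dst.startswith(rule.match_dst):
                return rule.action
        return "send-to-controller"   # table miss: ask the controller what to do

class Controller:
    def program(self, switch, rules):
        for rule in rules:            # centrally computed forwarding policy
            switch.install(rule)

switch = WhiteBoxSwitch("edge-1")
Controller().program(switch, [FlowRule("10.1.", "forward:port2"), FlowRule("10.2.", "drop")])
print(switch.forward("10.1.0.5"))     # forward:port2
print(switch.forward("192.168.0.1"))  # send-to-controller
```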

Even if buyers are willing to look ahead a bit and value the new networking model, there’s still a set of issues.  We know for sure that you can build a data center network from purist SDN.  We know you can build a core domain that way.  We know that it absolutely would not scale to the level of the Internet and we can’t even prove it would work at the scale of a decent-sized enterprise VPN.  SDN “domains” have to be small enough for controllers to be able to handle, but it’s not clear how “small” that is.  It’s even less clear how you connect the domains, unless you want to fall back on traditional L2/L3 concepts that is.

NFV faces an almost-identical challenge.  What we can do with NFV today not only isn’t revolutionary, it’s often not even particularly useful.  That’s largely because the early NFV applications simply don’t impact enough cost or generate enough prospects for new services to move either cost or revenue overall.  You have to go big, but operators, like most buyers, will not overhaul their whole technology model for five or ten, or even twenty, percent reductions in cost.  I’ve quoted a good operator friend on this before:  “If we want a 20% savings we’ll just beat Huawei up on price!”

That same problem of cost and benefit timing arises with NFV, perhaps even more than it does with SDN.  In both SDN and NFV one of the biggest unresolved issues is the relationship between an “enclave of the new” and the vast network of the old.  The specific challenge for NFV is that the necessary scope of benefits is much larger—opex efficiency, new service revenues, and capital savings.  NFV raises early risks, creating more of the very thing that Cisco saw as a vulnerability of SDN-a-la-OpenFlow, and then exploited with ACI.  Might Cisco have similar supersetting plans for NFV?

They might, indeed.  Their Tail-f acquisition gives them a YANG/NETCONF combination that lets them control multi-vendor infrastructure.  Not only does this help them in their SDN supersetting by giving Cisco a way to extend ACI benefits across other vendors’ equipment, it provides Cisco with a solution to the end-to-end service control challenge that NFV so far has declined to address.

If you marry ACI with Tail-f’s ConfD approach and then stir in a little OpenStack, you’d have something a lot more useful than NFV would be alone.  It’s not fully responsive to market needs in my view, but it’s better than “standard” NFV.  And Cisco just announced that a single-northbound-API version of ConfD would be free, making it either a nice proverbial camel’s nose or a kiss blown at purist NFV just as Cisco’s OpenFlow support is a kiss for purist SDN.  Or both.

The most vulnerable period of any new technology is its initial adoption, because of that mismatch of pace-of-costs versus pace-of-benefits I opened with.  Most buyers probably realize this; remember my comment that one of the top three NFV questions is “Where do I start?” even among those who have started?  I also think that the reluctance or inability of both standards groups and vendors to take a big, full, bite of the issues up front contributes to this problem.  Any uncertainty about starting a revolution tends to send people back to the coffee shop, favoring evolutionary approaches.

Cisco now has a policy-management strategy for handling SDN and NFV services.  It’s not what I’d endorse as ideal, but it’s there, and better than a chasm of omission such as we get from many other vendors.  Cisco also has a management/orchestration strategy with that same qualifier of “better than nothing”.  They are singing well in a market where most of their competitors are tongue-tied.  And they have servers, a cloud presence.  All that adds up to their being a force in NFV in the same way as they’ve been a force—a winner—in SDN.

I think Cisco is winning in SDN and their supersetting strategy could win in NFV too.  What could change the opportunity landscape for a competitor?  Here’s my list of requirements a Cisco competitor would have to deliver on, and I want to point out that these same requirements exist for the cloud and SDN as well.  Thus, it’s these three things that will determine whether Cisco itself has a lock on being the “next Cisco”, or whether somebody could still ride a revolution to that heady goal.

First and foremost is strong service modeling using the “declarative” approach that defines objects and goals rather than scripting steps; there’s a sketch of what that looks like after this list.  This is something that HP and IBM can deliver for sure today (Cloudify probably can as well), and there may be others who can but haven’t supplied me the documentation.  All deployment and management has to be based on this modeling.

Second and nearly as critical, integrated management that includes legacy infrastructure.  That means you have to be able to deploy services edge to edge, and also to manage both legacy and NFV elements comingled in whatever way operators find useful.  HP has this for sure, and I think Alcatel-Lucent, Oracle, and Overture have it as well.  I think IBM could supply it but I’m less sure about them.

Third, federation of administrative “zones”.  Operators will need to partner with other operators, and likely with cloud providers as well.  Many will have to create arms-length subsidiaries that will need to create cooperative services across the border.  Everyone will have to deal with how NFV domains link with legacy VPNs, the Internet, etc.  Here I can’t name vendors because I don’t have enough information on anyone’s solution.  In fact, I have doubts that anyone has a full solution here yet.

Federation has been ignored by vendors because it’s not as important in making the business case for NFV at present.  It may be absolutely critical for getting that first NFV application or trial to expand, not only because partnerships are routine among operators but because early NFV enclaves are likely to develop from independent trials, and they’ll have to link up quickly or risk creating silos.  Remember that absent NFV- or SDN-specific federation approaches you end up falling back on adapting legacy interconnect, and that favors Cisco’s evolutionary model because it works with existing gear by definition.
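
To illustrate the first requirement on my list, here’s a minimal sketch of a declarative service model; the schema is invented for illustration and is not TOSCA or any vendor’s actual modeling language.  The point is that the model states objects and goals and generic software derives the steps, rather than a per-service script spelling them out.

```python
# An invented, illustrative declarative service model: objects and goals only.
service_model = {
    "service": "business-vpn-with-firewall",
    "objects": [
        {"name": "vpn",      "type": "connectivity", "goal": {"sites": 12, "sla": "gold"}},
        {"name": "firewall", "type": "vnf",          "goal": {"instances": "scale-with-load"}},
    ],
}

def realize(model):
    """Walk the model and translate each declared goal into deployment actions."""
    for obj in model["objects"]:
        # In a real implementation this dispatch would drive MANO or legacy management;
        # here we only print the decision, to show that no per-service script is needed.
        print(f"deploy {obj['type']} '{obj['name']}' to meet goal {obj['goal']}")

realize(service_model)
```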

Little steps reduce risk in one sense and expand it in another.  It’s fine to shuffle to success as long as your movement adds up to progress toward the ultimate goal.  In SDN, Cisco is exploiting the shuffling of others by simply embracing the goal without the revolution.  That could work in NFV and even in the next generation of the cloud, where cloud-specific apps will remake the landscape.  It may be that standards groups and Cisco competitors alike have gotten too used to incrementalism as a means of controlling cost and risk, and don’t recognize that related notion of “critical mass”.  You have to generate enough early benefits to justify ongoing commitment and growth in any new technology.  Cisco’s SDN supremacy is proof that didn’t happen in the SDN space, and supersetting isn’t out of gas yet.

Why Mobility and NFV are the Cloud’s Best Friends

A recent research report continued on a theme that’s become a bit of a cloud computing mantra—we’re exiting the “early adopter” phase of the cloud and heading into the main event.  In some sense this is true, because we are certainly in the “early” phase of the cloud.  In another sense it’s misleading because almost all big companies whose spending dominates IT overall have been cloud adopters for a year or more.

The biggest issue with the early adopter model, IMHO, is its intrinsic presumption of homogeneity of the cloud market model itself.  We assume that adoption is working against a static goal, that users are on a learning curve that will take them to the same place at the end.  Suppose, though, that it’s not the user that’s changing but the cloud?  If in fact the cloud is a moving target, then how would you know if users were “early” to the cloud or perhaps not even there yet if measured by the evolving cloud paradigm?  What is the actual evolution of the cloud model, and where are we in that process?

If we start at the top (as I’m always inclined to do) then we can divide cloud computing into two main benefit models—cost-based and feature-based.  The cost-based model of the cloud says that it can do the stuff we do today at a lower cost.  The feature-based model says the cloud can do stuff we don’t do today at all.

There has always been a problem with the cost-based benefit model.  If you look at the essence of cost-based cloud adoption, it is this:  “I can create an IT service so much cheaper than you can that I can earn a respectable ROI on my service and still offer you enough savings to offset your concerns.”  That’s obviously possible, but also obviously something that gets harder as you move out of “early adopters” with special cost situations and into the mainstream.  Economy of scale isn’t linear, as I’ve pointed out before.  The curve plateaus, and that means that every win makes the next one harder to come by.
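
A quick illustration of that plateau, using assumed (not real) cost figures: fixed costs get spread thinner as scale grows, but the per-unit floor never moves, so each increment of scale buys a smaller saving.

```python
# Illustrative unit-cost curve; the figures are assumptions chosen to show the shape.
FIXED_COST = 1_000_000.0        # assumed fixed facility cost per month, in dollars
VARIABLE_COST_PER_UNIT = 40.0   # assumed incremental cost per hosted unit

def unit_cost(units):
    return FIXED_COST / units + VARIABLE_COST_PER_UNIT

for n in (1_000, 10_000, 100_000, 1_000_000):
    print(f"{n:>9,} units -> ${unit_cost(n):8.2f} per unit")
# Each tenfold increase in scale buys a smaller saving, so every new win gets
# harder to justify on cost alone.
```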

If the cost-based cloud is indeed under pressure you’d expect warning signs, and I think they’re there for all to see.  If IaaS margins and value propositions are under more threat, we should see more focus on PaaS and SaaS because these two cloud models displace more costs and thus build benefits for a broader community.  That’s exactly what we do see.  You’d expect to see cloud pricing fall as providers try to get volume adoption kicked off, and we see that too.  You’d expect to see cloud providers looking more to augment basic cloud revenues as their own margins shrink, and new “platform service” enhancements to the cloud seem to be announced every day.

IaaS, in my model, runs out of gas short of 10% penetration into overall IT spending.  If you add in SaaS and PaaS you can get cost-based cloud up to less than 25%, but beyond that the model says that savings are too low for buyers and profits too low for sellers.  By 2020 you need a transition to a feature-driven model of the cloud or you hit the wall.

Feature-based cloud opportunity seems simple and obvious, even inevitable, on the surface.  It’s easy to identify things (like mobility) that are opportunity drivers.  The problem is that these drivers have been around for a year or more and nothing of any consequence has happened.  At least, so the enterprises say.  Almost all of them recognize that there are special application planning and development techniques associated with building cloud-specific apps that could drive a feature-based cloud explosion.   They just don’t know what they are.

Some startups do.  Many, in fact, have built custom applications for the cloud and are exploiting cloud capabilities extensively.  Most social-network companies fit this model, for example.  The problem of course is that the skills involved are in great demand and enterprises can’t afford them, nor would there be enough to drive a buyer-side revolution.  It would have to be up to vendors…

…who in the main are happy to stay the course with regard to application design.  It makes sense to milk your R&D as far as you can, and if everyone else has that same mindset you have little to fear.  To be fair, though, it’s easier to think cloud revolution in a narrow social-network application range than for a broad enough enterprise market to make things interesting.

The other issue with evolving the cloud to being feature-driven is that the architecture of the cloud today is all wrong.  Most cloud providers have a few data centers designed to be (you guessed it) cost-efficient, and there is little attempt to integrate big data or contextual resources.  The latter of these conflicts with the former; contextual mobile apps demand fairly local processing for short control loops and response time.  We need a rich intra-cloud network and that’s hard when you don’t even have an intra-cloud infrastructure outside a data center.

The feature-driven model of the cloud seems to need the very thing that the cost-driven model doesn’t want—distribution of resources.  There are two paths that might be taken to that goal—one by exploiting the opportunities of mobile contextual services and the other by exploiting Network Functions Virtualization.  In both cases there’s an incentive to push processing more to the edge, and to distribute processes and data co-equally.  That distributable-process notion seems to be the major difference between a cloud-specific application and one that just runs in the cloud.

Users of the cloud aren’t in the early-adopter phase, then.  The cloud is in the early-adoption phase, and we get out of that when we can enlighten enterprises and enrich cloud infrastructure at the same time and at a reasonable pace.  I don’t think any evolution of the traditional cloud model is going to get us to that point, but I think that either mobility or NFV might well do it.  That would make these two network technologies the best friends the cloud ever had.

Some Progress to Report in the NFV Business Case

There’s some good news on NFV, which I’m happy to be able to report since good news (other than hype) is hard to come by these days.  I reported early this year that operators’ CFOs had told me in the spring of 2014 that they were so far unable to make a conclusive business case for NFV.  They also said that they were not of the view that current PoCs would enable them to do so.  Over the last three weeks those same operators have told me that they are in fact seeing more clarity on the business case.

Where we stand today is that about half the operators I’d talked with in 2014 now believe that they could make a business case for NFV deployment based on the evolution of their current tests and trials.  The difference between then and now is that operators say they’re getting a lot more involved with operations integration in their testing, and that operations was the weak link in their business cases up to now.

An even more heartening piece of news is that two-thirds of operators now say that their CIO is getting engaged in the NFV trial process, representing the OSS/BSS side of things.  While many network vendors are squeamish about CIO involvement because they don’t deal with that side of the house much, the fact is that without strong support from operations there was never much of a chance of making NFV work.

Most of this progress has come along since late November, when operators completed the annual technology review that precedes their 2015 budget processes.  A couple of operators told me that this process is what brought what one called “clarity” to their trial planning and was instrumental in getting the CIOs involved.  The general view was that this fall’s tech planning will likely focus on broader deployment, which means 2016 could be a good year for NFV types.

I got a couple of other interesting comments from operators while talking about the trials, and so I want to summarize them here.  I can’t say too much in detail without violating confidences.

First, it seems clear that trials are dividing into those that are starting to get broader CxO buy-in and those that are still science projects.  About a third of all PoCs seem to fall into the latter category and I’d guess that as many as two-thirds of these won’t advance beyond the lab this year.  All I can say is that the ones that have made the most progress are those that involve one of the key vendors I’ve talked about.

Second, even where CIO involvement has solidified and operations progress made, there is still an indication that the scope of the projects is pretty limited.  While the operators themselves may not see it this way it’s my own view that most of the cases where the business case for NFV can likely be made will still not prove the NFV case broadly.  Early applications “covered” by the tests probably won’t involve more than about 10% of capex maximum, which means NFV has to be pushed further into other areas to make a big impact.  As far as I can see that big impact can be proved in only five or six trials.  Everyone else may have to dance some more before they get a broader sign-off.

Third, we are starting to see a polarization in NFV announcements just like we have already noted for trials.  Some vendors are involved enough with the “real” activity, the stuff with a business case behind it, to be able to frame products or product changes in the light of real opportunity.  The rest are still just singing in the direction of the nearest reporter/editor.

Fourth, we’re starting to see some pressure and desperation among vendors, even sadly a few who are actually doing good things.  Operators have long sales cycles and a lot of backers/VCs and even internal company sponsors are just not able to stay the course much longer.  We will almost surely see some casualties in the space late this year or early in 2016.

Fifth, I’m starting to see some winners.  Some companies—a very few—are stepping up and doing the right thing in trials and in customer engagements.  My favorite vendor has generated a lot of good stuff now, documents so thorough and relevant that it is amazing that so little is known about it.  Perhaps those who really have the right stuff aren’t interested in letting it all out yet, or perhaps companies are still groping for a way to make something as technical as NFV into a marketing story that’s not going to take rocket scientists (or at least computer scientists) to understand.

So are we ready for field trials?  Even the more optimistic operators aren’t betting on trials before perhaps September, though a few say that if things were to go just soooo…well, perhaps mid-summer.  Personally I think that as many as three operators could go to full field trials by mid-July if they and their vendors did the right thing, but there’s a lot of work still to be done, particularly on the integration front.

That's my key takeaway, in fact.  Yes, we are finally making progress, but a full solution to agile, efficient operations is still elusive.  A few players could get there for sure, and a few more could get close enough to be able to do a convincing trial and complete their work in the fall.

There are still a few troubling signs, even among operators who are making progress.  The question "Where do I start with NFV?" still rates among the top three in the most-advanced group of operators.  What they're asking for is help identifying the places where NFV can make an early difference at the business level, and that shows that planning for NFV is still more technically driven than business-driven.  The fact that they're asking vendors the question is troubling given that there is no simple answer, no strategy that works for even a majority of operators.  You have to understand your own business to understand how to improve it.

And yet…I’m more hopeful.  It’s better to be wondering about NFV business cases than worrying over details of standards and specifications in a vacuum.  NFV has absolutely no merit if it doesn’t have business value, and we’re further along in deciding where that value can be found and how prevalent those situations are in various markets and operators.  That’s good progress.

Finding the Opportunity to Justify Agility

In recent blogs I've been arguing that we're focusing too much on the technology of new services and not enough on the actual opportunity.  There are scads of technological suggestions on where new revenue can be located, but where do you find the opportunity?  That's a question that's always being asked by anyone who sells anything, including network and cloud providers and vendors.  The only sure way to find out whether someone is a prospect is to present them with an optimum value proposition and see if it resonates, but that's obviously not a scalable scenario.  There are better approaches, and that's the topic for today.

I’ve studied buyer behavior since 1982 in a survey sense, and in 1989 I built a decision model that uses survey data to forecast stuff.  The model was designed to incorporate what I learned about how decisions are made, and a high-level view of that process is essential as the opening round of any opportunity targeting approach.

The process of turning random humans or companies floating in space into committed customers is what I call trajectory management.  All these prospects are jumping around in Brownian motion, with no structure or organization, no cohesion, no way to target them efficiently with sales efforts.  What you have to do is get them in sync, or at least get a useful community of them in sync.  You want them moving toward your marketing/sales funnel in an orderly way, so they intersect it, are guided into a common mindset, and are turned over to the sales process.

In modern times, trajectory management is based on a progression:  editorial mentions sell website visits, website visits sell sales calls, sales calls sell products.  Buyers become “suspects” when they see your name in the media, they become prospects when you engage them on your website and with “anonymous collateral”, and they become customers through a personal contact.  You can’t jump over steps in this progression, so you have to be very wary of presenting too much information in the early part of the progression.  You will never sell a product because it’s mentioned in Light Reading or Network World, and trying to get enough into a story for that to happen will cause the story and your effort to fail.  Get the story, get the website visit, get the sales call.
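
To make that progression concrete, here's a minimal Python sketch of a three-stage funnel.  The conversion rates are placeholders I've invented purely for illustration; real numbers would have to come from your own campaign history, not from this example.

    # Illustrative funnel: each stage only "sells" the next stage, never the
    # product itself, so skipping a stage just breaks the chain.
    STAGES = ["editorial mention", "website visit", "sales call", "customer"]

    # Hypothetical stage-to-stage conversion rates (placeholders, not survey data).
    CONVERSION = {
        ("editorial mention", "website visit"): 0.05,
        ("website visit", "sales call"): 0.10,
        ("sales call", "customer"): 0.25,
    }

    def funnel(starting_audience: int) -> dict:
        """Walk an audience through the funnel, stage by stage."""
        counts = {STAGES[0]: starting_audience}
        population = float(starting_audience)
        for src, dst in zip(STAGES, STAGES[1:]):
            population *= CONVERSION[(src, dst)]
            counts[dst] = int(population)
        return counts

    if __name__ == "__main__":
        # 100,000 story readers yield about 125 customers under these assumptions.
        for stage, count in funnel(100_000).items():
            print(f"{stage:>17}: {count:,}")

The arithmetic is trivial, but it shows why front-loading product detail into the story stage doesn't help; the story's only job is to produce the next number in the chain.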

Where does “targeting” come in, then?  Everywhere, largely for a combination of collateralization and prioritization.  You need to understand the messages that resonate most and most quickly, and you need to understand who will likely pull the trigger on a deal fastest given a particular value proposition.  All of this is like jury selection; it’s not an exact science but people still pay a lot of money to find stuff that could help.

Most countries make economic data available online, and that's true of the US of course.  The combination of tools that works best is economic information by industry, particularly on employees and capital spending, and the distribution of industries geographically.  With this combination you can do a lot to improve your chances of making the right statements, building the right products, and targeting the right prospects with the right message.  You can also learn a lot about business behavior.

A logical question to ask when you’re talking about the opportunity for a product or service is the size of the market the new thing fits in, preferably by industry and geography.  Government data offers this in those two phases I mentioned, but running through a complete example would take too much time, so let me look only at the first phase in detail and describe the second in general.

If you think that cloud computing is the transfer of current data center apps to public cloud services, you could presume the biggest spenders on centralized IT would be the best targets.  The US market data says that North American Industry Classification System code 522 (NAICS is the successor to the Standard Industrial Classification, or SIC), which is credit intermediation and related activities, has the highest central IT spending of all sectors.  Wholesale trade, renting/leasing, and retail trade follow (excluding the IT industry itself).  Many of these wouldn't be considered cloud prospects, so an analysis like this could open up some new possibilities.
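
If you want to reproduce that first-phase ranking yourself, a minimal Python/pandas sketch looks like the following.  The file name and column names are my own assumptions; the real source would be government economic survey data keyed by NAICS code.

    import pandas as pd

    def rank_industries(path: str, metric: str = "central_it_spend", top: int = 10) -> pd.DataFrame:
        """Rank industries by a chosen spending metric, excluding the
        information sector itself (the NAICS 51 filter is illustrative)."""
        df = pd.read_csv(path)  # assumed columns: naics, industry, central_it_spend, ...
        df = df[~df["naics"].astype(str).str.startswith("51")]
        return df.sort_values(metric, ascending=False).head(top)[["naics", "industry", metric]]

    # Hypothetical usage, assuming such a file exists:
    # print(rank_industries("naics_it_spending.csv"))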

Suppose you think that the real opportunity factor is prospects for SaaS?  Obviously the government isn't going to survey on that topic, but what it does survey on is spending on integration services.  If you're a company that relies on integrators, you may well be unable to acquire and sustain an IT organization of your own, so you're a great prospect for a cloud service that gives you the applications you want in an as-a-service package.  The top industry there is retail trade, which is interesting because retail is also high on the ranking in terms of central spending.

For doubters, let me point out that the retail industry has been a big consumer of architected services for electronic data interchange—EDI, the stuff that transfers purchase orders and payments and so forth.  Data on current SaaS spending is hard to break out from company sources, but my surveys have shown retail firms to be early adopters of SaaS as well.

How about opportunities for network services and NFV-based services?  These are likely tied pretty closely to what companies are spending on network equipment.  The industries leading that category are (again omitting IT and networking companies themselves) our old friend credit intermediation followed by miscellaneous scientific and technical services.  This last category also ranks high in use of integration services, which would suggest it’s a good target for network-as-a-service.

Or perhaps you see network and cloud opportunities arising where companies have a very large investment in personal computers and distributed elements?  Two old friends here, retail trade and miscellaneous scientific and technical services.  So the same top industries show up using this metric as well.

This total-spending measurement is a good opportunity gauge, but it may not reflect the level of acceptance.  We can also look at firms by how much of their IT budget is spent on centralized IT, distributed IT, networking, and integration.  In terms of percent spent on central IT, the banks and credit firms top the list and retail is down in the standings quite a bit.  The industries with the largest percentage of budget spent on distributed IT are petroleum and coal products and printing and related activities.  Networking as a percent of total spending is highest for machinery firms, food-beverage-tobacco, and our miscellaneous scientific and technical services.  Integration spending is highest as a percentage in construction, ambulatory health care, and apparel.
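
Here's the same kind of sketch for the percentage view: compute each category as a share of the industry's total IT budget and rank industries within a category.  Again, the column names are assumptions of mine, not the actual government field names.

    import pandas as pd

    CATEGORIES = ["central_it", "distributed_it", "networking", "integration"]

    def rank_by_share(df: pd.DataFrame, category: str, top: int = 5) -> pd.DataFrame:
        """Rank industries by the fraction of total IT budget spent on one category."""
        total = df[CATEGORIES].sum(axis=1)
        out = df[["naics", "industry"]].copy()
        out[category + "_share"] = df[category] / total
        return out.sort_values(category + "_share", ascending=False).head(top)

    # Hypothetical usage, reusing the spending file from the earlier sketch:
    # spending = pd.read_csv("naics_it_spending.csv")
    # print(rank_by_share(spending, "networking"))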

Whatever measure you use to rank industries, the next step is to get a density map of that particular NAICS across your prospecting geography.  If you know you want to target our favorite “miscellaneous scientific and technical services” you look for metro areas or other points where that NAICS is found most often.  If you have access to more refined data (typically from commercial rather than government sources) you may be able to get the density plot for headquarters locations only, which is better since in most cases technology buying is done from the company HQ.
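
The geographic step can be sketched the same way.  Here I assume a file of establishment counts by NAICS and metro area, which is roughly how the public data is organized, though the file and column names are invented for illustration.

    import pandas as pd

    def densest_metros(path: str, naics_prefix: str, top: int = 10) -> pd.Series:
        """Rank metro areas by establishment count for one NAICS prefix."""
        df = pd.read_csv(path)  # assumed columns: naics, metro_area, establishments
        mask = df["naics"].astype(str).str.startswith(naics_prefix)
        return (df[mask]
                .groupby("metro_area")["establishments"]
                .sum()
                .sort_values(ascending=False)
                .head(top))

    # Hypothetical usage, with 5419 standing in for "miscellaneous scientific
    # and technical services":
    # print(densest_metros("naics_by_metro.csv", "5419"))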

With knowledge of NAICS and some further insight into things like how much the NAICS outsources and how its budgets are spent overall, it's possible to determine what messages would likely resonate.  My surveys show that companies with very centralized IT have strong CIO organizations and are less likely to favor things like SaaS, while companies with more distributed IT are the opposite.  That helps not only in generating collateral for sales use, but also in deciding the kind of reference accounts that would be considered most valuable.

The point of this is that NFV and SDN and the cloud are all selling into a world where IT is the rule and not the exception, and by understanding how IT dollars are currently spent, you can optimize getting some of those dollars for your early SDN or NFV services.

Reading Cisco’s Earnings Through a VMware NFV Lens

Cisco came out with their numbers, and while I could talk about them by themselves, I think it might be more helpful to consider them in light of some other news, which is VMware’s decision to get serious about NFV and even Facebook’s tossing in a new data-center switch.  I’ll mention some Cisco-numbers commentary where appropriate, but I want to open with some quotes from John Chambers:  “Let’s start with a basic question, like why Cisco?”, and end with “We are well positioned for a positive turn in either service provider or emerging markets. But we are not modeling those turns for several quarters despite the better results we saw in this last quarter….”

VMware and Cisco have a lot of competing dynamics and a lot of common factors.  Most recently, NFV has joined the list of technologies both companies feel compelled to support, a response to the market conditions on the provider side.  The reason is obvious; my model has been showing that NFV is likely to be the largest single source of new data center deployments for the next decade, and also likely the largest single source of new server deployments.  It will consume the most platform software and the most new management tools, and frame cloud computing's future.  That's a heady mix; too much upside to ignore.

The challenge for everyone in the space is that Facebook proves that if you build data centers for net-centric use, you may find it easier to design your own stuff, or at least pick from commodity white-box options (Cyan and Pica8 had some new developments in these areas earlier this week too).  The obvious solution is to rely on symbiosis to get a piece of the new action.  If you can support the NFV revolution that’s driving change you might stave off the wolves.

For VMware, there may be another dimension.  Nothing feels better than to rub a rival's nose in a dirty dog dish, and that's less risky when the rival is chained.  Cisco has demonstrated that, because of its hardware dependency, it treats software-defined anything as a risk to be managed rather than an opportunity to be seized, an attitude that somewhat limits Cisco's aggressive options.  VMware has no incumbency in network equipment and so can dance wild and free into the virtual age.

Let's get to John's early question, "Why Cisco?"  This has two faces, one of which is why Cisco does what it always does, which is be a fast follower.  The pressure that Cisco's need to sustain its near-term hardware revenue stream creates for the company is significant.  Most of what Cisco announces these days boils down to 1) assertions that the "Internet of xxx," where "xxx" is anything exciting and different (Chambers did his Internet of Everything routine on the call), generates a boatload of new traffic operators have a Divine Mandate to carry, or 2) enhancements to switching/routing protocols to create SDN-like features without having to adopt white-box SDN.  The second face is "Why Cisco?" in the sense of why buy from them, and for network operators it's clear the answer has to be "because they're supporting where we need to go".  That's clearly at odds with the first interpretation of that same question.  Operators would like Cisco to lead in NFV.  Cisco is not, which is probably a factor that VMware has noticed.

However, VMware has its own issues.  Their stock has been in general decline over the last six months as financial rags tout views like “Virtualization in Decline!”  Under pressure from open-source software in the cloud, they can hardly afford to ignore an opportunity like NFV, but their traction opportunities are a bit slim.  Cisco is a server vendor, which means they have a big financial upside from all those servers and data centers.  VMware can hope for a platform software upside on a hosting deal, but servers cost more than platform software, particularly when so much operator focus is on open-source versions of what VMware would like to license to them.

If you look at NFV objectively it's hard to see why Cisco would find it such a threat.  Because NFV could at least admit standard boxes under the NFV tent through generalization of the Virtual Infrastructure Manager (which HP and Oracle, among others, have already proposed), it could build a layer above both legacy boxes and evolved network-device notions like SDN.  That might reduce operators' interest in changing out gear by making it unnecessary to switch switching (so to speak) just to talk to it from NFV.

Part of the explanation may be Cisco's beat on the revenue line, which came about because revenue increased 7% y/y after a weak quarter in the same period last year.  It's hard not to see this as a possible indication of a rebound in carrier capex, especially if that's what you devoutly hope it is.  Stay the course a few quarters longer and 1) Chambers can retire on an up note or 2) he won't have to retire at all.  Take your pick, Chambers-watchers.  But even Chambers has to admit that nothing is turning around for the next couple of quarters, and the company's forward growth guidance of about 4% reflects that.

VMware is probably salivating over Cisco's business challenges, but it has its own issues.  The cloud has jumped more to OpenStack than VMware hoped, because public cloud has been much more valuable than private cloud, which VMware could have supported easily (and did, belatedly IMHO) as an evolution of virtualization.  Cisco took the OpenStack approach, and while aggressive action there might at the least have handed VMware its head by advancing OpenStack more visibly, Cisco has been more interested in differentiating UCS than in promoting standards.

UCS growth was strong in Cisco’s quarter, up by 40% compared to switching’s 11% and routing’s 2%.  But UCS was a bit under $850 million when routing was about $1.8 billion and switching double that.  You can see that Cisco needs more UCS but doesn’t need it to indirectly (via SDN and NFV) cannibalize switching and routing revenue.  And above all it doesn’t need any weakening of account control.
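
To put those segment numbers in perspective, here's a back-of-the-envelope Python sketch using the rounded figures above (quarterly revenue in millions and y/y growth); treat the inputs as approximations.  The point it makes is that even at 40% growth, UCS adds fewer absolute dollars in a quarter than switching does at 11%, so any UCS-driven cannibalization of switching or routing is a losing trade.

    # Rounded figures from the quarter discussed above; approximations only.
    segments = {
        # name: (quarterly revenue in $M, y/y growth rate)
        "UCS":       (850,  0.40),
        "Switching": (3600, 0.11),  # roughly double routing, per the reported mix
        "Routing":   (1800, 0.02),
    }

    for name, (revenue, growth) in segments.items():
        prior = revenue / (1 + growth)   # implied year-ago quarter
        added = revenue - prior          # absolute y/y dollar growth
        print(f"{name:>9}: +${added:,.0f}M on ${revenue:,}M ({growth:.0%} y/y)")

Run it and UCS comes out around $240M of added revenue versus roughly $360M for switching, which is exactly why account control matters more to Cisco than UCS bragging rights.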

Now we have an NFV war, which pits a new (and thus totally hazy) VMware offering against a Cisco offering that's always seemed more like an NFV billboard than a product.  Cisco doesn't sing a bad NFV song (it sounds in fact a lot like Oracle's story on the surface, including the TMF-ish CFS/RFS angle), but they aren't singing it out loud and strong, as another song goes.  VMware has a chance to take a lead in positioning…if…

…they can figure out something to say.  Obviously parroting Cisco’s story with more verve and flair would be better than nothing (or even than Cisco) but probably not enough given that others in the space are demonstrating substance.  VMware needs to join my club of players who actually can do something to drive NFV, not the club of NFV hype artists (a much larger group).

So does Cisco, not only to compete with VMware, but also to compete with Oracle and Alcatel-Lucent and HP and even Overture.  There are big companies with big stakes in the ground leaping on the NFV bandwagon daily, and it's hard to imagine that none of them took that decision seriously.  I suspect that Cisco took it seriously too, and chose the tactical short-term approach.

The stakes are rising now, and it's time to get to the last quote from the call that I opened with.  Chambers admits that the service provider market isn't going to turn around for a few quarters.  Earth to John: It's never going to "turn around" in the sense of returning to the investment patterns of the past.  You just gave yourself a couple of quarters to face facts, as so many of your competitors are obviously doing.  I'd get moving if I were you.

They could, too.  Cisco's NFV story may be a billboard, but I think I know the company well enough to know that what's missing is intention, not capability.  They almost certainly have as much as Oracle or Alcatel-Lucent, at the least, and they have a lot of inventive people.  And their desire to preserve the present could align pretty well with operator goals not to sink themselves in accelerated write-downs.  If Cisco could get the orchestration/operations story right they could come out of this quickly.  Faster than VMware for sure, unless VMware were to buy its position.

That’s my advice for VMware.  Unless you already have a product sitting somewhere waiting for an organization to spring up, you’re going to need to start spending like a sailor on M&A and hope that John waits just a little bit longer.  Hope, but don’t bet on it.