What’s Behind Cisco’s Big Cloud-Management Buy?

Cisco’s acquisition of CliQr (I hate these fancy multi-case names; they just make it harder for me to type and spell-check, so I won’t use the name from here forward!) raises a whole series of questions.  Foremost, at the industry-strategic level, is the value of the hybrid cloud and how that value might change IT.  At the vendor-competitive level, you have to wonder whether Cisco, instead of being the next IBM, is now focused on being the current one.  It’s possible they might succeed, and if they do it will say more about the industry than about either Cisco or IBM.

Rarely do earth-shaking changes in a market occur when buyer requirements are static and the benefit case driving purchasing is likewise static.  We can see creeping trends but not revolutions.  On the other hand, a major paradigm shift will open a broad opportunity that didn’t exist before, and if some player can grab it, the results can be transformational.

Earth to marketplace: if Cisco spent over two hundred million dollars on buying what’s primarily a hybrid cloud management vendor, they don’t think the whole world is going over to public cloud.  Smart on Cisco’s part, because it’s not.  Even under the best of conditions, my model currently says the maximum share of IT spending represented by public cloud services won’t reach 50%.  Since I agree with the hype (gasp!) that every enterprise and mid-sized business, and about a third of small businesses, will embrace the public cloud, that means we are looking at a lot of hybrid cloud prospects.

At the 50% penetration level, and given that about half of that penetration will be in the form of pure-cloud, cloud-only applications, there is little risk to IT vendors that the cloud will eat their direct business sales.  In the net, they’ll gain both hardware and software revenue in the transition, which of course impacts the industry and competitors at the same time.
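
To make the arithmetic behind that claim explicit, here’s a back-of-envelope sketch.  The 50% ceiling comes from my model above; the 50/50 pure-versus-hybrid split is a working assumption, not measured data.

```python
# Back-of-envelope on the IT spending split sketched above.
# Both ratios are working assumptions, not measured data.
cloud_share = 0.50   # modeled ceiling on public cloud's share of IT spend
pure_cloud  = 0.50   # fraction of cloud spend that is cloud-only

greenfield_cloud = cloud_share * pure_cloud        # 0.25: new, cloud-only money
hybrid_cloud     = cloud_share * (1 - pure_cloud)  # 0.25: still pulls private gear
data_center      = 1.0 - cloud_share               # 0.50: stays where it is

print(greenfield_cloud, hybrid_cloud, data_center)  # 0.25 0.25 0.5
```

Read that way, three-quarters of total IT spending still touches privately owned gear, which is the “little risk” in the claim above.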

Staying with the industry, a pure-cloud positioning could be risky for vendors if it’s cast as a vote for pure public cloud.  Enterprise IT has always been more against the cloud than for it, provided it’s public cloud we’re talking about.  Enterprise line management, who have often been for anything that seemed to cut through the labyrinthine IT bureaucracy and insufferable arrogance, are finding that IT from outsiders is even less responsive, and that somebody who’s delivering arrogance is at least delivering something.  Even line departments now think they’ll need their own data centers.

That doesn’t mean that populism in IT is dead, which is another reason why hybridization in general and cloud management in particular are important.  Line departments have tasted (illusory, to be sure) freedom, and they aren’t going to give it all up.  Cloud management is important in hybrid models if you presume that line organizations are driving the bus.  If internal IT were, it would simply harmonize public and private in its own special (arcane) technical way.

If line organizations are going to run hybrid clouds, then internal IT processes will have to be more private-cloud-like than simply legacy-integrated applications.  That has major implications for application vendors, users, network vendors, and just about everyone else.  We’re going to drive toward a model of true virtualization, where resources are resources no matter where they are.

This argues for building more cloud/networking tools into the OS than are there today, and doing so in a more agile way.  The PLUMgrid approach could be the secret sauce in this area: build out the OS to be more cloud-aware, so that cloudiness isn’t an overlay that could be implemented inconsistently in public versus private cloud deployments.  It also argues for NaaS deployment coequal with cloud services, because you can’t keep today’s fixed-site network model when half of your spending is on virtual elements.

For vendors in general, the hybrid move is as important a guarantee of future revenue as you’ll see in this chaotic market, as I’ve already suggested.  That could relieve the stress of “bicameral marketing,” where vendors sell IT to enterprises as though the cloud were never happening while selling to cloud providers as though nothing but the cloud mattered.  Fifty-fifty is a fair split of opportunity, particularly when the cloud’s half is greenfield money and not stolen from CIO data center budgets.

That doesn’t mean every vendor is a winner, which is surely what’s behind Cisco’s move.  Cisco has no real data center incumbency; they’re primarily a network-linked server vendor.  That means they know they can step out quickly and safely while legacy IT vendors still harbor nagging worries about whether the cloud will hurt more than help, in the near term at least.  Cisco also knows that even though most of the cloud opportunity is new money, early cloud applications will evolve out of data center apps, and thus will demand more sophisticated integration.

Cloudbursting and failover are things that can be addressed as requirements now, even though they should in the end be automatic byproducts of resource independence.  That’s a horse Cisco can ride because current applications aren’t agile in a resource sense, and a management system can go a long way in making them cloud-ready.  Cisco, because they don’t have a big stake in the current IT paradigm, can focus on facilitating the transformation to the new one when incumbents in the data center of the past have to be more circumspect.
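
To make that concrete, here’s a toy sketch of the kind of placement policy a cloudbursting-capable management system applies: run on private capacity by default, and overflow to public cloud when it’s exhausted.  The pool names and capacities are illustrative assumptions, not any vendor’s actual logic.

```python
# Toy cloudbursting policy: prefer the private pool, "burst" to the
# public pool when private capacity runs out.  All names and numbers
# are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Pool:
    name: str
    capacity: int
    used: int = 0

    def fits(self, size: int) -> bool:
        return self.used + size <= self.capacity

def place(size: int, private: Pool, public: Pool) -> str:
    # The management layer, not the application, decides where a
    # workload runs; that's what makes resources interchangeable.
    for pool in (private, public):
        if pool.fits(size):
            pool.used += size
            return pool.name
    raise RuntimeError("no capacity anywhere")

private = Pool("private-dc", capacity=10)
public = Pool("public-cloud", capacity=100)
for job in [4, 4, 4]:  # the third job bursts to the public pool
    print(place(job, private, public))
```

The point is that the burst decision lives in the management layer, not in the application, which is how a management system can make resource-inflexible legacy apps cloud-ready without rewriting them.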

This is nice for Cisco right now.  It doesn’t stay nice.

Let’s say that hybrid cloud becomes the rule, and that it absorbs that 50% share of IT spending.  We have today a set of fixed sites linked to a set of fixed data centers.  We’re in a position to sell private networking to businesses because there’s a rigid IT structure that justifies those switches and routers.  What happens when there is no real focus of traffic because there’s no real focus of IT?  Gradually, in the world Cisco is betting on, private networking diminishes everywhere and in the WAN it is subducted into a completely virtual network vision.  WAN services transform to agile NaaS.

A completely agile NaaS is pretty hard to differentiate at the device level.  If white boxes have a future outside the data center, this is where that focus would have to be.  And inside the data center, without any fixed LAN-to-WAN relationship to play on, there’s no reason to think white boxes couldn’t sweep that segment too.  At the very least, commoditization of networking seems the outcome.

So is Cisco stupid?  Most incumbents are, if you take the long view, but here we have to admit to another possibility.  If network commoditization is inevitable, then there’s no point worrying about what drives it.  The key is to get yourself positioned in an area where commoditization won’t happen, where differentiation remains.  Where problems need a combination of a new solution and a trusted vendor.  Where you can acquire somebody at a very rich price because you have a lot at stake.  Ring any bells?  I think it does.

What MWC Contributed Overall to the Sense of NFV

MWC generated a lot of ink, and some of the developments reported by Light Reading, SDxCentral, or both create some nice jumping-off points for comment.  You’ll probably not be surprised that I have a different spin on many of the things I’ve chosen; my hope is that we can gain some overall sense of where things are going with NFV.

One thing that struck me was the continued tendency of the media to talk about an “NFV architecture” or “NFV strategy” even when what’s being discussed is at best just a piece of NFV functionality, and at worst isn’t really any piece at all.  It’s frustrating, because I think operators who are trying to get a handle on the range of NFV offerings are hampered by the lack of any organized placement of those offerings.

One example of questionable use of the terms is the series of OpenStack-related announcements.  There is no question that OpenStack will dominate the deployment of VNFs, but OpenStack is not orchestration; it’s a part of the Virtual Infrastructure Manager (VIM).  So are any DevOps tools linked to NFV deployment, and so (IMHO) is YANG.  Because the ETSI ISG’s description of what the NFV Orchestrator does is rather vague (and seems to overlap with the descriptions of the VIM and the VNF Manager), and because deployment is orchestration at one level, we seem to be conflating OpenStack with orchestration, and orchestration with MANO.  That’s not good.
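
A minimal sketch may help draw the line.  With the openstacksdk client, deploying a VNF is straightforward resource allocation; the cloud, image, flavor, and network names below are hypothetical placeholders.

```python
# Minimal sketch: booting a VNF with openstacksdk.  All resource
# names are hypothetical placeholders.
import openstack

conn = openstack.connect(cloud="my-vim")  # credentials from clouds.yaml

# This is VIM-level work: allocate compute, attach a network, start an image.
server = conn.compute.create_server(
    name="vfw-1",
    image_id=conn.compute.find_image("virtual-firewall").id,
    flavor_id=conn.compute.find_flavor("m1.small").id,
    networks=[{"uuid": conn.network.find_network("service-net").id}],
)
conn.compute.wait_for_server(server)

# Nothing here knows what service the VNF belongs to, how it chains to
# other VNFs, or how it's managed over its lifecycle.  Those are
# orchestration (MANO) questions, and this code never asks them.
```

Everything in that sketch is deployment; calling it “orchestration” is exactly the conflation I’m complaining about.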

On the NFV Infrastructure side we have a bit more logical positioning to report.  I noted last week that we’d seen the beginning of a separation of NFVI from the rest of NFV, because the hosting part of NFV is where most of the capital dollars will be spent.  One news item that expands on my point was the announcement by Red Hat and Amdocs that the Amdocs Cloud Service Orchestrator has been integrated with the Red Hat Linux stack.  It’s not rocket science to make a Linux app run on a specific version of Linux, but this shows that Red Hat wants a role in NFV’s success should there be a big adoption wave.  Amdocs is interesting as a partner because they’re really more a player on the service side of the NFV story.  Were they unusually interested in getting some ink of their own, or does Red Hat’s move indicate it thinks service integration and orchestration will be big?

Another example is the Telecom Infra Project spin-out from the Open Compute Project.  OCP is an initiative started by Facebook to drive hosting costs down by using standard (commodity) designs.  Facebook has blown some specific kisses at telecom, and certainly it would benefit if telecom costs fell enough to drive down service prices, thus bringing Facebook more customers.  I don’t think NFV in any form is likely to have an impact on consumer broadband pricing, however; certainly not in the near term.  This move could have an impact on NFV hardware vendors like Dell and HP, and since Intel is involved in the project, it could be another data point on the path toward an Intel-driven attempt to get those optimal hundred thousand data centers deployed.  You can bet Intel chips will still be inside.

The cost angle raises the point that Orange has indicated some dissatisfaction with the cost-based justification for NFV.  My own contacts seem to think that the issue is not so much that cost-based justification doesn’t work as that capex reduction as a driver won’t work.  Opex reduction, say my contacts, is still much in favor at Orange (and everywhere else in the telco world), and most operators believe that the same service automation capabilities that would generate opex reduction would also be necessary to facilitate new services.  That’s what Orange is said to favor over cost reduction, and if the same tools do both opex and services, then it makes sense to presume this is yet another repudiation of the simplistic save-on-capex slant on NFV justification.  That approach has been out of favor for two years.

The opex and services angle might be why we had two different open-source NFV projects announced—OPEN-O and Open Source MANO (OSM).  Interest in the first group is high in China (and of course among the big telco vendors), and in the latter, high in Europe.  While OSM demonstrated at the show, it’s not clear to me just what their approach looks like in detail, because their material doesn’t describe it at a low level.  OPEN-O has even less to show at this point than OSM does.  The fact that there are two initiatives might be an indication that China wants to pursue its own specific needs, but it might also mean that nobody likes where we are and nobody is sure the other guy can be trusted to move off the dime.

Even given the mobile focus of MWC, there still seemed to be a lot of noise related to service chaining and the services that use the concept.  In the main, service chaining is useful in virtual CPE applications, where a set of virtual functions replaces a set of appliances normally connected in series at the demarcation point.  I have no doubt that service chaining and virtual CPE are good for managed service providers targeting businesses, but I’m not convinced that they have legs beyond that narrow market niche.  Unlike mobile infrastructure, vCPE doesn’t seem to pull through the kind of distributed resource pool that you need for NFV to be broadly successful.
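
If the concept is unfamiliar, a toy model shows why “in series” matters; the packet format and VNF behaviors below are hypothetical stand-ins, not real network code.

```python
# Toy service chain: a packet traverses VNFs in strict series, just as
# traffic once traversed physical appliances stacked at the demarc.
from typing import Callable, Optional

Packet = dict
VNF = Callable[[Packet], Optional[Packet]]  # returning None == dropped

def firewall(pkt: Packet) -> Optional[Packet]:
    return None if pkt.get("port") == 23 else pkt  # e.g., drop telnet

def nat(pkt: Packet) -> Optional[Packet]:
    pkt["src"] = "203.0.113.1"  # rewrite to the public-facing address
    return pkt

def chain(pkt: Packet, vnfs: list[VNF]) -> Optional[Packet]:
    for vnf in vnfs:   # order matters, as it did with physical boxes
        pkt = vnf(pkt)
        if pkt is None:
            return None
    return pkt

print(chain({"src": "10.0.0.5", "port": 80}, [firewall, nat]))
```

A chain like that maps one-for-one onto the boxes stacked at a business demarc, which is why vCPE is its sweet spot.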

There was a lot of skepticism on whether the NFV focus on 5G was smart.  It’s not that people don’t believe that NFV would be good for (and perhaps even necessary for) 5G success, but that 5G seems a long way off and NFV might have already succeeded (and not need 5G pull-through) or failed (and wouldn’t be helped then by 5G).  I think that operators are anxious to see how 5G would impact metro mobile backhaul, and they need to know that pretty quickly because they are already committed to upgrades there to support WiFi offload and increased 4G cells.  This may be a case where a little hype-driven forward thinking actually helps planners unload future risks.

It seems, when you look at the news out of MWC overall, that NFV is kind of splitting down the middle in terms of vendor positioning and operator comments.  On one hand, operators and some vendors are getting more realistic about what has to be done to make NFV a success.  That’s resulting in more realistic product positioning.  On the other hand, this industry hasn’t had much regard for facts or truth for years.  Technical media isn’t too far from national media, where coverage of nasty comments always plays better than coverage of issues.  Excitement is still driving the editorial bus, and that means that vendors who overposition their assets are still more likely to get ink than those who tell the truth.

The thing is that you can only argue about what UFOs are until one lands in the yard and presents itself for inspection.  We’re narrowing down the possible scenarios for NFV utility even as we’re broadening the number of false NFV claims.  I think that by the end of this year, we’ll start to see a narrowing of the field and a sense of real mission…finally.