Cisco is always a bellwether for the router market, so when their earnings call gives the Street angst, it’s not just about Cisco. Cisco said that every sector in its product space grew in the last quarter, except the most important one, which was service provider routing. Softness there obviously impacts Cisco’s forecast, whose weakness relative to expectations has led some Street analysts to question the stock (others still defend it). Whether Cisco has broader revenue problems or not, service provider spending weakness is an issue for Cisco and its competitors alike.
Cisco’s quarter wasn’t bad, in fact, and in other product areas, notably data center switching, the company posted strong sales. What some on Wall Street didn’t like was the softness in router sales, which Cisco attributed to network operator spending softness. There’s always an implication (especially on earnings calls) that issues are temporary, but we know that’s not necessarily the case with operator spending.
Since 2012, operators have been saying that their cost- and revenue-per-bit curves were going to cross over in the late part of this decade. They’ve been pursuing significant operations cost reductions as a means of preventing their net profit per bit from going negative, but there’s no question that operators think they need to spend less on networking. Commodity boxes and open-source software are their solution, and that threatens not only Cisco but every vendor.
If this isn’t a surprise, why hasn’t it been taken care of? Cisco and its competitors have had six years of opportunity to reframe their business model, focusing perhaps on things that would have boosted their revenues outside of routing. The problem, I think, is twofold. First, growth in other product areas hasn’t shown signs of replacing losses (real or potential) in the routing area. Second, every public company at every level is focused on making their current-quarter numbers. Talking about what comes next is the same as talking about how the current situation is going to change, and that suppresses spending today.
The truth is that Cisco has, more than any other vendor in all of service provider networking, tried earnestly to prepare for a time when routing won’t carry them. To do any better would mean deliberately focusing buyers on what they don’t want or need today, to prepare them for a shift in the market. Realistically, that’s not going to happen for Cisco or anyone else. Was there a mistake Cisco made? Two, I think. Can they fix them, even now? Possibly, I think.
The first Cisco error was not pushing software and the cloud fast enough. Cisco has developed a range of software strategies, but they’ve never seemed fully engaged with them. Absent a very strong software position, their UCS server business was just another potential commoditization-cannibalized product space. Remember the old-line Sun theme that “the network is the computer”? Cisco could have pushed that strongly, as the player best equipped to create that fusion.
The second Cisco error was not jumping on the automation bandwagon in an effective way. Today, we know that for service providers in particular, service and application automation converge. Early on, when NFV first reared its head and suggested that carrier cloud could be a real opportunity, Cisco bought an automation player, Tail-f. The problem was that it was a network guy’s view of automation, focusing on things like configuring routers. In a world transitioning to carrier cloud, that’s clearly not the mission. But Cisco let their Tail-f deal define them, and so failed to promote broad-spectrum automation at a time when it could have staved off capital budget pressure.
What Cisco (and of course its competitors, at least those who lack RAN credentials) face now is a buyer base that wants open-platform hardware like open switch/routers (white boxes) and open servers, with both supplemented by open-source software. If all this openness succeeds, then a few percent shortfall in revenue forecasts is the least of vendors’ worries. Their business could totally implode.
But will it succeed, and if so, when? The fact is that network operators have booted their opportunities for taking control of their infrastructure destiny even more decisively than vendors have. Very few operators have actually taken any initiative in framing their technology future, and those that have tended to apply network-centric concepts to a server/software-centric age. ONAP, evolved from AT&T’s ECOMP, is the best example of network-centric software. Is it workable? Yes, almost surely. Is it optimum? Absolutely not, nor is it likely to be made optimal by open-source-driven software evolution.
Even in the cloud, operators are missing their chance. A commodity cloud-hosting service market will always be dominated by the player who has the lowest internal rate of return, the bar set by the CFO to decide which projects meet ROI targets and which don’t. Operators have the lowest, because they used to be public utilities. They could have dominated the cloud market on price alone. They had the facilities needed for edge computing at the access edge, where it’s most valuable. They could have framed IoT as what it really is, which is an opportunity for sensor-driven correlation and contextualization services, and then been the only player to provide them. Instead they let the public cloud providers take the lead, and Telefonica has now announced a cloud deal with Amazon.
Server vendors haven’t done any better. Dell had a golden opportunity to take the lead in carrier cloud back in 2013, and they didn’t step up even though they told the big Tier Ones they would. HPE had a primo integration contract for carrier cloud and couldn’t see the forest for the trees, ending up losing the deal. Today, Dell/VMware is finally catching on with “virtual cloud networking” and HPE is buying Plexxi for application networking. Both opportunities could have been exploited five years ago.
Are open initiatives then the rule? If so, then standards and open-source groups should be stepping up, and in general they’re not setting the world on fire either. The TMF has tried to get beyond OSS/BSS while keeping hold of its own incumbency, without much success. The ONF and MEF are working to rebrand themselves, and coming out with some new approaches, but there’s the question of whether what they do will be both clearly useful and artfully promoted. Open-source bodies are grappling with the fact that you make the most progress by exploiting stuff that’s available, but stuff that’s available is likely to reflect the thinking and needs of the past.
It shouldn’t be surprising that sellers in every market expect buyers to suck it up and continue spending no matter what. The alternative of their not spending is too awful to contemplate, and the alternative of changing the product/service mix in anticipation of future trends isn’t much better. Operators and vendors have, on this point, a different risk/reward profile.
For vendors, the problem is balancing both risks and rewards between the short term and the long term. If revolutionary services came along, they could generate revolutionary spending. However, that spending on new gear could benefit the vendors only if they actually got the money, and if operators are going to be tentative and careful (as I said they would be in the last paragraph), then vendors who push too hard for that future pie in the sky could end up killing short-term sales momentum.
For operators, there is the reality that a new service set could not only involve vast new first-cost issues, but also threaten a trillion dollars in sunk infrastructure costs. That means that they have to be careful not to get too far ahead of real opportunity. In today’s hype-driven market, how are they going to know what “real opportunity” is? Everything is either a revolution or it’s not mentioned at all. That’s more than anything the reason why operators have focused on cost reduction; they know what costs are, and future revenue is still something at the end of the Yellow Brick Road.
The problem for everyone here is that, if you don’t want capex cut, cost reduction means service lifecycle automation, meaning zero-touch automation (ZTA). It’s been a popular notion for some time, but we’re still grappling with what it means. Cisco talked a lot about being a leader in “intent-based” networking, but what the heck is that, and how does it relate to operations efficiency? Intent modeling as an attribute of service modeling is critical to ZTA, but does Cisco or anyone else have a true, complete, model-driven ZTA approach? No. Do bodies like the TMF, which has defined service-layer technology, have any vision of what ZTA would mean to operations at all levels? Not that they’ve shown. Even ETSI seems to have a ZTA strategy that’s a formula for multi-year research and consideration, not for timely solutions.
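To make the intent-modeling point concrete, here’s a minimal sketch of the idea in Python. It’s hypothetical, not any vendor’s actual implementation: an intent declares what a service element should achieve, and a zero-touch loop continuously reconciles observed state against it, so remediation is driven by the model rather than by an operator. All names and metrics here are invented for illustration, and the comparison assumes lower-is-better metrics for simplicity.

```python
from dataclasses import dataclass

@dataclass
class Intent:
    """Declares *what* a service element should achieve, not how."""
    name: str
    targets: dict  # e.g. {"latency_ms": 20.0} -- lower-is-better metrics only

def reconcile(intent, observed):
    """Compare observed state against the intent's targets.

    Returns a list of (metric, observed, target) tuples for every
    violated target; an empty list means the intent is satisfied.
    """
    violations = []
    for metric, target in intent.targets.items():
        value = observed.get(metric)
        if value is None or value > target:
            violations.append((metric, value, target))
    return violations

# A ZTA system would run reconcile() continuously and map each violation
# to an automated remediation action, with no human in the loop.
vpn = Intent("branch-vpn", {"latency_ms": 20.0, "packet_loss_pct": 0.1})
print(reconcile(vpn, {"latency_ms": 35.0, "packet_loss_pct": 0.05}))
```

The point of the sketch is that the service model, not a script or a person, defines “correct,” which is what separates true model-driven ZTA from router-configuration automation.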
Timely solutions to ZTA could be critical to Cisco and everyone else, because of the onrush of interest in white-box, open-source devices to replace vendor hardware. AT&T’s DANOS and the ONF Stratum project embrace a P4-language forwarding architecture that could, over time, significantly impact proprietary network devices. There are always risks to new technologies, and the more revolutionary they are, the greater that risk. There’s also a risk in watching profit per bit go negative, so if the operations efficiencies of ZTA can’t be achieved, then open-architecture network devices are going to look very good, and vendor fortunes will then be at risk.
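The P4 forwarding model behind DANOS and Stratum centers on programmable match-action tables that a control plane populates at runtime. The following toy longest-prefix-match table, written in Python purely as a stand-in (real P4 is its own language compiled to switch pipelines), illustrates the abstraction; the class and port names are invented for the example.

```python
import ipaddress

class MatchActionTable:
    """Toy longest-prefix-match table, mimicking the match-action
    abstraction that P4 programs expose to a control plane."""

    def __init__(self):
        self.entries = []  # (network, action) pairs

    def add(self, prefix, action):
        """Control-plane API: install a forwarding entry."""
        self.entries.append((ipaddress.ip_network(prefix), action))

    def lookup(self, addr):
        """Data-plane behavior: match a packet's destination address."""
        ip = ipaddress.ip_address(addr)
        matches = [(net, act) for net, act in self.entries if ip in net]
        if not matches:
            return "drop"  # default action when no entry matches
        # Longest prefix wins, as in an IP forwarding table
        return max(matches, key=lambda m: m[0].prefixlen)[1]

table = MatchActionTable()
table.add("10.0.0.0/8", "port1")
table.add("10.1.0.0/16", "port2")
print(table.lookup("10.1.2.3"))   # longer prefix wins: port2
print(table.lookup("192.0.2.1"))  # no match: drop
```

Because the table’s contents and actions are supplied from outside the device, the same white-box hardware can be repurposed by software alone, which is exactly what makes this architecture a threat to proprietary routers.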
The operators will end up setting the tone and pace of “transformation”. They may think they’re “getting control” of the process, and they may indeed be exercising more influence, but that’s not going to matter if they let themselves fall into depending on the familiar stuff, the familiar vendors, the familiar technologies. We have nothing in view today regarding future services or costs that wasn’t just as visible in 2013. Nothing we’ve done since then has transformed us. We have to look harder today, and find different things, and that’s where operators have to lead.