If networking turns to an open model, with open-source software and white-box hardware, can today’s network vendors survive? Sure, but not in the manner to which they’ve become accustomed. If you’re a big predator in a shrinking habitat, you can shrink to match the food supply (evolutionarily speaking, of course), you can learn to eat something different, or you can move to a new habitat. That’s the choice that’s facing most network vendors today.
Let’s start off by explaining my “most” qualifier. I think that open-model networking, when applied to a static requirements set, will eventually absorb the greater part of every piece of the carrier network of today. Some parts, like optical, will resist for a longer time, but eventually, static requirements breed total commoditization. Nothing promotes commoditization more than a market that lacks differentiating features, and nothing promotes feature requirement stagnation like a static set of requirements. That’s the “eat something different” path above.
Vendors have tried for ages (well, at least for decades) to promote the idea that bits, which are rather hard to differentiate since each is either a “1” or a “0”, can somehow be made into something more, like “elastic bits” or “on-demand bits” or “QoS bits”. All these efforts have failed. Price is always arbitraged against features on basic connection services, and if a very low-cost best-efforts transport is available (like the Internet), applications will seek to leverage it. Presenting higher-priced options simply won’t work well.
Shrinking to match the food supply is what happens to vendors by default, meaning that if they join the other lemmings rushing to the sea, the same fate awaits them all. Ignoring the cliff ahead doesn’t create a soft landing, but some vendors might happen to land on a soft spot. Transferring our lemming analogy to network vendors, some could survive by becoming the “Red Hat” of networking, focusing on telecomizing both open software and hardware and bundling them with support. Is that a good choice for our network predators?
No. If networks are built from hosted software instances on commodity hardware, the logical provider for the combination is a computer vendor, not a network vendor. It’s not that network people couldn’t do it, but that the computer players have the “new opportunity” advantage. A computer vendor who makes an open-model network sale gains the profit of that sale. A network vendor making the same sale loses the profit difference between the current high-margin proprietary gear and the lower margin of the open-model framework. Thus, the computer players can move first and fast. They’d win.
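To make that asymmetry concrete, here’s a toy calculation with purely hypothetical prices and margins (none of these figures come from any vendor); it simply quantifies the difference between booking new profit and cannibalizing your own.

```python
# A toy margin comparison, with invented numbers, to illustrate why the same
# open-model sale looks very different to a computer vendor and a network vendor.

proprietary_price, proprietary_margin = 100_000, 0.60   # hypothetical legacy proprietary deal
open_model_price, open_model_margin = 70_000, 0.30      # hypothetical white-box + software deal

open_model_profit = open_model_price * open_model_margin
proprietary_profit = proprietary_price * proprietary_margin

# The computer vendor had no networking revenue before, so the sale is pure gain.
computer_vendor_gain = open_model_profit

# The network vendor displaces its own proprietary deal, so its "gain" is the
# new margin minus the margin it would otherwise have earned.
network_vendor_gain = open_model_profit - proprietary_profit

print(f"Computer vendor incremental profit: {computer_vendor_gain:+,.0f}")
print(f"Network vendor incremental profit:  {network_vendor_gain:+,.0f}")
# Computer vendor incremental profit: +21,000
# Network vendor incremental profit:  -39,000
```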
We’re left with finding new habitat, which means finding additional services beyond the current bit-centric connection services that operators could sell, and then providing operators what they need to get to those services. Is this a good approach? No, but it’s probably the best approach on the table.
A service “beyond bits” is, in OSI model terms, a service above bits. The lower three layers of the OSI model define connection services, and the upper four define services as presented to applications. Transport-layer features handle things like flow control and end-to-end recovery. Session services handle relationship flows, relating to multiple consumers of connectivity at the same network service access point. Presentation services harmonize information coding and formats, and application services integrate non-connection elements like discovery. Above all of this are the applications, the experiences.
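As a quick way to visualize that split, here’s a minimal sketch; the layer names are standard OSI, the role notes just paraphrase the paragraph above, and the grouping into “bits” and “beyond bits” is mine.

```python
# The OSI stack as described above: the lower three layers are the connection
# services carriers sell today; the upper four face the application; the
# "experiences" sit above the stack entirely.

OSI_LAYERS = [
    (1, "Physical",     "connection service"),
    (2, "Data Link",    "connection service"),
    (3, "Network",      "connection service"),
    (4, "Transport",    "flow control, end-to-end recovery"),
    (5, "Session",      "relationship flows at a service access point"),
    (6, "Presentation", "harmonizing information coding and formats"),
    (7, "Application",  "non-connection elements like discovery"),
]

bit_pushing = [name for _, name, role in OSI_LAYERS if role == "connection service"]
beyond_bits = [name for _, name, role in OSI_LAYERS if role != "connection service"]

print("Bit-pushing layers:", bit_pushing)   # what operators sell now
print("Layers beyond bits:", beyond_bits)   # where higher-layer services would live
```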
Networks could in theory provide all seven of the OSI layers by adding higher-layer features to the network service access points. In practice, it’s difficult to make these layers’ services useful if they’re disconnected from the actual point of user/resource attachment, which is inside a data center, on a desktop, or in someone’s hand. That leaves the applications above the network, the things that actually deliver the “experiences”.
Some see this truth as the justification for edge computing, but before you run out and declare Gartner’s SASE model the future, consider that the edge is somewhere to host an experience, not the experience itself. Absent an experience to host, an edge computing facility contributes to global warming and doesn’t do much else. So, for now, forget the “where?” and focus on the “what?”
I’ve already blogged about the primary experience categories: personalized content, IoT, personalized or social communications…you get it. The challenges these experiences pose, from the perspective of our network operator, are many, but four stand out.
Number one is that operators like pushing bits. It’s their culture. The majority of operators evolved when connectivity was a mandate, so they never had to market anything. They didn’t “sell”, they allowed people to buy. Experiential services aren’t like that. You need to promote them because the demand for them isn’t obvious or natural. To make things worse, the current Internet model and operators’ current profit-per-bit crisis stem from the fact that over-the-top players have evolved to fulfill users’ experience needs. Operators would have to play catch-up.
Our second challenge is related to the first. Operators think “Field of Dreams”. They’re used to deploying into a market tuned for consumption, and that biases their planning to be “supply-side”. Don’t meet demand; it will meet you, as long as you do a good (five-year-long) standards project. That’s the 5G problem. We talk about what people could do with 5G, but what we really need to be asking is (1) what services are available under 5G that aren’t supported under 4G, (2) who is going to build the non-connection parts of those services, (3) who’s going to market them, and finally (4) who’s going to buy them? What operators need is a new service ecosystem, and they either have to resign themselves to being all the pieces of it or build partnerships to fill the voids they leave.
Challenge number three is first cost. For almost a century, operators have planned infrastructure deployments based on a kind of “S-curve”. When infrastructure deploys, the cost of deployment isn’t offset by service revenues because there are no customers for a service that was just announced. For a period of time, cash flow dips below the zero line. As adoption picks up, the revenues rise and cash flow rises too, going positive and eventually plateauing as the adoption saturates the total addressable market. That period when cash flow is negative is called “first cost”. If first cost is high, which would be the case if a large infrastructure investment was needed just to make the scope and performance of the service credible, then it’s a major burden to operators, one that will hit their bottom line and stock price. It’s hard to see how you could ease into a new service in a world that the Internet has made fully connected.
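The dynamic is easy to sketch in a few lines of code. The model below uses a logistic S-curve for adoption, a build-out period, and an ongoing operating cost; every number is invented purely for illustration, but the shape, a deepening cash-flow dip followed by a slow climb back to breakeven, is the “first cost” problem in miniature.

```python
# A rough sketch of the "first cost" dynamic: infrastructure is built ahead of
# demand, adoption follows a logistic S-curve, and cumulative cash flow dips
# below zero before revenue catches up. All figures are hypothetical.

import math

BUILD_QUARTERS = 8            # quarters over which the infrastructure is deployed
BUILD_COST_PER_QUARTER = 60.0 # capital spend during the build-out (arbitrary units)
OPEX_PER_QUARTER = 10.0       # ongoing operating cost once service is offered
REVENUE_AT_SATURATION = 40.0  # quarterly revenue at full market adoption
HORIZON = 40                  # quarters to simulate

def adoption(q, midpoint=12, steepness=0.5):
    """Fraction of the addressable market buying the service by quarter q."""
    return 1.0 / (1.0 + math.exp(-steepness * (q - midpoint)))

cumulative, deepest_dip, breakeven = 0.0, 0.0, None
for q in range(HORIZON):
    cost = OPEX_PER_QUARTER + (BUILD_COST_PER_QUARTER if q < BUILD_QUARTERS else 0.0)
    cumulative += REVENUE_AT_SATURATION * adoption(q) - cost
    deepest_dip = min(deepest_dip, cumulative)
    if breakeven is None and cumulative >= 0:
        breakeven = q

print(f"Deepest cash-flow dip (the 'first cost' exposure): {deepest_dip:.0f}")
print(f"Quarter when cumulative cash flow turns positive: {breakeven}")
```

The larger the infrastructure that has to be in place before the service is credible, the deeper that dip gets, which is exactly the burden on the bottom line and the stock price described above.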
The final challenge is architecture. It’s hard to control first cost, but it’s impossible if you don’t have an architecture that can be scaled with marketing success, and that doesn’t require regular fork-lift upgrades to keep current in scope and features. And it’s this challenge that vendors could help with.
The problem with architecture for experiential services is that it requires you to balance two imponderables. First, what are the services you’re targeting? Second, what’s the infrastructure you run on? Operators have generally grasped the second of these things, and standards/specification efforts like ETSI NFV have illustrated what happens if you accept the two requirements but attack only one, or attack them in the wrong order. Your first mission is to fulfill the service opportunity; only then does it matter how you do it or what you do it on.
Network vendors and computer/cloud vendors both have a foot in the door here, in that they’re engaged with the current OTTs and thus know what they’re doing, and they’re also engaged with the telcos. But even here, network vendors are at a bit of a disadvantage, given that their incumbency is in the bit-pushing area, and the opportunity is in what’s actually an application area, the IT domain. Further, experience with the OTTs should show network vendors that those companies have been champions of open-model networking. Does that mean that pushing network operators into experiential, OTT-like services would encourage them to follow the OTTs to the open model?
Still, the help-operators-get-more-experiential path is the best option open to network vendors, because it has at least a chance of working and nothing else does. The fact that IT vendors are already there simply means that network vendors have to move much more aggressively. They can’t play to come from behind, because if they get behind, they won’t have any play at all.
Who has the advantage here? Many will say it’s players like Ericsson or Nokia, because of their 5G connection, but I don’t agree. I think 5G is the boat anchor disguised as a life jacket. It locks vendors into the vain hope of connection-service revenue gains. I think that players like Cisco and Juniper have the edge. Cisco has a computer asset base it’s mishandled from the first. Juniper has a cloud software base (HTBASE and its Juke composable infrastructure model) that it’s likewise mishandled. Will one of these players outfumble the other?
For both companies, success may equal failure; only a disappointing quarter seems likely to force a rethink. Juniper already reported, and earnings were off for them in both products and services. Could that motivate Juniper to “think outside the bit”, as my trademarked ExperiaSphere tag line suggests we must? Cisco reports next week, and if they fall short and take a stock hit, it could motivate them.
Cisco may have the better base with the harder climb to face. They have more assets, but also more assets at risk if they try to shift to a less bit-centric positioning. Juniper has a very critical asset in the area of infrastructure abstraction, and there are a lot of moves they could make (and haven’t) that could gain them traction above the connection services layers, without risking much in their traditional market. Of course, they could both continue to do what they’ve been doing, which is “not enough.”
Making no choice is a choice, always. In this case, it’s the choice of diminution, getting smaller to match the opportunity size. That’s what’s going to happen to those network vendors who don’t see the light.