A lot of companies in the networking space are “reorganizing” or “refocusing”, and most of them say they’re getting more into “software” and “services”. Their intentions are good, but their plans may be as vague as my summary makes them sound. In 2020, every company in networking is going to face significant pressures, and COVID-19 will only exacerbate challenges and risks that would have been daunting even with a healthy global economy.
What’s the future of network equipment? That’s the real question, and it’s a difficult one to answer, in no small part because people aren’t very good at staring risk in the face when they can stick their heads in the sand instead. Most senior executives in the network equipment space learned their skills in the heyday of supply-side infrastructure. We had IT and information, and we needed a better mechanism to deliver them to workers. Networking was that mechanism, but technical limitations and a lack of mature service deployments were barriers. Pent-up demand means supply-side drivers, and that’s what we had…till recently.
For about ten years, there have been clear signs that enterprises and operators alike were running into cost/benefit problems with incremental network spending. On the enterprise side, the balance of IT and networking funding, historically a combination of sustaining budgets for existing applications and services and project budgets for advancing productivity, has slid more and more toward the sustaining-budget-only category. For some companies, a full third of past IT and network funding has thus been put into question. For service providers, revenue per bit has declined, and ever more stringent capex and opex cuts are needed to keep ROI on infrastructure from going negative.
The biggest problem that networking has, which is therefore the biggest problem that network vendors have, is that networking is the truck that delivers your Amazon order, not the order itself. The value proposition isn’t what networking is but what it delivers. That, to a very significant degree, puts the means of raising budgets and improving network spending beyond the reach of network vendors themselves.
The biggest challenge for the network operators is how to rise up the value chain, and sadly they seem stuck in the notion that adding lanes to the highway, or introducing a highway with adjustable lanes, is progress. Earth to Operators: It’s still delivering the goods, not being the goods. There is no connectivity value chain above a bit pipe, so operators need to be thinking about what they can do besides push bits, and vendors need to be helping.
I’ve talked about what I thought was above the network, value-wise. It’s the contextualization services that could improve personal viewing, communications, ad targeting, and worker productivity. Rather than beat you over the head with that (again!), let me just say that the OSI model long recognized that applications were the top of the value chain, the highest of the higher layers. In today’s world, that almost surely means some set of cloud-native, cloud-hosted tools that create something people actually want. If that’s true, then companies like Cisco will eventually have to look above things like ACI to cloud-application tools. “Eventually” isn’t necessarily “immediately”, though, and even companies with strong higher-layer strategies evolving as we speak may need a mid-life kicker.
If basic network equipment is going to be under increased price pressure, then the emergence of an open-model approach is inevitable. In fact, we’ve been seeing efforts to establish open-model networking in the network operator segment of the market for a decade. Back in early 2013, I had a meeting with most of the senior management of a big telco, and they were looking even then at having router software hosted on server platforms. Their thinking was representative of the explorations that eventually drove NFV.
NFV’s problem, from an infrastructure efficiency perspective, was that it was designed to support a lot of dynamism and elasticity in an environment where virtual functions were dedicated to individual users. The real problem to be solved wasn’t that at all; it was supporting multi-user instances of “router functionality”. Today, operators see that coming about through a combination of open-source network software and white-box devices built to work together.
The separation of “software” from “hardware” here is one driver behind vendors’ proposed shifts to a more software-focused business model. Software is where features live, even in appliances like routers, and so it seems likely that the hardware side of this duo will commoditize. In fact, some may hope it does just that, because commoditization would sharply reduce capex, stave off Huawei competition, and still leave an opportunity in “software” and “services”.
The obvious question is whether it’s possible to differentiate vendor software for gadgets like routers from open-source software for the same devices. Cisco, always a leader in thinking about how to optimize sales in any situation, has addressed this challenge through the notion of a control/management ecosystem. Basic forwarding devices live in a management and policy world defined by initiatives like ACI. Even if this world is created by open standards to appease network operators’ fears of lock-in, it would take time for the open-source movement to address this new, higher layer of network software functionality. Let’s call this new layer the policy layer, for reasons you’ll see below.
The task of creating a policy layer for an IP network in open-source form would be somewhat like creating the Kubernetes ecosystem; you assemble symbiotic elements around a central, pivotal element. However, while Kubernetes is likely to play a role in policy-layer definition, it wouldn’t be the central element because most open-device deployments would, like real routers, be physical appliances in a static location. Kubernetes is a great anchor point for cloud application models because it’s about deploying the pieces. What, in a network control and management layer, is the critical anchor point?
I (again) resort to the world of poetry; Alexander Pope in particular. “Who sees with equal eye, as God of all…” is Pope’s question, and if Pope were a network person, he’d likely say “the central controller”. I think that networks are built around a vision of universal connectivity, subject to policy controls that limit what parts of the connectable universe can really be connected, and how well. Think of this as a sort of SDN controller, or as the controller that some propose to have in a segment routing approach. This controller may act through agents, it may be physically distributed even though it’s logically singular, but it’s the place where everything in a network should be looking.
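To make the “equal eye” idea a bit more concrete, here’s a minimal sketch of a logically singular controller that evaluates every connection request against policy and acts through per-device agents. All of the class names and interfaces here are hypothetical illustrations, not drawn from any real SDN controller or vendor product:

```python
# A toy model of a central controller: universal connectivity, limited by policy,
# enforced through device-resident agents. Names are illustrative only.

class Agent:
    """Represents a device-level agent that applies the controller's decisions."""
    def __init__(self, device_id):
        self.device_id = device_id

    def apply(self, decision):
        print(f"{self.device_id}: installing forwarding state for {decision}")


class CentralController:
    """Logically singular view of the network; it could be physically distributed."""
    def __init__(self, policies):
        self.policies = policies      # list of callables: request -> bool
        self.agents = {}

    def register(self, agent):
        self.agents[agent.device_id] = agent

    def request_connection(self, src, dst):
        request = (src, dst)
        if all(policy(request) for policy in self.policies):
            for agent in self.agents.values():
                agent.apply(request)
            return True
        return False


# Everything is connectable in principle, but policy blocks anything touching "guest".
controller = CentralController(policies=[lambda req: "guest" not in req])
controller.register(Agent("edge-router-1"))
controller.request_connection("hq", "datacenter")      # permitted
controller.request_connection("guest", "datacenter")   # denied by policy
```

The point of the sketch is the shape, not the code: one place holds the connectivity-plus-policy view, and the devices simply carry out what that view dictates.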
Policies define the goals and constraints associated with the behavior of a system of devices or elements, which of course is what a network is. Policies define the services, the way infrastructure cooperates, and the steps that should be taken to remedy issues. Policies, properly done, can transform something so adaptive and permissive it’s a risk in itself (like an open IP network) into a useful tool. Finally, policies may define the way we unite cloud thinking and network thinking, which is crucial for the future of networks. Network vendors, especially Cisco, have been pushing policies for a long time, and it may be that policies are about to pay off.
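If we take that definition literally, a policy is really just structured data: a scope, a goal, the constraints that bound it, and the remediation to apply when the constraints are violated. Here’s a minimal sketch of what such a representation might look like; the field names and example values are invented for illustration and don’t correspond to any vendor’s or standard’s schema:

```python
# A toy representation of a network policy: goal, constraints, remediation.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Policy:
    name: str                                           # e.g. "branch-voice-traffic"
    scope: str                                          # which users, sites, or services it governs
    goal: str                                           # the intended service behavior
    constraints: List[str] = field(default_factory=list)  # limits on connectivity or QoS
    remediation: List[str] = field(default_factory=list)  # ordered steps if constraints are broken

voice_policy = Policy(
    name="branch-voice-traffic",
    scope="site:branch-*",
    goal="connect branch voice endpoints to the SIP gateway",
    constraints=["latency <= 50ms", "packet-loss <= 0.5%"],
    remediation=["reroute via secondary path", "alert operations"],
)
```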
Of course, policies can’t be a bunch of disconnected aspirations. Any policy system requires a means of applying the policies across discontinuous pieces of infrastructure, whether the discontinuity is due to differences in technology or implementation, or due to differences in ownership and administration. If Kubernetes contributes the notion of an anchor to our future policy layer, it also contributes the notion of federation…at least sort of. I came across the idea of federation, meaning the process of creating central governance for a distributed set of domains, in the networking standards activities of the early 2000s. It’s not well-known in networking today, but the approach we see in concepts like Google’s Anthos looks broadly useful in network federations too.
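As a rough picture of what federation adds, here’s a sketch of a central “federator” that distributes one policy set across domains that differ in technology or ownership, each reached through its own adapter. The domain types and the adapter idea are my own illustration of the concept; they aren’t how Anthos or any real product is built:

```python
# A toy model of policy federation: central governance, per-domain translation.

class DomainAdapter:
    """Translates a federation-level policy into a domain's local terms."""
    def __init__(self, domain, technology):
        self.domain = domain
        self.technology = technology

    def push(self, policy):
        # A real adapter would map the policy onto local mechanisms
        # (flow rules, route policies, cloud firewall rules, and so on).
        print(f"[{self.domain}/{self.technology}] enforcing: {policy}")


class Federator:
    """Holds the central policy set and distributes it to every member domain."""
    def __init__(self):
        self.domains = []

    def join(self, adapter):
        self.domains.append(adapter)

    def distribute(self, policy):
        for adapter in self.domains:
            adapter.push(policy)


federation = Federator()
federation.join(DomainAdapter("metro-network", "segment-routing"))
federation.join(DomainAdapter("partner-cloud", "virtual-network"))
federation.distribute("isolate IoT traffic from corporate traffic")
```

The governance is central; the enforcement is local. That’s the whole trick, whether the discontinuity is technological or administrative.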
You could view this future equal-eye approach as one that starts with the overall policy-layer models and from them drives lower-layer functions appropriate to the network technology in use in various places. We don’t have to presume uniform implementation of networks, only universal susceptibility to management, if we have something that can apply policies. The most mature example of this is the cloud, which suggests that cloud tools designed to support some sort of hybrid cloud or multi-cloud federation could be the jumping-off point for a good policy story. Cisco’s ACI and prior policy initiatives align better with that older network-to-network policy federation than they do with the cloud.
Interestingly, Cisco rival Juniper bought HTBASE, one of several companies specializing in creating cloud virtual infrastructure. It was a strong move, weakly carried through, but it probably took out one of the few possible acquisition targets for network vendors in the space. Cisco, to get a real position above the network in the cloud, would have to buy somebody a lot bigger, and that would be pretty difficult right now. For the other network vendors, things are even sketchier. Nokia and Ericsson are laser-focused on 5G, which isn’t anywhere close to where a “higher-layer value story” would demand they be thinking.
This would have been a great time for a startup, or better yet a VC stable of related startups, to ride in and save the day. There doesn’t seem to be any such thing on the horizon, though. VCs have avoided the infrastructure space for over a decade, shunning substantive-value companies in favor of bets that offer a quick flip, which this sort of thing wouldn’t. It seems to me that the cloud providers themselves are the ones most likely to drive things here.
Cloud providers will likely build the framework for contextual services, as I’ve already noted. They might also end up building the framework of the control/management ecosystem that will reign over the open-model technologies that will be a growing chunk of the network of the future. However, vendors like Cisco or even Juniper could give them a run for their money.
We’re clearly going to take a global economic hit this year, a hit that could make buyers even more reluctant to jump off into a Field of Dreams investment in infrastructure. If that’s the case, then a policy-layer-centric vision of the future might be an essential step in bridging the gap between a network that hauls stuff and one that is stuff.