How do you compete in an open-device future? It seems obvious that we’re not only headed in that direction; in some market areas we’re already on the doorstep. Vendors are not going to lie down with a rose on their chest; they’ll try to fight. But how will they do that? There are a number of potentially fruitful avenues they could take, and some may try to take them all.
The challenge of open devices is the dualism of software and hardware. Both have to be interchangeable, which means the marriage point between the two has to be defined and maintained in a standard way. Fortunately for vendors, there’s a solution.
One of the most obvious, and most promising, strategies for differentiating future products in an open-model world is custom silicon. We see this in the PC market already, with graphics processing. There are various standard APIs for computers (DirectX, OpenCL, and OpenGL, for example) that let specialized hardware implement open functions. We now have such a standard for network devices in P4, a flow-programming language whose toolchain uses a vendor-supplied plugin (a compiler back-end or driver) to match the specific details of the silicon underneath. As long as a vendor provides a P4 driver for their chip, open P4 software will run on it.
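To make that split concrete, here’s a minimal Python sketch of the pattern P4 enables: the open-model control software is written against a generic driver interface, and each vendor supplies its own implementation for its own chip. The names here (P4TargetDriver, load_pipeline, install_rule, VendorXDriver) are invented for illustration and aren’t part of any real P4 toolchain.

```python
# Sketch of the "open software, vendor driver" split that P4 makes possible.
# All names are illustrative; this is not a real P4 or P4Runtime API.
from abc import ABC, abstractmethod


class P4TargetDriver(ABC):
    """What a vendor supplies: a driver mapping open P4 artifacts onto its silicon."""

    @abstractmethod
    def load_pipeline(self, compiled_p4: bytes) -> None: ...

    @abstractmethod
    def install_rule(self, table: str, match: dict, action: str) -> None: ...


class VendorXDriver(P4TargetDriver):
    """Hypothetical driver for one vendor's custom flow-handling chip."""

    def load_pipeline(self, compiled_p4: bytes) -> None:
        print(f"VendorX: programming ASIC with {len(compiled_p4)} bytes of pipeline config")

    def install_rule(self, table: str, match: dict, action: str) -> None:
        print(f"VendorX: installing {match} -> {action} in table {table}")


def open_controller(driver: P4TargetDriver) -> None:
    """The open-model software: identical no matter whose chip sits underneath."""
    driver.load_pipeline(b"\x00" * 128)  # stand-in for compiler output
    driver.install_rule("ipv4_lpm", {"dstAddr": "10.0.0.0/8"}, "forward_to_port_3")


if __name__ == "__main__":
    # Swap in another vendor's driver and nothing above this line has to change.
    open_controller(VendorXDriver())
```

The point of the sketch is that any differentiation lives below the driver line, in the silicon and the driver itself; everything above it stays open and portable.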
For this approach to work, the vendor would have to design its custom silicon itself; otherwise, others could reproduce the hardware model and it wouldn’t be differentiating any longer. That’s a challenge for at least some vendors, but it could well spur significant advances in flow-handling silicon, both by network vendors and by chip players who want to take advantage of P4 for special missions.
Chip-level differentiation by itself has challenges. Products are normally differentiated by features, meaning features with visible utility. Chips might make things a bit cheaper or faster, but the range of applications in which those subtleties matter is narrow. It would be better to have something that changed the basic utility of a service.
Which brings us to our second possibility for vendors: a “service architecture”. A network is a cooperative community of functional elements. The individual elements can be allowed to commoditize if the architecture of the community can be differentiated. Since features tend to be created above the flow or traffic level, this kind of differentiation would fit the goal of creating high-level benefits.
We have, for the cloud at least, some examples of service architectures in the collections of mission-targeted web services offered by cloud providers. In technical terms, this approach means defining “services” made available through APIs that could then be coupled in some way to the open-model software. To keep your new service ecosystem from looking like a renewal of proprietary networking, though, you’d have to ensure that the coupling to the open model was itself open, meaning you couldn’t build device-specific linkages to the service level just to pull through your whole product line.
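As a rough illustration of what that “open coupling” might look like, here’s a Python sketch in which the device-facing contract is a published, vendor-neutral API and the differentiation lives entirely behind it. Every name here (ServiceAPI, FlowRecord, VendorAnalytics) is hypothetical.

```python
# Sketch of an open service coupling: the contract the open-model device software
# sees is generic; the value-added implementation behind it is the differentiator.
from dataclasses import dataclass
from typing import Protocol


@dataclass
class FlowRecord:
    src: str
    dst: str
    bytes_sent: int


class ServiceAPI(Protocol):
    """The open part: any conforming implementation can be plugged in."""

    def analyze(self, flows: list[FlowRecord]) -> dict: ...


class VendorAnalytics:
    """The differentiated part: a vendor's own (here trivially faked) analytics service."""

    def analyze(self, flows: list[FlowRecord]) -> dict:
        heavy = [f for f in flows if f.bytes_sent > 1_000_000]
        return {"heavy_hitters": [(f.src, f.dst) for f in heavy]}


def open_device_export(service: ServiceAPI, flows: list[FlowRecord]) -> dict:
    """Open device software depends only on the published ServiceAPI contract."""
    return service.analyze(flows)


if __name__ == "__main__":
    sample = [FlowRecord("10.0.0.1", "10.0.0.9", 5_000_000),
              FlowRecord("10.0.0.2", "10.0.0.9", 200)]
    print(open_device_export(VendorAnalytics(), sample))
```

The moment the contract itself becomes vendor-specific, the ecosystem starts to look like proprietary networking with extra steps, which is exactly the trap described above.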
This could be a heavy lift, though, for a couple of reasons. First, service providers at least are very wary of proprietary models. Think of a 5G that didn’t conform to international standards and you’d get my point. On the enterprise side, there’s a greater willingness to trade “open standards” for a specific cost (capital or operations) advantage, or for unique benefits.
The evolution of the “Kubernetes ecosystem” offers an example of the approach. Google’s Anthos just won a DoD deal for multi-cloud Kubernetes orchestration. Anthos isn’t a proprietary tool, at least not directly, but because Google effectively branded it as part of their multi-cloud strategy, it’s associated strongly with Google, and others have so far been inclined to compete rather than adopt.
The biggest problem, though, is the almost-universal reluctance of operators to contemplate services above the network layer. Vendors themselves aren’t exactly eager to shift their focus to higher-layer services; it opens them up to competition from the computer and platform-software side, the cloud-provider side…you get the picture. Add to that the fact that vendors’ buyers are willing to believe any silly story that’s presented (like everyone buying 5G to support their IoT sensors) as a way of dodging the need to face an unknown technical world.
That doesn’t mean that the services option is a bad one, but it probably means it’s not a quick one. If operators are straining to sustain capital spending, they’re even less likely to jump into a new area. That’s even more likely to be a problem when that new area would require a considerable up-front capital investment in infrastructure (the dreaded “first cost”). If operators let the old earth take a couple of whirls, as the song goes, then it might well whirl into 2022 before attitudes shift toward services, and vendors are going to have a problem waiting that long.
That leaves our third option, which is another sort of “ecosystem” story. If you offer network equipment that’s linked into a network-level ecosystem, the ecosystem tends to pull your devices through with it, particularly in the enterprise space. Enterprises say they want open technology choices, but they still tend to buy from their favored and traditional vendor unless that vendor messes up badly.
Cisco is the proof in this pudding, of course. Their last quarter was decent, especially considering the onrushing pandemic that contaminated the end of it. Their salvation was the enterprise market, and that market was sustained for Cisco by skillful framing of a pan-network story, things like ACI, and a strong program to certify technologists, making their careers somewhat dependent on Cisco’s continuing dominance of the enterprise.
The problem in this area is that, for most vendors, you can’t simply declare an ecosystem. Cisco had the best account control of all the network vendors before the current problems came along, and so they could exploit their assets immediately. Other vendors would have to wrap ecosystem claims around something novel. Juniper is already trying that with Mist/AI.
Management, or operations automation, is certainly a logical basis for a network-level ecosystem. If you want to collectivize network-layer behavior, and so sustain your focus there while building differentiation, the next level up is a great choice. The challenge is that for enterprises, zero-touch or automated lifecycle management is a harder sell. The majority of enterprise-owned network technology goes into the data center, where you can oversupply capacity at little or no ongoing cost, and overprovisioning removes much of the operational complexity that automation is supposed to address.
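For a concrete picture of what’s being sold here, the sketch below shows the closed-loop idea behind zero-touch lifecycle management: compare desired state to observed state and remediate the difference. The device names and the stubbed-out telemetry and remediation steps are invented for illustration; a real system would sit on top of actual telemetry and orchestration tooling.

```python
# Minimal closed-loop sketch of automated lifecycle management:
# observe the network, compare it to the desired state, and remediate drift.
desired_state = {"core-sw-1": "up", "edge-rtr-2": "up", "edge-rtr-3": "up"}


def observe() -> dict:
    """Stand-in for real telemetry collection (SNMP, streaming telemetry, etc.)."""
    return {"core-sw-1": "up", "edge-rtr-2": "down", "edge-rtr-3": "up"}


def remediate(device: str) -> None:
    """Stand-in for a real action: reroute, restart, open a ticket, and so on."""
    print(f"remediation triggered for {device}")


def reconcile_once() -> None:
    observed = observe()
    for device, wanted in desired_state.items():
        if observed.get(device) != wanted:
            remediate(device)


if __name__ == "__main__":
    # A real system would run this loop continuously; one pass is enough here.
    reconcile_once()
```

In a data center that’s simply overprovisioned, the loop rarely finds anything to fix, which is exactly why the automation story is a harder sell there.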
Hard or not, this seems like it’s going to be the strategy of choice for network vendors in the near term. But just because it’s inevitable doesn’t mean it’s going to work. Except for Cisco, the idea of a network-layer ecosystem is hard to promote in the enterprise, because it’s inherently a defense of your installed base and Cisco is the market leader there. In the service provider space, the problem of open-model pressure on a capital spending plan that’s already in decline for return-on-infrastructure reasons isn’t easily solved.
Bits aren’t differentiable; they’re ones or zeros. Inevitably, bit-pushing is going to commoditize, and so as long as either enterprises or service providers focus on bit-pushing as their bottom line in networking, the network vendors face ongoing challenges. Like it or not, broader network virtualization issues may be the only answer.
What’s “above” the network, if not the “virtual network”? There’s still an open zone up there, a place where new security, governance, and management features can be added. SD-WAN is only now starting to move out of the specialized mission that launched it and into a new virtual-network space. That may be where everyone needs to go, but even that’s a challenge, because SD-WAN’s old mission competes with the MPLS VPNs that drive service provider spending. The next decade is going to require careful navigation of risks and opportunities for network equipment vendors, and some aren’t going to stay out of the deep brown.