Fierce Telecom did a nice piece on the Top 12 Telecom Disruptors of 2019, focusing on the vendor side of the industry. I’d like to propose that the top disruptor isn’t on the list, because rather than being a vendor, it’s an “anti-vendor”. The open-model network is, by the accounts of operators and vendors alike, and in my own view, the most important thing that’s happened in telecom since IP.
Open-model networking is in a sense a superset of all the stuff we’ve been talking about for the last decade. SDN? It’s in there. NFV as well. We can find cloud-native, carrier cloud, 5G, white boxes, gray boxes, virtual networks, SD-WAN, and all the rest tucked into open-model corners. The whole thing can be seen as an example of how you really should start transformations from the top and move down, because if you start at the bottom it will take a decade for things to shake out to a point where real progress can be made.
A network is a mesh of nodes and trunks, with the former providing for the routing/switching of traffic between instances of the latter, to create full connectivity among the endpoints being served. From the very first days of data networking, before IP networks were even popular, we built networks with nodes and trunks. As networks became more essential to businesses, and as operators transitioned to a data-centric service model, vendors supplied the nodal and trunk technologies…at a profit.
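The node-and-trunk picture can be sketched as a simple graph: nodes route traffic between trunks so that every endpoint can reach every other. Here is a minimal illustration (the topology and node names are invented for the example, not taken from any real network):

```python
# Model a network as nodes connected by trunks, then verify full
# connectivity: every node can reach every other node.
from collections import deque

trunks = [  # (node, node) links; names are hypothetical
    ("core-1", "core-2"),
    ("core-1", "edge-a"),
    ("core-2", "edge-b"),
]

# Build an adjacency map from the trunk list.
adj = {}
for a, b in trunks:
    adj.setdefault(a, set()).add(b)
    adj.setdefault(b, set()).add(a)

def reachable(start):
    """Breadth-first search: the set of nodes reachable from 'start'."""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for nxt in adj[node]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

# Full connectivity means every node reaches the whole node set.
print(all(reachable(n) == set(adj) for n in adj))  # True
```

The nodes (routers/switches) are where vendors have historically made their money; the open-model argument is about who supplies those vertices.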
The problem with this vendor-centric network vision is that year after year, all public companies are expected to show profit improvement. That can come only by a combination of raising revenue and cutting cost. Raising revenue has been the mainstay strategy for network equipment vendors, based on the fact that the public appetite for bandwidth has been growing yearly. However, the public doesn’t consume routers, it consumes broadband, and the intermediaries in this situation—the network operators—have their own profit-growth goals. People won’t pay twice as much for twice the bandwidth; they really don’t want to pay anything extra at all. That’s what launched the open-network model.
The first real interest I saw in open-model networking by telcos was in 2013, before NFV got started. A big Tier One was looking at the Vyatta hosted-router model, which Brocade had just acquired. The theory was simple: if you had router software that conformed fully to IP standards, and you could run it on a commodity server, you could eliminate the profit margin of the router vendors and build your own network more profitably.
NFV actually kind of short-circuited this early interest; the operator turned instead to NFV and to hosted-feature models for all devices, but NFV itself concentrated on customer edge devices, the “virtual CPE”. Another operator (AT&T) has now taken up the leadership position in open-model networking, and they’re elevating the concept to the point where it includes that early hosted-router interest, and a lot more. Nothing can stop open-model networking at this point, so the question is who will profit from it, and how. To get to the answer, we need to decompose an open-model node.
A node is a combination of feature software and hardware platform. Commercial off-the-shelf (COTS) servers are a viable hardware platform, provided that the network throughput is adequate and the resource cost is limited. White-box switches are the other option, a better one where cost/performance has to be critically controlled. The current trend seems to be to use white-box switches for open-model nodes that are not part of a resource pool (on premises, in cell sites, etc.) and COTS servers where nodal resources are drawn from a pool, and a “universal” solution is needed to maintain pool efficiency.
On the software side, the platform software could be either a scaled-down version of a server OS, meaning Linux, or an “embedded control” or device-specific OS. The current trend is to use the former for COTS server pools and the latter for discrete open-model nodes. There are a number of commercial and open-source choices for both approaches.
The feature software itself could, in theory, take one of three possible forms. First, it could be a simple software instance of a router or appliance, like the original Vyatta stuff. Second, it could be a virtual network function (VNF) deployed using the ETSI NFV ISG’s mechanisms, and finally it could be a programmable flow machine based on something like the P4 flow-programming standard. It’s hard to say which of these dominates today, so we’ll have to look a bit deeper at this point.
Today, the reason for the lack of a dominant approach is that we lack a dominant mission. Simple cloud instances, the “Vyatta model”, are almost surely the best approach where the nodes will serve multiple customers. While ETSI NFV could in theory deploy “shared” elements, that’s not been the focus of the group, and in my view the work on that mission is inadequate. ETSI NFV is most useful where a per-customer service is being created by a combination of hosted feature instances, and P4 is most useful when it’s applied to white-box devices that have specialized semiconductor support for forwarding, and are equipped with the P4 driver software.
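To make the P4-style option more concrete, here is a toy match-action pipeline in Python. This is an illustrative sketch of the flow-programming idea behind P4, not the P4 language itself; the packet fields, prefixes, and port numbers are all invented for the example:

```python
# A toy match-action flow table: match on a destination prefix
# (longest prefix wins), then apply a forwarding action.
from dataclasses import dataclass

@dataclass(frozen=True)
class Packet:
    dst_ip: str
    ttl: int

class FlowTable:
    """Maps a destination prefix to a forwarding action."""
    def __init__(self):
        self.entries = []  # (prefix, action) pairs

    def add(self, prefix, action):
        self.entries.append((prefix, action))
        # Keep longest prefixes first so the most specific match wins.
        self.entries.sort(key=lambda e: len(e[0]), reverse=True)

    def apply(self, pkt):
        for prefix, action in self.entries:
            if pkt.dst_ip.startswith(prefix):
                return action(pkt)
        return None  # no entry: default drop

def forward(port):
    """Action factory: forward to 'port' unless the TTL is exhausted."""
    def action(pkt):
        if pkt.ttl <= 1:
            return ("drop", None)
        return ("forward", port)
    return action

table = FlowTable()
table.add("10.0.", forward(1))
table.add("10.0.1.", forward(2))

print(table.apply(Packet("10.0.1.7", ttl=64)))    # ('forward', 2)
print(table.apply(Packet("10.0.9.9", ttl=64)))    # ('forward', 1)
print(table.apply(Packet("192.168.0.1", ttl=64))) # None (dropped)
```

In a real P4 device the tables and actions are compiled onto specialized forwarding silicon, which is why this approach pays off mainly on white boxes with that hardware support.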
Over time, what I’d expect to see is a fusion of these approaches, which means that we’ll get a series of “compatible” hardware models, both for specialized hosting of network features in the cloud, and for use in discrete-node applications. This will be accompanied by a common API set that will present platform features upward to the feature software. Feature software of all sorts will then be hosted on top, and whether the platform conforms to ETSI NFV management and orchestration or offers P4 flow programming will depend on the feature software itself.
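A common API set presenting platform features upward might look something like the interface below. Every name here is a hypothetical illustration of the idea, not an actual or proposed standard:

```python
# Hypothetical sketch of a uniform platform API that feature software
# could code against, whether the node is a pooled COTS server or a
# discrete white box.
from abc import ABC, abstractmethod

class NodePlatform(ABC):
    """Services a feature package could expect from any conforming node."""

    @abstractmethod
    def ports(self) -> list:
        """Enumerate the data-plane ports the platform exposes."""

    @abstractmethod
    def install_forwarding_rule(self, match: dict, action: str) -> None:
        """Program the forwarding path (a P4 pipeline, a kernel
        datapath, or whatever the hardware provides)."""

    @abstractmethod
    def telemetry(self) -> dict:
        """Operational state for management and orchestration."""

class WhiteBoxPlatform(NodePlatform):
    """Toy realization backed by an in-memory rule list."""
    def __init__(self, port_names):
        self._ports = list(port_names)
        self._rules = []

    def ports(self):
        return list(self._ports)

    def install_forwarding_rule(self, match, action):
        self._rules.append((match, action))

    def telemetry(self):
        return {"ports": len(self._ports), "rules": len(self._rules)}

box = WhiteBoxPlatform(["eth0", "eth1"])
box.install_forwarding_rule({"dst": "10.0.0.0/8"}, "out:eth1")
print(box.telemetry())  # {'ports': 2, 'rules': 1}
```

The point of the abstraction is that the feature software above the API line neither knows nor cares which “compatible” hardware flavor sits below it.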
This obviously commoditizes hardware, and a lot of vendors will beat their breasts on that point, but the IBM PC revolutionized desktop computing by creating a commodity hardware model. You can still make money on that kind of hardware, though obviously not at the margins network vendors might be used to. The defense would be to get out of the hardware business, at least for those “compatible” hardware models that offer no useful differentiation to protect pricing from competitive impact.
You could also be competitive at the platform level, the system software that overlays the hardware. Think of this as the competition among Linux distros that we see today, or the competition among the various flavors of Kubernetes. Our “compatible” hardware flavors might focus on a series of network missions, and platforms could then support each of the flavors/missions. That means that vendors could offer a combination of software platform and hardware for specific missions, and work to differentiate themselves through chips, features, and operations support.
In a way, this approach leaves network vendors doing much the same thing they do now—practicing feature differentiation and even some lock-in—but doing it in a different way. They have to unbundle software and hardware/platform, and they have to obey some standards with respect to their implementation of both, but they can continue to offer proprietary alternatives to any fully open solutions that emerge.
This approach has an obvious problem; being a part of any open-model movement creates a significantly greater competitive risk, particularly when server and software vendors might be better-suited to compete there. It also has a potential benefit, in that it could prepare network vendors for participation in broader carrier-cloud services, should those services emerge.
The network of the future will be the product of the old smart/dumb debates of the past. Operators will either have to cut costs to the bone while staying in dumb-pipe mode, or climb the value chain. If they do the former, then their ability to purchase network equipment will be seriously constrained, and that will tip network vendors into the commodity space. If they do the latter, they will necessarily consume more “feature-hosting” resources. Network services are also features and can be hosted on these new resources, along with the new service features that live higher on that value chain. This is a great opportunity for network vendors to look at the feature-hosting, open-model technology and make a place for themselves in the space, if they move fast enough to get ahead of the trend.
That’s been the real problem, of course. Both buyers and sellers in the telecom space have tried to ignore the obvious signs of industry change, hoping for some miraculous force to save them. It’s time everyone faced facts: they have to do something, something uncomfortable, to save themselves.