It’s nice to find a thoughtful piece on technology, and a potential new source for such pieces. The Next Platform, published in the UK by Stackhouse, is offering such a piece HERE and it’s worth digging into a bit. While the titular focus is “disaggregated routing” and perhaps DriveNets’ recent funding bonanza, there are plenty of broad messages to consider.
The opening of the story is certainly strong. It frames the current network technology developments as the outcome of a long-term struggle between the benefits of specialized chips in pushing packets, and the benefits of software in molding packet-pushers, first into networks and then into differentiable services. While that’s certainly not the sort of top-down approach to things that I naturally look for, it does lead to some provocative points.
In computing, we’ve seen a hardware evolution and a software revolution taking place at the same time. Early microprocessor chips were, by current standards, so anemic as to be useless in most commercial applications. As they got more powerful and more could be done with them, it was hardly likely that PC buyers would turn into a vast army of programmers building their own applications to take advantage of the new power. What was needed was packaged software, something sold by somebody to many users.
The concept of separate, packaged software releases the bond that ties an entire IT investment to its most capital-intensive, depreciation-sensitive asset: the hardware. In IT, you buy a server, and while you depreciate it over perhaps five years, you can run anything you like on it.
Let’s now look at this same power dynamic in networking. For decades, there’s been routing software available in open-source form. The UNIX Berkeley Software Distribution (BSD) stuff included it, and that helped to pull TCP/IP into network supremacy when UNIX began to displace proprietary operating systems. As recently as 2013, Tier One operators were trialing hosted-router software (from Vyatta) as a replacement for proprietary routers. The software ran on “COTS” or commercial-off-the-shelf servers, and performance was…well…OK. Not surprisingly, silicon innovation evolved to improve it.
We have a lot of switch/router silicon available today, from companies like Broadcom, Nvidia, Marvell, Intel, and even Cisco (Silicon One). This is the same kind of innovation we’ve seen in computer graphics chips, network adapters, and other interfaces used in computer systems. It creates the same challenge, which is to create packaged software that’s portable across some reasonable base of technology. The solution is what’s generically called a “driver”, which is a software component that takes an abstraction of a broad set of interface requirements and maps them to one or more specific implementations. Software can then be written to the abstraction, and the proper driver will let it run in multiple places.
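To make the driver idea concrete, here’s a minimal sketch in Python. The class names are mine, not any vendor’s SDK; the point is only that the routing logic is written once against an abstraction, and each chip family supplies its own mapping underneath it.

```python
from abc import ABC, abstractmethod

# The abstraction: packaged routing software programs forwarding through
# this interface and never touches a specific chip's SDK directly.
class ForwardingDriver(ABC):
    @abstractmethod
    def add_route(self, prefix: str, next_hop: str, port: int) -> None: ...

    @abstractmethod
    def remove_route(self, prefix: str) -> None: ...

# One concrete driver per silicon family (hypothetical here); only this
# layer knows chip-specific table formats and SDK calls.
class HypotheticalAsicDriver(ForwardingDriver):
    def add_route(self, prefix: str, next_hop: str, port: int) -> None:
        print(f"ASIC: program {prefix} -> {next_hop} via port {port}")

    def remove_route(self, prefix: str) -> None:
        print(f"ASIC: withdraw {prefix}")

# The packaged software is written to the abstraction, not the chip.
class RoutingStack:
    def __init__(self, driver: ForwardingDriver):
        self.driver = driver

    def install(self, prefix: str, next_hop: str, port: int) -> None:
        self.driver.add_route(prefix, next_hop, port)

RoutingStack(HypotheticalAsicDriver()).install("10.0.0.0/24", "192.0.2.1", 7)
```

Swap in a different driver and the same stack runs on different silicon; that’s the whole portability argument in miniature.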
Run on what? The industry has evolved its own answer to that too. Rather than demand a COTS server architecture to glue chips onto, a “white box” model has emerged that optimizes the overall hardware platform for the specific network mission the chips are supporting, which is switching and routing. White boxes will generally have a specific mission, and a specific set of optimizing chips to support the mission. You put packaged software on white boxes and you have a network device.
Here’s where I think the article misses an important truth. Networks are built from network devices, and how the devices combine to build a network is intrinsic to the architecture of the devices. If we took packaged routing software and combined it with a white box, we’d have a white-box-generated example of what could be called a “black box abstraction”. Black boxes are functional elements whose internal structures are opaque. They’re known by the relationships among their interfaces, their external properties. Thus, a white-box router and a proprietary router are both “routers” and they build “router networks”.
This is the most critical point in the evolution of networks today, because if we nail a new technology into a legacy mission, we may constrain it so much that it loses a lot of its potential. What’s the difference between any two implementations of a black-box router? From the outside (by definition) they look the same, so the only meaningful difference can be cost. Cost, friends, isn’t enough to drive transformation.
On a box-per-box basis, meaning one white-box router versus one proprietary router with the same specs, you’re looking at cost savings of between 25% and 40% for the white box. Operators tell me that a major replacement of network technology would have to save around 35% in capex to be meaningful and justify a change, so box-for-box substitution has a justification problem from the start. Then there’s the problem that there really aren’t white-box models made for every possible router configuration, and since network devices are less common than computers, there’s less economy of scale.
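To see why per-box savings don’t automatically become network-level savings, here’s a back-of-the-envelope sketch. The 30% per-box figure and the 60% coverage figure are assumptions for illustration, not operator data.

```python
# Back-of-the-envelope capex arithmetic (illustrative numbers only).
per_box_saving = 0.30      # assumed saving where a white-box substitute exists
replaceable_share = 0.60   # assumed share of router capex with such a substitute
threshold = 0.35           # rough savings level operators say would justify a change

network_saving = per_box_saving * replaceable_share
print(f"Network-level capex saving: {network_saving:.0%}")                     # 18%
print(f"Meets the {threshold:.0%} threshold? {network_saving >= threshold}")   # False
```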
Where DriveNets comes into this picture is that they’ve abstracted the “white box” interior of the black-box router. They assemble arbitrary configurations by linking multiple white boxes into one virtual box, which behaves exactly like a black-box router from the outside. This is one of several things that’s described as “disaggregation”, which loosely means “taking things apart”, and it fits. What DriveNets’ disaggregation does, out of the box (no pun intended), is to reap the maximum savings from the white-box-and-packaged-software approach and extend the option to all possible classes of network devices. That’s enough to make a business case for a network upgrade, and they’re unique in that regard.
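The cluster idea is easy to sketch, though what follows is only an illustration of the concept, not DriveNets’ actual software or interfaces: several white boxes composed behind one external routing identity.

```python
# Sketch of the cluster-as-one-router idea. Illustrative only; this is
# not DriveNets' software, just the shape of the abstraction.
class WhiteBox:
    def __init__(self, name: str, port_count: int):
        self.name = name
        self.port_count = port_count

class VirtualRouter:
    """Looks like a single router from the outside; is many boxes inside."""
    def __init__(self, boxes):
        self.boxes = list(boxes)

    @property
    def port_count(self) -> int:
        # The external (black-box) view exposes only aggregate properties.
        return sum(b.port_count for b in self.boxes)

    def add_box(self, box: WhiteBox) -> None:
        # Capacity grows by adding boxes, not by forklifting a chassis.
        self.boxes.append(box)

cluster = VirtualRouter([WhiteBox("leaf-1", 32), WhiteBox("leaf-2", 32)])
cluster.add_box(WhiteBox("leaf-3", 32))
print(cluster.port_count)  # 96 ports, presented as one "router"
```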
Now (finally, you may think) we come to the point about boxes building networks, and the functionally interdependent relationship between box and network. Black boxes are linked into a network via those external interfaces, which means that the initial definition of the abstraction then defines how the devices are used, the kind of networks they build, and the services those networks directly create. Suppose you don’t want that?
The SDN movement and the NFV movement both, in theory, offered a way of changing that. SDN separated the control and data planes, and implemented the control plane differently (via a central controller). NFV offered the possibility of decomposing devices into virtual functions that would then be recomposed into more flexible virtual devices, and the opportunity to create virtual functions that current routers didn’t even support. Neither has been a dazzling success at replacing routers, but they both demonstrate that there is life beyond router networks, no matter how each black-box router is realized.
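For readers who haven’t looked at SDN in a while, the separation is easy to picture in code. This is a toy sketch of the concept, not OpenFlow or any real controller stack.

```python
# Toy sketch of control/data plane separation (not OpenFlow, not a real stack).
class FlowSwitch:
    """Data plane: holds a flow table and forwards strictly by lookup."""
    def __init__(self, name: str):
        self.name = name
        self.flow_table = {}            # destination prefix -> output port

    def install_flow(self, dest: str, out_port: str) -> None:
        self.flow_table[dest] = out_port

    def forward(self, dest: str) -> str:
        return self.flow_table.get(dest, "drop")

class Controller:
    """Control plane: holds the topology view and pushes decisions down."""
    def __init__(self, switches):
        self.switches = {sw.name: sw for sw in switches}

    def program_path(self, dest: str, hops: dict) -> None:
        # hops maps switch name -> output port, chosen centrally
        for name, port in hops.items():
            self.switches[name].install_flow(dest, port)

s1, s2 = FlowSwitch("s1"), FlowSwitch("s2")
Controller([s1, s2]).program_path("10.1.0.0/16", {"s1": "eth2", "s2": "eth0"})
print(s1.forward("10.1.0.0/16"))   # eth2
```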
What kind of life? We know what an SDN network model looks like, and we can assign black-box element properties to all the components and then hope for an open implementation of each. Anyone’s flow switch should work with anyone’s SDN controller. What this means is that SDN creates a new network model that represents not individual devices/nodes, but the network as a whole. A community of SDN elements is one big black box, made up of interior black boxes. Abstraction within abstraction, wheels within wheels.
That, I think, is the general model of networking we’re heading toward. On the outside, at the network level, what we build has to look like IP. On the inside there’s another collection of abstractions that represent the open assembly models that add up to that exterior black-box abstraction of an IP network.
Things like separating the control plane and the data plane, however you do it and however far apart they are, are the responsibility of the inner-model relationships. If you elect to extend the “services” of the IP network to include the specific user-plane interfaces of 5G or a CDN implementation, they add to the features of the network abstraction and they’re implemented within it. You could also say that those future “services” are another, higher level in the abstraction hierarchy, consuming the “IP Network Abstraction” and its contained abstractions, along with other abstractions representing non-IP-network features, each likewise decomposed.
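The nesting is easier to see in a small sketch than in prose. The layer names below are mine, chosen for illustration; the point is only that each layer is known by what it exposes upward, and its internals are further abstractions decomposed the same way.

```python
# Sketch of "abstraction within abstraction". Layer names are illustrative.
class Abstraction:
    def __init__(self, name, components=None):
        self.name = name
        self.components = components or []

    def describe(self, depth=0):
        print("  " * depth + self.name)
        for c in self.components:
            c.describe(depth + 1)

# An "IP Network" abstraction with its contained abstractions...
ip_network = Abstraction("IP Network", [
    Abstraction("Control plane (central or distributed)"),
    Abstraction("Data plane", [Abstraction("White-box cluster"),
                               Abstraction("Proprietary router")]),
])

# ...consumed by higher-level service abstractions such as 5G or CDN.
Abstraction("5G user-plane service", [ip_network]).describe()
Abstraction("CDN service", [ip_network]).describe()
```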
This gets me to where I like to be, at the top of the model. It frames the future of networks by creating service abstractions, functional abstractions (like “IP Network”), and low-level realizations within them all. It defines openness in terms of open implementations of defined abstractions. It envelops what we can do today with or on IP networks, and what we could evolve to wanting, including edge computing.
If this is a good statement of the future of networking, then where does it leave current vendors, including DriveNets? The model makes everything a decomposition (a “disaggregation” if you like) of a functionally defined glorious whole. Anything we have or do now is a special case of an expanding generalization. That’s what vendors have to be thinking about.
For traditional network vendors, both switch/router and 5G, the dilemma is whether to embrace a lower role in the abstraction hierarchy, to adopt the full model and position their offerings within it, or to ignore the evolving reality and hope for the best. Router vendors who really offer “disaggregated software and hardware” will have to support a hardware abstraction that embraces white boxes. Do they then also have to embrace a hierarchy of abstractions like DriveNets? That’s a major question for those vendors, because they have broad sales targets, a broad product line, and a lot to lose if they fail to support what seems to be evolving. But they may lose differentiation, at least to a degree, if they do.
DriveNets, despite its enviable market position, has its own challenges. The article cites Hillel Kobrinsky, DriveNets co-founder and chief strategy officer: “DriveNets is going to focus completely and exclusively on the service provider space – forget large enterprises, hyperscalers, and cloud builders. DriveNets started at the core for routing, Kobrinsky adds, and has moved into the peering and aggregation layers of the network and has even moved out to the edge and is sometimes used in datacenter interconnects. But DriveNets has no desire to move into other routing use cases and has no interest in doing switching at all. At least for now.”
Self-imposed limitations are good because you can “un-impose” them easily, but bad as long as you let them nail you to a limited role in an expanding market. With 5G interest and budgets exploding, and with edge computing differentiating from cloud computing, “now” in the service provider space may be measured in weeks. Enterprise and cloud provider opportunity is already significant, and any significant and unaddressed opportunity is an invitation for competitive entry. The competitor may then take a broader view of their target market, one that includes what you thought was your turf. And, a true and full adoption of the model hierarchy I’m talking about would be a great way to enter the market.
Great, because there are already initiatives that could easily be evolved into it. The biggest driver for adopting this abstraction-hierarchy model of networking may be projects like Free Range Routing and the ONF’s Stratum. Open RAN developed because operators wanted it and vendors were ready to oblige them, particularly vendors who didn’t have incumbent products in the 5G RAN space. Could Stratum or FRR create an appetite for an agile high-level model for services and networks? If so, could that then drive everyone to either adopt their own broad model, or be supersetted by the rest of the market?
If there’s any issue you want to watch in networking, my recommendation is that you watch this one. If this industry is going to be moved and shaken, this is where it will be done.