It’s always smart to get a broad view of the future, and conferences can provide that. Network World published a good article, “Open source, programmability, and as-a-service play a big role in future networks,” covering what happened at the Future:Net 2021 symposium, so we’ll start a two-part view of the future by analyzing what the article reported. As always, I’ll offer my comments on the points, and I’ll follow this blog with the results of my own attempt to get a vision of the future from the stakeholders, including users and suppliers.
One early point the article makes is that networks and networking are going to be progressively more software-centric over time, and that this trend is at least strongly influenced by (if not largely created by) open-source software. That raises an interesting chicken-and-egg question: is software driving this change, or is a desire for more flexibility driving software? That’s our first point of discussion.
The case for a software-centric shift is largely based on the fact that networks have been under cost pressure for at least a decade. Users have traditionally spent a combination of budget dollars and project dollars on networks, with the former sustaining what’s there and the latter bringing new business cases, new technology, and new spending. Over the last decade, the share of project dollars has declined sharply, meaning that networks today are largely in sustaining mode, and that’s all about cost conservation. Open network technology is a smart way to conserve costs, and whether functionality is hosted on white boxes or on servers, that approach demands software. Open source is a popular option because it’s free.
The case for a flexibility-based shift is based on the fact that network technology is changing at the functional level, and that these sorts of changes are more easily accommodated with software, even when the software and hardware are bundled and proprietary. Recently, though, there’s been a push for “disaggregated” network devices, with software and hardware unbundled. In theory, at least, that opens the possibility that a user might switch network software to gain flexibility for existing devices. Once that realization hits, it’s a smaller step to depend on open-model networking with white boxes and open-source software.
I think this one can be called a kind of co-evolution relationship. Both the software-centric and flexibility-centric drivers have been around long enough that users probably can’t tell which of the two came first for them, or separate their impacts on current planning. The net is that the statement is true; we’re increasingly thinking network software, and open-source at that, for network needs.
Almost everything that runs software needs an operating system. One role that the OS plays is abstracting hardware so that different implementations of the same thing (like different disk drives or graphic chips) are harmonized. One interesting thing about switches is that the OSs tend to be network operating systems or NOSs, meaning the network functionality is built in.
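That abstraction role can be sketched in a few lines. This is a hedged illustration, not real NOS code: the class and method names (`SwitchAsic`, `program_route`, the vendor chip classes) are hypothetical stand-ins for the driver layer a NOS defines so that its forwarding logic doesn’t care which switching chip is underneath.

```python
from abc import ABC, abstractmethod

class SwitchAsic(ABC):
    """Hypothetical per-chip driver interface a NOS might define."""
    @abstractmethod
    def program_route(self, prefix: str, next_hop: str) -> str: ...

class VendorAChip(SwitchAsic):
    def program_route(self, prefix: str, next_hop: str) -> str:
        # Chip-specific programming would go here.
        return f"chipA: {prefix} -> {next_hop}"

class VendorBChip(SwitchAsic):
    def program_route(self, prefix: str, next_hop: str) -> str:
        return f"chipB: {prefix} -> {next_hop}"

def install_route(asic: SwitchAsic, prefix: str, next_hop: str) -> str:
    # The NOS layer calls the same abstract API regardless of the chip,
    # which is what makes the software portable across hardware.
    return asic.program_route(prefix, next_hop)
```

Swap in a different chip class and `install_route` is unchanged; that’s the harmonizing the OS (or NOS) provides.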
There are probably two dozen white-box NOSs available, and SONiC is surely the most visible of them. It has two big advantages, in fact. The first is that Microsoft uses SONiC in Azure infrastructure, so the NOS is tested under real-world, high-scale, high-traffic conditions. The second is that Microsoft (who developed it) donated it to the Open Compute Project, and so it’s been jumped on by a large number of vendors, including Broadcom, who makes the switching chips used in the great majority of high-performance white-box designs.
SONiC is designed with the standard NOS goal of abstracting the hardware, and that includes the switching chips used in high-performance white boxes. That makes SONiC switching/routing software portable to almost any popular white-box configuration. However, some users are concerned about support, since SONiC is an OCP project and doesn’t have a specific corporate source to fall back on. There are other NOSs used by enterprises, and right now SONiC doesn’t have even half the market. According to enterprises I talk with, the main ones include Arista (the EOS family), Arrcus (ArcOS), and Pluribus (Netvisor ONE).
The key point for enterprises is integration and support. Most enterprises are best served by finding a white-box supplier who has the products they need, and letting them provide a device and bundled network OS. For operational reasons it’s best to get the same NOS for everything, and that consideration means it would be smart to look for a product source that matches both current and future demand.
The network operator community might prefer another option, like the ONF Stratum NOS, because Stratum includes support for the P4 flow-programming language, which aligns it with more of the newer switch platforms. Broadcom, though, doesn’t support P4 (that might change with their recent antitrust settlement, but there’s no reference to it that I’ve seen), so you can see that there’s already a bit of parochialism in the open-model, white-box world.
The article moves then to what I think is the most interesting topic, which is the relationship between networks and network applications. Today, we operate networks to be largely application-independent, and network requirements are loosely set by aggregating the requirements of the applications. The future, according to the article and the conference, is one where we program application-specific behaviors rather than the network, and the summation of behaviors happens inside the cloud. Networking is a service, a NaaS, in short.
This is, IMHO, another way of talking about the whole SASE concept. A SASE is a gateway to a collection of services, a place where application needs are explicitly recognized and commissioned via the SASE/NaaS gateway. Thus, each application tells the box what it needs, and the box brokers it. This, obviously, means that the box likely doesn’t have much of an idea of what the network collective is, and that means that for this to scale the operations process has to be automated.
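The brokering idea in that paragraph can be sketched simply. Everything here is hypothetical (the `NaasGateway` class and its fields are mine, for illustration): each application declares its own needs to the gateway, and the gateway commissions them individually, with no aggregate picture of the network.

```python
class NaasGateway:
    """Toy SASE/NaaS-style broker: apps declare needs, gateway commissions them."""
    def __init__(self):
        self.commissions = {}  # per-application, not network-wide

    def request(self, app: str, latency_ms: int, bandwidth_mbps: int) -> str:
        # Each application's needs are recognized and brokered separately;
        # the gateway never builds a collective view of the network.
        self.commissions[app] = {"latency_ms": latency_ms,
                                 "bandwidth_mbps": bandwidth_mbps}
        return f"{app}: commissioned"
```

The point of the sketch is the blind spot: because the gateway holds only per-application commissions, somebody (or something automated) still has to reconcile them into collective network behavior, which is why the scaling argument leads straight to operations automation.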
The way that happens, said the conference, is that a goal state is recognized and AI/ML learns to achieve it from whatever state an issue might put the network in. Since the network really doesn’t have a collective SLA (that’s an application property), this has to be able to address both the application-specific SLA and the collective network behaviors. All the more reason to look for automation!
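Stripped of the AI/ML element, the goal-state idea is a reconciliation loop: compare whatever state the network is in against the goal state, and emit the remediation steps that close the gap. This minimal sketch (state represented as plain dictionaries, my invention for illustration) is a deterministic stand-in for what the conference described as a learned behavior.

```python
def reconcile(current: dict, goal: dict) -> list:
    """Return the remediation steps to move from current state to goal state."""
    steps = []
    for key, want in goal.items():
        have = current.get(key)
        if have != want:
            # A real system would act here; we just record the intended change.
            steps.append(f"set {key}: {have} -> {want}")
    return steps
```

Where the AI/ML comes in, per the conference, is learning *how* to reach the goal state from arbitrary starting states, and doing it against both per-application SLAs and collective network behavior rather than a single state dictionary.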
This could lead to hardware-as-a-service for things like data center and LAN switches that cannot be virtualized and delivered through SASE/NaaS in a cloud-like way. Hardware as a service would mean that users would pay by subscription for the hardware ports they need, and the HaaS provider would provision things and sustain operations. Obviously, this sounds a lot like Cisco’s concept, and I didn’t agree with it when Cisco touted it, and I still don’t. I do think that as SASE/NaaS evolves, management services are likely to be offered from the cloud to control and sustain local network and data center switches. I don’t think most enterprises will accept the obvious security concerns this would create, though, and I don’t think ports-as-a-subscription-service is likely either.
Conferences are always a balancing act. On the one hand, they have to address things that users actually care about, but on the other they have to 1) be exciting and 2) favor the positioning and interests of the vendors who drive the process and provide speakers. It’s been my experience that the further talk advances into the future, the more the excitement-and-vendor-interest drivers influence what’s said. That seems to be the case here.
I didn’t attend this conference (those who know me realize I almost never attend any conferences), so I’m happy to get a good summary of the key positions. I think there’s a lot of truth in them, but some of the views don’t match market views I’ve heard. Tomorrow I’m going to talk about what enterprises, vendors, and providers themselves say about key technologies.