Open Source and Networking: Progress?

Open-source technology is making headway in networking despite some very visible failures. In many respects, that’s good news, but the way the success is evolving may point to some challenges for the industry down the line. The biggest challenge it faces is its relationship with traditional network standards and industry group activity, and the second-biggest is the loss of a collective vision to build toward.

In my last blog, I noted that the traditional generational, long-cycle, standards-driven processes that have defined network infrastructure progress have to be set aside in favor of something more cloud-like. I also noted that while open-source is part of the answer to how software-driven transformation should take place, issues are already arising. We can see some of them in the shifting attitude toward open-source networking elements.

In my surveys of enterprises, I find that 100% of them rely “heavily” on open-source software by their own definition. Among network operators, only about two-thirds say that’s true, and of that group, well over 80% say their reliance is in the domain of the CIO, meaning somewhere in their operations software. Neither enterprises nor network operators say they rely heavily on open-source in networking. Less than a fifth of enterprises say they have any significant reliance on open-source networking technology, and even among operators, less than a third say they rely on open-source networking in any sense.

An interesting corollary to that point comes when you ask how much they expect to rely on open-source in networking in the next five years. Enterprises will then say they expect heavy reliance about half the time, but 100% of network operators think they’ll be relying heavily on open source for networking in that five-year future. Why is there such a difference, and what’s going to change that makes the future look brighter for open-source networking?

One thing that I’ve noticed, and that the referenced article suggests, is that open-source networking projects tend to have a fairly narrow scope. That’s also true for cloud projects, but we have a much deeper and broader experience pool with the cloud than with networking. The reason scope is important is that most business problems or service requirements span a fairly broad functional area, which means they’re likely to require multiple open-source elements to fulfill them. That, in turn, means integration, which turns off enterprises more than it does network operators, at least in the planning stages.

One reason operators are more likely to accept a multi-element, integrated solution is that they have specific players offering the integration. All of the major cloud providers, and all the major software players who offer open tools, also offer a “telco” version. The likely reason that’s the case points to another reason operators accept the integrated model of open networking: enterprises see themselves consuming the results of a shift to open-model, open-source networking as services, while operators have to create those services from the open-model elements.

You could make a strong case for the position that the way open-source projects in networking are developing directly promotes operator reliance on “master software” players like IBM/Red Hat, HPE, and VMware. They assemble the pieces and create a sense of ecosystem for what could otherwise be seen as a disorganized set of developments. However, after-the-fact integration doesn’t bring pre-development insight to the picture. Can open-source network software projects create the best strategy without design-level integration of the separate concepts? You can build a boat from car parts, but you could surely build a better one if you knew, before you got the parts, that a boat was the goal.

We could go back to the cloud to rebut this, though. We’ve had a bit of a wild-west cloud development environment for a decade, and it’s worked to provide us with excellent tools, right? Yes, but it’s taken time. Kubernetes, for example, has evolved significantly under market pressure, through the addition of features to the software project itself, and through adjacent projects. Could we have done better had we understood where we were going from the first?

There’s also the question of just how many insights about open-source software we can safely transfer from the cloud to networking. I think one of the major problems we have in open-source networking is that proponents have, from the first, assumed that their needs were different and that they couldn’t simply endorse the cloud approach. Networks were “different”.

Whatever network insights might have been brought to cloud development were diverted into network-specific projects. NFV was originally charged with identifying standards, but its first steps disqualified almost all the work the cloud had done, work that could have been adopted by reference. Instead, new specifications were defined. Networking is a smaller-scope opportunity than the cloud; can we expect a thriving community of developers to build a network-specific strategy from open-source software?

The answer to the latter question may be the critical point here. Telcos have already demonstrated that they not only tolerate but encourage long-cycle technology evolutions. For vendors who hope to profit from change, that means long delays in revenue realization for any efforts they put forth. That discourages participation, but it also narrows project scope, so projects focus on quickly achievable goals that are easier to introduce because they require less in the way of displaced assets or changed practices.

The edge trends that the article cites may be the pathway to hope for both vendors and operators. It’s easier to adopt new technology at the edge because edge technology is deployed per-user, and there doesn’t have to be a systemic shift in strategy to accommodate it. However, there are signs that if the edge has a role in the creation of a service, rather than being a point-of-connection feature, then some of that agility in replacing or introducing edge elements may go away. Is NaaS (network-as-a-service), because it’s personalized, something created only at each user’s edge point? It depends on how it’s framed.

There are also questions about the value of harmonizing on an operating system or other “platform” tool. We have many Linux distros, and Linux has prospered. Would something like SONiC actually have to emerge to create open edge platforms? We already have Linux running on very small white-box elements. The key pieces of an edge platform might lie above the operating system, in the software that controls the operationalization of an edge-centric service. Even there, the requirement is less that we standardize on a specific product than that we standardize on a specific set of APIs.

Which really leads us to what’s now called “intent modeling”. An element in an intent-modeled world is a black box that presents APIs. Any black box that presents the required APIs is conformant, whatever happens to be inside it. Which, of course, means that the thing we should be looking at to promote open-source is, first, a great intent-model API definition set. Lay out how the new element is expected to interact with other cooperating elements, and you give development a specific goal set to meet. You also require a top-down vision of functionality and relationships, which is where everything should be starting in any case.
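To make that concrete, here is a minimal sketch of what one element’s intent-model API might look like, written as a TypeScript interface. Every name and field in it is an illustrative assumption of mine, not drawn from any real project or standard; the point is only that any implementation presenting this contract, open-source or otherwise, would be conformant.

```typescript
// Hypothetical intent-model contract; names and fields are illustrative
// assumptions, not taken from any existing project or standard.

// The intent handed to an element: the outcome wanted, not the method.
interface ElementIntent {
  serviceId: string;      // the service this element participates in
  capacityGbps: number;   // throughput objective
  maxLatencyMs: number;   // latency objective the element must meet
  endpoints: string[];    // cooperating elements it must connect to
}

// The state the black box reports back, again in outcome terms only.
interface ElementState {
  serviceId: string;
  status: "pending" | "active" | "degraded" | "failed";
  deliveredCapacityGbps: number;
  observedLatencyMs: number;
}

// The APIs every conformant element presents. A white-box switch, a hosted
// software instance, or a legacy device behind an adapter are all acceptable
// inside the box, as long as the box honors this contract.
interface IntentElement {
  apply(intent: ElementIntent): Promise<ElementState>;   // set or change the intent
  query(serviceId: string): Promise<ElementState>;       // report current state
  remediate(serviceId: string): Promise<ElementState>;   // attempt self-repair on an objective miss
  release(serviceId: string): Promise<void>;             // tear down the element's role
}
```

The specific operations matter far less than the fact that they’re defined from the top down, in terms of the service objectives the element has to meet.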

Intent modeling could ensure that open-source software meets the most critical test, which is the test of value. What’s the point of software-centric network infrastructure if you constrain it to work exactly like a box network would? But there has to be some framework in which software elements cooperate to form services, as there is for boxes. If intent modeling creates such a framework with its APIs, then it’s done a major service for the industry.
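If something like the hypothetical contract sketched above existed, the cooperative framework would largely fall out of it. The snippet below, again purely illustrative and using the same assumed interfaces, shows a service being activated by driving each black-box element through that common contract, with no visibility into what is inside any of them.

```typescript
// Illustrative only: compose a service from intent-modeled elements by
// driving each one through the same contract, regardless of its internals.
async function activateService(
  serviceId: string,
  elements: IntentElement[],
  intents: ElementIntent[]
): Promise<boolean> {
  // Hand each element its intent; the element decides how to realize it.
  const states = await Promise.all(
    elements.map((element, i) => element.apply(intents[i]))
  );

  // The service is up only if every element reports its objectives as met.
  return states.every((state) => state.status === "active");
}
```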