The transformation of telecommunications, and of the networks that underlie the industry, is coming to grips with what may seem a semantic barrier: what’s the relationship between “open” and “open-source”? To many this seems a frustratingly vague problem to be circling at such a critical time, something like the classic arguments about how many angels can dance on the head of a pin or how big a UFO is. There’s more to it than that; there’s a lot we need to address if we’re going to meet operator transformation goals.
Operators have, in modern times at least, demanded “open” network models. Such a model allows operators to pick devices on the basis of price and features, because the interfaces between the devices are standardized and the boxes are largely interchangeable. Nobody can lock in a buyer by selling one box that won’t work properly unless all the boxes come from the same source.
I’ve been involved in international standards for a couple of decades, and it’s my view that networking standards have focused on the goal of openness primarily as a defense against lock-in. It’s been a successful defense, too; we have many multi-vendor networks today, and openness in this sense is a mandate of virtually every RFI and RFP issued by operators.
The problems with “open” arise when you move from the hardware domain to the software domain. In software, the equivalent of an “interface” is the application programming interface, or API. In the software world, you can build software by defining an API template (which, in Java, is called an “interface”) and then define multiple implementations of that same interface (Java does this by saying a class “implements” an interface). On the surface this looks pretty much like the hardware domain, but it’s not as similar as it appears.
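A minimal Java sketch of that pattern (the names here are illustrative, not drawn from any real project):

```java
// The "interface" is the API template: it declares what can be called,
// but says nothing about how an implementation behaves.
interface Forwarder {
    void forward(byte[] packet);
}

// A class "implements" the interface, supplying actual behavior.
class SimpleForwarder implements Forwarder {
    @Override
    public void forward(byte[] packet) {
        // Look up a route and send the packet onward (details omitted).
        System.out.println("forwarding " + packet.length + " bytes");
    }
}
```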
The big difference is what could be called mutability. A device is an engineered symphony of hardware, firmware, and perhaps software that might have taken years to perfect and that can be changed only with great difficulty, perhaps only by outright replacement. A software element is literally designed to allow facile changes. Static versus dynamic.
One impact of this is elastic functionality. A router, by definition, routes; you connect to a router for that purpose, right? In software, a given “interface” defines what in my time was jokingly called the “gozintas” and “gozoutas”, meaning the inputs and outputs. The function performed is implicit, not explicit, just as with routers. But if it’s easy to tweak the function, then it’s easy to create two different “implementations” of the same “interface” that don’t do the same thing at all. Defining the interface, then, doesn’t make the implementations “open”.
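To make that concrete, here’s a hedged Java sketch (again with invented names): two classes that “implement” the same interface, are fully interchangeable to the compiler and to callers, and yet behave nothing alike.

```java
// The same illustrative interface as in the sketch above.
interface Forwarder {
    void forward(byte[] packet);
}

// Does what a caller would expect: routes the packet.
class RoutingForwarder implements Forwarder {
    @Override
    public void forward(byte[] packet) {
        // Consult a routing table and send the packet on (omitted).
    }
}

// Also a perfectly valid "implementation" of the same interface: it
// quietly discards every packet. Same gozintas and gozoutas, opposite
// function, and nothing in the interface forbids it.
class BlackHoleForwarder implements Forwarder {
    @Override
    public void forward(byte[] packet) {
        // Drop the packet.
    }
}
```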
On the positive side, mutability means that even where different “interfaces” to the same function are defined, it’s not difficult to convert one into another. You simply write a little stub of code that takes the new request and formats it as the old function expects, then invokes it. Imagine converting hardware interfaces that way! What this means is that a lot of the things we had to focus on in standardizing hardware are unimportant in software standards, and some of the things we take for granted in hardware are critical in software. We have to do software architecture to establish open software projects, not functional architecture or “interfaces”.
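In Java terms, that “little stub” is just an adapter. A hedged sketch, assuming a legacy component with its own calling convention:

```java
import java.util.Arrays;

// The "old" function: existing, hard-to-change code with its own
// calling convention, here expecting header and payload separately.
class LegacyForwarder {
    void send(byte[] header, byte[] payload) {
        // ... existing logic ...
    }
}

// The "new" interface callers want to program against.
interface Forwarder {
    void forward(byte[] packet);
}

// The stub: takes the new-style request, reformats it the way the old
// function expects, then invokes it.
class ForwarderAdapter implements Forwarder {
    private static final int HEADER_LEN = 20; // assumed fixed-size header

    private final LegacyForwarder legacy = new LegacyForwarder();

    @Override
    public void forward(byte[] packet) {
        byte[] header = Arrays.copyOfRange(packet, 0, HEADER_LEN);
        byte[] payload = Arrays.copyOfRange(packet, HEADER_LEN, packet.length);
        legacy.send(header, payload);
    }
}
```

A few lines of glue code; the hardware equivalent would be respinning a line card.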
IMHO, both SDN and NFV have suffered delays because what were explicitly software projects were run as though they were hardware projects. Open-source initiatives like OpenDaylight and OPNFV were, I think, kicked off to try to fix that problem, which is how open-source got into the mix.
Open-source is a process, not an architecture. The software is authored and evolved through community action, with the requirement that (with some dodges and exceptions) the software be free and the source code made available. There are many examples of strong open-source projects, and the concept goes back a very long way.
You could argue that the genesis of open-source was in the university and research communities, the same people who launched the Internet. The big early winner in the space was the UNIX operating system, popularized by UC Berkeley in what became known as “BSD”, for the Berkeley Software Distribution. What made UNIX popular was that at the time it was emerging (the ‘80s), computer vendors were recognizing that you had to have a large software base to win in the market, and that only the largest vendors (actually, only IBM) had one large enough. How could the rest compete, short of anti-trust intervention? Adopt UNIX.
The defining property of open-source is that the source code is available, not just the executable. However, very few users of open-source software opt even to receive the source code, and fewer still do anything with it. The importance of the openness is that nobody can hijack the code by distributing only executables. Even so, there have been many complaints through the years that vendors who can afford to put a lot of developers on an open-source project can effectively control its direction.
For network operators, open-source projects can solve a serious problem: that old bugaboo of anti-trust intervention. Operators cannot simply form a group and work together to solve technical problems. I was involved in an operator-dominated group, and one of the big Tier Ones came in one day and said their lawyers had told them they had to pull out of the group, wrap it into a larger industry initiative that wasn’t operator-dominated, or face anti-trust action. The problem, of course, is that nobody buys carrier routers except carriers, so how can you preserve openness if you have to join forces with the very people who are trying to create closed systems for their own proprietary benefit?
An open-source project is a way to build network technology in collaboration with competitors, without facing anti-trust charges. However, it poses its own risks, and we can see those risks developing already.
Perhaps the day-one risk to creating openness with open-source is the risk that openness wasn’t even a goal. Not all software is designed to support open substitution of components, or free connection with other implementations. Even today, we lack a complete architecture for federation of implementations in SDN and NFV for open-source implementations to draw on. Anyone who’s looked at the various open-source office suites knows that pulling a piece out of one and sticking it into another won’t likely work at all. The truth is that for software to support open interchange and connection, you have to define the points where you expect that to happen up front.
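Java offers a standard illustration of what “defining the point up front” means: the java.util.ServiceLoader mechanism, where a published interface is the declared substitution point and implementations are discovered rather than hard-wired. A minimal sketch, with an invented RouteCalculator interface standing in for whatever function is meant to be swappable:

```java
import java.util.ServiceLoader;

// The published substitution point, defined before any implementation
// exists. Anyone can supply a conforming provider.
public interface RouteCalculator {
    String nextHop(String destination);
}

// Consuming code discovers implementations at runtime (providers are
// registered via META-INF/services entries) instead of compiling one
// in, so conforming components can be swapped without touching this.
class RouteCalculatorLookup {
    static RouteCalculator first() {
        for (RouteCalculator rc : ServiceLoader.load(RouteCalculator.class)) {
            return rc; // take the first provider found on the classpath
        }
        throw new IllegalStateException("no RouteCalculator provider found");
    }
}
```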
Then there’s the chef-and-broth issue; too many cooks, in other words. Let’s start with an open-source “software router”, a virtual router or “VR”. The VR would consist of some number of components, each defined with an “interface” and an “implementation” of that interface. A bunch of programmers from different companies would work cooperatively to build this. Suppose they disagree? Suppose the requirements for a VR aren’t exactly the same among all the collaborators? Suppose some of the programmers work for vendors who’d like to see the whole project fail, or get mired in never-ending strife?
Well, it’s open-source, so the operators could each go their own way, right? Yes, but that creates parallel implementations (“forks”) that, if not managed, will end up breaking any common ground between them. We’d then have every operator building their own virtual devices. Even with coordination, how far can the forks diverge before there’s not much left that’s common among them? Forking is important to open-source, though, because it demonstrates that somebody with a good idea can create an alternative version of something that, if it’s broadly accepted, can become mainstream. We see a fork evolving today in the OpenStack space, with Mirantis creating a fork of OpenStack that uses Kubernetes for lifecycle orchestration.
Operators have expressed concern over one byproduct of community development and forking: potentially endless change cycles, version problems, and instability. I’ve run into OpenStack dependency tangles myself: you need specific middleware to run a specific OpenStack version, which you need because of a specific feature, and then you find that the middleware version you need isn’t supported in the OS distro you need. Central office switches used to have one version change every six months, and new features were advertised five or six versions in advance. The casual release populism of open-source is a pretty sharp contrast.
The next issue is the “source of inspiration.” We’ve derived the goals and broad architectures for things like SDN and NFV from standards, and we already know these were developed from the bottom up, focusing on interfaces and not really on architecture. No matter how many open-source projects we have, they can shed the limitations of their inspirational standards only if they deliberately break from those standards.
The third issue is value. Open-source is shared. No for-profit company is likely to take a highly valuable, patentable notion and contribute it freely. If an entire VR is open-source, that would foreclose using any of those highly valuable notions. Do operators want that? Probably not. And if there are proprietary interfaces in the network today, can we link to them with open-source without violating license terms? Probably not, since the code would reveal the interface specs.
The bottom line is that you cannot guarantee an effective, commercially useful open system with open-source alone. You can have an open-source project that’s started from the wrong place, is run the wrong way, and is never going to accomplish anything at all. You can also have a great one. Good open-source projects probably have a better chance of also being “open”, but only if openness in the telco sense was a goal. If it wasn’t, then even open-source licensing can inhibit the mingling of proprietary elements, and that could impact utility at least during the transformation phase of networking, and perhaps forever.
“Open” versus “open-source” is the wrong way to look at things because this isn’t an either/or. Open-source is not the total answer, nor is openness. In a complex software system you need both. Based on my own experience, you have to start an “open system” with an open design, an architecture for the software that creates useful points where component attachment and interchange are assured. Whether the implementation of that design is done with open-source or not, you’ll end up with an open system.