Yesterday the “New ONF” formed by the union of the old ONF and ON.Labs announced its new mission and its roadmap to achieving it. I’m a guy who has worked in standards for well over two decades, and the experience has made me perhaps more cynical about standards than I am about most things (which, most of my readers will agree, is pretty darn cynical). The new ONF actually excites me, because it has stated a set of goals and some key points that are spot on. It also frightens me a little, because there’s still one thing the new group is doing that has been a major cause of failure for all the other initiatives in the service provider transformation space.
The “new ONF” is the union of the Open Networking Foundation and ON.Labs, the organization that created the ONOS operating system and CORD, both of which I’ve talked about in the past. I blogged about the importance of CORD early on (see THIS blog) and again when Comcast jumped into the consortium, HERE, and everyone probably knows that the ONF is the parent of OpenFlow SDN. The new ONF seems more focused on the ON.Labs elements, from which it hopes to create a way to use software-based or software-defined elements to build market-responsive networks and network services.
Networks of old were a collection of boxes joined by very standardized hardware interfaces. Then, enter virtualization, software definition, the cloud, and all the other good stuff that’s come along in the last decade. Each of these new initiatives has had its champions among vendors, buyers, and standardization processes. Each of these initiatives had a very logical mission, and a logical desire to contain scope to permit timely progress. Result? Nothing connects in this wonderful new age.
This is a perhaps-flowery restatement of the opening positioning that the ONF offers for its new concept of the Open Innovation Pipeline. The goal of the process is the notion of the “Software-Defined Standard”, something that by itself brings tears to the eyes of an old software architecture guy like me. We’ve gone way too far down the path of supposed software-defined stuff with little apparent concern for software design principles. The ONF says it wants to fix that, which has me excited.
Digging into the details, what the ONF seems to be proposing is the creation of an open ecosystem that starts (at least in many cases) with the ONOS operating system, on which is added the XOS orchestration layer (a kind of service middleware). This is used to build the variety of CORD models (R-CORD, M-CORD, etc.), and it can also be used to build new models. If this approach were followed, it would create a standardized open-source platform that builds from the bottom to the top, and that provides for easy customization and integration.
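To make the layering concrete, here’s a minimal Python sketch of the idea: a common network-OS base, a service middleware on top of it, and service profiles assembled from both. All of the class names, service names, and capability strings here are my own illustrative assumptions, not the actual ONOS, XOS, or CORD APIs.

```python
class NetworkOS:
    """Common control-plane base, analogous to ONOS (names hypothetical)."""
    def __init__(self):
        self.capabilities = {"topology", "flow-programming"}

class ServiceMiddleware:
    """Service-composition layer, analogous to XOS (names hypothetical)."""
    def __init__(self, nos):
        self.nos = nos
        self.services = {}

    def register(self, name, needs):
        # A service can only be composed if the shared base supplies
        # everything it needs -- this check is the integration win.
        missing = needs - self.nos.capabilities
        if missing:
            raise ValueError(f"platform lacks {missing}")
        self.services[name] = needs

def build_profile(profile_name, service_defs):
    """Assemble a CORD-style profile on one shared platform."""
    platform = ServiceMiddleware(NetworkOS())
    for name, needs in service_defs.items():
        platform.register(name, needs)
    return platform

# An "R-CORD-like" residential profile, built from the same base any
# other profile (M-CORD, a new model) would use.
rcord = build_profile("R-CORD", {
    "vOLT": {"flow-programming"},
    "vSG":  {"topology", "flow-programming"},
})
print(sorted(rcord.services))
```

The point of the sketch is the single `NetworkOS` instance underneath every profile: customization happens in the profile definitions, while integration is inherited from the common base rather than re-solved per service.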
But it’s at the top of the architectural heap that I find what frightens me. The architectural slide in all of this shows the open structure with a programmable forwarding plane at the bottom, a collection of Global Orchestrators at the top, and the new ONF focus as a box in between. This vision is of course device-centric, and in the real world you’d be assembling conforming boxes and presumably other boxes, virtual or real, to create networks and services. I don’t have a problem with the idea that there’s a forwarding plane at the bottom, because even service elements that are outside the service data plane probably have to forward something. I’m a bit concerned about that Global Orchestrator thing at the top.
I’ve been a part of a lot of standards processes over the decades, and it seems like all of them tend to show a diagram that has some important function sitting god-like at the top, but declared safely out of scope. That’s what the ONF has done with those Global Orchestrators. The problem with those past bodies and their diagrams is that all of them failed their critical mission of making the business case, and they failed because they didn’t include elements critical to that business case in their scope of work. So the fact that the ONF seems to be doing the same thing is discouraging.
The ONF is right in saying that there’s an integration problem with the new-generation virtualization-based services. They are also right in saying that a common implementation platform, on which the elements of those new services are built, would solve that problem. However, the past says that’s not enough, for two reasons.
First, everything is not built on the ONF’s architecture. Even if we presumed that everything new was built that way, you would still have to absorb all the legacy hardware and accommodate the open source initiatives for other virtualized-element models, none of which are based on the ONF’s elements. We have learned this bitter truth in NFV in particular: you can’t exclude the thing you are evolving from (legacy devices in particular) from your model of a future service, unless you never want to get there from here. You could accommodate the legacy and “foreign” stuff in the ONF approach, but the details aren’t there yet.
Second, there’s the issue of the business case. I can have a wonderful architecture for building standardized car parts, but it won’t serve me a whit if nobody wants to buy a car. I’ve blogged a lot about the business case behind a new virtual service element—SDN, NFV, or whatever you like. Most of that business case is going to come from the automation of the full service lifecycle, and most of that lifecycle and the processes that automate it live in that Global Orchestrators element that’s sitting out of scope on top of the ONF target functionality.
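To illustrate what “automation of the full service lifecycle” means in practice, here’s a small sketch of a lifecycle as a state machine. The states and events are my own hypothetical choices, not any standard’s model; the business-case point is that every one of these transitions must be handled by software rather than manual operations labor.

```python
# Hypothetical lifecycle transitions: (current_state, event) -> next_state.
LIFECYCLE = {
    ("ordered",   "activate"): "deploying",
    ("deploying", "up"):       "operating",
    ("operating", "fault"):    "repairing",
    ("repairing", "up"):       "operating",
    ("operating", "cease"):    "retired",
}

def step(state, event):
    """Advance a service through its lifecycle; automating these
    transitions is where most of the opex savings would come from."""
    try:
        return LIFECYCLE[(state, event)]
    except KeyError:
        raise ValueError(f"no transition from {state} on {event}")

# Walk a service through order, deployment, a fault, and recovery.
state = "ordered"
for event in ["activate", "up", "fault", "up"]:
    state = step(state, event)
print(state)
```

The argument in the text is that this table, and the processes that execute it, live in the Global Orchestrator box that sits out of scope at the top of the ONF picture.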
All of this could be solved in a minute with the inclusion of a model-based service description of the type I’ve been blogging about. I presented just this notion to the ONF, in fact, back in about 2014. A model like that could organize all of the pieces of ONF functionality, and it could also organize how they relate to the rest of the service processes, whether they’re NFV processes, OSS/BSS processes, or cloud computing. Yes, this capability would live in a functional Global Orchestrator, but none are available today; we know that because nobody has successfully made the business case with one, nor integrated all the service lifecycle processes.
There is a modeling aspect to the XOS layer, and it’s got all the essential pieces, as I said in my first blog on it (see above). However, in execution, XOS seems to have changed its notion of “service” from a high-level one to something more like the TMF’s “Resource-Facing Services” or my ExperiaSphere “Behaviors”. These describe what a network or infrastructure can do, rather than a functional assembly that, when decomposed, ends up at those infrastructure capabilities. That seems to be what created the Global Orchestrator notion; the lost functionality is pushed up into the out-of-scope part. That’s what frightens me, because it’s the mistake that so many others have made.
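A sketch of the distinction: in a model-based service description, a customer-facing functional service decomposes hierarchically until it bottoms out in resource-facing “Behaviors”. The classes and service names below are hypothetical illustrations of that idea, not XOS or ExperiaSphere code.

```python
class Behavior:
    """Leaf node: a capability the infrastructure exposes directly
    (akin to a TMF Resource-Facing Service)."""
    def __init__(self, name):
        self.name = name

    def decompose(self):
        return [self.name]

class FunctionalService:
    """Interior node: a customer-facing abstraction built from parts.
    This is the layer that, per the text, gets pushed out of scope."""
    def __init__(self, name, parts):
        self.name = name
        self.parts = parts

    def decompose(self):
        leaves = []
        for part in self.parts:
            leaves.extend(part.decompose())
        return leaves

# A hypothetical "Business Internet" service: orchestration is the job
# of walking this tree, not just exposing the leaves.
service = FunctionalService("BusinessInternet", [
    FunctionalService("Access", [Behavior("GPON-Port")]),
    Behavior("IP-VPN"),
])
print(service.decompose())
```

If a model only captures the leaves, the interior of the tree (the functional decomposition) has to live somewhere, and that “somewhere” becomes the out-of-scope Global Orchestrator.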
I’m not knocking the new ONF here, because I have high hopes for it. They, at least, seem to grasp the simple truth that software-defined stuff demands a definition of stuff in software terms. I also think that, at a time when useful standards to support integration in SDN and NFV seem to be going nowhere, the notion of a common platform is unusually attractive. Is it the best approach? No, but it’s a workable one, which says a lot at this point.
There have been a lot of recent re-launches of standards bodies, industry groups, and activities, brought about because the original efforts generated interest, hype, and media extravagance, but not much in the way of deployment or transformation. The new ONF now joins the group of industry mulligans, and the question is whether it will jump off from what’s unquestionably a superior foundation and do the right thing, or provide us with another example of how to miss the obvious. I’ll offer my unbiased view on that as the details of the initiative develop.