How ONAP Could Transform Networking–or Not

A story that starts with the statement that a technology is “entering a new phase” is always interesting.  It’s not always compelling, or even true, but it’s at least interesting.  In the case of Network Functions Virtualization (NFV) the story is interesting, and it’s even true.  It remains to be seen whether it’s compelling.

NFV was launched as a specification group, a form of international standards process.  Within ETSI the NFV ISG has done some good stuff, but it started its life as a many-faceted balancing act and that’s limited its utility.  Think operators versus vendors.  Think “standards” versus “software”.  Think “schedule” versus “scope”.  I’ve blogged on these points in the past, so there’s no need to repeat them now.  What matters is the “new phase”.

Which is open-source software.  The NFV ISG’s work spawned a number of open-source initiatives, but what has generated the new phase is the merger of one of them (the OPEN-O initiative) with AT&T’s ECOMP.  ECOMP combines AT&T’s drive toward an open, vendor-independent infrastructure with insights into SDN, NFV, and even OSS/BSS.  The result is a software platform designed to do most of the service lifecycle management automation that we have needed from the first and were not getting through the “normal” market processes.
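To make “service lifecycle management automation” concrete, here is a minimal, hypothetical sketch of the kind of closed-loop, policy-driven lifecycle a platform like ECOMP/ONAP is meant to automate.  The states, events, and actions below are my illustrative assumptions, not ONAP’s actual models or APIs:

```python
from enum import Enum, auto

class State(Enum):
    DESIGNED = auto()
    INSTANTIATED = auto()
    ACTIVE = auto()
    DEGRADED = auto()
    TERMINATED = auto()

# Hypothetical policy table: (current state, event) -> (action, next state).
# A real platform would drive this from service models and policies,
# not a hard-coded dict; this just shows the closed-loop structure.
POLICY = {
    (State.DESIGNED, "deploy"):         ("allocate resources, start VNFs", State.INSTANTIATED),
    (State.INSTANTIATED, "configured"): ("apply service configuration", State.ACTIVE),
    (State.ACTIVE, "fault"):            ("run healing playbook", State.DEGRADED),
    (State.DEGRADED, "healed"):         ("resume normal monitoring", State.ACTIVE),
    (State.ACTIVE, "retire"):           ("release resources", State.TERMINATED),
}

class ServiceInstance:
    def __init__(self, name: str):
        self.name = name
        self.state = State.DESIGNED

    def handle(self, event: str) -> None:
        """Look up the (state, event) pair and apply the policy-driven action."""
        action, next_state = POLICY.get((self.state, event), (None, self.state))
        if action is None:
            print(f"{self.name}: no policy for {event!r} in {self.state.name}; ignoring")
            return
        print(f"{self.name}: {self.state.name} --{event}--> {next_state.name} ({action})")
        self.state = next_state

if __name__ == "__main__":
    svc = ServiceInstance("vpn-service-01")
    for event in ["deploy", "configured", "fault", "healed", "retire"]:
        svc.handle(event)
```

The point of the sketch is the structure: lifecycle behavior lives in a policy table that software executes in response to events, rather than in manual procedures, which is what “automation” means in this context.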

ECOMP, it’s clear now, is intended to be not only what the acronym suggests (“Enhanced Control, Orchestration, Management & Policy”) but more what the title of the merged initiative (now under the Linux Foundation) suggests: an Open Network Automation Platform.  I like this name because it seizes on the real goals of openness and automation.

I also like AT&T’s focusing of its venture arm on building new stuff on top of ONAP, and AT&T’s confidence in the resulting ecosystem.  In a Light Reading article by Carol Wilson, Igal Elbaz, VP of Ecosystem and Innovation for AT&T Services, says, “We believe [ONAP] is going to be the network operating system for the majority of the network operators out there.  If you build anything on top of our network from a services perspective, obviously you want to build on top of ONAP. But many operators are adopting all of a sudden this solution so you can create a ubiquitous solution that can touch a large number of customers and end users around the world.”

It’s that last notion that catapults NFV into its new age.  Some network operators, through support for open-source initiatives, have defined the glue that holds future network infrastructure and future services together.  Some of this involves virtual functions; more probably will involve some form of software-defined networking.  All of it could totally change the dynamic of both SDN and NFV, by creating an open model for the future network.  If ONAP can create it, of course.

The comment by AT&T’s Elbaz raises the most obvious question, which is that of general adoption of ONAP by network operators.  There is certainly widespread interest in ONAP; of the 54 operators I know to have active transformation projects underway, ONAP is considered a “candidate” for use by 25 of them.  That’s not Elbaz’s majority of operators, but it’s a darn good start.  I think we can assume that ONAP can reach the more-than-half goal, and likely surpass it.  It might well garner two-thirds to three-quarters of operators, in my view.

A related question is vendor support.  Obviously if a majority of network operators adopted ONAP, vendors would fall into line even if they were in tears as they stood there, which many might well be.  However, the only alternative to supporting ONAP would be rolling your own total service automation solution, which vendors have obviously not been lining up to do since NFV came along.  Would they change their tune now, faced with a competing open-source solution that comes from, and is accepted by, the operators themselves?  I don’t think so, and so I think that once ONAP really gets where it needs to be, it kills off not only other vendor options but any competing open strategies as well.

Where does ONAP need to get to, though?  I think the best answer to that is “where Google already is with their Google Cloud Platform”.  The good news for the ONAP folks is that Google has been totally open about GCP details, and has open-sourced much or even most of the key pieces.  The bad news is that GCP is a very different take on the network of the future, a take that, first and foremost, is not what launched ECOMP and ONAP, or even what launched NFV.  It may be very challenging to fit ONAP into a GCP model now, particularly given that GCP’s founding principle is that networks are just the plumbing that can mess up the good stuff.

Google’s network, as I’ve noted before, was designed to connect processes that are in turn composed to create experiences/applications.  Operators today are struggling to make business sense of pushing bits between endpoints in an age of declining revenue per bit.  Google never saw that as a business.  In fact, Google’s approach to “virtual private clouds” is to pull more and more cloud traffic onto Google’s network, to take even the traffic that’s today associated with inter-cloud connectivity off the Internet or an external VPN.  You could make a strong case for the notion that Google views public networking as nothing more than the access on-ramp that gets you to the closest Google peering point.

Google’s relationship with the Internet is something like this: everything Google does rides on Google’s network and is delivered to a peering point near the user.  GCP carries this model to cloud computing services.  Google also devotes considerable effort to managing the address spaces of cloud services and its own features.  User data planes are independent SDN networks, each having its own private (RFC 1918) address space.  Processes can also be associated with public IP addresses if they have to be exposed for interconnection.
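As a quick illustration of that address-space separation, here is a small Python sketch (my own example, not Google’s or ONAP’s code) that classifies addresses against the RFC 1918 private ranges a per-tenant data plane would reuse:

```python
import ipaddress

# The three private ranges defined by RFC 1918; every tenant's SDN data
# plane can reuse these internally because they are never routed publicly.
RFC1918_BLOCKS = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("172.16.0.0/12"),
    ipaddress.ip_network("192.168.0.0/16"),
]

def is_private(addr: str) -> bool:
    """Return True if addr falls inside one of the RFC 1918 blocks."""
    ip = ipaddress.ip_address(addr)
    return any(ip in block for block in RFC1918_BLOCKS)

if __name__ == "__main__":
    # Internal process addresses versus one that needs public exposure.
    for addr in ["10.12.0.7", "172.31.255.1", "8.8.8.8"]:
        kind = ("private (tenant data plane)" if is_private(addr)
                else "public (exposed for interconnection)")
        print(f"{addr}: {kind}")
```

Because RFC 1918 addresses are never routed on the public Internet, every tenant’s data plane can use the same ranges without collision; only the processes deliberately given public addresses are reachable for interconnection.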

Nothing of this sort is visible in the ECOMP/ONAP material, but it’s also true that nothing in the material would preclude following the model.  The big question is whether the bias of the ECOMP/ONAP model or architecture has framed the software in an inefficient way.  Google has planned everything around process hosting.  If process hosting is the way of the future, then NFV has to be done that way too.

The SDN and NFV initiatives started off as traditional standards-like processes.  It’s understandable that these kinds of activities would then not reflect the insight that software architects would bring—and did bring to Google, IMHO.  Now, with ONAP, we have another pathway to SDN and NFV, a path that takes us through software rather than through standards.  That doesn’t guarantee that we’ll see a successful implementation, but it does raise the prospects considerably.

We also have to look ahead to 5G, which as a traditional standards process has made the same sort of bottom-up mistakes those processes have made in the past.  We have a lot of statements about the symbiosis between 5G and SDN and NFV.  I’ve read through the work on 5G so far, and while I can see how SDN or NFV might be used, I don’t see clear service opportunities or infrastructure efficiency benefits linked to any of those applications.  The big question might turn out to be whether AT&T or the others involved in ONAP can create a credible link between their work and real 5G drivers.  Same with IoT.

A software revolution in networking is clearly indicated.  Nothing we know about future network services or infrastructure evolution harkens back to device connections and bit-pushing for differentiation.  We may be on the path for that software revolution—finally.  That would be good news if true, and it would be a disaster for the industry if it’s not.