A Tale of Two Transformations: ONAP versus the Cloud

You know I love to do blogs that contrast old and new, and today what I want to do is look at the Linux Foundation's ONAP Casablanca release and contrast it with Amazon's re:Invent event. This is, of course, a bit of a proxy for contrasting the network operator and cloud giant mindsets, and in several important ways.

ONAP is of course the sprawling initiative that went over to the Linux Foundation, primarily from AT&T’s ECOMP service lifecycle automation initiative. When ECOMP first came along, I was very hopeful that it would resolve what I believed were serious problems in the operationalization of next-gen networking, including of course both SDN and NFV. As the releases came along, I became very concerned that perhaps ONAP was following the bottom-up approach that’s characterized (and fatally wounded) so many telecom initiatives. I had a brief resumption of hope about a year ago after a briefing from some of the key people, so the question is whether Casablanca continues the hopeful path or pushes me back over the “doubts” red line.

The latter, sadly. In fact, I'm now very doubtful that the fundamental problems with ECOMP are even being addressed, and doubtful that any resolution of the issues is still possible. ECOMP is following what I've characterized as the "device networking" vision of the future, not the "cloud-native" vision. The key point for me is the notion of a model-driven architecture. I told the ONAP people that I wasn't interested in future briefings that didn't focus on a model-driven transformation of the ONAP architecture. Casablanca is the second release that has failed to provide that focus, and while I was promised a paper that would explain where things were going, it was never delivered.

If you go back to my blog on "cloud-native" approaches to transformation from about a year ago, you'll find a complete explanation of the problem. To quickly recap, NFV formalized a transformation architecture that proposes to build networks from "devices", but allows those devices to be virtual rather than always physical. In NFV ISG terms, "virtual network functions" or VNFs are the hosted equivalents of "physical network functions" or PNFs. I believe, and so does a senior Vodafone exec quoted in a Fierce Telecom piece, that services should instead be built from feature components designed for the cloud.

The VNF/PNF thing is insidious because it came about in large part to eliminate the need to transform operations/management systems to support NFV. It does that by presuming that the virtual functions of the future look like the PNFs they’re derived from and are managed the same way. That’s a problem because it’s only true if you keep envisioning future services as being derived from device behaviors. If you don’t do that, you need to have a model to describe how features are assembled, and so a service model is key to a cloud-centric vision of networking. If ONAP doesn’t have it, then ONAP doesn’t support cloud-native evolution.
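
To make the distinction concrete, here's a rough Python sketch of what a feature-composed service model might look like. Everything in it is hypothetical; the names don't come from ONAP, TOSCA, or any real schema. The point is simply that the model describes abstract features and their candidate realizations, with nothing in it that presumes a device.

```python
# A purely illustrative sketch of a feature-composed service model.
# None of these names come from ONAP, TOSCA, or any real schema; they
# exist only to show modeling features rather than devices.

from dataclasses import dataclass, field


@dataclass
class Feature:
    """An abstract service feature, independent of how it's hosted."""
    name: str
    # Candidate realizations: a cloud microservice, a serverless
    # function, or (for migration only) a legacy device/VNF.
    realizations: list = field(default_factory=list)


@dataclass
class ServiceModel:
    """A service assembled from features, not from device behaviors."""
    name: str
    features: list

    def decompose(self):
        """Walk the model, picking a realization for each feature."""
        return {f.name: f.realizations[0]
                for f in self.features if f.realizations}


# A "business internet" service built from features; nothing in the
# model says "router" or "firewall appliance".
service = ServiceModel(
    name="business-internet",
    features=[
        Feature("connectivity", ["sdn-path-service"]),
        Feature("security", ["cloud-firewall-microservice"]),
        Feature("visibility", ["telemetry-event-pipeline"]),
    ],
)
print(service.decompose())
```

Contrast that with a VNF-centric model, where the leaf objects would be virtual devices and the management model would be inherited from the boxes they replace.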

The Casablanca release is replete with comments about VNFs and PNFs, and the wording makes it clear that what ONAP is now focused on is providing the operational layer of NFV. Since NFV isn’t cloud-native, ONAP doesn’t need to be. Since Casablanca seems to double down on the old NFV model of virtualization, it’s locked in device networking, which is why I think we can declare the whole ONAP concept lost in a value sense at this point.

Now let's look at the other side, Amazon's re:Invent conference. Obviously, Amazon has a cloud-centric view of things, and re:Invent this year showcased the shifts in technology and market requirements that a cloud visionary would think were most critical. The most interesting thing about the results of the conference is that they map new AWS features to what I think are clear visions of the role of the cloud in the future. In doing that, Amazon doesn't attempt to reshape old IT elements into cloud form, but rather creates new cloud-specific elements. Isn't that what you'd expect cloud-native to be? Let me cite three good examples of Amazon's approach. First, there's IoT. Second, there's the hybrid cloud, and finally there's blockchain.

Amazon has long recognized that IoT is about efficient event processing. For Amazon, that means a fairly broad ecosystem of AWS services, ranging from Lambda for processing to other specialized tools for IoT device management. Network operators, in contrast, jumped immediately on the notion that IoT would make every sensor into a 5G customer, and focused only on the issues of managing all those new (paying) devices.
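
To illustrate what that event-centric model looks like in practice, here's a minimal sketch of a Lambda function of the kind an AWS IoT rule could invoke per sensor message. The handler signature is the standard Python Lambda entry point, but the event fields and the threshold are my own assumptions for illustration, not a real AWS IoT payload schema.

```python
# Minimal sketch of an IoT event processor as an AWS Lambda function.
# The lambda_handler(event, context) signature is the standard Python
# Lambda entry point; the event shape and field names below are
# assumptions for illustration only.

import json

TEMP_ALARM_THRESHOLD = 80.0  # hypothetical alarm threshold


def lambda_handler(event, context):
    """Invoked per sensor message, e.g., by an AWS IoT rule action."""
    device_id = event.get("device_id", "unknown")
    temperature = float(event.get("temperature", 0.0))

    # The "work" here is pure event handling: classify and respond.
    # A real deployment might publish to SNS or write to DynamoDB.
    if temperature > TEMP_ALARM_THRESHOLD:
        return {
            "statusCode": 200,
            "body": json.dumps({"device": device_id, "action": "alarm"}),
        }
    return {
        "statusCode": 200,
        "body": json.dumps({"device": device_id, "action": "none"}),
    }
```

The important thing is what's not here: no device, no persistent element to manage. The function exists only when an event does.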

In the enterprise and hybrid cloud space, Amazon's Firecracker lightweight VM takes aim at the container and Kubernetes spaces (which Amazon's competitors have jumped on), providing an implementation of virtual machines that is, operationally if not quite in resource terms, the equal of container systems. Firecracker is the tool Amazon uses in its Lambda service, which means Amazon is thinking ecosystemically about how IoT applications will evolve.
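
For the curious, here's a sketch of what driving Firecracker looks like, based on its published REST-over-Unix-socket API. The endpoint names follow the Firecracker docs; the socket path, image paths, and the requests-unixsocket helper package are assumptions of this sketch, not a prescribed toolchain.

```python
# Sketch of booting a Firecracker microVM through its REST API, which
# is served over a Unix domain socket. Endpoint names follow the
# published Firecracker API; the socket and image paths below are
# placeholders, and requests_unixsocket is an assumed third-party
# dependency (pip install requests-unixsocket).

import requests_unixsocket

# The socket path is URL-encoded into the http+unix scheme.
BASE = "http+unix://%2Ftmp%2Ffirecracker.socket"

session = requests_unixsocket.Session()

# 1. Point the microVM at a kernel image (placeholder path).
session.put(f"{BASE}/boot-source", json={
    "kernel_image_path": "/path/to/vmlinux",
    "boot_args": "console=ttyS0 reboot=k panic=1 pci=off",
}).raise_for_status()

# 2. Attach a root filesystem (placeholder path).
session.put(f"{BASE}/drives/rootfs", json={
    "drive_id": "rootfs",
    "path_on_host": "/path/to/rootfs.ext4",
    "is_root_device": True,
    "is_read_only": False,
}).raise_for_status()

# 3. Start the instance; boot is container-like in speed.
session.put(f"{BASE}/actions", json={
    "action_type": "InstanceStart",
}).raise_for_status()
```

Three API calls and the microVM is booting; that's the operational profile of a container with the isolation of a VM.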

Another hybrid cloud strategy is Outposts, a two-level approach to creating a bridge from the data center to the cloud. One level of Outposts allows AWS control-plane behavior to manage enterprise deployments and applications, and the other extends VMware control-plane behavior into AWS. Obviously, this is a nice migration approach to hybrid cloud.

In blockchain, Amazon is offering the expected blockchain APIs, but rather than depending on them entirely, it announced the Amazon Quantum Ledger Database, a cryptographically verified ledger tuned to address blockchain applications for the enterprise without requiring that primitive blockchain tools be used to create enterprise applications.
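
Here's a quick sketch of what that looks like from Python, using Amazon's pyqldb driver as documented; the ledger name, table, and document contents are hypothetical.

```python
# Sketch of using Amazon QLDB from Python via the pyqldb driver
# (pip install pyqldb). The driver and execute_lambda pattern follow
# Amazon's documented usage; the ledger name, table, and data here
# are hypothetical. QLDB gives an append-only, cryptographically
# verifiable journal without running a blockchain network.

from pyqldb.driver.qldb_driver import QldbDriver

# Assumes a ledger named "supply-chain" and a Shipments table
# already exist in the account.
driver = QldbDriver(ledger_name="supply-chain")


def record_shipment(txn):
    """Insert a document inside a QLDB transaction; every revision
    is retained in the journal and can be cryptographically verified."""
    txn.execute_statement(
        "INSERT INTO Shipments ?",
        {"shipmentId": "S-1001", "status": "in-transit"},
    )


driver.execute_lambda(record_shipment)
```

The enterprise gets the verifiable-ledger property without standing up, or even understanding, a blockchain network.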

The impressive difference here is that Amazon has a high-level strategy that’s directed at where the cloud is going. They work back from that strategy to define the services that facilitate and exploit the expected cloud transformation, with little regard for current non-cloud practices. AWS development is increasingly different from traditional application development, because Amazon expects applications of the future to be so cloud-specific that current thinking wouldn’t let those applications be built optimally, if at all.

Everyone, Amazon included, backfills new tools/elements into an architecture. Without that approach, you end up with total development chaos, but the approach means that you have to start with something that can grow with the market’s needs. ONAP has demonstrated its willingness to anchor itself in the device past rather than to leap into the feature future. Amazon has demonstrated it builds for the future first, then provides basic migration support second. Amazon is taking the right path to the cloud and transformation, and that means operators are now at an increasing disadvantage to cloud providers and other OTT competitors.

The current ONAP trend and the current cloud trend don’t intersect, and that’s the problem. It’s not that ONAP couldn’t be useful, but that it won’t be as useful as it could be and it serves to cement the kind of thinking that operators simply cannot afford—a view that revolutionary transformation can be achieved not in baby steps, but in no strategic steps at all. As long as we plan as though we’re building networks from devices, we’re stuck with new versions of the agility and efficiency issues we face today. Cloud providers know that, and they’re proving it with their own response to evolution. That gives them an edge that operators won’t be able to overcome.

My advice to the operators is simple, a paraphrase of an old song: "You've got to get out of this place." No amount of gilding of the device lily will make it useful, or even survivable. Abandon this old-think now, while there's still time, or the cloud will roll over you.