The Two Drivers of Network Change, and How Each is Doing

Hosting service features in some form is going to happen.  The timing is fuzzy, and the specifics of the technology to be used are perhaps even fuzzier, but it’s going to happen.  This is a good point in our hosting-features evolution to think a bit about the options available and the things that might help select among them.  We’ll start with the broad approaches and go on from there.

Feature hosting really started with hosted router instances.  I can recall talking with Tier One operators about the value proposition for hosted routers and switches back in 2012, and by 2013 the operators were both interested in and investing in software routers and switches.  These early efforts focused on hosting a router or switch, rather than on hosting features in a broader sense.

Software-defined networking (SDN) came along at about the same time, and it was directed at creating a different model for forwarding packets, something to replace the adaptive routing behavior of current networks with a centrally controlled forwarding process.  You could say that this initiative was a kind of “router/switch as an intent model” because the goal was to present traditional interfaces and services but use a different technology to forward packets, under the covers.
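To make the central-control idea concrete, here’s a minimal, purely illustrative Python sketch of the SDN model: a controller computes forwarding state and pushes match-to-action entries down to switches, rather than each device running its own adaptive routing.  All class, switch, and port names here are invented for illustration; this is not a real OpenFlow controller.

```python
# Illustrative sketch of SDN central control: the controller holds the
# intelligence and programs dumb forwarding elements.  Names are hypothetical.

class Switch:
    """A forwarding element with no local routing intelligence."""
    def __init__(self, name):
        self.name = name
        self.flow_table = {}  # match (dst prefix) -> action (output port)

    def install_flow(self, dst_prefix, out_port):
        self.flow_table[dst_prefix] = out_port

    def forward(self, dst_prefix):
        # Traffic with no matching entry would be punted to the controller.
        return self.flow_table.get(dst_prefix, "punt-to-controller")

class Controller:
    """Central brain: computes paths and programs every switch on the path."""
    def __init__(self, switches):
        self.switches = switches

    def provision_path(self, dst_prefix, hops):
        # hops: list of (switch_name, out_port) pairs along the computed path
        for switch_name, out_port in hops:
            self.switches[switch_name].install_flow(dst_prefix, out_port)

switches = {n: Switch(n) for n in ("edge-a", "core-1", "edge-b")}
ctl = Controller(switches)
ctl.provision_path("10.1.0.0/16",
                   [("edge-a", 2), ("core-1", 5), ("edge-b", 1)])
print(switches["core-1"].forward("10.1.0.0/16"))  # -> 5
```

The “router/switch as an intent model” point is visible here: the switches still present ordinary forwarding behavior at their interfaces, but the mechanism that decides the behavior has moved out of the boxes and into the controller.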

Network Functions Virtualization (NFV), which emerged in late 2012 and matured starting in 2013, took a broader view.  NFV said that services were made up of specific features, and that these features could be hosted on servers and connected through networking to produce something that looked like virtual devices.  The initial focus of NFV, and what most (including me) would say is the continued focus, was on features above Layers 2 and 3 (switching and routing).  Firewalls and other endpoint service features were enveloped in “virtual CPE” (vCPE), and features of mobile networks were converted from appliance form to hosted form.

Software-defined WAN or SD-WAN took yet another approach, this one aimed at abstracting the service away from the connection infrastructure that hosts it.  In many ways, SD-WAN is an evolution of legacy concepts of “tunneling”, “virtual wires”, or (as my late friend Ping Pan said) “pseudowires”.  You build virtual pathways between endpoints over whatever connectivity is available at L2 or L3, and your service presentation is made by the elements (usually software components hosted on or in something) themselves, independent of the underlying transport.
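The overlay idea can be sketched in a few lines.  This is a toy model, with invented names, of the SD-WAN/pseudowire principle: tunnel endpoints encapsulate traffic and carry it over whatever underlay is available, and the service is presented endpoint-to-endpoint rather than by the underlying network.

```python
# Toy illustration of the SD-WAN overlay principle.  All names are
# hypothetical; real products add encryption, path policy, and telemetry.

class Underlay:
    """Any available L2/L3 connectivity: MPLS VPN, broadband Internet, LTE."""
    def __init__(self, name):
        self.name = name

    def carry(self, frame):
        return frame  # the underlay just moves opaque bytes

class OverlayEndpoint:
    """Tunnel endpoint: encapsulates on send, decapsulates on receive."""
    def __init__(self, site, underlays):
        self.site = site
        self.underlays = underlays  # multiple paths is part of the SD-WAN win

    def send(self, payload, peer, path=0):
        # Encapsulate with a tunnel header; pick an underlay by policy.
        frame = {"tunnel_dst": peer.site, "payload": payload}
        return peer.receive(self.underlays[path].carry(frame))

    def receive(self, frame):
        # Service presentation happens here, independent of the transport.
        return frame["payload"]

paths = [Underlay("mpls-vpn"), Underlay("broadband-internet")]
hq = OverlayEndpoint("hq", paths)
branch = OverlayEndpoint("branch", paths)
print(hq.send("app-traffic", branch, path=1))  # -> app-traffic
```

Note that the endpoints neither know nor care which underlay carried the frame; that indifference is exactly the disintermediation of infrastructure discussed below.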

All of these initiatives came along from the feature side, but there have also been hosting-side changes.  Virtualization, cloud computing, and containers represented ways of packing more features onto a physical server, thus improving the economics of feature hosting.  The recent announcements in “white box” operating systems (the P4 forwarding-programming language; AT&T’s dNOS, now a Linux Foundation project called “DANOS”, both names meaning “disaggregated network operating system”; and ONF Stratum) represent ways of exploiting off-the-shelf servers or open white-box switches to host arbitrary functionality, based on legacy features or new forwarding paradigms.

Picking the key options out of this list demands some scenario modeling, which I’ve been working to do.  My model says that there are two primary pathways to the hosted model of networks.  The first is the subduction path, where SD-WAN technology establishes service experiences and effectively disintermediates the underlying infrastructure.  That infrastructure then evolves based on some combination of the other options.  The second is the modernization path, and here we have the P4-virtual-device model being adopted in places where major infrastructure evolution is created by “outside forces”, like 5G or IoT.

The subduction path says that service-layer enhancements are valuable enough to induce network operators, managed service providers (MSPs), and even end users to adopt SD-WAN to reap service-layer benefits independent of operator infrastructure.  The advantage this approach has is the breadth of its support; there are almost two-dozen vendors and many service provider and MSP adopters of the technology.  It doesn’t impact existing infrastructure, so there’s no risk of displacing undepreciated assets, and it provides a level of service-layer visibility that’s lacking in most of today’s services.

The many-cooks asset is also SD-WAN’s biggest liability.  Prospective SD-WAN users say that they have a hard time digging out a rational market vision from the host of competing positioning statements from vendors.  Nobody is singing melody in the SD-WAN song.  On the other hand, legacy vendors tend to softly push against the technology, for the obvious reason that being disintermediated isn’t exactly a positive thing.

The modernization path is almost the opposite.  On this track, the driving force is doing a better job of building infrastructure in places where new builds or greenfields are seen as likely to emerge.  As I said in yesterday’s blog, the easiest place to introduce something new is where something new is needed.  However, the best of the something-news would be something seen as more evolutionary than revolutionary.  AT&T’s commitment to put white-box DANOS-based devices in cell sites is an example; these are much more likely to be open routers than devices based on a new forwarding paradigm (like SDN/OpenFlow).

The evolutionary approach to transformation is the biggest asset for the open-box-OS approach, and the biggest liability it has is the fact that evolution is hitched to another star, the star of whatever’s forcing the new-infrastructure deployment.  How long will 5G take?  How about IoT?  Given that little rational thought seems to be focused on presenting a model for either one that makes a business case for all the stakeholders, the answer could be “very long indeed”, in which case the modernization path to a hosting strategy fails.

What about SDN and NFV as drivers?  Forget it.  Both SDN and NFV face a common problem, and each then has its own unique problems.  None appear to be moving toward a solution.

The big common problem is “concept scope”.  What, exactly, is SDN or NFV?  Every vendor who has any role for software in networking calls their strategy a form of SDN, and every vendor who has a software feature or virtual device running on any platform calls it NFV.  The lack of an accepted singular model makes it hard to postulate a network goal, much less an evolution to achieve it.

SDN’s unique problem is the lack of a validated and scalable central-control strategy.  We can make SDN work in any data center.  We can probably make it work in transport missions.  Can we make it work in a VPN?  Some say yes, and others say no.  Could it scale to the Internet level?  Most say it cannot, and all admit that there are a lot of proof points needed and few clear paths to getting them.  If SDN can’t be everything, we’d need to know exactly what it can be, and what we’d need it to be, to expand its adoption.  We’re not going to get those answers soon, if ever.  Is it a coincidence that the ONF, which promoted OpenFlow-controlled SDN, is advancing Stratum, a general hosting-side solution?

NFV’s big problem is that it’s taken the NFV ISG so long to do anything that there’s nothing uniquely useful left to do.  The limited scope of the ISG efforts makes NFV little more than a crippled form of orchestration or DevOps, in a world where cloud and virtualization initiatives have built something bigger and better, and provided a much larger base of adoption.  There’s nothing in the NFV work that isn’t, shouldn’t, or couldn’t be in cloud work.  The whole focus of NFV seems to be virtual CPE, which the open-box-OS solutions like DANOS or Stratum do much better.  There was a lot of good, insightful, powerful thinking early on, but whatever we can learn from NFV has already been learned, implemented better, and adopted elsewhere.

On the resource-driven side of networking, the modernization path, hosting is now, and will always be, about the cloud.  Cloud initiatives, though, will be supplemented by business-level zero-touch automation, which might evolve from the ONAP work, from ETSI’s ZTA group, from a newly enlightened and invigorated TMF, or perhaps from some new initiative.  But even here, it’s obvious that orchestration in the cloud is growing upward from the resource level.  Give it long enough, and it will produce an application set that makes telecom and network services into nothing but an application.

What about the race for influence supremacy among our possible drivers of change?  The subduction model of change is in my view reactive; it means that no major new services are emerging, and that 5G and IoT don’t generate massive new deployments of devices, just trivial access/RAN changes.  The modernization model is proactive; it means that some major new deployments are happening that can leverage new technology.

Right now, the model says that the odds slightly favor the proactive modernization model because overall economics are good and there’s a viable set of 5G/IoT deployments.  However, the decisive period is 2019-2020, because it’s then that the modernization-driven deployments will have to achieve some mass.  If that doesn’t happen, then SD-WAN will become the vehicle of the future.