Have the Cable Companies Unlocked the Secret of Function Virtualization?

Could the cable industry be trying to do function virtualization the right way?  Light Reading offers some insight on that issue HERE, and references an Altran open-source project in support of the industry’s efforts.  One thing that strikes me immediately on reviewing some of the detail is that CableLabs is apparently starting off with something closer to the current cloud state of the art, and that might mean the effort will end in something that’s actually useful.

The cable providers have many of the same challenges as the telcos, but they also have an important advantage: the CATV plant is well-suited for broadband delivery, even delivery at fairly high speeds.  The plant is shared, to be sure, but over the years cablecos have been segmenting their networks to limit the number of customers sharing each span, especially in areas where cable can deliver services to businesses.  In contrast to the telcos, whose DSL plants are largely inadequate for commercially competitive broadband, the cable guys are sitting pretty.

Not so in the rest of the infrastructure.  In fact, cable companies have complained about all the same kinds of issues with proprietary hardware, vendor lock-in, rising equipment prices and opex, and declining profit per bit.  Unlike the telcos, whose ability to directly collaborate on technology is limited by regulation (in the US, Bellcore had to tread lightly, had correspondingly less influence, and was eventually acquired by Ericsson, a vendor), cablecos have used their own technology group, CableLabs, to set standards and plan technology evolution.

The CableLabs initiative, in which Altran plays a key role, is an open-source project code-named “Adrenaline”, and it’s an interesting mixture of hybrid fiber/coax (HFC) evolution, edge computing, and function virtualization.  According to Belal Hamzeh, CTO and SVP at CableLabs, “the virtualization infrastructure extends from the regional office all the way down to the modem in the household.”

Adrenaline is explicitly about hosting virtual service functions, but unlike the telco NFV initiative that was initially directed at the cloud and then morphed more into universal CPE, Adrenaline is about distributing everywhere, as the quote from Hamzeh suggests.  CableLabs feels that things like FPGAs and GPUs, which weren’t in the initial NFV stuff and are still not really accommodated, are fundamental to cableco needs for function hosting.  So are containers and Kubernetes, which is why I think this is a more cloud-centric shot at function hosting.

It would be difficult to create a homogeneous hardware/hosting model from the home to the cable regional office, but that doesn’t appear to be the goal.  Instead, Adrenaline seems to be providing something like a combination of a PaaS platform and P4-driver-ish features, which would make at least some functions transportable to different locations because the hardware would present a common set of APIs.
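To make that concrete, here’s a minimal, purely hypothetical sketch of what a “common set of APIs” could look like: a function written against an abstract accelerator interface, with interchangeable backends for an FPGA-equipped node and a plain-CPU node.  None of these class or method names come from Adrenaline or CableLabs; they only illustrate why a common API layer makes a function transportable.

```python
# Hypothetical sketch of a common hardware-abstraction API; the names here
# are illustrative only and are not taken from Adrenaline or CableLabs code.
from abc import ABC, abstractmethod

class PacketFilter(ABC):
    """Common API a hosted function codes against, wherever it runs."""
    @abstractmethod
    def load_rules(self, rules: list[str]) -> None: ...
    @abstractmethod
    def process(self, packet: bytes) -> bool: ...

class CpuPacketFilter(PacketFilter):
    """Software fallback for nodes (e.g., a cable modem) with no accelerator."""
    def load_rules(self, rules):
        self.rules = [r.encode() for r in rules]
    def process(self, packet):
        return not any(r in packet for r in self.rules)

class FpgaPacketFilter(PacketFilter):
    """Placeholder backend that would push the same rules into an FPGA pipeline."""
    def load_rules(self, rules):
        # A real driver would compile/load a match-action pipeline here.
        self.rules = rules
    def process(self, packet):
        # Real matching would happen in hardware; this is a stand-in.
        return True

def run_function(backend: PacketFilter) -> None:
    # The hosted function itself is identical regardless of where it lands.
    backend.load_rules(["blocked-host"])
    print(backend.process(b"payload to blocked-host"))

run_function(CpuPacketFilter())   # e.g., deployed on the modem
run_function(FpgaPacketFilter())  # e.g., deployed on an accelerated edge node
```

The point is simply that the function’s own code doesn’t change when it moves; only the backend bound underneath it does.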

This approach adds significantly to the agility of the infrastructure.  Any cloud-hosted function is dynamic and horizontally scalable if it’s written properly.  With Adrenaline, functions can also scale in the “vertical” direction, meaning they can be moved out of deeper hosting and outward toward the user, even to the cable modem at the end of the connection.  They could also be moved back, of course, if something required hardware features not available at a given point in the connection path.
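As a rough illustration of that placement logic (entirely my own sketch, not anything published by CableLabs), imagine each hosting tier advertising its capabilities and a scheduler picking the outermost tier that still satisfies the function’s hardware needs, pulling the function back inward only when it must:

```python
# Hypothetical placement sketch: the tier names and capability flags are
# invented for illustration, not part of any published Adrenaline material.
TIERS = [  # ordered from closest-to-user outward to deepest hosting
    {"name": "cable-modem",     "features": {"cpu"}},
    {"name": "node-edge",       "features": {"cpu", "fpga"}},
    {"name": "regional-office", "features": {"cpu", "fpga", "gpu"}},
]

def place(required_features: set[str]) -> str:
    """Pick the outermost tier that satisfies the function's hardware needs."""
    for tier in TIERS:
        if required_features <= tier["features"]:
            return tier["name"]
    raise RuntimeError("no tier can host this function")

print(place({"cpu"}))          # -> cable-modem: push it all the way out
print(place({"cpu", "fpga"}))  # -> node-edge: pulled back one hop
print(place({"cpu", "gpu"}))   # -> regional-office: needs deep hosting
```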

CableLabs’ contribution to Adrenaline seems to be focused on SNAPS-Kubernetes support for using Kubernetes to deploy applications that use FPGAs and GPUs, something that (as I’ve already noted) CableLabs deems critical for virtualization of the cable infrastructure and its functions.  It is critical, IMHO, because unlike NFV, Adrenaline is explicitly based on a kind of hierarchy of hosting, from district to edge to home and everything in between.  That means it really is an edge architecture as well as a cable-cloud and universal-CPE architecture, and since some of these missions are certain to involve specialized semiconductor support, the ability to orchestrate stuff for specialized chips is critical indeed.
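I haven’t seen full SNAPS-Kubernetes documentation, so I won’t claim this is how it works internally, but the general Kubernetes mechanism this kind of tooling builds on is the device plugin: specialized chips show up as extended resources a pod can request.  A minimal sketch with the standard Python client follows; the resource name nvidia.com/gpu is the common GPU example, FPGA resource names vary by vendor plugin, and the image name is hypothetical.

```python
# Minimal sketch of requesting an accelerator via Kubernetes' device-plugin
# mechanism, using the standard Python client. Assumes a cluster whose nodes
# run a device plugin exposing the "nvidia.com/gpu" extended resource.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a pod

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="accelerated-vnf"),
    spec=client.V1PodSpec(
        restart_policy="Never",
        containers=[
            client.V1Container(
                name="packet-engine",
                image="example.com/packet-engine:latest",  # hypothetical image
                resources=client.V1ResourceRequirements(
                    # The scheduler will only place this pod on a node that
                    # actually advertises the accelerator resource.
                    limits={"nvidia.com/gpu": "1"}
                ),
            )
        ],
    ),
)

client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```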

The Kubernetes reference here is also critical, because Adrenaline is designed to be built on containers and orchestrated by Kubernetes, the near-universal container orchestrator.  This combination simplifies CableLabs’ function-hosting vision compared with that of the NFV ISG, which was virtual-machine-focused and has only recently been trying to accommodate the mainstream cloud, which is all about containers and Kubernetes.  I think it’s clear that Adrenaline is a software architect’s vision of function hosting, where NFV was a hardware-deployer’s vision.
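The practical difference shows up in day-to-day operations: with containers, deploying and scaling a function is a declarative request to Kubernetes rather than a VM-orchestration workflow.  Another hedged sketch, again with the standard Python client and invented names:

```python
# Sketch of declarative deploy-and-scale for a containerized function; the
# deployment name and image are hypothetical, not taken from Adrenaline.
from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="edge-function"),
    spec=client.V1DeploymentSpec(
        replicas=2,
        selector=client.V1LabelSelector(match_labels={"app": "edge-function"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "edge-function"}),
            spec=client.V1PodSpec(containers=[
                client.V1Container(name="fn", image="example.com/edge-fn:latest")
            ]),
        ),
    ),
)
apps.create_namespaced_deployment(namespace="default", body=deployment)

# Horizontal scaling is a one-line declarative change, not a new VM rollout.
apps.patch_namespaced_deployment_scale(
    name="edge-function", namespace="default", body={"spec": {"replicas": 5}}
)
```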

White boxes and open-model networking are also central to Adrenaline.  In fact, it’s explicitly expected that white-box technology will be used for everything that can accommodate the mission, which eventually is likely to be everything.  It may be that the special requirements of cable network elements, and the fact that cable companies have two (coax and fiber) or three (add cellular) delivery frameworks to contend with, are what created such interest in custom chips, and from there led to chip-centric (or at least chip-accommodating) deployment.

Why does this seem to be working better than the telco version?  I think there are three reasons.

First, Adrenaline has the advantage of seeing where NFV went wrong.  There’s nothing like watching the climber ahead of you fall into a crevasse to encourage you to take a slightly different route.  Cable players were in fact interested in NFV at first, and I think they lost interest when progress seemed too slow.

The second reason is that the cable business is rooted in television and the telco business is rooted in voice calls.  A set-top box is a kind of virtual channel storefront, delivering an experience that’s not unlike cloud portals to applications.  Cable’s TV delivery conditioned its coax infrastructure for broadband (remember @Home?) and that also helped cablecos get a preview of every carrier’s future.

The third reason is that CableLabs is the cableco standards organization, whereas the telco world has numerous standards bodies, each with representation from the carriers’ professional standards teams.  You can elevate a single team to software-centricity and cloud-centricity more easily than you can elevate all those telco standards geeks.

There is every reason to wonder whether the telco community would swallow its pride and adopt a cable-industry standard for function hosting.  At the fundamentals level, I think there’s no question that Adrenaline is a better approach, but I also think it would have to address the same questions of practicality (things like management and onboarding) that NFV foundered on.  At the very least, though, it might inject a new variable into the current battle for the telco cloud.

I’ve recounted initiatives from Google and Microsoft to grab a piece of the telco cloud, to host telco functions and services rather than having the telcos build out infrastructure on their own.  VMware and IBM/Red Hat also have designs on the space, hoping to supply software that will run either in the cloud or on telco infrastructure.  Adrenaline could actually be a good framework to boost either plan, and if it were expanded a bit to accommodate telco cloud services beyond VNFs and 5G Core, it could serve as the framework for a generalized service-provider higher-layer strategy.

The Adrenaline package is in at least lightweight use, but I’m not able to find a full set of documents on it at this point, so I can’t assess just how well it suits the full function-virtualization or carrier-cloud missions, or how much more might be required.  Still, this is a very promising start, and if it does deliver a generalized higher-layer PaaS, it could be a revolution in carrier cloud, impacting software vendors, the cloud providers, and of course the network operators themselves.