Just how difficult is “carrier cloud”? There’s been a lot of talk about how hard it is for operators to deploy their own cloud resources, particularly when many services have a much bigger footprint than the operators’ own real estate holdings. There’s also been talk about how public cloud partnerships are favored by operators, in part because of the footprint problem and in part because most are uncomfortable with their ability to sustain their own cloud resources. Now there’s talk about whether even public cloud hosting could raise unexpected costs. Maybe, but it’s not clear just whose problem that is.
The article cited here is about the question of the portability of network functions, meaning the hosted functions that were created under the specifications of the Network Functions Virtualization ISG of ETSI. According to the article, “Network functions (or NFs) developed for one vendor’s cloud cannot enter another’s without expensive repurposing by their suppliers.” There are three questions this point raises. First, is it true? Second, whose problem is it? Finally, does it matter?
The best we can say regarding the first question is “Maybe”. Any software that runs in a public cloud could be designed to use cloud-specific web service tools, which would make the function non-portable. However, nearly any function could be written to run in a virtual machine without any specialized tools, making it essentially an IaaS application. These applications would be portable with a minimum of problems, and in fact that was a goal of the NFV ISG from the first. Subsequent efforts to align with “CNFs”, meaning “Containerized Network Functions”, introduce a bit more risk of specialization to a given cloud, but it’s still possible to move properly designed VNFs between clouds with an acceptable level of effort.
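To make that concrete, here’s a minimal sketch (in Go, purely for illustration) of the kind of function that stays portable: it uses only the standard library and generic environment-variable configuration, with no cloud-specific SDK calls, so the same binary or container image could run on any IaaS VM or container platform. The variable names and the trivial UDP-forwarding behavior are my own invention, not anything drawn from the NFV specifications.

```go
// Hypothetical, portability-focused sketch of a VNF-style component.
// It relies only on the Go standard library and environment variables,
// so it can run unchanged on any VM or container host.
package main

import (
	"log"
	"net"
	"os"
)

func main() {
	listenAddr := os.Getenv("LISTEN_ADDR")   // e.g. ":9000" (illustrative)
	forwardAddr := os.Getenv("FORWARD_ADDR") // e.g. "10.0.0.5:9000" (illustrative)

	conn, err := net.ListenPacket("udp", listenAddr)
	if err != nil {
		log.Fatalf("listen: %v", err)
	}
	defer conn.Close()

	dst, err := net.ResolveUDPAddr("udp", forwardAddr)
	if err != nil {
		log.Fatalf("resolve: %v", err)
	}

	buf := make([]byte, 65535)
	for {
		n, _, err := conn.ReadFrom(buf)
		if err != nil {
			log.Printf("read: %v", err)
			continue
		}
		// Forward the datagram unchanged; a real network function would
		// apply its actual behavior here (filtering, NAT, metering, etc.).
		if _, err := conn.WriteTo(buf[:n], dst); err != nil {
			log.Printf("write: %v", err)
		}
	}
}
```

The moment a function like this starts calling a provider’s proprietary managed services for state, messaging, or scaling, the portability evaporates, and that design decision is exactly what the next question turns on.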
The second question has a bit of its own ambiguity. The author of a VNF determines whether it uses non-portable features, which means that a VNF “supplier” could in theory write either largely portable or thoroughly non-portable VNFs. In that sense, it would be the VNF supplier who made the bad decision to render a function non-portable. However, many of these suppliers are beholden to a particular cloud provider through its development program. Cloud providers would love to have VNFs run on their cloud alone, so in a sense their proprietary goals are at fault. But operators themselves have the final word. Nobody puts a gun to your head and demands you buy a non-portable VNF. If operators, either individually or collectively through their participation in the NFV ISG, demanded that all VNFs forswear the use of all but the most essential non-portable features and take steps to make it easier to port those that must be used, VNF authors would toe the line because they couldn’t sell otherwise.
But the third question is likely the critical one. Does any of this matter? It’s the hardest question to answer because it’s a multi-part question.
The first and most obvious part is “Does NFV matter at all?” The NFV ISG got started over a decade ago, and quickly aimed well behind the proverbial duck in terms of pooled-resource hosting (despite my determined efforts to bring it into the cloud era from the first). The initial design was off-target, and because most groups like this are reluctant to admit they did a bunch of the wrong stuff, the current state of NFV is still beholden to much of that early effort. Are there operators who care about NFV? Sure, a few care a little. Is NFV going to be the centerpiece of carrier cloud? Doubtful, no matter who hosts it.
The second part of the question is “Does cloud portability matter?” The majority of operators I’ve talked with aren’t all that excited about having to integrate one public cloud provider into network services, and are considerably less excited about the integration of multi-cloud. In the enterprise, the majority of users who purport to be multi-cloud are talking about two or more clouds in totally different areas (a Big Three provider plus Salesforce is the most common combination). Slopping application components between cloud provider buckets isn’t something anyone likes much, and so operators get little encouragement for the idea outside the tech media. So, the answer to this question is “not a whole lot.”
The third piece of the question is “Does cloud portability in carrier cloud actually mean function portability?” Most operator interest in cloud partnerships at the service-feature level comes in areas like 5G, where some elements of the service are hosted by the cloud provider. These elements integrate with the rest of the network through interfaces defined by the 3GPP or the O-RAN Alliance, which means that they’re essentially black boxes. Thus, if Cloud Providers A and B both offer a given 5G feature, connected via a standard interface, the feature itself doesn’t need to be portable because the same black box is available from multiple sources.
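A tiny sketch of that black-box argument, again hedged: the interface and method names below are invented stand-ins, not real 3GPP or O-RAN service-based APIs, but they show why the operator-side code doesn’t care which provider’s implementation sits behind a standard interface.

```go
// Illustrative only: a stand-in for a standardized 5G-core-style interface,
// with two hypothetical cloud-provider implementations behind it.
package main

import "fmt"

// SessionManager stands in for some standardized control-plane function
// exposed over a defined interface.
type SessionManager interface {
	EstablishSession(subscriberID string) (sessionID string, err error)
}

// providerA and providerB represent hypothetical hosted implementations
// from two different cloud providers.
type providerA struct{}

func (providerA) EstablishSession(sub string) (string, error) {
	return "A-" + sub, nil
}

type providerB struct{}

func (providerB) EstablishSession(sub string) (string, error) {
	return "B-" + sub, nil
}

// attach is operator-side logic; it never needs to know which provider is
// behind the interface, so the implementation itself need not be portable.
func attach(sm SessionManager, sub string) {
	id, err := sm.EstablishSession(sub)
	if err != nil {
		fmt.Println("attach failed:", err)
		return
	}
	fmt.Println("session established:", id)
}

func main() {
	attach(providerA{}, "imsi-001")
	attach(providerB{}, "imsi-001") // same calling code, different black box
}
```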
The final piece? “What about features of more advanced services?” The truth is that the hosting of basic network functions isn’t going to move the ball much for operators, or for cloud providers. The big question is whether something bigger, like “facilitating services” or even OTT services like the digital-twin metaverse, might lie ahead. If it does, then broader use of better features could provide operators with the incentive to self-host some things.
The problem is that this question has no clear answer at the moment. Some operators are committed to facilitating services, some are even looking at OTT services, but nobody is doing much more at this point than dabbling. One reason is that there are no real standards for how these new features/functions would work, which means that there’s a risk that they wouldn’t be implemented and interfaced consistently.
That’s the real risk here, not the VNF risk. VNFs were designed to be data-plane functions, and were gradually eased over into standardized control-plane elements for 5G. IMHO, generalized facilitating services or OTT service features would likely be “real” cloud components, unlikely to obey the management and orchestration rules set out for NFV. Still, they’d need some standardization of APIs and integration, or the providers of these new features/functions would have nothing to write to that would ensure they could be linked with operator networks and services. The NFV ISG, in my view, is not the appropriate place to get that standardization done, and until we have such a place, the risk the article describes exists, just in a form different from what’s described.
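To illustrate what “something to write to” might mean, here’s a hypothetical sketch of a facilitating-service API endpoint. The path, payload fields, and digital-twin framing are all my own invention for illustration; the point is that until some body defines contracts like this, every provider will invent its own and the integration risk remains.

```go
// Hypothetical facilitating-service endpoint; nothing here reflects an
// actual or proposed standard.
package main

import (
	"encoding/json"
	"log"
	"net/http"
)

// TwinUpdate is an invented payload a digital-twin-style facilitating
// service might accept from an operator's network.
type TwinUpdate struct {
	ObjectID string  `json:"objectId"`
	Metric   string  `json:"metric"`
	Value    float64 `json:"value"`
}

func main() {
	http.HandleFunc("/v1/twin/update", func(w http.ResponseWriter, r *http.Request) {
		var u TwinUpdate
		if err := json.NewDecoder(r.Body).Decode(&u); err != nil {
			http.Error(w, "bad payload", http.StatusBadRequest)
			return
		}
		// A real implementation would apply the update to the twin model;
		// here we just acknowledge it.
		log.Printf("update: %s %s=%f", u.ObjectID, u.Metric, u.Value)
		w.WriteHeader(http.StatusNoContent)
	})
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```

None of that is hard technically; the hard part, as NFV itself showed a decade ago, is getting everyone to agree on the contract.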