What are Operators Planning for New Technology in 2021?

Operators usually do a tech planning cycle that runs from about mid-September to mid-November.  The ones I’ve been tracking (about 50) are now done with their cycles, so this is a perfect time to look at what operators think they need to be doing, and facing, in 2021.

Tech planning and budget planning, I must note, are related but not the same.  Most operators will plan out any major technology initiatives or potential new directions, and then work their business cases in the budget cycle.  The tech cycle is designed to ensure that they understand how to acquire and deploy something new, and what they could expect to get from it in broad terms, before they spend a lot of time running numbers on the candidates.  Nearly all major network technology shifts in the past have started with a tech planning item.

One thing I find highly interesting (and unusual) in this year’s planning cycle is that two tech initiatives came up not only tied for the top spot, but also co-dependent.  The issues were 5G deployment and open-model networking.

Obviously, all operators are at least committed to 5G and 37 of the ones on my list were actively deploying 5G.  The reason this is important is that when you’re looking at operator technology initiatives, it’s not the brilliance of the technology that matters, but how well the technology is funded.  Nobody questions 5G funding credibility for 2021, period.  That makes 5G almost unique, and that makes things that are tied to 5G automatic concept winners.

The linkage between 5G and open-model networking is a winner for two reasons beyond simple association.  First, operators recognize that there is little or no near-term incremental revenue credibility to 5G deployment.  Of the operators I’ve chatted with, only 12 suggested that they believed they could see “new revenue” from 5G in 2021, and only 26 thought they’d see it in 2022.  Frankly, I think both these numbers are about double what’s really likely to happen.  Second, operators know that because 5G will happen, whatever 5G is built on will tend to get nailed to the financial ground with a stake about five to seven years long.  It’s easy to extend current technology directions, but much harder to change them.

One thing that this last point has done is to commit operators to “standard first, open second” in terms of planning.  They want to make sure that everything they deploy conforms to the structural model of the 3GPP 5G RAN and Core specs, and they then want to maximize the number of open components within those models.  This vision eliminates (or at least reduces) the risk that the operator might have to forklift an entire early deployment to adopt an open-model approach, if such an approach were to be unavailable or impractical at the time of the first deployment.  You can introduce new open-model solutions to 5G elements on a per-element basis because the standard guarantees the interfaces needed to connect them.
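The per-element substitution idea can be sketched in code.  This is a purely illustrative toy, not any real 5G software: the class and method names (a “centralized unit” with an uplink handler) are hypothetical stand-ins for an element sitting behind a 3GPP-standardized interface, and the point is only that code written to the standard interface doesn’t care which implementation sits behind it.

```python
from abc import ABC, abstractmethod

# Hypothetical stand-in for a 3GPP-standardized interface between
# 5G elements (think of the interface between a CU and the rest of
# the RAN).  The rest of the network codes only to this interface.
class CentralizedUnit(ABC):
    @abstractmethod
    def handle_uplink(self, packet: bytes) -> str: ...

class VendorCU(CentralizedUnit):
    """Proprietary implementation deployed first."""
    def handle_uplink(self, packet):
        return f"vendor-CU processed {len(packet)} bytes"

class OpenCU(CentralizedUnit):
    """Open-model implementation substituted later, per element."""
    def handle_uplink(self, packet):
        return f"open-CU processed {len(packet)} bytes"

def run_ran(cu: CentralizedUnit, packet: bytes) -> str:
    # Because only the standard interface is referenced here, the CU
    # can be swapped without a forklift upgrade of anything around it.
    return cu.handle_uplink(packet)

print(run_ran(VendorCU(), b"abcd"))  # proprietary element deployed first
print(run_ran(OpenCU(), b"abcd"))    # open element substituted later
```

The design point is the operators’ own: the standard guarantees the interface, so openness can arrive one element at a time.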

But is open-model 5G a stepping-stone toward open-model networking overall?  That’s a complicated question that only about half of operators seem to have considered, or considered in any organized way.  Clearly the functional elements of 5G, the 3GPP building-blocks, are specialized and so not likely to be reused in non-5G applications.  What operators think should or will be reused are the tools associated with the deployment, operationalization, and modernization of open-model 5G.  The principles of something like Open RAN or Cloud RAN or whatever, then, should be ones that could be extended to the network at large, to all its services and missions, in the future.

This point seems to be a goal flying in the face of details, or vice versa.  A bit less than half the operators had really looked at the question of the technology needed for open-model networks, both for 5G and in the broader context.  The others were saying things like “both will be supported by virtual functions” or “we expect to increase our use of white boxes”, without the kind of details that prove careful consideration has been involved.

Among those that have actually thought things out, there’s a debate that’s not fully resolved, and a few who don’t think any resolution is possible.  They’re aware that there are NFV-style VNFs in play, and actually called out in the 3GPP 5G stuff.  They also know that there’s something called a “containerized network function” and something called a “cloud-native network function”, and it’s their view that neither of these things is defined with any rigor.  They also know that it’s almost certain that no hosted network function of any sort is going to replace high-capacity data-path devices like switches and routers.  Any open-model approach there will necessarily be based on white boxes.

To me, the white-box piece of this story is the most critical.  Networks carry packets, which means that virtually all network elements have a data-plane component.  It’s credible to think that a hosted function of some sort could provide individual user data planes (though it’s not, to these key operators, credible that this would be a net savings for both capex and opex).  It is not credible, according to the operators, to believe hosted routers will replace all proprietary routers, whereas it is entirely credible that white-box routers could.  Thus, the open-model network of the future is going to have a large number of white boxes, and it’s likely that the biggest piece of that network—the aggregation and core IP stuff of today—will be white-box-based if it’s open.

For this group, the question is whether the source is the only difference between a white-box router and a proprietary router.  Open, disaggregated software running on a variety of white boxes that are 1:1 substituted for proprietary devices is one choice.  Router complexes (such as those of DriveNets, who won AT&T’s core) are another choice.  SDN flow switches and a controller layer are a third.
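The third choice is the one that changes the device model the most, so it’s worth a sketch.  This is a toy, not any real SDN API (OpenFlow and its kin are far richer): the idea is simply that the flow switches hold dumb match/action tables, and all the route logic lives in a central controller that writes those tables.

```python
# Toy sketch of SDN flow switches plus a controller layer.
class FlowSwitch:
    def __init__(self, name):
        self.name = name
        self.flow_table = {}             # match (dest prefix) -> action (port)

    def install_flow(self, dest_prefix, out_port):
        self.flow_table[dest_prefix] = out_port

    def forward(self, dest):
        # Real switches do longest-prefix or field matching; a simple
        # prefix test stands in for that here.
        for prefix, port in self.flow_table.items():
            if dest.startswith(prefix):
                return port
        return None                      # table miss: punted to the controller

class Controller:
    """Central control plane: route logic lives here, not in the boxes."""
    def __init__(self):
        self.switches = []

    def attach(self, switch):
        self.switches.append(switch)

    def program_route(self, dest_prefix, port_map):
        # port_map: switch name -> egress port along the computed path
        for sw in self.switches:
            if sw.name in port_map:
                sw.install_flow(dest_prefix, port_map[sw.name])

ctl = Controller()
s1, s2 = FlowSwitch("s1"), FlowSwitch("s2")
ctl.attach(s1)
ctl.attach(s2)
ctl.program_route("10.1.", {"s1": 2, "s2": 7})
print(s1.forward("10.1.0.5"))   # 2
print(s2.forward("10.1.0.5"))   # 7
```

The contrast with the first two choices is visible even in the toy: the white boxes here don’t run routing protocols at all, which is what makes this model “open at the functional level” rather than a 1:1 device substitution.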

One operator planner put it very well: “The question here is whether an open network is open at the device level, [meaning] based on open implementations of traditional elements, or open at the functional level, meaning based on open implementations of service features, in non-traditional ways.”  Both paths lead to some white-box elements, but one path means a lot more of them.

Another issue that this “literati” group is beginning to deal with is the notion of the control plane as the feature layer of the future network, whatever the implementation model.  IP has a control plane, one that SDN centralizes.  5G (and 4G) separated the “mobile control plane” and the “user plane”, which means defining a second control plane.  Services like video delivery have a series of CDN-related features that could be collectively defined as a control plane, and cloud computing creates something like a control plane for orchestration, service mesh, and other stuff.  Are all these control planes going to get more tightly coupled, even as the data plane becomes more uncoupled?

This may be a question that’s tied into the other priority consideration from this tech cycle: “carrier cloud”.  Operators used to see carrier cloud as being their implementation of public cloud, justified by selling cloud computing services to enterprises.  They thought that hosting NFV or 5G on it was just a mission for an infrastructure they saw as inevitable and already justified.  Now, obviously, there is no realistic chance for operators to compete with public cloud providers.  There may not be a realistic mission to host NFV or 5G in the cloud at all; white boxes might be the answer.  Should operators even be thinking about carrier cloud as their own resource pool, or is “carrier cloud” the set of things they outsource to the public cloud providers they used to be thinking of competing with?

Almost all operators I’ve chatted with believe they cannot deploy “carrier cloud” to address any near-term service or technology mission.  That would generate an unacceptable first cost to achieve coverage of the service area and reasonable economy of scale.  They think they have to start in the public cloud, which of course makes public cloud providers happy.  But the big question the literati are asking is “what is it that we host there?”

Cloud providers want to provide a 5G solution more than a 5G platform in the cloud.  Microsoft is a good example; they’ve acquired both Affirmed and Metaswitch to be able to sell 5G control-plane services, not just a place to put them.  The smaller operators are increasingly OK with that approach, but the larger operators are looking harder at the risk of a major lock-in problem.  Better, they think, to create a 5G platform and 5G hosted feature set, and then have the public cloud providers host it with minimal specialization of the implementation to any given cloud.  That way, the operators can use multiple providers, switch providers, or pull everything, or part of everything, off the public cloud and back to self-hosting in the original carrier cloud sense.  There will be an exploration of the business case for these competing approaches in 2021.

There’s also going to be an exploration of just what we’re heading for with all these control planes.  While it’s true that the OSI concept of protocol layering means that every layer’s service is seen as the “user plane” or “data plane” to the layer above, modern trends (including 4G and 5G) illustrate that in many cases, higher-layer control functions are actually influencing lower-level behavior.  Mobility is the perfect example.  If that’s the case, should we be thinking of a “control plane” as a collection of service coordinating features that collectively influence forwarding?  Would it look something like “edge computing”, where some control-plane features would be hosted proximate to the data plane interfaces and others deeper?  The future of services might depend on how this issue is resolved.

The unified control plane may be the critical element in a strategy that unifies white boxes and hosted features and functions.  If there’s a kind of floating function repository that migrates around through hosting options ranging from on-device to in-cloud, then you really have defined a cloud with a broader scope, one that is less likely to be outsourced to cloud providers and one that opens the door to new revenues via the network-as-a-service story I’ve blogged about.  About a quarter of operators are now at least aware of the NaaS potential, but nobody had it on their agenda for this year’s cycle.

The final issue that’s come along is service lifecycle automation.  This has the distinction of being the longest-running topic of the decade’s technology cycles, illustrating its importance to operators and their perception that not much progress is being made.  Operators say that a big part of this problem is the multiplicity of operations groups within their organization, something that carrier cloud could actually increase.

Today, operators have OSS/BSS under the CIO, and NMS under the COO.  In theory, systems of either type could be adapted to support operations overall, but while both the TMF and some vendors on the NMS side have encouraged a unified view in some way, nobody has followed through.  The thought that NFV, which created the “cloud operations” requirement, could end up subducting both came up early on, but that never happened either.  The missteps along the way to nowhere account for most of the lost time on the topic.

Today, open source is ironically a bigger problem.  AT&T’s surrendering of its own operations initiative to the Linux Foundation (where it became ONAP) made things worse, because ONAP doesn’t have any of the required accommodations for event-driven, full-scope lifecycle automation.  There are very few operators who’ll admit that (four out of my group), and even those operators don’t know what to do to fix the problem.  Thus, we can expect to see this issue on the tech planning calendar in 2021 unless a vendor steps in and does the right thing.