The service providers themselves may be giving carrier cloud its death blow, not tactically but strategically. In the last two months, operators worldwide have been shifting their thinking and planning decisively away from large-scale data center deployments. Carrier cloud deployments, which my model said could have generated a hundred thousand new data centers by 2030, now look like they won’t happen. And it’s not just that the mission will be temporarily outsourced to public cloud providers. It’s G-O-N-E.
In mid-September, operators will begin (with various levels of formality) their normal fall technology planning cycle, which will run till mid-November and guide spending plans for the years beyond. Over 85% of them now say that they don’t want to “make any significant investment in data centers”. That doesn’t mean they won’t have them (they do already), but that they are not looking to create services and features that will require large-scale in-house hosting.
The current market dynamic was spawned by operators deciding that, rather than building clouds of their own to offer cloud computing, they’d partner with the cloud providers. Then the operators started to show interest in hosting 5G features, and all three providers are now in a push (Google, most recently) to provide not only minimal hosting but also the 5G software itself. When that pathway opened for them, they insisted it was just a transitional approach, a way of scaling costs as 5G deployed. Now?
Now, they’ve been easing away from any cloud of their own, obviously. OSS/BSS systems, their own “enterprise applications”, were the next thing to be ceded by many operators to public cloud hosting. Hey, enterprises think the public cloud is the next big thing for their applications, so why should service providers be different? The answer, of course, is that service providers had expected to deploy their own clouds and somehow lost the will…or the justification.
There were two reasons why operators said they weren’t interested in having data centers anymore, and they were roughly equally cited. The first was that they lacked the skills to build and sustain cloud computing infrastructure, and were doubtful that they’d learn those skills by consuming the infrastructure from a third party. The other was that they doubted they would ever really have the applications to justify their own carrier cloud infrastructure. In either case, it boils down to the fact that they don’t want to get into the hosting business.
Part of the problem here is that back in 2013, when I first modeled the carrier cloud space, operators believed that they would be deploying data center resources to host NFV. Modeling their input on the topic, I projected an NFV-driven expansion of a thousand data centers worldwide by this point. In point of fact, my operator contacts say we have no data centers we can attribute to NFV. Without the pre-justification of data centers, the next application would have to bear the entire first cost.
5G, chronologically the next of the drivers, started off in planners’ minds as a pure RAN upgrade—the 5G Non-Stand-Alone or NSA version that ran 5G New Radio over 4G LTE infrastructure. That was a reasonable evolutionary approach, but the operators came to believe that the competitive 5G market would force them to deploy 5G Core almost from the first. Had the operators started off with carrier cloud using NFV as the driver, they could have hoped for another three or four thousand 5G-justified data centers by this point. They started late, and didn’t have the pre-deployed data centers, so they’re behind on this too.
The rest of the application drivers for carrier cloud, the largest drivers, are all now seriously compromised. IoT, video advertising and personalization services, and location/contextualization-based services are all over-the-top services that operators have historically not offered and are culturally uncomfortable with. Does anyone think an operator would build out cloud infrastructure on a large scale to prepare for any of them? They don’t believe it themselves, not anymore, and that’s the critical point.
If you need some specific evidence of this point, consider that AT&T is, according to the WSJ, looking to sell off its Xandr digital advertising unit. This unit would have been a logical way to exploit new personalization/contextualization features that might be created or facilitated by virtual network infrastructure. If you had even the faintest thought of future engagements in personalization/contextualization, would you kill off the easiest way to monetize your efforts? I think not. Recall, also, that AT&T is a leader in looking to public cloud providers to outsource its carrier cloud missions.
If you’re a software or equipment vendor, this is a disappointing outcome, but frankly one that those very players brought on themselves. Selling a new technology to a buyer is more than taking an order on a different order pad. Vendors in the data center and cloud technology space just couldn’t engage the buyer effectively, largely because they didn’t speak the same language. The fact that all these data center drivers will either go unrealized or be realized on public cloud infrastructure is a serious hit to the vendors who could have built those hundred thousand data centers.
This is also going to have a major impact on the transformation of the network, the shift from routers and devices to software-centric network-building. When there was a carrier cloud to host on, it was logical to presume that the network of the future would be built largely on commercial servers. Now, it’s almost certain that it will be built on white boxes and different elements of disaggregated software.
There’s always been a good chance that the (to me, mandatory) control- and data-plane separation requirements of software-based network infrastructure would demand a special data plane “server”, a resource pool dedicated to fast packet handling. The control plane could, in theory, still have been hosted on carrier cloud, but if there’s no carrier cloud and the only alternative is to host the whole network control plane on a third-party provider, control-plane white-box deployment starts to make a lot of sense.
The question is how this comes about. You can take a router software load and run it in the cloud, in which case your control and data planes are not separated. You can also take traditional router software and run it in the cloud for control-plane handling alone, letting it then communicate with a local white-box data plane for the packet-handling. Or you can build true cloud-native control-plane software, in which case whether you run it on a white box, your own server, your own cloud, or a cloud provider’s cloud wouldn’t matter much. That could facilitate the evolution of the control plane into the binding element between legacy connection services and new over-the-top or higher-layer services.
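To make the separated option concrete, here’s a minimal sketch of what control/data-plane separation looks like in principle. All the names here are hypothetical; the point is only that the control plane owns routing decisions and could live anywhere (white box, operator cloud, or public cloud), while the data plane is a dumb, fast packet-forwarder that just accepts installed forwarding entries.

```python
# Illustrative sketch only; all class and method names are hypothetical.
from dataclasses import dataclass


@dataclass(frozen=True)
class FibEntry:
    prefix: str      # destination prefix, e.g. "10.1.0.0/16"
    next_hop: str    # next-hop address on the data-plane device


class WhiteBoxDataPlane:
    """Stand-in for a white-box switch that only forwards packets."""
    def __init__(self):
        self.fib = {}

    def install(self, entry: FibEntry):
        # The data plane has no routing logic; it accepts what it's given.
        self.fib[entry.prefix] = entry.next_hop


class ControlPlane:
    """Stand-in for control-plane software hosted anywhere: a white box,
    an operator's own cloud, or a public cloud provider's cloud."""
    def __init__(self, data_plane: WhiteBoxDataPlane):
        self.data_plane = data_plane

    def learn_route(self, prefix: str, next_hop: str):
        # In a real system this result would come from BGP/OSPF processing;
        # here we just push the computed entry down to the data plane.
        self.data_plane.install(FibEntry(prefix, next_hop))


dp = WhiteBoxDataPlane()
cp = ControlPlane(dp)
cp.learn_route("10.1.0.0/16", "192.0.2.1")
print(dp.fib)  # {'10.1.0.0/16': '192.0.2.1'}
```

The design point is that nothing in `ControlPlane` cares where it runs; only the data plane is tied to specialized hardware, which is why the hosting question for the control plane stays open.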
Is the network of the future a data plane of white boxes, joined to a control plane that spans both dedicated white boxes and some sort of cloud, even the public cloud? Does that cloud-centric piece then expand functionally to envelop traditional control and management functions, new services that grow out of the information drawn from current services, and things we’ve never seen or heard of? Do operator services and services of over-the-top players somehow intermingle functionally in this control-plane-sandbox of the future? I think that might very well happen, and I also think it might happen even without a specific will to bring it about.
This might also frame out some of the details of edge computing. 5G already has a near-real-time segment in its control plane, which to me implies that we’re starting to see network/control-plane technology divide into layers based on the latency tolerance of what runs there. If we’re able to assign things to an appropriate layer dynamically, we can see how something like a mobile-edge node could host 5G features and also higher-layer application and service features that had similar latency requirements. If we had a fairly well distributed edge, we might even see how failover or scaling could be accomplished, by knowing what facilities exist nearby that could conform to the latency-layer specifications of the component involved. This might even end up influencing how we build normal applications in the cloud.
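The latency-layer idea above can be sketched very simply. This is a toy placement function under assumed round-trip times (the layer names and RTT figures are invented for illustration): given a component’s latency tolerance, pick the deepest (largest, presumably cheapest) hosting layer that still meets it, falling back toward the edge only when latency demands it.

```python
# Illustrative sketch; layer names and RTT values (in ms) are assumptions.
# Candidate hosting layers, ordered from the edge inward.
LAYERS = [("mobile-edge", 5), ("metro", 20), ("regional", 50), ("public-cloud", 100)]


def place(latency_tolerance_ms: float) -> str:
    """Return the deepest layer whose round-trip latency still meets the
    component's tolerance; raise if even the edge is too slow."""
    chosen = None
    for name, rtt in LAYERS:
        if rtt <= latency_tolerance_ms:
            chosen = name  # deeper layers overwrite shallower ones
    if chosen is None:
        raise ValueError("no layer meets the latency tolerance")
    return chosen


print(place(8))    # a near-real-time 5G function lands at the mobile edge
print(place(60))   # a tolerant function can sit deeper, at a regional site
```

A real placement system would also weigh capacity, cost, and nearby failover options, which is exactly where the “what facilities exist nearby” question in the text comes in.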
One question this all raises is whether the operators are in any position to supply the right infrastructure and platform architecture for carrier cloud. A more important question, since I think the answer to the first question is obvious, is whether the operators are in any position to define how network features/functions are hosted in carrier cloud. Should they let the cloud providers run with that, redefining things like NFV and zero-touch automation? NFV, at least, was supposed to identify specs, not create new ones. Might the trend toward public cloud hosting of 5G end up helping carrier cloud, and even helping operators transform, more than operators themselves could have? I think that’s a distinct possibility.
Another question is whether the operators really think they can host all network features in a public cloud. NFV hosted virtual devices, so it didn’t present network and latency issues greatly different from current networks. If you start thinking cloud-native, if you start thinking even about IoT and optimum 5G applications, you have to ask whether some local hosting isn’t going to be needed. We might well end up without “carrier cloud” but with a real “carrier edge” instead, and that could still generate a boatload of new data center opportunities. We might also see specialized hosting in the form of white-box implementations of network transport features, things that benefit from their own chipsets.
The cloud is a petri dish, in a real sense. Stuff lands in it and grows. The goal of vendors, cloud providers, and the operators themselves must be to fill the dish with the right growth medium, the technical architecture (yes, that word again!) that can do what’s needed now and support what blows in from the outside. I think that natural market forces just might be enough to align everyone with that mission, and so it’s going to be a matter of defining the relationships in that control-plane-cloud. Who does that? Probably, who gets there first.