The news for Open RAN just keeps getting better, but we all know that news is an imperfect reflection of market reality. There is indeed a lot of good and important news, but there are also some major questions being raised by operators, and implied by the very “good news” we’re happy about.
Operators are getting more and more committed to an Open RAN approach. There have been many announcements of vendor and operator support, and the US House passed (with almost unheard-of unanimity) an Open RAN bill designed to improve the credibility of technology elements that claim to adhere to the model. As I blogged last week, operators are seeing “Open RAN” as the gateway to a more open network model.
To me, the most important development in Open RAN is the Dish deployment. Dish obviously intends to be the fourth big mobile operator in the US, and it’s solidly in the Open RAN camp as far as infrastructure is concerned. Dish has had to pull together a lot of technology pieces to make this work, and while that illustrates the integration challenge of open technology (which is almost always built up from individual projects), it also illustrates that the challenge can be and is being met, which means there’s a prototype approach out there for other operators to learn from.
One thing we can learn already is that there’s still an expectation that Open RAN will involve white boxes that use some custom chips. Qualcomm recently joined Dish as a partner in the initiative, and other chip vendors (including Intel) expect to reap new opportunities by supplying 5G-tuned technology elements. That raises the question of how “the cloud” and “white boxes” will be combined in an Open RAN initiative, and how that cooperation will evolve as operators look beyond 5G RAN for open-model network candidates.
We know that as you get toward the edge of a network, the number of users to connect per geographic point gets smaller. You can push resources toward the edge, but you can’t go further than the number of users per pool of resources would justify. It follows that as you move toward the edge, there is less chance that your “open” strategy can consist entirely of cloud-hosted features and functions. You’ll start to see white boxes.
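Just to put toy numbers on that tradeoff, here’s a quick sketch of the pool-justification question. Every figure below is invented for illustration; real capacity planning is far messier.

```python
# Illustrative arithmetic only; all numbers are invented.
def pool_justified(users_at_site: int, capacity_per_server: int,
                   servers_in_min_pool: int, breakeven_utilization: float) -> bool:
    """Is the smallest deployable resource pool busy enough to pay off?"""
    pool_capacity = capacity_per_server * servers_in_min_pool
    return users_at_site / pool_capacity >= breakeven_utilization

# A metro aggregation point with 40,000 users can justify a pool...
print(pool_justified(40_000, capacity_per_server=5_000,
                     servers_in_min_pool=4, breakeven_utilization=0.5))   # True
# ...but a single cell site with 800 users can't; a white box wins there.
print(pool_justified(800, capacity_per_server=5_000,
                     servers_in_min_pool=4, breakeven_utilization=0.5))   # False
```

Below some user density, the pool never pays for itself, and a fixed-function white box becomes the cheaper way to host the feature.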
Interestingly, the same is true throughout the data path. While it may be possible to create a server that, when combined with some kind of superchip augmentation to traditional CPUs, would be able to push packets as fast as a white-box forwarding device, it’s not clear that there would be much value. Data paths are threaded through a combination of nodes and fiber, and the latter kind of goes where it goes. You know where you need nodes, which is at the place where multiple trunks terminate. White boxes make sense there.
This combination seems to me to suggest that it’s likely that white boxes will play a very large role in not only Open RAN, but in whatever builds out from it to create wider-scale open-model networks. In fact, if we forgot OSI models and network philosophy, we might be able to see the future network as the “white-box layer” with the “cloud-feature layer” on top.
There is, in 5G RAN and Open RAN, a concept of a “near-real-time” element, which acknowledges that there are pieces of the 5G control plane that have to be married more tightly to the user plane. Over-the-top services, from video streaming to social media, sit at the other end of the spectrum: the entire OTT space is further from real time than almost anything that’s part of the network. We also know that OTT applications are users of the network, not part of it.
If we map this to our device layers, we can say that white boxes are likely to handle the “near-real-time” pieces of feature/function distribution, and the cloud the higher layers. We could also presume that the cloud would handle things like backing up or scaling some of the near-real-time parts, since an overload or failure would likely be more disruptive than a slightly longer data/control pathway to a deeper cloud element, particularly if edge computing is where that overflow capacity is hosted.
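A tiny sketch of that overflow logic might look like the following; the thresholds and labels are made up, not drawn from any 5G specification.

```python
# Made-up thresholds and labels; not from any 5G specification.
def place_work(local_load: float, local_up: bool,
               overload_threshold: float = 0.9) -> str:
    """Decide where a near-real-time function instance handles new work."""
    if local_up and local_load < overload_threshold:
        return "white box (local, lowest latency)"
    # A slightly longer path to an edge-hosted backup beats dropping work.
    return "edge cloud (backup, a few milliseconds further away)"

print(place_work(local_load=0.4, local_up=True))    # white box
print(place_work(local_load=0.95, local_up=True))   # edge cloud
print(place_work(local_load=0.2, local_up=False))   # edge cloud
```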
Edge computing, in fact, may be justified less by the new things like IoT that might be done with it, than by the requirements of an open network and reliable hosting of features. That is more likely true if we start to think about how 5G’s control plane and user plane align with IP’s control plane and data plane.
We create a whole new layer, a tunnel layer, in wireless networks to accommodate the fact that cellular network users move between cells while in a session of any sort. To preserve the session, we have a system that detects the movement and moves the tunnel accordingly. Since the user’s session traffic is going to the tunnel and not to the IP address of the cell they started in, moving the tunnel moves the traffic. But if we have superchips that do packet forwarding based on instructions from the 5G control plane and mobility management, why couldn’t the data network offer an interface to convert those instructions (the 5G N2 and N4 interfaces) to forwarding table changes? No tunnels.
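To illustrate what that might look like, here’s a toy sketch of the idea. The classes and names are mine, not anything from the N2/N4 specs, and a real implementation would live in the control software of the white boxes themselves.

```python
# Hypothetical adaptation layer: a 5G mobility event becomes a plain
# forwarding-table write instead of a tunnel move. Nothing here is a real
# N2/N4 message format; it's just the shape of the idea.
from dataclasses import dataclass, field

@dataclass
class HandoverEvent:
    ue_ip: str       # the user's IP address, unchanged across cells
    new_cell: str    # next hop (port/adjacency) serving the new cell

@dataclass
class ForwardingTable:
    routes: dict = field(default_factory=dict)   # destination IP -> next hop

    def update(self, dest_ip: str, next_hop: str) -> None:
        self.routes[dest_ip] = next_hop          # one write replaces a tunnel move

def on_handover(event: HandoverEvent, path_nodes: list) -> None:
    # In the tunnel model we'd re-anchor a GTP tunnel; here we simply
    # rewrite the route for the UE's address on each affected node.
    for node in path_nodes:
        node.update(event.ue_ip, event.new_cell)

node = ForwardingTable()
on_handover(HandoverEvent(ue_ip="10.1.2.3", new_cell="port9"), [node])
print(node.routes)   # {'10.1.2.3': 'port9'}
```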
We do something similar to this in content delivery networks. When you click on a content URL, the decoding isn’t like that of an ordinary URL that links you to a specific IP address, it’s a dynamic relationship-building process that links you to the optimum cache point for what you’re looking for. Again, we could do that with the data-plane forwarding process.
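Here’s a minimal sketch of that resolution step, with invented cache names and a deliberately crude cost function; real CDNs use DNS redirection, anycast, and much richer metrics.

```python
# Invented cache names and a crude cost function, for illustration only.
CACHES = {
    "cache-east":  {"rtt_ms": 12, "holds": {"video/123"}},
    "cache-west":  {"rtt_ms": 48, "holds": {"video/123", "video/456"}},
    "cache-south": {"rtt_ms": 30, "holds": set()},
}

def resolve(content_id: str) -> str:
    """Pick the lowest-latency cache that actually holds the content."""
    candidates = [(cache["rtt_ms"], name)
                  for name, cache in CACHES.items()
                  if content_id in cache["holds"]]
    return min(candidates)[1] if candidates else "origin"

print(resolve("video/123"))   # cache-east
print(resolve("video/999"))   # origin (no cache holds it)
```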
Even cloud computing might have a similar enhancement. Right now, things like Kubernetes (at a primitive level) and service meshes like Istio (at a sophisticated level) do load balancing and discovery tasks to allow messages to reach dynamic and scalable microservices. Why not let that happen down in the chips?
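As a toy illustration, here’s the kind of flat match-action state a forwarding chip could hold, doing what kube-proxy or a mesh sidecar does in software today. All the addresses are invented.

```python
# A toy version of the flat state a forwarding ASIC could hold: the same
# VIP-to-endpoint mapping kube-proxy or a mesh sidecar maintains in
# software today. All addresses are invented.
import itertools

class ServiceVIPTable:
    """Maps a service's virtual IP to a rotating set of live endpoints."""
    def __init__(self):
        self._backends = {}

    def program(self, vip: str, endpoints: list) -> None:
        # In hardware, this would be a table write from the control plane.
        self._backends[vip] = itertools.cycle(endpoints)

    def forward(self, vip: str) -> str:
        # The per-flow decision: round-robin across the endpoints.
        return next(self._backends[vip])

table = ServiceVIPTable()
table.program("10.96.0.10", ["10.244.1.5", "10.244.2.7", "10.244.3.9"])
print([table.forward("10.96.0.10") for _ in range(4)])
# ['10.244.1.5', '10.244.2.7', '10.244.3.9', '10.244.1.5']
```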
What I think is emerging from 5G, and from other developing network missions, is the recognition that there’s a kind of middle-ground between “in the network” and “on the network”, a “partnered-with-the-network” piece that I’ve generally assigned to the category of “network-as-a-service” because it slaves connectivity to something that’s not part of traditional IP route determination and handling. As we morph from the white-box piece toward the cloud, we’re changing the relationship between “control” pieces and “data” pieces, and we’re flattening OSI layers to subduct more stuff down to the chip level, where we can do the functions efficiently.
It’s facile to say, as the 3GPP does, that the features of 5G are hosted with NFV, but that’s a mistake: we’re evolving into a future where low-level packet handling is getting a lot more efficient and agile, and NFV nails us to architecture models designed when it wasn’t. Things like NFV, or arguments over whether VNFs should be CNFs or CNNFs, are implementation details that should be settled by the specific tradeoffs in latency versus efficiency. One size does not fit all.
The challenge of this vision goes back to our hardware layers. We have white boxes that will obviously have to host components of the network. We have edge systems that will provide more localized supplementary hosting, both as backup to box stuff and as a feature repository for things that have to be somewhat close but not totally local to the data plane. We have cloud technology to host more OTT-like elements of services. If this were all homogeneous, it would be easy to see how pieces are deployed and coordinated, but it’s not.
The further out we go from traditional cloud data centers, the less likely it is that the totality of cloud software will be available for us to draw on. There are already versions of container and orchestration software (including lightweight Kubernetes distributions) designed for a smaller resource footprint, so there are already indications that one toolkit may not fit all projects in the open-model network future. How do we harmonize that? Multiple orchestration systems fall outside even today’s concept of “federation” of orchestration.
This is what I think will become the battleground of future network infrastructure. It’s not about finding a solution, but recognizing there’s no single solution to find. We need to federate the swirling layers of functionality without constraining how any of them are built or used. It’s the ultimate application of abstraction and intent modeling, waiting for an architect to lay out the blueprint.
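To make “abstraction and intent modeling” slightly less abstract, here’s a rough sketch of the kind of contract I mean. Every class, method name, and threshold here is invented; this is the shape of an intent model, not a product design.

```python
# Invented names and arbitrary thresholds; the shape of an intent
# contract, not a product design.
from abc import ABC, abstractmethod

class HostingDomain(ABC):
    """Intent model: callers say what they need, never how to build it."""
    @abstractmethod
    def deploy(self, feature: str, latency_budget_ms: float) -> str: ...

class WhiteBoxDomain(HostingDomain):
    def deploy(self, feature, latency_budget_ms):
        return f"{feature}: pinned to white-box silicon (near-real-time)"

class EdgeDomain(HostingDomain):
    def deploy(self, feature, latency_budget_ms):
        return f"{feature}: lightweight edge cluster"

class CloudDomain(HostingDomain):
    def deploy(self, feature, latency_budget_ms):
        return f"{feature}: deep cloud, autoscaled"

def federate(feature: str, latency_budget_ms: float) -> str:
    # The architect's blueprint: route intent by requirements, not toolkit.
    if latency_budget_ms < 10:
        return WhiteBoxDomain().deploy(feature, latency_budget_ms)
    if latency_budget_ms < 100:
        return EdgeDomain().deploy(feature, latency_budget_ms)
    return CloudDomain().deploy(feature, latency_budget_ms)

print(federate("near-real-time RAN control loop", 5.0))
print(federate("session analytics", 500.0))
```

Each domain is free to implement the intent however it likes; the federation layer only sees the abstraction.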
Who might that architect be? The public cloud providers, the software platform players like VMware and Red Hat, and the white-box software newcomers like DriveNets all have a place in the story to exploit. It’s not a race for a solution as much as a race for a vision, and those kinds of races are the hardest to call, and the most interesting to watch.