What will open-model networks look like, and how will we build them? What are the issues operators say are limiting their use? Operators are surprisingly confused on these points, largely because they're confused about the higher-level questions of planning and deployment. I've had a chance over the last month to gather operator concerns about open-model networks, how they're built, and how they're sold. Here's the list of what I've learned.
The top item on my “learned” list is that operators are more worried about evolution than revolution, and yet it’s revolution that’s driving planning. This is forcing vendors to adopt a kind of two-faced sales model. You have to check the boxes that represent the long-term open-model goals, or you’re not in the game at all. Once you do check the boxes, though, you have to be able to present the evolutionary path that gets the operator to the goal without undue risk and cost.
Most vendors seem to be putting their focus on one or the other of these two "faces", and my planning-stage contacts agree almost universally that this isn't the best approach. The operators who have actually executed on some aspect of open-model networking disagree; they drove themselves to the point of a buy decision, and so didn't need their vendors to espouse revolution to get their attention.
The biggest challenge this dualism has created isn't the message itself; most vendors seem to know what their revolutionary value proposition is. The problem in most cases is how to present it. There's little chance that a vendor other than a current incumbent with a lot of strategic influence could cold-call on the senior planners who need the strategic story, and in any event the sales organization isn't normally equipped for that level of engagement. Marketing material would be the answer, presented through an organized program delivered via the website, but most vendors don't have the story collateralized for that conduit.
The second issue is that operators are uncertain about the best future open-model approach. Some believe in the "NFV" or cloud-hosting model for future networks, while others believe in the white-box approach. This is the technical-strategy point where operators feel they've gotten the least useful information from their vendors. Some, in fact, say they're not even sure where their vendors fall on the issue.
Part of the problem here is what we could call the "challenge of competing gradients". The deeper you go into a network, meaning the further you are from the user edge, the more traffic you'd expect to see on a given trunk and terminating at a given node point. The more traffic you have there, the less credible it is to host data-plane functions on cloud resources. Operators believe they could host lower-volume traffic in the cloud, but not core network traffic. But the closer you are to the edge and the lower the per-node traffic, the less likely it is that you could justify a resource pool there to host anything on. That resource pool is likely available only deep inside the network, where traffic volumes limit the value of hosted functions. Catch-22.
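To make the Catch-22 concrete, here's a toy Python sketch. Every number in it is an invented assumption (per-subscriber demand, subscribers per tier, the hosting and pool thresholds), not operator data; the point is only the shape of the trade-off.

```python
# Toy model of the "competing gradients" trade-off. All numbers are
# invented assumptions for illustration, not operator data.

TIERS = [
    # (tier name, subscribers aggregated at a node of this tier)
    ("access edge", 1_000),
    ("metro aggregation", 50_000),
    ("core", 2_000_000),
]

PER_SUB_GBPS = 0.005            # assumed average demand per subscriber
CLOUD_DATA_PLANE_LIMIT = 500    # assumed Gbps a hosted data plane can credibly carry
POOL_MIN_GBPS = 1_000           # assumed traffic needed to justify a local resource pool

for name, subs in TIERS:
    gbps = subs * PER_SUB_GBPS
    pool_justified = gbps >= POOL_MIN_GBPS
    hosting_credible = gbps <= CLOUD_DATA_PLANE_LIMIT
    print(f"{name:18s} {gbps:10.1f} Gbps  pool justified: {pool_justified}  "
          f"hosted data plane credible: {hosting_credible}")

# The Catch-22: the tiers with enough traffic to justify a pool (the core)
# are exactly the tiers where a hosted data plane isn't credible, and the
# tiers where hosting is credible can't justify the pool.
```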
A pure white-box strategy offers another open-model alternative, but it's not without its issues. First, there are no current white-box products large enough to handle all the routing missions as a monolithic element; you need to somehow collectivize multiple boxes into a single operating entity. That's been done by DriveNets, which is likely why they won AT&T's core, but it's not widely supported or understood. Second, white boxes (says one operator) "just don't seem as modern an approach." Operators, particularly the larger ones, see carrier cloud as their ultimate future resource. Hosted router instances in carrier cloud are thus "modern", even though, as operators themselves agree, they don't work for large traffic volumes.
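The "collectivizing" point is easier to see with a sketch. What follows is purely an illustration of the multi-box logical router concept, not DriveNets' actual architecture; every class and name in it is invented:

```python
# Sketch of "collectivizing" white boxes into one logical router: one
# management identity and one flat port namespace, regardless of how
# many physical boxes sit underneath.

from dataclasses import dataclass, field

@dataclass
class WhiteBox:
    box_id: str
    ports: int            # physical ports this box contributes

@dataclass
class LogicalRouter:
    """One operating entity composed from many boxes."""
    name: str
    members: list[WhiteBox] = field(default_factory=list)

    def add_member(self, box: WhiteBox) -> None:
        # Capacity scales out by adding boxes, not by a forklift upgrade.
        self.members.append(box)

    def port_map(self) -> dict[str, str]:
        # Present one flat port namespace ("Ethernet<slot>/0/<port>") no
        # matter which physical box actually terminates the fiber.
        mapping = {}
        for slot, box in enumerate(self.members, start=1):
            for p in range(box.ports):
                mapping[f"Ethernet{slot}/0/{p}"] = f"{box.box_id}:port{p}"
        return mapping

core = LogicalRouter("core-1")
core.add_member(WhiteBox("wb-a", ports=4))
core.add_member(WhiteBox("wb-b", ports=4))
print(len(core.port_map()), "ports exposed as one router")  # 8
```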
This second issue is really a reflection of the fact that operators see open-model networking entirely as a cost-management strategy, which is my next point. Despite the fact that they want to think about the future, and despite their desire for "modern" approaches, they're really not targeting anything new, just targeting doing old stuff cheaper. That's made effective strategic positioning harder for vendors, because operators don't know what they want to hear about future services. It also means that the open-model solution has a primary goal of transparency. To paraphrase an old buyer comment I got, "The worst kind of project to present is a cheaper-box substitution; the best you can hope for is that nobody will ever see the difference." Other, of course, than cost. That means that sales efforts will tend to bog down in the equivalence problem: is this box really completely equivalent to my old box?
The obvious question is what the alternative to a cost-management-driven open-model transformation would be. That question is really a two-parter. The first part is the question of the service targets, and the second is the question of the infrastructure needed to deliver to those targets. There are two credible broad targets (over-the-top services/experiences and enhanced network services), but the two can blur together.
The notion that elastic bandwidth and turbo buttons and the like generate more revenue has been proposed and debunked for decades. The likelihood is that any new connection services will have to come from absorbing what I'll call "boundary activities" related to true over-the-top services. Two clear examples are the user-plane elements supporting mobility and the elements of content delivery. Both can be classified as examples of network-as-a-service (NaaS), as I've noted in prior blogs. In both cases, service quality and destination are attributes of a higher-than-network relationship, and because the network may not directly address these requirements, a new element is introduced to provide what's needed. In 5G, for example, that element is the UPF.
What NaaS does in this case is build a kind of overlay service, one with connection properties that can be controlled directly rather than inherited from device behavior below. That's one example of an enhanced network service, and also why the two types of services beyond the basics can blur.
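A sketch may help show what "controlled directly rather than inherited" means. The names here (OverlaySession, set_property) are invented for illustration; the point is that service quality and destination become attributes of the overlay session itself rather than side effects of router behavior:

```python
# Sketch of the NaaS overlay idea: connection properties are set through
# an API on the overlay session, not inherited from the devices below.

class OverlaySession:
    def __init__(self, user: str, service: str):
        self.user = user
        self.service = service
        self.properties = {}   # e.g. latency target, logical destination

    def set_property(self, key: str, value) -> None:
        # In a real NaaS element (a 5G UPF plays this role for mobility),
        # setting a property would translate into tunnel selection,
        # traffic steering, or queuing underneath.
        self.properties[key] = value

session = OverlaySession(user="imsi-001", service="video-cdn")
session.set_property("destination", "nearest-cache")  # mobility/CDN steering
session.set_property("latency_ms", 20)                # QoS as an overlay attribute
print(session.properties)
```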
Real over-the-top or higher-layer options also exist, and here the most obvious candidate is IoT. The vision of IoT-based location services coming about through a zillion startups exploiting free IoT sensors appeals to idealists, but it's not going to happen unless governments create regulated monopolies and define basic IoT services for a fee. The more realistic path is that an operator deploys sensors/controllers and then abstracts them through APIs to offer a higher-level, more useful representation of what's happening.
Think of this example. You have a five-mile avenue with cross streets and traffic lights at each corner, and you want to know about traffic progress and density. You could query five miles' worth of sensors and correlate the results, or you could ask a "traffic service" for a picture of conditions between the start and end of the avenue. Operators needn't get involved in navigation, route planning, food delivery, timing of goods movement, and so forth; they simply offer the "traffic service" to those who want to build those applications.
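Here's a minimal sketch of that abstraction, with hypothetical sensor data and an invented API; the point is that the application developer asks one question instead of correlating five miles of raw readings:

```python
# Sketch of the "traffic service" abstraction: the operator aggregates
# its own sensors behind one API. All data and names are hypothetical.

from statistics import mean

# Raw per-intersection readings the operator collects internally:
# (cross_street, vehicles_per_minute, average_speed_mph)
SENSOR_READINGS = [
    ("1st St", 42, 28), ("2nd St", 55, 22), ("3rd St", 61, 14),
    ("4th St", 38, 30), ("5th St", 47, 25),
]

def traffic_conditions(start: str, end: str) -> dict:
    """The higher-level view an operator could sell: one call, one answer,
    no sensor-by-sensor correlation by the application."""
    names = [r[0] for r in SENSOR_READINGS]
    i, j = names.index(start), names.index(end)
    span = SENSOR_READINGS[min(i, j): max(i, j) + 1]
    avg_speed = mean(r[2] for r in span)
    return {
        "segment": f"{start} to {end}",
        "average_speed_mph": round(avg_speed, 1),
        "density": "heavy" if avg_speed < 20 else
                   "moderate" if avg_speed < 27 else "light",
    }

# A navigation or delivery app just asks for the picture:
print(traffic_conditions("1st St", "5th St"))
```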
Another issue operators raised is simple confusion over popular marketing terms. Many of the terms used these days are as confusing to operators as they are enlightening. One in particular, "disaggregated", is clearly a victim of over-definition by vendors. If you can take a router out of its box these days, you can bet the vendor is claiming it's "disaggregated". Most operators weren't confident in their ability to define the term. Operators who were confident said it meant that software was separated from hardware (roughly 64%), that the control and data planes were separated (28%), or that a router instance was composited from multiple white boxes (8%).
This uncertainty over the meaning of the term seems to arise in part from deliberate vendor hype, and in part because vendors are letting the media and analyst community carry the water for messaging. It’s often the case that the first definition given for a concept, or the first one to get significant media attention, sets the term in stone. That can lead to significant mischaracterization by buyers. In at least three cases I’m aware of, a mandate for “disaggregated” was set by management and misunderstood at a lower level.
I think that any credible open-model network strategy has to provide both software/hardware disaggregation and control/data-plane separation, and I think any strategy aimed at high-capacity missions, even metro-level aggregation, will have to be composited from multiple white boxes. Thus, operators who picked any of the options for defining the term were partially right, but since only two picked them all, there's still a lot of education needed.
That sums up the issue with open-model networking. It's hard to have a partial revolution, a technology impact that's confined to one of three areas but is supposed to deliver benefits across the board. There is no one reason why open-model networks aren't exploding; what's really needed is either a recognition of the "ecosystemic vision" that combines all three definitions of "disaggregated", or a camel concept to stick its nose under the tent.
Operators do have a sort of camel-in-waiting. The place where open-model networks are expected to "start", in the view of almost all operators at all stages of commitment, is 5G Open RAN and 5G Core. Everyone points out that 5G is budgeted, and that the momentum for open-model networking has been greatest (and most visible) in 5G Open RAN. Yes, there are operators taking a broader approach, but the one getting universal attention is Open RAN.
AT&T has been involved in Open RAN for years, perhaps longer than any major operator. They've cited hopes that it would reduce their capex and spur competition and innovation, and both points are critical to where open networks are heading. It's very true that the RAN, because it's an edge technology and thus represents a mass deployment (and cost), would have a major impact on capex. That's even more true if the Open RAN camel managed to get its nose under the broader network tent.
The innovation side is harder to pin down. What kind of competitive innovation would be possible in an open technology? Does AT&T think competition in Open RAN will be a race to the commodity bottom, or are they seeing a broader impact? Might Open RAN open up innovation on the design of the network overall, and even the design relating to how higher-level network services are coupled to the network? That hope would seem to require at least a plan on what those services, and that coupling, would look like.
For prospective vendors, this is an important point, because it means a sales initiative aimed at Open RAN could likely connect without the strategic groundwork being laid first, groundwork that's proving difficult to put into place. The first point operators made, remember, was the strategic/tactical dualism and the problems it created. Such targeting could also create the risk of "tunnel vision", with vendors aligning their initiatives so specifically to Open RAN that they can't easily extend them to the rest of the network. Open RAN, for example, is said by most operators to mandate "control and data plane separation", but it doesn't; it mandates only that the 5G control plane be separated from the user plane, and that user plane is an IP network that still unites the IP control and data planes.
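The distinction is worth making concrete. In the sketch below (an illustrative structure, not a standards model), the 5G split separates signaling functions from the UPF, but the user plane remains an IP element that unites the IP control and data planes:

```python
# Two different "separations" that get conflated. Structure is
# illustrative only, not drawn from the 3GPP or O-RAN specifications.

FIVE_G_SPLIT = {
    "5g_control_plane": ["AMF", "SMF", "PCF"],   # 5G signaling functions
    "5g_user_plane":    ["UPF"],                 # carries subscriber traffic
}

# Inside the user plane's IP network, a conventional router still unites:
IP_ROUTER = {
    "ip_control_plane": ["BGP", "OSPF"],         # route computation
    "ip_data_plane":    ["forwarding ASIC"],     # packet forwarding
}

# A vendor strategy built only on the first split doesn't automatically
# extend to disaggregating the router itself.
print(FIVE_G_SPLIT["5g_user_plane"], "still unites", list(IP_ROUTER))
```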
Will we have camels leading us, or disaggregation Einsteins? That’s going to depend on which vendors catch on first.