What Operator Planners Think About the “Service Layer”

I promised recently that I’d post something on how operator planners viewed my “service plane” concept, as soon as I could get data. Over the last week, I did some data gathering, and today I think I can keep my promise—sort of. The reason for the qualification is that what I’m calling the “service plane”, in the name of staying as generalized as possible, is actually what operators are thinking of as “edge computing”. The reason that’s the case is interesting in itself.

Anyone who’s ever worked with telcos understands that they’re rooted in supply-side market-think. Demand is out there, and once a telco works through the process of deploying something to meet it, in a way that’s capital- and operationally efficient, the demand will then consume what they deploy. This is usually called the “Field of Dreams” (FoD) model: “Build it, and they will come.” Even savvy planners are more infected with this than we might think, and certainly more than I’d thought.

What my “savvy planners” have done is create a kind of modified Field of Dreams that I’ll call the “Field of Taking Advantage” (FoTA from now on). They recognize that they can’t just deploy stuff and wait for demand to develop, because they’ve seen that CFOs will no longer back that sort of strategy. Instead, what they propose is to wait until a new network technology emerges that’s linked with an opportunity that is already budgeted. When this first opportunity results in deployment of its necessary technology elements, they expect it to be (spontaneously) taken advantage of by other missions. The budgeted opportunity stimulates the dream, in short.

Both FoD and FoTA thinking tend to make operator planners focus on the infrastructure and not on the opportunity. My way of thinking (even after decades of working with operators) is that you generalize an architecture by looking at the range of missions you’re targeting. Theirs is that you look at the general infrastructure required and deploy it, justified by that first mission. That’s why I talk about “service planes” and my planner friends think of edge computing. To get reasonable input from planners, I have to accept their frame of reference.

If we consider telecom/cableco infrastructure as being three layers deep (access, metro, and core), we can map traditional network devices into these layers easily. The planners see “hosting” as having the same three layers. They map white-box hosting to the access network, traditional cloud computing to the core network, and edge computing to the metro. To them, the edge is the first point inward from the user where there’s enough concentration of traffic and customer value to justify actually having a data center.

Just where this is, or where they think it is, is pretty clear to them. Most network operators have real estate housing the terminations of a series of access trunks, which might be FTTH/FTTN, copper loop and DSL, CATV, or mobile backhaul. In the old days, we’d have called these places “edge offices” where “Class 5” phone switches were deployed. They’re big facilities, with plenty of space to house racks of servers. Similarly, these Class 5 edge-office locations would connect with Class 4 transit sites, broken down in the US by what was called “Local Access and Transport Areas” or LATAs. A LATA roughly corresponds to a Standard Metropolitan Statistical Area or SMSA, and there are about 250 of them in the US.

To my planners, edge computing would be based on server farms deployed in the edge and transit office locations, or their equivalent in cable networks or other networks that didn’t evolve explicitly from the public switched telephone network (PSTN). Thus, their FoTA thinking is that some key application would justify building out edge hosting in these locations, and that the hosting there would then be available to exploit with other missions. What is that key application? It’s 5G, of course, and in particular, it’s 5G RAN (New Radio, or NR, to purists) and O-RAN in particular.

I’ve blogged about the symbiotic relationship between 5G O-RAN and edge computing before, and I won’t repeat all that here. What I want to point out instead is that to the operator planners, what’s important is that O-RAN (and 5G in general) will require hosting, not what kind of hosting it will be, or what the software technology necessary to create and sustain it would look like. This view has two important implications for my service-layer concept.

The first implication is that operators are at the moment not sensitive to the issue of the platform software for O-RAN and 5G. There is no strong desire to have offerings adhere to some specific set of rules. Yeah, they’ll say “cloud-native” or “container” or maybe even “Kubernetes”, but it’s just a mantra, not a commitment. That means that vendors who want to provide hosting for 5G features could in theory take very different tacks and not raise any hackles among the operators. However, since FoTA-think requires that follow-on missions inherit the infrastructure that the first mission justifies, and since the platform software is part of infrastructure, the platforms available to be inherited could be wildly diverse.

The second implication is that there is no service-layer thinking being applied to 5G hosting, and so nothing but the hosting facilities themselves are being put into place to facilitate things later on. A new service mission, say IoT, is going to have to frame out its own architecture in the same way that O-RAN framed out 5G NR. This could, and likely would, lead to a bunch of service silos, and would reduce the chances that follow-on missions could be supported quickly and with minimal incremental cost (capex and opex).

Before we wring our hands and declare that all is lost, there is a glimmer of hope. While planners don’t have a specific vision of a service layer, they do have a view of what the common characteristics of these future missions would likely be, and just as they view “the edge” as being an element of a service layer, they’d also view software to address these mission characteristics as being part of it too. They can accept the notion that there’s a software collection that would fit my “service layer” model, but they can’t quite get their arms around what the software would be or what specific missions it would serve.

The planners are pretty decisive about what isn’t a service-layer mission. The service layer is not connectivity. There is no user/data plane in it. There’s no IP control plane either. Instead, when pressed, planners say that missions for edge computing will fall into two categories—request-driven (transactional) and event-driven. However, they’re not (at least at this point) really thinking about the difference in software architecture between the two, or the differences in platform software that each of the categories would require.

This is a really important point, in my view, because the industry has a lot of experience and tools to support transactional applications (RESTful APIs are a mainstay) but much less to support event-driven applications. However, online services are increasingly looking at event-driven structures for their applications, which obviously includes applications that would normally be thought of as transactional.
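To make the distinction concrete, here’s a minimal Python sketch of the two styles (all names and event types are hypothetical, purely for illustration): a transactional caller blocks until it gets a reply to a specific request, while an event-driven handler reacts to whatever events arrive, in whatever order they arrive.

```python
import queue

# Transactional (request-driven) style: one request, one reply,
# and the caller waits for the answer. A stand-in for a RESTful
# GET /devices/{id}/status call.
def lookup_device_status(device_id: str) -> str:
    return f"device {device_id}: OK"

# Event-driven style: producers push events onto a queue, and a
# dispatcher routes each event to a handler keyed by event type.
# The software is structured around reacting, not requesting.
def run_event_loop(events: "queue.Queue", handlers: dict) -> list:
    processed = []
    while not events.empty():
        event = events.get()
        handler = handlers.get(event["type"])
        if handler:  # unrecognized event types are simply ignored
            processed.append(handler(event))
    return processed

if __name__ == "__main__":
    # Transactional: the caller drives the interaction.
    print(lookup_device_status("sensor-7"))

    # Event-driven: the events drive the interaction.
    q = queue.Queue()
    q.put({"type": "door_open", "sensor": "dock-3"})
    q.put({"type": "temp_alarm", "sensor": "freezer-1"})
    results = run_event_loop(q, {
        "door_open": lambda e: f"logged {e['sensor']}",
        "temp_alarm": lambda e: f"dispatched tech to {e['sensor']}",
    })
    print(results)
```

The platform implications differ accordingly: the transactional model wants load-balanced request handling, while the event model wants queues, routing, and handlers that tolerate events arriving out of order.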

The AsyncAPI specification, which was modeled on the OpenAPI specification, is a good example of an event-driven API definition, and the OpenAPI initiative is a valuable effort to structure the relationship between software elements in an open way. Unfortunately, even software engineers are often unaware of either of these, and none of the planners I’ve engaged with were aware of them either. Even if they were, you can’t just adopt an event-driven API in software not designed to be event-driven.
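For illustration, here’s roughly what a minimal AsyncAPI document looks like. Note that it describes an event channel that consumers subscribe to, not a request/response endpoint; the service, channel, and payload field names below are hypothetical.

```yaml
asyncapi: '2.0.0'
info:
  title: Edge Sensor Events    # hypothetical service name
  version: '1.0.0'
channels:
  sensors/temperature:         # an event channel, not a REST resource
    subscribe:                 # consumers receive events published here
      message:
        payload:
          type: object
          properties:
            sensorId:
              type: string
            reading:
              type: number
            timestamp:
              type: string
              format: date-time
```

The contract describes what events flow and what they contain; it says nothing about who asks for them, which is exactly the mindset shift that request-oriented software teams have to make.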

The net of all of this is that even savvy telecom planners aren’t really thinking about what they call “edge computing” as clearly as they should be. It would be somewhat simplistic to believe that all they cared about was having a platform to host something on, but they’ve really not thought out what would be needed beyond that, and perhaps as few as a quarter of them have even started to lay out a path to getting that critical information.

You really can’t plan “edge computing” without planning for a “service layer”, because it’s the latter that frames the relationship between missions and platforms, and without that relationship you really can’t define the platforms either. Of course, for service-layer planning, you also need to have a sense of mission planning, at least to the point of having some sense of application types or classes (transactional and event-driven were my examples). I don’t think it would be difficult to get this, in a technical sense, but it may be for other reasons.

Vendors are almost surely going to have to play a big role here, and this sort of planning almost screams out “Significant sales delay and expensive and fruitless educational sell!” at the top of its lungs. Those words scare vendor executives more than the question “What kind of software does edge computing need?” scares telecom planners. We may wait a bit for progress here, in short, and that could slow everyone’s realization of benefits.

Nokia Proves the Value of 5G

Today’s market isn’t an easy one for network equipment vendors, especially those focused on service providers. Nokia may have a strategy to navigate this critical space, and if so every vendor needs to consider it. In its most recent quarter, Nokia beat on all the key financial metrics, and held to its guidance, which is pretty positive. The most significant point is that Nokia’s success comes largely from the service provider space, which most vendors (Juniper, for example) are finding to be a weak sector. What’s going on with Nokia is important to every vendor in the space.

What’s going on, of course, is 5G, and the highest-level truth Nokia’s numbers expose is that 5G is where it’s at, equipment-wise, for operators. I’ve noted many times that 5G’s unique benefit is that there’s a budget for it, and Nokia’s win for the quarter is surely due to its ability to exploit that budget. Yes, Huawei’s problems certainly haven’t hurt Nokia, but if Huawei were the only issue, other equipment vendors should have seen their own pop. It’s only the 5G players that seem to be moving.

In Nokia’s case, the growth of 5G has more than offset the decline in investment in 4G technology. It’s contributed strongly (via private wireless, including 4G and 5G) to a significant Nokia sales increase in the enterprise. I think it’s helped Nokia across the board, and that’s surely a signal of 5G’s importance to equipment vendors overall. Nokia is a 5G success story.

That doesn’t mean that Nokia can expect to continue happily riding the 5G wave to success. Nokia and Ericsson, the two 5G network equipment giants other than Huawei, have been jousting with each other and also trying to cope with operator interest in open-model networking for 5G. Nokia has been the most aggressive of all the big telecom vendors in supporting open-model 5G (Huawei has been the least), but open 5G presents a risk if big-name players from outside the telecom space jump in and field something credible. Nokia wants to make sure that doesn’t derail their own success, and it may be working.

There’s no question that an open 5G strategy would be less costly than one based on proprietary technology from a traditional telecom vendor, but operators are a conservative bunch, and they need to trust that a new open approach to 5G would offer them reliable and timely ingredients from which to build their services. To do otherwise would put the operators themselves at competitive risk. While operators have been broadly interested in open-model 5G, they’ve been antsy about three specific things, according to my operator contacts.

Thing One is integration and support. Operators tell me that an open-model 5G strategy would likely involve somewhere between a dozen and two-dozen elements, most of which would be from different, smaller, vendors. This creates a significant risk, from operators’ points of view, that the whole thing will creak, groan, and collapse rather than sing, once assembled. Then, who do you point a finger at?

Thing Two is evolution. The majority of big 5G deals will be made with companies who have already deployed 4G, and who need a graceful way of evolving their infrastructure from 4G-specific technology to a 5G model that’s explicitly designed to be backward compatible with it. Operators know the vendors who built their 4G networks, and those vendors aren’t the new 5G open-model players. Is it possible to evolve to an open-model 5G infrastructure and preserve 4G in parallel for as long as needed, which could be very long indeed?

Thing Three is the radio network, meaning explicitly the radio technology. There have been major improvements in the “open 5G radio” story recently with the announcement that the massive-MIMO radio problem is on its way to being solved, but it’s not solved yet. Without a solution available now, and with no promise of just when one can be expected, and with questions on when massive MIMO would be needed, this is a big risk to ask operators to accept. Nokia will be making a major, and they say market-leading, massive MIMO announcement shortly, according to their earnings call.

The obvious import of this is that if you want to make money in the telecom equipment space, you’d better have a good 5G story. Why that’s the case isn’t much of a question either; it’s easier to sell something that’s budgeted than something that isn’t. The thing that is a question is just how vendors could get onto the 5G bandwagon. It’s not as easy as it might seem.

In order to have a useful play in 5G in 2021, you need a play in O-RAN, something that Nokia and rival Ericsson have demonstrated already. That means that you have to field one or more O-RAN elements yourself, create a strategy to host them, or both. It is possible for any major network equipment vendor to create O-RAN elements, but if that task hasn’t been underway for at least nine months, it might take too long to bear fruit this year. In addition, cloud software vendors like VMware have fielded their own strategy for O-RAN, one that will ultimately include both platform software and some specific O-RAN component implementations. The space could get pretty competitive.

More competition can be expected, even from the network router players. Nokia is in an interesting position because they are a credible O-RAN player, and because they also provide network routing gear. Many companies in the router space will find it difficult to eke out a 5G positioning because they have nothing to host it on, and no time to build components for O-RAN. Nokia has a leverageable story in the core, with O-RAN to ease the sales dialog.

Things aren’t as rosy with the other telecom router giants. Cisco alone among the telecom equipment players is a credible provider of hosting platforms, both hardware and software. Their O-RAN strategy is to exploit alliances with others. This strategy has been in place for a year, though, and I’m not seeing a lot of movement or traction. Juniper recently announced licensing O-RAN technology from Netsia, a Türk Telekom subsidiary, but that arrangement is new and isn’t bearing visible fruit yet. Juniper’s Contrail Cloud stuff could be a platform for 5G, of course, but this whole story needs to mature. Thus, neither Cisco nor Juniper is in a position to really exploit 5G, and they may not be in that position in 2021.

Among the telco network routing vendors, startup DriveNets may have an interesting advantage here. Their disaggregated routing model, coupled with their ability to host third-party elements as part of their cluster software, means that they could in theory host 5G O-RAN and/or 5G Core components, and since the routing control plane is already hosted there (along with the user/data plane), co-hosting all these elements could generate a more efficient implementation. Juniper cites its own control/data-plane disaggregation credentials when talking about 5G, so they might also have designs on exploiting this angle.

It’s also possible that a software player (Red Hat, VMware, or Wind River, for example) could create an alliance with white-box players and offer a platform solution, or even a model architecture of their own. White boxes running OpenFlow, combined with server-hosted control-plane and O-RAN elements, could create a whole new network model, and would provide an inroad for these vendors with operators whose infrastructure spending is largely focused on 5G.

The reason this is interesting is that we’re also seeing a lot of edge stories, including one from Red Hat, who made a number of “edgy” announcements at their conference. Edge computing, of course, requires hosting as a utility function, but it also requires both a conception of just what edge services are and how they would be architected for deployment. Right now, we don’t have any player touting a good story in either of those spaces, and if the same player grabbed 5G O-RAN positioning power and edge power, they could end up being formidable.

All the network vendors, including Nokia, Cisco, Ericsson, and Juniper, aspire to be an edge player, but this is going to be a major challenge. The edge is a lot more like the cloud than like the network, which means that cloud software vendors may have such a natural advantage here that it will be hard for network vendors to overcome. In fact, I think that only riding O-RAN to an early edge lead could level the playing field for the network vendors.

5G is a major driver of telecom spending, and that’s going to be true for years to come. If edge computing manages to escape the hype phase and deliver actual value propositions, it will be an even greater driver, and 5G O-RAN in particular may be the on-ramp to edge computing, which could make 5G-ready vendors even bigger winners. Nokia needs a bit more strategizing, but it might have a shot.