What Operator Planners Think About the “Service Layer”

I promised recently that I’d post something on how operator planners viewed my “service plane” concept, as soon as I could get data. Over the last week, I did some data gathering, and today I think I can keep my promise—sort of. The reason for the qualification is that what I’m calling the “service plane”, in the name of staying as generalized as possible, is actually what operators are thinking of as “edge computing”. The reason that’s the case is interesting in itself.

Anyone who’s ever worked with telcos understands that they’re rooted in supply-side market-think. Demand is out there, and once a telco works through the process of deploying something to meet it, in a way that’s capital- and operationally efficient, the demand will then consume what they deploy. This is usually called the “Field of Dreams” (FoD) model, “Build it, and they will come.” Even savvy planners are more infected with this than we might think, and certainly more than I’d thought.

What my “savvy planners” have done is created a kind of modified Field of Dreams that I’ll call the “Field of Taking Advantage” (FoTA from now on). They recognize that they can’t just deploy stuff and wait for demand to develop, because they’ve seen that CFOs will no longer back that sort of strategy. Instead, what they propose is to wait until a new network technology emerges that’s linked with an opportunity that is already budgeted. When this first opportunity results in deployment of its necessary technology elements, they expect it to be (spontaneously) taken advantage of by other missions. The budgeted opportunity stimulates the dream, in short.

Both FoD and FoTA thinking tend to make operator planners focus on the infrastructure and not on the opportunity. My way of thinking (even after decades of working with operators) is that you generalize an architecture by looking at the range of missions you’re targeting. Theirs is that you look at the general infrastructure required and deploy it, justified by that first mission. That’s why I talk about “service planes” and my planner friends think of edge computing. To get reasonable input from planners, I have to accept their frame of reference.

If we consider telecom/cableco infrastructure as being three layers deep (access, metro, and core), we can map traditional network devices into these layers easily. The planners see “hosting” as having the same three layers. They map white-box hosting to the access network, traditional cloud computing to the core network, and edge computing to the metro. To them, the edge is the first point inward from the user where there’s enough concentration of traffic and customer value to justify actually having a data center.

Just where this is, or where they think it is, is pretty clear to them. Most network operators have real estate housing the terminations of a series of access trunks, which might be FTTH/FTTN, copper loop and DSL, CATV, or mobile backhaul. In the old days, we’d have called these places “edge offices” where “Class 5” phone switches were deployed. They’re big facilities, with plenty of space to house racks of servers. Similarly, these Class 5 edge-office locations would connect with Class 4 transit sites, broken down in the US by what was called “Local Access and Transport Areas” or LATAs. A LATA roughly corresponds to a Standard Metropolitan Statistical Area or SMSA, and there are about 250 of them in the US.

To my planners, edge computing would be based on server farms deployed in the edge and transit office locations, or their equivalent in cable networks or other networks that didn’t evolve explicitly from the public switched telephone network (PSTN). Thus, their FoTA thinking is that some key application would justify building out edge hosting in these locations, and that the hosting there would then be available to exploit with other missions. What is that key application? It’s 5G, of course, and specifically the 5G RAN (New Radio, or NR, to purists), and O-RAN in particular.

I’ve blogged about the symbiotic relationship between 5G O-RAN and edge computing before, and I won’t repeat all that here. What I want to point out instead is that to the operator planners, what’s important is that O-RAN (and 5G in general) will require hosting, not what kind of hosting it will be, or what the software technology necessary to create and sustain it would look like. This view has two important implications for my service-layer concept.

The first implication is that operators are at the moment not sensitive to the issue of the platform software for O-RAN and 5G. There is no strong desire to have offerings adhere to some specific set of rules. Yeah, they’ll say “cloud-native” or “container” or maybe even “Kubernetes”, but it’s just a mantra, not a commitment. That means that vendors who want to provide hosting for 5G features could in theory take very different tacks and not raise any hackles among the operators. However, since FoTA-think requires that follow-on missions inherit the infrastructure that the first mission justifies, and since the platform software is part of infrastructure, the platforms available to be inherited could be wildly diverse.

The second implication is that there is no service-layer thinking being applied to 5G hosting, and so nothing but the hosting facilities themselves are being put into place to facilitate things later on. A new service mission, say IoT, is going to have to frame out its own architecture in the same way that O-RAN framed out 5G NR. This could, and likely would, lead to a bunch of service silos, and would reduce the chances that follow-on missions could be supported quickly and with minimal incremental cost (capex and opex).

Before we wring our hands and declare that all is lost, there is a glimmer of hope. While planners don’t have a specific vision of a service layer, they do have a view of what the common characteristics of these future missions would likely be, and just as they view “the edge” as being an element of a service layer, they’d also view software to address these mission characteristics as being part of it too. They can accept the notion that there’s a software collection that would fit my “service layer” model, but they can’t quite get their arms around what the software would be or what specific missions it would serve.

The planners are pretty decisive about what isn’t a service-layer mission. The service layer is not connectivity. There is no user/data plane in it. There’s no IP control plane either. Instead, planners when pressed say that missions for edge computing will fall into two categories—request-driven (transactional) and event-driven. However, they’re not (at least at this point) really thinking about the difference in software architecture between the two, or the differences in platform software that each of the categories would require.

This is a really important point, in my view, because the industry has a lot of experience and tools to support transactional applications (RESTful APIs are a mainstay) but much less to support event-driven applications. However, online services are increasingly looking at event-driven structures for their applications, which obviously includes applications that would normally be thought of as transactional.
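The distinction between the two categories can be sketched in a few lines of code. This is a hypothetical illustration, not any real edge platform’s API: the function and variable names are my own. The point is architectural—a transactional handler blocks and returns a reply to a waiting caller, while an event handler reacts to things that arrive asynchronously, with nobody waiting on a response.

```python
import queue

# Request-driven (transactional): the caller waits for a reply.
# In a real service this would sit behind a RESTful endpoint; here
# it just returns a canned status so the contrast is visible.
def handle_request(device_id: str) -> dict:
    return {"device": device_id, "status": "ok"}

# Event-driven: events arrive asynchronously and are dispatched to
# handlers; no caller is blocked awaiting a response.
events: "queue.Queue[dict]" = queue.Queue()
handled: list = []

def on_event(event: dict) -> None:
    # A handler reacts to one event; it must not block on a reply.
    handled.append(event["type"])

def pump() -> None:
    # Drain whatever events have arrived, in arrival order.
    while not events.empty():
        on_event(events.get())

# Transactional call: synchronous request/response.
reply = handle_request("sensor-17")

# Event flow: producers enqueue now, the dispatch loop runs later.
events.put({"type": "temperature"})
events.put({"type": "door-open"})
pump()
```

The platform software implications follow directly: the transactional path needs load balancing and session-aware scaling, while the event path needs queuing, ordering, and handlers that hold no state between invocations—different enough that choosing one platform for both is a real design decision, not an afterthought.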

The AsyncAPI specification, which was modeled on the OpenAPI approach, is a good example of a way to describe event-driven APIs, and the OpenAPI initiative is a valuable effort to structure the relationship between software elements in an open way. Unfortunately, even software engineers are often unaware of either of these, and none of the planners I’ve engaged with were aware of them either. Even if they were, you can’t just adopt an event-driven API in software not designed to be event-driven.
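To give a sense of what such a description looks like, here is a minimal AsyncAPI 2.x document sketch. The channel name and payload fields are hypothetical, invented purely for illustration; the top-level structure (info, channels, subscribe operations, message payloads) follows the published specification.

```yaml
asyncapi: '2.6.0'
info:
  title: Sensor Events        # hypothetical service name
  version: '1.0.0'
channels:
  sensor/temperature:         # illustrative channel, not a real feed
    subscribe:
      message:
        payload:
          type: object
          properties:
            deviceId:
              type: string
            celsius:
              type: number
```

Note what the document describes: channels that carry messages, not endpoints that return responses. That structural difference is exactly why bolting an event-driven API onto request/response software doesn’t work.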

The net of all of this is that even savvy telecom planners aren’t really thinking about what they call “edge computing” as clearly as they should be. It would be somewhat simplistic to believe that all they cared about was having a platform to host something on, but they’ve really not thought out what would be needed beyond that, and perhaps as little as a quarter of them have even started to lay out a path to getting that critical information.

You really can’t plan “edge computing” without planning for a “service layer”, because it’s the latter that frames the relationship between missions and platforms, and without that relationship you really can’t define the platforms either. Of course, for service-layer planning, you also need to have a sense of mission planning, at least to the point of having some sense of application types or classes (transactional and event-driven were my examples). I don’t think it would be difficult to get this, in a technical sense, but it may be for other reasons.

Vendors are almost surely going to have to play a big role here, and this sort of planning almost screams out “Significant sales delay and expensive and fruitless educational sell!” at the top of its lungs. Those words scare vendor executives more than the question “What kind of software does edge computing need?” scares telecom planners. We may wait a bit for progress here, in short, and that could slow everyone’s realization of benefits.