Software-Defined Telecom and Success by Subtraction

Recently a friend asked me why some things I’d declared failures were considered successes by the media.  The obvious reason is that something that’s good for publicity is good for the media; their standard of success is different.  The less obvious reason, and one just as pervasive, is what I’ll call “success by subtraction”, and it’s hitting telecom especially hard.  Hard enough, in fact, to make it a big problem.

Everyone knows that developments these days are over-hyped, and that’s been the case for about two decades.  One result of this over-promotion is that expectations for a product are set so high that there’s little chance it can meet them.  NFV is a good example: to succeed in its goal of significantly improving profit per bit, it would have to be widely adopted, and that’s not happening now, nor is it likely to happen if we stick to the ETSI NFV ISG model.  However, NFV can be applied in a very limited way to provide edge features for business data services (virtual CPE).  If we look only at that space, we could declare that NFV was “growing”.  It’s not that NFV actually has a chance of meeting its original goal; it’s that we’ve subtracted elements from that goal until it corresponds with what can actually be done.  Success by subtraction.

The problem with success by subtraction is that it hides major industry blunders: vast amounts of money and time spent on something that’s never going to pay off on the investment.  Meanwhile, of course, the problem our blundering technology was supposed to solve remains, and other possible paths to a solution are ignored.

You can see the success-by-subtraction paradigm clearly in 5G too.  Yes, we need 5G, in the sense that we need its improved spectrum efficiency, higher subscriber density, and so forth.  But is 5G a revolution?  We hear that it lets people download videos much faster, but of course most people stream videos rather than downloading them.  The reason for the 5G-is-faster line is that it’s a better story, but you could argue that what it really threatens to do is offer some operators a justification for making 5G more expensive.  If the real value of 5G is efficiency, why wouldn’t it make mobile service cheaper?

Where the real subtraction comes in is in the evolution of 5G features, from radio-network-centric first steps to full 5G Core.  We hear about all the things 5G Core will do, and we don’t hear much about the fact that 5G deployment today has nothing to do with 5G Core, and may never have anything to do with it.  We could very well see 5G stall out at the “New Radio” enhancement and still be touted as a success.  Success by subtraction.

Then there’s IoT.  The “Internet of Things” started (logically enough) as putting “things” on the Internet for exploitation by OTT players.  The OTTs were enthralled by the notion of exploiting all those IoT sensor resources, and the telcos were enthralled by the opportunity to send monthly bills for all the sensors to be connected.  There are a few examples of cellular IoT (in the transportation and utility industries, for example), and these are touted as the leading edge of a wave that will lead to the full IoT realization.  Except, as we already know, there’s no clear business case for that model and no real chance we’ll get it into play.  But never fear: we’ve now declared home thermostats on home WiFi to be IoT, despite the fact that we had home WiFi and home sensors long before IoT came along.  We’ve simply included this old stuff in the new IoT model, and thus declared our goals met.  Success by subtraction.

These examples demonstrate the problem, but it’s hard to say how much real industry harm is done simply by failing to live up to the technology promises we tout.  The real problem I see with this subtraction trend is in the area of service lifecycle automation.  I’ve been tracking operations costs and trends for decades, and one point that’s been consistently clear is that the great majority of “process opex”, meaning operations costs attributable to service and network operations, lies on the network side.  Processing service orders, doing billing, and so forth are fairly predictable cost elements that account for no more than about a seventh of overall opex, and they’re handled with reasonable efficiency already.  Sure, we could wring out a little more, but for the most part they’re efficient enough.  Where we have trouble is in handling problems: faults and unusual conditions.  “Lifecycle” automation, in short, has to automate the entire lifecycle.  Put that way, most operators agree.

The problem is that whether we’re talking about something like NFV’s Management and Orchestration (MANO) processes or something broader like ETSI Zero-Touch Automation (ZTA) or ONAP, we’ve ended up focusing almost entirely on deployment and not on the full service lifecycle.  It’s certainly important to get a service deployed, but that doesn’t address anything like the majority of the opex problem.  We’re seeing a contraction of scope that makes what’s being done the new “goal”: another success by subtraction in the making.

Deployment is one state/event set of the many involved in service lifecycle automation.  If we go back to the old days of protocol handlers, we see that there’s an “initialization” phase to getting a connection established.  It’s essential that this be there, of course, but if all you can do with a connection is initialize it, few people will care about your capability, because the benefits won’t justify the implementation costs.
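
To make the point concrete, here’s a minimal sketch of what a full state/event table looks like when deployment is treated as just one transition among many.  It’s in Python, and every state and event name in it is my own illustrative assumption, not drawn from any standard:

```python
# A minimal lifecycle state/event sketch.  All state and event names
# here are illustrative assumptions, not from any standard.
from enum import Enum, auto

class State(Enum):
    ORDERED = auto()
    DEPLOYING = auto()
    ACTIVE = auto()
    DEGRADED = auto()
    RETIRED = auto()

def start_deployment(svc): return State.DEPLOYING   # commit resources
def confirm_active(svc):   return State.ACTIVE      # service is live
def remediate(svc):        return State.DEGRADED    # begin fault handling
def restore(svc):          return State.ACTIVE      # fault cleared
def tear_down(svc):        return State.RETIRED     # decommission

# The table IS the automation.  Deployment ("order" and "deployed")
# covers only two transitions; faults, restoration, and
# decommissioning are first-class entries, not afterthoughts.
TRANSITIONS = {
    (State.ORDERED,   "order"):         start_deployment,
    (State.DEPLOYING, "deployed"):      confirm_active,
    (State.ACTIVE,    "fault"):         remediate,
    (State.DEGRADED,  "fault_cleared"): restore,
    (State.ACTIVE,    "decommission"):  tear_down,
}

def handle(state, event, svc):
    handler = TRANSITIONS.get((state, event))
    if handler is None:
        raise RuntimeError(f"unhandled event {event!r} in state {state}")
    return handler(svc)

# Walk a service through its whole lifecycle, not just deployment.
s = State.ORDERED
for ev in ("order", "deployed", "fault", "fault_cleared", "decommission"):
    s = handle(s, ev, "vpn-101")
```

If an implementation can only populate the first two rows of that table, it’s automating deployment, not the lifecycle.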

You see a lot of OSS/BSS vendors jumping on the ONAP concept, but remember that very few network operators have OSS/BSS systems fielding network events.  Lifecycle automation isn’t happening, and isn’t going to happen, in the OSS/BSS.  What’s needed is a different model, one where service data models link events to processes, not to “systems” like the OSS or NMS.  The very concept of a “management system” or “support system” screams “monolithic implementation!” to all who listen.  There is no “system” in a distributed, cloud-native implementation of lifecycle automation.  The data model does the systematizing for us, assembling processes to handle events.

Don’t take this to mean that the OSS/BSS or NMS goes away; the concepts remain, and there would also be a collection of processes that align with the current systems.  This doesn’t require that operators reorganize themselves, only that they build processes around data models rather than into systems.  That’s the critical cloud-level innovation, the thing that separates a cloud-native implementation from anything that came before.  It’s also critical for full-scale, cradle-to-grave service lifecycle automation, because it accommodates all service events and states, and it’s scalable enough to manage the event load we’d expect if all service events were handled.
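
As a sketch of what “processes around data models” might mean in practice, consider the Python fragment below.  Everything in it is my own illustrative assumption (the element names, the binding scheme, the process catalog); nothing here comes from ONAP, the TMF, or any real schema.  The structural point is that the model, not a management system, decides which process fields which event:

```python
# Hypothetical data-model-driven dispatch.  Each model element carries
# its own state and its own event-to-process bindings, so there is no
# central management "system" at all.

service_model = {
    "name": "biz-vpn-22",
    "state": "active",
    "bindings": {                       # (state, event) -> process name
        ("active", "port_fault"): "reroute_access",
        ("active", "congestion"): "scale_out_core",
    },
}

def reroute_access(element, event):
    print(f"{element['name']}: rerouting around {event}")

def scale_out_core(element, event):
    print(f"{element['name']}: adding core capacity after {event}")

# Stateless process catalog: any worker instance, anywhere, can load
# this and field any event.
PROCESS_CATALOG = {
    "reroute_access": reroute_access,
    "scale_out_core": scale_out_core,
}

def dispatch(element, event):
    """Look up the process the model binds to this state/event pair.
    The coupling lives in the data model, not in a monolithic NMS or
    OSS."""
    process = element["bindings"].get((element["state"], event))
    if process is not None:
        PROCESS_CATALOG[process](element, event)

dispatch(service_model, "port_fault")   # -> biz-vpn-22: rerouting...
```

Because the workers hold no state of their own, any number of them can be spun up to absorb the event load, which is the scalability property that full-lifecycle handling demands.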

We are not going to reform the media.  Ad sponsorship is now almost universal, and it encourages the production of entertainment and ads, not education or truth.  We’re not going to reform vendors either; the nature of the stock market dictates they stand tall during earnings calls.  Read the transcripts yourself and you’ll see what I mean.  What we have to do is reform buyers, and in the telecom space that means the network operators themselves.  They need to back initiatives that are heading in the right direction, both with funds and with qualified software architects.  They need to frame their software and standards around the state of the cloud, not around the structure of software systems that are often older than the current generation of engineers.

We also need to expect more from all these stakeholders, including the media.  Yes, it’s tough to write interesting stuff every day without breaching the boundaries of reality.  Yes, it’s tough to frame a future architecture without compromising the present, and yes, it’s difficult to toss aside decades of a business model.  Refusing to do those things doesn’t make them easy, though; it only offers a short-term respite from what will be a long-term collapse.  Open-source software is producing almost all the useful innovation in the software-defined age.  Wouldn’t it be better to be part of that revolution than to be making up stories to promote an unrealistic future vision or a short-term product strategy?  I think it’s time for everyone to accept that only the open-source vision of the future has any chance of really coming true, and to work harder to understand what that vision is, and what it means.