The Tangled Web of OSS/BSS Modernization

I had an opportunity to chat with some insightful network operator CIO staff types, the topic being the near-and-dear one of “What should the future of OSS/BSS be?”  I’ve noted in some past blogs that there’s a surprising diversity of viewpoints here, ranging from the “throw the bums out!” model to one of gradual evolution.  There may also be an emerging consensus, at least on some key points.

OSS/BSS systems are the network operator equivalent of “core applications”, similar to demand deposit accounting (DDA, meaning retail banking) for banks or inventory management for retailers.  Like the other named applications, OSS/BSS emerged as a traditional transactional model, largely aimed at order management, resource management, and billing.

Three forces have been responsible for the changing view of OSS/BSS.  One is the desire of network operators to avoid being locked into products from a single vendor.  Most early OSS/BSS systems were monolithic; you bought one and used all of it.  That was no more popular in the networking industry than lock-in has been for any other vertical.  The second is the increased desire for customer self-care and the need to support online portals to provide for it.  The final one is the combination of increased complexity in resource control and decreased complexity in billing.  We used to have call journaling and now we have one-price-unlimited calling.  We used to have fixed bandwidth allocation and now we have packet networks with almost nothing fixed in the whole infrastructure.

The reason these forces are important is that they’ve operated on the same OSS/BSS market but taken it in different directions.  The lock-in problem has led to a componentized model of operations, with at least some open interfaces and vendor substitution.  That doesn’t necessarily alter the relationship between OSS/BSS and the business of the operators.  The self-care issue has led to the building of front-end technology that turns what used to be customer-service transactions into direct-from-user ones.  This has likewise not altered fundamentals much.

It’s the third force that’s been responsible for most of the talk about changes to OSS/BSS.  As networks moved from simple TDM to complicated, multi-layered packet infrastructure, the process of “provisioning”, the meaning of a “service level agreement”, and even what customers are billed for have all changed.  The new OSS/BSS vision is the result of these individual drives, and more.  But what is that vision?

If you listen to conferences and read the media sites, the answer is probably “event-driven”.  I think there’s merit to the approach, which says in effect that a modern operations process has to be able to respond to a lot of really complex stuff, ranging from changes in the condition of services based on shared resources (packet networks, server farms, etc.) to changes in the market environment and competition.  Each change, if considered an “event”, could drive an operations component to do something.

Event-driven OSS/BSS could also take componentization and the elimination of lock-in to a new level.  Imagine a future where the overall OSS/BSS structure is fixed, meaning that the processes that align with each service state and event are defined.  You could then buy each process separately, achieving the ultimate in best-of-breed.  Imagine!
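
To make that concrete, here is a minimal sketch, in Python, of what a fixed state/event-to-process structure might look like.  Everything in it (the states, event types, and process names) is hypothetical illustration, not a TMF or vendor specification; the point is only that if the table itself were standardized, any single entry could be swapped for another vendor's implementation.

```python
from typing import Callable, Dict, Tuple

# An operations "process" is anything that handles one event in one
# service state and returns the next service state.
ProcessFn = Callable[[dict], str]

def activate_service(event: dict) -> str:
    print(f"activating service {event['service_id']}")
    return "ACTIVE"

def handle_fault(event: dict) -> str:
    print(f"rerouting around fault on {event['service_id']}")
    return "DEGRADED"

# The fixed structure: each (state, event type) pair maps to exactly one
# process.  A best-of-breed buyer could replace any single entry with a
# different vendor's implementation that honors the same contract.
PROCESS_TABLE: Dict[Tuple[str, str], ProcessFn] = {
    ("ORDERED", "ACTIVATE"): activate_service,
    ("ACTIVE", "FAULT"): handle_fault,
}

def dispatch(state: str, event: dict) -> str:
    """Route an event to the process bound to the current service state."""
    return PROCESS_TABLE[(state, event["type"])](event)

state = dispatch("ORDERED", {"type": "ACTIVATE", "service_id": "svc-42"})
state = dispatch(state, {"type": "FAULT", "service_id": "svc-42"})
```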

This is a big change, though.  The question my OSS/BSS pundits were struggling with is whether you really need an event-driven OSS/BSS at all, or whether you need to somehow shortstop events so they never impact operations.  Can the networks themselves manage their own events?  Can service composition and lifecycle management be separated from “events” and kept largely transactional?  Could we avoid lock-in by simply separating the OSS/BSS into a bunch of integrated applications?  It might all be possible.

The primary near-term issue, according to experts, is insulating the structure of OSS/BSS from the new complexities of virtualization.  Doing that is fairly straightforward architecturally; you define the network as a small number (perhaps only one) of virtual devices that provide a traditional MIB-like linkage between the network infrastructure and the OSS/BSS.  Then you deal with the complexities of virtualization inside the virtual device itself.  This is applying the intent-model principle to OSS/BSS modernization.
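
Here is a rough sketch of that virtual-device idea, again with all names invented for illustration: the OSS/BSS sees a small, fixed, MIB-like set of attributes, while virtualization events churn away inside the model.

```python
class VirtualDevice:
    """Presents traditional device-style status to the OSS/BSS while
    hiding the virtualized resources that actually deliver the service."""

    def __init__(self, service_id: str):
        self.service_id = service_id
        # Internal detail the OSS/BSS never sees: the hosted components
        # that currently realize the service.
        self._vnf_instances = ["vFW-1", "vRouter-1"]

    def get(self, attribute: str):
        """MIB-like read interface: a small, fixed set of attributes."""
        status = {
            "operStatus": "up" if self._vnf_instances else "down",
            "componentCount": len(self._vnf_instances),
        }
        return status[attribute]

    def handle_internal_event(self, event: str) -> None:
        """Absorb virtualization events inside the model, so nothing
        propagates to the OSS/BSS unless the service itself is at risk."""
        if event == "vnf_failure":
            # Redeploy a replacement instead of reporting a fault upward.
            self._vnf_instances = [v for v in self._vnf_instances
                                   if v != "vRouter-1"] + ["vRouter-2"]

device = VirtualDevice("svc-42")
device.handle_internal_event("vnf_failure")
print(device.get("operStatus"))  # the OSS/BSS still sees a healthy device
```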

My OSS/BSS contacts say that this approach is actually the default path that we’re on, at least in one sense.  The way that SDN and NFV are depicted as working with OSS/BSS presumes a traditional interface, they say.  The problem is that the rest of the requirement, namely that there be some network-management process that carries the load of virtualization, hasn’t been addressed effectively yet.

The second issue, so the OSS/BSS experts say, is the problem of silos at the operations application level.  Everyone wants to sell their own suite.  In theory, that could be addressed by having everyone comply with TMF specifications and interfaces, but the problem is more complicated than that.  For there to be interoperability among separately sourced components, you need common functional standards for the components (they have to do the same thing), a common data model, and common interface specifications.  You also have to sell the stuff on a component basis.  Operators say that none of these are fully supported today.
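
In code terms, the three requirements look something like the sketch below: the abstract class stands in for the common functional standard, its method signatures for the common interface, and its shared argument and return types for the common data model.  The vendor classes and the rates are made up for illustration.

```python
from abc import ABC, abstractmethod

class BillingComponent(ABC):
    """Common functional standard: every vendor's billing component must
    do the same thing, through the same interface."""

    @abstractmethod
    def rate_usage(self, service_id: str, units: int) -> float:
        """Common interface: price a quantity of usage for a service."""

class VendorABilling(BillingComponent):
    def rate_usage(self, service_id: str, units: int) -> float:
        return units * 0.01             # flat per-unit rate

class VendorBBilling(BillingComponent):
    def rate_usage(self, service_id: str, units: int) -> float:
        return min(units * 0.02, 50.0)  # capped rate

# Because both honor the same contract, an operator can swap one for the
# other without touching the rest of the suite.
for component in (VendorABilling(), VendorBBilling()):
    print(component.rate_usage("svc-42", 1000))
```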

The logical way to deal with things is probably to define a repository model and presume that the applications all work with that repository in some way.  However, operators who want some specialized tuning of data structures to accommodate the way they offer services, bill for them, etc. might have some issues with a simple approach.
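
A minimal sketch of that repository idea follows, assuming an invented record layout.  The extensions field is where the operator-specific tuning the paragraph mentions would live, so the common schema can stay stable while individual operators specialize.

```python
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class ServiceRecord:
    service_id: str
    state: str
    billing_plan: str
    # Operator-specific attributes live here, so the shared schema stays
    # stable while individual operators can still specialize.
    extensions: dict = field(default_factory=dict)

class Repository:
    """The single point of truth that all operations applications share."""

    def __init__(self):
        self._records: Dict[str, ServiceRecord] = {}

    def put(self, record: ServiceRecord) -> None:
        self._records[record.service_id] = record

    def get(self, service_id: str) -> ServiceRecord:
        return self._records[service_id]

repo = Repository()
repo.put(ServiceRecord("svc-42", "ACTIVE", "flat-rate",
                       extensions={"regional_tax_code": "EU-27"}))
# An order-management app and a billing app would both operate on this
# same record, instead of synchronizing private copies.
print(repo.get("svc-42").billing_plan)
```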

It’s fair to ask whether the TMF could do what’s needed here, and the answer you get from operators is mixed.  There is a general view that the TMF perhaps does too much, meaning that its models and structures go further than needed in standardizing operations software and databases, and by doing so limit utility and agility.  All of the experts I chatted with also believed that the TMF specifications were too complicated.  Almost all of them said that OSS/BSS specifications needed to be open to all, and the majority said that there should be open-source implementations.

Which, most say, we might end up with, and fairly soon.  The challenges of virtualization have led to a displacement of formal standardization by open-source projects.  That same thing could happen for OSS/BSS, and the experts said that they believed the move to open-source in operations would naturally follow the success of an open model for virtualization and service lifecycle management inside that virtual-device intent model.  They point to ONAP as a logical place for this.

I’ve been involved in telecom operations for decades, and I’ve learned that there is nothing in networking as inertial as OSS/BSS.  A large minority of my experts (but still a minority) think that we should scrap the whole OSS/BSS model and simply integrate operations tasks with the service models of SDN and NFV orchestration.  That’s possible too, and we’ll have to wait to see if there’s a sign that this more radical approach—which would really be event-driven—will end up the default path.