Is “the latest and greatest” always great? There are certainly many examples of fad-buying in the consumer space. In business, though, it would probably be a career-killing move to suggest a project whose only benefit was adopting “the latest thing”. That doesn’t mean there isn’t still a bit of latest-thing hopefulness in how new technologies are positioned, but it does suggest those hopes will be dashed. According to a Light Reading piece, that’s now happening with SDN/NFV, but I think there’s a bigger question on the table.
The article documents a survey of operators, asked when they expected “virtualization” to lower their opex by at least 10%, that was run in November and again in May. One result is pretty much predictable: in the most recent survey, a lot of operators jumped ship on hopes that this level of savings would be generated in 2017, since that’s now only half a year away. The other interesting result is that the largest group of operators think they won’t see 10% opex savings until 2021 or beyond. To understand what this means, we first have to look at what we mean by “opex” and analyze from there.
The kind of “opex” that technology could address is what I call “process opex”, meaning the costs associated with service sale, delivery, and support. That differs from what I’ll call “financial opex”, which includes my process opex items plus any other service-related costs that are expensed rather than capitalized. The biggest pieces of the difference are things like roaming charges paid to other operators, or backhaul leases. But if we take process opex as the target, then for this year total process opex is about 27 cents per revenue dollar.
Is setting a 10% opex target realistic? A 10% savings in process opex would yield about three cents per revenue dollar. To put this in perspective, total capex across all operators averages around 19 cents per revenue dollar, so a 3-cent opex savings corresponds to a capex reduction of about 16%. That’s just shy of what operators have said they think the maximum capex reduction from deploying NFV might be. My own model of potential savings, which I introduced last year, predicted that SDN/NFV deployment could achieve the 10% savings level in 2019, which happens to be roughly the median time of the survey responses. Thus, I think 10% is a good target.
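If you want to check that arithmetic, here it is in a few lines of Python, using the rounded per-revenue-dollar figures cited above:

```python
# Savings arithmetic from the figures above, in cents per revenue dollar.
process_opex = 27   # process opex: ~27 cents per revenue dollar
capex = 19          # total capex: ~19 cents per revenue dollar

savings = round(0.10 * process_opex)   # 10% of 27, about 3 cents
capex_equivalent = savings / capex     # 3 / 19, about 0.158

print(f"opex savings: {savings} cents per revenue dollar")    # 3
print(f"equivalent capex reduction: {capex_equivalent:.0%}")  # 16%
```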
It’s the next point that’s critical. Is “virtualization” just SDN/NFV? I also said last year that if you were to apply intent-model-based software automation to services and service lifecycle management, you could achieve that same 10% savings two years earlier, meaning in 2017. That means that applying orchestration and service-modeling principles to current and new services, even presuming no infrastructure transformation at all, could generate more benefit than SDN and NFV. SDN and NFV would of course pull these changes through in time, but not as the primary technical goal. Not only that, the delay in adoption inherent in linking opex transformation to a massive transformation of infrastructure would slow the ramp of savings.
You cannot achieve opex reduction with hardware, period. You can get it with software automation that is pulled through by hardware, or by software automation on its own. I think the biggest factor in the delayed realization of “virtualization” opex benefits is that we don’t really have a handle on the software-automation side, in part because SDN and NFV are driven by the CTO group while operations (OSS/BSS) belongs to the CIO. It’s not so much that operators aren’t seeing the benefits realized as that they don’t even see a path to securing them. We are only now starting to see operators themselves try to put SDN and NFV into a complete operations context.
We do have, with initiatives like the Orange NaaS story, indications that operators are elevating their vision of services to the point where that vision disconnects from infrastructure transformation. Because NaaS tends to be based on overlay technology (as I discussed in an earlier blog this week), it disconnects service processes from infrastructure technology, either its current state or its evolution. This could mean that NaaS would drive consideration of operations automation separate from infrastructure transformation, bringing us closer to, first, facing the issue and, second, addressing it independently of SDN and NFV.
NaaS disconnects service control from real infrastructure by introducing an intermediate layer, the overlay network. That lets service operations automation operate on something other than real devices, which preserves the current infrastructure until something comes along that really justifies changes there. When it does, the NaaS model insulates operators and customers from the technology transition/evolution. But it still doesn’t create operations efficiency. For that you need to virtualize and software-automate operations itself.
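To make that indirection concrete, here’s a minimal sketch in Python. All the class and method names are my own illustrations, not any product’s actual API; the point is simply that the service layer binds to the overlay, and only the overlay binds to whatever the real infrastructure happens to be:

```python
# Minimal sketch of the NaaS indirection described above. All names are
# hypothetical illustrations: the service layer talks only to the
# overlay, never directly to real devices.

class Underlay:
    """Whatever the real infrastructure is today: routers, or later SDN/NFV."""
    def provision_path(self, a, b):
        print(f"underlay: building transport between {a} and {b}")

class OverlayNetwork:
    """The intermediate layer: maps service endpoints to overlay tunnels."""
    def __init__(self, underlay):
        self.underlay = underlay   # swappable without touching services
        self.tunnels = {}

    def connect(self, endpoint_a, endpoint_b):
        self.underlay.provision_path(endpoint_a, endpoint_b)
        self.tunnels[(endpoint_a, endpoint_b)] = "up"

class NaaSService:
    """Service operations automation works on the overlay abstraction only."""
    def __init__(self, overlay):
        self.overlay = overlay

    def activate(self, sites):
        for a, b in zip(sites, sites[1:]):
            self.overlay.connect(a, b)

# The underlay can evolve (legacy to SDN/NFV) behind the overlay;
# NaaSService and its operations processes never have to change.
service = NaaSService(OverlayNetwork(Underlay()))
service.activate(["site-1", "site-2", "site-3"])
```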
In a software-centric vision of operations, you’d have a data structure made up of “objects” that represent services, features, sub-features, implementation options, infrastructure, and so forth. This structure would consist of a series of “intent models” that, like all good abstractions, hide the details below from the layers above. Operations, management, lifecycle processes, and anything else that’s service-related would be defined for each element of the structure in a state/event table. This kind of model is composable, and it’s compatible with emerging cloud trends as well as with SDN evolution. NFV hasn’t spoken on the issue yet.
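Here’s a minimal sketch of what such a structure could look like, assuming a simple dictionary-based state/event table; the object names, states, and events are hypothetical, not drawn from any standard:

```python
# Sketch of an intent-model object with a state/event table, as
# described above. Names and states are illustrative assumptions.

class IntentModel:
    def __init__(self, name, children=None):
        self.name = name
        self.children = children or []   # sub-features, implementation options...
        self.state = "ordered"
        # state/event table: (state, event) -> lifecycle process to run
        self.handlers = {
            ("ordered", "activate"): self.deploy,
            ("active", "fault"): self.remediate,
        }

    def handle(self, event):
        process = self.handlers.get((self.state, event))
        if process:
            process()

    def deploy(self):
        # the abstraction hides how each child fulfills its own intent
        for child in self.children:
            child.handle("activate")
        self.state = "active"

    def remediate(self):
        print(f"{self.name}: running fault lifecycle process")

# Composable: a service is a tree of intent models.
vpn = IntentModel("vpn-service",
                  children=[IntentModel("access"), IntentModel("core")])
vpn.handle("activate")
```

The important property is that each object exposes only its intent and its lifecycle states; how a child fulfills that intent is invisible to its parent.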
Despite a lack of clarity on how NFV could address this model, there does seem to be some operator momentum on making it work for NFV, and also for SDN. Since we don’t really have much in the way of SDN/NFV deployment, there’s plenty of opportunity to put something into place for it when it comes along. The difficulty has been higher up, with the OSS/BSS processes. NaaS could bring clarity there by defining a “network function” pairing: physical and virtual, or PNF and VNF. That function could then become the bottom of an OSS/BSS service model, and the SDN/NFV orchestration process could be tasked with decomposing it into management commands (for PNFs) or deployments (for VNFs).
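In code terms, the xNF idea might look something like the sketch below (names hypothetical): the OSS/BSS model bottoms out at one abstract function, and orchestration beneath it decides between management commands and deployment:

```python
# Sketch of the xNF demarcation: the OSS/BSS sees one abstract
# "network function"; orchestration below decomposes it into either
# management commands (PNF) or a deployment (VNF). Names are illustrative.

class PNF:
    def realize(self):
        print("sending management commands to existing device")

class VNF:
    def realize(self):
        print("deploying virtual function onto hosting infrastructure")

class NetworkFunction:
    """Bottom object of the OSS/BSS service model: the PNF/VNF pairing."""
    def __init__(self, name, implementation):
        self.name = name
        self.implementation = implementation   # chosen at orchestration time

    def realize(self):
        self.implementation.realize()

# The OSS/BSS asks for "firewall"; whether it's a real box or a VNF
# is invisible above the demarcation.
firewall = NetworkFunction("firewall", implementation=VNF())
firewall.realize()
```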
Having a demarcation between abstract features and real infrastructure has benefits, one of which is that you can evolve operations at both levels largely independently. For example, instead of having a single service automation and orchestration platform for everything from old to new, top to bottom, you could in theory have different platforms responsible for decomposing different objects at various places in the model. That means you could define something with one model (TOSCA, for example) at the top and another (YANG) at the bottom for legacy infrastructure, and stay with TOSCA for cloud-deployed elements. Of course you could also still adopt a single model, if the industry could agree on one and all vendors accepted it!
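A trivial way to picture that per-object independence, again with purely illustrative names, is a model in which each object carries a reference to the engine that decomposes it:

```python
# Sketch of per-object decomposition: different platforms handle
# different layers of the model (e.g., a TOSCA engine at the top, a
# YANG-driven engine for legacy at the bottom). Illustrative only.

def tosca_decomposer(obj):
    print(f"TOSCA engine decomposing {obj}")

def yang_decomposer(obj):
    print(f"YANG engine driving legacy config for {obj}")

# Each object in the service model names the engine that decomposes it.
service_model = [
    ("vpn-service", tosca_decomposer),       # top-level, cloud-friendly
    ("cloud-firewall", tosca_decomposer),    # cloud-deployed element
    ("legacy-access", yang_decomposer),      # legacy infrastructure
]

for obj, decompose in service_model:
    decompose(obj)
```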
An xNF demarcation also has risks, because the xNF boundary is a likely barrier to tight integration between operations and infrastructure. How big a risk that is depends on your perspective; today, most network lifecycle processes are not managed by OSS/BSS systems. However, a fully integrated approach could let operations tasks be assigned to handle events fairly close to (or even at the level of) real network and server hardware. It’s hard to say how useful that would be, so it’s hard to say what we’d lose by foreclosing it.
The Light Reading piece exposes a problem, but I think it’s more than just a transient shortfall of realization versus need in opex management. The problem isn’t that SDN/NFV isn’t delivering opex benefits fast enough; the problem is that opex benefits aren’t in scope for SDN or NFV in the first place. We’re blowing kisses at operations when we have to, hoping that buyers don’t really dig into the details. What we need to do now is face reality, and recognize that if we want opex efficiency we’re going to get it by transforming operations, not by transforming the network. Until we do that, we’re going to undershoot everyone’s expectations.