I know that in past blogs I’ve noted that we often create unnecessary problems in the industry by overloading hot new technology terms. “Software-Defined Network” is a great example; some people have used the term to describe a major shift from distributed device-adapted networking to centralized software networking. Others think you get to SDN by simply adding an API or a service catalog on top of a management system.
There’s no question that “riding the hype wave” is an established way for vendors to get good PR, but I think it’s also true that the software definition of networking is inherently a bit imprecise. Since I’m such an advocate of top-down thinking, let’s look at software’s role in networking from the top and see if we can dig out some significant truths.
Any product or service has “price/demand elasticity” in an economic sense. You can set a price, and at that price the demand you can access will depend on how many prospective buyers can justify the price by making a business case. In the early days of data networking, the price of data connectivity was very high—so much so that fewer than ten thousand sites in the US could justify the cost of a data circuit whose capacity was about a tenth that of consumer broadband today. Those data services were profitable to the seller, but limited in their upside.
The whole of broadband evolution since then has been about trying to make it possible to continue to bring broadband to more users, so as to increase the total addressable market (TAM). A big part of that is lowering the price of bandwidth without making it unprofitable, and this is where software comes in.
If we looked at networking in 1990, we could easily suppose that cutting the cost of broadband circuits through better use of fiber optics (for example) would let us lower the price. We might at first be able to reduce costs enough to drive major reductions in price while keeping profits steady. However, there are costs of service other than the cost per bit, meaning there are costs beyond capital costs. As we make generating bits cheaper, we increase the share of total service cost that those non-bit costs represent.
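To make that arithmetic concrete, here’s a minimal sketch with purely invented numbers (not figures from my model): if the capital cost per subscriber falls steadily while operations cost stays flat, the opex share of total cost keeps climbing, and price cuts driven by cheaper bits alone eventually stall.

```python
# Illustrative only: invented numbers showing how flat non-bit (opex) costs come
# to dominate total service cost as the bit-generating (capex) component shrinks.

capex_per_sub = 100.0   # hypothetical capital cost per subscriber, year 1
opex_per_sub = 60.0     # hypothetical operations (largely human) cost, held flat

for year in range(1, 6):
    total = capex_per_sub + opex_per_sub
    opex_share = opex_per_sub / total
    print(f"Year {year}: total cost {total:6.1f}, opex share {opex_share:.0%}")
    capex_per_sub *= 0.7   # assume technology cuts the bit cost 30% per year

# The trend: total cost falls more slowly each year, and the opex share climbs
# from under 40% to over 70% in five years, so further price cuts depend on
# attacking the operations side rather than just the cost per bit.
```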
Most of those non-bit costs involve what would today be called “opex,” meaning operating expenses, and most opex is human cost. Furthermore, the need for human intervention in service processes generates delays in making services available or changing them. Those delays make it harder to apply network-centric solutions to business problems you can’t anticipate well in advance. What this all adds up to is that to take the next step in cost management we need to operationalize services differently, in a less human-intensive way. That’s the broad topic of software automation of services.
This point is rarely if ever recognized, but software automation of services is necessarily based on two principles. First, you must virtualize all service-layer assets so that they can be controlled by software without manual intervention. Second, you must preposition access assets that have to serve as your service conduit so that you aren’t forced to roll trucks to deliver an agile virtual service through a new access connection.
Prepositioning access assets is a business decision, one that’s historically been strongly influenced by public policy. If we neglected regulatory impact, a free market for access would mean a kind of bit-arms-race among access providers to get the fattest and most versatile pipe to the highest-value service buyers, with every operator still exploiting its own access pipes. Many have seen this (rightly or wrongly) as an invitation to monopolistic behavior on the part of large operators, and some countries (Australia with its NBN project, for example) have attempted to “open” access networking by separating it from the competing operators and making it a not-for-profit, quasi-governmental asset. Most have implemented open-access mandates of some sort.
We’ll need to sort out the regulatory position on “open access” if we want to get the most from software automation of services. If the model of net neutrality that’s emerged for the Internet were applied generally to access infrastructure, it’s hard to say whether the infrastructure could be profitable enough to induce operators to oversupply capacity. Since it’s also hard to see how many buyers would be willing to pay for capacity oversupply against potential future need, we have a problematic leg of the access dimension of cost-effective networking, one that needs fixing but whose fixing is beyond technology’s capabilities.
Given that, let’s focus on the service side.
I think the best way to view service-layer automation is to say that it has two layers of its own. The top layer is OSS/BSS- and portal-related, and it’s designed to manipulate abstractions of network capabilities. The lower layer is designed to coerce behaviors from those network capabilities. A network can be said to be software-defined if its lower-level capabilities (“behaviors” as I call them) can be abstracted under a generalized (intent) model. That means an operator can claim software-defined or even virtual networking as long as they can abstract their management interfaces so as to permit operations systems or retail portals to call for a “service” and have the request properly mapped to the management APIs.
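As a rough sketch of what that abstraction might look like in software (the class and method names below are illustrative assumptions, not drawn from any standard or product), the portal or OSS/BSS layer orders a named “service,” and an intent-style behavior object maps that order onto whatever management APIs actually coerce the network:

```python
# A minimal sketch of the two-layer idea: the top layer manipulates named
# abstractions, and behavior objects map them onto real management interfaces.
# All names here are hypothetical, for illustration only.

from abc import ABC, abstractmethod

class Behavior(ABC):
    """An abstracted network capability: the intent, not the implementation."""
    @abstractmethod
    def realize(self, params: dict) -> None: ...

class LegacyVpnBehavior(Behavior):
    """Realizes a VPN intent by driving an existing management interface."""
    def __init__(self, mgmt_api):
        self.mgmt_api = mgmt_api   # e.g., a wrapper around an EMS/NETCONF client

    def realize(self, params: dict) -> None:
        # Translate the abstract request into concrete management calls.
        self.mgmt_api.provision_vpn(sites=params["sites"], bandwidth=params["bandwidth"])

class ServiceCatalog:
    """The top layer: portals and OSS/BSS processes see only named abstractions."""
    def __init__(self):
        self._behaviors: dict[str, Behavior] = {}

    def register(self, name: str, behavior: Behavior) -> None:
        self._behaviors[name] = behavior

    def order(self, name: str, params: dict) -> None:
        # The buyer (or an operations process) orders a service by name; how it
        # is coerced from the network stays hidden behind the behavior.
        self._behaviors[name].realize(params)
```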
SDN and NFV are mechanisms to address the virtualization of service-layer assets from that critical abstraction downward. They do this by making service connection features and higher-layer features software-hostable and thus more easily deployed and changed. SDN allows for the building of new connection models (the “chain” model I offered in a previous blog on SDN/NFV is one such new connection model), and NFV allows for the injection of new service features/functions without deploying specific appliances. Both of these augment the versatility of pooled resources as the basis for services, which makes those services more agile and more easily automated.
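Here’s one way to picture that (purely illustrative; the field names are assumptions, not anything from an SDN or NFV specification): a chain connection model and its hosted functions become data that software can deploy or change, with no appliance to roll out.

```python
# A hypothetical descriptor for a "chain" connection model with hosted functions.

from dataclasses import dataclass, field

@dataclass
class HostedFunction:
    name: str          # e.g., "firewall", "nat"
    image: str         # software image deployed onto pooled resources

@dataclass
class ServiceChain:
    ingress: str                                       # where traffic enters
    egress: str                                        # where traffic leaves
    functions: list[HostedFunction] = field(default_factory=list)

    def inject(self, position: int, fn: HostedFunction) -> None:
        """Inject a new feature into the chain without deploying an appliance."""
        self.functions.insert(position, fn)

chain = ServiceChain(ingress="branch-access", egress="internet-gateway",
                     functions=[HostedFunction("firewall", "fw:2.1")])
chain.inject(1, HostedFunction("nat", "nat:1.0"))   # a change, not a truck roll
```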
This structure explains what I think is a critical point: if your goal is operations efficiency and service agility, you can realize a lot of that goal at the OSS/BSS level. If you use ODL-type modeling to encapsulate and virtualize legacy management interfaces, you can control current services with software and radically improve agility and efficiency. You could say the same thing in NFV terms: an Infrastructure Manager that supported the right northbound abstractions and linked to legacy management interfaces on the southern end would look to NFV processes pretty much the way virtual-function-based infrastructure would.
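To illustrate that Infrastructure Manager point (again a sketch with assumed names, not the NFV ISG’s actual interfaces), imagine two IMs exposing the same northbound abstraction, one driving legacy management interfaces and one hosting virtual functions; the orchestration and operations processes above them can’t tell the difference.

```python
# Hypothetical sketch: two Infrastructure Managers behind one northbound API.

from abc import ABC, abstractmethod

class InfrastructureManager(ABC):
    """Northbound abstraction consumed by orchestration/operations processes."""
    @abstractmethod
    def deploy(self, feature: str, params: dict) -> str:
        """Deploy or activate a feature and return a handle for lifecycle steps."""

class LegacyIM(InfrastructureManager):
    """Realizes features by driving existing device management interfaces."""
    def __init__(self, ems_client):
        self.ems = ems_client      # assumed wrapper around an EMS/NMS interface

    def deploy(self, feature: str, params: dict) -> str:
        return self.ems.activate(feature, params)

class VirtualIM(InfrastructureManager):
    """Realizes the same features by hosting virtual functions on pooled servers."""
    def __init__(self, vim_client):
        self.vim = vim_client      # assumed wrapper around a cloud/VIM interface

    def deploy(self, feature: str, params: dict) -> str:
        return self.vim.instantiate(image=feature, flavor=params.get("flavor", "small"))
```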
This could have been done all along, of course. The TMF took most of the critical steps within the last decade, but in more recent projects they seem to have gotten mired in politics and vendor collisions and lost their edge. They reignited their efforts when it became clear that NFV might end up subducting operations into MANO. Now it’s become clear that NFV MANO isn’t even addressing its own issues thoroughly, but the TMF is still not leading the higher-layer work as it should. That sets the stage for a vendor to jump out and take advantage of the opportunity, but of course most vendors have had the same opportunity as the standards groups and failed to exploit it. We’ll have to see if things change.
I think that recognizing the elements associated with successful software automation of services would be a giant step forward. Telecom is an enormous industry with a disproportionately large sunk cost in equipment. It takes a lot of benefits to bring about systemic change in a situation like that, and perhaps isolating the operations side from the network side could help operators harvest benefits that depend less on changing installed equipment. This would, of course, come at least to a degree at the expense of the SDN and NFV revolution, because it would tap off benefits those two technologies might otherwise claim for their own.
What would the difference be between a service-automation-driven and an SDN/NFV-driven modernization approach? The big difference is that SDN/NFV address efficiency through infrastructure change, which is limited in the pace at which it can be adopted by the depreciation of current assets. My model says that the SDN/NFV path cannot make any meaningful contribution to opex reduction until 2018, while the service automation approach achieves in 2016 what SDN/NFV could achieve only in 2018. By 2022 the two models converge, largely because operations modernization that doesn’t happen external to SDN/NFV will surely happen as a result of it. So the big difference over the next eight years is the timing of savings, which service automation drives a full two years faster. Another difference is ROI: the SDN/NFV investment needed to drive savings in 2018, its first “good” year, is five times that needed to achieve much the same level of savings through service automation in 2016.
All this shows that if you take a service-automation view of modernization, orchestrate service processes, and abstract current infrastructure, you get by far the best result. In point of fact, this approach would actually reverse the carrier revenue/cost-per-bit convergence now projected for 2017, and in my model the two curves never converge at all. With the SDN/NFV approach, operators have to wait until 2019 to restore the balance between revenue and cost.
I’m going to blog more on this later, but for now let me say that the optimum approach is (and always has been) to prioritize the service-automation side of the picture and build a framework in which infrastructure modernization can pay back in both opex and capex. That’s why it’s so important for SDN and NFV proponents to push operations integration, and so sad that most of them do not.