One of the useful trends in network services these days is the retreat from the technology basis for a service in favor of its retail attributes. You can see this in announcements from operators that they're supporting "network-as-a-service" or "self-service", but in fact these same trends are a critical part of the "virtual CPE" (vCPE) movement in the NFV space. They're also tied into managed services and SD-WAN. The NaaS space, then, might be the unifier of all the trends we're talking about these days. So where is it now, and where is it going?
There would seem to be a lot of value in a more dynamic model of network services. Even today, users report that service changes take an average of 17 days to complete, with some requiring over 30 days. The problem is acute for networks that cross international borders, but it's present even where you only have to change operators from place to place or adapt to different access technologies.
The delay frames two hypothetical problems. One is that the cost of all the activity that takes an average of 17 days to complete surely reduces profits; the other is that those days represent non-billing days that could have been billed. I say "hypothetical" because it's clear that you don't have 17 days of frantic activity, and that even if all 17 days could be made billable, that revenue is available only per change, not per customer per year. It's also impossible to determine how much of the delay is already absorbed by customer planning ("I know it takes three weeks, so I place my order three weeks before I need service") and would never have been paid for anyway.
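To make the revenue side of that argument concrete, here's a back-of-envelope sketch in Python. The monthly rate and change frequency are assumptions I've made purely for illustration, not reported figures; the point is only that the recoverable revenue accrues per change, not per customer per year.

```python
# Back-of-envelope version of the "hypothetical" billing argument.
# Both inputs below are assumed values, not survey data.
monthly_rate = 1_000      # USD per month, assumed
changes_per_year = 1.5    # average service changes per customer, assumed

# Even if all 17 provisioning days became billable, the gain accrues
# only when a change actually happens.
recovered = 17 / 30 * monthly_rate * changes_per_year
print(f"~${recovered:,.0f} recovered per customer per year")  # ~$850
```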
The challenge that NaaS presents, then, starts with setting realistic goals and frameworks for achieving them. You definitely need a portal system to let customers interact with fast-track service provisioning, changes, and self-maintenance, but having an online way of initiating a 17-day wait is clearly counterproductive. If anything, such a strategy would set an even higher expectation of instant response, and I think that frames the way NaaS has to be approached.
Business services today (and consumer services as well) are provided mostly as Ethernet or IP services. Today those services are "native" byproducts of the devices deployed, and planning and configuring them takes time because the setup of real devices has to be changed in some way to make a service change. If you gave a user a self-service portal, you'd risk the user asking for something by mistake that would destabilize the real infrastructure and impact other users. There are ways to mitigate these problems, but they're obviously not satisfactory or operators wouldn't be looking at new technologies to create agility.
New technology isn't the answer either, in part because you'd have to evolve to it and somehow support all the intermediate network states, and in part because even a new network technology would still give self-service users an opportunity to truly mess something up. Logically, the service network has to be independent of infrastructure. You need an overlay network of some sort, and the physical network of today (and every intermediate state through which it evolves, toward whatever ultimate technology you target) forms the underlay.
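Here's a minimal sketch of that overlay/underlay split in Python. Everything in it (the class, the JSON header, the UDP transport) is a hypothetical illustration I've constructed, not any operator's or vendor's design: the service layer deals only in overlay addresses, while the underlay carries opaque encapsulated payloads, so the underlay technology can change without touching the service definition.

```python
# A minimal, hypothetical sketch of the overlay/underlay separation.
# The service layer sees only overlay addresses; the underlay sees only
# opaque datagrams, so it can be IP today and something else tomorrow.
import json
import socket

class OverlayEndpoint:
    def __init__(self, overlay_addr: str, underlay_ip: str, underlay_port: int):
        self.overlay_addr = overlay_addr              # service-layer identity
        self.underlay = (underlay_ip, underlay_port)  # whatever transport exists today
        self.sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

    def send(self, dest_overlay_addr: str, payload: bytes) -> None:
        # Encapsulate: the underlay never interprets the service addressing.
        header = json.dumps({"src": self.overlay_addr,
                             "dst": dest_overlay_addr}).encode()
        frame = len(header).to_bytes(2, "big") + header + payload
        self.sock.sendto(frame, self.underlay)

# Usage (addresses are illustrative; 192.0.2.10 is a documentation IP):
ep = OverlayEndpoint("acme:hq", "192.0.2.10", 4789)
ep.send("acme:branch-12", b"service-layer frame")
```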
The big points about NaaS from an underlay or physical-network perspective are ubiquity, internetwork gateways, and headroom. You can't sell an overlay service unless you can reach all the locations in some way. You can't assume uniform physical facilities, so you have to be able to jump your overlay between different physical networks, whether the networks differ for reasons of technology or administration. Finally, you have to ensure that you have enough underlay capacity to carry the sum of the overlay services you want to sell.
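The headroom point is essentially an admission-control test. Here's a toy version, assuming a simple data model and an 80% utilization ceiling of my own choosing: before admitting a new overlay service, check that the committed overlay bandwidth on each underlay link stays under that link's effective capacity.

```python
# Toy "headroom" check: admit a new overlay commitment only if the sum of
# existing commitments plus the new one fits under a utilization ceiling.
# The link names, capacities, and 0.8 ceiling are assumed for illustration.

underlay_links = {"paris-london": 10_000, "london-nyc": 40_000}   # Mbps capacity
commitments = {"paris-london": [2_000, 3_500], "london-nyc": [12_000]}

def can_admit(link: str, new_mbps: int, ceiling: float = 0.8) -> bool:
    used = sum(commitments.get(link, []))
    return used + new_mbps <= underlay_links[link] * ceiling

print(can_admit("paris-london", 2_000))  # True: 7,500 Mbps fits under 8,000
print(can_admit("paris-london", 3_000))  # False: 8,500 Mbps exceeds the ceiling
```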
If we look at this from the perspective of the overlay network, we have four principal requirements. First, every customer network service access point has to be part of some connection network and have a unique address; otherwise you can't deliver anything to them. Second, the overlay has to be able to ride uniformly on whatever combination of connection networks exists for the prospect base that forms the service target. Third, if network technology is likely to evolve, the overlay has to be able to accommodate the new technology and the transition states. Finally, the mechanisms for setting up the overlay network have to be fully automated, meaning software-based.
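One way to read these requirements is that an overlay service is just data plus software transactions. The sketch below is a hedged illustration of that idea; the classes and fields are invented for the purpose and are not a real orchestration API, but they show how adding a site can be a pure software operation that never touches underlay device configuration.

```python
# Hypothetical data model for software-based overlay provisioning.
# A service is data: endpoints with unique overlay addresses, each bound
# to whatever connection network it happens to sit on.
from dataclasses import dataclass, field

@dataclass
class Endpoint:
    overlay_addr: str       # unique service-layer address (requirement one)
    underlay_network: str   # "ip", "ethernet", "mpls"... (requirement two)

@dataclass
class NaaSService:
    name: str
    endpoints: list[Endpoint] = field(default_factory=list)

    def add_site(self, overlay_addr: str, underlay_network: str) -> None:
        # A self-service "add a site" is a pure software transaction;
        # no underlay device is reconfigured (requirements three and four).
        self.endpoints.append(Endpoint(overlay_addr, underlay_network))

svc = NaaSService("acme-vpn")
svc.add_site("acme:hq", "ethernet")
svc.add_site("acme:branch-12", "ip")
```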
There is no technical reason why all the business and consumer network services we have today, including the Internet, couldn't be built as an overlay/underlay. We already see managed services offered in this form, and this model is what players ranging from Orange to AT&T and Verizon either offer now or plan to introduce. Most are also looking to augment overlay connection services with hosted value-add features, which is what virtual CPE is all about.
One reason for the interest in vCPE is that you really need some gadget to sit at the "underlay demarcation point" of the physical network and groom off the overlay traffic. Given that this device is the binding element between infrastructure and service-layer technology, it's also the logical point where end-to-end management and SLAs would be applied. And given that, you might as well host some features there and make a buck along the way.
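To illustrate the grooming role, here's a hypothetical sketch of what that demarcation logic might look like. The port number (borrowed from VXLAN for flavor) and the firewall function are assumptions of mine, not a standard: overlay-tagged traffic is steered through locally hosted features, while native underlay traffic passes through untouched.

```python
# Illustrative vCPE demarcation logic: separate overlay service traffic
# from native underlay traffic, running hosted features on the overlay side.
# The classification rule and the feature chain are assumed, not a standard.

OVERLAY_UDP_PORT = 4789  # VXLAN-style encapsulation port, for illustration

def firewall(payload: bytes) -> bytes | None:
    # Hypothetical hosted value-add feature: drop blocklisted payloads.
    return None if payload.startswith(b"BLOCK") else payload

def demarc(packet: dict) -> str:
    if packet["udp_dst"] == OVERLAY_UDP_PORT:
        return "dropped" if firewall(packet["payload"]) is None else "to-overlay"
    return "passthrough"  # native underlay traffic is untouched

print(demarc({"udp_dst": 4789, "payload": b"hello"}))  # to-overlay
print(demarc({"udp_dst": 53,   "payload": b"dns q"}))  # passthrough
```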
In fact, the real question with respect to this vCPE element is less whether it should offer hosting than whether it should ever give that mission up. While it's true that cloud-hosting edge-provided services could improve capital efficiency versus a premises device, we can't eliminate the premises device entirely, so only the incremental mission of NFV-like vCPE hosting is at risk. Cloud economies probably can't be created with only business vCPE services to drive the opportunity.
NaaS could be proven out readily using overlay technology and supplemented effectively using premises hosting of virtual functions. From that starting point we could then build out in both an SDN and an NFV direction, at a pace determined by the benefits of adoption. Because overlay NaaS could be extended to a wide service area immediately, with costs that scale with customer adoption (the CPE is really the only incremental element in many configurations), we could obtain the benefits we need without huge up-front investments.
Overlay NaaS is a perfect complement to operations-driven software automation. You have to change the front-end service processes to achieve efficiency or agility, and once you do that you could drive an overlay NaaS service set fairly easily. If I were an OSS/BSS player I’d be excited enough about the possibilities to promote the idea—it could give me a seat at the early service modernization table.
That doesn't mean that SDN and NFV would be killed off either. A layer of overlay NaaS, as noted above, insulates the network from disruptions created by technology evolution, but it could also reduce the feature demand on lower layers by elevating connectivity features and related higher-layer features (firewall, etc.) to the overlay. This could promote the adoption of virtual wires, accelerate the use of SDN to groom agile optical paths, and shift infrastructure investment decisively downward. Ciena, an optical vendor, is a participant in the Orange NaaS approach, and its Blue Planet is one of the few fully operationally integrated orchestration solutions. Coincidence? I don't think so.