What’s Involved in Creating “Service Agility?”

“Service agility” and “service velocity” are terms we see more and more every day.  NFV, SDN, and the cloud all rely to a degree—even an increasing degree—on this concept as a primary benefit driver.  There is certainly reason to believe that in the most general case, service agility is very powerful.  The question is whether that most general case is what people are actually talking about, and are capable of supporting.  The sad truth is that our hype-driven industry tends to evolve its drivers toward whatever is most difficult to define and disprove.  Is the prospective execution of our agility/velocity goal really that nebulous?

Services begin their life in the marketing/portfolio-management arm of the network operator, where the responsibility is to identify things that could be sold profitably and in enough volume to justify the cost.  Ideally, the initial review of a new service opportunity includes a description of the features needed, the acceptable price points, how the service will get to market (prospecting and sales strategies), and the competition.

From this opportunity-side view, a service has to progress through a series of validations.  The means of creating the service has to be explored and all options costed out, and the resulting choice(s) run through a technology trial to validate that the stuff will at least work.  A field trial would then normally be run, aimed at testing the value proposition to the buyer and the cost (capex and opex) to the seller.  From here, the service could be added to the operator’s portfolio and deployed.

Today, this overall process can take several years.  If the opportunity is real, it’s easy to see how others (OTT competitors, for example) could jump in faster and gain a compelling market position before a network operator even gets its stuff into trial.  That could mean the difference between earning billions in revenue and spending a pile of cash to gain little or no market share.  It’s no wonder that “agility” is a big thing for operators.

But can technologies like SDN, NFV, and the cloud help here?  The service cycle can be divided into four areas—opportunity and service conceptualization, technology validation and costing, field operations and benefit validation, and deployment.  How do these four areas respond to technology enhancements?  That’s the almost-trillion-dollar question.

There are certainly applications that could be used to analyze market opportunities, but those applications exist now.  If new technology is to help us in this agility area, it has to be in the conceptualization of a service—a model of how the opportunity would be addressed.  Today, operators have a tendency to dive too deep too fast in conceptualizing.  Their early opportunity analysis is framed in many cases by a specific and detailed execution concept.  That’s in part because vendors influence service planners to think along vendor-favorable lines, but also in part because you have to develop some vision of how the thing is going to work, and operators have few options beyond listening to vendor approaches.

If we think of orchestration correctly, we divide it into “functional” composition of services from features, and “structural” deployment of features on infrastructure.  A service architect conditioned to this sort of thinking could at the minimum consider the new opportunity in terms of a functional composition.  At best, they might have functional components in their inventory that could serve in the new mission.  Thus, NFV’s model of orchestration could potentially help with service conceptualization.
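This functional/structural split can be sketched as a simple two-layer data model.  This is purely an illustrative sketch—the class and field names here are my own assumptions, not drawn from any NFV specification:

```python
from dataclasses import dataclass

# Hypothetical two-layer orchestration model; all names are illustrative.

@dataclass
class StructuralRecipe:
    """How a feature gets realized on infrastructure (VNFs, hosts, links)."""
    name: str
    resources: list[str]

@dataclass
class Feature:
    """A functional atom: defined by what it does, not how it is deployed."""
    name: str
    recipe: StructuralRecipe  # swappable without changing the function

@dataclass
class Service:
    """A functional composition: the service is an ordered set of features."""
    name: str
    features: list[Feature]

# A service architect composes functionally first; the structural bindings
# underneath can change later without touching the service definition.
firewall = Feature("firewall", StructuralRecipe("fw-vm", ["vm-small"]))
nat = Feature("nat", StructuralRecipe("nat-container", ["k8s-pod"]))
vpn_service = Service("managed-vpn", [firewall, nat])

print([f.name for f in vpn_service.features])  # ['firewall', 'nat']
```

The point of the separation is visible in the types: a `Service` never references infrastructure directly, so an architect with an inventory of `Feature` objects can compose a new offering without knowing how any of them deploy.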

Where orchestration could clearly help, again presuming we had functional/structural boundaries, would be in the formulation of a strategy and the initiation of a technology trial.  The key point here is that some sort of “drag-and-drop” functional orchestration to test service structures could be easy if you had 1) functional orchestration, 2) drag-and-drop or an easy GUI, and 3) actual functional atoms to work with.  A big inventory of functional elements could be absolutely critical for operators, in short, because it could make it almost child’s play to build new services.

Structural orchestration could also help here.  If a service functional atom can be realized in a variety of ways as long as the functional requirements are met (if the abstraction is valid, in other words), then a lab or technology trial deployment could tell operators a lot more because it could be a true functional test even if the configuration on which it deployed didn’t match a live/field operation.  Many DevOps processes are designed to be pointed at a deployment environment—test or field.  It would be easy to do that with proper orchestration.

The transition to field trials, and to deployment, would also be facilitated by orchestration.  A functional atom can be tested against one configuration and deployed on another simply by changing the structural recipes, which makes testing easier and better accommodates variations in deployment.  In fact, it would be possible for an operator to ask vendors to build structural models of the operator’s functional atoms and test them in vendor labs, or to use third parties.  You do have to ensure what I’ll call “structure-to-function” conformance, but that’s a fairly conventional linear test of how exposed features are realized.
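The recipe-swap and the conformance check described above can be sketched together.  Again, this is a hedged illustration under assumed names—the environments, recipes, and feature sets are invented for the example, not taken from any real catalog:

```python
# Illustrative sketch: the same functional atom bound to different
# structural recipes for a lab trial versus field deployment.

RECIPES = {
    "lab":   {"firewall": "single-vm-sandbox"},
    "field": {"firewall": "ha-cluster-pair"},
}

EXPOSED_FEATURES = {
    # structure -> the features it actually realizes; comparing this against
    # the atom's promises is the "structure-to-function" conformance test.
    "single-vm-sandbox": {"packet-filter", "logging"},
    "ha-cluster-pair":   {"packet-filter", "logging", "failover"},
}

def deploy(atom: str, env: str, required: set[str]) -> str:
    """Pick the structural recipe for this environment and verify that it
    exposes at least the features the functional atom promises."""
    recipe = RECIPES[env][atom]
    missing = required - EXPOSED_FEATURES[recipe]
    if missing:
        raise ValueError(f"{recipe} fails conformance: missing {missing}")
    return recipe

# The same functional test can be pointed at either environment:
assert deploy("firewall", "lab", {"packet-filter"}) == "single-vm-sandbox"
assert deploy("firewall", "field", {"packet-filter"}) == "ha-cluster-pair"
```

Because only the recipe lookup changes between environments, a trial run in the lab exercises exactly the same functional contract that the field deployment will, which is the property that makes the trial results trustworthy.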

We now arrive at the boundary between what I’d call “service agility” and another thing with all too many names.  When a service is ordered, it takes a finite time to deploy it.  That time is probably best called “time to revenue” or “provisioning delay”, but some are smearing the agility/velocity label over this process.  The problem is that reducing time-to-revenue has an impact only on services newly ordered or changed.  In addition, our surveys of buyers consistently showed that most enterprise buyers actually have more advance notice of a service need than even current operator provisioning delays would require.  How useful is it to be able to turn on a service on 24 hours’ notice when the buyer had months to plan the real estate, staffing, utilities, and so forth?

The big lesson to be learned, in my view, is that “service agility” is a lot more than “network agility”.  Most of the processes related to bringing new services to market can’t be impacted much by changes in the network, particularly by changes to only part of the network, as “classic NFV” would propose.  We are proposing to take a big step toward agile service deployment and management, but we have to be sure that it’s big enough.

We also have to be sure that measures designed to let network operators “compete with OTTs” don’t get out of hand.  OTTs have one or both of two characteristics: their revenues come from ads rather than from service payments, and their delivery mechanism is a zero-marginal-cost pipe provided by somebody else.  The global ad spend wouldn’t begin to cover network operator revenues even if all of it went to online advertising, so operators actually have an advantage over the OTTs—they sell stuff consumers pay for, bypassing the issues of indirect revenue.  Their disadvantage is that they have to sustain that delivery pipe, which means making it at least marginally profitable no matter what goes on above it.

That’s what complicates the issue of service agility for operators, and for SDN or NFV or even the cloud.  You have to tie services to networks in an explicit way, to make the network valuable at the same time that you shift the focus of what is being purchased by the buyer to things at a higher level.  Right now, we’re just dabbling with the issues and we have to do better.