Service Lifecycle Management 101: The Boundary Layer

This is the third blog in my series on service management, and like the past two (on the resource layer and the service layer) this one takes a practical-example focus to open up more areas of discussion.  I don’t recommend reading this piece without having read the other two blogs first.

The service layer of a network model is responsible for framing the commercial presentation of a service and for integrating OSS/BSS processes.  The resource layer manages actual deployment and FCAPS/network operations.  The goal should be to keep these layers independent, so that technology choices don’t shape service definitions, and retail considerations touch technical deployment only insofar as they change technical requirements.  The “boundary layer” that makes this happen is today’s topic, and this layer is really an elastic speed-match function that will expand and contract, absorb functions and emit them, depending on just how an operator frames services and deploys them.

In a perfect world (meaning one aimed at presenting us with ease and pleasure!) we’d see a boundary layer exactly one object thick.  A resource layer would emit a set of “Behaviors” or “Resource-Facing Services” (RFSs) that would then be assigned commercial terms by a corresponding service-layer element.  We’d end up with a kind of boundary dumbbell, with one lump in the service layer and one in the resource layer, and a 1:1 bar linking them.

To understand why the boundary layer is probably more complicated than that, let’s look at an example.  Suppose we have a “Behavior” of “L3VPN”, which offers IP virtual-network capability.  We might have four different models for creating it: IP/MPLS, IP/2547, hosted virtual routers, and SD-WAN.  These technologies might be available in some or all of the service area, and might be able to deliver on any SLA offered for L3VPNs or only a subset.  That sets up our example.
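To make the example concrete, here’s a minimal sketch in Python of how a “Behavior” and its candidate implementations might be represented at the boundary.  The names, SLA tiers, locations, and cost figures are entirely hypothetical; the point is only that each implementation declares what it can commit to and where it can reach.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Implementation:
    """One way of realizing a Behavior (hypothetical resource-layer model)."""
    name: str                # e.g. "IP/MPLS" or "SD-WAN"
    sla_classes: frozenset   # SLA tiers this implementation can commit to
    coverage: frozenset      # parts of the service area it can reach
    unit_cost: float         # relative cost metric used for selection

@dataclass
class Behavior:
    """A Resource-Facing Service ("Behavior") the resource layer exposes upward."""
    name: str
    implementations: list

L3VPN = Behavior("L3VPN", [
    Implementation("IP/MPLS",        frozenset({"gold", "silver"}),   frozenset({"metro", "regional", "rural"}), 1.0),
    Implementation("IP/2547",        frozenset({"gold", "silver"}),   frozenset({"metro", "regional"}),          0.9),
    Implementation("virtual-router", frozenset({"silver"}),           frozenset({"metro"}),                      0.6),
    Implementation("SD-WAN",         frozenset({"silver", "bronze"}), frozenset({"metro", "regional", "rural"}), 0.4),
])
```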

Suppose an order for L3VPN comes in, asking for an SLA and served locations that fit all the models, or even just two of them.  We could presume that the resource layer would decide which to use, based on cost metrics.  Now suppose instead that no single option could do everything.  We’d have to select multiple implementations, and to support that we’d have to ensure that each deployed L3VPN segment had a “gateway” port that let it attach to the other implementations.  We’d still pick each implementation based on cost, as before.  So far, so good.
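Continuing the sketch, the resource-layer selection logic might look something like this: filter the candidates on the SLA, prefer the cheapest single implementation that covers every ordered site, and fall back to a gateway-stitched combination only when no single one can.  This is illustrative, not a prescription.

```python
def decompose(behavior, sla, sites):
    """Pick the cheapest implementation(s) able to meet the SLA at all ordered sites."""
    candidates = [i for i in behavior.implementations if sla in i.sla_classes]

    # Preferred case: a single implementation covers every ordered site.
    full = [i for i in candidates if set(sites) <= i.coverage]
    if full:
        return [min(full, key=lambda i: i.unit_cost)]

    # Fallback: combine implementations, cheapest first; each deployed segment
    # would then need a "gateway" port so the segments can attach to one another.
    chosen, uncovered = [], set(sites)
    for impl in sorted(candidates, key=lambda i: i.unit_cost):
        if uncovered & impl.coverage:
            chosen.append(impl)
            uncovered -= impl.coverage
    if uncovered:
        raise ValueError(f"no implementation mix covers {uncovered}")
    return chosen

# A "silver" order spanning metro and rural sites resolves to SD-WAN alone here.
print([i.name for i in decompose(L3VPN, "silver", ["metro", "rural"])])
```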

Now suppose some clever marketing person said that because SD-WAN was hot, they wanted to have a specific SD-WAN offering.  We now have two choices.  First, we could define a specific service of SD-WAN VPN, which would decompose only into SD-WAN implementations.  Second, we could introduce a “TechType” parameter into the L3VPN model, which could then guide the decomposition below.  It’s this situation that opens our boundary discussion.
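The second choice, a “TechType” parameter, might then be nothing more than a constraint on which implementations the decomposition is allowed to consider.  A sketch, continuing the hypothetical model above:

```python
def decompose_with_techtype(behavior, sla, sites, tech_type=None):
    """Restrict decomposition to one technology when a TechType is pinned."""
    if tech_type is None:
        return decompose(behavior, sla, sites)
    restricted = Behavior(behavior.name,
                          [i for i in behavior.implementations if i.name == tech_type])
    return decompose(restricted, sla, sites)

# The "SD-WAN VPN" offer is just the L3VPN model with TechType pinned to SD-WAN.
print([i.name for i in decompose_with_techtype(L3VPN, "silver", ["metro"], tech_type="SD-WAN")])
```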

Defining two services that are identical except in how they can be decomposed is an invitation to management/operations turmoil.  So passing a parameter might be a better solution, right?  But should the decomposition driven by that parameter lie in the service layer or the resource layer?  Whether we do SD-WAN or an IP/MPLS VPN is an implementation matter, if we presume the SLA and location requirements can be satisfied either way.  But wasn’t it a commercial decision to allow the technical choice in the first place?  That might suggest the choice needs to be made in the service layer.

A boundary layer strategy could handle this by exposing a set of parameters that let resource-layer decomposition reflect technology selection and other factors driven by retail service commitments.  You could consider the boundary layer, and implement it, as part of either the resource layer or the service layer, and where services are numerous and complicated you could make it a separate layer administered by agreement between the architects of both areas.
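As a sketch of that speed-match role, the boundary function would translate commercial terms into the constraints the resource layer actually decomposes on, and nothing more.  The retail-tier-to-SLA mapping below is invented purely for illustration.

```python
# Hypothetical mapping of retail tiers to the SLA classes the resource layer understands.
RETAIL_TO_SLA = {"premium": "gold", "business": "silver", "basic": "bronze"}

def boundary_translate(service_order):
    """Boundary-layer speed-match: turn retail terms into resource-layer constraints.

    The service layer hands in commercial terms; the resource layer receives only
    an SLA class, the sites, and (optionally) a technology restriction.  Neither
    layer needs to know how the other side derived its half."""
    return {
        "sla": RETAIL_TO_SLA[service_order["retail_tier"]],
        "sites": service_order["sites"],
        "tech_type": service_order.get("tech_type"),  # set only for offers like "SD-WAN VPN"
    }

order = {"retail_tier": "business", "sites": ["metro", "regional"]}
c = boundary_translate(order)
print([i.name for i in decompose_with_techtype(L3VPN, c["sla"], c["sites"], c["tech_type"])])
```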

You have to be careful with boundary functions, and that’s a good reason to keep them independent of both layers.  Any parameters that don’t describe a basic SLA and yet are exchanged between the service and resource layers could complicate both layers at best, and at worst create brittle or condition-specific models and implementations.  A good example: if you later decided to withdraw a specific implementation of our L3VPN model, any service definition whose parameter-based decomposition relied on that implementation would now be broken.  That could be fixed for new customers, but what happens when a service model for an active service instance has to change?
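A boundary layer is also a natural place to catch that kind of breakage before it happens.  The small check below, built on the hypothetical catalog entries from the earlier sketches, simply lists the offers whose pinned technology choice would be orphaned by a withdrawal.

```python
def withdrawal_impact(behavior, catalog, withdrawn):
    """List catalog entries whose pinned TechType breaks if an implementation is retired."""
    remaining = {i.name for i in behavior.implementations} - {withdrawn}
    return [offer for offer, terms in catalog.items()
            if terms.get("tech_type") and terms["tech_type"] not in remaining]

catalog = {
    "L3VPN":      {"tech_type": None},      # decomposes freely; survives the withdrawal
    "SD-WAN VPN": {"tech_type": "SD-WAN"},  # pinned; now has no valid decomposition
}
print(withdrawal_impact(L3VPN, catalog, "SD-WAN"))   # -> ['SD-WAN VPN']
```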

The boundary layer is probably the logical place to establish service and infrastructure policy, to integrate management practices between services and resources, and to create a “team mentality” around transformation and new service models.  Not surprisingly, it’s not a place on which operators (or vendors) have focused.  Most seem to think that having a boundary layer at all is an admission of failure, or perhaps an accommodation to the “antiquated” divided management model that now prevails between OSS/BSS and NMS.  Not true.

High inertia in the service-creation process is arguably a direct result of the view that somehow a service is a resource, that all services derive directly from inherent protocol-centric resource behaviors.  If that view could be battered down we could end up with more agile services even if no infrastructure changes were made, and the explicit focus on a boundary function is IMHO mandatory in doing the battering.

Infrastructure lives in a world of long capital cycles, from five to fifteen years.  Services live in a world where six months can be interminable.  With an explicit notion of a boundary layer, we could set both service and resource planners on the task of creating abstract components from which future services would be built.  Is that a formidable task?  Surely, but surely less formidable than the task of building infrastructure that anticipated service needs even five years out.  Less formidable than defining a service strategy that had to look ahead for a full capital cycle to prep installed resources to support it.

Services at the network level reduce to those pipes and features I talked about in the first blog of this series.  As simplistic as that may be, it is still a step whose value is recognized in software architecture: you define high-level classes from which you then build lower-level, more specific ones.  We have many possible kinds of pipes, meaning connection tunnels, and clearly many types of features.  We could divide each of these base classes into subclasses: real or virtual for “pipes”, for example, or hosted experiences versus in-line event/control tasks for “features”.  If the boundary layer of the future were equipped with a set of refined abstractions that could easily map up to services or down to implementations, we would only need to find examples of both to create a transformed model, at any point we desired.
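In code terms, that refinement might start from something as simple as the hierarchy below; the subclass names are mine, purely illustrative of the split just described.

```python
class Pipe:
    """Base class: any connection tunnel between two or more endpoints."""

class RealPipe(Pipe):
    """A provisioned tunnel over physical or protocol resources (e.g. an MPLS LSP)."""

class VirtualPipe(Pipe):
    """An overlay tunnel built on top of other pipes (e.g. an SD-WAN path)."""

class Feature:
    """Base class: any functional element a service adds beyond connectivity."""

class HostedExperience(Feature):
    """A hosted experience delivered to the user from a resource pool."""

class InlineEventTask(Feature):
    """An in-line event/control task sitting in the data or signaling path."""
```

A transformed service model would then be a composition of these abstractions, with each one mapped downward to whatever implementation the infrastructure of the day can supply.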

The boundary layer, in fact, might be the best possible place to apply standards.  There are many variations in operator practices for service management and resource management today, and many different vendor and technology choices in play even without considering SDN, NFV, and the cloud.  Standards, to be effective, would have to accommodate these without constraining “transformation” and at the same time stay applicable to the critical “what we have now” dimension.  That’s what the boundary layer could accomplish.

No software development task aimed at abstraction of features and resources would ever be undertaken without some notion of a converging structure of “classes” that were built on and refined to create the critical models and features.  Since both services and infrastructure converge on the boundary layer, and since the goal of that layer is accommodation and evolution in parallel, it’s probably the place we should start when building our models of future services.  Since we have nothing there now, we would be free to pick an approach that works in all directions, too.

In the next blog in this series, I’ll look at how starting service modeling with the boundary point might work, and how it would then extend concepts and practices to unify transformation.