Service Lifecycle Management 101: Principles of Boundary-Layer Modeling

Service modeling has to start somewhere, and both the “normal” bottom-up approach and the software-centric top-down approach have their pluses and minuses.  Starting at the bottom invites an implementation-specific approach that misses a lot of issues and benefits.  Starting at the top ignores the reality that operators have an enormous sunk cost in network infrastructure, and a revenue base that depends on “legacy” services.  So why not start in the middle, which, as we saw in the last blog, means the boundary layer?

A boundary-layer-driven approach has the advantage of focusing on the point where the capabilities of infrastructure, notably the installed base of equipment, meet the marketing goals as defined by the service-level modeling.  The trick for service planners, or for vendors or operators trying to define an approach that can reap the essential justifying benefits, is a clear methodology.

The most important step in boundary-layer planning for service lifecycle management and modeling is modeling legacy services based on OSI principles.  Yes, good old OSI.  OSI defines protocol layers, but it also defines management layers, and it’s the latter definition that’s most helpful here.  Services, says the OSI management model, are coerced from the cooperative behavior of systems of devices.  Those systems, which we call “networks”, are of course made up of the devices themselves, the “physical network functions” that form the repository of features that NFV is targeting, for example.
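To make that concrete, here’s a minimal Python sketch of the hierarchy as I read it.  The class names, and the idea of expressing a service’s needs as a set of required behaviors, are my own illustration, not anything taken from the OSI documents.

```python
from dataclasses import dataclass, field

@dataclass
class Device:                 # a "physical network function"
    name: str
    features: list[str] = field(default_factory=list)

@dataclass
class NetworkSystem:          # a cooperative system of devices, i.e. a "network"
    devices: list[Device]

    def behaviors(self) -> set[str]:
        # the behaviors a service can draw on are the union of the device features
        return {f for d in self.devices for f in d.features}

@dataclass
class Service:                # coerced from the network's cooperative behavior
    name: str
    required_behaviors: set[str]

    def supportable_on(self, net: NetworkSystem) -> bool:
        return self.required_behaviors <= net.behaviors()
```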

Good boundary-layer planning starts with the network layer.  A service or resource architect would want to first define the network behaviors that are created and exploited by current infrastructure.  Most network services are really two-piece processes.  You have the “network” as an extended set of features that form the communications/connection framework that’s being sold, and you have “access”, which is a set of things that get sites connected to that network framework.  That’s a good way to start boundary planning—you catalog all the network frameworks—Internet, VPN, VLAN, whatever—and you catalog all the access pieces.
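A first-pass catalog doesn’t need to be elaborate.  Something like the sketch below is enough to capture the two-piece structure; the specific framework and access entries are just my examples.

```python
# A first-pass boundary catalog: the "network" frameworks being sold, and the
# "access" pieces that connect sites to them.  The entries are examples only.
NETWORK_FRAMEWORKS = ["Internet", "IP VPN", "VLAN"]
ACCESS_PIECES = ["Ethernet access", "DSL", "Cable broadband", "4G/5G wireless"]

def describe_service(framework: str, accesses: list[str]) -> dict:
    """A sellable service is a network framework plus the access pieces that reach it."""
    return {"network": framework, "access": accesses}

# e.g. describe_service("IP VPN", ["Ethernet access", "4G/5G wireless"])
```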

You can visualize a network as a connection framework to which perhaps-optional hosted features are added.  For example, an IP VPN has “router” features that create connectivity.  It also has DNS and DHCP features to handle name-to-address resolution and IP address assignment, and it might have additional elements like security, tunnels, firewalls, etc.  The goal of our network behavior definition is to catalog the primary network services, like IP VPN, and then list the function/feature components available for each.
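As a sketch of what one catalog entry might look like (the field names and the “router connectivity” label are mine, purely illustrative):

```python
from dataclasses import dataclass, field

@dataclass
class NetworkBehavior:
    """A primary network service: a connection framework plus optional hosted features."""
    name: str
    connection_framework: str
    hosted_features: list[str] = field(default_factory=list)

ip_vpn = NetworkBehavior(
    name="IP VPN",
    connection_framework="router connectivity",
    hosted_features=["DNS", "DHCP", "firewall", "tunnels", "security"],
)
```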

From the catalog of services and features, we can build the basic models at the boundary layer.  We might have “L3Connect” and “L2Connect”, for example, to express an IP network or an Ethernet network.  We could also have an “L1Connect” to represent tunnels.  These lowest-level structures are the building blocks for the important boundary models.
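A minimal rendering of those primitives, assuming all we need at this point is a name and the layer at which the connectivity is presented:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class BoundaryPrimitive:
    """A lowest-level boundary model: a named connection capability at one layer."""
    name: str
    layer: int          # the OSI-style layer the connectivity is presented at

L1CONNECT = BoundaryPrimitive("L1Connect", 1)   # tunnels
L2CONNECT = BoundaryPrimitive("L2Connect", 2)   # Ethernet network
L3CONNECT = BoundaryPrimitive("L3Connect", 3)   # IP network
```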

Let’s go back to IP VPN.  We might say that L3Connect is an IP VPN.  We might further classify IP VPN into “IPSubnet”, which is really an L2Connect plus a default gateway router.  We might say that an L1Connect plus an SD-WAN access set is also an IP VPN.  You get the picture, I think.  The goal is to define elements that can be nuclear, or can be made up of a combination of other elements.  Every element we define in the boundary layer relates what it looks like as a service to how we realize it through a device or device system.
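One way to capture “nuclear or a combination of other elements” is a simple composite structure.  In the sketch below, the element names follow the examples in the text; the structure itself is just one plausible way to express the idea.

```python
from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class BoundaryElement:
    """A boundary-layer element: either nuclear, or a combination of other elements."""
    name: str
    components: list[BoundaryElement] = field(default_factory=list)

    @property
    def is_nuclear(self) -> bool:
        return not self.components

# Nuclear elements
l1 = BoundaryElement("L1Connect")
l2 = BoundaryElement("L2Connect")
l3 = BoundaryElement("L3Connect")
gateway = BoundaryElement("DefaultGatewayRouter")
sdwan_access = BoundaryElement("SDWANAccessSet")

# Composites: two different ways of presenting an "IP VPN"
ip_subnet = BoundaryElement("IPSubnet", [l2, gateway])
ip_vpn_via_sdwan = BoundaryElement("IPVPN", [l1, sdwan_access])
```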

Don’t get caught up in thinking about retail services at this point.  What we want to have is a set of capabilities, and a mechanism to combine those capabilities in ways that we know are reasonable and practical.  We don’t worry about the underlying technology needed to build our L2Connect or whatever, only that the function of a Level 2 connection resource exists and can be created from infrastructure.

The boundary-layer functions we create obviously do have to be sold, and do have to be realized somehow, but those responsibilities lie in the service and resource layers, where modeling and lifecycle management define how they are met.  We decompose a boundary model element into resource commitments.  We decompose a retail service into boundary model elements.  That central role of the boundary element is why it’s useful to start your modeling there.
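Here’s a rough sketch of that pivot role.  The retail service name, the “AccessSet” element, and the resource commitments are invented for illustration; the point is only the two decomposition steps meeting at the boundary element.

```python
from dataclasses import dataclass

@dataclass
class ResourceCommitment:
    """A resource-layer action: what infrastructure actually gets provisioned."""
    description: str

# Service layer: a retail service decomposes into boundary elements.
RETAIL_TO_BOUNDARY = {
    "Business IP VPN": ["L3Connect", "AccessSet"],
}

# Resource layer: each boundary element decomposes into resource commitments.
BOUNDARY_TO_RESOURCES = {
    "L3Connect": [ResourceCommitment("configure IP VPN connectivity on edge devices")],
    "AccessSet": [ResourceCommitment("provision access connections to customer sites")],
}

def decompose(retail_service: str) -> list[ResourceCommitment]:
    """Walk service -> boundary -> resources; the boundary element is the pivot."""
    commitments = []
    for element in RETAIL_TO_BOUNDARY[retail_service]:
        commitments.extend(BOUNDARY_TO_RESOURCES[element])
    return commitments
```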

I think it’s pretty self-evident how you can build boundary models for legacy services.  It’s harder to create them when there is no existing service to start with, where the goal of the modeling is to expose new capabilities.  Fortunately, we can go back to another structural comment I made in an earlier blog.  All network services can be built as a connection model combined with in-line elements and hosted elements.  An in-line element is something that information flows through (like a firewall), and a hosted element is something that performs a service the way a network endpoint might (a DNS or DHCP server, for example).  A connection model describes the way the ports of the service relate to traffic.  Three connection models are widely recognized: “LINE” or “P2P”, which is point-to-point; “LAN” or “MP”, which is multipoint; and “TREE”, which is broadcast/multicast.  In theory you could build others.
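A minimal sketch of those building blocks might look like this; the enum values and field names are mine, not drawn from any standard model.

```python
from dataclasses import dataclass, field
from enum import Enum

class ConnectionModel(Enum):
    LINE = "P2P"    # point-to-point
    LAN = "MP"      # multipoint
    TREE = "MCAST"  # broadcast/multicast

@dataclass
class InLineElement:            # traffic flows through it (e.g. a firewall)
    name: str

@dataclass
class HostedElement:            # behaves like a network endpoint (e.g. DNS, DHCP)
    name: str

@dataclass
class NetworkService:
    """Any network service: a connection model plus in-line and hosted elements."""
    connection: ConnectionModel
    in_line: list[InLineElement] = field(default_factory=list)
    hosted: list[HostedElement] = field(default_factory=list)
```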

If we presume that new services would be defined using these three general models, we could say that a “CloudApplication” is a set of hosted elements representing the application components, plus a connection model representing the network service framework through which those hosted elements are accessible.  Users reach that connection model via another connection model, the LINE or access model, and perhaps some in-line elements representing things like security.
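Expressed as a sketch (the component names and defaults below are illustrative only), that CloudApplication might look like this:

```python
from dataclasses import dataclass, field

@dataclass
class CloudApplication:
    """Sketch of a 'CloudApplication' boundary model; names are illustrative."""
    hosted_components: list[str]                  # the application components
    service_connection: str = "MP"                # framework the components sit on
    access_connection: str = "LINE"               # how each user site reaches it
    in_line_elements: list[str] = field(default_factory=list)   # e.g. security

crm_app = CloudApplication(
    hosted_components=["web-front-end", "app-server", "database"],
    in_line_elements=["firewall"],
)
```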

If new services can be built this way, it should be obvious that there are benefits in using these lower-level model concepts to decompose the basic features like L2Connect.  That’s an MP connection model built at L2, presumably with Ethernet.  If this approach of decomposing to the most primitive features is followed uniformly, then the bottom of the boundary layer is purely a set of function primitives that can be realized by infrastructure in any way that suits the functions.  L3Connect is an MP connection model realized at Level 3, the IP level.  You then know that you need to define an MP model and make the protocol used a parameter of the model.
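So instead of several separate primitives, we might end up with one parameterized MP model, something like this sketch (the protocol strings are just examples):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class MPModel:
    """A multipoint connection model; the layer/protocol is a parameter, not a new model."""
    name: str
    layer: int
    protocol: str

# The boundary primitives become instances of one MP model rather than separate models.
L2CONNECT = MPModel("L2Connect", layer=2, protocol="Ethernet")
L3CONNECT = MPModel("L3Connect", layer=3, protocol="IP")
```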

Even cloud applications, or cloud computing services, can be defined this way.  We could say that an ApplicationService is a hosted model connected to either an L2Connect or L3Connect service that’s realized as an MP model.  How you host, meaning whether it’s containers or VMs, can be a parameter of the hosting model if it’s necessary to know at the service layer which option is being used.  You could also have a purely “functional” hosting approach that decomposes to VMs, containers, or even bare metal.
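A sketch of that parameterization, with a “functional” default that leaves the hosting choice to decomposition; the names here are mine, not part of any standard.

```python
from dataclasses import dataclass
from enum import Enum

class HostingOption(Enum):
    FUNCTIONAL = "functional"   # leave the hosting choice to decomposition
    CONTAINER = "container"
    VM = "vm"
    BARE_METAL = "bare-metal"

@dataclass
class ApplicationService:
    """A hosted model attached to an L2Connect or L3Connect multipoint service."""
    name: str
    connect_service: str                                 # "L2Connect" or "L3Connect"
    hosting: HostingOption = HostingOption.FUNCTIONAL    # surfaced only if the service layer needs it

payroll = ApplicationService("Payroll", connect_service="L3Connect")
```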

There is no single way to use the boundary layer, but for any given combination of infrastructure and service goals, there’s probably a best way.  This means it’s worth taking the time to find your own best approach before you get too far along.

In our next piece, we’ll look at the modeling principles for the service, boundary, and resource layers to lay out what’s necessary in each area, and what might be the best way of getting it.