One of the challenges of transforming the way we do networking is the need for abstraction and the difficulty we experience in dealing with purely abstract things. What I’ll be doing over the next week or ten days (depending on what else comes up that warrants blogging) is looking at the process of building and running the network of the future, as a means of exploring how technologies should be used to get us there. I’m hoping that the process example will make the complexities of this evolution easier to understand.
We know by now that the best way to do transformation is to automate the service lifecycle from top to bottom, and the best way to do that is to model the services and decompose the models to drive software processes. There’s probably not much disagreement on this, but there’s a lot of mystery around how it can be done. The goal here is to dispel some of that mystery.
The approach I’m going to advocate here is one that separates the commercial (service-domain) and technical (resource-domain) activity, and that is based on intent modeling and abstraction. I’m a fan of top-down approaches, but in this case I’m going to start at the bottom, because we already have a network, and the first test of any new network methodology is whether it can embrace where we are and carry us from there to where we want to be.
Network “services” at the high level are made up of two things—“pipes” and “features”. A pipe is something that has two ends/ports and provides for the movement of information through it. A feature has some indeterminate number of ends/ports, and the outputs are a complex function of the inputs. Everything from access connections to the Internet can be built using these two things.
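To make the two primitives concrete, here’s a minimal sketch in Python (the class and port names are my own illustrations, not any standard):

```python
from dataclasses import dataclass, field

# A minimal rendering of the two primitives. A Pipe always has exactly
# two ends; a Feature has any number of ports, and its outputs are some
# (possibly complex) function of its inputs.

@dataclass
class Pipe:
    name: str
    ends: tuple = ("A", "Z")        # always exactly two ends/ports

@dataclass
class Feature:
    name: str
    ports: list = field(default_factory=list)  # indeterminate port count

# Everything from access connections to the Internet composes from these:
access = Pipe("access-connection")
vpn = Feature("VPN", ports=["site-1", "site-2", "site-3"])
print(access, vpn, sep="\n")
```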
When network architects sit down to create a model of the network of the future, they’ll be building it from atomic pieces we’d likely recognize, things like “VPN”, “VLAN”, “Application”, and “Connection”. The temptation might be to start with these elements, but a good software architect would say that you have to go back to the most basic form of things to standardize and optimize what you’re doing. So “Connections” are “pipes”, and all of the other things listed here are “Features”. Keep this in mind as the discussion develops.
Our network architects should start by defining the things that infrastructure, as it now exists, can deliver. A “network” today is a layer of protocols with specific properties, meaning a combination of pipes and features that together deliver specific capabilities. I’ve called these capabilities Behaviors in my ExperiaSphere work, and they are roughly analogous (though not identical) to the TMF’s notion of Resource-Facing Services (RFS). All of the technical pieces of current retail or wholesale services are Behaviors/RFSs.
An RFS should be functional, not procedural, meaning that it should describe what happens and not how it’s done. If we have an RFS called “VPN”, in our convention that means a Level 3/IP private-network feature with an unspecified number of access ports. It doesn’t mean MPLS or RFC 2547 or SD-WAN; those are all means of implementing the VPN RFS. The same is true for our “Firewall” feature, our “IMS” feature, and so on.
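As a sketch of that functional/procedural split, imagine the VPN RFS as a thin wrapper that names what’s delivered while staying indifferent to how it’s done (the class names here are invented for illustration, not a real API):

```python
# Sketch: a functional RFS named "VPN" with interchangeable implementations
# behind it. Class names are illustrative assumptions.

class VpnRfs:
    """A Level 3/IP private-network feature; implementation unspecified."""
    def __init__(self, implementation):
        self._impl = implementation   # could be MPLS, RFC 2547, SD-WAN, ...

    def deploy(self, ports):
        # The RFS describes *what* happens; the implementation owns *how*.
        return self._impl.deploy(ports)

class MplsVpn:
    def deploy(self, ports):
        return f"MPLS LSPs provisioned for {ports}"

class SdWanVpn:
    def deploy(self, ports):
        return f"SD-WAN overlay built for {ports}"

# Either implementation satisfies the same functional description:
for impl in (MplsVpn(), SdWanVpn()):
    print(VpnRfs(impl).deploy(["site-1", "site-2"]))
```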
When our network architects are done with their process, they’d have a list of the “feature primitives” that are used to create services based on current technology. This is an important fork in the road, because it now defines how we achieve service automation and how we take advantage of the feature agility of virtualization.
The goal of service automation is to define a set of models and processes that will deliver each of our abstract features no matter what they’re built on. That means that all mechanisms for building a VPN would be structures underneath the general structure “VPN”. We have to define “VPN” in terms of its properties, its ports, and the parameters (including SLA) that it either accepts or asserts, and then we have to ensure that every mechanism for implementing the VPN supports exactly that set of things, no more and no less.
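Here’s a hedged sketch of what “exactly that set of things, no more and no less” could mean in practice, treating the three Ps as a contract that candidate implementations are checked against (the contract contents are assumptions for illustration):

```python
# Hypothetical "three Ps" contract check: an implementation must expose
# exactly the parameters the abstract feature defines.

VPN_CONTRACT = {
    "properties": {"layer": 3, "private": True},
    "ports": {"min": 2, "max": None},             # unspecified upper bound
    "parameters": {"bandwidth", "latency-sla", "availability-sla"},
}

def conforms(impl_parameters: set, contract=VPN_CONTRACT) -> bool:
    # "No more and no less": the parameter sets must match exactly.
    return impl_parameters == contract["parameters"]

print(conforms({"bandwidth", "latency-sla", "availability-sla"}))  # True
print(conforms({"bandwidth", "latency-sla"}))                      # False: missing one
print(conforms({"bandwidth", "latency-sla", "availability-sla",
                "mpls-lsp-count"}))                                # False: extra one
```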
Ideally, you’d derive your properties, ports, and parameters from the functionality you’re describing, but in the real world it’s likely that you’d do a rough cut up front, then refine it as you subdivide “VPN” into the various ways you could build one. Our team of network architects would probably take this approach, and at the end of their work they’d have a complete list of the “three Ps” for each feature. This would become input into the work of network engineers, who would take charge of the feature implementations.
A feature, as a high-level abstraction, can be implemented in any way or combination of ways that conforms to the high-level description (our three Ps). In today’s networks, implementation variations arise from geographic scope, administrative areas, technology or vendor differences, and so forth. For a given feature, like our VPN example, the first step would be to list the implementation options and the way a given service would be assigned a set of them. Decomposition of an abstract VPN feature thus starts by examining how an implementation (or set of implementations) is selected. For each implementation, network engineers would then describe the deployment process they’re responsible for and the mapping between their parameters and those of the high-level feature.
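A sketch of that selection-then-mapping step might look like this, assuming an invented policy table, invented implementation names, and invented parameter names:

```python
# Sketch: decomposition starts by selecting an implementation, then maps
# the high-level feature's parameters onto that implementation's own.
# The policy table and all names are illustrative assumptions.

SELECTION_POLICY = [
    # (predicate on the request, implementation name)
    (lambda req: req["region"] == "eu" and req["vendor"] == "A", "vpn-mpls-eu-vendorA"),
    (lambda req: req["region"] == "us",                          "vpn-rfc2547-us"),
    (lambda req: True,                                           "vpn-sdwan-default"),
]

PARAMETER_MAPS = {
    "vpn-mpls-eu-vendorA": {"bandwidth": "lsp-bandwidth", "latency-sla": "te-latency"},
    "vpn-rfc2547-us":      {"bandwidth": "vrf-bandwidth", "latency-sla": "qos-latency"},
    "vpn-sdwan-default":   {"bandwidth": "tunnel-rate",   "latency-sla": "path-latency"},
}

def decompose(request: dict) -> dict:
    # First matching policy wins; then translate high-level parameters.
    impl = next(name for pred, name in SELECTION_POLICY if pred(request))
    pmap = PARAMETER_MAPS[impl]
    return {"implementation": impl,
            "parameters": {pmap[k]: v for k, v in request["parameters"].items()}}

print(decompose({"region": "eu", "vendor": "A",
                 "parameters": {"bandwidth": "100M", "latency-sla": "20ms"}}))
```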
The implementations referenced here wouldn’t need to be modeled or described in any particular way, as long as their model/description could be decoded by something (a DevOps tool, YANG tooling, etc.) and as long as the model could be referenced and activated by the high-level selection process just described.
I mentioned the policy-based selection of implementation alternatives; this would make up what I’d call a boundary layer, meaning that in theory there’s a set of processes linking retail services to network services that could be divided between the two in any reasonable way. The only caveat is the requirement of isolation: you should never mix commercial and technical policies, because that risks a brittle service definition that exposes implementation details up in the service layer, where they definitely don’t belong. See below!
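To illustrate the isolation caveat, here’s a minimal sketch of a boundary check that refuses to let commercial policy leak into the resource domain (the key names are assumptions, not any standard):

```python
# Sketch: commercial policy decides *what* is sold; technical policy
# decides *how* it's implemented. The boundary only forwards technically
# neutral guidance downward. Key names are illustrative.

COMMERCIAL_KEYS = {"price-tier", "contract-term", "customer-class"}

def boundary_pass_through(service_request: dict) -> dict:
    """Reject requests that carry commercial policy across the boundary."""
    leaked = COMMERCIAL_KEYS & service_request.keys()
    if leaked:
        raise ValueError(f"commercial policy must not cross the boundary: {leaked}")
    return service_request

# The service layer would translate, say, a "gold tier" sale into
# neutral guidance before anything crosses into the resource domain:
technical_request = {"latency-sla": "20ms", "availability-sla": "99.99%"}
print(boundary_pass_through(technical_request))
```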
The other pathway from the high-level feature model inventory leads to service architects who want to build new features, ones that perhaps weren’t available with standard network devices but would suit virtualized features. An example might be a publish-and-subscribe tool for event distribution. The service architect would define the high-level feature (“PandS”) and would also provide the three Ps for it. The result would again be turned over to a network or server/software engineer for translation into a deployment and lifecycle implementation.
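Such a new feature would take the same three-Ps shape as the network-derived ones; a sketch, with illustrative names and parameters:

```python
# Sketch: a virtualization-only feature defined with the same "three Ps"
# shape as network-derived features. All names are illustrative.

PANDS_FEATURE = {
    "name": "PandS",
    "properties": {"function": "publish-and-subscribe event distribution"},
    "ports": {"publishers": "1..n", "subscribers": "0..n"},
    "parameters": {"max-event-rate", "delivery-sla", "retention"},
}

# From here the definition would be handed to a server/software engineer,
# who binds it to a deployment (containers, VNFs, ...) and a lifecycle model.
print(PANDS_FEATURE["name"], sorted(PANDS_FEATURE["parameters"]))
```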
To return to an earlier point, it’s important in this model that the implementation differences reflected when decomposing the high-level objects be driven by technical policies, not business policies. What we want is for the “Behaviors” or RFSs exposed by infrastructure to form the boundary between the service domain and the resource domain. Customer portals and customer service reps should not be making implementation decisions, nor should commercial issues be exposed directly to resources. It’s fine to pass parameters that guide selection, but these should be technically and commercially neutral.
I’ve noted before that I think a modeling approach like TOSCA is the best way to describe services, even ones that have nothing to do with hosted features. However, you can see that since the decomposition of our RFSs or Behaviors into actual resource commitments is hidden inside the RFS/Behavior object, there’s no reason why we couldn’t let the decomposition be handled any way that works, meaning that it could take advantage of vendor features, previous technology commitments, etc.
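To suggest how the decomposition stays hidden, here’s a TOSCA-flavored fragment rendered as a Python structure for consistency with the earlier sketches (the node-type names are my own, not drawn from the TOSCA standard):

```python
# Sketch of the "hidden decomposition" idea in a TOSCA-flavored shape.
# Node-type names are illustrative assumptions, not TOSCA-standard types.

service_template = {
    "topology_template": {
        "node_templates": {
            "corporate-vpn": {
                "type": "rfs.VPN",          # the abstract Behavior/RFS
                "properties": {"layer": 3, "private": True},
                # How "rfs.VPN" decomposes is opaque to this model: it
                # might expand to vendor features, YANG, a DevOps tool...
            },
            "site-access": {
                "type": "rfs.AccessPipe",
                "requirements": [{"connects_to": "corporate-vpn"}],
            },
        }
    }
}

print(list(service_template["topology_template"]["node_templates"]))
```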
If this approach is followed, we could build models of current services that let automated lifecycle processes operate, reducing opex. The same processes would also work with implementations of current features that had been translated into virtual-function form, facilitating the evolution to SDN and NFV where the business case can be made.
Some will say that by harvesting opex benefits on current technology, this approach could actually limit SDN and NFV deployments. Perhaps, but if it’s possible to use current technology more efficiently, we need to identify that mechanism and weigh its costs and benefits against more radical transformations.
In the next blog on this topic, I’ll talk about the service layer of the model, and in the one following it, the way we could expect to see these two layers integrated: those pesky boundary functions. The other topics will develop from these three main points; stay tuned!