Yesterday I talked about service transformation through lifecycle management, starting with how to expose traditional networking services and features through intent models. Today, I’m going to talk about the other side of the divide—service modeling. Later, we’ll talk about the boundary function between the two, and still later we’ll take up other topics like how service architects build services and how portals then activate them.
The resource layer that we’ve already covered is all about deployment and management of actual resources. The service layer is all about framing commercial offerings from contributed “Behaviors” or “Resource-Facing Services” (RFS) and sustaining commercial SLAs. Again, there’s nothing to say that a common model and even common decomposition software wouldn’t be able to deal with it all. I think TOSCA would enable that, in fact. Whether that single-mechanism approach is useful probably depends on whether any credible implementations exist, which operators will have to decide.
The top of the service layer is the actual commercial offerings, which the TMF calls “Products” because they get sold. These retail services would be represented by a model whose properties, ports, and parameters (those “three Ps”) are the attributes that the buyer selects and the operator guarantees. The goal of the whole service modeling process is to decompose this single retail service object (which you will recall is an intent model) into a set of deployments onto real resources. That, in my view at least, could include functional decomposition, geographic/administrative decomposition, or a combination thereof.
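To make the “three Ps” concrete, here is a minimal sketch of what a retail service object might look like as a data structure. All class and field names are illustrative assumptions, not part of TMF or any other standard; the point is only that properties, ports, and parameters travel together, and that decomposition fills in subordinates.

```python
from dataclasses import dataclass, field

@dataclass
class IntentModel:
    """Illustrative intent model carrying the 'three Ps'."""
    name: str
    properties: dict = field(default_factory=dict)   # what the element asserts upward (e.g., its SLA)
    ports: dict = field(default_factory=dict)        # where it connects to other elements
    parameters: dict = field(default_factory=dict)   # what the buyer selects and the operator guarantees
    children: list = field(default_factory=list)     # filled in by decomposition

# The retail "Product" is just the root intent model:
vpn_product = IntentModel(
    name="BusinessVPN",
    properties={"sla": {"availability": 0.999}},
    parameters={"sites": 12, "bandwidth_mbps": 100},
)
```

The buyer sees only this root object; everything below it is the operator’s business.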
Functional decomposition means taking the retail service properties and dividing them functionally. A VPN, for example, could be considered to consist of two functional elements—“VPNCore” and “VPNAccess”. It would be these two functional elements that would then have to be further decomposed to create a set of Behaviors/RFSs that included our “Features” and “Pipes” primitive elements, and that were then deployed.
Geographic/administrative decomposition is essential to reflect the fact that user ports are usually in a specific place, and that infrastructure is often uneven across a service geography. For example, a “VPNAccess” element might decompose into at least an option for cloud-hosted VNFs where there was infrastructure to support that, but might decompose into “pipes” and physical devices elsewhere.
Probably most services would include a mixture of these decomposition options, which I think means that what we’re really looking for is a set of policies that can control how a given higher-layer element would be decomposed. The policies might test geography, functionality, or any other factor that the operator (and customer) found useful. Because the policies are effectively based on retail element parameters, the relevant parameters have to be passed down from the retail element, as far as they’d need to go to be tested in policies.
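A decomposition policy of this kind could be as simple as a predicate over the parameters passed down from the retail element. The sketch below assumes the “VPNAccess” example from earlier; the function name, the site list, and the option labels are all invented for illustration.

```python
# Hypothetical decomposition policy; all names and site lists are invented.
CLOUD_ZONES = {"dallas", "frankfurt"}  # assumed zones where NFV hosting exists

def decompose_vpn_access(params):
    """Select a deployment option for a 'VPNAccess' element by testing
    the retail parameters passed down to this level."""
    if params["site"] in CLOUD_ZONES:
        # infrastructure here supports cloud-hosted VNFs
        return ["HostedVNF:vRouter", "Pipe:access-tunnel"]
    # elsewhere, decompose to physical devices and pipes
    return ["Device:edge-router", "Pipe:access-circuit"]
```

The same mechanism works for functional tests or anything else the operator wants to condition on; geography is just the easiest case to picture.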
Policies that control decomposition have, as their goal, selecting from among the possible deployment options associated with a given retail feature. These features could be considered analogous to the TMF’s “Customer-Facing Services” or CFSs, but the latter seem more likely to represent rigorous functional or administrative divisions, almost “wholesale” elements. What I’m describing is more arbitrary and flexible; all you care about is getting to the point where resources are committed, not creating specific kinds of division. My functional and geographic examples are just that: examples, not goals.
If we presume (as I do) that the service domain is the province of “service architects” who are more customer- or market-oriented than technically oriented, it follows that the Behaviors/RFSs that are exposed by the resource layer are probably not going to be directly composed into services, or even directly decomposed from them. Another likely role of the boundary layer is to frame the resource-layer offerings in the desired lowest-level units of service composition.
In our hypothetical technical and marketing meetings, a team from each group would likely coordinate what the boundary layer would expose to the service layer. From this, service architects would be able to compose services based on pricing, features, and SLA requirements, and if technology/vendor changes were made down in the resource layer, those changes wouldn’t impact the services—as long as the paramount rule was followed: intent models must support common capabilities regardless of how they decompose.
The service layer is also where I think that federation of service elements from others would be provided. If we have a service-layer object, we could have that object decompose to a reference to a high-level service object (a wholesale service element) provided by a third party. The decomposition would activate an order process, which is a model decomposition, in another domain. This could be a 1:1 mapping, meaning a single object in the “owning” service model decomposes to a like object in the “wholesale” service model, or the wholesale federated option could be one decomposition choice.
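The federated case could then be just one more branch in a decomposition policy. In this sketch, the partner name, product name, and order fields are all hypothetical; the point is that the 1:1 mapping is simply a decomposition that returns an order into another domain rather than local subordinates.

```python
# Sketch of federated decomposition; partner names and order fields are invented.
def place_wholesale_order(partner, product, params):
    """Stand-in for the order process that triggers a model decomposition
    in the partner's domain; here it just returns an order record."""
    return {"partner": partner, "product": product, "params": params, "status": "ordered"}

def decompose(element, params):
    if element == "VPNAccess":
        # the wholesale federated option is one decomposition choice among several
        if params.get("region") == "apac":
            # 1:1 mapping: this object decomposes to a like object in the
            # wholesale partner's service model
            return place_wholesale_order("carrier-x", "WholesaleVPNAccess", params)
        return ["Device:edge-router", "Pipe:access-circuit"]  # local option
    raise ValueError(f"no model for {element}")
```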
The service/resource boundary would set the limit of customer/CSR management visibility, and also the visibility a retail provider had into the service contribution of a wholesale partner. Remember that every service/resource element is an intent model with those three Ps. One of them includes the SLA the element is prepared to assert to its superiors, and every element at every level is responsible for securing that SLA either by selecting subordinates that can do it, or by providing incremental management practices.
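“Selecting subordinates that can do it” can be sketched as an ordinary filtering step over candidate decompositions. The SLA fields and availability figures below are illustrative assumptions only.

```python
# Sketch: an element secures the SLA it asserts upward, first by trying to
# select a subordinate option that can meet it (fields are illustrative).
def select_subordinate(required_sla, candidates):
    for option in candidates:
        if option["sla"]["availability"] >= required_sla["availability"]:
            return option
    # no candidate suffices on its own; this is where incremental
    # management practices would have to make up the difference
    return None
```

A real implementation would presumably weigh cost and other factors too, not just take the first match; this only shows the responsibility flowing downward level by level.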
The management practices are important because if we presume our service/resource boundary, then we would probably find that the network management and network operations processes, people, and tools would be directed to the resource layer, and the service management and OSS/BSS tools and practices at the service layer. That would again open the possibility that the modeling and decomposition might differ on either side of that boundary, though I stress that I believe a single process from top to bottom would preserve management flexibility just as well.
I’ve tended to describe these models as hierarchies—a single thing on top and a bunch of decomposed subordinates fanning out below. If you looked at the complete inventory of models for a given operator, there would be a lot of “things on top”, and the trees below would often be composed of the same set of elements, with some adds and removals for each. That’s a form of the same kind of component reuse that marks modern software development processes.
One value of this complex “forest” of model trees is that we could define one set of service definitions that a customer might be able to order from a portal, and another that would require customer service intervention. That would maximize self-service without risking instability. In any case, the SLAs of the intent models would drive the portal’s management state, so the customer wouldn’t be able to directly influence the behavior of shared resources.
It’s also true that some “component reuse” would be more than just reusing the same model to instantiate independent service tenancies. Some features, like the subscriber management or mobility management elements of IMS/EPC, are themselves multi-tenant. That means that our modeling has to be able to represent multi-tenant elements as well as create tenant-specific instances. After all, a federated contribution to a service might well be multi-tenant and as long as it did what the three Ps promised, we’d never know it because the details are hidden in the black box of the intent model.
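The distinction between stamping out tenant-specific instances and referencing a shared multi-tenant element might look like this in a decomposition engine. The element names (e.g., “HSS”) and the mechanism are my own illustration, not any standard’s.

```python
# Sketch distinguishing tenant-specific instantiation from multi-tenant reuse.
_shared = {}  # multi-tenant elements are instantiated once, then referenced

def instantiate(model_name, tenant, multi_tenant=False):
    if multi_tenant:
        # e.g., IMS/EPC subscriber or mobility management: one instance, many tenants
        inst = _shared.setdefault(model_name, {"model": model_name, "tenants": []})
        inst["tenants"].append(tenant)
        return inst
    # ordinary reuse: the same model stamps out an independent instance per tenant
    return {"model": model_name, "tenants": [tenant]}
```

From outside the intent model’s black box, both cases present the same three Ps, which is exactly why a federated contribution could be multi-tenant without the retail provider ever knowing.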
We can’t say as much about the service layer as you might think, other than to say that it enforces commercial policies. The real details of the service layer will depend on the boundary layer, the way that the service and resource policies and models combine. There are a number of options for that, and we’ll look at them in the next blog in this series.