What is a Model and Why Do We Need One in Transformation?

After my blog yesterday on Cisco’s intent networking initiative, I got some questions from operator friends on the issue of modeling.  We hear a lot about models in networking, as “service models” or “intent models”, but nearly always with a qualifier in front.  What’s a “model” and why have one?  I think the best answer is to harken back to what I think are the origins of the “model” concept, then look at what those origins teach us about the role of models in network transformation.

At one level, modeling starts with a software concept called “DevOps”.  DevOps is short for “Development/Operations”, and it’s a software design and deployment practice aimed at making sure that when software is developed, there’s collateral effort undertaken to get it deployed the way the developers expected.  Without DevOps you could write great software and have it undermined simply because it wasn’t installed and configured correctly.

From the first, there were two paths toward DevOps: what’s called the “declarative” or “descriptive” path, and what’s called the “prescriptive” path.  With the declarative approach, you define a software model of the desired end-state of your deployment.  With the prescriptive path, you define the specific steps to be taken to reach a given end-state.  The first is a model, the second is a script.  I think the descriptive or model vision of DevOps is emerging as the winner, largely because it’s more logical to describe your goal and let software drive the processes to achieve it than to try to anticipate every possible condition and write a script for it.
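
To make the difference concrete, here’s a minimal Python sketch of the two styles; every name in it is invented for illustration and isn’t drawn from any particular DevOps tool.  The declarative path states the desired end-state and lets a reconciler work out the steps; the prescriptive path spells out the steps itself.

```python
# Hypothetical illustration of the two DevOps styles; all names invented.

# Declarative: describe the desired end-state; a reconciler computes the steps.
desired = {"service": "web-frontend", "image": "frontend:1.4", "instances": 3}

def reconcile(current, desired):
    """Compare actual state with desired state and return the actions needed."""
    actions = []
    if current.get("image") != desired["image"]:
        actions.append(("redeploy", desired["image"]))
    if current.get("instances", 0) != desired["instances"]:
        actions.append(("scale_to", desired["instances"]))
    return actions

# Prescriptive: enumerate the exact steps; conditions you didn't anticipate
# need another script.
deploy_script = [
    ("pull_image", "frontend:1.4"),
    ("start_instance", 8080),
    ("start_instance", 8080),
    ("start_instance", 8080),
]

print(reconcile({"image": "frontend:1.3", "instances": 1}, desired))
```

The reconciler works from whatever state it finds; the script only works from the starting conditions its author assumed, which is why the declarative path scales better as conditions multiply.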

Roughly concurrent with DevOps were two telecom-related activities that also promoted models.  One was the TeleManagement Forum’s “NGOSS Contract”, and the other was the IPsphere Forum’s notion of “elements”.  The TMF said that a contract data model could serve as the means of associating service events with service processes, and the IPSF said that a service was made up of modular elements assembled according to a structure and “orchestrated” to coordinate lifecycle processes.

What’s emerged from all of this is the notion of “models” and “modeling” as the process of describing the relationships among the components of what is logically a multi-component, cooperative system that provides a service.  The idea is that if you can represent all suitable alternative implementation strategies for a given “model”, you can interchange them in the service structure without changing service behavior.  If you have a software process that can perform NGOSS-Contract-like parsing of events against the service model represented by a retail contract, you can use that to manage and automate the entire service lifecycle.

I think that most operators accept the idea that future service lifecycle management systems should be based on “models”, but I’m not sure they all recognize the features that such models, derived the way I’ve just described, would have to include.  A model has to be a structure that can represent, as two separate things, the properties of something and the realization of those properties.  It’s a “mister-outside-mister-inside” kind of thing.  The outside view, the properties view, is what we could call an “intent model” because it focuses on what we want done and not on how we do it.  Inside might be some specific implementation, or it might be another nested set of models that eventually decompose into specific implementations.
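
Here’s a rough sketch of that outside/inside structure in Python; the class, field, and service names are hypothetical, not from any standard.  The element exposes its intent properties externally, and its realization is either a concrete implementation or a nested set of further model elements.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class ModelElement:
    name: str
    intent: dict                                  # outside: the properties we want
    implementation: Optional[str] = None          # inside: a concrete realization...
    children: List["ModelElement"] = field(default_factory=list)  # ...or nested models

    def decompose(self):
        """Walk the model down to the concrete implementations."""
        if self.implementation:
            return [(self.name, self.implementation)]
        found = []
        for child in self.children:
            found.extend(child.decompose())
        return found

vpn = ModelElement(
    name="enterprise-vpn",
    intent={"sites": 12, "sla": "gold"},
    children=[
        ModelElement("access", {"bandwidth": "100M"}, implementation="ethernet-access"),
        ModelElement("core", {"latency-ms": 20}, implementation="mpls-core"),
    ],
)
print(vpn.decompose())   # [('access', 'ethernet-access'), ('core', 'mpls-core')]
```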

One of the big mistakes made in modeling is ignoring the requirement for event integration.  Each model element has an intent and a realization, and the realization is the management of the lifecycle of that element.  Thus, every model element has its own events and operating states, and these define the processes that the model requires to handle a given event at a given time.  If you don’t have state/event handling in a very explicit way, then you don’t have a model that can coordinate the lifecycle of what you’re modeling, and you don’t have service automation.
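
A simplified illustration of what explicit state/event handling might look like for one model element; the states, events, and process names here are invented for the example.

```python
# Hypothetical state/event table for a single model element: (state, event)
# pairs map to the next state and the process to run.
STATE_EVENT_TABLE = {
    ("ordered", "activate"):   ("deploying", "start_deployment"),
    ("deploying", "deployed"): ("active", "notify_parent"),
    ("active", "fault"):       ("repairing", "start_repair"),
    ("repairing", "repaired"): ("active", "notify_parent"),
}

def handle_event(state, event):
    """Return the element's next state and the process the event should trigger."""
    next_state, process = STATE_EVENT_TABLE.get((state, event), (state, "log_unexpected_event"))
    return next_state, process

print(handle_event("active", "fault"))    # ('repairing', 'start_repair')
print(handle_event("ordered", "fault"))   # ('ordered', 'log_unexpected_event')
```

The same event means different things in different states, which is exactly what a bare script or API call can’t capture.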

One of the things I look for when vendors announce something relating to SDN or NFV or cloud computing or transformation is what they do for modeling.  Absent a modeling approach that has the pieces I’ve described, you can’t define a complete service lifecycle in a way that facilitates software automation, so you can’t have accurate deployments and you can’t respond to network or service conditions efficiently.  So, no opex savings.

Models also facilitate integration.  If a service model defines the elements of a service, each through its own model, and defines the service events and operating states, then you can look at the model and tell what’s supposed to happen.  Any two implementations that fit the same intent model description are equivalent.  Integration is implicit.  Absent a model, whatever handles a given service condition has to somehow figure out what the current service state is, what the condition means in that state, and then somehow invoke the right processes.  The service model can even define the APIs that link process elements; with no model, what defines them and ensures all the pieces can connect?
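
As a small sketch of that equivalence (property names and values here are hypothetical), treat an intent description as a set of required properties; any implementation that offers them can realize the element, so the two are interchangeable.

```python
# Hypothetical intent check: an implementation fits if it offers every
# property the intent description requires.
REQUIRED_INTENT = {"connectivity": "L3", "sla": "gold", "endpoints": 2}

def satisfies(offered, required):
    return all(offered.get(key) == value for key, value in required.items())

vnf_router    = {"connectivity": "L3", "sla": "gold", "endpoints": 2, "hosted": True}
legacy_router = {"connectivity": "L3", "sla": "gold", "endpoints": 2, "hosted": False}

# Both fit the same intent description, so either can realize the model element.
print(satisfies(vnf_router, REQUIRED_INTENT), satisfies(legacy_router, REQUIRED_INTENT))
```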

Where something like policy management fits into this is a bit harder to say, because while we know what policies are at a high level (they are rules that govern the handling of conditions), unlike with models it may not be clear how those rules relate to specific lifecycle stages or what specific events the policies’ conditions represent.  It’s my view that policy management is a useful way of describing self-organizing systems, usually ones that depend on a fairly uniform resource set.

Router networks are easily managed using policies.  With NFV-deployed router instances, you have to worry about how each instance gets deployed and how it might be scaled or replaced.  It’s much more difficult to define policies to handle these dependencies, because most policy systems don’t do well at communicating asynchronous status between dependent pieces.  I’m not saying that you can’t write policies this way, but it’s much harder than simply describing a TMF-IPSF-DevOps declarative intent model.

Policies can be used inside intent models, and in fact a very good use for policies is describing the implementation of “intents” that are based on legacy homogeneous networks like Ethernet or IP.  A policy “tree” emerging from an intent model is a fine way of coordinating behavior in these situations.  As a means of synchronizing a dozen or a hundred independent function deployments, it’s not good at all.
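
To illustrate that narrower role (rule and domain names here are invented), the realization of an intent over a uniform IP or Ethernet domain can be little more than a list of condition/action policies pushed out across the domain, with no per-element lifecycle coordination required.

```python
# Hypothetical rendering of policies as the realization of an intent over a
# homogeneous network domain.
intent = {"domain": "ip-core", "goal": "protect-premium-traffic"}

policies = [
    {"condition": "queue_depth > 80%",      "action": "remark_best_effort"},
    {"condition": "link_utilization > 90%", "action": "shift_to_backup_path"},
    {"condition": "link_down",              "action": "reroute_via_igp"},
]

def realize(intent, policies):
    """Pretend to push the same policy set to every device in the domain."""
    return [f"{intent['domain']}: on {p['condition']} do {p['action']}" for p in policies]

for rule in realize(intent, policies):
    print(rule)
```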

This all explains two things.  First, why SDN and NFV haven’t delivered on their promises.  What is the model for SDN or NFV?  We don’t have one, and so we don’t have a consistent framework for integration or service lifecycle management.  Second, why I like the OASIS TOSCA (Topology and Orchestration Specification for Cloud Applications).  It’s built to model exactly the kind of deployment that’s too dynamic and complex to control through policies.  Remember, we generally deploy cloud applications today using some sort of model.
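
For a flavor of what a TOSCA-style model captures, here’s a loose Python rendering of a topology template.  Real TOSCA templates are YAML documents defined by the OASIS specification; the node types and fields below are simplified stand-ins, and the ordering function is only an illustration of how orchestration software can decompose such a model.

```python
# Loose, illustrative approximation of a TOSCA-style topology template.
topology_template = {
    "node_templates": {
        "firewall_vnf": {
            "type": "vnf.Firewall",                 # simplified stand-in type name
            "properties": {"throughput_gbps": 10},
            "requirements": [{"host": "compute_node"}],
        },
        "compute_node": {
            "type": "compute.Server",               # simplified stand-in type name
            "properties": {"cpus": 8, "memory_gb": 32},
        },
    },
}

def deployment_order(template):
    """Naively order nodes so anything a node requires is deployed before it."""
    nodes = template["node_templates"]
    ordered, remaining = [], dict(nodes)
    while remaining:
        for name, node in list(remaining.items()):
            deps = [dep for req in node.get("requirements", []) for dep in req.values()]
            if all(dep in ordered for dep in deps):
                ordered.append(name)
                del remaining[name]
    return ordered

print(deployment_order(topology_template))   # ['compute_node', 'firewall_vnf']
```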

Integration is fine.  API specifications are fine.  Without models, neither of them is more than a goal, because there’s no practical way to systematize, to automate, what you end up with.  We will never make explicitly controlled services and service infrastructure a substitute for autonomous, adaptive infrastructure without software automation, and it’s models that can get us there.  So forget everything else in SDN and NFV and go immediately to the model step.  It’s the best way to get everything under control.