Taking a Deeper Dive into Intent Modeling…and Beyond

One of the topics the people I speak with (and work with) are most interested in is “intent modeling”.  Cisco made an announcement on it (one I blogged on), and the ONF is turning over its intent-model-based northbound interface (NBI) work to the MEF.  Not surprisingly, perhaps, the notion is more popular than it is clearly understood.  I wasn’t satisfied with the tutorials I’ve seen, so I want to explore the concept a bit here.

Intent modeling is obviously a subset of modeling.  In tech, modeling is a term with many uses, but the relevant one deals with virtualization, where a “model” is an abstract representation of something—a black box.  Black boxes, again in popular tech usage, are things that are defined by their visible properties and not by their contents.  It’s what they can do, and not how they can do it, that matters.

It’s my view that the popular tech notion of a model or black box has really been, or should have been, an “intent model” all along.  The difference that’s emerged in usage is that a model in virtualization normally represents an abstraction of the thing that’s being virtualized—a server, for example.  In intent modeling, the abstraction is at a higher level.  A good way to illustrate the difference is that you might use a model called “router” in virtualization, one that could represent either a physical box or a hosted instance of router software.  In strict intent modeling parlance, you’d probably have a model called “IP-Network” that represented the intent to do connectionless forwarding between points based on the IP header.
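To make that distinction concrete, here’s a minimal sketch in Python (the class and method names are my own illustrative assumptions, not anyone’s standard):

    # A virtualization-level model abstracts a *thing*: either of the
    # implementations below satisfies the same "router" black box.
    class Router:
        """Defined by visible properties, not by contents."""
        def forward(self, packet): ...

    class PhysicalRouter(Router):     # a physical box
        def forward(self, packet): ...

    class HostedRouter(Router):       # a hosted software instance
        def forward(self, packet): ...

    # An intent-level model abstracts a *behavior*: what the user
    # wants done, with no reference to the elements that do it.
    class IPNetwork:
        """Intent: connectionless forwarding between endpoints,
        based on the IP header."""
        def connect(self, endpoints): ...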

This point is important in understanding the notion of intent modeling, I think.  The approach, as the original ONF white paper on the topic shows, is to represent how a user system frames requests to a provider system.  Obviously, a user system knows the service of an IP network but not the elements thereof.  However, in a practical sense and in a virtualized-and-real-network world, a single model at the service level doesn’t move the ball much.  In the ONF work, since the intent model is aimed at the NBI of the SDN controller, there’s only one “box” underneath.  In the virtual world, there could be a global network of devices and hosted elements.

The main property of any virtualization model, and especially an intent model, is that all implementations of the model are interchangeable; they support the same “intent” and expose the same properties.  It’s up to the implementers to make sure that’s the case, but it’s the critical property that virtualization depends on.  You can see that this has important implications, because if you have a single model for a vast intent (like “network”) then only that vast intent is interchangeable.  You’d have to swap one complete implementation of the network for another, which is hardly useful.  You need things to be a bit more granular.
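In sketch form (hypothetical names again), interchangeability means the user system binds to the model and never to a particular implementation of it:

    # Two implementations of one model; either satisfies the "intent".
    class DeviceRouter:
        def forward(self, packet): return f"device forwarded {packet}"

    class HostedRouter:
        def forward(self, packet): return f"software forwarded {packet}"

    # The binding is by model name; swapping the entry is invisible
    # to everything above this line.
    IMPLEMENTATIONS = {"router": DeviceRouter}

    router = IMPLEMENTATIONS["router"]()
    print(router.forward("pkt-1"))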

To me, then, the second point that’s important about an intent model is that intent models decompose into useful layers.  A “service” might decompose into “access” and “core”, or into “networks” and “devices”.  In fact, any given level of an intent-modeled service should be able to decompose into an arbitrary set of layers based on convenience.  What’s inside an intent model is opaque to the user system, and as long as it fulfills the user-system intent it’s fine.  It’s up to the modeling/decomposition process to pick the path of implementation.
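As a sketch, the decomposition itself might be nothing but data (this structure is my assumption, not a published model):

    # Each model names the children it decomposes into; what's below
    # any model is opaque to everything above it.
    DECOMPOSITIONS = {
        "service": ["access", "core"],
        "access":  ["access-device"],
        "core":    ["core-network"],
    }

    def decompose(model):
        """Expand a model down to its leaf-level children."""
        children = DECOMPOSITIONS.get(model)
        if not children:              # a leaf; implement it directly
            return [model]
        leaves = []
        for child in children:
            leaves.extend(decompose(child))
        return leaves

    print(decompose("service"))      # ['access-device', 'core-network']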

Where I think intent modeling can go awry is in this layer stuff.  Remember that you can substitute any implementation of an intent model.  You want to decompose any layer of a model far enough to expose the points where you expect alternative implementations to compete.  If you have a “router” model, you might want to have a “device-router” and “hosted-router” decomposition path, for example, and perhaps even an “SDN-router” path.  Good management of the modeling hierarchy is critical for good implementation.
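That choice of path could be made at decomposition time, something like this (the policy and the path names are hypothetical):

    # One model, several decomposition paths; a policy function picks
    # one, and the user system never sees the choice.
    PATHS = {
        "router": ["device-router", "hosted-router", "SDN-router"],
    }

    def pick_path(model, policy):
        for candidate in PATHS[model]:
            if policy(candidate):
                return candidate
        raise LookupError(f"no viable path for {model}")

    # e.g., prefer hosted instances where hosting capacity allows
    print(pick_path("router", lambda p: p.startswith("hosted")))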

It follows that a modeling approach that doesn’t support good management of a hierarchy isn’t going to be optimal.  That means that for those looking for service and network models, even those based on “intent”, it’s vital that you ensure your modeling approach can handle versatile hierarchies of decomposition.  It’s also vital that you remember what’s at the bottom of each of the hierarchical paths—real deployment and lifecycle management.

A virtualization or intent model can be “decomposed” into lower-level models, or implemented.  The latter has to happen at the point where further abstraction isn’t useful in creating interoperability/interchangeability.  If the implementation of a “router” model is a device, for example, then the inside of that lowest level of model is a set of transformations that bring about the behaviors of the “router” that the user system would want to see.  That would probably happen by making configuration/management changes to the device.  If the implementation is deployment of a software instance of a router, then the implementation would have to include the deployment, loading, configuration, and so forth.
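In sketch form, the two leaf-level cases might look like this (the helper functions are stand-ins for whatever management or orchestration interface is actually used):

    def push_config(device, config):               # stand-in for a real
        print(f"configuring {device}: {config}")   # management call

    def deploy_instance(image):                    # stand-in for a real
        print(f"deploying {image}")                # orchestration call
        return "vm-1"

    def implement_device_router(params):
        # A physical device: the "inside" of the model is a set of
        # configuration/management changes.
        push_config(params["address"], params["config"])

    def implement_hosted_router(params):
        # A software instance: deployment and loading come first,
        # then the same configuration the device path needed.
        host = deploy_instance(params["image"])
        push_config(host, params["config"])

    implement_hosted_router({"image": "router-image", "config": "base"})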

This is the point where you have to think about lifecycle management.  Any intent model or virtualization model has to be able to report status, meaning that an implicit parameter of any layer of the model is a kind of SLA representing expectations for the properties of the element being modeled.  Those expectations could be matched against a set of parameters that represent current delivery, and both decomposition and implementation would be responsible for translating between the higher-level “intent” and whatever is needed for the layer/implementation below.
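A sketch of the idea, with made-up parameter names:

    # Every model layer carries an SLA (the expectations) and reports
    # current delivery against it; both are just parameter sets.
    def status(sla, delivered):
        """Return the parameters where delivery misses the SLA
        (both parameters here are 'lower is better')."""
        return {p: delivered[p] for p in sla if delivered[p] > sla[p]}

    sla       = {"latency-ms": 20, "loss-pct": 0.1}
    delivered = {"latency-ms": 35, "loss-pct": 0.05}
    print(status(sla, delivered))    # {'latency-ms': 35}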

The challenge with lifecycle management in a model-driven hierarchy is easy to see.  If Element B is a child of Element A, and so is Element C, then the state of A depends on the combined states of B and C.  How does A know those states?  Remember, this is supposed to be a data model.  One option is to have actual dedicated service-specific software assembled based on the model structure, so there’s a program running “A” and it can query “B” and “C”.  The other option is to presume that changes in the state of all the elements in a hierarchical model are communicated by events that are posted to their superior objects when needed.  “B” and “C” can then generate an event to “A”.
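The second option might look like this (a minimal sketch of my own, not any standard’s event scheme):

    # Children post state changes to their parent as events; the
    # parent derives its own state from its children's states.
    class Element:
        def __init__(self, name, parent=None):
            self.name, self.parent = name, parent
            self.child_states = {}
            self.state = "working"

        def on_child_event(self, child, state):
            self.child_states[child] = state
            new = ("working" if all(s == "working"
                   for s in self.child_states.values()) else "fault")
            if new != self.state:
                self.state = new
                if self.parent:          # propagate the event upward
                    self.parent.on_child_event(self.name, new)

    a = Element("A")
    a.on_child_event("B", "working")
    a.on_child_event("C", "fault")
    print(a.state)                       # fault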

Intent modeling will surely help interoperability, because implementations of the same “intent” are more easily interchangeable.  It doesn’t necessarily help service lifecycle automation, though, because an intent model is the structure of an API, and behind an API sits a process, a program component.  The trick in service automation is to create a description of event-to-process linkages.  Thus, data-driven event handling combines with intent-modeled processes to create the right answer.
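The description of those linkages might be nothing more than a table, along these lines (the states, events, and process names are all hypothetical):

    # For each (state, event) pair, the model names the process to
    # run and the state to enter next.
    LIFECYCLE = {
        ("ordered",     "activate"):      ("deploy_service",   "deploying"),
        ("deploying",   "deployed"):      ("start_monitoring", "active"),
        ("active",      "sla-violation"): ("remediate",        "remediating"),
        ("remediating", "restored"):      ("start_monitoring", "active"),
    }

    def handle(state, event):
        process, next_state = LIFECYCLE[(state, event)]
        print(f"run {process}; enter {next_state}")
        return next_state

    state = handle("ordered", "activate")   # run deploy_service; enter deploying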

This is the situation I think the TMF great-thinkers had in mind with their NGOSS Contract data-drives-events-to-processes notion.  It’s what I believe is the end-game for intent modeling.  If you can model not only the structure of the service but the way that processes handle lifecycle events, then you can truly automate the service.  I’ve fiddled with various approaches for a very long time (almost ten years at this point) and I came to that conclusion very quickly.  I’ve not been able to validate other options, but the market has to make its own decision here—hopefully soon.