A Deep Dive into Intent Models, and What They Need to Include

It may be time to look deeper into intent models.  I’ve used the term in a number of blogs, and while the sum of those blogs probably gives an acceptable picture of what the approach could bring to both computing and networking, there seems to be a lot of pressure to rethink both IT and networking these days.  That pressure is, perhaps, best approached (if not addressed) with a more specific look at intent modeling.  Can it really help transform networks, or at least help integrate their transformation?

At a high level, an intent model is an abstraction that represents a set of functions.  In one sense, it’s the classic “black box”, whose properties can be determined by the relationship between its inputs and outputs.  In a deeper sense, though, an intent model shouldn’t require inspection to determine properties, not even inspection of what’s visible at the edge.  An intent model should expose specific things that represent useful features/properties.  The implementation is hidden, but the interfaces are not only exposed but published in detail.  It’s this property that makes intent models useful in building things made up of multiple pieces.  Think of them as LEGOs that can be used to build stuff.

I’ve played with the intent-model concept for well over a decade, and the view I’ve developed through that exercise is that while the specific interfaces exposed by an intent-modeled element will vary depending on the nature of the features/properties the model represents, there are classes of interfaces that are important, so let’s look at these first.

Intent models seem to expose four classes of interfaces.  It’s important to note that these are “logical” interfaces, and that when a modeled element has actual physical interfaces like Ethernet ports, they may support multiple logical interfaces.

The first class is the data plane interfaces, through which information is sent to or received from the model.  There are a variety of these interfaces, but they all have a direct relationship to the primary function of the element that’s being modeled.  A router intent model, for example, exposes its port/trunk interfaces as data plane interfaces, while a software element exposes APIs.

The second class of interface is the management and parameter class.  These interfaces provide the means of parametric control of the modeled element.  Things like an HTML (web) interface to a management element, a command line interface (CLI), or an SNMP port are examples of this.  In software, management APIs may also be provided.

The third interface class is the control-plane cooperative interface.  This interface is used to support collaboration among the intent-modeled elements themselves.  Adaptive discovery, event exchange, and similar things would be supported via this interface.  Today, in IP networks, these interfaces normally share a physical connection with others, particularly the data plane interfaces.

The final class is probably the least intuitive but perhaps the most important.  It’s the behavior model interface.  It’s becoming common practice in the IT world to have a descriptive model of a complex system, one that represents the goal-state behavior.  The system then works to attain that state.  This interface would be used to communicate the goal-state to a modeled element, and also to deliver the current state.

This interface offers us more than just the (significant) advantage of having an explicit goal-state reference.  It also offers a way of integrating simulation with operations, and that includes the results of AI analysis of conditions.  By allowing simulation/AI to establish or modify the goal-state of an intent-modeled element, we can inject guidance into the recovery from abnormal conditions, bound the range of remedies that might be applied, and more.
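To pull the four interface classes together, here’s a minimal sketch in Java terms.  The method names and signatures are mine, purely for illustration; no standard defines them this way.

```java
// A purely illustrative sketch of the four logical interface classes an
// intent-modeled element might expose; names and signatures are assumptions.
public interface IntentModeledElement {

    // Data plane: the interfaces through which the element's primary
    // information is sent and received (ports/trunks on a device, APIs in software).
    void sendData(byte[] payload);

    // Management and parameters: parametric control of the element (the logical
    // equivalent of a web GUI, a CLI, or an SNMP port).
    void setParameter(String name, String value);
    String getParameter(String name);

    // Control-plane cooperation: discovery and event exchange among the
    // intent-modeled elements themselves.
    void onPeerEvent(String peerId, String event);

    // Behavior model: accept a goal-state and report the current state.
    void setGoalState(String goalStateDescriptor);
    String getCurrentState();
}
```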

Interfaces don’t quite tell the whole story, of course.  An important feature of an intent-modeled object that’s related to interfaces is the property of identity and behavior.  My assumption has been that this property is what’s stored in a catalog that helps bind modeled objects into complex relationships, but I also required, in my own implementations of the concept, that each object deliver this via the management and parameter interface.  In the first of the two ExperiaSphere projects I did, this was done by sending the object a command (“speak”), to which it responded with the identity and behavior data.
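A rough illustration of what a “speak” response might carry, again in Java terms; the field names are my assumptions for this blog, not ExperiaSphere code.

```java
// A hypothetical identity-and-behavior response to a "speak" command;
// field names are assumptions for illustration only.
public class SpeakResponse {
    public final String fullyQualifiedClass;  // e.g. "Network-Node/Forwarding-Device/Router"
    public final String instanceId;           // the identity of this particular object
    public final String[] exposedInterfaces;  // the logical interfaces the object presents

    public SpeakResponse(String fullyQualifiedClass, String instanceId,
                         String[] exposedInterfaces) {
        this.fullyQualifiedClass = fullyQualifiedClass;
        this.instanceId = instanceId;
        this.exposedInterfaces = exposedInterfaces;
    }
}

// The management/parameter interface would then include something like:
// SpeakResponse speak();
```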

Identity and behavior, IMHO, describes the taxonomy of the modeled element.  For example, we might have a classification “Network-Node”, under which we might have “Optical-Device” then “ROADM”, or “Forwarding-Device”, under which we might have “Router”.  The presumption I made was that a fully qualified element name would denote a single model, all implementations of which (being contained inside the black box) would be equivalent.  I also assumed that lower-level elements in the taxonomy would “extend” (in Java terms) the higher-level ones, meaning that a “Router” would present all of the interfaces of a “Network-Node” but would extend and perhaps redefine them a bit.
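In Java terms, that taxonomy might look something like the sketch below, where each level extends the one above it.  The method names are illustrative only.

```java
// The taxonomy expressed as Java interfaces: each level extends the one above,
// so a Router presents everything a Network-Node does and adds to it.
interface NetworkNode {
    String getCurrentState();
}

interface OpticalDevice extends NetworkNode {
    int getWavelengthCount();
}

interface Roadm extends OpticalDevice {
    void setAddDropPort(int port, int wavelength);
}

interface ForwardingDevice extends NetworkNode {
    void forward(byte[] packet, int egressPort);
}

interface Router extends ForwardingDevice {
    void installRoute(String prefix, String nextHop);
}
```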

Identity and behavior also introduce the fact that the classification hierarchy I’ve noted here isn’t the only hierarchy.  A good intent-modeled service or application should be conceptualized as a series of functional decompositions.  “Service” might decompose to “Access” and “Core”.  These are all functional composites, not devices or components, so an intent model can contain other intent models.  When that’s the case, the containing model has to present the interfaces described, and since all intent models are black boxes, the fact that this particular one is really a composite is kind of irrelevant at the composition level.  However, it’s important at the management level, for reasons we’ll get to.
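Here’s a sketch of that composition idea, reusing the NetworkNode interface from the taxonomy above.  The composite answers at the same interfaces while the delegation to its children stays inside the black box; everything here is illustrative.

```java
import java.util.List;

// Repeated from the taxonomy sketch above, so this compiles on its own.
interface NetworkNode {
    String getCurrentState();
}

// A composite model ("Service" containing "Access" and "Core", say) presents the
// same interfaces as a leaf model; how it derives its answers is hidden.
class CompositeService implements NetworkNode {
    private final List<NetworkNode> children;

    CompositeService(List<NetworkNode> children) {
        this.children = children;
    }

    @Override
    public String getCurrentState() {
        // A caller sees one NetworkNode answering for the whole composite.
        for (NetworkNode child : children) {
            if (!"working".equals(child.getCurrentState())) {
                return "not-working";
            }
        }
        return "working";
    }
}
```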

For an intent-modeled element to be truly and optimally composable, it’s critical that every implementation of a singular modeled object type have identical behavior at all interfaces.  Thus, the interior of each black box has to satisfy all the interfaces in the same way, so that what’s inside can never be inferred.

This, to me, is the most important property of an intent-model-based composition of features for an application or service.  Without the presumption of full equivalence, anyone building a composition has to resolve interface differences.  With the full-equivalence presumption, the creator of a model element has to fulfill the interfaces of the class of element their gadget (or software) represents, which assures it can be composed into something.  Again, referencing Java, a class that “implements” a Java “interface” must fulfill the interface’s defined methods exactly.  Something that “extends” is expected to add something, but it must still implement the base to which its additions are made in the proper way.
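Sticking with the Java analogy, two hypothetical implementations of the Router interface from the earlier taxonomy sketch would be interchangeable in any composition, because both have to fulfill the same contract.

```java
// Two hypothetical implementations of the Router interface sketched earlier.
// Because each must satisfy the same contract exactly, either can be composed
// wherever a Router is expected; what's inside the black box never shows.
class VendorRouter implements Router {
    public String getCurrentState() { return "working"; }
    public void forward(byte[] packet, int egressPort) { /* chip-based forwarding */ }
    public void installRoute(String prefix, String nextHop) { /* vendor route table */ }
}

class WhiteBoxRouter implements Router {
    public String getCurrentState() { return "working"; }
    public void forward(byte[] packet, int egressPort) { /* software forwarding */ }
    public void installRoute(String prefix, String nextHop) { /* open routing stack */ }
}
```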

The behavior model interface is the least intuitive of all, but it might be the most critical.  An intent model may (and perhaps will, most of the time) define a system with self-healing properties.  In other words, it will recognize a proper operating state and seek to achieve it.  Rather than have that state be a constant within the implementation, it should be something that can be provided (in standard form for the specific fully-qualified model class).  Further, the current state should be available on request, so that if the object reports a fault (which is defined as a deviation from the goal-state) that cannot be corrected internally, the associated (higher-level) external processes can be expected to decode the issue for remediation.
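Expanding the behavior-model piece of the earlier sketch, the interface might look like this.  The fault-handler mechanism is my own illustration of how an uncorrectable deviation could be escalated; none of these names come from any standard.

```java
// A sketch of the behavior-model interface: the goal-state arrives in a standard
// form for the model's class, the current state is available on request, and a
// deviation the element cannot fix itself is escalated to a higher-level handler.
interface BehaviorModel {
    void setGoalState(String goalStateDescriptor);  // standard form for this model class
    String getGoalState();
    String getCurrentState();

    // Registered by the containing (higher-level) model; invoked only when the
    // element cannot restore the goal-state internally.
    void registerFaultHandler(FaultHandler handler);
}

@FunctionalInterface
interface FaultHandler {
    // A fault is defined as a deviation of the current state from the goal-state.
    void onUnresolvedFault(String elementId, String goalState, String currentState);
}
```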

One implicit point in all of this is that an intent-modeled multi-element application or service has to be seen in two dimensions.  One is the data or service dimension, which reflects the functionality of the application or service overall.  Pushing data is the data/service dimension of an IP network.  The second dimension is the management/control dimension, which is responsible for lifecycle management.  A critical lesson I’ve learned is that this second dimension has to be event-driven.

Event-driven lifecycle management is, IMHO, essential for services or applications based on discrete components.  That’s true whether they’re based on intent-modeled elements or not.  A composite system is a bunch of churning pieces, each of which has its own operating state and its own way of impacting the state of the service or application overall.  This is asynchronicity in action, people, and if you want to coordinate that kind of stuff you have to do it with state/event processing, which of course implies three things—states, events, and a model.

The notion of state is based on the fact that most complex systems have a fairly small number of possible conditions of behavior.  Think “working” and “not-working” as a start, and perhaps add in things like “preparing-to-work” and “repairing”, and you’ll get the picture.  There may be a dozen or a hundred reasons why something is “not-working”, but all result in…well…the thing not working.  States define the meaningful set of conditions on which interpretation of future events will depend.

Events are signals of change, of something having happened.  The “something” may come from an external source above the current element or system, or below/within it.  The way the event is interpreted is based on the state.  For example, an event signaling a request to activate, received in the “preparing-to-work” state, is an indicator that actions should be taken to become operational (and enter the “working” state when that’s complete).  The same event, while in the “working” state, is a logic error somewhere.

The model is a logical structure that indicates the parent/child relationships of the hierarchy of elements in an application or service, and provides the state/event table that, for each element, defines how events and states will be handled.  In a modern cloud-think process, the model is also where state is maintained and where shared operating variables are stored.  An application or service, in lifecycle terms, is the model.
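As a concrete (and hypothetical) illustration of the state/event piece, here’s a minimal table that treats the “activate” event differently depending on state, as described above.  The states, events, and actions are purely illustrative.

```java
import java.util.EnumMap;
import java.util.Map;

// A minimal state/event table of the kind the model would hold for each element.
class LifecycleTable {
    enum State { PREPARING_TO_WORK, WORKING, NOT_WORKING, REPAIRING }
    enum Event { ACTIVATE, FAULT, REPAIR_COMPLETE }

    interface Action { State apply(String elementId); }

    private final Map<State, Map<Event, Action>> table = new EnumMap<>(State.class);

    LifecycleTable() {
        Map<Event, Action> preparing = new EnumMap<>(Event.class);
        preparing.put(Event.ACTIVATE, id -> State.WORKING);   // activate while preparing: become operational
        table.put(State.PREPARING_TO_WORK, preparing);

        Map<Event, Action> working = new EnumMap<>(Event.class);
        working.put(Event.ACTIVATE, id -> {                   // activate while working: logic error
            throw new IllegalStateException("activate received in WORKING state for " + id);
        });
        working.put(Event.FAULT, id -> State.REPAIRING);      // a fault starts remediation
        table.put(State.WORKING, working);
    }

    // Run the action for a state/event pair and return the next state.
    State dispatch(State current, Event event, String elementId) {
        Action action = table.getOrDefault(current, Map.of()).get(event);
        if (action == null) {
            throw new IllegalStateException("unhandled event " + event + " in state " + current);
        }
        return action.apply(elementId);
    }
}
```

In a real implementation, the table, the current state, and the shared operating variables would live in the model itself, which is why the model, in lifecycle terms, is the application or service.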

I think that intent-modeled elements are the critical, still-missing, technical piece of true cloud-native, next-gen applications and services.  I also think that we’re converging on some model because of the rapid progress in containers (Kubernetes) and the cloud (particularly hybrid cloud).  The obvious questions are 1) will the emerging model be intent-based, and 2) is there any chance that networking will somehow play a role in development or adoption?  I think the answers may be related, and to explain why I need to invoke my “work fabric” approach, covered HERE.

It’s a convenient abstraction to view inter-component exchanges as workflows, and the network and software framework in which they’re exchanged as work fabrics.  The properties of a work fabric are set by the software interfaces and network behaviors.  The requirements are set by the application or service being supported.  Networking is an extreme example of a high-availability, low-latency, mass-market requirement set.

The workflow/work-fabric model likely to emerge from the cloud computing universe is suitable for the control-plane piece of networking, but likely not ideal.  It’s probably not suitable for the data plane.  The networking industry, and particularly the operators, have focused on transformation by virtualizing the devices rather than the networks, which has a very limited impact on overall costs, and which tends to make operator initiatives diverge from the cloud.  The often-heard desire to make things like NFV “cloud-native” ignores the fact that cloud-native is about dissecting monoliths into functional pieces, while NFV is about hosting monoliths differently.  Even now, startup focus in the networking space is on new device models, as THIS article shows.

Still, a “disaggregated” device model might be a useful concept.  In a general sense, a network device is a bit like an application that has a very specific hardware dependency.  Chip-enabled forwarding is certainly a specific hardware feature, and this kind of specialization is already being considered in the cloud world.  Similarly, control/data-plane separation, a part of SDN and some emerging segment routing schemes, is a consideration on the networking side.  I’m not sure that either of these developments can be linked to a specific awareness of the long-term problem/opportunity, but both are likely to move things in the right direction.

The downside to all this “creeping commitment” stuff is the “creeping” part, and the inherent risk of the “diminishing marginal reward” it creates.  I offer the example of full-scale operations automation.  When the concept came along about seven years ago, lifecycle automation would (if fully realized) have cut process opex in the network and related areas by 20% to 25%, saving almost as much as some operators’ capital budgets.  That potential was never realized, and still hasn’t been, so operators have adopted less efficient, tactical, and limited opex-reduction strategies instead.  They’ve cut about 15% from opex, and the remaining 5-10% really doesn’t justify the effort needed to fully automate lifecycle management.

We have sort-of-intent models today, in the descriptive models used in IT deployments.  We’ve had something pretty close to the ideal approach, in things like TOSCA and the TMF NGOSS Contract, for a decade or more.  I think it’s clear that we’re going to get more intent-based over time, but it’s also pretty clear that we’d gain more if we got there faster.