Facilitating the Integration of Open Elements in Networks

You can’t build a network with a single element, or at least not a useful one.  As soon as you have multiple elements, though, you have the problem of getting them all to work together.  If those multiple elements are themselves made up of layered software components, does coordinating them become an impossible problem?  It had better not, or we’ll end up in a mess as we evolve to open networks and white boxes.  Will it demand so much integration that the cost advantages of an open model are compromised?  It had better not do that either, but preventing either outcome could be a challenge.

A network is a cooperative system, and that means more than we might think.  When you build a network from devices, the interfaces the devices present and the protocol specifications that the devices support define the cooperation.  While there can still be a bit of tweaking, you can generally put compatible devices in the right places and build a network.

Things are more complicated in the virtual world.  Getting virtual devices to cooperate is harder than getting real devices to cooperate, and if you expand the notion of virtualization to the function/feature level, it’s more complex still.  The problem is that you’re assembling more things, and there are more ways they can go together.  Most of those ways, of course, will be wrong.  Given that making all your network clocks chime at the same time is the biggest reason users give for sticking with a single vendor, you can see how open-model virtual networking could be the start of an integration nightmare for users.

Open-source software has always had integration issues.  You have an open-source operating system, like Linux.  It comes in different flavors (distributions, or “distros”).  Then there are middleware tools, things like your orchestration package (Kubernetes), monitoring, and so forth.  When you build something from source code, you’ll need a language, a language processor, libraries, and so forth.  All of these have versions, and since features are introduced in new versions, there’s always the risk that something needs a new version of a lower-level element for those features, while something else needs the old version.
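
To make that version tangle concrete, here’s a minimal sketch (the component names and version ranges are my own invention): two components declare the versions of a shared lower-level element they can live with, and a simple intersection check shows that no single version satisfies both.

```python
# Hypothetical version-conflict check: each component declares the range of a
# shared lower-level element (a library, say) it can run against.  Names and
# version ranges are invented for illustration.
requirements = {
    "monitoring-agent": ((2, 0), (3, 0)),    # needs >= 2.0 and < 3.0
    "orchestration-glue": ((3, 1), (4, 0)),  # needs >= 3.1 for new features, < 4.0
}

def compatible_range(reqs):
    """Intersect all (min, max) version ranges; return None if nothing satisfies everyone."""
    low = max(r[0] for r in reqs.values())
    high = min(r[1] for r in reqs.values())
    return (low, high) if low < high else None

print(compatible_range(requirements))  # None -- the classic "something else needs the old version" clash
```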

Companies like Red Hat jumped in to address this by synchronizing a set of tools so that everything was based on common elements and could be made to work together.  That doesn’t resolve all the “cooperation” problems of the virtual world, but it does at least get you sort-of-running.  We’re starting to see articles about the issues of integration as they relate to the use of open source in network elements.  Can the problems be resolved by anything short of an army of integrators, whose cost would toss out the cost advantages of open-model networks?  It might be possible, through modeling techniques that I’ve advocated for other reasons.

Intent modeling is another name for “black box” functionality.  A black box is something whose properties cannot be viewed directly, but must be inferred by examining the relationship between its inputs and outputs.  Put another way, the functionality of a black box or an intent model is described by its external properties, and anything that delivers those properties is equivalent to anything else that does the same.
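
As a rough sketch of the idea (the function chosen and every name here are mine, not from any standard), an intent model looks a lot like an abstract interface: the external properties are the whole contract, and anything that honors them is interchangeable.

```python
from abc import ABC, abstractmethod

class FirewallIntent(ABC):
    """Hypothetical intent model: only the external properties are visible.
    Any implementation that delivers them is, by definition, equivalent."""

    @abstractmethod
    def apply_rule(self, source_cidr: str, dest_cidr: str, action: str) -> None:
        """Install a permit/deny rule; how it's done inside is nobody's business."""

    @abstractmethod
    def status(self) -> dict:
        """Report externally visible state (rule count, health, and so on)."""
```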

Much of the value of intent modeling comes in service or application lifecycle automation.  The classic problem with software implementations of features or functions is that how they’re implemented often impacts how they’re used.  Function A and Function B might do the same thing, but if they do it based on a different set of parameters and APIs, one can’t be substituted for the other.  That creates a burden in adopting the functions, and uncertainty in managing them.  I’ve noted the value of intent modeling in service lifecycle automation in many past blogs, but it has value beyond that, in facilitating integration.
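
Here’s the mismatch in miniature, using two invented vendor classes: both “do the same thing,” but their native parameters and calls differ, so code written against one can’t simply be pointed at the other.

```python
# Two hypothetical implementations of the same feature, with different native APIs.
class VendorAFirewall:
    def add_rule(self, src, dst, permit=True):   # positional CIDRs plus a boolean action
        print(f"A: {'permit' if permit else 'deny'} {src} -> {dst}")

class VendorBFirewall:
    def configure(self, rule_spec):              # expects a single dict "rule spec"
        print(f"B: {rule_spec['action']} {rule_spec['from']} -> {rule_spec['to']}")

# A caller written for Vendor A breaks against Vendor B, and vice versa:
VendorAFirewall().add_rule("10.0.0.0/8", "0.0.0.0/0", permit=False)
VendorBFirewall().configure({"from": "10.0.0.0/8", "to": "0.0.0.0/0", "action": "deny"})
```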

One reason that can happen is that intent modeling can guide the implementations of what’s inside those black boxes.  You could wrap almost anything in an intent model, but there’s an implicit presumption that what’s inside will either support those external properties directly, or be adapted to them (we’ll get more into this below).  The easier it is to accomplish this, the cheaper it is to use intent models to operationalize applications or services.
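
Continuing the hypothetical firewall example, adaptation can be as simple as a thin wrapper that presents the intent model’s external properties while calling the native API underneath; everything below is an invented sketch, not anyone’s product.

```python
from abc import ABC, abstractmethod

class FirewallIntent(ABC):                              # the external contract from the earlier sketch
    @abstractmethod
    def apply_rule(self, source_cidr, dest_cidr, action): ...
    @abstractmethod
    def status(self): ...

class VendorBFirewall:                                  # a native implementation with its own API
    def __init__(self):
        self.rules = []
    def configure(self, rule_spec):
        self.rules.append(rule_spec)

class VendorBAdapter(FirewallIntent):
    """Adapts Vendor B's native 'configure' call to the intent model's external properties."""
    def __init__(self):
        self._impl = VendorBFirewall()
    def apply_rule(self, source_cidr, dest_cidr, action):
        self._impl.configure({"from": source_cidr, "to": dest_cidr, "action": action})
    def status(self):
        return {"rules": len(self._impl.rules), "healthy": True}

fw = VendorBAdapter()                                   # from the outside, it's just the intent model
fw.apply_rule("10.0.0.0/8", "0.0.0.0/0", "deny")
print(fw.status())                                      # {'rules': 1, 'healthy': True}
```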

This adaptation process has to support what we’d normally think of as the “real” external properties of the modeled elements, the visible functional interfaces, but that’s not the end of the story.  In fact, it may be less than half the important stuff when we consider open substitution of modeled elements.

If our Function A and Function B were designed as “black boxes” or intent models, and they were implementations of the same model, we could expect the external connections to these functions to be the same.  That would mean that they could be substituted for each other freely, and everything would work the same whichever one was selected.  It would also mean that when assembling complex software structures, we wouldn’t need to know which implementation was in place, because the implementations are hidden inside those opaque black boxes.
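
In code terms, whoever assembles the service only ever touches the common external face; which black box they were handed never matters.  Again, a toy sketch with invented names:

```python
# Two interchangeable "black boxes": identical external connections, hidden internals.
class FunctionA:
    def apply_rule(self, source_cidr, dest_cidr, action):
        return f"A handled {action} {source_cidr} -> {dest_cidr}"

class FunctionB:
    def apply_rule(self, source_cidr, dest_cidr, action):
        return f"B handled {action} {source_cidr} -> {dest_cidr}"

def assemble_service(firewall):
    # The assembler never needs to know which implementation it was given.
    return firewall.apply_rule("10.0.0.0/8", "0.0.0.0/0", "deny")

for implementation in (FunctionA(), FunctionB()):
    print(assemble_service(implementation))
```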

In order for this to work, though, we have to look at intent-modeled functions at a deeper level.  The traditional vision focuses on what I’ll call the functional interfaces, the interfaces that bind the elements together in a cooperative system.  Those interfaces are critical for composition and management of complex systems of software components, but they’re the “horizontal” direction of a bidirectional challenge.  We also have to look at the “vertical” direction.

Consider this:  Function A and Function B are both implementations of the same intent model, and have the same external interfaces and properties.  Can we substitute one for the other in a hosting environment?  We actually don’t know, because the implementation is hidden inside.  We could presume compatibility with a given infrastructure set only if one of three conditions held.

The first is that, by implementation mandate, all of our intent models had to be designed for a specific hosting framework.  That would be practical in many cases because standardizing operating system and middleware features is a common practice in data center and cloud implementations.

The second is that each of the modeled implementations contains the logic needed to adapt itself to a range of hosting environments that includes all our desired options.  Perhaps the deployment process “picks” a software element based on what it has to run on.  This is practical as long as the orchestration tools give us the ability to select software based on where and how it’s deployed.

The final option is that there is a specific “vertical” or “mapping” API set, a second set of interfaces to our black box that is responsible for linking abstract deployment options to real infrastructure.  This would correspond to the P4 APIs or to the plugins of OpenStack.  This would appear to be the most practical approach, provided we could define these vertical APIs in a standard way.  Letting every modeled element decide its own approach would eliminate any real interchangeability of elements.
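
To show what such a vertical API might look like, here’s a sketch in the same spirit (the binding interface and its methods are entirely my invention, only loosely analogous to the plugin idea): the black box keeps its functional interface on the “horizontal” side and talks to infrastructure only through a standard binding on the “vertical” side.

```python
from abc import ABC, abstractmethod

class InfrastructureBinding(ABC):
    """Hypothetical 'vertical' API: maps abstract deployment requests onto real infrastructure."""
    @abstractmethod
    def allocate(self, cpu: int, memory_gb: int) -> str: ...
    @abstractmethod
    def release(self, handle: str) -> None: ...

class KubernetesBinding(InfrastructureBinding):
    def allocate(self, cpu, memory_gb):
        return f"pod(cpu={cpu}, mem={memory_gb}Gi)"     # stand-in for a real scheduling call
    def release(self, handle):
        print(f"released {handle}")

class EdgeHostBinding(InfrastructureBinding):
    def allocate(self, cpu, memory_gb):
        return f"edge-vm(cpu={cpu}, mem={memory_gb}Gi)"
    def release(self, handle):
        print(f"released {handle}")

class ModeledFunction:
    """The black box: functional ('horizontal') interface on top, vertical binding below."""
    def __init__(self, binding: InfrastructureBinding):
        self.handle = binding.allocate(cpu=2, memory_gb=4)  # same function, any compliant host
    def process(self, packet: bytes) -> bytes:              # the functional interface
        return packet

# The same modeled function deploys to core or edge through the same standard vertical API.
print(ModeledFunction(KubernetesBinding()).handle)
print(ModeledFunction(EdgeHostBinding()).handle)
```

The point of the sketch is that only the binding differs across environments; the functional side of the model never changes.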

That standardization of the vertical API relationships is a challenge, but it’s one similar to the challenge of establishing a “class hierarchy” for virtual elements and defining the horizontal interfaces.  I hinted in a blog earlier this week that we likely need both an infrastructure API set and a functional API set for a good VFPaaS, and I think these points illustrate why.  Without that standardization, resolving the hosting requirements of network functions/features can only be done by applying policies that constrain the selection of objects to those that fit hosting needs.  If hosting needs can’t be standardized (as would likely be the case for core-to-edge-to-premises migration of functions), then standardization of the infrastructure APIs is the only sensible option.