Finding a Model for Open-Model Networks

If operators want open-model networking, it’s inescapable that they need an open model.  We know from past experience that simply insisting on white boxes and open-source software doesn’t secure the benefits operators expect; there’s too much cutting and fitting required in building a cooperative system.  That raises two questions: what exactly is in an “open model”, and who is likely to provide it?  The first question is fairly easy to frame in detail, but more complicated when you try to answer it.  The second is a “Who knows?”, because candidates are just starting to emerge.

Virtualization of networks involves dealing with two dimensions of relationships, as I’ve noted in prior blogs.  One dimension is the “horizontal” relationship between functional elements, the relationship that corresponds to the interfaces we have and the connections we make in current device-centric networks.  The other dimension is the “vertical” relationship between functional elements and the resources on which they’re hosted—the infrastructure.  An open model, to be successful, has to define both these relationships tightly enough to make integration practical.
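As a rough illustration of those two dimensions, here’s a minimal Python sketch; the class and field names are purely illustrative, not drawn from any standard, and the point is simply that a functional element carries both horizontal peer relationships and a vertical hosting relationship.

```python
from __future__ import annotations
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class FunctionalElement:
    """A functional element in an open-model network (illustrative only)."""
    name: str
    # "Horizontal" relationships: the peers this element connects to,
    # corresponding to interfaces and connections in device-centric networks.
    peers: List[FunctionalElement] = field(default_factory=list)
    # "Vertical" relationship: the infrastructure the element is hosted on.
    host: Optional[str] = None

# Example: a virtual router peered with an edge gateway, each hosted somewhere.
vrouter = FunctionalElement("vrouter-1", host="server-pool-east")
edge = FunctionalElement("edge-gw-1", host="whitebox-42")
vrouter.peers.append(edge)
edge.peers.append(vrouter)
```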

Because most open-model networks will evolve rather than fall from the sky in full working order, they’ll have to interact with traditional network components when they come into being.  That means that there has to be a boundary, at least, where the open-model stuff looks like a collection of traditional network technology.  That might come about because open-model components are virtual devices or white-box devices that conform to traditional device interfaces (router instances or white-box routers), or because a boundary layer makes an open-model domain look like a router domain (Google’s SDN-inside-a-BGP-wrapper).

The common thread here is that an open-model network has to conform to one of several convenient levels of abstraction of traditional networks.  It can look like a whole network, an autonomous system or administrative domain, a collection of devices, or a single device.  It’s important that any open model define the range of abstractions it can support, and the more the better.  If it does, have we satisfied the open-model requirements?  Only “perhaps”.
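To show what “defining the range of abstractions” might look like in practice, here’s a small Python sketch.  The enumeration mirrors the list above; the names are mine, not taken from any specification.

```python
from enum import Enum, auto

class Abstraction(Enum):
    """Traditional-network abstractions an open-model domain might present."""
    WHOLE_NETWORK = auto()      # looks like an entire network or ISP
    ADMIN_DOMAIN = auto()       # an autonomous system or administrative domain
    DEVICE_COLLECTION = auto()  # a collection of devices
    SINGLE_DEVICE = auto()      # a single router or switch instance

# A given open model would declare the abstractions it can present at its edge;
# the more of these it supports, the better.
SUPPORTED_ABSTRACTIONS = {Abstraction.DEVICE_COLLECTION, Abstraction.SINGLE_DEVICE}
```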

The challenge here is that while any coherent abstraction approach is workable, not all of them are equally open.  If I can look like a whole ISP, presenting standard technology at my edge and implementing it in some proprietary way within, then I’ve created a network model more likely to entrap users than the current proprietary device-network model.  Thus, to retain “openness”, our model has to either define functional (horizontal) relationships just as current networks do, or rely on accepted standards to define the relationships within its lowest level.  If you can look like a collection of devices, then it’s fine for your implementation to define how that happens, as long as it’s based on accepted technical standards and implementations.

If that resolves our horizontal relationship requirements in an open model, we can move on to what’s surely the most complex requirement set of all: the vertical relationships between functional elements and function hosts.  This vertical relationship set is complex because there are multiple approaches to defining it, and each has its own set of pluses and minuses.

If we presume that the goal is to create open-model devices, the simplest way of linking function to hosting, then what we need is either a standard hardware platform (analogous to the old IBM PC hardware reference) or a set of standard APIs to which the device software will be written.  In the former case, all hardware is commoditized and common, and in the latter, hardware is free to innovate as long as it preserves those APIs.  Think Linux, or OpenStack, or P4.  Either of these approaches is workable, but obviously the second allows for more innovation, and for open-model devices, it seems to be the way the market is going.
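For the API-based flavor of an open-model device, the contract might look something like the following Python sketch.  The method names are hypothetical, but the principle is the one Linux and P4 embody: software is written to the API, and hardware is free to innovate underneath it.

```python
from abc import ABC, abstractmethod

class ForwardingDevice(ABC):
    """A hypothetical 'standard API' for open-model device software."""

    @abstractmethod
    def add_route(self, prefix: str, next_hop: str) -> None:
        """Install a forwarding entry."""

    @abstractmethod
    def remove_route(self, prefix: str) -> None:
        """Withdraw a forwarding entry."""

    @abstractmethod
    def port_up(self, port: str) -> bool:
        """Report whether a given port is operational."""

class WhiteBoxRouter(ForwardingDevice):
    """One vendor's implementation; the internals can vary, the API cannot."""
    def __init__(self) -> None:
        self.fib = {}

    def add_route(self, prefix: str, next_hop: str) -> None:
        self.fib[prefix] = next_hop

    def remove_route(self, prefix: str) -> None:
        self.fib.pop(prefix, None)

    def port_up(self, port: str) -> bool:
        return True  # placeholder: a real device would query its hardware layer
```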

The problem with the open-model device approach is that it doesn’t deal with “virtual” devices, meaning collections of functionality that deliver device-like behavior but are implemented to add scalability, resilience, or some other valuable property.  It also doesn’t work for “network elements” that aren’t really devices at all, like OSS, BSS, or NMS.  For these situations, we need something more like an open-function model that lets us compose behaviors.  Even here, though, we face decisions that relate back to our initial question of just what an open model really models.

If we could agree on a standard set of functions from which network services and features are composed, we could define them and their horizontal and vertical relationships just as we did with devices.  Something like this could likely be done for traditional network services, the “virtual network functions” of NFV, but it would require creating the kind of “class hierarchy” I’ve talked about in prior blogs, to ensure that all our functions line up and that functions that are variations on common themes are properly related, for optimal service-building and management.
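Here’s what the skeleton of such a hierarchy might look like in Python.  The function names are illustrative, not an NFV catalog, but they show how variations on a common theme stay related.

```python
class NetworkFunction:
    """Root of an illustrative network-function class hierarchy."""
    def deploy(self) -> None: ...
    def healthy(self) -> bool: ...

class FirewallFunction(NetworkFunction):
    """Generic firewall behavior shared by every firewall variant."""
    def apply_policy(self, rules: list) -> None: ...

class StatefulFirewall(FirewallFunction):
    """A variation on the firewall theme; because it's related through the
    hierarchy, service-building and management tools can treat it simply
    as 'a firewall' wherever that's all they need to know."""
    def track_sessions(self) -> None: ...
```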

This would be a lot harder in the network software space, because as anyone who’s ever written application software knows, there are a lot of ways of providing a given set of features, and the structure of the software depends as much on the path you choose as on the features at the end of the road.  For this particular situation, we have to look at two other options: the decomposable service option and the universal hosting and lifecycle option.

Decomposable service approaches to openness say that network software divides into a specific set of “services” whose identity and relationships are fixed by the business processes or feature goals the software is intended to support.  Payroll applications, for example, have to maintain an employee list that includes payment rate, maintain a work record, print paychecks, and print required government documentation.  In effect, these decomposable services are the “virtual devices” of software elements.

The presumption for this option is that it’s the workflow among the decomposable services that has to be standardized.  The location of the services, what they’re hosted on, and how the services are implemented are of less concern.  This is a workable approach where overall services can be defined and where the assumption that each would come from a single source, in its entirety, is suitable for user needs.
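A minimal sketch of what “standardizing the workflow” could mean: the message exchanged between decomposable services is fixed, while the services themselves can be implemented and hosted however their single source chooses.  The field names below are assumptions for illustration, not a proposed format.

```python
import json

def make_work_item(service: str, operation: str, payload: dict) -> str:
    """Build a standardized workflow message (format fixed, implementation free)."""
    return json.dumps({
        "service": service,      # e.g. "payroll.employee-registry"
        "operation": operation,  # e.g. "record-hours"
        "payload": payload,
    })

# Any conforming implementation of the employee-registry service can accept
# this work item, no matter where or on what it's hosted.
item = make_work_item("payroll.employee-registry", "record-hours",
                      {"employee_id": "E-1001", "hours": 38.5})
```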

The universal hosting and lifecycle option says that the goal is to build network software up from small discrete components in whatever way suits user needs best.  The goal is to unify how this combination of discrete components is hosted and managed.  This option requires that we standardize infrastructure, meaning hardware and associated platform software, and that we unite our function-handling through deployment tools that are versatile enough to handle a wide variety of software and service models.

I think this approach mandates a specific state-based vision, meaning a service model that applies lifecycle processes in response to events so as to achieve the goal-state operation the service contract (implicit or explicit) demands.  This is the most generalized approach, requiring less tweaking of existing network functions or existing applications, but it’s also highly dependent on a broad lifecycle management capability that we’re still unable to deliver.
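To make that state/event idea concrete, here’s a toy Python sketch of a lifecycle table.  The states, events, and process names are invented for illustration; the pattern is the point: an event arrives, the model looks up the current state, runs a lifecycle process, and moves toward the goal state.

```python
# (current state, event) -> (next state, lifecycle process to invoke)
LIFECYCLE = {
    ("ordered",   "deploy"):   ("deploying", "allocate_resources"),
    ("deploying", "ready"):    ("active",    "start_billing"),
    ("active",    "fault"):    ("degraded",  "redeploy_component"),
    ("degraded",  "ready"):    ("active",    "clear_alarm"),
    ("active",    "teardown"): ("retired",   "release_resources"),
}

def handle_event(state: str, event: str) -> tuple:
    """Return the next state and the lifecycle process to run for an event."""
    return LIFECYCLE.get((state, event), (state, "log_and_hold"))

print(handle_event("active", "fault"))  # -> ('degraded', 'redeploy_component')
```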

What’s the best approach?  My view is that the decomposable service concept would offer the industry the most in the long run, but that it faces a pretty significant challenge in adapting to the current set of network functions.  That challenge might be minimized if we continue to struggle to frame a realistic way of exploiting those functions, which is where the universal hosting and management approach has its advantage.  We’ll probably see this issue decided by the market dynamic over the next year or so.