Why Function Integration Needs to Pick an Approach

“What are we supposed to integrate?”  That’s a question a senior planner at a big telco posed to me, in response to a blog where I’d commented that virtualization increased the need for integration services.  The point caught me by surprise, because I realized she was right.  Integration is daunting at best, but one of the challenges of virtualization is that it’s not always clear what there is to integrate.  Without some specifics in that regard, the problem is so open-ended as to be unsolvable.

In a device network, it’s pretty obvious that we integrate devices.  Devices have physical interfaces, and so you make connections from one to the other via those interfaces.  You parameterize the devices so the setup of the interfaces is compatible, and you select control-plane protocols (like BGP) so everyone is talking the same language.  We all know that this isn’t always easy (how many frustrations, collectively across all its users, has BGP alone generated?) but it’s at least possible.

When we move from device networks to networks that consist of a series of hosted virtual functions, things get a lot more complicated.  Routers build router networks, so functions build function networks—it seems logical and simple.  The problem is that “router” is a specific device and “function” is a generic concept.  What “functions” are even supposed to be connected?  How do they get connected?

Standards and specifications, in a general sense, aren’t a useful answer.  First, you really can’t standardize across the entire possible scope of “functions”.  The interface needed for data-plane functions, for example, might still have to look like traditional network physical interfaces, such as Ethernet.  The interface needed for authentication functions might have to be an API.  Second, there are so many possible functions that it’s hard to see how any single body could be accepted as the authority to standardize them all.  Finally, standardization takes way too much time, time we don’t have unless we want virtualization to be an artifact of the next decade.

A final issue here is one of visualization.  It’s easy to visualize a device, or even a virtual device.  It’s a lot harder to visualize a function or feature.  I’ve talked to many network professionals who simply cannot grasp the notion of what might be called “naked functions”, of building up a network by collecting and integrating individual features.  If that’s hard, how hard would it be to organize all the pieces so we could at least talk integration confidently?

I’ve been thinking about this issue a lot, and it appears to me that there are two basic possibilities in defining a framework for virtual-function integration, including the ever-popular topic of “onboarding”.  One is to define a “model hierarchy” approach that divides functions into relevant groups, or classes, and provides a model system based on that approach.  The other is to forget trying to classify anything at all, and instead devise an “orchestration model” that defines how stuff is supposed to be put together.

We see examples of the first approach where we have a generalized software module that includes “plugins” that specialize it to something specific.  OpenStack uses this approach.  The challenge with it is avoiding the need to define thousands of plugins, which is what happens if you’re completely disorderly in how you define the inputs to the plugin process.  That’s where the idea of a hierarchy of classes comes in.
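To make the pattern concrete, here’s a minimal sketch of the generic-module-plus-plugin idea in Python.  The class and method names are my own invention for illustration; this is not OpenStack’s actual plugin API.

```python
# Minimal plugin pattern: a generic module programs against one abstract
# interface, and each vendor or technology supplies a plugin behind it.
# (Illustrative names only; not OpenStack's real API.)

from abc import ABC, abstractmethod


class ConnectivityPlugin(ABC):
    """The generic interface the framework code is written against."""

    @abstractmethod
    def create_connection(self, endpoint_a: str, endpoint_b: str) -> str:
        ...


class VendorXPlugin(ConnectivityPlugin):
    """A hypothetical vendor-specific specialization."""

    def create_connection(self, endpoint_a: str, endpoint_b: str) -> str:
        # Real vendor logic would go here; we just return an identifier.
        return f"vendorx:{endpoint_a}<->{endpoint_b}"


def provision(plugin: ConnectivityPlugin) -> str:
    # The generic module never sees vendor specifics, only the interface.
    return plugin.create_connection("site-1", "site-2")


print(provision(VendorXPlugin()))  # vendorx:site-1<->site-2
```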

All network functions, in this approach, descend from a superclass we could call “network-function”.  This class would be assigned a number of specific properties; it might, for example, have a function you could call that would return the function’s specific identity and properties.  In the original ExperiaSphere project, I included this as the function “Speak”.  Most properties and features would come from subordinate classes that extend that superclass, though.  We could, for example, say that there are four subclasses.  The first is “flow-thru-function”, indicating that the function is presumed to be part of a data plane that flows traffic in and out.  The second is “control-plane-function”, which handles peer exchanges that mediate behavior (BGP would fall into this), and the third is “management-plane-function”, where management control is applied.  The final subclass is “organizational-function”, covering functions intended to be part of the glue that binds a network of functions together.
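Readers with a software background will recognize this as ordinary class inheritance.  Here’s a minimal sketch under those assumptions; the class and method names (including “speak”) are illustrative rather than drawn from any published specification.

```python
# A sketch of the proposed hierarchy: one superclass, four subclasses.
# Names are illustrative, not a standard.

class NetworkFunction:
    """Superclass of all network functions."""

    def speak(self) -> dict:
        # Analogous to ExperiaSphere's "Speak": report identity and properties.
        return {"class": type(self).__name__, "properties": self.properties()}

    def properties(self) -> dict:
        return {}


class FlowThruFunction(NetworkFunction):
    """Part of a data plane that flows traffic in and out."""


class ControlPlaneFunction(NetworkFunction):
    """Handles peer exchanges that mediate behavior (e.g. BGP)."""


class ManagementPlaneFunction(NetworkFunction):
    """Where management control is applied."""


class OrganizationalFunction(NetworkFunction):
    """Glue that binds a network of functions together."""


print(FlowThruFunction().speak())
# {'class': 'FlowThruFunction', 'properties': {}}
```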

If we look a bit deeper here, we can find some interesting points.  First, there are going to be many cases where network functions depend on others.  For example, a “flow-thru-function” is almost certain to include a control-packet shunt facility, a kind of “T” connection, which would feed the “control-plane-function” in our example and provide for handling of control packets.  Since our example requires separating the control and data planes for handling anyway, we could require the shunt capability of at least some flow-thru-functions rather than demanding a separate function to do the split, which would increase cost and latency.
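Here’s a sketch of that shunt dependency, again with invented names: a flow-thru function that forwards data packets but tees control packets off to a control-plane function.

```python
# A flow-thru function with a control-packet shunt (the "T" connection).
# Packet classification is deliberately trivial here.

from typing import List, Optional


class ControlPlaneFunction:
    def __init__(self) -> None:
        self.received: List[dict] = []

    def handle(self, packet: dict) -> None:
        # A real implementation might hand this to a BGP speaker.
        self.received.append(packet)


class FlowThruFunction:
    def __init__(self, control: ControlPlaneFunction) -> None:
        self.control = control

    def forward(self, packet: dict) -> Optional[dict]:
        if packet.get("plane") == "control":
            self.control.handle(packet)  # the shunt: divert control traffic
            return None
        return packet  # data traffic flows straight through


cpf = ControlPlaneFunction()
ftf = FlowThruFunction(cpf)
ftf.forward({"plane": "control", "proto": "bgp"})
print(ftf.forward({"plane": "data", "payload": "bytes"}), len(cpf.received))
```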

The second point is that we would need to define essential interfaces and APIs for each of our classes.  The goal of doing this based on a class hierarchy is to simplify the process of adapting specific implementations of a class, or implementations of a subordinate class, to software designed to lifecycle-manage the class overall.  If we know what interfaces/APIs a “firewall-function” has, and we write software that assumes those APIs, then all we have to do is adapt any implementations to those same interfaces/APIs.
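As a sketch of what that buys you, imagine a “firewall-function” class interface and management code written only against it.  The method names below are assumptions for illustration, not part of any existing standard.

```python
# Management logic written against an assumed "firewall-function" interface.
# Any implementation adapted to these APIs works with the same code.

from abc import ABC, abstractmethod
from typing import List


class FirewallFunction(ABC):
    @abstractmethod
    def deploy(self) -> None: ...

    @abstractmethod
    def apply_rules(self, rules: List[str]) -> None: ...

    @abstractmethod
    def health(self) -> str: ...


def lifecycle_manage(fw: FirewallFunction, rules: List[str]) -> str:
    # This code knows nothing about any specific vendor's product.
    fw.deploy()
    fw.apply_rules(rules)
    return fw.health()
```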

The last point raises another useful one.  To build our hierarchy of classes, we still need to define some basic assumptions about what network functions do and how they relate, and we need vendors and products to align with the classes.  If both of these are done, then integrating a function requires only the creation of whatever “plugin” code is needed to make the function’s interfaces conform to the class standard.  Vendors would provide the mapping or adapting plugins as a condition of bidding for the business.
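The vendor’s “plugin”, then, is just an adapter from the product’s native interface to the class standard.  Here’s a sketch; the vendor API shown is entirely invented, and the FirewallFunction interface repeats the one sketched above so the example stands alone.

```python
# A vendor-supplied adapter mapping a (fictitious) native API onto the
# class standard used by the management software.

from abc import ABC, abstractmethod
from typing import List


class FirewallFunction(ABC):  # the class standard, as sketched earlier
    @abstractmethod
    def deploy(self) -> None: ...

    @abstractmethod
    def apply_rules(self, rules: List[str]) -> None: ...

    @abstractmethod
    def health(self) -> str: ...


class AcmeFirewallAPI:
    """Stand-in for a vendor's native management interface."""

    def start(self) -> None:
        pass

    def push_policy(self, policy_text: str) -> None:
        pass

    def status_code(self) -> int:
        return 0


class AcmeFirewallAdapter(FirewallFunction):
    """The 'plugin': conforms the vendor's native calls to the class standard."""

    def __init__(self) -> None:
        self.native = AcmeFirewallAPI()

    def deploy(self) -> None:
        self.native.start()

    def apply_rules(self, rules: List[str]) -> None:
        self.native.push_policy("\n".join(rules))

    def health(self) -> str:
        return "ok" if self.native.status_code() == 0 else "degraded"
```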

The other approach is simpler on one hand and more complicated on the other.  It’s simpler because you don’t bother defining hierarchies or classes.  It’s more complicated…well…because you didn’t.  In fact, it’s complicated to explain it without referencing something.

If you harken back to my state/event-based concept of service management, you’ll recall that my presumption was that a service, made up of a collection of lower-level functions/elements, would be represented by a model.  Each model element, which in our example here would correspond to a function, has an associated state/event table that relates its operating states and events to the processes that are supposed to handle them.  Remember?
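For anyone who doesn’t, here’s a minimal sketch of the idea: a model element carrying a state/event table that maps (state, event) pairs to handler processes.  The states, events, and handlers are invented for illustration.

```python
# A model element with a state/event table: (state, event) -> process.

from typing import Callable, Dict, Tuple


def start_deploy(element: "ModelElement") -> None:
    element.state = "deploying"


def mark_active(element: "ModelElement") -> None:
    element.state = "active"


def run_repair(element: "ModelElement") -> None:
    element.state = "repairing"


class ModelElement:
    def __init__(self, name: str,
                 table: Dict[Tuple[str, str], Callable[["ModelElement"], None]]) -> None:
        self.name = name
        self.state = "ordered"
        self.table = table

    def handle(self, event: str) -> None:
        handler = self.table.get((self.state, event))
        if handler is not None:
            handler(self)


firewall = ModelElement("firewall", {
    ("ordered", "activate"): start_deploy,
    ("deploying", "deployed"): mark_active,
    ("active", "fault"): run_repair,
})
firewall.handle("activate")
print(firewall.state)  # deploying
```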

OK, then, what the “orchestration model” says is that if a vendor provides the set of processes activated by all the state/event combinations, then those processes can take into account any special data models or APIs or whatever.  The process set does the integration.
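A sketch of that division of labor, with invented names: the dispatcher knows only (state, event) pairs, and each vendor supplies its own process set, free to call whatever native APIs it likes inside the handlers.

```python
# The vendor-supplied process set does the integration; the model only
# dispatches on (state, event).

from typing import Callable, Dict, Tuple

ProcessSet = Dict[Tuple[str, str], Callable[[dict], None]]


def dispatch(process_set: ProcessSet, ctx: dict, event: str) -> None:
    handler = process_set.get((ctx["state"], event))
    if handler is not None:
        handler(ctx)


# Vendor A's handlers might drive a REST controller; Vendor B's might push
# CLI commands. The model doesn't care, provided the process set covers the
# agreed state/event combinations.
vendor_a: ProcessSet = {
    ("ordered", "activate"): lambda ctx: ctx.update(state="deploying"),
}
vendor_b: ProcessSet = {
    ("ordered", "activate"): lambda ctx: ctx.update(state="staging"),
}

ctx = {"state": "ordered"}
dispatch(vendor_a, ctx, "activate")
print(ctx["state"])  # deploying
```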

Well, it sort-of does the integration.  You still have to define your states and events, and you still have to agree on how events flow between adjacent elements, but this seems a lot less work than building a class hierarchy.  Even here, though, we have to be wary of appearances.  If there are a lot of vendors and a lot of functions, a lot of work will get done that, had we taken the time to put together our class hierarchy, we might have avoided by doing some simple adapting and largely reusing all those processes.

A class-hierarchy approach organizes functions, following the kind of practices that have been used for decades in software development to create reusable components.  By structuring functional interfaces against a “class reference”, it reduces the variability in interfaces associated with lifecycle management.  That limits how much integration work would be needed for actual management processes.  The orchestration model risks creating practices so specialized that you almost have to redo the management software itself to accommodate the variations in how you deploy and manage functions.  Class hierarchies seem likely to be the best approach, but the approach flies in the face of telco thinking and, while it’s been known and available from the first days of “transformation”, it never got much traction with the telcos.  The orchestration model almost admits to a loss of control over functions and deals with it as well as possible.

Our choice, then, seems a bit stark.  We can either pick a class-hierarchy approach, which demands a lot of up-front work that, given the pace of telecom activity to date, could well take half a decade, or we can launch a simpler initiative that could end up demanding much more work if the notion of function hosting actually catches on.  If we could find a forum in which to develop a class hierarchy, I’d bet on that approach.  Where that forum might be, and how we might exploit it, are as much a mystery to me as ever.

I think I know what to do here, and I think many others with a software background know as well.  We’ve known all along, and the telco processes have managed to tamp down the progress of that knowledge.  Unless we admit to that truth, pick a strategy that actually shows long-term potential and isn’t just an easy first step, and then support that approach with real dollars and enforced compliance with the rules the strategy requires, we’ll be talking about this when the new computer science graduates are ending their careers.