At the TMF event in Nice, Verizon opened yet another discussion, or perhaps I should say “reopened” one, because the topic came up way back in April 2013 and was just as divisive then. It’s the topic of “microservices”, meaning breaking down virtual functions into very small components. NetCracker also had some things to say about microservices, so it’s a good thing to be talking about.
If we harken back to April of 2013, the NFV ISG had just opened its activity. There was still plenty of room to discuss scope and architecture, and there was plenty of discussion on both. This was the meeting where I launched the CloudNFV project, and it was also the meeting where a very specific discussion of “decomposition” came up.
Everyone knows that the purpose of NFV was to compose services from virtual functions. Anything that composes a whole from parts is sensitive to just how granular the parts are. We know, for example, that if you compose virtual CPE from four or five functional elements (firewall, NAT, etc.), you get some benefits. If instead the most granular virtual function you could get rolled all of those things into one, it’s hard to see how a physical appliance wouldn’t serve better. Granularity equals agility.
The “decomposition” theme relates to this granularity. Here, the suggestion was that operators require virtual functions to be decomposed not only into little feature granules, but even further into what today we’d call “microservices”. There are, so the decomposition camp says, a lot of common elements in things like firewall, VPN, and NAT. Why not break things down into smaller elements, so that even totally new stuff can be built from the building blocks of the old? It carries service composition downward into function composition.
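To make the decomposition argument concrete, here’s a minimal sketch in Java, with entirely hypothetical building blocks (PacketClassifier, AddressTranslator, PolicyStore; none of these names come from any NFV specification): a firewall and a NAT composed from the same parts, so a genuinely new function could be assembled from the same inventory.

```java
import java.util.List;

// Hypothetical building blocks shared by firewall, NAT, and similar functions.
// All interface and class names here are illustrative, not from any standard.
interface PacketClassifier { boolean matches(byte[] packet, String rule); }
interface AddressTranslator { byte[] rewrite(byte[] packet); }
interface PolicyStore { List<String> rulesFor(String subscriber); }

// A "firewall" composes classification with per-subscriber policy...
class Firewall {
    private final PacketClassifier classifier;
    private final PolicyStore policies;
    Firewall(PacketClassifier c, PolicyStore p) { classifier = c; policies = p; }

    boolean permit(byte[] packet, String subscriber) {
        // Permit the packet only if no rule for this subscriber matches it.
        return policies.rulesFor(subscriber).stream()
                       .noneMatch(rule -> classifier.matches(packet, rule));
    }
}

// ...while a NAT reuses the very same classifier to select flows to translate.
class Nat {
    private final PacketClassifier classifier;
    private final AddressTranslator translator;
    Nat(PacketClassifier c, AddressTranslator t) { classifier = c; translator = t; }

    byte[] forward(byte[] packet) {
        return classifier.matches(packet, "private-source")
                ? translator.rewrite(packet) : packet;
    }
}
```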
The operators really liked this, and so did some vendors (Connectem introduced it in a preso I heard), but the major vendors really hated it. They still do, because this sort of decomposition, not of services but of functions, threatens their ability to promote their own VNFs. The fact that buyers and sellers are in conflict here is no surprise, though. The real question is whether decomposition is practical, and if it is, whether microservices are a viable approach.
Virtually all software written today is already decomposed, in that it’s made up of classes or modules or functions or some other set of internal components. My memory of programming techniques goes back to the ‘60s, and I can honestly say that even then there was tremendous pressure from development management to employ modular structures. Even in assembler, or outright machine language, there were features to support “subroutines”, modular elements invoked directly through the computer’s instruction set (for those interested, look up “Branch and Link”).
One might think that this long history of support for modularity would mean decomposing functions is no big thing. Not necessarily. Then, as now, the big problem is less dividing software into modules than assembling those modules in any way other than the original one.
Most software that’s composable is really designed to be composed at development time. There are frequently no convenient means of determining what data elements are needed and what format they’re expected to be in. Worse yet, the flow of control among the components may implicitly depend on tight, efficient coupling: local parameter passing and local execution. For something to be a “service” or “microservice” in today’s terms, it has to accept loose coupling through a network connection. That adds complexity to the software (how do you know where the component is, and whether it’s available?) and can also create enormous performance issues by introducing network delays into frequently used execution paths.
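Here’s a minimal sketch of that difference, assuming a hypothetical NAT function and an illustrative endpoint URL: the tightly coupled version is an in-process call, while the network-coupled version must spell out its data contract, know the service’s address, and budget for delay and failure on every call.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;

// Tight coupling: an in-process call. Parameters are passed locally,
// the callee is always present, and the call costs nanoseconds.
class LocalNat {
    String translate(String flow) { return flow.replace("10.0.0.", "203.0.113."); }
}

// Loose coupling: the same function reached over a network. The data
// contract must now be explicit (here, plain text over HTTP), the caller
// must know where the service lives and handle its absence, and every
// call adds network latency. The endpoint URL is illustrative.
class RemoteNat {
    private final HttpClient client = HttpClient.newHttpClient();

    String translate(String flow) throws Exception {
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://nat-service.example:8080/translate"))
                .timeout(Duration.ofMillis(500))   // what if the service is down?
                .POST(HttpRequest.BodyPublishers.ofString(flow))
                .build();
        return client.send(request, HttpResponse.BodyHandlers.ofString()).body();
    }
}
```

Put RemoteNat.translate in a per-packet path and the network delay dominates; put it in a lifecycle path invoked a few times a day and nobody notices. That asymmetry is the whole argument.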
The point is that it’s an oversimplification to say that everything has to be decomposed and recomposed. There are plenty of examples of things that shouldn’t or couldn’t be. However, there are also examples of vendor intransigence and a desire to lock in customers, and quite a few of the functions that could be deployed for NFV could be decomposed further. Even more could be designed to be far more modular than they are. We have to strike a balance somehow.
NetCracker’s concept of centering more of NFV and operations modernization on microservices is an example of how that balance could be struck. If a service’s lifecycle events are so frequent that they’re almost data-plane functions, that service has a serious problem no matter how you deploy it. Generally, though, management and operations processes have relatively few “events” to handle. State/event tables are the most common way to represent lifecycle process phases and their responses to events, and the intersection of a state and an event defines a component, a “microservice” if you like, and one that’s probably activated rarely enough to tolerate being network-coupled. I’ve advocated this approach from the first, back to that 2013 meeting of the ISG.
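A minimal sketch of that structure, with illustrative states, events, and handler names (nothing here comes from NetCracker or the TMF): each cell of the state/event table names the lifecycle process that handles that intersection, and each such process could be a separately deployed, network-coupled microservice.

```java
import java.util.Map;

// Illustrative lifecycle states and events for a deployed service.
enum State { ORDERED, DEPLOYING, ACTIVE, FAULT }
enum Event { DEPLOY, DEPLOY_DONE, FAILURE, REPAIR_DONE }

// A cell is one state/event intersection; records give us equals/hashCode
// for free, so cells work as map keys.
record Cell(State state, Event event) {}

class LifecycleTable {
    // Each value names the process (the "microservice") that handles the
    // intersection. In practice each would be a network-reachable endpoint.
    private static final Map<Cell, String> TABLE = Map.of(
        new Cell(State.ORDERED,   Event.DEPLOY),      "deployment-service",
        new Cell(State.DEPLOYING, Event.DEPLOY_DONE), "activation-service",
        new Cell(State.ACTIVE,    Event.FAILURE),     "fault-handler-service",
        new Cell(State.FAULT,     Event.REPAIR_DONE), "activation-service"
    );

    static String handlerFor(State s, Event e) {
        // Any uncharted intersection is itself an error condition.
        return TABLE.getOrDefault(new Cell(s, e), "error-handler-service");
    }
}
```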
Event-driven OSS/BSS is one way of stating a goal for operations evolution; another is “agile”. Whatever the name, the goal is to make operations systems respond directly to events rather than imposing a flow, as many systems do. The TMF accepted this goal almost a decade ago, but most operations systems still don’t achieve it. A microservice-based process set inside a state/event lifecycle structure would be exactly what the doctor (well, the operator) ordered.
If we want to go further than this, into something composable even when the components have to stay local to each other, then we need to define the composition/execution platform much more rigorously. An example, for those who want more detail, is Java’s OSGi (the Open Services Gateway initiative), which offers both local and remote service capability. Relatively few of the network functions now residing in physical devices conform to this kind of architecture, which means you’d have to rewrite them, or apply the microservices-and-decomposition model only to new functions.
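For flavor, here’s a minimal OSGi sketch, assuming a hypothetical NatFunction service of my own naming: the registration code is the same whether the binding stays in-process or, via the standard OSGi Remote Services properties, gets exported across the network.

```java
import java.util.Hashtable;
import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;

// Illustrative service interface and implementation; the names are mine.
interface NatFunction { String translate(String flow); }
class NatFunctionImpl implements NatFunction {
    public String translate(String flow) { return flow; } // placeholder logic
}

// An OSGi bundle activator that publishes the function as a service.
public class NatActivator implements BundleActivator {
    public void start(BundleContext context) {
        Hashtable<String, Object> props = new Hashtable<>();
        // Standard OSGi Remote Services property: ask the platform to export
        // this service's interfaces for remote access. Leave it out and the
        // service binds locally; application code is identical either way.
        props.put("service.exported.interfaces", "*");
        context.registerService(NatFunction.class, new NatFunctionImpl(), props);
    }

    public void stop(BundleContext context) {
        // The framework unregisters the service when the bundle stops.
    }
}
```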
It’s hard for me to see this stuff and not think of something like CHILL or Erlang or Scala; all of these are specialized languages that could be applied to aspects of virtual-function development. If you’re going to develop for a compositional deployment that ranges from local to network-coupled, you’ll want to make the location and binding of components more abstract. If you want to be able to do this in any old language, you may need to define a PaaS in which the stuff runs and make component binding an element of that platform, so you can adapt to the demands of the application or to how its owner wants to deploy it.
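What might that platform contract look like? A minimal sketch, with every name hypothetical: component binding becomes an abstraction the deployer resolves, so the same application code runs whether the pieces end up local or network-coupled.

```java
import java.util.function.Function;

// Sketch of binding as a platform concern. The application asks a Binder
// for a component by name; deployment configuration, not application code,
// decides whether the result is an in-process object or a network stub.
interface Binder {
    Function<String, String> bind(String componentName);
}

class LocalBinder implements Binder {
    public Function<String, String> bind(String name) {
        return input -> name + " handled " + input + " in-process";
    }
}

class RemoteBinder implements Binder {
    public Function<String, String> bind(String name) {
        // A real platform would return an RPC or HTTP client stub here.
        return input -> "stub call to " + name + " over the network: " + input;
    }
}

class ComposedApplication {
    public static void main(String[] args) {
        // The deployer flips a property; the application never changes.
        Binder binder = Boolean.getBoolean("deploy.remote")
                ? new RemoteBinder() : new LocalBinder();
        Function<String, String> firewall = binder.bind("firewall");
        System.out.println(firewall.apply("flow-42"));
    }
}
```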
Microservices, composable operations, and “decomposition” of network functions are all good things, but there’s a lot more to this topic than meets the eye. Software agility at the level operators like Verizon or vendors like NetCracker want demands different middleware and different programming practices. The big challenge isn’t going to be accepting the value of this stuff, or even getting “vendor support” for the concept. It’s going to be finding a way to advance something this broad and complex as a complete architecture and business case. We’ve not figured that out even for something relatively simple, like SDN or NFV.