In my blog yesterday I talked about service modeling, and it should be clear from the details I covered that lifecycle management, service automation, and event handling are critical pieces of NFV. The service model ties these elements together, but the elements themselves are also important. I want to talk a bit more about them today.
Almost a decade ago, the TMF had an activity called “NGOSS Contract” that later morphed into the GB942 specification. The centerpiece of this was the notion that a service contract (a data model) would define how service events related to service processes. To me, this was the single most insightful idea ever to come out of service automation work. The TMF has, IMHO, sadly under-realized its own insight here, and perhaps because of that the notion hasn’t been widely adopted. The TMF also has a modeling specification (“the SID”, or Shared Information and Data Model) that has the essential features of a model hierarchy for services and even a separation of the service (“Customer-Facing”) and resource (“Resource-Facing”) domains.
Service automation is simply the ability to respond to events by invoking automated processes rather than manual ones. In yesterday’s blog I noted that the rightful place to do the event-to-process steering is in the model (which is where the TMF’s seminal effort put it, and incidentally where Ciena’s DevOps toolkit demonstrates that a TOSCA-based implementation can put it as well). What we’re left with is the question of the events themselves. Absent service events there’s nothing to steer to processes, and no service automation.
The event stuff can’t be ignored, and it’s more complicated than it looks. For starters, there’s more than one kind of service event. We have resource events that report the state of resources that host or connect service features. We have operations events that originate with the OSS/BSS, a customer service rep, the network operations center, or even directly with the customer. We also have model events that originate within a service model and signal significant conditions from one model element to another, for example from a lower-level dependent element up to the higher-level element built on it. Finally, with NFV, we have virtual network function (VNF) events. Or we should.
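To make that taxonomy concrete, here’s a minimal sketch of how those four event classes might be represented in software. This is my own illustration rather than anything from the NFV specs, and the EventClass and ServiceEvent names are hypothetical.

```python
from dataclasses import dataclass, field
from enum import Enum, auto
from time import time
from typing import Any, Dict


class EventClass(Enum):
    """The four classes of service events discussed above (hypothetical names)."""
    RESOURCE = auto()    # state changes in resources that host or connect features
    OPERATIONS = auto()  # originated by OSS/BSS, a CSR, the NOC, or the customer
    MODEL = auto()       # signaled between model elements, e.g. lower to higher
    VNF = auto()         # originated by a virtual network function


@dataclass
class ServiceEvent:
    """A generic service event: its class, where it came from, and a payload."""
    event_class: EventClass
    source: str            # a resource id, model element name, or VNF instance
    event_type: str        # e.g. "Activate", "Fault", "Degraded"
    payload: Dict[str, Any] = field(default_factory=dict)
    timestamp: float = field(default_factory=time)
```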
One of the glaring gaps in NFV work so far is the relationship between virtual functions as elements of a service and both the resources below and the service structures above. The current NFV work postulates the existence of an interface between a virtual function (which can be made up of multiple elements, including some management elements) and the rest of the NFV logic, meaning the orchestration and management components. That’s at least an incomplete approach if not the wrong one; the connection should be based on events.
The first reason for this is consistency. If service automation is based on steering events to the appropriate processes, you obviously need events to be steered, and it makes little sense to have virtual functions interact with service processes in a different way. Further, if a virtual function is simply a hosted equivalent of a physical device (which the NFV work says it is), and if physical devices are expected to generate resource events through their management systems, then VNFs should generate them too.
The second reason for this is serialization and context. Events are inherently time-lined. You can push events into a first-in-first-out (FIFO) queue and retain that context while processing them. If you don’t have events to communicate service conditions at all levels, you can’t establish what order things are happening in, which makes service automation totally impossible.
Reason number three is concurrency and synchronization, and it’s related to the prior one. Software that handles events can be made multi-threaded because events can be queued for each process and for multiple threads (even for multiple instances of a single process). That means you can load-balance your stuff. If load balancing is an important feature in a service chain, doesn’t it make sense that it’s an important feature in the NFV software itself? And even with all of this concurrency, you can still synchronize your work through the events themselves.
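To illustrate both points, here’s a sketch of one way it could be done (mine, not any vendor’s design, and the names are hypothetical): give each model element its own FIFO queue with its own worker, so events for a given element are processed in arrival order while different elements are handled concurrently and the work load-balances naturally.

```python
import queue
import threading
import time
from typing import Callable, Dict


class ElementEventRouter:
    """One FIFO queue and one worker per model element: events for the same
    element keep their time order, while different elements run concurrently.
    A sketch only, not a standard NFV component."""

    def __init__(self, handler: Callable[[str, dict], None]):
        self.handler = handler
        self.queues: Dict[str, queue.Queue] = {}
        self.lock = threading.Lock()

    def _worker(self, element: str, q: queue.Queue) -> None:
        while True:
            event = q.get()       # FIFO pop preserves per-element ordering
            if event is None:     # sentinel: shut this worker down
                break
            self.handler(element, event)

    def dispatch(self, element: str, event: dict) -> None:
        with self.lock:
            q = self.queues.get(element)
            if q is None:         # lazily create the queue and worker
                q = queue.Queue()
                self.queues[element] = q
                threading.Thread(target=self._worker, args=(element, q),
                                 daemon=True).start()
        q.put(event)


# Usage: events for "access-leg" stay ordered; "core-vpn" is handled in parallel.
router = ElementEventRouter(lambda elem, ev: print(elem, ev))
router.dispatch("access-leg", {"type": "Activate"})
router.dispatch("core-vpn", {"type": "Activate"})
time.sleep(0.2)  # give the daemon workers a moment to drain their queues
```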
Generating events is a simple matter; software that’s designed to be event-driven would normally dispatch an event to a queue, and there the event could be popped off and directed to the proper process or instance or thread. Dispatching an event is just sending a message, and you can structure the software processes as microservices, which is again a feature that Ciena and others have adopted in their design for NFV software. When you pop an event, you check the state/event table for the appropriate service element and you then activate the microservice that represents the correct process.
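A sketch of that pop-and-dispatch step might look like this; the table contents and the invoke_microservice stand-in are hypothetical, purely to show the lookup.

```python
from typing import Dict, Tuple

# Hypothetical state/event table for one service element: the pair
# (current state, event type) selects the microservice to activate.
StateEventTable = Dict[Tuple[str, str], str]

ACCESS_ELEMENT_TABLE: StateEventTable = {
    ("ORDERED", "Activate"): "deploy-access-service",
    ("ACTIVE", "Fault"): "access-fault-handler",
    ("ACTIVE", "Deactivate"): "teardown-access-service",
}


def invoke_microservice(name: str, event: dict) -> None:
    """Stand-in for sending the event as a message to the named microservice."""
    print(f"-> {name}: {event}")


def handle_popped_event(table: StateEventTable, state: str, event: dict) -> None:
    """Look up the (state, event type) pair and activate the mapped process."""
    target = table.get((state, event["type"]))
    if target is None:
        print(f"no process mapped for {state}/{event['type']}")
        return
    invoke_microservice(target, event)


handle_popped_event(ACCESS_ELEMENT_TABLE, "ORDERED", {"type": "Activate"})
```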
State/event processes themselves generate events as one of their options. In software, the typical behavior of a state/event process is to accept the input, generate some output (a protocol message, an action, or an event directed to another process), and then set the “next-state” variable. Activating an ordered service works this way: you get an Activate event, you dispatch that event to your subordinate model elements so they activate, and you set your next-state to ACTIVATING. In this state, by the way, that same Activate event is a procedure error because you’re already doing the activating.
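Here’s a minimal sketch of that behavior for a single model element (the element names, states, and events are mine, for illustration): the Activate handler sends Activate to its subordinates and sets the next-state to ACTIVATING, and a second Activate arriving in that state is a procedure error.

```python
class ModelElement:
    """Illustrative model element: each event handler may emit events to other
    elements and then sets the element's next state."""

    def __init__(self, name, subordinates=None):
        self.name = name
        self.state = "ORDERED"
        self.subordinates = subordinates or []

    def send(self, element, event_type):
        """Stand-in for dispatching an event to another model element."""
        print(f"{self.name} -> {element.name}: {event_type}")
        element.on_event(event_type)

    def on_event(self, event_type):
        if self.state == "ORDERED" and event_type == "Activate":
            for sub in self.subordinates:      # tell dependent elements to activate
                self.send(sub, "Activate")
            self.state = "ACTIVATING"          # set the next-state variable
        elif self.state == "ACTIVATING" and event_type == "Activate":
            print(f"{self.name}: procedure error, already activating")
        elif self.state == "ACTIVATING" and event_type == "ActiveConfirm":
            self.state = "ACTIVE"              # a subordinate reported success


leaf = ModelElement("access-leg")
service = ModelElement("retail-vpn", subordinates=[leaf])
service.on_event("Activate")    # dispatches Activate downward, enters ACTIVATING
service.on_event("Activate")    # second Activate is a procedure error
```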
Can we make a VNF generate an event? Absolutely we can, just as we can make a hardware management system generate one. Who do we dispatch a VNF event to? To the service model element that deployed the VNF. That element must then take whatever local action is appropriate to the event, and then dispatch events to higher- or lower-level elements as needed.
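One illustrative way to do that routing (again a sketch of mine, with hypothetical names) is to record at deployment time which model element deployed each VNF instance, and deliver any event the VNF raises to that element first.

```python
# Hypothetical registry populated at deployment time: each VNF instance id is
# mapped to the model element that deployed it (here just an object with an
# on_event method, like the element sketched earlier).
class DeployingElement:
    def __init__(self, name):
        self.name = name

    def on_event(self, event):
        # Take whatever local action is appropriate, then escalate as needed.
        print(f"{self.name} handling VNF event: {event}")


vnf_owner = {}


def register_deployment(vnf_instance_id, element):
    vnf_owner[vnf_instance_id] = element


def route_vnf_event(vnf_instance_id, event):
    """Deliver a VNF-originated event to the element that deployed the VNF."""
    element = vnf_owner[vnf_instance_id]
    element.on_event(event)


register_deployment("vFW-0017", DeployingElement("firewall-leg"))
route_vnf_event("vFW-0017", {"type": "Fault", "detail": "heartbeat lost"})
```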
Phrased this way, the NFV notion of having “local” or “proprietary” VNF Manager (VNFM) elements as well as “central” elements actually can be made to work. Local elements are part of the service object set that deploys the VNF—a resource-domain action in my terms. Central elements are part of the service object set that defines functional assembly and collection—the service-domain behaviors. In TMF terms these are Resource-Facing and Customer-Facing Services (RFS and CFS, respectively).
If everything that has to be done in NFV—all the work associated with handling conditions—is triggered by an event that’s steered through the service data model, then we have full service automation. We also have the ingredients needed to integrate VNFs (they have to generate an event that’s handled by their own controlling object) and the tools needed to support a complete service lifecycle.
You also have complete control over the processes you’re invoking. Any event for any service element/object can trigger a process of your choice. There’s no need for monolithic management or operations systems (though you can still have them, as collections of microservices) because you can pick the process flavor you need, buy it, or build it. This, by the way, is how you’d fulfill the goal of an “event-driven OSS/BSS”.
This approach can work, and I think any software architect who looks at the problem of service automation would agree. Most would probably come up with it themselves, in fact. It’s not the only way to do this, but it is a complete solution, so if you want to evaluate implementations of NFV, this is where you need to start. Anything that has a complete hierarchical service model, can steer events at any level in the model based on a state/event relationship set, and can support event generation for all the event types (including VNF events and model events between model elements) can support service automation. Anything that cannot do that will have limitations relative to what I’ve outlined, and as an NFV buyer you need to know about them.