I’ve talked in the last two blogs about how intent modeling fits the way that SDN and NFV have to work, and also a bit about its relationship with OSS/BSS, the TMF, and the ETSI models. Today I’d like to close this series of blogs with a discussion of intent modeling and orchestration. As it happens, it might be intent modeling that finally defines the notion of orchestration fully, and also fully justifies the ISG.
When you read through the ETSI material as a (former) software developer and architect, you’re struck by one point. The documents implicitly define software that processes data. That point might seem trivial on the surface because, after all, isn’t that what software is supposed to do? Actually, in a general modern sense, the answer is that it’s not.
Let’s review for a second what an intent-modeled service would look like. It would be a series of connected/cascaded models/objects, with the top one representing the retail offering. As you work downward, you’d pass through some number of layers of “commercial models” that describe the interconnection of purchasable entities, and finally cross a border into describing how intent is translated into resource commitments. It’s the classic tree-turned-upside-down picture.
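To make that picture concrete, here’s a minimal sketch of the tree in Python. Everything in it is my own hypothetical illustration; none of the class or field names come from any ETSI or TMF specification.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class IntentModel:
    """One node in the service tree: a black box described only by its intent."""
    name: str                     # the retail offering or a purchasable element
    sla: dict                     # the SLA this node commits upward to its parent
    children: List["IntentModel"] = field(default_factory=list)

# A toy retail service: the top object decomposes into "commercial" layers,
# which eventually decompose into resource commitments.
service = IntentModel(
    name="Business-VPN",
    sla={"availability": 0.999},
    children=[
        IntentModel("VPN-Core", {"latency_ms": 30}),
        IntentModel("Managed-Firewall", {"availability": 0.999}),
    ],
)
```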
One could say that an “orchestrator” processes this model, but the problem with that is already visible. We have a separation of orchestration and management, and implicitly of deployment and the rest of the lifecycle processes. The more logical way to approach this would be to say that orchestration is a data-driven activity, and that deployment is simply a stage in the service lifecycle.
If we presume that our service model is fully developed in the order-entry process, then logically the way to deploy it would be to send the top object an “activate” command, which is an event. That object would then run an orchestration process that would look at its own as-a-service subordinates and activate them in turn. You’d see a cascade of these events down the tree, and a collection of responses coming back, converging eventually on the top object.
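Continuing the hypothetical sketch above, the cascade could look something like this. A real implementation would be asynchronous and event-driven rather than a simple recursion, but the shape of the flow is the same.

```python
def commit_local_resources(node: IntentModel) -> bool:
    """Stand-in for whatever actually fulfills this node's own intent
    (an OpenStack request, an SDN controller call, and so on)."""
    return True  # pretend the commitment succeeds

def activate(node: IntentModel) -> bool:
    """Handle an 'activate' event: activate the as-a-service subordinates,
    then commit this node's own resources, and report the result upward."""
    children_ok = all(activate(child) for child in node.children)
    node_ok = children_ok and commit_local_resources(node)
    print(f"{node.name}: {'active' if node_ok else 'failed'}")
    return node_ok

activate(service)  # send the event to the top object and let it cascade
```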
This, IMHO, is the only way to visualize a deployment of an intent-modeled service. All that any layer of this structure knows about the rest is the set of services it itself consumes. You commit the services at a given layer, and that commits the subordinate ones, and so forth.
Each layer in the intent model is a black-box process, and so each layer presents only functionality and an SLA in an upward direction. The model is responsible for the SLA, so the model is responsible for remediation if something breaks, which includes horizontal scaling and the replacement of a failed component. Those are lifecycle actions, and so they belong in the model.
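Again purely as an illustration of where the responsibility sits, a fault handler inside one of these black boxes might look like the sketch below, still using the hypothetical IntentModel from above; the function names are mine, not anything standardized.

```python
def try_local_remediation(node: IntentModel, fault: str) -> bool:
    """Stand-in for horizontal scaling or redeploying a failed component."""
    return fault != "unrecoverable"   # pretend most faults can be fixed locally

def handle_fault(node: IntentModel, fault: str) -> str:
    """The node owns its SLA, so it tries to repair itself first and only
    reports upward (as an event) if the SLA is genuinely at risk."""
    if try_local_remediation(node, fault):
        return "sla-intact"        # the parent never needs to know
    return "sla-violation"         # escalate; the parent decides what happens next
```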
Two points separate this from the literal ETSI view. First, orchestration is a continuous and “fractal” process. It is running all the time, but it’s running in response to events. Second, the intent-model structure describes the end-state of the service, not a parameter set, and the model itself mediates processes to get the service into that goal state.
In the intent-modeled service, the “Orchestrator,” the “VNF Manager,” and even the VIM are all co-equal processes, integrated into a given intent model’s lifecycle state/event table. Everything that happens is a response to conditions, which is the first test of a fully automated service process. The blocks in the ETSI end-to-end (E2E) architecture are functionally valid, but they are not literally process elements.
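One way to picture that co-equal status is a per-model state/event table. Every name in this sketch is illustrative, but it shows how the orchestrator, VNFM, and VIM become ordinary entries in a table rather than a fixed pipeline.

```python
# For each (state, event) pair, the processes to run. Nothing here is
# privileged; orchestration, the VNFM, and the VIM are just table entries.
STATE_EVENT_TABLE = {
    ("ordered",   "activate"):  ["orchestrator.deploy", "vim.allocate"],
    ("deploying", "vnf-ready"): ["vnfm.configure"],
    ("active",    "fault"):     ["orchestrator.remediate"],
    ("active",    "terminate"): ["vim.release", "oss.close_billing"],
}

def processes_for(state: str, event: str) -> list:
    """Everything that happens is a response to a condition; unknown
    state/event pairs simply trigger nothing."""
    return STATE_EVENT_TABLE.get((state, event), [])
```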
One thing this could do is provide an actual mission for the NFV work to support. From the first there’s been a strong correlation between what MANO was supposed to be doing and what something like OpenStack already does. If you presume that the only goal of the NFV architecture is to deploy and connect virtual functions, you’ve reinvented Nova and Neutron, respectively. But if you say that it’s the intent-modeled service, the event-connected lifecycle progression, that NFV defines, then you’ve stepped into the new world that NFV as a concept has promised.
This same approach also integrates management and operations. A service lifecycle driven by service events has a natural event-handling relationship with both NMS and OSS/BSS. If a service breaks, you launch a lifecycle process to remediate. If remediation works, you tickle the NOC to get the underlying problem (like a bad server) fixed. If remediation doesn’t work, you have to do a billing credit by tickling an eTOM process in the OSS/BSS. All of that is defined into the service lifecycle process by the intent model.
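A sketch of that hand-off, with placeholder functions standing in for whatever NOC and eTOM/billing integration an operator actually has:

```python
def notify_noc(service_name: str) -> None:
    print(f"NOC ticket: fix the underlying resource fault behind {service_name}")

def issue_billing_credit(service_name: str) -> None:
    print(f"OSS/BSS: apply an SLA credit for {service_name}")

def on_remediation_result(service_name: str, remediation_worked: bool) -> None:
    """The service lifecycle decides which external process gets tickled:
    NMS/NOC if the service was saved, OSS/BSS billing if it wasn't."""
    if remediation_worked:
        notify_noc(service_name)
    else:
        issue_billing_credit(service_name)

on_remediation_result("Business-VPN", remediation_worked=True)
```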
Presuming that intent modeling goes (forgive the pun) as intended, there is going to be an impact on the NFV marketplace. Several impacts, in fact.
First and foremost, intent modeling cements the business case to the implementation. If you do a PoC on an intent-modeled implementation of NFV, you can tie in everything needed to show capital savings, operations efficiency gains, and agility gains. No stranded business cases required.
Second, the relationship between SDN and NFV is firmed up, and even SDN’s role outside NFV is strengthened. The same is true for the cloud; SDN and NFV are both mechanisms to expand the scope of cloud “orchestration” and “connection,” and so both are useful for more than the roles they’re being considered for today.
Third, there’s now a logical and valid framework in which vendors’ NFV claims can be examined. For example, if Oracle announces NFV-supporting OSS/BSS elements, how do we know that they really support NFV? Since the relationship between operations and NFV is vague at best, almost anything can be claimed to support that relationship. If we demand intent-modeled services with event-driven lifecycle management, we can list criteria for “support” of NFV that are meaningful.
That leads to the fourth point: this approach would separate the wheat from the chaff, NFV-wise. We could say, for example, that if VNFs have to fit into a specific framework for intent modeling and event-driven lifecycles, then VNFs have to present specific integration points with that framework. Do they? If not, then they’re not VNFs. Similarly, somebody with OpenStack support can’t claim an NFV implementation unless they have the broader intent-and-event orientation.
I’d love to be able to say that this is where we’re heading. I firmly believe that some in the ETSI process see this direction pretty much as I’ve described it here. More probably accept the broad concept but haven’t thought through all the details. However, not everyone would fit into that happy camp of endorsers. A lot of inertia has developed in NFV, and right now the momentum is not in an intent-modeled, event-driven direction. Time will be required to turn it around.
Time we may not have. What operators want and need from NFV has to be in field trials within nine months or so, and on its way to proving a business case, or we can’t hope to deploy enough NFV to change the timing of the revenue/cost crossover operators expect in 2017. If we don’t see some strong indication by around November of this year that there are at least a couple of vendors who can do the right thing, then I think time may run out for NFV. Which means, again, that we’re going to have to accept vendor-driven and even proprietary visions of NFV in order to get useful ones, like intent models.