Taking a Deeper Look at “Orchestration”

I made a comment in an earlier blog about “orchestration” in the context of service chaining, and a few people messaged me saying they thought the whole orchestration topic was confusing.  They’re right, and since orchestration matters to a wide range of things in tech these days, this is a good time to try to organize the issues and solutions.  Wikipedia says orchestration is “the automated arrangement, coordination, and management of complex computer systems, middleware, and services.”  In this definition, we can see “orchestration” intersecting with four pretty important and well-known trends.

At the highest level, orchestration is the term that’s often applied to any process of automated deployment of application or service elements.  The popular “DevOps” (development/operations) practice, which integrates application development with operations and lifecycle management, is orchestration of components during deployment.  In fact, anyone who uses Linux likely uses scripting for orchestration.  It’s a way of doing a number of complicated and coordinated tasks with a simple command, and it reduces errors and improves efficiency.

Orchestration came into its own with virtualization and the cloud, because any time you increase the dynamism of a system you increase the complexity of its deployment and management.  When you presume applications made up of components, and when you define resources abstractly and then map them as needed, you’re creating systems of components that have nothing much in the way of firm anchors.  That increases complexity and the value of automation.

Measured by sheer popularity, component orchestration, virtualization, and the cloud make up the great majority of orchestration applications.  Despite this, most network people probably heard the term first in connection with the network functions virtualization (NFV) initiative, now an ETSI industry specification group.  In the published end-to-end model for NFV, the ISG has a component called “MANO” for “management and orchestration”, and even MANO is starting to enter the industry lexicon.  After all, what vendor or reporter is going to ignore a new acronym that has some PR legs associated with it?

Down deep inside, though, this is still about “the automated arrangement, coordination, and management of complex computer systems, middleware, and services”, so orchestration can be broken down into three pieces.  First, you have to deploy (“arrange”) the elements of the thing you’re orchestrating, meaning you have to make them functional on a unit level.  Second, you have to connect them to create a cooperating system and parameterize them as needed to secure cooperative behavior (“coordination”).  Finally, you have to sustain the operation of your complex system through the period of its expected life (“management”).
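To make those three pieces concrete, here’s a minimal Python sketch of an orchestrator skeleton.  The Orchestrator class, the component names, and the placement strings are hypothetical, purely to illustrate “arrange”, “coordinate”, and “manage”; this isn’t any real platform’s API.

```python
# Minimal sketch of the three orchestration phases (hypothetical names throughout).

class Orchestrator:
    def __init__(self, components):
        self.components = components   # things to be orchestrated
        self.deployed = {}             # component name -> where it landed

    def arrange(self):
        """Deploy each element so it is functional on a unit level."""
        for name in self.components:
            self.deployed[name] = f"host-for-{name}"   # stand-in for real placement

    def coordinate(self):
        """Connect and parameterize the elements into a cooperating system."""
        for name, host in self.deployed.items():
            print(f"connecting {name} on {host} to its peers")

    def manage(self):
        """Sustain the system through its expected life."""
        for name, host in self.deployed.items():
            print(f"monitoring {name} on {host}")


if __name__ == "__main__":
    o = Orchestrator(["firewall", "load-balancer", "web-tier"])
    o.arrange()
    o.coordinate()
    o.manage()
```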

The orchestration processes we know about can be divided into two categories: script-based and model-based.  Script-based orchestration is procedural; it takes the set of steps needed to deploy, connect, and sustain a system and collects them so they can be played back when needed, and so they can reflect dynamic changes along the execution path.  For example, if you deploy a given component to a VM, you must “save” that VM identity to reference the component at a later point in parameterization and connection.  You can see from this simple point that script-based orchestration is very limited precisely because it’s procedural; it’s hard to reflect lifecycle handling in procedural form because the script’s organization becomes too complicated.  That brings us to model-based orchestration.
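Here’s a small procedural sketch, in Python, of what I mean.  The deploy_vm, connect, and parameterize helpers are hypothetical stand-ins for whatever tooling a real script would call; the point is that state (the VM identity) and every step have to be carried along by hand, in a fixed order.

```python
# Script-style (procedural) deployment sketch with hypothetical helpers.

def deploy_vm(image):
    """Pretend the platform returned an identity for the new VM."""
    return f"vm-{image}-001"

def connect(vm_id, network):
    print(f"attaching {vm_id} to {network}")

def parameterize(vm_id, settings):
    print(f"configuring {vm_id} with {settings}")

# The "script": a fixed sequence of steps, with state saved explicitly.
vm_id = deploy_vm("web-server")       # must save this identity...
connect(vm_id, "front-end-net")       # ...to reference the component later
parameterize(vm_id, {"threads": 8})
# Lifecycle events (failures, scaling) would need more branches woven into
# this sequence, which is where the procedural form starts to break down.
```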

Model-based orchestration defines relationships and behaviors, not procedures.  The basic notion can be summarized like this:  “If you tell me what the proper state of the complex system of computer systems, middleware, and services looks like, then I can apply policies for automated arrangement, coordination, and management of that system.”  Model-based orchestration is fairly intuitive for describing multi-component application and service deployments because you can draw a diagram of the component relationships, and drawing that diagram is the big step in creating the model.
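A sketch of the same kind of deployment done model-style might look like this.  The service model contents and the orchestrate function are illustrative assumptions on my part, but they show the shift: the service is described as data, and a generic engine applies the same policies to whatever the model declares.

```python
# Model-based sketch: a declarative service description plus a generic engine
# (all names and contents are hypothetical).

service_model = {
    "name": "web-service",
    "components": {
        "frontend": {"image": "nginx", "connects_to": ["app"]},
        "app":      {"image": "app-server", "connects_to": ["db"]},
        "db":       {"image": "postgres", "connects_to": []},
    },
}

def orchestrate(model):
    placements = {}
    # Arrangement policy: place every component the model declares.
    for name, spec in model["components"].items():
        placements[name] = f"host-for-{spec['image']}"
    # Coordination policy: realize every relationship the model declares.
    for name, spec in model["components"].items():
        for peer in spec["connects_to"]:
            print(f"connecting {name} ({placements[name]}) to {peer} ({placements[peer]})")
    return placements

orchestrate(service_model)
```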

It’s my view that orchestration in the future must become increasingly model-based, because model-based orchestration can be viewed as a process of abstraction and instantiation, which is what virtualization is all about.  The abstraction of a service or application is an “object” that decomposes into a set of smaller objects (the “components”) whose relationships can be described both in terms of message flows (connections) and functionality (parameterization).  By letting orchestrated applications and services build up from “objects” that hide their properties, we get the same level of isolation that’s present in layered protocols.  It’s a proven approach to making complex things manageable.
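Here’s one way to sketch that object idea in Python.  The ServiceObject class and the component names are hypothetical; what matters is that each object hides how it instantiates itself, so a service abstraction can decompose into component abstractions the same way at every level.

```python
# Abstraction-and-instantiation sketch: a service "object" decomposes into
# smaller objects, each hiding its internals (illustrative names only).

class ServiceObject:
    def __init__(self, name, children=None):
        self.name = name
        self.children = children or []

    def instantiate(self):
        """Decompose into child objects, or deploy directly at the leaves."""
        if self.children:
            for child in self.children:
                child.instantiate()   # how each child deploys stays hidden
        else:
            print(f"deploying leaf component: {self.name}")

# A "service" abstraction that decomposes into component abstractions.
vpn_service = ServiceObject("vpn-service", [
    ServiceObject("access", [ServiceObject("vFirewall"), ServiceObject("vRouter")]),
    ServiceObject("core",   [ServiceObject("vpn-tunnel")]),
])

vpn_service.instantiate()
```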

There are a couple of important points we can now take up about orchestration, and the leading one is horizontal scope.  If we’re looking for “the automated arrangement, coordination, and management of complex computer systems, middleware, and services”, you can see that it would defeat the whole benefit case for orchestration if we left out important stuff.  A complex of computer systems, middleware, and services has to be orchestrated as a whole.  Another is vertical scope; we have to take a task we’ve automated through to completion, or we leave a human coordinator hanging at the end of an automated process.  We can look at the current orchestration poster-child, NFV, to see why scope matters.

Suppose there’s a resource (a system, middleware component, service, or whatever) that’s outside the orchestration system’s control.  What that means is that you cannot commission the cooperative system of components using orchestration, because parts are left out.  Whatever benefit you thought you were getting in error reduction and operations efficiency is now at risk: not only are some of the pieces not connected, but coordinating things manually across the boundary between what is orchestrated and what is not will take additional time and is very likely to introduce errors.  This is why I’ve been arguing that you can’t define orchestration for virtual functions alone and then assert a goal of improved opex and service agility.  You’re not controlling the whole board, and that’s a horizontal-scope failure.

Vertical scope issues are similarly a potential deal-breaker.  Whatever you model about a service or application in order to deploy it must also drive the automation of lifecycle management; in fact, logically speaking, deployment is only one step in lifecycle management.  By separating orchestration and management in MANO, the ISG has set itself on the path of taking management of service lifecycle processes out of the orchestration domain.  It simply presents some MIBs to an undefined outside process set (EMS, OSS/BSS) and says “handle this!”  Management has to know something about what deployment processes did, and however you “virtualize” resources or components, you have to collaterally virtualize management.  One model to rule them all.

This is my big concern with the ETSI ISG’s work, and the basis for my big hope for TMF intervention.  We need to rethink management in future virtualization-dependent services because we need to make it part of orchestration again.
