Modeling for Next-Gen Services: Why You Should Care a Lot

One of the terms you hear more and more in networking is model.  We’ve already had a host of MWC announcements that include it (one of them I’ll blog about specifically tomorrow), and the concept of a model is pivotal in NFV and many cloud management systems.  If you read the network rags, then you know that models are probably central to your future.  The question is “what kind of model?” and that’s harder to answer than it sounds, because there are many model types and many issues surrounding them.

The term “model” by itself is too general to be useful; we have network models and model airplanes, after all.  A model is a representation of something, but exactly what it represents and how it might be used have to be conveyed by a qualifying term.

An “information model” is used primarily to describe the relationships among elements, and some people would say it’s a high-level, conceptual, or abstract model.  A “data model” is the detailed description of something, a set of elements that are contextualized by the information model.  It’s not uncommon to combine the two in a single definition; the TMF’s SID is a “Shared Information/Data” model.  In my view, this sort of model is primarily about the abstract relationships among data elements and an inventory of the elements used or needed.
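
To make the distinction concrete, here’s a minimal sketch in Python (the class and field names are my own invention, not drawn from the SID or any other standard): the information model is the statement that a customer subscribes to services; the data model is the concrete set of fields and types that describes each element.

    # Illustrative only: hypothetical names, not from the TMF SID.
    # Information-model level: "a Customer subscribes to one or more Services."
    # Data-model level: the concrete fields and types below.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Service:
        service_id: str
        service_type: str         # e.g., "vpn" or "internet-access"
        bandwidth_mbps: int

    @dataclass
    class Customer:
        customer_id: str
        name: str
        # The subscribes-to relationship from the information model, made concrete:
        services: List[Service] = field(default_factory=list)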

Which means both these terms are probably most useful in describing databases, which is where most such models have come from.  Software creates another set of model issues, related to software structuring techniques that have evolved since the ’80s.  There are a number of competing technologies and definitions here, so let me try to use some fairly descriptive and general ones.  A piece of software can be called a “component”.  Components are pieces of an application, and in modern software practice components are “objects” (hence the term “object-oriented programming”).  An object is something that presents a specific interface and provides specific features.

The purpose of structuring software as objects is to make it easy to change.  If you have an object that presents an interface and features, and you change the interior code without changing those interfaces/features, then as far as everything else is concerned you haven’t changed the object at all, because its external properties are the same.  Everything that used it before can use it now.  You can see how valuable this capability is in software development.  The object is sometimes called a “black box” because you can see its external properties but not its contents.
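
Here’s a rough Python illustration of the black-box idea (FirewallService and everything around it is invented for the example, not taken from any product or standard): consumers see only the interface, so either implementation can be swapped in without them noticing.

    # A minimal black-box sketch with hypothetical names.
    from abc import ABC, abstractmethod

    class FirewallService(ABC):
        """The object as its users see it: an interface and features, nothing more."""
        @abstractmethod
        def block(self, cidr: str) -> None: ...
        @abstractmethod
        def status(self) -> str: ...

    class ApplianceFirewall(FirewallService):
        """One possible interior: drive a physical appliance."""
        def block(self, cidr: str) -> None:
            print(f"appliance: adding deny rule for {cidr}")
        def status(self) -> str:
            return "appliance: up"

    class VnfFirewall(FirewallService):
        """A different interior: deploy and drive a hosted virtual function."""
        def block(self, cidr: str) -> None:
            print(f"vnf: pushing deny rule for {cidr} to the hosted instance")
        def status(self) -> str:
            return "vnf: up"

    def lock_down(fw: FirewallService) -> None:
        # This consumer neither knows nor cares which interior it was given.
        fw.block("203.0.113.0/24")
        print(fw.status())

Swap ApplianceFirewall for VnfFirewall and lock_down() behaves identically from the caller’s point of view, which is the whole point.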

If you looked at a collection of objects and the data associated with them, you’d see that the data falls into one of two categories.  Some data is represented as a variable in the interface description of one or more objects.  Other data exists “inside” an object or objects, and is therefore invisible.  You can also see that while it might be useful to collect all the externalized data and represent it as a tabular model or document, that doesn’t do much to describe how the structure of objects actually works.  And it would be pointless to try to catalog the internalized data, because from the outside it’s invisible.

Some internal data becomes externalized, of course.  An object could create a data element for the purpose of interfacing with another object.  In this case, the “new” external data is related to the interface data of the object that created it.  So there’s a chain of relationships that links objects through data, a flow of information.  This structure can be visualized in the abstract, but it can also serve as a map to how the software itself is organized, and how it works.
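
A small sketch of that flow, again with invented names: the bandwidth parameter is external data carried on an interface, the host the core object picks for itself stays internal, and the one piece it deliberately creates and passes downstream becomes “new” external data linking the two objects.

    # Hypothetical example of external vs. internal data and the chain between objects.
    class CoreVpn:
        def __init__(self, bandwidth_mbps: int):
            self.bandwidth_mbps = bandwidth_mbps   # external: part of this object's interface data
            self._chosen_host = None               # internal: invisible outside the object

        def deploy(self, hosting: "HostingObject") -> None:
            # An internal decision...
            self._chosen_host = "pool-east-1" if self.bandwidth_mbps > 500 else "pool-west-2"
            # ...part of which is externalized solely to interface with the next object.
            hosting.allocate(self._chosen_host)

    class HostingObject:
        def allocate(self, host_pool: str) -> None:
            # host_pool arrived as "new" external data, created by CoreVpn and related
            # to CoreVpn's own interface data (the bandwidth it was given).
            print(f"allocating capacity in {host_pool}")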

This picture is really important in understanding service modeling in the modern era.  A good service model should be translatable into a software implementation, so it should support software object principles.  This means that a service model is made up of “intent models”, so called because the objects in an intent model are known by what they do, not how they do it.  A good service model should also ensure that the chain of information/relationships maintains object independence.
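
In code terms an intent model might look something like this (a hedged sketch with hypothetical names, not the schema of any particular standard): the outside world sees the intent and its SLA parameters, and how the intent is realized is entirely private.

    # A hypothetical intent-model object: visible intent, hidden realization.
    class VpnCoreIntent:
        """Known by what it does (connect these sites at this SLA), not how it does it."""
        def __init__(self, sites: list, bandwidth_mbps: int, availability: float):
            self.sites = sites                     # what the object promises to connect
            self.bandwidth_mbps = bandwidth_mbps   # the SLA it commits to
            self.availability = availability

        def realize(self) -> None:
            # How the intent is met is invisible to anything above this object.  It might
            # decompose to MPLS provisioning, to a chain of VNFs, or to something else
            # entirely, without changing the intent it presents.
            ...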

Suppose I have a top-level object called “service”, and suppose this object has a variable called “hosted-on”.  This seemingly logical connection is really a problem, because not all services are hosted on anything, and if you take something that’s a “deep” object variable and propagate it to the top of a model, you make the model “brittle”, meaning that a change in something that shouldn’t matter at the top breaks the description there.  This is why it’s a bad idea to describe a service as a flat data/information model: you have to “know” everything at once, and you break the inherent abstraction and black-box properties that you need.  Every object in a service model has to be able to get what it needs.  That doesn’t mean it has to get it from adjacent objects, or from a single order or descriptor file.
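
To see the brittleness concretely (both structures below are made up for illustration): in the flat version, “hosted-on” leaks to the top, so the service-level description has to change whenever hosting details change, or even when the implementation isn’t hosted at all; in the layered version the same detail lives only where it matters.

    # Brittle: a deep deployment detail ("hosted-on") has leaked to the top of the model.
    flat_service = {
        "name": "business-vpn",
        "bandwidth_mbps": 200,
        "hosted-on": "server-cluster-7",   # meaningless for non-hosted implementations
    }

    # Better: the top level knows only the intent; hosting, if any, is the private
    # concern of whichever lower-level object actually needs it.
    layered_service = {
        "name": "business-vpn",
        "bandwidth_mbps": 200,
        "core": {
            "implementation": "nfv",       # this object alone decides whether hosting applies
            "hosted-on": "server-cluster-7",
        },
    }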

This all adds up to why good service models are hierarchies of objects.  In a good model, an object can be “decomposed” into lower-level objects or (at the very bottom) specific resources.  A VPN might be a service object at the top, a “core” VPN object in the middle, and a set of access objects at the edge.  The hierarchy would be service-to-both-core-and-access, a two-level structure.  If you implemented the core VPN with NFV, you might have a series of objects representing the VNFs, locked safely inside the VPN object.  That way you could order a VPN based on MPLS or on NFV and, as long as you provided the externally necessary parameters, use the same structure and let the object itself decide how to break down, depending on location, policy, and so forth.
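
Here’s a sketch of that two-level VPN hierarchy (the names and the decomposition rules are invented, purely to show the shape): the service decomposes into core and access objects, and the core object alone decides whether it breaks down to MPLS or to a set of VNFs.

    # Hypothetical hierarchical decomposition of a VPN service model.
    class AccessObject:
        def __init__(self, site: str):
            self.site = site
        def decompose(self) -> list:
            return [f"access-circuit:{self.site}"]

    class CoreVpnObject:
        def __init__(self, bandwidth_mbps: int, policy: str):
            self.bandwidth_mbps = bandwidth_mbps
            self.policy = policy
        def decompose(self) -> list:
            # The MPLS-vs-NFV choice is locked safely inside this object.
            if self.policy == "nfv":
                return ["vnf:vRouter", "vnf:vFirewall", "hosting:pool-east"]
            return ["mpls:provision-lsp"]

    class VpnService:
        def __init__(self, sites: list, bandwidth_mbps: int, policy: str):
            self.core = CoreVpnObject(bandwidth_mbps, policy)
            self.access = [AccessObject(s) for s in sites]
        def decompose(self) -> list:
            # Service-to-both-core-and-access: a two-level structure.
            resources = self.core.decompose()
            for a in self.access:
                resources += a.decompose()
            return resources

    # Same structure ordered two ways; only the core's private decision differs.
    print(VpnService(["nyc", "bos"], 200, policy="mpls").decompose())
    print(VpnService(["nyc", "bos"], 200, policy="nfv").decompose())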

That introduces another important point: each object in a service model has a state that represents where it is in the lifecycle process, and a set of events that impact its operation.  You can signal among adjacent objects with events, but never further, because that would violate the black-box rule: you don’t know what’s inside a given object, so how could you send a message to something inside one?
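
A minimal sketch of the per-object state/event idea (the states, events, and names are invented): each object runs its own little state machine, and events pass only between a parent and its direct children, never deeper into anyone’s black box.

    # Hypothetical lifecycle state/event handling for objects in a service model.
    class ModelObject:
        def __init__(self, name: str, children: list = None):
            self.name = name
            self.state = "ordered"                # where this object is in its lifecycle
            self.children = children or []
            self.parent = None
            for c in self.children:
                c.parent = self

        def send_event(self, target: "ModelObject", event: str) -> None:
            # Black-box rule: you may only signal objects you're adjacent to.
            if target is not self.parent and target not in self.children:
                raise ValueError("events may only pass between adjacent objects")
            target.on_event(event, source=self)

        def on_event(self, event: str, source: "ModelObject") -> None:
            # A toy state table; a real model would run operations processes on each transition.
            if self.state == "ordered" and event == "activate":
                self.state = "active"
            elif event in ("fault", "child-fault") and self.state != "degraded":
                self.state = "degraded"
                if self.parent:                   # escalate exactly one level up, no further
                    self.send_event(self.parent, "child-fault")
            print(f"{self.name}: {self.state} after '{event}' from {source.name}")

    # Toy usage: a fault in the core escalates one level, to its parent, and stops there.
    core = ModelObject("core-vpn")
    service = ModelObject("business-vpn", children=[core])
    service.send_event(core, "activate")
    core.on_event("fault", source=core)    # e.g., a resource inside the core reports trouble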

What you see in this real service model isn’t just a list of information and parameters; it’s a live system that exchanges information.  In effect, it’s a program.  That’s good, because if we want services to be managed by software processes then our models of services had darn well better look like software.  I did a couple of open projects to prove out the approach, and all these principles work and can result in an agile service model.  There are many paths to such a model, but IMHO it’s clear that we do have to take one if we’re to get operations, next-gen services, SDN, and NFV on track.

Where are we with the right model?  Well, you could make the TMF SID work, but I don’t think it’s the optimal choice because it doesn’t enforce hierarchy and modularity so much as allow them.  ETSI’s new MANO group promises to have “modeling”, but there’s no example of the models on their website yet, so I can’t say whether they have the right stuff.  The six vendors who can make an NFV business case all have a service modeling approach that’s generally conformant, but I have deep detail on only three (ADVA from their Overture deal, Ciena from their Cyan Blue Planet deal, and HPE), so I can’t be completely confident the others (Huawei, Nokia, and Oracle) support all the key nuances, though they do support the broad principles.

In the NFV world, we now have two competing sources of orchestration, Open Source MANO and OPEN-O, each sponsored by some major network operators and each at least loosely linked to the ETSI ISG NFV work.  OSM makes a lot of references to models and modeling, which OPEN-O doesn’t, but in my view there is simply no way to do NFV if you don’t have a strong service modeling strategy.  Tomorrow I’ll look at what OSM offers to see 1) whether we can see enough to draw conclusions and 2) whether OSM will move the modeling ball forward.  Eventually I think it will, but we’re still stuck on the question of timing.  Operators will need to make changes this year on a large enough scale to promise real cost/revenue differences for 2017.  NFV or no, only modeling is going to get us on track for that.