Composing services in an agile and market-responsive way is a critical requirement for the future of network operators. That means it’s critical that technologies like SDN and NFV support it, and if proponents of those technologies want to play the agility card to justify their preferred revolution, then their technology has to support it better than alternatives. One of our challenges is that it’s hard to say whether that could happen because we don’t seem to be able to draw a picture of what we expect.
I’ve been in software design and development for many decades, and I’ve seen what happened in the software industry as we popularized computing. Most people haven’t really thought about this, but the fact is that the microprocessor revolution alone couldn’t create PCs or tablets or smartphones; you needed a lot of software. It’s software that gives the devices utility.
Services are in many ways like software in a consumption sense. We used to sell bits-as-a-service to large enterprises, and the revolution of the Internet was that we defined services that could be consumed by people who weren’t network professionals. Just as personal software revolutionized computing, personal services revolutionize networking.
One of the key things that happened in software that facilitated “appliance populism” was the concept of object-oriented or modular programming. When I learned to program there were no libraries of classes or objects to build from. You had to write code for everything, and that took a long time, expert resources, and a lot of errors along the way. Worst of all, there simply weren’t enough programmers to produce the quantity of stuff that a populist market would want.
Today we have languages like Java whose class libraries contain enormous pools of functionality, and we follow a library-class model when we write our own code. Most software today was designed to be reused, to be plugged in here and there to make one development task serve a lot of application missions. The trend is toward higher-level languages that make things easier, and development increasingly leverages units of functionality developed as “utilities” for broad application.
So it must be with services, I believe. We should be looking at the future of services the way a developer would look at an application. I need a “class library” of generalized useful stuff, perhaps some specialty objects of my own, and a way to assemble this and make it work. If I have that, I can build something functionally useful in less time than a programmer of my era would have spent getting their code sheets keypunched.
So where is this concept? We do hear about service libraries, but we don’t hear much about the details, and the devil is in those details. Any developer knows that a class library has documentation on the functions and interfaces available, so there are “rules” that let a developer know how to integrate a given object. We should be asking about those kinds of rules for services too, and I don’t hear much at all.
Let me offer an example. We could say that a connection service has three configurations—LINE, LAN, and TREE—that express endpoint relationships. If we added a functional dimension we could describe two other “configurations”, what we could call in-line and on-line. In-line configurations for functional services are configurations where the service sits on a data path and either filters or supplements what’s sent along the “line”. On-line means that the service attaches as an endpoint itself. Got it so far?
Given this, we could now see how service composition would work. For example, a simple three-site VPN is three LINEs connected to a LAN (multipoint) operating at L3. Suppose we wanted to add a firewall to each site. We’d now break each of our LINEs into two segments and introduce an “in-line” firewall service between them. Simple. If we want to add something else, we either add it as another “in-line” (encryption, for example) or as an “on-line” like DNS or DHCP.
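To make the composition concrete, here’s a minimal sketch in Python of the three-site example. The class names, fields, and the `insert_inline` helper are my own illustrative assumptions, not any standard model — just one way the five configurations could be represented.

```python
from dataclasses import dataclass, field

# Hypothetical representations of the topologies from the text:
# LINE and LAN for connection services, InLine for functional services.
@dataclass
class Line:
    a_end: str
    z_end: str

@dataclass
class Lan:
    layer: str                          # e.g. "L3" for an IP VPN
    members: list = field(default_factory=list)

@dataclass
class InLine:
    function: str                       # e.g. "firewall", "encryption"

def insert_inline(line, service):
    """Break a LINE into two segments with an in-line service between them."""
    mid = f"{service.function}@{line.a_end}"
    return [Line(line.a_end, mid), service, Line(mid, line.z_end)]

# Three-site L3 VPN: three LINEs connected to a multipoint LAN.
vpn = Lan(layer="L3")
access = [Line(site, "vpn-core") for site in ("siteA", "siteB", "siteC")]
vpn.members = access

# Add a per-site firewall by splitting each access LINE.
with_firewalls = [insert_inline(l, InLine("firewall")) for l in access]
print(len(with_firewalls[0]))   # each access path is now segment-firewall-segment
```

The point of the sketch is that adding the firewall changes the composition, not the model: the same handful of topology types describes the service before and after.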
I’m not suggesting that these simple connection and service models are complete, but they’re complete enough to illustrate the fact that you can build services this way. Maybe we need another model or two, but in the end everything would still obey a basic rule set.
An “in-line” has two ports to connect to and a service between. I can connect in-lines to other in-lines or to LINEs. That frames a simple set of rules that a service creation GUI could easily accept. That means that a service architect could “build services” by assembling elements based on these concepts.
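The rule set a service-creation GUI would enforce could be as simple as a table of allowed adjacencies. The table below is an assumption I’ve built from the text’s examples (in-lines connect to in-lines or LINEs; on-lines attach as endpoints), not a complete or authoritative rule set.

```python
# Hypothetical adjacency rules derived from the text's composition model.
ALLOWED = {
    ("in-line", "in-line"),
    ("in-line", "LINE"),
    ("LINE", "in-line"),
    ("LINE", "LAN"),
    ("LAN", "LINE"),
    ("on-line", "LAN"),     # an on-line service attaches as an endpoint
    ("LAN", "on-line"),
}

def can_connect(a: str, b: str) -> bool:
    """Check a candidate adjacency the way a service-creation GUI might."""
    return (a, b) in ALLOWED

print(can_connect("in-line", "LINE"))     # True
print(can_connect("on-line", "in-line"))  # False
```

A real rule set would also check port counts (an in-line has exactly two) and direction, but even this shape shows how composition could be validated mechanically.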
Obviously you need a bit more than topology to make this work. An “interface” of any sort means an address space and protocol set, which in the modern world will usually mean either “Ethernet” at Level 2 or IP at Level 3. You might refine either by specifying tunnel protocols and so forth. Similarly you’d need to have some sort of SLA that provided basic QoS guarantees (or indicated that the service was best-efforts). So what we need, in addition to our hypothetical five topological models, is an interface description and an SLA. If we have all this stuff we can conceptualize what a service architect might really do, and what might really be done to support that role.
A “library” in this model is a collection of objects classified first by the topology and then by interface and SLA. An architect who wanted to build a service would first frame the service as a collection of functions and then map functions to library objects, presuming a fairly comprehensive library. If that assumption wasn’t valid, then the architect would likely explore the functions available and try to fit them to match service opportunities.
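A sketch of what such a library might look like, classifying objects by topology first and then by interface and SLA. All the type names and fields here are illustrative assumptions layered on the article’s concepts.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class Interface:
    layer: str          # "L2" (Ethernet) or "L3" (IP)
    protocol: str       # base protocol, possibly refined by a tunnel protocol

@dataclass(frozen=True)
class Sla:
    latency_ms: Optional[float]    # None signals best-efforts
    availability: Optional[float]

@dataclass(frozen=True)
class ServiceObject:
    name: str
    topology: str       # LINE, LAN, TREE, in-line, or on-line
    interface: Interface
    sla: Sla

class Library:
    """Classify objects first by topology, then filter by interface."""
    def __init__(self):
        self._by_topology = {}

    def add(self, obj):
        self._by_topology.setdefault(obj.topology, []).append(obj)

    def find(self, topology, layer):
        return [o for o in self._by_topology.get(topology, [])
                if o.interface.layer == layer]

lib = Library()
lib.add(ServiceObject("ip-vpn-core", "LAN", Interface("L3", "IP"),
                      Sla(latency_ms=50.0, availability=0.999)))
lib.add(ServiceObject("fw-inline", "in-line", Interface("L3", "IP"),
                      Sla(latency_ms=None, availability=None)))

print([o.name for o in lib.find("LAN", "L3")])   # ['ip-vpn-core']
```

An architect mapping functions to objects would, in effect, be running queries like `find("LAN", "L3")` against such a catalog.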
One obvious consequence of this approach is that it’s implementation-opaque. The “objects” are truly intent models, with an abstract set of features that would be realized in any number of ways by committing any number of different combinations of infrastructure. You could build a Level 3 VPN, for example, by using an overlay encryption approach (IPsec), an IP feature (MPLS, RFC2547), a set of virtual functions/devices, or SDN. If all these implementation options produced the same interfaces, features/topologies, and SLAs, then they’d be equivalent.
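The implementation-opacity point can be sketched directly: one abstract intent model, several realizations. The class hierarchy below is my own hypothetical framing of the article’s L3 VPN example, not any body’s specification.

```python
from abc import ABC, abstractmethod

class L3VpnIntent(ABC):
    """Abstract 'Level 3 VPN' intent: what is offered, never how."""
    topology = "LAN"
    layer = "L3"

    @abstractmethod
    def deploy(self, sites):
        ...

class IpsecOverlayVpn(L3VpnIntent):
    def deploy(self, sites):
        return f"IPsec overlay meshing {len(sites)} sites"

class Rfc2547Vpn(L3VpnIntent):
    def deploy(self, sites):
        return f"MPLS/RFC 2547 VPN for {len(sites)} sites"

class SdnVpn(L3VpnIntent):
    def deploy(self, sites):
        return f"SDN-forwarded VPN for {len(sites)} sites"

# To the service architect all three are equivalent: same topology,
# same layer, and (presumably) the same interfaces and SLA.
for impl in (IpsecOverlayVpn(), Rfc2547Vpn(), SdnVpn()):
    assert impl.topology == "LAN" and impl.layer == "L3"
    print(impl.deploy(["siteA", "siteB", "siteC"]))
```

Nothing above the abstract class needs to know which subclass was committed, which is exactly the equivalence the text describes.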
Another consequence is that management could be harmonized using the objects themselves. A “service” as a collection of functional objects could be managed in the same way no matter how the objects were implemented, provided that we added a set of management variables to the SLA and expected everything that realized our function to populate those variables correctly.
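A minimal sketch of that harmonization, assuming a hypothetical set of management variables attached to the SLA. The variable names here are invented for illustration; the point is only that every implementation populates the same set.

```python
# Hypothetical management variables every realization must populate.
MGMT_VARS = ("operational_state", "packet_loss", "latency_ms")

def status(objects):
    """Roll up per-object management variables into one service view."""
    return {name: {v: vars_.get(v) for v in MGMT_VARS}
            for name, vars_ in objects.items()}

# Two objects realized by entirely different infrastructure report
# through the same variables, so the service view is uniform.
service = {
    "ip-vpn-core": {"operational_state": "up", "packet_loss": 0.001,
                    "latency_ms": 42.0},
    "fw-inline":   {"operational_state": "up", "packet_loss": 0.0,
                    "latency_ms": 1.5},
}
print(status(service)["fw-inline"]["operational_state"])   # up
```

Whether “fw-inline” is a physical appliance or a virtual function is invisible at this level, which is what makes implementation-opaque management possible.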
This is what creates both the support for an SDN/NFV revolution and a risk to that revolution’s benefits. If service agility and operations efficiency are the primary benefits of SDN and NFV, and if these benefits are actually realized using object/intent modeling above either SDN or NFV and embracing legacy options as well as “revolutionary” ones, then we could build agile services and efficient operations at least in part without the revolutions.
This isn’t to say that this higher-level approach would negate the value of SDN or NFV, only that it would force both SDN and NFV to focus on the specific question of how either technology could augment efficiency or agility inside the object/intent model. While I think you could make a strong case for both SDN and NFV doing better inside such a model, the incremental improvement would be smaller than the benefit that could be claimed if efficient object/intent models were applied only to SDN and NFV, leaving legacy technology to live with current practices.
That’s what I think is the big question facing the industry. We cannot realize service agility and operations efficiency purely within SDN or NFV, in part because neither really defines a full operations and service lifecycle model and in part because it’s unrealistic to assume a fork-lift from legacy to SDN/NFV with no transition state. Will SDN and NFV address the models within their own specifications and thus tend to associate model benefits with SDN and NFV, or will we have to solve operations and service modeling needs somewhere else, a place as likely to support legacy technology as the new stuff?
SDN and NFV cannot create agility, nor efficiency, by themselves—in no small part because the standards bodies have put many of the essential pieces in the “out-of-scope” category. What they can do is work within a suitable framework, and at the same time guide the requirements for that framework so that it doesn’t accidentally orphan new technology choices. I think we’re starting to see a glimmer of both these things in both SDN and NFV, and I’m hoping that will continue.