Yesterday, the CloudNFV project (of which I am Chief Architect) announced a new Integration Partner, Shenick Network Systems. Shenick is a premier provider of IP test and measurement capabilities, and I’m blogging about this not because of my connection with CloudNFV but because the announcement illustrates some important points about NFV and next-gen networking in general.
No matter how successful NFV eventually is, it will never completely displace either dedicated network hardware or static hosted information/processing resources. In the near term, we’ll certainly have a long period of adaptation in which NFV will gradually penetrate those areas where it is suitable, and in that period we’ll have to live and interwork with legacy components. Further, NFV is still a network technology and it will still have to accommodate testing/measurement/monitoring functions that are used today for network validation and diagnostics. Thus, it’s important for an NFV model to accommodate both legacy elements and these test/measurement functions.
One way to do that is by making testing and measurement an actual element of a service. In most cases, test data injection and measurement will take place at points specified by an operations specialist and under conditions where that specialist has determined there’s a need and an acceptable level of risk to network operations overall. So at the service level, we can say that a service model should be able to define a point of testing/monitoring as an interface, and connect a testing, measurement, or monitoring function to that interface as needed.
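As a rough illustration of that idea (the names and structure here are hypothetical, not drawn from any actual CloudNFV schema), a service model could expose a test point as just another interface, left unconnected until an operations specialist attaches a testing or monitoring function to it:

```python
# Hypothetical sketch: a service model exposing a named test point as an
# ordinary interface to which a test/monitoring function can be attached.

class ServiceModel:
    def __init__(self, name):
        self.name = name
        self.interfaces = {}   # interface name -> attached function (or None)

    def define_test_point(self, interface_name):
        # A test point is modeled as an interface, initially unconnected.
        self.interfaces[interface_name] = None

    def attach_function(self, interface_name, function_name):
        # An operations specialist connects a testing/monitoring function
        # only where need and acceptable risk have been determined.
        if interface_name not in self.interfaces:
            raise KeyError(f"no such test point: {interface_name}")
        self.interfaces[interface_name] = function_name

    def detach_function(self, interface_name):
        # When testing is complete, the interface reverts to unconnected.
        self.interfaces[interface_name] = None


svc = ServiceModel("business-vpn")
svc.define_test_point("pe-edge-tap")
svc.attach_function("pe-edge-tap", "traffic-probe")
print(svc.interfaces["pe-edge-tap"])  # traffic-probe
```

The point of the sketch is only that the test point lives in the model itself, so connecting and disconnecting a probe is a modeled operation rather than an out-of-band manual step.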
The question is how that function is itself represented. I blogged yesterday about the value of platform services in the cloud, services that were presented through a web-service interface and could be accessed by an application. It makes sense to assume that if there are a number of points in a network where testing/monitoring/measurement facilities exist, we should be able to link them to an interface as a platform service. This interface could then be “orchestrated” to connect with the correct point of testing defined in the service model, as needed.
Of course, there’s another possibility, which is that there is no static point where the testing and measurement is available. Shenick TeraVM is a hostable software testing and measurement tool, and while you can host it in specific places on bare metal or VMs, you can also cloud-host it. That means it would be nice if in addition to supporting a static location for testing and measurement to be linked with service test points, you could spawn a dynamic copy of TeraVM and run it somewhere proximate to the point where you’re connecting into the service under test.
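A sketch of how that static/dynamic choice might be resolved (all names and locations here are illustrative; this is not a Shenick or TeraVM API) could be as simple as preferring a pre-deployed instance where one exists and spawning a copy near the test point otherwise:

```python
# Hypothetical sketch: resolving a test request either to a pre-deployed
# (static) tool instance or to a dynamically spawned, cloud-hosted copy
# proximate to the service point under test.

STATIC_INSTANCES = {
    "dc-east": "tester@dc-east",
    "dc-west": "tester@dc-west",
}

def spawn_instance(location):
    # Stand-in for cloud orchestration: instantiate the testing tool
    # somewhere proximate to the point being tested.
    return f"tester@{location} (spawned)"

def resolve_tester(test_point_location):
    # Prefer a static platform-service instance where one exists;
    # otherwise spawn a dynamic copy on demand.
    if test_point_location in STATIC_INSTANCES:
        return STATIC_INSTANCES[test_point_location]
    return spawn_instance(test_point_location)

print(resolve_tester("dc-east"))   # tester@dc-east
print(resolve_tester("metro-7"))   # tester@metro-7 (spawned)
```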
What Shenick is bringing to CloudNFV (and to NFV overall) is the ability to do both these things, to support testing and measurement as a static set of platform service points and also as a virtual function that can be composed into a service at specific points and activated on demand. The initial application is the static model (because CloudNFV runs today in Dell’s lab, so dynamism in terms of location isn’t that relevant) but Shenick is committed to evolving a dynamic support model based on VNFs.
We need a way to connect testing, measurement, and monitoring into services because operations personnel rely on them today. What’s interesting about this Shenick approach is that it is also a kind of platform-services poster-child for NFV. There are plenty of other things that are deployed in a static way but linked dynamically to services. Take IMS, for example. You don’t build an IMS instance for every cellular user or phone call, after all. Same with CDN services. But if we have to build a service, in a user sense, that references something that’s already there and presented as a web-service interface or some other interface, we then have to be able to model that when we build services. That’s true whether we’re talking about CloudNFV, NFV a la ETSI ISG, or even traditional networking. Absent modeling we can’t have effective service automation.
In NFV, a virtual network function is a deployable unit of functionality. If that function represents something like a firewall that is also (and was traditionally) a discrete device, then it follows that the way a service model defines the VNF would be similar to the way it might define the physical device. Why not make the two interchangeable, then? Why not say that a “service model component” defines a unit of functionality and not necessarily just one that is a machine image deployed on a virtual machine? We could then model both legacy versions of a network function and the corresponding VNFs in the same way. But we could also use that same mechanism to model the “platform services” like testing and measurement that Shenick provides, which of course is what they’re doing. Testing and measurement is a function, and whether it’s static or hosted in the cloud, it’s the same function. Yes, we may have to do something different to get it running depending on whether it’s pre-deployed or instantiated on demand, but that’s a difference that can be accommodated in parameters, not one requiring a whole new model.
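One way to picture that "parameters, not a new model" argument (again a hypothetical sketch, not any standardized VNF descriptor) is a single component type whose function is fixed but whose activation path varies with a deployment parameter:

```python
# Hypothetical sketch: one "service model component" describes a unit of
# functionality; whether it is a legacy device, a pre-deployed platform
# service, or a VNF instantiated on demand is carried in parameters,
# not expressed as a different model.

from dataclasses import dataclass, field

@dataclass
class FunctionComponent:
    function: str                      # e.g. "firewall", "test-measurement"
    deployment: str = "legacy-device"  # or "static-platform", "vnf-on-demand"
    parameters: dict = field(default_factory=dict)

    def activate(self):
        # Activation differs by deployment style, but the modeled
        # function is the same in every case.
        if self.deployment == "vnf-on-demand":
            return f"deploy image {self.parameters.get('image')}, then start {self.function}"
        if self.deployment == "static-platform":
            return f"bind to endpoint {self.parameters.get('endpoint')} for {self.function}"
        return f"configure existing device {self.parameters.get('device')} as {self.function}"


fw_legacy = FunctionComponent("firewall", "legacy-device", {"device": "fw-03"})
fw_vnf = FunctionComponent("firewall", "vnf-on-demand", {"image": "fw-image-1.2"})
print(fw_legacy.activate())
print(fw_vnf.activate())
```

The same `FunctionComponent` could describe Shenick-style testing and measurement either as a static platform endpoint or as an on-demand instance, which is exactly the interchangeability the paragraph above argues for.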
I think the Shenick lesson here is important for both these reasons. We need to expect NFV to support not only building services but testing them as well, which means that testing and measurement have to be included in service composition and implemented either through static references or via VNFs. We also need to broaden our perception of service modeling for NFV somehow, to embrace the mixture of things that will absolutely have to be mixed in any realistic vision of a network service.
Both SDN and NFV present a challenge that I think the Shenick announcement brings to light. Network services require the coordination of a bunch of elements and processes. Changing one of them, with something like SDN or NFV, certainly requires that the appropriate bodies focus on what they’re changing, but we can’t lose sight of the ecosystem as we try to address the organisms.