Nothing is ever as easy as it seems, particularly in networking. We had proof of that this week as RAD announced a bit more about its “Distributed NFV” concept. “Classical NFV” says that you want to remove functions from devices and host them in servers. RAD says that’s not necessarily so; you want to put functions where they make the most sense, including in devices.
Other vendors have talked about hosting stuff on device blades. Juniper took that position quite a while ago with its universal edge concept, but they tied the idea more to SDN than to NFV, so it wasn’t recognized as an argument for a more flexible attitude toward function hosting. With RAD joining the chorus of “let the function kids play on devices too”, we have to ask whether “distribution” means distribution to any arbitrary hosting point.
RAD and Juniper would likely agree on their justification for edge-hosting in devices; you put features at the customer edge because that’s where the traffic is. Firewalls, NAT, security, and other virtual functions make sense at the edge if they can be kept in the data path and placed close enough to the customer that you don’t have to add complexity to pull out individual customer flows to act on them. I agree with this; you have to factor in the affinity of a function for the natural data path if you’re going to optimize its location.
What’s not clear, though, is whether you’re extending NFV to host on devices or just hosting on devices and citing NFV function distribution as your excuse. You’re doing the former if you have a generalized architecture for NFV orchestration and management that can accommodate hosting on devices. You’re doing the latter if you’re not linking the device hosting very explicitly to such a general NFV model. Right now, I think most people are doing the latter.
I just spent some time reviewing the data modeling for the CloudNFV initiative, which defines services by assembling what the data model calls “Nodes”, which represent TMF SID components like “Customer-Facing Service” or “Resource-Facing Service”, or the NFV notion of a “Package”. Each of these defines one or more service models to collect interfaces, and each service model has a Service Model Handler that can deploy it when called on. The standard Service Model Handler uses OpenStack and deploys via Nova, but you could develop one that deploys on devices instead. Thus, if you’re satisfied with the overall NFV capabilities of CloudNFV, you could create an NFV-based device-hosting model with it.
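To make that concrete, here is a minimal sketch of the pattern being described, in Python with invented class and method names rather than CloudNFV’s actual API: each service model carries a handler that knows how to deploy it, one handler targeting OpenStack/Nova and another targeting a customer-edge device.

```python
# Hypothetical sketch of the handler pattern described above. Class and
# method names are illustrative, not CloudNFV's real data model or API.

from abc import ABC, abstractmethod
from dataclasses import dataclass, field


@dataclass
class ServiceModel:
    name: str                                   # e.g. "firewall" or "NAT"
    interfaces: list = field(default_factory=list)
    properties: dict = field(default_factory=dict)


class ServiceModelHandler(ABC):
    """Knows how to deploy one service model onto some hosting point."""
    @abstractmethod
    def deploy(self, model: ServiceModel) -> str: ...


class NovaHandler(ServiceModelHandler):
    """Standard path: host the function as a VM via OpenStack Nova."""
    def deploy(self, model: ServiceModel) -> str:
        # Placeholder for a Nova boot request; real code would use the
        # OpenStack SDK with image/flavor taken from model.properties.
        return f"nova-vm:{model.name}"


class EdgeDeviceHandler(ServiceModelHandler):
    """Alternative path: push the function to a customer-edge device."""
    def deploy(self, model: ServiceModel) -> str:
        device = model.properties.get("target_device", "cpe-unknown")
        return f"device:{device}:{model.name}"


@dataclass
class Node:
    """Roughly a TMF SID-style component such as a Customer-Facing Service."""
    name: str
    service_models: list = field(default_factory=list)
    handlers: dict = field(default_factory=dict)   # model name -> handler

    def deploy(self) -> list:
        return [self.handlers[m.name].deploy(m) for m in self.service_models]
```

The point of the sketch is that nothing above the handler layer cares whether the hosting point is a server or a device; that indifference is what would make device hosting “NFV” rather than a side door.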
The trick here is to be able to define services so that for each unit of functionality you can specify optimizing properties that determine the best place to put it. I think operators would not only accept but applaud a function-hosting strategy that admitted devices into the picture, as long as they could define how they believed the capex benefits of server hosting stacked up against the potential complexity and operations-cost risks of that strategy.
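As a purely illustrative sketch of what such optimizing properties might feed, here is a toy placement rule; the property names and cost figures are invented, and the only point is that data-path affinity and operations-cost risk sit alongside capex as explicit inputs.

```python
# Illustrative only: a toy scoring rule for "server" versus "edge device"
# hosting of a single function. Weights and numbers are made up.

def choose_host(function_props: dict) -> str:
    """Return 'server' or 'device' for a single unit of functionality."""
    capex_saving  = function_props.get("server_capex_saving", 0.0)   # $/month
    opex_risk     = function_props.get("server_opex_risk", 0.0)      # $/month
    path_affinity = function_props.get("data_path_affinity", 0.0)    # 0..1

    # A function glued to the customer data path pays a penalty for being
    # pulled into a central server (hairpinning, per-flow classification).
    affinity_penalty = 100.0 * path_affinity

    server_score = capex_saving - opex_risk - affinity_penalty
    return "server" if server_score > 0 else "device"


# Example: a firewall with strong data-path affinity stays on the device.
print(choose_host({"server_capex_saving": 60.0,
                   "server_opex_risk": 20.0,
                   "data_path_affinity": 0.9}))   # -> "device"
```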
Probably the big determinant of the value of distributing functions into network edge devices (or, in theory, any other device) is that question of benefit/risk balance. It’s likely that capital costs would be lower for handling functions via servers, but it’s not clear what the operations costs would be. Both SDN and NFV currently suffer from under-articulated management. There is no clear and elegant model proposed for either to ensure that injecting dynamism into a network by hosting software to control its behavior doesn’t explode complexity and drive up operations costs.
If you under-articulate management, you tend to create a convergence on the legacy model. SDN and NFV either have their own management architecture or they inherit an existing one, which in practice means inheriting the current one. That would mean we’d be managing “virtual devices” that corresponded to the real devices we already manage. It sounds logical, but let me offer an SDN example to show it’s not like that at all.
Suppose we have a hundred OpenFlow switches and an SDN controller, and above the controller we have our northbound application that imposes a service model we call “IP”. We want to manage this, and so we have one hundred “router MIBs” to represent the network. The problem is that the “routers” are really composites of local forwarding and centralized route control, so we can’t go to the routers for full knowledge of conditions or for management. We have to create proxies that will “look like” routers but will in fact grab state information not only from the local devices but from central control elements. How?
If we centralize these virtual MIBs, they’re not addressed the way real router MIBs would be. If we distribute them to the routers so we can address the “SNMP port” on the router, we then have to let every router dip into central management state to find out what its full virtual functional state is. And you can probably see that it’s worse in NFV, with functions scattered everywhere on devices whose real management properties bear no resemblance to the virtual properties of the thing we’re hosting.
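Here is a rough sketch of what such a proxy might look like; the class and the two state sources are hypothetical stand-ins, since real code would sit behind an SNMP agent on one side and a controller API on the other.

```python
# A sketch of the proxy idea: a "virtual router MIB" that answers management
# queries by merging state from the local OpenFlow switch with route state
# held by the central controller. Both sources are stubbed as callables.

class VirtualRouterMIB:
    def __init__(self, switch_id, get_local_state, get_central_state):
        self.switch_id = switch_id
        self._local = get_local_state      # e.g. port/flow counters on the device
        self._central = get_central_state  # e.g. routes the controller installed

    def get(self, name: str):
        """Resolve a 'router' object from whichever source actually owns it."""
        local = self._local(self.switch_id)
        central = self._central(self.switch_id)
        composite = {
            "ifInOctets": local.get("in_octets"),
            "ifOutOctets": local.get("out_octets"),
            "ipRouteTable": central.get("routes"),   # only the controller knows this
            "operStatus": "up" if local.get("link_up") and central.get("reachable")
                          else "down",
        }
        return composite.get(name)


# Usage: one proxy per "router", fed by both the device and the controller.
mib = VirtualRouterMIB(
    "switch-17",
    get_local_state=lambda s: {"in_octets": 123456, "out_octets": 98765, "link_up": True},
    get_central_state=lambda s: {"routes": ["10.0.0.0/24 via 10.0.1.1"], "reachable": True},
)
print(mib.get("ipRouteTable"))
```

Whether that composite state lives centrally or on the device, somebody has to build and operate it, and that’s exactly the operations cost the current specs don’t articulate.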
So here’s my take on the notion of function-hosting in devices. You have to look at whether there is a cohesive deployment and management strategy that treats devices and servers as co-hosts. If there is, you have truly distributed NFV. If not, then you have a way of hosting functions in network devices, which doesn’t move the ball and could in fact be flying in the face of the operators’ goals with SDN and NFV.