The majority of the current network hype has been focused on SDN, and either despite the attention or because of it, SDN hasn't produced much beyond the hype. We have so much SDN-washing that it's hard to see what's even being washed any more. Laundry fatigue in tech? A new concept at last!
NFV is a newer concept than SDN, and one that so far doesn't have a show with vendors exhibiting and issuing press releases. There are vendors voicing support for NFV (Intel did so just last week), but so far the claims are muted and even a bit vague. The second of the NFV global meetings is being held this week, and the run-up to that meeting is a good time to review the issues the body will have to address.
The goal of NFV is to unload features from network appliances, presumably including even switches and routers, and host them in generic server farms. This, operators hope, will reduce costs and help them overcome their current profit squeeze. It's also hoped that the architecture supporting this process, which is where "network function virtualization" comes from semantically, will provide a framework for agile feature creation. That could make operators effective competitors in a space that's now totally dominated by the OTT and handset players.
A virtualized anything is a step on a path back to reality, obviously, and that path has five stages. You start by defining a set of abstractions that represent behaviors people are willing to pay for: services. You then decompose these into components that can be assembled and reassembled to create useful stuff, the process that defines virtual functions or a hierarchy of functions and sub-functions. These atomic goodies have to be deployed on real infrastructure, hosted on something. Once they're hosted, they have to be integrated, in that there has to be a mechanism for the users to find them and for them to find each other. Finally, workflow has to move among these functions to support their cooperative behavior, the behavior that takes us back to the service we started with.
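To make those five stages a little more concrete, here's a minimal, purely illustrative sketch in Python. Every class, field, and name in it is my own shorthand, not anything the NFV body has defined, and the "deployment" here is just a bookkeeping stand-in for real hosting.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional

@dataclass
class VirtualFunction:
    """An atomic, hostable piece of a service (e.g. a rule manager or a forwarder)."""
    name: str
    host: Optional[str] = None   # filled in when the function is deployed on real infrastructure

@dataclass
class Service:
    """The abstraction a customer actually pays for."""
    name: str
    functions: List[VirtualFunction] = field(default_factory=list)

    def deploy(self, placement: Dict[str, str]) -> None:
        """Stage three: bind each virtual function to something real that hosts it."""
        for vf in self.functions:
            vf.host = placement[vf.name]

    def registry(self) -> Dict[str, Optional[str]]:
        """Stage four: the integration view, so users and functions can find each other."""
        return {vf.name: vf.host for vf in self.functions}

# Stages one and two: define the service and decompose it into virtual functions.
firewall_service = Service("managed-firewall",
                           [VirtualFunction("rule-manager"), VirtualFunction("forwarder")])
# Stage three: host the functions on generic servers.
firewall_service.deploy({"rule-manager": "server-a", "forwarder": "server-b"})
# Stage four: discovery; stage five (workflow) would then move work across these hosts.
print(firewall_service.registry())
```

The point of the toy isn't the code; it's that each stage has to be answered somewhere in the architecture before the one after it makes any sense.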
NFV, as a body, has to be able to define this process from start to finish to create NFV as a concept. What, for example, are the services that we propose to address? One example already given is the firewall service; another is content delivery network (CDN) services. Even at this point, we have potential issues to address.
Firewalls separate endpoints from traffic by creating a barrier through which only "good" stuff can pass. It follows that they're in the data flow for the endpoints they serve. So does this mean that we feed every site through a software application that hosts selective forwarding? That might be practical up to a point, but servers aren't designed to be communications engines operating at multi-gigabit speeds. Intel clearly wants to make it possible, but is it practical, or should we be thinking about a kind of switch-like gadget that does the data plane handling and is controlled by a virtual function that needs only to process rule changes? Good question.
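Here's a hedged sketch of what that split might look like: the hosted virtual function only computes rule changes, and a switch-like forwarding element applies them at line rate. The class names, the rule format, and the control interface are all assumptions of mine; a real design would push the rules through something like OpenFlow or a device API rather than a Python method call.

```python
class ForwardingElement:
    """Stands in for a switch-like device that does line-rate packet handling."""
    def __init__(self):
        self.rules = []

    def apply_rule(self, rule):
        # In reality this would be a device or controller API; here we just record the rule.
        self.rules.append(rule)

class FirewallFunction:
    """Hosted virtual function: decides policy, never touches packets."""
    def __init__(self, element: ForwardingElement):
        self.element = element

    def allow(self, src, dst, port):
        self.element.apply_rule({"action": "allow", "src": src, "dst": dst, "port": port})

    def deny_all(self):
        self.element.apply_rule({"action": "deny", "match": "any"})

fw = FirewallFunction(ForwardingElement())
fw.allow("10.0.0.0/24", "10.0.1.5", 443)
fw.deny_all()
```

In that model the server farm only has to keep up with policy changes, not with the traffic itself, which is a very different scaling problem.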
Even higher up the tree, in the conceptual sense, is the question of what we're serving here. If we need to have endpoints supported by firewalls, it follows that we need some conception of an endpoint. Who owns it, how is it connected in a protocol sense, how is it managed and who's allowed to exercise that management, and what functions (like firewalls) are associated with it? In software terms, an endpoint is an object and an enterprise network is a collection of endpoints. Who owns/hosts the object that represents each endpoint, and who owns the association we're calling an "enterprise network"?
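If it helps to picture the endpoint-as-object idea, here's one toy reading of it. The fields are just my restatement of the questions above, not a proposed data model, and they deliberately leave the "who owns/hosts this object" question unanswered.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Endpoint:
    owner: str                                        # who owns it
    address: str                                      # how it's connected in a protocol sense
    managers: List[str] = field(default_factory=list) # who may exercise management
    functions: List[str] = field(default_factory=list)# associated functions, e.g. "firewall"

@dataclass
class EnterpriseNetwork:
    """The association of endpoints; where this object lives is exactly the open question."""
    name: str
    endpoints: List[Endpoint] = field(default_factory=list)

hq = Endpoint(owner="acme", address="203.0.113.10",
              managers=["acme-netops"], functions=["firewall"])
acme_net = EnterpriseNetwork("acme-enterprise", [hq])
```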
We can do the same thing with CDNs. We have a concept of a CDN service as something that delivers a content address (presumably the address of an optimized cache location) to a user in response to the user having clicked on a URL. One element of this, obviously, is that we have to decode URLs, which is a DNS function. Do we have a special DNS for this? Does every user have their own "copy" or "instance" of the DNS logic? Remember, in the enterprise firewall application we likely had an instance of the app for each user site; it's not likely that will scale here. Also, the DNS function is a component of many applications; is it shared? How do we know it can be? Is "caching content" different from storing something in the cloud? How do we integrate knowledge of whether the user is an authenticated "TV Everywhere" client entitled to the video? Obviously we don't want to host a whole customer process for every individual customer; we want to integrate an HSS-like service with DNS and storage to create the CDN. That's a completely different model, so is it a completely different architecture? If so, how would we ever be able to build architectures fast enough to keep pace with a competitive market?
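A rough sketch of that "shared components" reading might look like the following. Here the DNS, the cache map, and the HSS-like authentication piece are single shared instances rather than per-customer copies, and only the thin composition layer is CDN-specific. Every name and interface here is hypothetical; it's one way the decomposition could go, not the way.

```python
class SharedDNS:
    """A shared resolution function: URL in, best cache address out."""
    def __init__(self, cache_map):
        self.cache_map = cache_map          # url -> optimized cache location
    def resolve(self, url):
        return self.cache_map.get(url)

class SharedAuth:
    """Stands in for an HSS-like subscriber database shared across services."""
    def __init__(self, entitled_users):
        self.entitled_users = set(entitled_users)
    def is_entitled(self, user):
        return user in self.entitled_users

class CDNService:
    """Composes the shared pieces; only this composition is CDN-specific."""
    def __init__(self, dns, auth):
        self.dns, self.auth = dns, auth
    def handle_click(self, user, url):
        if not self.auth.is_entitled(user):
            return "not authorized"
        return self.dns.resolve(url) or "no cache found"

cdn = CDNService(SharedDNS({"video/1": "cache-east-1"}), SharedAuth(["alice"]))
print(cdn.handle_click("alice", "video/1"))   # -> cache-east-1
```

Whether components like the DNS and the subscriber database can really be shared safely across services is exactly the kind of question the architecture has to answer.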
You can see that I've filled a whole blog with questions about two possible applications in the first of the five stages of execution. That's the NFV challenge, and it's a challenge that only a great architecture can resolve. So that's what everyone, you and me and all the operators and vendors, needs to be looking for out of meetings like this week's session. Can NFV produce a great architecture?
If they fail, does NFV fail? Not likely. There are way too many players behind this. We may have a failure of process—what carrier standards group in the last decade has produced a commercially and technically viable standard in time to be useful—but we’ll have a result produced by the market’s own natural competitive forces even if we don’t create one by consensus. I’d sure like to see consensus work here, though. It would be a healthy precedent in an industry that needs collective action to face formidable challenges.