NFV Savings and Impacts: From Both Sides

I’ve been reading a lot of commentary on network functions virtualization (NFV) and I’m sure that you all have been too.  Most of it comes from sources who are not actively involved with NFV in any way, and since the NFV ISG’s work isn’t yet public it’s a bit hard for me to see how the comments are grounded in real insight.  It’s largely speculation, and that’s always a risk, particularly at the high level when the question of “what could NFV do to infrastructure” is considered.  Sometimes the best way to face reality is to look at the extremely possible versus the extremely unlikely and work inward from both ends till you reach a logical balance.

If you think that we’re going to run fiber to general-purpose servers and do optical cross-connect or even opto-electrical grooming, be prepared to be disappointed.  General-purpose servers aren’t the right platform for this sort of thing, for a couple of reasons.  First, these applications are likely highly aggregated, meaning that breaking one breaks services for a bunch of users.  That demands very high availability, the kind that’s better engineered into devices than added on through any form of redundancy or fail-over.  Second, the hardware cost of transport devices, amortized across the range of users and services, isn’t that high to begin with.  Bit-movement applications aren’t likely to be directly impacted by NFV.

On the other hand, if you are selling any kind of control-plane device for any kind of service and you think that your appliance business will keep booming, think again.  There is absolutely no reason why these kinds of applications can’t be turned into virtual functions.  All of IMS signaling and control is going to be virtualized.  All of CDN is going to be virtualized.  The savings here, and the agility benefits that could accrue, are profound.

Let’s move inward a bit toward that point of convergence.  If we look at middle-box functionality, the load balancers, firewalls, and application delivery controllers, we see that these functions aren’t typically handling the traffic loads needed to stress out server interfaces.  Most middle-box deployment is associated with branch offices in business services and with service-edge functions for general consumer services.  The question for these, in my view, is how much of it virtual hosting could displace.

If we presumed that corporate middle-boxes were the target, I think the average operator might well prefer to host the functions at the network’s edge and present a dumbed-down, simple interface to the premises.  Customer-located equipment can be expensive to buy and maintain.  Since most service “touch” is easily applied at the customer attachment point and much harder to apply deeper in the network, virtual hosting there could add services like security and application delivery control with very little effort.  Based on this, there would be strong pressure to replace service-edge devices with hosted functions.

On the other side of the coin, though, look at a consumer gateway.  We have this box sitting on the customer premises that terminates their broadband and offers them DHCP services, possibly DNS, and almost always NAT and firewall.  Sure, we can host these functions, but these boxes cost the operator perhaps forty bucks max and they’ll probably be installed for five to seven years, giving us a rough amortized cost of six dollars and change per year.  To host these functions in a CO could require a lot of space, and the return on the investment would be limited.
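To put rough numbers on that claim, here’s a back-of-the-envelope sketch in Python.  The forty-dollar unit cost and the five-to-seven-year installed life are the assumptions from the paragraph above, not operator data:

```python
# Back-of-the-envelope amortization of a consumer gateway.
# UNIT_COST_DOLLARS and the lifetime range are assumptions taken from the
# text above, not measured operator figures.
UNIT_COST_DOLLARS = 40.00

for lifetime_years in (5, 6, 7):
    annual = UNIT_COST_DOLLARS / lifetime_years
    monthly = annual / 12
    print(f"{lifetime_years}-year life: ${annual:.2f}/year (${monthly:.2f}/month)")

# 5-year life: $8.00/year ($0.67/month)
# 6-year life: $6.67/year ($0.56/month)
# 7-year life: $5.71/year ($0.48/month)
```

Call it six to eight dollars a year, well under a dollar a month per subscriber; that’s the budget a hosted, CO-resident version of these functions would have to beat.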

This edge stuff is the current “NFV battleground state”.  You can already see box vendors addressing the risks by introducing “hosting” capability into their boxes.  A modern conception of an edge device is one that combines basic packet-pushing with service-feature hosting, which essentially makes a piece of the box an extension of NFV infrastructure (provided that NFV deployment can actually put something there and manage it).  You can also see software vendors looking at how they could create better economies of scale for middle-box functions that support consumer or SMB sites and thus carry relatively low CPE costs.

If we move up from our “unlikely side,” the next thing we encounter is large switch/router products.  These products, like transport optics, are likely doing a lot of aggregating and thus have availability requirements to consider, and their high data rates create a major risk of swamping general-purpose technology with traffic, even with acceleration features.  If we were to presume that the network of the future was structurally 1:1 with that of the present, with all the layers and devices in either virtual or real form, I think we could declare this second transport level off-limits.

But can we?  First, where aggregation devices are close to the network edge, in the metro for example, we probably don’t have the mass traffic demand; certainly nothing hopelessly beyond server capability.  Second, even if we presume that a device might be needed for traffic-handling or availability management, it’s possible that NFV could get an assist from SDN.  SDN could take the functions of switching or routing and separate them into control-plane and data-plane behaviors; the former could be NFV-hosted and the latter could run on commodity hardware.  That would make any win for legacy device technology at this first level of aggregation a Pyrrhic victory indeed.  All that needs to happen is that we frame the notion of aggregation services in the metro in a nice service-model-abstraction way, so that we can set up network paths as easily as OpenStack Neutron sets up subnets to host application components.
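To make the Neutron analogy concrete, here’s a minimal sketch of what that subnet-level abstraction looks like from the cloud side.  It assumes the openstacksdk Python library and a cloud named “mycloud” defined in clouds.yaml; the names and CIDR are illustrative, and the metro-path call mentioned in the final comment is purely hypothetical:

```python
# Minimal sketch: Neutron exposes "give me a network and a subnet" as two
# abstract calls (assumes openstacksdk and a "mycloud" entry in clouds.yaml).
import openstack

conn = openstack.connect(cloud="mycloud")

# One call creates the logical network...
net = conn.network.create_network(name="app-tier-net")

# ...and one more carves out the subnet that application components attach to.
subnet = conn.network.create_subnet(
    network_id=net.id,
    name="app-tier-subnet",
    ip_version=4,
    cidr="10.20.30.0/24",
)

print(f"Created {net.name} with subnet {subnet.cidr}")

# The SDN/NFV partnership argued for above would need an equally abstract
# call for metro aggregation, something like create_metro_path(endpoints,
# capacity); that call is hypothetical here, but it is exactly the
# service-model abstraction in question.
```

If setting up an aggregation path in the metro were as close to a one-liner as this, the case for keeping a purpose-built box at that layer gets much weaker.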

This is the key point of our middle ground, the key factor in deciding how far from the “good side” of potential NFV applications we can really expect NFV to go.  If you look at the technology in isolation, as middle-box hosting, then the impact is limited.  If you look at NFV as edge hosting, then there are a number of very logical steps that could make NFV much more broadly successful.  And the more successful it is, the more of metro networking (which is where edges and aggregation are located, after all) gets translated into NFV applications.  And NFV applications are cloud applications where traffic is pre-aggregated by data-center switching.  That means you could consume optics directly, and you’d end up with a metro network consisting of NFV data centers linked with lambdas and fed by a thin, low-cost access network.  If you believe in an NFV revolution, this is what you have to believe in, and the big step in getting there is a service-model partnership between SDN and NFV.
