How NFV Might Set the IT/Network Balance

The most fundamental question we in networking face is not whether we’ll see a transformation of the network, but what form that transformation will take.  As I suggested yesterday in my blog, fundamental changes in what a network delivers to its customers will always require changes in how the network operates.  We’re seeing those fundamental changes in content delivery, mobility, social networking, and more.  We’re seeing direct initiatives to change networking—SDN and NFV.  How these will interact will shape our future as an industry and determine who lives and who doesn’t.

All of our network revolutions are, under the covers, based on substituting separate, IT-hosted intelligence for intelligence embedded in devices.  We’re looking to this shift to simplify service creation and operations, lowering costs while improving services and revenues for operators.  For the enterprise, the changes could open new partnerships between information resources, mobile devices, and workers—partnerships that improve productivity.  There are a lot of benefits driving this change, so it’s important that we look at how the shift impacts networking, IT, and the players.

Cloud computing creates the most basic threat to traditional network devices, even if the impact is accidental.  Tenant control and isolation in the cloud have relied on software-switch technology from the first.  This vSwitch use may not displace current switches or routers, but it caps the growth of traditional switching and routing in cloud data centers.  My model suggests that fully a third of all new applications for switching/routing in the next three years will end up being implemented on virtual technologies, and that’s money right out of network hardware vendors’ pockets.
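
To make the mechanism concrete, here’s a minimal sketch of how per-tenant isolation can live entirely in a software switch rather than in the hardware below it.  The class and field names are illustrative, not any real vSwitch API.

```python
# Hypothetical sketch: tenant isolation enforced in host software, with no
# feature support required from the physical switch underneath.

from dataclasses import dataclass

@dataclass
class Frame:
    tenant_id: str   # assigned by the hypervisor based on the sending VM's port
    src_mac: str
    dst_mac: str
    payload: bytes

class VSwitch:
    def __init__(self):
        # The MAC learning table is scoped per tenant, so tenants can reuse
        # addresses and can never see each other's traffic.
        self.tables = {}   # tenant_id -> {mac: port}

    def learn(self, frame, in_port):
        self.tables.setdefault(frame.tenant_id, {})[frame.src_mac] = in_port

    def forward(self, frame, in_port):
        self.learn(frame, in_port)
        table = self.tables.get(frame.tenant_id, {})
        out_port = table.get(frame.dst_mac)
        if out_port is None:
            # Unknown destination: flood only to ports already learned for
            # this tenant (a simplification of real flooding behavior).
            return [p for m, p in table.items() if p != in_port]
        return [out_port]
```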

It gets worse, though.  The popular VMware “overlay network” acquired with Nicira creates what’s effectively a set of tenant/application overlay networks that ride on lower-level (so far, largely Ethernet) substrates.  The overlay model focuses visible network features “above” traditional network devices.  That dumbs down the feature requirements at the lower (hardware) level, which makes differentiation and margins harder to achieve.
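
A rough sketch of the overlay idea follows, assuming a VXLAN-like encapsulation rather than the actual Nicira/NSX implementation: all the tenant-visible networking lives in the encapsulation software at the edge, while the underlay only has to deliver plain packets between servers.

```python
# Illustrative overlay encapsulation: each tenant frame is wrapped with a
# virtual-network ID and carried host-to-host over ordinary IP/Ethernet.

import struct

VNI_HEADER = struct.Struct("!I")   # 32-bit virtual network identifier

def encapsulate(vni: int, inner_frame: bytes) -> bytes:
    """Wrap a tenant frame; the underlay never inspects the inner frame."""
    return VNI_HEADER.pack(vni) + inner_frame

def decapsulate(outer: bytes) -> tuple[int, bytes]:
    """Recover the tenant network ID and the original frame at the far host."""
    (vni,) = VNI_HEADER.unpack_from(outer)
    return vni, outer[VNI_HEADER.size:]

# Addressing, segmentation, and policy all live in the encap/decap software;
# the hardware below just moves opaque bytes.
vni, frame = decapsulate(encapsulate(7001, b"\x00\x01\x02"))
```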

SDN could take this further.  If you presume that application/tenant connection control is abstracted into virtual functionality, then it’s easy to pull some of the discovery and traffic management features from the devices—where they’re implemented in adaptive behaviors and protocols—and centralize them.  Thus additional feature migration out of the network drives even more commoditization, ending with (so many say) white-box switches that are little more than merchant silicon and a power supply.
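
Here’s a small illustrative sketch of that centralization—not any particular controller’s API: a controller holds the topology, computes paths, and pushes simple match/action entries to switches that no longer run their own discovery or routing protocols.

```python
# Toy SDN controller: topology and path computation are centralized, and
# switches are reduced to forwarding tables that get populated from above.

from collections import deque

class Controller:
    def __init__(self, links):
        # links: {switch: [neighbor switches]}, learned centrally rather than
        # by each device running adaptive discovery.
        self.links = links
        self.flow_tables = {s: {} for s in links}   # switch -> {dst: next hop}

    def shortest_path(self, src, dst):
        parent, queue = {src: None}, deque([src])
        while queue:
            node = queue.popleft()
            for nbr in self.links.get(node, []):
                if nbr not in parent:
                    parent[nbr] = node
                    queue.append(nbr)
        if dst not in parent:
            return []
        path, node = [], dst
        while node is not None:
            path.append(node)
            node = parent[node]
        return list(reversed(path))

    def install_route(self, src, dst):
        # Push one match/action entry per hop; the switch only matches and forwards.
        path = self.shortest_path(src, dst)
        for hop, nxt in zip(path, path[1:]):
            self.flow_tables[hop][dst] = nxt

ctrl = Controller({"s1": ["s2"], "s2": ["s1", "s3"], "s3": ["s2"]})
ctrl.install_route("s1", "s3")
print(ctrl.flow_tables["s1"])   # {'s3': 's2'}
```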

It’s not just what’s happening, but where it’s happening, that matters.  All of this feature exodus is centered on the data center, and that has profound short- and long-term implications.  The short-term issue is that data center network planning and evolution is driving enterprise networking overall, which means that as we dumb down the network we reduce the ability of network vendors to control their buyers’ network planning.  IT vendors are the natural recipients of strategic influence as the simplification of network devices pulls gold flakes of value out of “the network” and carries them into servers.

The long-term issue is that if the data center is the planning focus for buyers, then principles of network virtualization that are applied there would inevitably spread to the rest of the network.  I’ve noted many times that if you presumed application-based virtual networking in the data center, you could establish the same structure in branch offices and then link your virtual networks with simple featureless pipes.  In theory, transit IP networking could be displaced completely—you use tunnels or groomed optical connections depending on bandwidth and costs.  With this change, all the current security concepts other than digital signing of assets are at risk.

The impact of this on vendors is obvious.  Just the introduction of cloud computing is enough to create considerable downward pressure on prices for data center switching (Facebook’s approach is a proof point), which means that vendors must accept lower profit margins.  Every additional step in the migration of valuable features further erodes vendor pricing power, so even if you assume that a vendor (Cisco comes to mind as an example) would attempt to provide SDN-like capabilities without obsoleting current hardware, the current hardware prices would still decline.  There is no stopping this trend, period.

NFV now enters the picture, and it’s a bit more complicated in terms of impact.  At one level, NFV is an explicit step in this feature-migrating trend because it’s an architecture designed to host service components rather than embed them.  At another level, though, NFV could be the way that networking gets back into its own game.

I’m not talking about the kind of NFV you typically read about, though.  I don’t personally think that much of what’s being “demonstrated” today as NFV has much relevance either to NFV conceptually or to the market overall.  The focus of most “NFV” is simply to host stuff, which isn’t much more than what cloud computing does naturally.  Making this hosted alternative to appliances agile and operationally efficient is the key—and not just to NFV’s own business case.

The important thing about NFV, IMHO, is that it could provide a blueprint for creating application/service instances that mix network and hosted functionality and manage the whole service lifecycle automatically.  NFV is therefore a kind of mixture of IT principles and network principles, befitting the fact that the cloud is a mixture of the two technologies at the service level.  A vendor who can harness the union of IT and networking is harnessing the very path that value migration is flowing along.  They can expedite that migration (if they’re an IT player) or they can try to direct more of the feature value into traditional network lines.
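
As a hedged sketch of what such a blueprint might look like (the class names and lifecycle steps below are illustrative, not the ETSI NFV data model), a service is composed from components that may be hosted software or embedded device features, and a single model drives deployment and healing for both.

```python
# Illustrative service model mixing hosted and device-resident components,
# with lifecycle automation handled in one place.

from dataclasses import dataclass, field

@dataclass
class Component:
    name: str
    kind: str            # "hosted" (VM/container) or "device" (embedded feature)
    state: str = "defined"

@dataclass
class ServiceModel:
    name: str
    components: list = field(default_factory=list)

    def deploy(self):
        # One orchestration step covers both hosted and device-resident parts.
        for c in self.components:
            c.state = "active"

    def heal(self, failed: str):
        # Lifecycle automation: redeploy hosted parts, reconfigure device parts.
        for c in self.components:
            if c.name == failed:
                c.state = "redeploying" if c.kind == "hosted" else "reconfiguring"

vpn = ServiceModel("business-vpn", [
    Component("edge-router", "device"),
    Component("firewall-vnf", "hosted"),
    Component("nat-vnf", "hosted"),
])
vpn.deploy()
vpn.heal("firewall-vnf")
```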

You can see some of this process today with the whole issue of in-cloud versus in-device feature hosting.  If you’re an IT vendor you want everything to be hosted in servers.  If you’re a network vendor you’d like everything to be hosted in devices.  Neither extreme outcome is likely, and so you’re going to have to address a future where features run where they fit best, which means that network devices will gradually become “feature hosts” as well as bit-pushers.  Those who really control an NFV solution will be able to optimize the platform choices in favor of their own solutions.  Those who don’t will have to fit somehow into another vendor’s approach, which is hardly likely to leave them an optimum spot to grow.
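
A toy example of the kind of placement decision that whoever controls the NFV layer gets to make; the inputs and the rule here are hypothetical, but they show how a platform owner can tilt the choice toward its own gear or its own servers.

```python
# Hypothetical placement rule: latency-critical features stay in the data-path
# device, everything else lands on whichever platform is cheaper to operate.

def place_feature(feature: str, server_cost: float, device_cost: float,
                  latency_sensitive: bool) -> str:
    """Return where a feature should run under a simple cost/latency rule."""
    if latency_sensitive:
        return "device"
    return "server" if server_cost <= device_cost else "device"

print(place_feature("deep-packet-inspection", server_cost=3, device_cost=5,
                    latency_sensitive=False))   # -> "server"
```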

NFV is important not because it might drive down opportunities for network appliances; competition and commoditization will do all the damage needed in that area, and provide most of the operator benefits.  It’s important because it’s an industry initiative that, as a byproduct of its primary mission, is forcing us to focus on that critical IT/networking union, and on the concept of composing services the way we compose applications.  That’s important to NFV, to networking, to the cloud, and to vendors.

So who wins?  Right now, Alcatel-Lucent and HP are both poised to maybe-do-something-useful in the NFV space.  IBM seems well-positioned to act, and so does Oracle.  For the rest of the IT and networking vendors, it’s still not clear that anyone is ready to step up and solve the union-izing problem NFV presents.  Yeah, they’ll sell into an NFV opportunity someone else creates, but they don’t want to do the heavy lifting.  However, the barriers to doing NFV right aren’t all that high.  I’ve worked on two projects to implement an open NFV model, and neither of them required significant resources to deliver.  Leadership in NFV, then, is still up for grabs, and whoever ends up grabbing it most effectively may not only become the new leader in networking, they may define how networking and IT coexist for decades to come.