Virtual Networking’s Dirty Operations Secret

Huawei seems to be projecting a future where network equipment takes a smaller piece of the infrastructure budget—IT and software getting a growing chunk.  Genband seems to be envisioning a UC/UCC space that’s also mostly in as-a-service software form, and they’re also touting NFV principles.  It would seem that the industry is increasingly accepting the transition to a “soft” network.

The challenge for “the industry” is that it’s probably not true that a simple substitution of hosted functionality for dedicated devices would cut operator costs enough to alter the long-term industry profit dynamic.  In fact, my model says that simple pipelined services to SMB or consumer customers would cost, in the net, significantly more to operate in hosted form.  Even for business users, the TCO for a virtual-hosted branch service could almost be a wash versus dedicated devices; certainly not enough of an improvement to boost operators’ bottom lines.

I’ve already noted that I believe the value of a more software-centric vision of network services will come not from how it can make old services cheaper but how it can create new services that aren’t even sold today and that will thus fatten the top line.  But there’s a challenge with this new world, whatever the target of the software revolution might be, and it’s related to the operationalization of a software-based network.

Networks of old were built by gluing customer-and-service-specific devices together with leased-line or virtual-circuit services.  We managed networks by managing devices, and the old OSI management model of element-network-service management layers was tried and true.  When we started to transition to VPN services, we realized that when you added that word “virtual” to a “private network” you broke the old mold.  VPNs and other virtual services are managed via a management portal into a carrier management system and not by letting all the users of shared VPN infrastructure hack away at the MIBs of their devices.  Obviously we’re going to have to do the same thing when we expand virtualization through SDN or NFV, or even through just the normal “climbing the stack” processes of adding new service value.  In fact, we’re going to have to do more.

There’s something wonderfully concrete about a nice box, something you can stick somewhere to be a service element, test, fix, replace, upgrade, etc.  Make that box virtual and it’s a significant operational challenge to answer the question “Where the heck is Function X?”  In fact, it’s an operational challenge to put the function somewhere, meaning to address all the cost, performance, administrative, regulatory, and other constraints that collectively define the “optimum” place to host something.  Having made the decision, though, it’s clear that we can’t decide on how to “manage” our virtual function by tearing it down and putting it back again, which means we have to find all the pieces and redefine their cooperative relationship.  This is something that we have little experience with.

The TMF, a couple of years ago, was working on this problem, and while I’m not particularly a fan of the body (as many of you know), they actually did good, seminal work in the space.  Their solution was something called the “NGOSS Contract”, and it was in effect a smart data model that described not only what the service constraints were—the things that would have to define where stuff got hosted and how it was connected—but also described what resources were committed to the service and how those resources could be addressed in the service lifecycle process.
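
To make the idea concrete, here is a rough sketch, in Python rather than in the TMF’s own notation, of what a contract-style model might look like.  Every name and field here is illustrative, not the NGOSS schema itself:

from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class ResourceCommitment:
    resource_id: str          # e.g. a VM, a vSwitch port, or a tunnel endpoint
    role: str                 # the function this resource plays in the service
    mgmt_endpoint: str        # how lifecycle processes address the resource

@dataclass
class ServiceContract:
    service_id: str
    constraints: Dict[str, str] = field(default_factory=dict)   # placement, SLA, regulatory
    commitments: List[ResourceCommitment] = field(default_factory=list)

    def resources_for(self, role: str) -> List[ResourceCommitment]:
        """Answer 'where the heck is Function X?' from the contract itself."""
        return [c for c in self.commitments if c.role == role]

# Example: a hosted-firewall service whose contract records where the function landed.
contract = ServiceContract(
    service_id="svc-001",
    constraints={"latency_ms": "<=20", "jurisdiction": "EU"},
    commitments=[ResourceCommitment("vm-42", "firewall", "https://nfv-mgr.example/vm-42")],
)
print(contract.resources_for("firewall"))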

A service has a lifecycle—provision, in-service parameter change, remove-add-replace element, redeploy, and tear down come to mind—and every step of this lifecycle has to be automated or we’ve reverted to manually managing service processes.  In any virtual world, that would be a fatal shift from an operations cost perspective.  But with SDN, for example, who will know what the state of a route is?  Do we look at every forwarding table entry from point A to B hoping to scope it out, or do we go back to the process that commanded the switches?  Even the SDN controller knows only routes; it doesn’t know services (which can be many routes).  You get the picture.  The service process has to start at the top, it has to be organized to automate deployment to be sure, but it also has to automate all the other lifecycle steps.  And if you don’t start it off right with those initial resources, you may as well seal your network moves, adds, and changes into a bottle and toss them into the surf.
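
Here’s a minimal sketch of why lifecycle automation has to start at the service layer.  The controller API shown is hypothetical; the point is that only the service record knows which routes belong to which service:

from enum import Enum

class LifecycleState(Enum):
    PROVISIONED = 1
    IN_SERVICE = 2
    DEGRADED = 3
    TORN_DOWN = 4

class ServiceRecord:
    def __init__(self, service_id, route_ids):
        self.service_id = service_id
        self.route_ids = route_ids          # the routes this service comprises
        self.state = LifecycleState.PROVISIONED

    def reconcile(self, controller):
        """Roll route status up into service status instead of polling switches."""
        down = [r for r in self.route_ids if not controller.route_is_up(r)]
        self.state = LifecycleState.DEGRADED if down else LifecycleState.IN_SERVICE
        return down

class FakeController:
    """Stand-in for an SDN controller's route-status API (hypothetical)."""
    def __init__(self, up_routes):
        self.up_routes = set(up_routes)
    def route_is_up(self, route_id):
        return route_id in self.up_routes

svc = ServiceRecord("vpn-17", ["route-a", "route-b"])
print(svc.reconcile(FakeController(up_routes=["route-a"])), svc.state)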

One of our challenges in this positioning-and-exaggeration-and-media-frenzy world is that we’re really good at pointing out spectacular things on the benefit or problem side—even if they’re not true.  We’re a lot less good at addressing the real issues that will drive the real value propositions.  Till that’s fixed, all this new stuff is at risk of becoming a science project, a plot for media/analyst fiction, or both.

More on the SDN, NFV, and Cloud Opportunities

One of my previous blogs has generated a lot of discussion on the question of what the SDN market might be, and whether any market-sizing on the SDN space is simply an exercise in hype generation.  This comes at the same time as a series of articles, the latest in Network World, that cast doubt on some cloud numbers.  According to NWW, some analyses of the cloud market show a very high rate of adoption (70% or more) and even a high rate of multiple-cloud hybridization, while others suggest that the majority of businesses haven’t done anything at all with the cloud.

A big part of the problem here is in definition, starting with what “the cloud” or “SDN” mean.  Many will define the cloud as any service relationship involving hosted resources, in which case every human who uses any remote storage service like Google Drive or Microsoft SkyDrive is a cloud user.  It would also cover users of shared hosting for their websites.  But if you look at services limited to IaaS, PaaS, SaaS offerings based on shared infrastructure, the population of users would fall significantly.  In the SDN space, virtually every vendor wants to call what they do “SDN” so it follows that anyone doing networking at all is doing SDN by that definition.  If you limit the population to those doing formalized centralized control of network behavior using either OpenFlow or another protocol set (MPLS, BGP, etc.) then we’re way down in the single digits in terms of adoption.  And in both cloud and SDN, we’ve found that the population of “users” is made up primarily of “dabblers”, companies who have significantly less than 1% of their IT spending committed to either cloud or SDN.

Another problem we have is in defining a “business”.  If we look at the US, we find that there are about 7.5 million purported business sites, and that well over half of them represent businesses with fewer than 4 employees.  So if we mean “business” when we say it, any assertion that the cloud has even 50% penetration is downright silly.  In point of fact, if you look at the business population of the US, the most cloud-intensive of all the global markets, you find that only about 10% of businesses currently use cloud computing other than basic web hosting or free online storage.  In the enterprises, by contrast, you’d find that nearly 100% had adopted cloud technology, though on a very limited scale.  The “limited” qualifier today means less than 1% of IT spending.  The number who spend even 25% of their IT on the cloud is statistically insignificant.

The reason behind all this hype is the media, who demand that everything be either the single-handed savior of all global technology or the last bastion of International Communism.  Why?  Because those two polar stories are easy to write and get a lot of clicks.  So if you’re a vendor, you’re forced to toe the marketing line or languish in editorial Siberia forever.  Still, this might be a good time to be thinking about more than gloss and glitter.  Both service providers and enterprises are under their own unique pressures, and they look to initiatives like the cloud, SDN, and NFV for salvation.

Amazon, so they say, is going to launch its own TV service, presumably one that competes with other players like Roku for providing access to Amazon Prime.  All of these services rely on the “free aether” of the Internet, which of course isn’t free except in an incremental-price sense.  Operators have to pay to produce the capacity that all these guys are consuming to support their own business models, and that’s the proximate cause of initiatives like NFV, aimed both at stemming the hemorrhage of revenue per bit by reducing cost per bit and at increasing revenue through higher-level service participation.  Vendor support is lumpy; some are stepping up and others are hanging back.  Same with SDN, of course.

Enterprises too have the benefit issue; over 70% of CIOs told me in the surveys that they would prefer to promote technologies that improved benefits rather than cut costs, but of that group two-thirds say they have no specific directions to follow toward benefit nirvana.  Most enterprises think that SDN isn’t relevant to them at all, or that if it is the technology is really about somehow preparing for the private cloud.  Why they need to do that, given that they don’t know what the benefits of the private cloud are, remains a mystery.

So what’s needed?  I think there are three elements to the future of “the cloud” and the same three for the future of SDN and NFV.  Yes, resource pools and the cost-efficient production of services, applications, and features are one of them.  Connecting these flexible resource pools is another—obviously the SDN dimension.  The third, and the most important, is a new application architecture that exploits the combination of hosting flexibility and connectivity flexibility, and does so in the context of an increasingly mobile-broadband-linked population and workforce.  I think we have a handle on resource efficiency, and on network connectivity, but we’re never going to drive benefits, or even finalize our resources and networks, without that application model.

The cloud (forgive me, Amazon!) is not IaaS; that’s only an on-ramp for getting legacy stuff onto the cloud.  The cloud is a future world of highly composable and distributable software components that migrate here and there over very flexible networks to live in a transient sense on flexible resource pools.  Applications, resource pools, and flexible connectivity are the three legs of the stool holding up the cloud, like Atlas purportedly held up the earth.  We can only advance so far in any area without advancing in all or tipping the whole thing over.  We’re all in the cloud together, despite the fact that a lot of vendors don’t seem to be connecting SDN or NFV with the cloud at all, and it’s the cloud that’s the path to those benefits.  So when somebody talks to you about SDN or NFV and leaves out the cloud, run screaming.  They’re inflicting the death of a thousand hypes on you.

Remember, though, that the cloud is about elastic, composable services.  So if somebody says “cloud” to you and doesn’t talk about componentized software and web services, REST, or SOA, run screaming too.  And maybe all these technical dimensions are why vendors and media alike are degenerating into sloganeering.  The truth of our future is hard…complicated.  A nice fuzzy lie would be so much easier, and build enough clicks on stories or product data sheets in the short term.  But it won’t build a market, and eventually we’re going to have to do that or watch networking become a junk yard of old ideas.

Playing the SDN/NFV Opportunity Curve

If you take stock of the network equipment earnings thus far, you definitely get a picture of an industry under pressure.  I’ve commented on the main players as they’ve come out, and one of the common themes has been that you can’t expect network operators or enterprises to spend more when you’re presenting them with a static or declining ROI.  More service revenues for operators, or more productivity gains for enterprises, are essential in preserving the health of the industry.

In this context, things like SDN and NFV are both risks and opportunities.  For at least some of the supporters of each of these new initiatives, the goal is to lower the cost of network equipment, offload some high-value features to servers, and generally commoditize the market.  For some, the goal is to improve network agility, support cloud computing better both as an end-user service and as a framework for building other services, and enhance operations.  Clearly vendors with market share risks should be addressing only the latter goal and those who hope to gain market share might think about supporting both goals.

In truth, it’s hard to see what vendors are trying to do in either space.  In the world of SDN, all of the focus of the centralized/OpenFlow revolution has been on pushing what’s arguably control without context.  An application (from what source?  Not our problem) can drive a controller to make changes.  Great, but it cedes the utility, the business case, to that undefined application.  In the NFV world, the challenge is devising an architecture for hosting functions that can present significantly lower cost than current discrete devices would present, and at the same time add agility and flexibility.  It’s too early to be sure whether that challenge can be met, much less will be met.

Our only certainty here is the notion of cloud networking.  If we view applications as a set of components distributed over a pool of resources in such a way as to optimize both cost and QoE, we have a pretty good mission statement for both SDN and NFV.  That kind of application vision is the inevitable result of the cloud, and that might explain our problem with creating benefit cases.  If we separate the conceptual driver of change from the details of how and what gets changed, we’re likely going to lose any realistic view of ROI.  What good is agile networking absent anything that needs agility?  How valuable is centralized control if there’s nothing for that central control to act on?

We even need to think about the question of a “network”.  In a traditional sense, a network is a collection of devices that are induced to cooperate in the creation of a delivered experience to a community of users.  We induce that cooperation today through a series of control-plane, data-plane, and management standards.  But if we were to view the network as a collection of components, of software elements, then we have no rigid distribution of functionality, no fixed set of things to connect, and no need for rigid standards to connect them.  A service like content delivery can be abstracted as a single functional block distributed in any useful way through the cloud.  It can be abstracted as a set of virtual components that make up that single block, and those components’ nature and relationship can be changed freely as long as the external feature requirements are met.  In this situation, just what is an “interface”?  Is a virtual device an analog of a real one, or just some arbitrary collection of functionality that makes service composition easy?  Or is there never any such thing at all?  The fact is that we don’t know.

There’s been a lot of talk about the impact of SDN and NFV on the network equipment space, and subscribers to our Netwatcher publication found out in the March issue that across all segments of the network infrastructure market, SDN and NFV would actually be accretive to network opportunity through 2015, and would diminish it thereafter.  They also learned that SDN and NFV would be impacting nearly all network spending within five years, but that this would not likely result in major changes in market share.  They learned, no surprise, that the fastest-growing segment of network spending was spending on IT elements to host features.

Amazon’s quarter should be teaching everyone something.  The company’s profit line dipped because Amazon is making some massive investments in the future.  Company shares have traded just a bit lower pre-market, but certainly they didn’t take a major hit, and this pattern has been followed many times before with Amazon.  Network vendors should be thinking about this; if a company is investing in a credible future opportunity, the market rewards that investment in the long term, even in the face of some short-term pain.  The years prior to 2016, the years when SDN and NFV will drive spending up, are the years when companies need to grab hold of the benefit side of the technologies, to take control of the “R” in “ROI” while it can be done without reducing net revenues.  How many will be smart enough to do this?  Based on what I’ve seen in both SDN and NFV positioning, not many.  It will only take one, though, to change the market share numbers radically and put true fear in the hearts of the others.

 

Reading into Alcatel-Lucent’s ProgrammableWeb Decision

Alcatel-Lucent has been in many ways the leader among network equipment vendors in service-layer strategy.  Their notion of the “high-leverage network” and their focus on APIs and developers for next-gen services has been, in my view, right on the money (literally).  Their positioning of their concepts, and as a result their ability to leverage their vision into some bucks in the quarterly coffers, has been disappointing.  So they’ve changed their game now, starting with the divesting of ProgrammableWeb, the API outfit they’d previously purchased to augment their developer strategy.

I’ve always been a fan of “application enablement” in Alcatel-Lucent’s terms, but I wasn’t a fan of the ProgrammableWeb thing, largely because I think it ignores the fundamental truth about developers, which is that they’re not all the same.  From the very first, network operators told me that they were less interested in exposing their own assets to developers than in creating their own service-layer assets to offer as retail features.  That requires APIs and developer programs, but below the surface of the network—down where infrastructure and service control live.  In fact, that is where Alcatel-Lucent now says it will focus.

This whole exercise demonstrates just how complicated it is for network operators and equipment vendors to come to terms with the software age.  For literally a century, we’ve built networks by connecting discrete boxes with largely static functionality.  A “service” is a cooperative relationship of these devices, induced by simple provisioning/management steps that are simple largely because the functional range of the devices is limited.  No matter how many commands you send a router or switch, it’s going to route or switch.  But make that device virtual, instantiate it as a software image, and now it’s a router or switch of convenience.  Add a functional element here and pull one out there and you have a firewall or a web server.  How do you build cooperative services from devices without borders?

When I go to NFV meetings or listen to SDN positioning or review my surveys of enterprises and network operators, what strikes me is that we’re all in this together, all trying to get our heads around a new age in networking, an age where role and role-host are separated, and so where role topology (the structural relationship among functional atoms in a network) isn’t the same as device topology.  Virtual networking is possible because of network advances, but it’s meaningful because of the dynamism that software-based functionality brings to the table.

There are players who purport to “get” this angle, and I’ve been talking to them too.  Any time I go to a public activity I get buttonholed by vendors who want to tell me that they have an example of this network-new-age situation.  In some ways they’re right, but there’s a difference between a gold miner who uses geology to find the right rock formations for gold deposits and somebody who digs up a bag of gold in their back yard while planting a shrub.  Any change in market dynamic will create winners or losers just by changing the position of the optimal point of the market.  What separates these serendipitous winners and losers from the real winners and losers is what happens when the players see that the sweet spot has moved.  Do they run to the new one, or defend the old?

That’s Alcatel-Lucent’s dilemma now.  Their API strategy has been aimed at the wrong developers.  They picked up some baggage as a result, and now they’re shedding that.  Good move.  But did they pick up new insight when they dropped old baggage?  Do they understand what it means to support service-layer development now?  It’s more than saying that they’ll help the operators expose their assets, or more than saying that they’ll expose their own network APIs as assets to the operators.  What operators are saying is that they need to be able to build services in an agile way, reusing components of functionality and taking advantage of elastic pools of network-connected resources.  The future, in short, is a kind of blending of SOA principles and cloud computing principles.  When we build services, not only now but henceforth, we are building these elastic SOA-cloud apps.

Nothing we have today in terms of infrastructure matches the needs of this future.  Static device networks won’t connect elastic application resources.  Element management systems make no sense in a world where an element is a role taken on by a virtual machine.  Blink and you’re managing a server; blink again and it’s a firewall or a CDN or maybe even a set-top box.  Schizo-management?  Provisioning means nothing; we’re assigning roles.  Service creation is software component orchestration.  The question for Alcatel-Lucent is whether they grasp where the reality of future services will be found.  If they don’t, then they may have dropped the baggage of ProgrammableWeb only to pick up another heavy load, one that will reduce their agility and limit their ability to create relevance in a network environment that is not only changing rapidly but institutionalizing a mechanism to let change be completely unbridled—because that’s the goal.

But for Alcatel-Lucent’s competitors, the issue may be worse.  Alcatel-Lucent has at least shown it knows it’s bringing empty buckets to the fire and put them down.  Do Cisco or Ericsson or Juniper or NSN know that?  Is Alcatel-Lucent’s commitment to virtualize IMS, for example, an indication that they know that all network features that aren’t data-plane transport are going to be fully virtualized?  Do they know that NFV goals will eventually be met ad hoc whether the body meets them or not?  And do other vendors who have made no real progress in virtualizing anything or even talking rationally about the process have even a clue?  If they don’t, then even a detour through the backwaters of the too-high-level-API world may still get Alcatel-Lucent to the promised land before their competitors take up residence.

Two Tales of One City

The market giveth, and takes away, but probably in the main it’s the vendors’ own actions that make the difference.  We have an interesting proof point of that in two events yesterday—the end of the second NFV meeting in Santa Clara and the earnings call of Juniper, just down the road in the same town.  Same town but two different worlds.

NFV is a poster child for carrier frustration and vendor intransigence.  I’ve surveyed operators since 1991 and many who have followed my comments on the results will remember that about five years ago, operators were reporting themselves as totally frustrated with vendor support for the operators’ monetization goals.  Well, guess what?  They got frustrated enough to start an organization dedicated to the transfer of network functionality from devices to servers.  Nobody listened five years ago; maybe this time it will be different.

Juniper is a player who on the surface should be a major beneficiary of initiatives like NFV.  Juniper was the founder and champion of the “Infranet Initiative”, which became the “IPsphere Forum” and was later absorbed by the TMF.  This early activity wasn’t aimed at pulling functionality out of the network but rather at laying functionality onto/into it, admittedly using software and hosted elements.  Many of the agility, operationalization, and even federation needs of NFV hark back to those old IPsphere days.

But where is Juniper on NFV?  They’ve been bicameral.  The company has blogged about the topic, as well as or better than anyone else in the vendor space, but in public they have not only failed to exploit NFV in positioning (and thus exploit their topical expertise, gained from their past activities) but have actually taken NFV concepts and stuffed them into an SDN brown paper bag.  I commented at the time that this was an illogical step, and I think the explosion of interest in NFV proves that Juniper rode the wrong horse to the show on this one.

And perhaps on other things too.  The tagline to remember from Juniper’s earnings call was from Rami Rahim, who said “It’s clear that traffic is continuing to grow and this forms of course the foundation of much of our business. So it just comes down now to how much of a risk operators want to take or how hot they want to run their network before they want to invest. Clearly as long as that traffic continues to increase, which we see as increasing everywhere, that investment cycle especially in cost centers like the core will come, eventually.”  This sure sounds like “lie back and hope the money comes in”, which isn’t the kind of innovation-driven approach to the market that an aspiring player with a minority market share and a P/E multiple of four times the market leader should be taking.

The contrast with the coincident NFV event is striking.  Let operators buy routers to carry traffic despite declining revenue per bit and return on infrastructure, at the same moment as these same operators are convening an organization to disintermediate at least some of the devices you make.  Go figure.

NFV has its challenges, not the least being that the body is still dependent on the vendor community’s willingness to come up with solutions to fit the body’s requirements.  The goal of improving cost and agility by hosting network functions seems (and is) logical on the surface, but the devil is in the details.  If you replace a fifty-dollar residential gateway with three virtual machines and the intervening connectivity to link the functionality, you’ve likely created something whose capex/opex contribution is greater than what you started with.  It’s also not clear how functional agility offered by virtual residential gateways versus real ones would help sell new services to residential users.  Simple virtualization of networks on a device-for-device basis isn’t going to generate enough savings to matter, and the basic architecture of networks and services wouldn’t be changed.  If you’re going to do NFV, you have to do it with an eye to exploiting the cloud—which is the model of the new fusion of IT and networking.  The cloud, as a platform for applications, is an equally sound and flexible and cost-optimized platform for service components.  Because, gang, services are nothing but cooperating software application elements sitting on a network.
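
A back-of-envelope calculation shows the problem.  The numbers below are purely illustrative assumptions, not survey data, but the shape of the result is the point:

# Compare a $50 residential gateway amortized over 5 years with naively hosting
# its functions as three separate virtual machines.  All figures are assumptions.
GATEWAY_CAPEX = 50.0
YEARS = 5

gateway_monthly = GATEWAY_CAPEX / (YEARS * 12)

vm_monthly_cost = 2.00       # assumed fully loaded cost per small VM per month
vms_per_gateway = 3          # firewall, NAT, DHCP/management split naively
interconnect_monthly = 0.50  # assumed cost of stitching the three VMs together

hosted_monthly = vms_per_gateway * vm_monthly_cost + interconnect_monthly

print(f"device: ${gateway_monthly:.2f}/mo, naive hosted: ${hosted_monthly:.2f}/mo")
# Unless the functions are consolidated and operations automated, the hosted
# version costs more per month than the box it replaces.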

Everything that Juniper, or any other vendor, would need to fully realize the vision of NFV (even before the body is able to fully articulate how they expect that to work) is in the marketplace today in the form of proven technology.  Every insight needed to make network equipment into an optimum platform for the new network, the network the operators need to be profitable and continue to buy hardware, is not only visible but glaringly obvious.  That there were over 200 attendees at the NFV meeting suggests that carriers are committed and vendors are at least wary that the concept might just happen.  It will, because it has to.  It’s just a question of who it will happen with, and Juniper will have to take its eyes off the bits to smell the NFV roses.  So will everyone else.

Out with the Real, In with the Virtual

The attendance at the NFV meeting in Santa Clara seems a pretty solid indication that NFV has arrived in terms of being of interest.  It’s not a surprise given the support that’s obvious among the big network operators.  They run the meetings and are driving the agenda, an agenda that’s also clear in its goal of shifting network features from costly specialized devices to open platforms.

A move like that has its challenges.  We don’t engineer general-purpose servers to be either stellar data movers or high reliability devices.  There is interest among both operators and vendors, right down to the chip level, in improving server performance and reliability, but the move can only go so far without threatening to invent special-purpose network devices.  Every dollar spent making a COTS server more powerful and available makes it less COTS, and every dollar in price increase reduces the capital cost benefit of migrating something there.

I think it’s pretty obvious that you’re not going to replace nodal devices with servers; data rates of a hundred gig or more per interface are simply not practical without special hardware.  We could perhaps see branch terminations in server-hosted virtual devices, though.  How this limitation would apply in using servers to host ancillary network services like NAT or firewall is harder to say because it’s not completely clear how you’d implement these functions.  While we might ordinarily view a flow involving firewall, NAT, and load-balancing as being the pipelining of three virtual functions, do we actually pipe through three or do we have one virtual device that hosts them all with the pipeline managed only at the software level?  The latter seems more likely to be a suitable and scalable design.
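
Here’s a toy illustration of that design choice, with the three functions composed inside one hosted element so the “pipeline” exists only at the software level.  Everything in it is a stand-in, not a real implementation:

def firewall(pkt):
    return None if pkt.get("port") == 23 else pkt       # drop telnet, say

def nat(pkt):
    pkt["src"] = "203.0.113.1"                           # rewrite to a public address
    return pkt

def load_balance(pkt, backends=("10.0.0.1", "10.0.0.2")):
    pkt["dst"] = backends[hash(pkt["flow"]) % len(backends)]
    return pkt

def virtual_edge_device(pkt):
    """One hosted element; packets cross the host boundary once, not three times."""
    for stage in (firewall, nat, load_balance):
        pkt = stage(pkt)
        if pkt is None:
            return None
    return pkt

print(virtual_edge_device({"flow": "abc", "port": 443, "src": "192.168.1.10", "dst": "-"}))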

Availability issues also have to be looked at.  You can’t make a COTS server 99.999% available, but you could make multiple parallel hosts that available.  The challenge is that it wouldn’t make it available in the same way as our original five-nines box.  A packet stream might be load-balanced among multiple interfaces to spread across a server complex, but unless the servers are running in parallel the result will still be at least a lost packet or two if one unit fails and you have to switch to another.  That wouldn’t happen if you were five-nines and didn’t fail in the first place.  As I said, it is possible to build a virtual application that has the same practical failure-mode characteristics and availability, but again you’re forced to ask whether you need to do that.  Do even modern voice services have to meet traditional reliability standards given how much voice is now carried on a best-efforts Internet or a mobile network that still creates “can you hear me?” issues every day at some point or another?  We’ll have to decide.
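
The arithmetic behind that point is simple enough.  With illustrative numbers, two ordinary servers in parallel beat five-nines on paper, but the math says nothing about the packets lost during the switchover itself:

single_availability = 0.999            # an assumed "three-nines" COTS server
parallel = 1 - (1 - single_availability) ** 2

print(f"one server: {single_availability:.5f}")
print(f"two in parallel: {parallel:.7f}")    # about 0.999999, better than five nines

# Expected outage per year, ignoring any loss during failover:
minutes_per_year = 365 * 24 * 60
print(f"downtime/yr: single {((1 - single_availability) * minutes_per_year):.0f} min, "
      f"parallel {((1 - parallel) * minutes_per_year):.2f} min")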

Security may or may not be an issue with hosted functions, including hosting the SDN control plane.  If we assume that virtual functions are orchestrated to create a service, there are additional service access points created at the boundaries and these could in theory be targets of attack.  However, you can likely protect internal interfaces among components pretty easily.  A more significant concern is what I’ve called the DoR or Denial of Resources attack, which is an attack aimed at loading up a virtual function host with work in one area to force a failure of another service being hosted there.  If you can partition resources absolutely, this isn’t a significant risk either.

One area that could be a risk is where a data-plane interface can force a control-plane action and a function execution.  The easiest example to visualize is that of the SDN switch-to-controller inquiry when a packet arrives that’s not in the forwarding table.  The switch has to hand it off to Mother, and if you could force that handoff at a high rate by sending a lot of packets that don’t have a forwarding entry in a short period, you might end up by loading down the controller or the telemetry link.
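
A crude mitigation sketch, not any real controller’s code, would rate-limit the table-miss handoffs per switch so a flood of unmatched packets can’t saturate the controller or the telemetry link:

import time
from collections import defaultdict

PACKET_IN_BUDGET = 100          # assumed max handoffs per switch per second

class PacketInGovernor:
    def __init__(self, budget=PACKET_IN_BUDGET):
        self.budget = budget
        self.counts = defaultdict(int)
        self.window_start = time.monotonic()

    def admit(self, switch_id):
        now = time.monotonic()
        if now - self.window_start >= 1.0:            # start a new one-second window
            self.counts.clear()
            self.window_start = now
        self.counts[switch_id] += 1
        return self.counts[switch_id] <= self.budget  # else drop, or install a drop rule

gov = PacketInGovernor(budget=2)
print([gov.admit("switch-1") for _ in range(4)])      # [True, True, False, False]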

I don’t think that virtual function or SDN security is going to be worse in the net, but it will almost surely be different.  Same with availability and even performance.  There are things we can do in a hosted model that we can’t do in an iron-box model after all.  Even if, as seems likely for migration/transition reasons, NFV first defines a network of virtual devices that mirrors the network of real devices, it can evolve to one where all network functions would appear to be performed by a single virtual superdevice.

That has operational issues of course.  If your goal is to evolve from a real-box network, you’ll likely need your virtual boxes to mirror the real ones even at the management interface level.  But you can’t get deluded into starting to track failure alerts on virtual devices and dispatching real field techs to fix them!  A virtual device is “fixed” by instantiating it again somewhere else, and it might well be that is done automatically without reporting a fault at all.  It probably should be.  And remember that if we have one virtual device doing everything, we have only one management interface and less management complexity!
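
To put that in concrete terms, here’s a sketch of what automated remediation looks like; the classes are hypothetical, and the point is that the “repair” is a re-instantiation, not a trouble ticket:

class VirtualDevice:
    def __init__(self, role, host):
        self.role, self.host, self.healthy = role, host, True

class Orchestrator:
    def __init__(self, spare_hosts):
        self.spare_hosts = list(spare_hosts)

    def remediate(self, device):
        """Replace a failed instance automatically; no trouble ticket, no truck roll."""
        if device.healthy:
            return device
        new_host = self.spare_hosts.pop(0)              # pick another host from the pool
        replacement = VirtualDevice(device.role, new_host)
        # (re-bind the service to the replacement here, then retire the old instance)
        return replacement

edge = VirtualDevice("virtual-CPE", "host-3")
edge.healthy = False
print(Orchestrator(["host-7", "host-9"]).remediate(edge).host)   # -> host-7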

The point is that the virtual world is different in that it’s whatever you want it to be.  Any kid who ever daydreamed knows that.  We’ll learn it in the real world too.

 

Where Now, NFV?

The majority of the current network hype has been focused on SDN, and either despite the attention or because of it, SDN hasn’t garnered much besides hype.  We have so much SDN-washing that it’s hard to see what’s even being washed any more.  Laundry fatigue in tech?  A new concept at last!

NFV is a newer concept than SDN, and one that so far doesn’t have a show with vendors exhibiting and issuing press releases.  There are vendors who are voicing support for NFV (Intel did so just last week) but so far the claims are muted and even a bit vague.  The second of the NFV global meetings is being held this week, and before the meeting may be a good time to review the issues the body will have to address.

The goal of NFV is to unload features from network appliances, presumably including even switches and routers, and host them in generic server farms.  This, operators hope, will reduce costs and help the operators overcome the current problem of profit squeeze.  It’s also hoped that the architecture that can support this process, which is where “network function virtualization” comes from semantically, will provide a framework for agile feature creation.  That could make operators effective competitors in a space that’s now totally dominated by the OTT and handset players.

Anything virtual has to be mapped back to reality in steps, obviously.  You start by defining a set of abstractions that represent behaviors people are willing to pay for: services.  You then decompose these into components that can be assembled and reassembled to create useful stuff, the process that defines virtual functions or a hierarchy of functions and sub-functions.  These atomic goodies have to be deployed on real infrastructure—hosted on something.  Once they’re hosted, they have to be integrated in that there has to be a mechanism for the users to find them and for them to find each other.  Finally, workflow has to move among these functions to support their cooperative behavior—the behavior that takes us back to the service that we started with.
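
Those five stages can be sketched in a few lines of toy code.  Nothing here is a proposed architecture; it just shows abstraction, decomposition, hosting, discovery, and workflow as distinct steps:

registry = {}                                   # stage 4: how functions find each other

def host(name, fn):                             # stage 3: "deploy" onto real infrastructure
    registry[name] = fn                         # here, hosting is just registration
    return fn

# Stage 2: decompose the service into atomic functions.
host("classify", lambda msg: {**msg, "kind": "video" if msg["url"].endswith(".mp4") else "web"})
host("route",    lambda msg: {**msg, "cache": "edge-cache-1" if msg["kind"] == "video" else "origin"})

# Stage 1 + 5: the service abstraction, realized as workflow across the functions.
def content_delivery_service(request):
    msg = registry["classify"](request)
    return registry["route"](msg)

print(content_delivery_service({"url": "http://example.com/movie.mp4"}))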

NFV, as a body, has to be able to define this process from start to finish to create NFV as a concept.  What, for example, are the services that we propose to address?  One example already given is the firewall service; another is content delivery network (CDN) services.  Even at this point, we have potential issues to address.

Firewalls separate endpoints from traffic by creating a barrier through which only “good” stuff can pass.  It follows that they’re in the data flow for the endpoints they serve.  So does this mean that we feed every site through a software application that hosts selective forwarding?  That might be practical up to a point, but servers aren’t designed to be communications engines operating at multi-gigabit speeds.  Intel clearly wants to make it possible, but is it practical, or should we be thinking about a kind of switch-like gadget that does the data-plane handling and is controlled by a virtual function that needs only to process rule changes?  Good question.

Even higher up the tree in the conceptual sense is what we’re serving here.  If we need to have endpoints supported by firewalls it follows that we need some conception of an endpoint.  Who owns it, how is it connected in a protocol sense, how is it managed and who’s allowed to exercise management, what functions are associated with it (like firewalls)?  In software terms, an endpoint is an object and an enterprise network is a collection of endpoints.  Who owns/hosts the object that represents each endpoint, and who owns the association we’re calling an “enterprise network”?
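
In sketch form, the object model might look like the following.  The fields are illustrative only; the real question is who owns and hosts these objects:

from dataclasses import dataclass, field
from typing import List

@dataclass
class Endpoint:
    site: str
    owner: str                      # who owns the endpoint object
    attachment: str                 # how it connects, in protocol terms
    managers: List[str]             # who is allowed to exercise management
    functions: List[str] = field(default_factory=list)   # e.g. ["firewall"]

@dataclass
class EnterpriseNetwork:
    name: str
    owner: str                      # who owns the association itself
    endpoints: List[Endpoint] = field(default_factory=list)

net = EnterpriseNetwork("acme-wan", owner="Acme IT",
                        endpoints=[Endpoint("branch-12", "Acme IT", "Ethernet/IP",
                                            ["Acme IT", "carrier NOC"], ["firewall"])])
print(len(net.endpoints), net.endpoints[0].functions)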

We can do the same thing with CDNs.  We have a concept of a CDN service as something that delivers a content address (presumably from an optimized cache location) to a user in response to the user having clicked on a URL.  One element of this, obviously, is that we have to decode URLs, which is a DNS function.  Do we have a special DNS for this?  Does every user have their own “copy” or “instance” of DNS logic?  Remember, in enterprise firewall applications we likely had an instance of the app for each user site.  Not likely that will scale here.  Also, the DNS function is a component of many applications; is it shared?  How do we know it can be?  Is “caching content” different from storing something in the cloud?  How do we integrate knowledge of whether the user is an authenticated “TV Everywhere” client to access the video?  Obviously we don’t want to host a whole customer process for every individual customer; we want to integrate an HSS-like service with DNS and storage to create a CDN.  That’s a completely different model, so is it a completely different architecture?  If so, how would we ever be able to build architectures fast enough to keep pace with a competitive market?
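
A toy sketch of that integrated model, one shared resolver tied to an entitlement check rather than a per-customer application instance, might look like this.  All the components are hypothetical:

ENTITLED_SUBSCRIBERS = {"sub-123", "sub-456"}            # stand-in for an HSS lookup
CACHES_BY_REGION = {"east": "cache-e1.example.net", "west": "cache-w1.example.net"}

def entitled(subscriber_id):
    """HSS-like check: is this an authenticated 'TV Everywhere' client?"""
    return subscriber_id in ENTITLED_SUBSCRIBERS

def resolve_content(url, subscriber_id, region):
    """Shared resolver: one instance serves all subscribers, not one copy per customer."""
    if not entitled(subscriber_id):
        return None                                      # or redirect to a sign-up portal
    return CACHES_BY_REGION.get(region, "origin.example.net")

print(resolve_content("http://video.example.com/movie.mp4", "sub-123", "east"))
print(resolve_content("http://video.example.com/movie.mp4", "sub-999", "east"))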

You can see that I’ve filled a whole blog with questions about two possible applications in the first of five stages of execution.  That’s the NFV challenge, and it’s a challenge that only a great architecture can resolve.  So that’s what everyone—you and me and all the operators and vendors—need to be looking for out of meetings like this week’s session.  Can NFV do a great architecture?

If they fail, does NFV fail?  Not likely.  There are way too many players behind this.  We may have a failure of process—what carrier standards group in the last decade has produced a commercially and technically viable standard in time to be useful—but we’ll have a result produced by the market’s own natural competitive forces even if we don’t create one by consensus.  I’d sure like to see consensus work here, though.  It would be a healthy precedent in an industry that needs collective action to face formidable challenges.

 

Is IBM Presaging the Death of Strategic Thinking?

IBM delivered a rare miss in their quarterly numbers, and a significant one at that.  While the company seemed to focus on execution issues and delays in getting contracts signed rather than the usual macro-economic conditions tech vendors like to blame, I think the problems are deeper for IBM.  And for the rest of the space.

From the first, at least according to our surveys that began in 1982, IBM has led the vendors in the strategic influence they exercise on customers.  In the last decade, though, IBM has steadily declined in influence.  They opened this century with enough influence from mid-sized businesses to enterprises to drive projects over even combined opposition.  Now they can barely drive them without opposition, and of course opposition is mounting.  Worse for IBM, their influence is concentrated almost completely in the enterprise space.  You can see that in their numbers; hardware sales of mainframes were strong and everything else was weak.  “Everything else” represents the hardware classes that are suitable for the larger and broader market.

What’s responsible for this?  IBM, once the bastion of business-speak, is now seen as being able to articulate its message only to professional IT cadres.  Integrators told us that IBM can’t address the SMB space at all, even in marketing/advertising terms.  Absent marketing articulation, nobody can do anything at the sales level except play defense, and that’s what’s happening.  And you always lose eventually by playing defense.

Digging into IBM’s numbers and their call, you also catch the disquieting truth that some of the key components of their software strategy are losing steam.  Lotus, of all groups, turned in the best growth for them.  WebSphere, which absolutely has to be the framework whereby IBM would introduce new productivity paradigms, had once gained better than 20% and is now into single digits.  But it was hardware that dragged IBM down; only mainframes were above water for the quarter, and Lenovo has confirmed that IBM is looking to sell off its whole x86 server business.   Too much competition, too little profit.

Frankly, this kind of quarter makes me wonder yet again what IBM is doing backing OpenDaylight.  You can’t make money selling hardware, says IBM’s quarter.  You certainly can’t make money selling open-source software.  So do they plan to link a losing hardware business to free software to boost profits?  Somehow that seems illogical.  Do they plan to do what I suggested regarding OpenDaylight, which is to commoditize the lower part of the SDN market and focus on building the upper layers?  If they do, they need to have a lot more value to offer above the SDN controller, and I’m not sure where they think they’ll get it.  NFV is an example of a clear new-server-and-software application area, and yet Intel seems more aggressive with SDN/NFV positioning than IBM is.

For the industry, I think this should be a wake-up call, which is a good thing if true because as an industry we’ve been stuck in the cost-reduction stupids.  CEOs and CFOs of the tech world, unite!  You have only your profits and reputations to lose!  What happens to a company who can sell only by promoting cost reductions versus prior product generations?  They sell less, of course.  That means they start missing their quarters.  I’ve been harping on the fact that since the literal dawn of tech, we have had regular cycles of IT spending growth that corresponded to new productivity paradigms that created new IT benefits.  We had, until about 2002, when they stopped.  They’ve not restarted since.  This is the problem that the IBM of old could have solved.  This is a problem that any respectable IBM competitor of old would have jumped on had IBM somehow missed the boat.  Nobody’s jumping; we’re still stuck in TCO-neutral here, promoting the cause that there’s nothing new and useful computers can do, so every technology enhancement has to lower their cost.  It’s not taking us to a happy place, it never could have, and so I’m tired of everyone griping about stagnant sales.  If you don’t like them, get off your ass and come up with a value proposition other than “spend less”.

This has to be a critical issue for Cisco, too.  Microsoft, despite the issues it has, beat its estimates slightly.  Oracle, another rival, has a stronger software position and thus isn’t as exposed to the whole hardware problem.  Software, remember, is the link between humans and IT; we don’t have disk interfaces so we need productivity intermediation that only software can provide.  Oracle bought Acme, so might they be getting ready to buy into some new UCC-based productivity thing?  Could be, and if it is then where does it leave Cisco’s server side?

And Cisco’s network, of course.  I’m not among those who believe that SDN is going to destroy the network vendors.  I’m a believer that their focus on TCO is doing a great job of destruction on its own and doesn’t need help from the tech side.  SDN is an opportunity for network vendors, a way of creating a framework for point-of-activity empowerment that could represent that needed and long-delayed benefit driver for a new spending cycle.  But all these guys are playing SDN defense, linking it to operations cost management, which gets us back to lowering spending, bad quarters, and maybe new management teams.  And Cisco’s not the only vendor in this boat; every player who wants to sell to the enterprise has to either offer better benefits to drive higher spending or (surprise, surprise!) accept lower spending.

Businesses buy IT and networking because it makes people more productive.  The more productivity you drive, the more they can spend—on you.  Seems simple to me, but apparently a lot of senior management in the vendor space is finding it too complicated to deal with.  Maybe some new management teams really are in order here.

What Might Intel’s Open Network Platform Mean?

There’s a clear difference between dispatching an ambulance to an accident scene and chasing one there, as we all know.  There’s also a difference between a company reacting opportunistically to a market trend and a company actually shaping and driving that trend.  Sometimes it’s hard to tell the difference in this second area, and so it is with Intel’s announcement of a reference implementation of a server architecture for networking.

Trends like the cloud and SDN and NFV are driving servers into a tighter partnership with networking.  I’ve been saying for months that the future of IT was going to be created by the shift from IT as a network service access point to IT as a network component.  That’s what the cloud really means.  And Intel seems to know that, whatever is driving their interest, because they’re participating and not just product-washing.  In the NFV space, for example, they’re a more thoughtful and active participant than most of the network equipment vendors.

Intel’s Open Network Platform reference design includes the Wind River Open Network Software suite and a toolkit for creating tightly integrated data-plane applications.  The platform will implement an open vSwitch, and the toolkit means that other vSwitch architectures, including, we think, the Nuage/Alcatel-Lucent one, could be implemented easily on the platform.  So at the minimum Intel may be voting with its R&D and positioning dollars that things like SDN and NFV are real.  At best, it might be taking steps that will actively drive the process.

One of the most important points I cited in Alcatel-Lucent’s Nuage launch was that the new SDN model the two companies promote is an end-to-end hybrid of virtual-overlay networking (what I’ve called “software-defined connectivity”) and real device-SDN networking.  The Intel platform seems to encourage the creation of a new model of highly functional virtual device, one that could form the branch edge of a network as easily as the server side.  This model would encourage the creation of application VPNs or what Alcatel-Lucent calls “software-defined VPNs”.  It could be deployed by enterprises, and also by network operators, and it could be linked by common central control down to policy-based or even route-based special handling at the traffic-connection level.

Perhaps the most profound impact of the Intel step could be the impact it has on NFV, and I don’t mean just the ability to create better server platforms to host virtual functions.  The value of the NFV concept, if it’s contained to network operators, will be slow in developing and limited in scope.  Intel might be framing a mechanism to link NFV to what it frankly should have been linked to from the first—the cloud.  NFV as a cloud element is NFV for enterprises, which is a much bigger market and a market that will move opportunistically with demand for cloud-specific services.  Thus, Intel might be at least attempting to single-handedly make NFV mainstream and not an operator science project that could take years to evolve.

The most general model for a network-coupled IT environment is an as-a-service model where all functional elements are represented by URLs and RESTful interfaces.  In such a model it doesn’t matter what platform hosts the functional element; they all hide behind their RESTful APIs.  That model is likely the ideal framework for NFV, but it’s also the ideal framework for the evolution of cloud services and the creation of a cloud virtual operating system that hosts a currently unimaginable set of new features and applications for workers and consumers alike.  This may be the NFV model Intel is thinking about.
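
In sketch form, the consumer side of that model is trivially simple, which is the point; the endpoint names below are hypothetical:

import json
import urllib.request

def call_function(base_url, function, payload):
    """Invoke a network function purely through its RESTful facade."""
    req = urllib.request.Request(
        f"{base_url}/{function}",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

# The same call works whether "firewall" is a box, a VM, or a cloud service:
# call_function("https://nfv.example.net/v1", "firewall", {"action": "add-rule", "port": 443})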

The Intel step may put network vendors in the hot seat.  Alcatel-Lucent has already committed to a hybrid virtual-overlay-and-real-SDN-underlay approach, a model that tends to commoditize enterprise network hardware.  That’s fine for them because they’re not enterprise incumbents, but what do Cisco and Juniper and the other smaller enterprise players do?  Even for carrier applications like the metro cloud I’m always harping on, there’s a necessary marriage between the virtual-overlay stuff Intel’s ONP proposes to host and the metro Ethernet and fiber networks that build aggregation and backhaul.  A formalistic link between virtual-overlay and real device networks may be mandatory now, and that link to be useful has to elevate the former above Level 2, link it effectively to the cloud and to componentized software architectures, and then bind it in a general way to the real device networks that alone can move real traffic in high volumes.

Make no mistake, Intel’s ONP doesn’t replace specialized switching and routing and the major layers of transport networking.  An x86 platform running any UNIX derivative with the BSD enhancements could have done that for decades (and in fact the first Internet routers were servers like this).  We got specialized devices for traffic-handling because they’re better at that mission, and that’s almost surely true now and forever.  However, every time we add IT intelligence to a service we have a traffic-terminating mission from a network perspective, and that’s what Intel is optimizing for.  If they’re aggressive and clever in their support for things like Quantum, DevOps, SDN, and NFV, they’ll have a major leg up on an important server mission of the future.

Facing Networking’s Era of Change

We’ve already seen signs that mobile broadband is gutting at least the near-term PC sales, signs that Intel’s quarterly numbers only confirm.  We have lived for over thirty years in the personal computer age, and PCs have transformed just about everything in our lives and in business.  Now they’re dinosaurs.  My point is that if mobile broadband can shake the literal foundation of modern technology, it’s going to shake pretty much everything and we need to understand that.

Yahoo needs to understand that too.  Marissa Mayer, Yahoo’s CEO, said that it would take years to turn the company around.  They don’t have years.  If Yahoo wants to jumpstart itself, it will have to take advantage of a market revolution to do that, and the market revolution of our generation is right here, right now in mobile broadband.

This week, we’re having the ballyhooed ONF event.  Next week we have an NFV conference.  You can fairly say that both these activities are aimed at dealing with changes in networking, but I think it’s fair to ask whether either SDN or NFV is being driven by mobile broadband.  If they are, then we should see some signals of the shift in their early work.  If they’re not, then we have to ask whether either is going to meet its goals, its potentials.

Operators have for five years now outlined the same three top priorities for monetization: content, mobile/behavioral, and cloud computing, in that order.  Their priorities have been set financially rather than technically; they saw content traffic as a big threat and so monetizing it was a big priority.  They saw the cloud as a business service and something outside their basic comfort zone, even in terms of setting financial goals, so they had it at a lower priority.  Over the five years since this started, though, the cloud has jumped into high gear with operators and where we stand now is that the cloud monetization projects have outrun everything else.  That leaves mobile/behavioral opportunity, the thing mobile broadband is enabling, in last place.

You can see this in the SDN stories.  If you look at a mobile broadband deployment from a sparrow’s vantage point, you see cell sites springing up like spring flowers (well, spring flowers in a normal spring—we’re not seeing that many yet) and backhaul trunks spreading like galloping tree roots.  Where?  In the metro areas.  Wireline broadband is an access/metro problem.  So tell me this: where are the stories about how SDN is going to revolutionize that space?  We have SDN in the data center.  We even have (via Google’s work) SDN in the core.  Ericsson has told a basic SDN-metro story but only a basic one, and when other vendors have made what could have been SDN-metro announcements there was no metro in them.

In the NFV space, there is a double-barreled question in the mobile broadband area.  First, because the white paper the carriers issued at the launching of NFV focused on the migration of virtual functions from custom appliances to generic hosts, it tended to focus on stuff already being done.  Mobile broadband changes and opportunities aren’t represented in today’s appliances.  We’re actually searching for an architecture to support them, and logically the NFV architecture designed to host past-service elements in an effective way should also be tasked with supporting the future effectively.  But focusing on migrating existing features will miss the mobile/behavioral fusion that mobile broadband is driving, and that’s the biggest trend in all of networking.

The second point for NFV is the cloud.  That same initial white paper talked about virtualization as the host of the functions.  I pointed out from the first that the architecture for network feature hosting had to be broadened—the cloud is the logical vehicle.  This is especially true given that those operator monetization projects that involve the cloud advance twice as fast as those that do not, even when the projects aren’t aimed at offering cloud computing services of any sort.  Content cloud equals progress.  Mobile cloud equals progress.  NFV cloud is likely to equal progress too, so we need to see whether the group will accept that reality and embrace something that can move all its goals forward, while at the same time making the mobile-broadband-driven changes in the market an implementation priority.

Even the cloud has to change, though.  The conception we have of cloud computing today is just VM hosting of applications that were written before the cloud was even conceptualized.  Well, we’ve conceptualized it.  Will we keep writing applications in a way that demands the cloud morph into something that looks pretty much like legacy IT, or will we do things differently?  Yes, I know that today’s answer is “stay the course”, but that’s because vendors are all taking root in their current quarterly goals and becoming trees.  SDN and NFV will show startups where it’s possible to link new network visions and new cloud visions to new revenue opportunities.  That will include addressing the point-of-activity empowerment that mobile broadband enables by structuring applications to deliver just-in-time insight to the worker, whether they’re trying to make a sale or pry the cover off an inspection panel to start work.

Every network vendor, every IT vendor, is both empowered and threatened by the current trends, including Intel, Microsoft, Dell, HP, IBM, Cisco, Alcatel-Lucent, and yes Yahoo and Google.  We have seen the power of this change already.  We’ll see more of it, and more vendors will stand or sink based on whether they buck it or ride it.  This quarter is only the start.  More is coming.