What Do Operators Say are the “Myths” of SDN and NFV?

Sometimes our technologies are more defined by the stories told about them than by their realities.  SDN and NFV are no exceptions, and the full scope of mythology for either would take a lot more than a single article to cover.  Fortunately we can narrow the scope of myths (and blogs) by focusing on what operators think are myths.  I’ve culled through the views of service planners in the operator world and collected the SDN/NFV tales they think are the tallest.

Number one on all the lists is the myth of the five nines.  We’d have SDN and NFV coming out our pores if we could only get them to five nines.  Some operators think this is funny/silly and others think it’s destructive, but service planners agree it’s a myth.  They make two critical points to validate their view.

First, the network business has for years demonstrated that buyers will trade off reliability for price, at least up to a point.  According to the planners, the actual target for future services is only three nines, which is where their own research says buyers will draw the line.

Second, the notion that the nine-count for SDN or NFV is somehow holding up deployment is nonsense.  The problem with both technologies is simply one of proving a business case, which in the service planners’ experience mostly comes down to proving that a radical improvement in operations efficiency is possible.  They don’t think SDN hardware is any less reliable than current network devices.  They have not planned to replace major transport/connection devices with NFV functions.  Counting nines isn’t the problem; counting incremental return on investment is.

The second myth the service planners listed seems to come out of frustration with some aspects of both SDN and NFV standardization.  Call this one the myth of the perfect resource.  According to this myth, we have to be able to define the resource needs of a given service with great precision.  The greater the precision the better, because if we could find that perfect match the service would be more profitable.

Wrong, say the planners.  The fact is that “perfect routes” differ only a little in cost from imperfect ones, in the scope of services being considered for SDN.  In NFV, a lot of attention to the precise resource needs of a given virtual function means that the effective size of the resource pool is much reduced, which means less economy of scale and more overprovisioning.  A highly detailed assessment of deployment needs would also be more operationally complex to administer.

Myth number three is that new revenue is going to come from shortening time to revenue.  We hear all the time about service agility, but when you dig into the comments you find that the talker means “provisioning in hours rather than weeks”.  That shortening is a limited benefit because most service delays are caused by access provisioning, which can be solved only by prepositioning capacity (and that can be done without either SDN or NFV).

The longest provisioning times today are for business data services, and these services tend to get provisioned when new sites are added.  That happens relatively rarely; sites turn over at an average rate of less than 2% per year.  In any event, planners say, you can’t necessarily run glass to every building hoping a customer will pop up there, and without access you can’t change provisioning times meaningfully.

The one exception is feature hosting for add-on features like firewalls, NAT, DNS/DHCP, etc., which we’ve come to associate with virtual CPE.  Planners admit that there’s likely a benefit to being able to sell users add-on connection features, but they also say that within a year or two nearly all connections will have been equipped with the features they’ll need in the long term.  Smart planners are asking what the cost of ownership will be when the dynamic period has passed.

The fourth myth service planners cite is controversial even among the planners.  That one could be called the operations-business-as-usual myth.  According to both SDN and NFV advocates within the operators’ business, SDN and NFV can be managed in the same way traditional devices are managed today.  No changes to either OSS/BSS or NMS practices will be required.  Planners divide on this one, not based on whether there’s a myth here but based on why and how it would be dealt with.

A slight majority of planners believe that it is possible that “virtual device” management practices that made SDN or NFV look like traditional equipment could work.  The problem for this group is that “possible” is an unacceptable qualifier, because the consequences of being wrong would be truly dire.  This group wants to see specific service trials to prove out the model, and they don’t see results they can bank on as yet.

The remainder of the planners think it would be a waste of time to try to prove the point because it’s invalid on its face.  An SDN enclave or a bunch of hosted functions are not traditional devices.  Under the skin, they cannot be managed in the same way at all.  SDN is not adaptive; it relies on setting failure modes and changing network configuration if something goes wrong.  NFV substitutes servers and virtual connections for a physical appliance, and you have to manage what you really have, not what it looks like, or you’ll never fix anything.

One of the most interesting things that comes out of the service planner views isn’t their myth concept but the source of the myths.  It’s become the norm to divide operators into three groups—Tiers One through Three.  While the boundaries here are soft, they do roughly correlate with the SDN and NFV socialization that’s happening.  The smaller an operator is and the more targeted its service set, the more likely it is that the operator has broadly socialized network technology changes and has hammered out an accommodation.

Service myths have the greatest impact on the Tier One operators, because those operators are the most likely to have largely independent standards and CTO processes that are budgeted and can go on for quite some time on their own, bereft of any broad support.  Tier One planners say that a lot of the mythology of SDN and NFV comes not from the vendors but from their own people, who are looking to justify (to the rest of the company) a technology they’ve been advocating.

We’ve seen some pretty interesting NFV deployments from Tier Twos, and I think that the lesson of NFV mythology is that we’ll see the light in smaller operators first, ones who have better internal cooperation.  The bigger ones should think about that one.

If SDN and NFV Change INFRASTRUCTURE, What Do Future SERVICES Look Like?

SDN and NFV are going to change infrastructure policy, if they succeed.  I’ve blogged about that before.  They’ll also likely change the services offered by operators.  The notion of service agility as a benefit demands something to jump from (which we have; the present) and to (what we’d have to define).  I’d like to think about that future service set today.

We have three broad classes of services today.  First, we have connection services that allow us to communicate with someone (or multiple parties).  Second, we have access services that let us get to something, like the Internet.  Finally, we have private network services in the form of VPNs or VLANs.  These categories are fairly broad, and I think it’s reasonable to assume that future services will tend to work within or among the current classes of service rather than to invent new classes.

As a class of service, connection services are becoming a subset of access services, meaning that we are transforming “connection services” into “connection as a service”.  That shift is inevitable given that the Internet is a connection fabric and most connection services can fit easily within it.  Traditional connection services are the least likely to be profitable in the long term, and so one change we can expect to see is for operators themselves to under-invest in these services and accept that they’ll lose some of each (or all of some) to OTT over time.

Voice calling and SMS/IM are examples of things that just don’t make a lot of sense to spend money on as an operator.  That doesn’t mean that they’ll disappear from wireless and wireline operator inventories tomorrow, but rather that more and more of them will get subsumed into social-media and other applications.

Service-specific access is also going to slip.  I am of the view that IPTV in the U-verse sense is doomed and I think you can see that in AT&T’s quarterly report.  The whole notion of service agility demands users be prepped with enough access capacity to accept delivery of their services without physical changes being made to the demarcation.  Thus, you don’t necessarily have to give a user the fattest pipe their media can support, but you do have to give them media to support more services than they initially buy, and provide a ready means to upspeed it.

Where this trend takes us is to the notion of an agile demarcation (dmark) point, a place where services can be connected for presentation to the user on demand.  The first question that SDN and NFV would have to answer is how their use would facilitate this on-demand connection/presentation.

A futuristic IP dmark could be a more sophisticated version of a gateway router (virtual or real).  A user sees an IP network as a series of accessible subnetworks, which means that it’s fairly easy to present all manner of services as simply a piece of an address space.  You’d have to make sure that you didn’t allow routing between the services for security and stability control, but that problem exists today where VPNs and Internet are provided to the same site.

One possibility here is to presume that a user has a series of IP services that are each mapped to a private IP address space (the popular Class A block of IPv4’s RFC 1918, 10.x.x.x, is an example), which would mean that every IP connection would “see” both the public Internet and a series of private services in the same connection.  Another idea would be to create tunnels with SDN that would partition the services in a way similar to that used to deliver IPTV, packet voice, and Internet over fiber.
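
To make the address-window idea concrete, here is a minimal sketch (my illustration, not anything from the operators quoted) of how a dmark could carve per-service subnets out of the RFC 1918 10.0.0.0/8 block using Python’s standard ipaddress module; the service names are hypothetical.

```python
# Sketch: present each purchased service as its own private address "window"
# behind the agile dmark.  Service names are hypothetical examples.
import ipaddress

PRIVATE_BLOCK = ipaddress.ip_network("10.0.0.0/8")   # RFC 1918 Class A block

def allocate_service_windows(services, prefix_len=16):
    """Give each named service its own subnet (a /16 by default) from 10/8."""
    subnets = PRIVATE_BLOCK.subnets(new_prefix=prefix_len)
    return {name: next(subnets) for name in services}

if __name__ == "__main__":
    windows = allocate_service_windows(["internet", "corporate-vpn", "iptv", "voice"])
    for service, subnet in windows.items():
        # The dmark would expose each subnet as the gateway into that service
        # and refuse to route between service subnets, preserving isolation.
        print(f"{service:15s} -> {subnet}")
```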

Carrier Ethernet could be changed by this model.  Ethernet is based on Level 2 addressing, and it would be possible to give each service a Level 2 address through which it would connect.  If the service is something like VLAN that address could represent a “bridge” to an Ethernet LAN, and if the service were Level 3 (IP) the address could look like a router that’s an on-ramp to the IP subnet(s) the service uses.  You could map the same kind of service model to carrier Ethernet as to IP, which means that you could evolve both residential and business customers to the service model.

In the long run, the smart thing to do in both cases would be to give the users a fiber path that could be metered to a range of speeds, and then let the user set the total capacity of their access connection using an operations tool.  That would then set the maximum capacity of agile service connections that could be terminated on that site.

Introducing NFV into the picture changes things a bit in my view.  NFV is about hosting features not hosting connectivity, and in order to give users a feature you need to present what I’ll call a “logical service”, not a series of tunnels.  Features are either embedded in a data path, in which case they are somewhat transparent to the user except in behavior terms, or they look like something the user can address (which implies they have an address and are part of a service address space).

It’s possible to see a service set as a series of tunnels, but it’s not an easy model to manage at the user level.  Given that, I think it’s fair to presume that future services will be presented as addresses, which means that the service dmark will be either an L2 or L3 virtual window on a world of routers and switches that are gateways to useful stuff.

Connection services in this model are simply addresses through which a connection can be created.  Private network services are the same, and access services are address windows into hosted features of any sort.  This is a kind of compositional dmark model for service evolution.  Every user gets one (or two, for redundancy) access pipes that offer an address-space (or even two—one at L2 and one at L3) window into a service spectrum.

SDN and NFV are interior technologies in this model, obviously.  The biggest change is in the service dmark device that does the composing.  It doesn’t have to be on premises (you could just terminate an access path to an interior element) and it doesn’t have to be an appliance (software virtual routers and switches would work fine).  The point is that whatever the technology, it has to be service-elastic.  Otherwise we have all this wonderful capability to turn on Service A from a portal in 20 seconds, and then watch users wait three weeks for a new access pipe to use it.

There are profound implications to all of this.  Regulators have to address the question of whether providing IP services through virtual address windows links them to the Internet and makes them a neutrality issue.  Operators have to figure out (based in part on the regulatory issue’s resolution) how “premium” services might be offered—as Internet/OTT services or as part of a private special-service address space.  Everyone will have to figure out how to make all this stuff seem plug-and-play to the buyer.

I think it can be.  I think that the address-space windowed compositional dmark model is the logical vision for the SDN/NFV future if you look at infrastructure from the user’s perspective; if you look (virtually) down the access pipe that connects you to your carrier.  I think this is the vision that everyone has to work to, or new services are going to underperform.

Near-Term Signs and Critical Periods: SDN and NFV Before the Flex Point

If SDN and NFV are going to create market waves, the obvious question is whether vendors are going to ride them or be swept away.  Given the immature state of both technologies, there aren’t a lot of clear indicators to read on that topic, but there do seem to be a few signposts emerging from the fog, and I’ll try to describe what I see on them.

First, I think it’s clear from the strong showing Huawei is making that operators (except in the US, where a political decision prevents Huawei from selling) are increasingly buying from the company because it’s usually the price leader.  Huawei is also driving prices lower in the deals it doesn’t win.  My view is that this demonstrates that neither SDN nor NFV is seen by operators as a proven strategy.  We’d be virtualizing things, not buying price-leader products, if virtualization were proven.

Second, it’s just as clear that nobody can stand up and say “SDN and NFV are never going to amount to a hill of beans!”  Operators believe in the future of these two technologies even if they can’t prove them in the present.  Juniper, for example, has done a turnaround from a couple of years ago, when they were calling “service chaining” SDN; now they’re calling virtual routing a VNF.  Huawei, which is winning in the current cost-cutting paradigm, is still pushing hard to be a player in SDN and NFV.  You have to be able to show relevance, at least, in an SDN and NFV future or your stuff is hit with the label of “stranded cost.”

What these two points show is that we’re in a really funny state, market-wise.  It would be incorrect to say that either SDN or NFV is a sure thing, but it’s obvious that operators are looking hard to find a reason to invest in both of them, and if the right business case emerges there will likely be a very rapid response among buyers.

What’s somewhat surprising to me is that NFV and SDN are both as much IT stories as networking stories, and IT incumbents haven’t pushed their own positions as hard as I’d have thought they would.  You can understand why Alcatel-Lucent, for example, would be charting a careful path to NFV to avoid killing more routing business in the near term than they’d gain in VNF or MANO business.  Dell, HP, IBM, and Oracle are all unencumbered by any network business they could lose, and all of them have server products that would augment VNF and MANO revenues in the event of a big NFV win.  Yet none of these players are really aggressive in marketing, and IBM seems almost uninterested.

I think that the lack of drama in the IT giants’ NFV positioning is responsible for the near-term focus on VNFs.  You need a truly systemic business case to drive systemic NFV or SDN deployment, and in order to get that you have to involve legacy infrastructure, operations systems, service creation, and even marketing and market targeting.  Give that list to a salesperson to sell through and they’ll respond by looking for a job with a faster payoff.  You need a strong marketing platform to push through a broad NFV success, and absent such a push from the vendors with the least to lose, we fall back on low apples.

The easiest places to push NFV and by association promote SDN are places with one of two characteristics.  First, they can be customer edge services.  Anything that is done at the service edge is customer-specific so it can be market-targeted pretty easily to manage cost/revenue exposure.  We see vCPE for that reason.  Second, they can be distributed-intelligence services which would mean mobile services and content delivery today and IoT in the future.  These services have multi-tenant value and so can be justified on at least a metro scale without pushing spending too far ahead of return on investment.

In the vCPE area, I think that the next place to watch for some serious action is the area of application delivery control and application acceleration.  These are typically presented close to the edge, and the former is closely related to the load-balancing needed for any VNFs to exercise horizontal scaling.  F5 did pretty well this last quarter, which shows that there’s a real need for the technology, and so I’d expect that we’ll see a lot of interest in this space among VNF hopefuls.
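
Since load balancing is what lets a VNF scale horizontally, here is a minimal sketch (illustrative only; the instance names are made up) of the behavior an ADC-style VNF would need: keep each flow pinned to one instance while instances are added or removed.

```python
# Sketch: hash each flow onto one instance of a horizontally scaled VNF.
# A production ADC would use consistent hashing so scale-in/out doesn't
# reshuffle existing flows; this keeps the idea minimal.
import hashlib

class FlowBalancer:
    def __init__(self, instances):
        self.instances = list(instances)      # currently deployed VNF instances

    def scale_out(self, instance):
        self.instances.append(instance)

    def scale_in(self, instance):
        self.instances.remove(instance)

    def pick(self, src_ip, dst_ip, dst_port):
        # Hash the flow tuple so packets of one flow stick to one instance.
        key = f"{src_ip}|{dst_ip}|{dst_port}".encode()
        index = int(hashlib.sha256(key).hexdigest(), 16) % len(self.instances)
        return self.instances[index]

balancer = FlowBalancer(["fw-vnf-1", "fw-vnf-2"])
balancer.scale_out("fw-vnf-3")                # NFV adds a third instance under load
print(balancer.pick("203.0.113.5", "10.1.0.9", 443))
```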

In the distributed-intelligence services space, I think we have three possible push points not yet really represented in the market:

  • IoT, which I’ve mentioned before. There really is a lot of value in IoT if you get your head out of 5G clouds and look not at how you connect the stuff but how you harness what we already can (and do) connect.  I think there are a number of vendors looking at an IoT service architecture, and I know there are operators who’d like to see it.
  • Content delivery, which is implicated in the area of greatest traffic growth and which cuts across OTT video and mobility trends to create a symbiotic architecture opportunity. We used to have a cloud CDN vendor in Verivue, and they got sold off.  True “cloud CDN” demands both SDN and NFV play a significant role, and with content as important as it is, this is a great shot.
  • Service application platform, because SDN connects service elements and NFV deploys them, but you still need the elements themselves for either to be useful. Logically speaking, we should have a “VNFPaaS” architecture that lets operators quickly assemble services from useful functions within a framework whose APIs and controls assure interworking and operationalization (see the sketch below this list).  Right now every VNF is an island, and that’s no way to build an ecosystem.  We’d end up with the VNF equivalent of Darwin’s finches.
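
Here is a minimal sketch of the “VNFPaaS” idea, assuming (this is my illustration, not an ETSI specification or any vendor’s API) that the platform imposes one lifecycle contract on every function so services can be assembled without per-VNF integration work.

```python
# Sketch: a common lifecycle contract plus a trivial assembler.  The VNF
# classes and parameters are hypothetical examples of "useful functions".
from abc import ABC, abstractmethod

class VNF(ABC):
    """The contract a VNFPaaS would impose on every hosted function."""
    @abstractmethod
    def deploy(self, resources: dict) -> None: ...
    @abstractmethod
    def configure(self, params: dict) -> None: ...
    @abstractmethod
    def health(self) -> str: ...

class Firewall(VNF):
    def deploy(self, resources): self.host = resources["host"]
    def configure(self, params): self.rules = params.get("rules", [])
    def health(self): return "ok"

class NatGateway(VNF):
    def deploy(self, resources): self.host = resources["host"]
    def configure(self, params): self.pool = params.get("pool", "10.0.0.0/24")
    def health(self): return "ok"

def assemble_service(chain, resources, params):
    """Deploy and configure a chain of VNFs through the shared contract."""
    for vnf in chain:
        vnf.deploy(resources)
        vnf.configure(params)
    return all(vnf.health() == "ok" for vnf in chain)

print(assemble_service([Firewall(), NatGateway()],
                       {"host": "edge-server-1"},
                       {"rules": ["allow tcp/443"]}))
```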

I think you can see the street signs, as I noted, but we’re missing that big overhead freeway sign.  For that, we’ll have to wait for Cisco to commit.  Cisco’s “let-me-show-you-just-enough” strategy for both SDN and NFV is a clear indicator that the sales champion of networking thinks the current model (where Cisco is at least a major contender if not a winner) still has legs.  When Cisco starts to trot out major, and real, announcements in SDN and NFV we’ll know that the flex point, or at least the critical decision point, is near.

When might that be?  I think we’ll want to watch the signs this fall.  Operators typically do a fall strategic planning cycle that runs from mid-September into mid-November.  If they are going to do something radical in 2016 that cycle is where they’d likely show their hand.  I’ll be keeping an eye on things in that period, and you should too.

A Realistic if Unsatisfying View of the “Market” for SDN and NFV

You can hardly pick up (virtually) an online publication these days without seeing an extravagant market forecast on NFV.  I don’t have much faith in forecasts in general; they usually turn out to be aimed at validating the largest market possible because the buyers of the report are usually vendors.  NFV is particularly problematic, though, and SDN follows only slightly behind in terms of forecast difficulty.  There’s a lot of junk science out there.

Gartner made news this week by saying that neither SDN nor NFV is a market at all, which I also disagree with.  Yes, SDN is an architecture.  So is IP, and it’s a market.  Yes, NFV is a deployment option.  So is cloud computing, and even Gartner agrees that’s a market.  So we can’t define our way out of facing the basic question of what the two technologies are likely to do.  We can’t fall back on avoidance.

OK, then let’s give the analysis of the SDN and NFV market our best shot.

SDN is the substitution of central control of packet forwarding for forwarding based on adaptive topology exchange.  If we stay with that purist definition, then SDN advances in two distinct phases.  In phase one we have the enclave model of SDN, where we apply SDN to connection problems that are limited enough to avoid the intrinsic truth that one controller couldn’t possibly software-define the Internet.  In phase two we develop or stumble on a reasonable federation model for SDN that preserves SDN value across enclave boundaries by avoiding the need to harmonize to pre-SDN network-to-network interfaces.

In the enclave SDN phase, my own model says we’re limited to penetrating about 4% of total switch/router spending.  Right now it looks like SDN would be most likely to find its place in cloud data centers first, followed by slowly advancing into metro infrastructure.  During that advance, SDN would break out of the enclaves and become a more broadly useful technology.  In phase two, my model says that it could penetrate almost 75% of the switch/router market, but this will take quite a while.

When?  The current timing my model offers is that phase one SDN cannot possibly achieve full penetration until 2018, and that by 2020 phase two would be well underway, with SDN then owning about 11% of switching/routing.  However, it’s important to note that the growth beyond that level would come largely from SDN control of virtual switch/router elements.  SDN, in short, wins by having lower OSI levels take on more of a role in grooming and fault response, and the partitioning of services and applications would then enable the use of virtual switch/routers.

NFV is a lot more complicated, primarily because it’s far from clear what specific drivers for NFV would look like.  You have to take two views—based on current benefit expectations and based on “magic bullet” expansion.  In both cases you have to look at NFV as being first a “market” in that there would be spending on NFV technology, and second as a market transformation driver that’s shifting carrier capex from network equipment to data center equipment.

Capital savings was the first NFV benefit to be cited, and most operators have abandoned it at this point.  For good reason, I think, because my own model says that if that’s the justification for NFV then the maximum impact of NFV would be to shift about a percent of capex to servers.  It’s a pimple on the market posterior at that level.

If opex can be targeted, things get more interesting.  A full solution to service orchestration could justify a lot of incremental NFV spending, but interestingly enough it wouldn’t result in an enormous change in how operator capex is distributed.  In order for NFV to be operationally efficient it has to spread its wings to cover legacy infrastructure.  In doing that it creates a significant benefit even where network equipment remains the same.  What drives the transformational spending changes is the fact that the improved operations practices can then support the third benefit.

Which is new services—“service agility” in the current vernacular.  An agile and efficient operations framework would start to open new revenue opportunities for operators, which would be fulfilled increasingly through the use of servers.  In my model, this benefit-driven but unfocused drive to NFV creates a shift to servers that eventually ends up involving about a third of all carrier spending by 2025.

You’ll notice that I’ve not talked about what gets spent on specific NFV technology, and that was also true for SDN control.  In neither case are the expenditures a significant piece of carrier spending, and my model won’t forecast something that small accurately.  A lot of this will be open source, and some will be given away by vendors who will make money elsewhere.  The transformation will impact OSS/BSS spending, though, to the point where NFV-fashioned orchestration will make up almost 40% of that market, also by 2025.

My magic bullet approach does a bit better.  Recall from prior blogs that the magic bullet says that there are a small number of successful NFV applications (vCPE and mobile infrastructure, with perhaps IoT coming along).  Under this model, early NFV success is promoted in the low-apple areas where it makes sense, and these successes justify near-term changes more efficiently because they limit exposure to infrastructure changes that don’t pan out in immediate revenue or savings.

In this model, NFV shifts about 2% of carrier capex to servers by 2019 and that number grows to 5% by 2020.  If we could start to address IoT in respectable terms by 2016, that 2019 number would be 5% and 2020 would be 11%.  OSS/BSS impact would also be larger; 65% of that market would be orchestration-driven by 2025.

The combination of SDN and NFV would have significant impact on carrier capex.  Up to around 2020, the combination would actually increase capex slightly year over year as operators have to pre-capitalize to some degree.  By 2022 that’s turned around and there’s a decline at about the same rate, with capex stabilizing at just a bit more than 90% of current levels by 2030.

So there we are, at least as my current model shows things.  Many won’t like the lack of absolute numeric precision in my results, but I don’t think it’s possible to do an absolute forecast—not because SDN and NFV aren’t markets but because they are both alternatives to legacy equipment and their adoption will be driven by their benefits and by overall capital budgets.  I can’t forecast the latter—there are too many variables.

The most critical time for SDN and NFV is the period between now and 2018.  Any technology option that can prove its business case will drive infrastructure spending in its own direction at that point, and that which can’t prove out will be put on the back burner as operators look for better capital strategies going forward.  The moral for SDN and NFV vendors is simple.  You can’t just ride the wave here, because you’re going to have to be the wavemaker.  Otherwise something else wins.

Making Network Revolutions into Realized Revolutions

The notion that things are changing, perhaps a bit too fast for comfort, is hardly a modern phenomenon nor one confined to tech.  One of my favorite poems (Arthur Guiterman’s On the Vanity of Earthly Greatness) starts with the provocative line “The tusks that clashed in mighty brawls of mastodons…are billiard balls.”  Change, and not change for the better (from the mastodon perspective, anyway).  Our tech giants should give it a read, perhaps.

IBM was once the unchallenged leader in terms of strategic influence, with a score so high that their vision alone could drive purchase decisions.  Today they’ve fallen into a tie for third place among IT vendors, and their influence and a couple of bucks will get you a Starbucks.  Cisco’s score has also fallen, and so has that of Microsoft, which just posted a record loss.  In fact, all of the tech leaders are suffering, which makes you wonder just what kind of market we’re going to have in even a few years.  It also makes you wonder how we got to this point.

The root cause is likely simple.  Business technology has gotten subducted under mass market technology.  Selling expensive computers to the Fortune 500 is a decent business but nothing compared to selling smartphones to the masses of the earth.  If you look at any technology forum these days, even those supposedly focused on business IT, you find it dominated by consumer stories, consumer comments, and even consumer ads.  Absent any means of engaging the market, business IT is bound to languish.

A related issue is the death of productivity-enhancing innovations.  We’ve seen regular cycles of IT growth in the past, driven by new compute paradigms.  We ended the last cycle in about 2002 and nothing has started up since.  From that point, buyers became increasingly focused on sustaining the productivity gains they’d achieved, and the only “benefit” a seller could offer was a lower cost of achieving those goals.

You might look at this trend and see it as the dawn of the Age of IT Populism, where masses of startups rise (waving white boxes, presumably) and shake the councils of the mighty.  Well, not so fast.  While the major IT vendors are losing strategic influence and often missing revenue targets as well, they’re still sustaining market share, particularly if you take them as a group.  We might not consult IBM’s crystal ball before making decisions, but we still sign their sales orders.

It’s this contradiction that sets the tone for networking for the balance of this decade, I think.  Buyers are unhappy with their vendors; they espouse open products, open source, and standards, and they fear “lock-in” and influence.  Yet they still do pretty much the same stuff as before when the PO comes out.  Perhaps the reason is an old saw: “Nobody ever lost their job by buying IBM!”  You can substitute your big vendor of choice for “IBM” here and the sense is the same.  Changing from a traditional supplier, abandoning an industry leader, is a risky decision.  We’re not being paid to take risks here, says the buyer.

Which gets us back to benefits.  Suppose that private airplanes cost a thousand bucks.  Would that create an explosion in air travel?  No, but it would darn sure drive up the salaries of pilots.  What’s happening as the cost of equipment and software buckles under competitive pressure is a shifting of cost focus from the cheapening capital elements of tech to skilled tech labor—professional services and integration.  We’re making something different into the limiting factor, not eliminating the limits.

In my pilot/plane example, you can see that mass-market airplanes by themselves don’t create a mass market because you can’t train people to fly them safely.  What you need to have is a product focus shift, one that takes the functional aspects of IT or network equipment for granted (computers compute, routers push bits) and concentrates on making the stuff bulletproof in terms of installation, operation, and support.

Thirty years ago, network management was said to be 18% of network spending.  In 2014, when I asked enterprises for their number, I got almost 38%.  Gear got cheaper, more gear got bought, and more gear is more operationally complex…you get the picture.

Put this way, the revolutions like SDN and NFV are shooting at the wrong duck, and only the cloud has captured a glimmer of reality.  Not with IaaS, which probably consumes more tech humans than the alternative, but with SaaS.  If people want an application, sell them the application in the form they’ll use it in, not a heap of technology they can (hopefully) assemble into something that (hopefully) will do what they want.

It’s not that SDN and NFV can’t simplify things, but that we’re not seeing them as what the cloud says they’ll have to be to succeed.  This is all about network-as-a-service, not network as technology choices.  We’re back to the populist airplane thing; just making something cheaper doesn’t make it more consumable, more populist.

Almost two years ago I had a meeting with a half-dozen EU operators on the goals of NFV, and the point that was raised loud and clear was that if you wanted to do something significant with NFV, you had to address the service lifecycle.  And that’s what they say now.  One operator says “If you want ‘service agility’ then you want a complete opportunity-to-revenue lifecycle measured in days and not in years.”  Well, for all the discussions about agility, we’ve not addressed that complete lifecycle.

Part of the solution is to frame our new technology revolutions in a service management model.  The goal of all this iron is to sell services, so you need to understand how the new pieces will address that broad goal.  Making a five-hour difference in provisioning a service that’s taken you two years to frame into market terms is hardly noticeable, much less revolutionary.

NaaS is programming more than it’s anything else.  Software development principles have to guide service development if we’re going to make a massive difference in agility.  We’ve had years of good science built around agile software and now we’re trying to build agile services in activities that have nothing to do with software and have little or no participation from software people.  A software architect would never do SDN or NFV as it’s been done.

This is an easy problem to fix, too.  The software industry spent nearly two decades evolving to the modern state of modular development and as-a-service or microservice or whatever.  We have the results of all of that, and with “software” as the core of nearly every change we believe is coming in networking, it should be easy to conceptualize future services as the facile assembly of useful pre-designed components.  SDN and NFV both play a role in componentized services—connecting the components and deploying them, respectively.  What we’re missing is how the components are built, the “upperware” platforms needed to facilitate their development, and the management systems that can do drag-and-drop service creation as easily as we can do drag-and-drop development.

This is where vendors could really help.  At least three of the big NFV names (Alcatel-Lucent, HP, and Oracle) have software frameworks and skill sets.  All three have the tools needed to create an agile framework.  Let’s get to it, gang.

What Operators Think of SDN Deployment Models and What that Says about the Future

I had an interesting exchange with a planner at a mid-sized carrier, and got some insight into how network operators are seeing SDN.  Drawing on an exchange with some other operators, my contact gave me a tutorial on the “models” of deployment the operators see as promising.  Some are familiar, and some approaches we think of (and read/hear about) often seem to be getting discounted a bit in the real world.

The model that got mentioned first by operators in this exchange was the “vSwitch-plus” model.  In cloud or NFV data centers, SDN and OpenFlow are often used to set up the connectivity at the virtual level, through control of vSwitches.  Operators like that explicit model of setup—you connect what you say you want to connect—and so they are looking at a white-box data center where data center switches are OpenFlow-programmed as well.
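
As a rough illustration of the “connect what you say you want to connect” point, here is a sketch that assumes the white-box or vSwitch element runs Open vSwitch and is programmed through ovs-ofctl; the bridge name, port numbers, and tenant subnet are hypothetical.

```python
# Sketch: install explicit OpenFlow rules so only the requested paths exist;
# everything else hits a default drop.  Assumes Open vSwitch and ovs-ofctl.
import subprocess

def add_flow(bridge: str, flow: str) -> None:
    """Install one forwarding rule on an OVS bridge."""
    subprocess.run(["ovs-ofctl", "add-flow", bridge, flow], check=True)

def connect_tenant(bridge: str, in_port: int, out_port: int, subnet: str) -> None:
    # Forward this tenant's subnet between exactly two ports, both directions.
    add_flow(bridge, f"priority=100,in_port={in_port},ip,nw_dst={subnet},actions=output:{out_port}")
    add_flow(bridge, f"priority=100,in_port={out_port},ip,nw_src={subnet},actions=output:{in_port}")
    # Anything not explicitly connected is dropped.
    add_flow(bridge, "priority=0,actions=drop")

if __name__ == "__main__":
    connect_tenant("br-dc1", in_port=1, out_port=2, subnet="10.20.0.0/16")
```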

One interesting point about this particular SDN application is that it’s seen primarily as a tool for reducing errors and advancing security and compliance.  Operators didn’t think the cost savings involved would excite senior management, and they didn’t expect operations costs to be materially impacted either, except perhaps in the security/compliance area.

The second model that got some attention was the metro grooming model.  Operators made the point that it was becoming a practice to use lower-layer tunnels to separate traffic, services, applications, and users.  Physical media like fiber or Ethernet copper are examples, but they lack the granularity needed because advances in technology keep boosting the capacity physical media can support.  SDN could provide what might be called a “protocol-less tunnel”, an extension of physical media.

Where these tunnels should be supported is clear to operators too: primarily in fiber, and in particular in agile-optics gear.  There is already a strong interest in redefining networking as a series of parallel Level 3 universes separated at the physical level, rather than as a universally connected Level 3 world as Internet advocates might see it.  Right now this interest is focused in the metro because (logical, huh?) that’s where most of the capex is flowing.  It’s also true that metro is the focus of a lot of different services, and also a focus of net neutrality planning for operators.

Neutrality is a big reason to worry here.  If you were to build a hypothetically fully connected IP network as your metro foundation, you could (in the US and Europe in particular) find yourself defending any special service capacity against neutrality complaints because what you did would look and act much like an extension of the Internet.

There’s also an opex dimension here, but it’s more indirect.  Operators say that the opex cost per handled bit is highest at Level 3 and declines as you drop through to the OSI basement.  They also say that compartmentalized IP, particularly with topology and resiliency ceded to agile lower layers, is cheaper to run, and that the per-device traffic (being that of a single compartment) is low enough to make a hosted router/switch solution practical.

Some of you may recall a blog I did on this topic a while back—making the point that the network of the future could be far more virtual than we think if we compartmentalized applications and services at a virtual Level 1.  Maybe I’m self-fulfilling here, but I see some of that working its way into operator planning as this group sees it, at least.

The third model of SDN is one of end-to-end service separation.  The thought is that this would build out from metro commitments, meaning that operators would establish metro grooming as above, then extend it to do things like provision Carrier Ethernet using explicit OpenFlow forwarding.  These Level-1-separated services at the metro level would then be interconnected using “core grooming” to create end-to-end services.

I had hoped to see some interest in building cloud computing services this way.  It’s easy to say that you could combine the vSwitch-plus and metro grooming infrastructure choices to build a cloud data center with an application-specific network, then tunnel it to CPE in multiple sites to offer access.  This model didn’t get much attention, though.  It may be that it combines too many service areas.  Most operators run their clouds and their networks independently.  It may also be that they’re not seeing the end-game for that approach yet, and so can’t really justify diving into it.

One thing I didn’t hear from my operator source here was a goal of displacing traditional switching and routing with SDN.  I was somewhat surprised by the very focused interest, and wondered whether that was simply an indication of the operators identifying low-apple opportunities from which they could build toward their real goal.  No, they say—these low-apple opportunities are the real goal.  Strategic use of SDN, meaning the conceptualizing of a pure, true SDN model for switching and routing, is simply not something they were thinking about.

I wonder whether this might not have a significant impact on SDN, but also on NFV and even carrier cloud computing.  In a way, it makes sense.  Here are the operators, saying that they are facing profit compression because of the commoditization of bit services.  Do they then re-architect bit services to offer better cost points in a technology-driven revolution, or do they just focus on the stuff that’s 1) costing a lot and 2) getting currently refreshed or capitalized?  The latter, they say.

If this is the case, then SDN needs to focus a lot more on “interworking”, both in a vertical sense with the L2/L3 stuff and horizontally from network segment to network segment.  Explicit SDN interworking beyond that accomplished using network-to-network gateway processes from L2 and L3 is essential if you’re trying to do internetworking below L2 and L3, after all.

For NFV, the challenge is that the views expressed by these operators challenge the notion of new services driving substantial NFV deployment.  This group would simply do an NFV-ish (probably more cloud-like, or agile-CPE-like in the case of vCPE) low-apple implementation that’s not even particularly designed to go anywhere else.  To build NFV on a systemic scale you’d then need some overriding operations and orchestration benefit.

For the carrier cloud services, things could be really tough.  There really are no obvious low apples for cloud services.  IaaS is generalized but not much more profitable than pushing bits.  PaaS and SaaS require a specific market target that operators might have trouble even finding and later have problems addressing in an engaging way.

Maybe we’re asking too much here.  Maybe we need a technology vision of the future that operators can build toward without actually endorsing or even knowing about.  That goes against my own planning-intensive grain, but the market will ultimately decide what it’s willing to do.

The Five Stages of VNFs

VNFs, meaning virtual network functions, are important to NFV.  Without them there’s no possible business justification to be had, no matter how good our infrastructure or orchestration and management might be.  Well, we all know there are supposed to be five stages of grief.  I contend that there are five stages of VNF too, and our progress through them—as vendors and as an industry—may decide whether we can forestall the other five stages.

The first stage of VNFs is the billboard stage.  In this stage, VNF vendors eagerly seek publicity and in many cases do that by linking themselves to any vendor who can spell “ecosystem” (even if they get a few letters wrong).  The reason for this is fairly obvious; VNFs can’t deploy except as a part of a broader NFV ecosystem and it’s far from clear early on who the winners in that space might be.

Most VNF providers are in the billboard stage now, and they’re there because there’s little barrier to being a partner to the NFV masses and little they can do to get traction except as a part of either an NFV trial or a larger RFP.  Most will probably never get out of this phase, because the primo players at the ecosystem level aren’t interested in a cattle call; they want offerings they can justify integration and trial efforts on.  Many are called but few are chosen here.

The second stage of VNFs is the see-what-sticks phase.  A VNF vendor enters this phase, typically from the billboard phase, when they start to understand that real engagement is going to involve a significant commitment of resources to every “ecosystem partner” they think has a shot.  Since most probably won’t qualify, this phase is really about weeding out the chaff so you can focus on the few great ones.

This is the phase where a vendor typically learns the details of operationalizing NFV, and the way those details will impact VNFs.  In many cases this will generate a significant amount of new development.  In some it may bring relief.  Some VNF vendors will realize that their “success” depends less on an ecosystem partner than they might have thought because the nature of their product (part of a business services vCPE chain, for example) allows them to deploy with little or no real NFV linkage.  Those VNF vendors will wait here a while as things develop on a broader NFV front.

Stage three is the ride a magic bullet phase.  Here, VNF vendors discover that NFV opportunities that seem real are focused on a very small set of services, justifying a small set of VNFs.  Right now, for example, the two magic bullets are mobile and business-related vCPE.  IoT could be added.  Where a VNF vendor happens to have such an offering, this offering now becomes the Road to the Promised Land, and (mixing a few metaphors) they’ll forsake all others to get their key VNF or VNFs buffed up and ready.

Some vendors, of course, won’t be that lucky.  For that group, the only hope is to find some connection between what they can do and the key opportunity drivers that are emerging.  Security, application acceleration, and even collaboration players will all want to make their stuff look like a part of the magic bullets being recognized.  Most won’t make a decisive connection.

Stage four is perhaps the most critical of all the NFV stages, the second effort stage.  It’s not that we’re asking VNF vendors to have another desperate run at the goal, but that this is the stage where everyone at every level recognizes that one magic bullet doesn’t win a war.  NFV has to be broad-based to gain enough in benefits to justify the enormous industry effort.

This is where NFV gets real, because it’s relatively easy in the current immature world of NFV specifications and implementations to make a single VNF work.  The question is whether you can do two, and this is the stage where that has to happen.  Most operators say that they’re going to expect a pathway to their second VNF about the time they start field trials on the first, and that they’ll be looking for the features that make their NFV solution inclusive during those trials.

Which brings us to stage five of VNFs, which is reckoning.  Not every VNF vendor who manages to get to stage four will move on, in no small part because many will have hitched their wagon to the wrong star, focusing on an ecosystem that doesn’t have the breadth.  The big question for the industry is whether enough VNFs pass through this stage without washing out.  Not every NFV trial and not every ecosystem will succeed, but I suspect that if the odds aren’t way better than 50:50 even in the early stages, there’s going to be a lot of blowback.

If you’re a VNF vendor, you need to be looking at the stages one or two ahead of where you are, at the least.  The most critical stage of all, IMHO, is stage three, where VNF vendors will have to prove not only that they’ve partnered with the right ecosystem vendor(s) but also that they’ve envisioned a lively market opportunity in which their VNF can play.  Next-most-critical is stage four, where a VNF provider with one story figures out that they need at least two good ones.

The obvious question is where we are, stage-wise, and it’s difficult to answer that.  Every provider of VNFs has an independent vision and program, though the maturity of both varies considerably.  My rough surveys suggest that two-thirds of all purported providers of VNFs have little other than a hopeful eye on the future.  One told me that “VNFs are aspirations not products”, citing the fact that there was only an immature vision of deployment.

Of the third who actually have something, the two focus areas are vCPE and mobility/IMS/EPS.  There is a pretty solid business case for agile mobile infrastructure, even out to cloud RAN or CRAN.  The interesting point is that mobile infrastructure is more a cloud application than an NFV application; the specifics of NFV are really not needed to justify early deployment and make a business case.  The problem is that it might be difficult to develop a second act from a pure-cloud vision of mobility.  Other apps require more dynamic deployment and management, and thus depend more on NFV.

vCPE is also knotty in terms of its ability to pull through full NFV.  The “edge-hosted” model offers some benefit versus standard hardware, and like mobility it could stand alone (in this case, as an agile-edge application set) without much real NFV involvement.  Again, the challenge would be in transitioning it to a broader opportunity base.  The edge features of business services are not themselves highly dynamic, and once you’ve deployed what a customer is willing to pay for, might a custom device that’s operationally simpler be a better approach?  We don’t have enough data to determine that right now.

Here’s an IoT Approach that Works (but Nobody Sells it)

I said in a comment on an earlier blog that I thought all the IoT approaches touted so far were irrational.  In earlier blogs I’ve noted my view that IoT had to be viewed more as a big-data application than as a network.  A few of you have asked me to expand on my own view of IoT, and so that’s what I propose to do here.

“Classic” IoT is a vague model in which sensors and controllers of various kinds are directly connected to the Internet.  Once there, they’re available to fuel a whole series of new applications.  For proponents of this vision, the question is how we support LTE or WiFi interfaces to all these gadgets.  There are a lot of issues associated with this model, from a public policy perspective, from the perspective of ROI on IoT applications, and from a simple technology-ecosystem perspective.  We have to start an IoT discussion by addressing these issues.

In policy terms, it’s clear that just putting a bunch of sensors and controllers on the Internet would create a massive challenge in security and privacy.   Imagine how much fun hackers would have diddling with the traffic lights in New York, shutting down lanes on bridges and tunnels, and perhaps even impacting pipeline valves and the power grid.  Imagine how much easier it would be for stalkers (or worse) to track prospective victims by looking at the security and traffic cameras.  Happy fishbowl, everyone.  Obviously there’s no chance this could be allowed.

Perhaps, then, it’s fortunate that it’s far from clear who’d have an interest in deploying this stuff.  State and local governments in many areas have found they can’t even get permission or funding to set up traffic cameras.  Public utilities already have sensor and controller connectivity, but it’s shielded from the very open environment the IoT proposes to foster and they’d hardly be looking at magnifying their vulnerability.  Private companies would look at the IoT model and ask how they could possibly earn a return by just publishing data or allowing control openly.

The technical challenges fall into two groups, one relating to both policy and ROI and the other to utility.  On the policy/ROI side, the problem is that the more sophisticated you make a sensor or controller the more it costs and the more power it will need.  If you have a home security system you probably use inexpensive wired sensors for your doors and windows, and maybe for motion detection.  These probably cost about twenty bucks a pop.  Imagine an IoT world, where each of these sensors is online through WiFi or LTE, and each is equipped with a firewall and network-based DDoS protection to prevent attack.  You’re probably looking at five times the cost, plus you’ll either have to power the stuff or change batteries a lot.

The utility issue arises from the fact that a given sensor is just an IP address in the classic IoT model.  How is that sensor put into a useful context?  For example, if it’s a traffic sensor, what road and milepost is it located at, and what format is its information in?  Is it counting cars, measuring speeds, or both?  How do we know it’s even what it purports to be?  It might be spoofed by some hacker, or presented by an enterprising rest stop owner who wants to divert traffic by making an expressway look jammed.

In my opinion, IoT isn’t a movement at the network level, but rather an architecture built around a big-data model.  Imagine a database where information from known and authenticated contributors is collected and structured.  The contributions could include traffic sensors, home sensors, even locations of mobile devices.  All the data would be contributed based on policy-defined limits on use.  Those who wanted to use the data would do a big-data query that would be policy-validated to ensure it meets security and privacy rules, and would receive what they needed—historically or in real time.  Control elements would be represented by write-enabled variables, and accessing them would also be policy controlled.

Where’s the network?  Behind the database.  Any owner of sensors could contribute information into the big-data repository, but they would control the contribution and be able to state policies on how their data could be used.  The “network” connecting their sensors could be anything that’s convenient, meaning that all of the current sensors and controllers that are networked using any protocol could be admitted to the IoT repository through a gateway.  No need to make sensors directly visible online, or to change sensor technology to support direct Internet visibility.

This sort of IoT could be visualized as a collection of “device subnets” that would use any suitable technology to attach sensors and controllers.  Each would have a gateway through which the data was pumped into the IoT repository, and the gateway would manage the policies and formatting.  The IoT repository would be an online database query service—a web service.  It might be linked onto a company VPN, to a cloud application set, or made available on the Internet.
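
A minimal sketch of that repository model (the field names, policies, and query interface are all my own hypothetical choices, not a product or standard) might look like this: gateways contribute readings under an owner policy, and queries return data only when the stated purpose is permitted.

```python
# Sketch: policy-tagged contributions from device-subnet gateways, and
# policy-validated queries against the repository.  All names are examples.
import time

class IoTRepository:
    def __init__(self):
        self.readings = []     # contributed sensor data, tagged by owner
        self.policies = {}     # owner -> set of permitted uses

    def register_owner(self, owner, allowed_uses):
        self.policies[owner] = set(allowed_uses)

    def contribute(self, owner, sensor_id, kind, value):
        self.readings.append({"owner": owner, "sensor": sensor_id, "kind": kind,
                              "value": value, "ts": time.time()})

    def query(self, kind, purpose):
        # Return only readings whose owner permits this purpose.
        return [r for r in self.readings
                if r["kind"] == kind and purpose in self.policies.get(r["owner"], set())]

repo = IoTRepository()
repo.register_owner("city-dot", allowed_uses={"traffic-analytics"})
# A device-subnet gateway speaks whatever local protocol it likes, then
# contributes readings here in the repository's own format.
repo.contribute("city-dot", "sensor-42", "traffic-count", 117)
print(repo.query("traffic-count", purpose="traffic-analytics"))   # permitted
print(repo.query("traffic-count", purpose="marketing"))           # returns []: not permitted
```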

You can probably see the similarities between this model and the web.  Anyone can put up a website; anyone could “put up” a device subnet directly, or contribute to one of any number of IoT repositories subject to their policies.  Anyone could access what’s put up, subject to whatever policy limits the owner imposes.  The commercial terms of any of these relationships could be whatever the market sets.

IMHO, it would be the IoT repositories that would establish the value of the whole picture.  Any cloud provider could establish one, of course, including Amazon, Apple, Google, and Microsoft.  Interconnect players like Equinix could build them, and network operators could as well.  For some of the players like Amazon, Apple and Google, you could see their repository exploiting the mobile devices they offer (directly or as a platform).  Auto manufacturers could join somebody’s repository or start their own.  Same with home security companies, federal, state, and local government, and even public utilities.

What about standards?  Well, if we presume the IoT Repository model, and if we presume that we’re accessing primarily those devices with large installed populations, standards shouldn’t really be much of an issue.  A query can format data in any way that’s convenient, unlike an interface.

This model is also easily federated.  We have hotel and airline sites today, and discount travel sites that create front-ends to their models, and even a couple who front-end the front-ends.  We could build gateways between IoT repositories, high-level repositories that culled specialized data from others, or did specialized analytics.  Think cottage industry.

One of the most interesting points about this model is that it raises what might be called the “utility IoT” approach.  A company deploys a bunch of sensors and controllers and pays for the effort by 1) contributing the data to repositories and/or 2) developing and deploying their own IoT repository where they charge for access.  Doing this would be easier for telcos and public utilities who have historically low internal rates of return and tolerance for high first costs, but in theory any player could bootstrap into it.

This isn’t classic IoT; it’s not a universe where new OTTs mine sensors that somehow magically appear, magically create ROI, and magically generate traffic and equipment revenues.  It’s somewhere I think we could get to, and that seems a better approach to me.

NFV Management’s Final Dimension–OSS/BSS/NMS Integration

In prior blogs I looked at the NFV deployment model and the way that management as ETSI defined it would presumably work within a “typical” deployment.  The question this last of my more detailed explorations of NFV management will deal with is how “NFV management” relates to management and operations in a broader sense.  You can’t, after all, support services by managing only NFV infrastructure.  You almost certainly can’t build them that way either.

There’s no single management and operations model in play today among operators, but whatever is out there has to deal with those two areas in some way.  “Management” is normally applied to the physical resources used to build services, and “operations” to the business processes and commercial tasks related to service sale and maintenance.  It wouldn’t be unreasonable to say that operations is a customer-facing process and tool set, and that management faces resources.  Since the TMF links these two in its SID data model, it should be clear that many view management as sitting “under” operations.  The fact that many services today are still provisioned through NMSs says that many others see the two as separate.

Another TMF concept is useful in understanding management integration.  The enhanced Telecom Operations Map, or eTOM, is a picture of the steps associated with creating, selling, sustaining, and terminating a service.  There are a number of eTOM references, depending on whether you are or are not a TMF member, but a basic public version is available.  eTOM is divided into levels or layers, and at the most detailed level it’s a pretty comprehensive picture of what has to be done from soup to nuts, service-wise.

In the real world, most eTOM activities are intermingled between human and automated tasks, and between operations and management tools (using my previous division of the two).  From low-level eTOM, one could almost picture service operations as a modular function, where different pieces might be implemented in different ways and in different places.  As part of a service, NFV has to integrate in some way with eTOM.

How?  NFV, in the strict construction of the ETSI ISG, is a set of specifications that define how real network functions hosted in traditional devices could instead be deployed as cooperative software elements on some agile resource set.  The operative part of “NFV” that threatens the traditional management/operations model is the “virtual” part.  In effect, virtualization of any sort creates an intermediary.  We used to have customer-facing and resource-facing pieces, remember?  Well, now we have this “virtual” piece that might look like a resource from the customer side, a customer from the resource side, or all or none of the above.

In the ETSI E2E architecture, there is an implicit vision of how virtualization and management combine.  We have an Element Manager that’s almost cohabiting with VNFs and is responsible for management of the VNFs themselves in the “customer direction”.  We have a VNF Manager that is (via some intermediary elements) responsible for managing the resource relationships with the VNFs.  Presumably, though this isn’t stated explicitly, we have resource management tools and practices aimed at the NFV Infrastructure as a pool of devices.

IMHO, the ETSI activity has focused most of its specification work on the VNF Manager piece as the “management” approach.  This is consistent with what I’ve called a “black-box” view of network functionality.  A VNF is a function.  A function is managed as a function, not as a collection of chips (today) or software (under NFV).  What happens to make software into the manageable function we expect is largely the VNFM’s problem, and largely what ETSI worries about.  We could draw this out if you like.  Make a box all the way on the right and call it “traditional management/operations”.  Draw a box to the left of that, with a bidirectional arrow connecting the two, and call the new box “ETSI Element Manager”.  Continue working leftward with a box called “VNFs”, then one called “VNFM”, and finally one called “VIM/NFVI”, and you have the picture.
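Laid out that way (my rough rendering, not an official ETSI figure), the chain looks like this:

```
[VIM/NFVI] <-> [VNFM] <-> [VNFs] <-> [ETSI Element Manager] <-> [traditional management/operations]
```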

This picture doesn’t necessarily represent a break in any management model.  If we assume that the ETSI EM depicts the functional model of the underlying structure completely and accurately then we could substitute a VNF implementation for a real device 1:1 and nobody would care.  The devil is in the details.

Here’s an example.  We can horizontally scale components in NFV, right?  That’s supposed to be one of the benefits.  You don’t horizontally scale chips or devices on demand, so the current management model for Real_Widget wouldn’t have the properties of Virtual_Widget I’d like to sell, whatever a widget is.  However, I could in theory build a new Widget-MIB that had the fields necessary to represent incremental NFV functionality, and if my management system could contend with that extra data I’d still be fine.
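As a hedged illustration of that Widget-MIB idea, the fragment below takes the variables a real device might already expose and adds fields that only make sense for the virtual version.  The field names are invented for the example, not drawn from any actual MIB.

```python
# Illustration of a "Widget-MIB": the real device's management variables plus
# NFV-only fields (instance counts, scaling state).  Names are hypothetical.
REAL_WIDGET_MIB = {
    "ifOperStatus": "up",
    "packetsIn": 1204331,
    "packetsOut": 1198702,
}

VIRTUAL_WIDGET_MIB = {
    **REAL_WIDGET_MIB,            # everything the real device reported...
    "vnfcInstances": 3,           # ...plus state only a virtual widget has
    "scalingPolicy": "scale-out-at-70pct-load",
    "lastScaleEvent": "2015-06-02T14:07:00Z",
}

def management_view(mib):
    """A management system that ignores unknown fields still sees a 'widget'."""
    return {k: mib[k] for k in REAL_WIDGET_MIB if k in mib}
```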

Another issue less easily fixed is the concept of FCAPS, which is traditionally seen as the high-level vision of “network management”.  All of the letters in the acronym represent something that had a single logical meaning in the old device days but has two meanings in the world of NFV.  What’s a “fault”?  Is it a failure of the virtual device, meaning that we’ve exhausted the automatic remedies for replacement/reconfiguration of VNFCs that NFV might offer, or a failure of an underlying resource?

We could assume operations integration with FCAPS would work if we applied the acronym to the virtual world.  In the real world, downward to the resources, we have a problem of correlation because the relationship between resource faults and virtual device faults depends on how we’ve allocated resources and the extent to which we attempt automatic remediation.
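The correlation problem can be sketched in a few lines; the logic below is my own simplification of the point, not an NFV-specified behavior.

```python
# Whether a resource fault becomes a virtual-device fault depends on whether
# automatic remediation (e.g., redeploying a VNFC elsewhere) was attempted and
# succeeded.  All structures here are hypothetical.
def correlate(resource_fault, remediation_attempted, remediation_succeeded):
    if not remediation_attempted:
        # No automatic remedy defined: the resource fault surfaces directly.
        return {"virtual_device_fault": True, "cause": resource_fault}
    if remediation_succeeded:
        # The virtual device never "failed"; operations may still want the
        # event for capacity planning and cost accounting.
        return {"virtual_device_fault": False, "informational": resource_fault}
    # Remedies exhausted: now it's a fault at the service-visible level too.
    return {"virtual_device_fault": True, "cause": resource_fault,
            "note": "automatic remediation exhausted"}
```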

Which raises the challenge of virtualization.  If we want operations to know about real problems, real resources, real capacities and cost accounting, then we have to dip below the virtual.  We have to somehow tie operations processes to the deeper reality.  That’s also true of management processes, because as we travel down the traditional service-network-element-management stack in a virtual world, we find there’s a basement, which is the virtual-to-resource mapping.

ETSI talks in general terms about operations/management relationships with the NFV software, but the interfaces for these are not defined, nor are there any solid rules for how the relationships would be structured.  The TMF has a good opening approach in its customer/resource-facing service model and in the NGOSS Contract notion (now part of GB942, the TMF Business Services Suite) of steering service events to suitable processes through the intermediary of a service contract data model, but the specifics of this aren’t really clear even for the real-device world, and that part of the TMF model is (according to my operator sources) rarely implemented.

In a standards sense, then, we’re not solving the problem yet.  Unfortunately, we can’t just ignore management integration because there will surely be no pure NFV service early on, and likely never a pure NFV service even down the line.  There are going to be legacy devices in networks for a very long time, likely forever.  Given that, and given that operations efficiency and service agility aren’t very meaningful if you confine either or both to just a piece of a service, we need to harmonize management completely.  Here and there, federated and solo, NFV and legacy, applications and services, transport and connection.  Services to users have few boundaries even now, and management can’t have them either.

So here’s where I think we are.  There are only two ways to make a management connection from top to bottom.  One is to build “virtual-device MIBs” that could be based on current “real” MIBs but that would reflect data elements that represented any new service features, costs, or conditions that would arise in an NFV world.  We’d then have to populate these fields from real resource information as the service progressed through its lifecycle.  The other is to provide operations/management coupling through the virtual layer into the real resources.  My own work has always focused on the second of these approaches because I’m leery of having resources living behind a perpetual mask, but there’s no question that it would be easier to attack the former approach than the latter.

If this second approach is taken, then the service data model could be supplemented with the information collected when binding service components to each other and to resources.  These bindings could be traversed to dive into more detail on service state.  You could also, at any level of “object” in the model, describe the state/event relationships that would fulfill the TMF concept of mapping events to services.  It’s obviously more complicated, but if you did this you could define any current or newly developed operations process at any state/event intersection, and provide full integration of management components from top to bottom.
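A minimal sketch of that idea, under my own assumptions about the data model, might look like this: each object carries its bindings and a state/event table that names the operations process to dispatch at each intersection.

```python
# Hypothetical service data model with bindings and a state/event table.
# Every name here is illustrative, not drawn from the TMF or ETSI specs.
SERVICE_MODEL = {
    "name": "BusinessVPN-123",
    "state": "active",
    "state_events": {
        ("active", "resource_fault"): "ops.fault_handling",
        ("active", "scale_request"):  "ops.capacity_change",
        ("ordering", "activate"):     "ops.service_activation",
    },
    "bindings": [
        {"object": "vFirewall-7", "resource": "server-pool-east/vm-442"},
        {"object": "vpn-core",    "resource": "legacy-mpls-domain-2"},
    ],
}

def handle_event(model, event):
    """Steer an event to the operations process named for the current state."""
    process = model["state_events"].get((model["state"], event))
    if process is None:
        return "unhandled"          # escalate, in a fuller implementation
    return f"dispatch {process} with bindings {model['bindings']}"

print(handle_event(SERVICE_MODEL, "resource_fault"))
```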

We have to do either a virtual-device-MIB or data-coupled management model; I don’t believe any other options even exist.  Unfortunately, I don’t think we have a convincing model for either in place; not in the ETSI ISG or TMF.  So I’d like to see operators and vendors cooperating (perhaps even in PoCs and lab trials) to explore the consequences of each approach and the alternatives for implementation.

The Three Paths to NFV Victory (and the Risk of Detours)

NFV is turning out to be a lot more complicated than it first appeared, and that’s particularly true in the area most critical to vendors—the business case.  While the question of making a broad business case for NFV is weeding out a lot of secondary players, it’s not deciding a market leader yet.  In fact, it’s not even clear what a winning strategy will be.  We have three options out there, and now’s a good time to look at them.

What most operators want from NFV is what I’ll call a systemic model for deployment, something that can justify a broad commitment to NFV (and almost always SDN, collaterally) and bring NFV to the largest collection of services and customers.  The average operator I’ve talked with thinks that systemic NFV could touch as much as 75% of all customers.

In order for systemic NFV to work, you have to be able to deliver operations efficiencies and service agility, because operators (particularly CFOs) say that capex improvements won’t create enough momentum or even fully justify NFV complexity.  That means you need to extend ETSI-modeled NFV both into legacy infrastructure and into OSS/BSS orchestration.  You also have to be able to host a large number of diverse VNFs.

From a sales perspective, systemic NFV is definitely a “hang in there” proposition.  The sheer scope of the success goal means that nearly everyone who signs anything will have to sign off on systemic NFV.  It will touch every piece of the network, every major vendor, every craft practice, and every operations/management software tool.  It’s also so big that it’s hard to grasp, and many proponents of this model are trapped in small-scale on-ramp projects that might or might not lead to a realization of the broad goal.

The second approach to NFV success is the magic bullet model.  Rather than trying to build up NFV to a broad base through a wide range of services, magic-bullet proponents seek to identify a killer app, a single service that has so profound a benefit case that it can carry NFV into deployment by itself.  Once this app has greased the NFV skids, other applications can then follow along.

Magic bullets, to succeed, have to be both accurate enough and massive enough, and that’s the current rub.  The obvious candidate for a magic-bullet attack is mobile services, because mobile infrastructure is still the focus of capex.  It’s easier to deploy a new technology where money is still being spent on a large scale than to displace already-bought gear elsewhere.  The question is whether mobile is the right target.

The risks of mobility lie in extensibility in the service domain.  Yeah, we can apply NFV to manage costs in mobile networks, and perhaps even to improve operations efficiency, but the service agility goal depends on services that are still hypothetical.  IMS and EPC are candidates for early NFV exploitation, but they’re specialized multi-tenant applications.  Services built to demonstrate agility would have to be built both on IMS and on NFV to be relevant, and right now we use both IMS and NFV only for efficient hosting—we don’t have a model of service-building.

The third NFV strategy is what I’ll call (given my penchant for quoting old poems and music) the September-Song approach.  “…I let the old earth take a couple of whirls…” is the relevant theme.  Septemberish NFV advocates are essentially saying that NFV is inevitable, that somebody will hit on the magic formula for deployment.  That somebody will then spawn explosive NFV growth, which will create an explosive growth in demand for something NFV consumes a lot of—servers, data center switches, software licenses.

If you’re a platform (server, OS) vendor, there’s something to be said for the wait-and-see approach, because 1) you don’t have to go out and create and merchandise a full NFV solution, and 2) you don’t face the risk of alienating the players who do manage to make a business case.  It’s a kind of arms-merchant approach to the NFV wars, because you have something everyone will need.

The obvious problem the Septemberists face is the risk that an NFV magician who’s able to make the business case will sell servers and software too.  That could happen with either a systemic NFV player or a magic-bullet player, and the result would be that Septemberists would have to fight their way into deals whose business case is under the control of another vendor.

We’ve had examples of competitive evolution for all these approaches recently, which I think proves that none of them are off the table yet.  That also means none are winning convincingly.

HP is the paramount player in the systemic camp.  Their OpenNFV has legacy device orchestration, OSS/BSS and NMS integration, a strong ecosystem, a good on-boarding model, and good engagement in a variety of trials to prove out service breadth.  Their problem has been that they’ve become perhaps a bit obsessed with the trials and have underplayed their systemic assets.  That’s easy to do because it’s hard to make something like NFV operations efficiency exciting.  In the service agility area, services are VNFs and you can’t be seen to favor a given partner if you’re a partnership-driven ecosystem.

HP doubled down on VNF partnerships this week with a big NEC announcement.  One thing this shows is that larger players like NEC see HP as a viable platform going forward, an endorsement that’s likely to play well with operators and with other prospective partners.  But the press release on the deal didn’t mention any specific services, which means that it doesn’t add a lot of near-term impetus to HP’s drive.

In the magic-bullet class of NFV player, Alcatel-Lucent has been making news through NFV-ready IMS and IMS-related offerings.  A highly focused mobile drive has given Alcatel-Lucent a presence even in accounts where another vendor (HP, for example, with Telefonica) already had a win.  Alcatel-Lucent has, in its Rapport collaboration framework, an application platform to facilitate service creation that’s NFV- and IMS-compatible, and so it addresses the limitations of early mobile-service targeting I noted above.

The challenge is that platforms do not a service make.  IMS has been a theoretical platform for rich communications services for a decade and it’s not killing off OTT competitors.  Part of the problem is that it’s not entirely clear what platform capabilities Alcatel-Lucent’s Rapport and IMS actually bring to service developers or VNF developers, nor is it clear how NFV and IMS cooperate to be greater than the sum of the parts.  Alcatel-Lucent needs to make all that clear.

The Septemberist giant is of course Intel.  An optimal deployment of NFV could generate over a hundred thousand new data centers, ten times that number of new servers, and a heck of a lot of new CPU chips.  Intel is the clear leader to pick up the NFV financial marbles because they’re a part of almost any credible winning strategy.

To address some of the risks on the platform side, Intel has been pushing its Wind River Titanium Server strategy, and recently won a Nokia validation that might signal a firm link with the leading magic-bullet player—Alcatel-Lucent—when/if the Alcatel-Lucent/Nokia deal closes.  Wind River is also a platform partner for systemic leader HP.

For all of this, Intel still hasn’t taken a step toward making the business case.  Yes, they have the right hardware to deploy NFV.  Yes, they have the right software platform.  They don’t contribute much to the direct business case, though, and so they are still at risk from a slow-roll NFV that undershoots its potential, or from the arrival of a competitor able to take advantage of the slow roll to get into the game.

So where does this leave us?  I think that it will be difficult, though not impossible, for any player—even HP—to make a pure systemic run at the NFV opportunity.  It’s probably too late to socialize that complex story, though I think they need to try.  I think that mobile is going to be hard to use as a truly universal magic bullet because it doesn’t hit enough operators, and doesn’t hit hard enough to push universal adoption, unless you build a service framework on it.  And I think that waiting for somebody else to win and hoping to ride on their coat-tails is always an unacceptable risk.

Something evolving from mobile has to be the answer, and I think that “something” is the always-overplayed Internet of Things.  All three of our giants are trying to come to terms with a real IoT architecture, and I think that whoever wins it can win NFV too, as long as they make what should be the obvious connections.  That would create a truly massive win, perhaps the largest in networking since the early days of IP convergence.