The Strategy Behind SDN and NFV “Lite”

One of the questions that operators are asking at the end of the first quarter is “Just how much real SDN and NFV do we need?”  I pointed out in prior blogs that a successful OSS/BSS modernization could achieve more of the service agility and operations efficiency benefits than you’d get with infrastructure modernization.  The majority of operators are looking at this same sort of benefit-targeted evolution, even if most haven’t accepted the notion that it’s really an OSS/BSS shift.  What the majority are looking at is what we could call the “lite” version of SDN and NFV.

SDN in a formalistic sense is based on the substitution of central control of routes for the usual adaptive control found in today’s switch or router networks.  While it would be theoretically possible to simply make current networks work with OpenFlow, the benefits claimed for SDN rely on using cheaper white-box devices.  To transform infrastructure to this model would obviously involve a lot of money and risk, so operators have been looking for a different way to “software-define” their networks.
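
To make the central-versus-adaptive distinction concrete, here’s a minimal sketch, in Python and with purely illustrative names (it isn’t OpenFlow’s actual API), of a controller that owns the topology view, computes a path, and pushes simple match/forward entries down to switches that do nothing but look up and forward:

```python
# Minimal sketch of the SDN idea: switches hold dumb forwarding tables; a
# central controller, which owns the topology view, computes routes and pushes
# per-hop entries down.  Names are illustrative, not OpenFlow's real API.

from collections import deque

class Switch:
    def __init__(self, name):
        self.name = name
        self.flow_table = {}                 # destination -> next hop

    def install_flow(self, dest, next_hop):
        self.flow_table[dest] = next_hop     # rule arrives from the controller

class Controller:
    def __init__(self):
        self.switches = {}                   # name -> Switch
        self.links = {}                      # name -> set of adjacent switches

    def add_switch(self, name):
        self.switches[name] = Switch(name)
        self.links.setdefault(name, set())

    def add_link(self, a, b):
        self.links[a].add(b)
        self.links[b].add(a)

    def provision_route(self, src, dest):
        """Compute a shortest path centrally and push per-hop flow entries."""
        path = self._shortest_path(src, dest)
        for hop, next_hop in zip(path, path[1:]):
            self.switches[hop].install_flow(dest, next_hop)
        return path

    def _shortest_path(self, src, dest):
        seen, queue = {src: None}, deque([src])
        while queue:
            node = queue.popleft()
            if node == dest:
                break
            for nbr in self.links[node]:
                if nbr not in seen:
                    seen[nbr] = node
                    queue.append(nbr)
        path, node = [], dest
        while node is not None:              # walk predecessors back to src
            path.append(node)
            node = seen[node]
        return list(reversed(path))

ctl = Controller()
for s in ("A", "B", "C"):
    ctl.add_switch(s)
ctl.add_link("A", "B")
ctl.add_link("B", "C")
print(ctl.provision_route("A", "C"))   # ['A', 'B', 'C']; B now forwards C-bound traffic onward
```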

NFV is in the same boat.  What the original Call for Action white paper (October 2012) envisioned was the replacement of fixed appliances with pool-hosted software instances of the equivalent features.  This is another infrastructure modernization, in short, and the targeted cost savings is highly dependent on achieving good economy of scale in that resource pool.  That means mucho resources, and a correspondingly high level of transitional risk.

You’ve all probably read the stories about SDN and NFV adoption, and you’d be justified in thinking these technologies were really taking off.  But in nearly all the cases, what’s taking off is a much narrower approach than the full exercise of what we’d think of as the foundation standards.  Hence, SDN or NFV Lite.

The principle behind all these deployment models is to go after agility and, to a lesser degree, self-care.  Give the users a portal to order services that don’t require running a new access path, and then provide the minimal facilities needed to deploy these services using as much of current infrastructure as possible.   The goal is to bring a change to the customer experience itself, cheaply and quickly.

This may sound like my “operations first” model, but it’s actually very different.  Many of the operators have grabbed products that adapt current management capabilities to portal usage rather than even considering broader changes to OSS/BSS.  One operator who did this told me “We’re trying to be agile at the business level here, and our operations processes are nowhere near agile.”  What they’re ending up with is actually more OTT-like.

I think this is a current reflection of a trend that I’ve encountered as far back as three years ago, when operators’ own organizations were dividing over whether to modernize operations or redo it.  They seem to be settling on a creeping-commitment version of the latter, where they nibble at the edges of operations practices using customer portal technology and management adapters.

On the NFV side, we can see this same trend in a slightly different form.  You take a standard device that has a board or two that lets it host features, and you use the vendor’s management tools to maintain the features by controlling what’s loaded onto the board.  All of this can be driven by (you guessed it!) a customer portal.

All of this has mixed implications for SDN and NFV.  We’re taking steps that are labeled with the standards’ names, but not in the way the standards envisioned.  So will these early steps then lead (as the practices mature) to “real” versions of SDN and NFV or will they end up creating silos?  That’s an important question and one that’s very hard to answer right now.  There are two negatives to this Lite movement.  One is technical dilution and dispersion, and the other is low-apple benefit strangulation.

Obviously, a non-standard accommodation to SDN or NFV principles could very easily evolve into something that couldn’t be extended either functionally or in terms of open vendor participation.  A silo, in short, and a risk to both SDN and NFV.  For SDN, providing agile management interfaces could easily mean lock-in to a specific vendor.  Operators already fear Cisco is trying this with their Application-Centric Infrastructure (ACI) model.  The problem is that the SDN and NFV specifications have focused on the bottom of the problem, control of new devices and deployment of virtual functions, when the real issues are higher up.  These Lite models thus live above the standards, in the wild west of modernization.

The second problem may be the most significant, though.  You can get to the future from the present along a variety of paths, each of which has a kind of “benefit surface” that shows how it will impact cost and ROI over time.  These Lite strategies are appealing because they grab a big chunk of the benefits on the table at a much lower cost than a full, standards-compliant solution.  They also open a potentially interesting competitive dynamic.

The Lite model of SDN and NFV creates a portal-to-management pathway through which a lot of new services could be created.  This could potentially augment, or bypass, both service-level and resource-level orchestration.  That means operations vendors could jump on the approach to enhance OSS/BSS participation in transformation, or others could use it to minimize the need for operations participation in transformation.  Same with the current and “true” SDN and NFV vendors—they could either build their top-end processes based on a portal notion, or other vendors could use portals to offer NFV benefits without NFV standards.

The most interesting notion here might be the potential to use portals as the front-end of both SDN and NFV, even to the extent of requiring that OSS/BSS systems interface through them.  That would allow vendors of any sort to incorporate both service- and resource-level orchestration into their offerings and present “virtual services” to operations systems instead of virtual devices.  From the bottom, it would mean that SDN and NFV would be low-level deployment elements, with no responsibility for organizing multipart models.

A service and a resource are, IMHO, both model hierarchies, and so are multi-part.  That’s why you have to orchestrate in the first place—you have many players in a service and many parts to a resource deployment.  The Lite strategy could make orchestration into an independent product space, one that does all the modeling and orchestration and uses SDN, NFV, and OSS/BSS only for the bottom-level or business-level stuff.
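
As a purely illustrative sketch of that hierarchy idea (the domain names and service parts are hypothetical), an independent orchestrator would walk a service model tree and hand each leaf to whatever realizes it, whether that’s NFV, SDN, a legacy EMS, or an OSS/BSS process:

```python
# Sketch of the "model hierarchy" idea: a service is a tree of parts, and the
# orchestrator walks the tree, dispatching each leaf to the domain that
# realizes it.  Domains and parts here are illustrative only.

class ModelNode:
    def __init__(self, name, domain=None, children=None):
        self.name = name
        self.domain = domain              # only leaves bind to a realization domain
        self.children = children or []

def orchestrate(node, handlers, depth=0):
    """Recursively decompose the model; leaves are handed to their domain."""
    indent = "  " * depth
    if node.children:
        print(f"{indent}{node.name}: decomposing {len(node.children)} parts")
        for child in node.children:
            orchestrate(child, handlers, depth + 1)
    else:
        print(f"{indent}{node.name}: handing off to {node.domain}")
        handlers[node.domain](node)

service = ModelNode("BusinessVPN", children=[
    ModelNode("AccessPath", domain="legacy"),      # existing MPLS/Ethernet access
    ModelNode("vCPE-Firewall", domain="nfv"),      # hosted VNF
    ModelNode("CoreForwarding", domain="sdn"),     # centrally controlled paths
    ModelNode("Billing", domain="oss_bss"),
])

handlers = {
    "legacy":  lambda n: None,   # call the existing EMS/NMS
    "nfv":     lambda n: None,   # deploy and connect VNFs
    "sdn":     lambda n: None,   # push forwarding rules
    "oss_bss": lambda n: None,   # kick off business processes
}

orchestrate(service, handlers)
```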

The higher-level portal-to-orchestration link could be the offense to the “deep orchestration” defense I’ve described.  If you accept the notion that an operations-first strategy can deliver the most benefits, you need to adapt deeper SDN and NFV deployment technology to integrate with operations.  If you want to realize your own benefits without waiting for operations technology to mature, you might give SDN/NFV Lite a try.

Unraveling the VNF Opportunity

An important question in the NFV space is also a simple and obvious one: “What is a good VNF?”  This question is something like the rather plaintive question asked early in the NFV evolution: “Where do I start?”  It’s a close relative of the “What do I do next?” question that many vCPE pioneers are already asking.  Most of all, it’s related to the key sales question “Who do I get it from?”

A virtual function is something ranging from a piece of a feature to a set of features, packaged for unified deployment.  As far as I’ve been able to tell, all the VNFs so far offered are things that have been available for some time in the form of an appliance, a software app, or both.  Many of them are already offered in open-source form too.  Given all of this, it’s not a surprise that there’s already a conflict in pricing, as operators complain that VNF vendors are asking too much in licensing fees.  That’s simply a proof point that you can’t do the same thing in a better way unless that means a cheaper way.

In the consumer market today, the licensing fees for VNFs are probably indefensible across the board.  The consumer’s needs for the standard vCPE-type VNFs are basic, they can buy an appliance for less than 50 bucks that fulfills them all, and some CPE is necessary for consumer applications because you need in-home WiFi.  Even for enterprises, a half-dozen big network operators have told me that they can’t offer enterprises virtual function equivalents of security and connectivity appliances at much of a savings versus the appliances themselves.  Managed service providers who add professional services to the mix have done best with the vCPE VNFs so far.

Addressing the licensing terms is critical for VNF providers and also for NFV vendors who’ve created their own ecosystems.  The operators report two issues, and sometimes a given VNF provider has both of them.  The first is a very steep first tier price, often tied to minimum quantities.  This forces the operator to surrender a lot of their incentive to use hosted VNFs in the first place; they don’t have a favorable first-cost picture.  The second problem is lack of any option to buy the license for unlimited use at a reasonable price.  VNF providers say this is because an operator could end up getting a windfall; the problem is that the alternative to that can’t be that the VNF vendor gets one.

Licensing challenges like this are killing both residential and SMB VNF opportunity, slowly but surely.  They’re also hurting NFV deployment where it’s dependent on vCPE applications.  Operators have been outspoken at conferences and in interviews that something has to be done here, but perhaps the VNF providers think they have the operators at a disadvantage.  They don’t.

A technical approach to reducing the licensing issues would be supporting multi-tenant VNFs.  The problem with vCPE in particular is that the cost of licensing has to be added to the resource costs for hosting.  The smaller the user, the lower the likely revenue from that user, and the less tolerable it is to host that user’s functions on independent VMs.  Even containers won’t lower the cost enough to get to the mass-market opportunity.  VNF providers, though, are reluctant to provide multi-tenant solutions, perhaps again because they see dollar signs.
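
Some quick arithmetic, with every number purely hypothetical, shows why the hosting model matters so much at the low end of the market:

```python
# Purely illustrative arithmetic (all numbers hypothetical): why per-tenant VMs
# or containers are hard to justify for small customers, and why multi-tenancy
# changes the picture.

monthly_revenue_per_user   = 20.00   # small-business managed service, assumed
license_per_user           = 4.00    # per-instance VNF license, assumed
vm_hosting_per_user        = 9.00    # dedicated VM share of server/power/ops, assumed
container_hosting_per_user = 4.50    # dedicated container, assumed
shared_hosting_per_user    = 0.60    # one multi-tenant instance serving ~200 users, assumed

def margin(hosting, license_fee=license_per_user):
    cost = hosting + license_fee
    return monthly_revenue_per_user - cost, cost / monthly_revenue_per_user

for label, hosting in [("dedicated VM", vm_hosting_per_user),
                       ("container", container_hosting_per_user),
                       ("multi-tenant", shared_hosting_per_user)]:
    m, ratio = margin(hosting)
    print(f"{label:>12}: margin ${m:5.2f}/user/month, cost is {ratio:.0%} of revenue")
```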

Progress in this area is almost certainly going to come too late.  For VNFs that represent basic security and connectivity, I don’t think the market will live up to expectations.  Operators have no incentive to work hard to sell something that’s only marginally profitable to them, and VNF software companies don’t want to let go of the notion that somehow this is their brass ring.  There is already a ton of good open-source stuff out there, and I think that there’s a lot of impetus developing from operators to make it even better.  In fact, I think that we’re long overdue in launching some initiative to figure out how to facilitate open-source ports of functionality to VNF form in an organized way, what I’ve called VNF Platform as a Service or VNFPaaS.

A good VNFPaaS approach could also help non-open-source providers, particularly those who want to sell premium data services to businesses.  The Internet is becoming the cost-preferred approach for private network building, and the evolution of SD-WAN could radically accelerate that trend.  If we were to see overlay VPNs/VLANs take over in terms of connection services, then any OTT provider could build in all the security, connectivity, and other useful features that the market could desire.  If that even starts to happen, the high velocity of the OTT players could make it impossible for operators to catch up.

We’ll get VNFPaaS eventually, and when we do, open source will eat the enterprise VNF security/connectivity space.  So what about VNFs in other areas?

The most promising area for VNFs is also the most difficult for most vendors to address.  Mobile use of NFV is almost a slam dunk, which means that everything related to RAN, IMS, and EPC will eventually be turned into virtual functions.  The challenge is that the vendors with the inside track in this area are the vendors who already supply RAN, IMS, and EPC or who have compelling demonstrations of their experience and credibility.  There aren’t many of these.

In theory, it would be possible to create “vCPE-like” VNF opportunities for mobile users.  Obviously not by hosting at the point of connection, on a per-user basis, though.  Could IMS or EPC be equipped with hooks for mobile-user security?  Surely, but so far this hasn’t been a mass-market opportunity either.  What operators should be asking for is a VNFPaaS interface with IMS/EPC so that they could use their (multi-tenant, fairly priced) VNFs with mobile infrastructure.

This could be of critical importance for broader opportunity realization because of the whole IoT-and-contextual services thing.  Even the currently dominant IMS/EPC players are at risk down the line if they fail to support embedded services to facilitate these two opportunity areas.  While data path security is a fair lock for the operators today, as we noted earlier it is feasible that shifting to Internet-based overlay VPN/VLANs could kill that edge.  For the mobile space, the “personal agent” model could do the same, because if a user is really communicating directly only with a personal agent, then whoever owns that agent can provide all the network features the user would see (and buy).

The notion of a VNFPaaS is obviously critical to the success of VNF vendors because without it the relationship between VNFs and the rest of the NFV process isn’t standardized enough to support agile development and wide-ranging features.  The ETSI specifications, which in fairness were not (at least originally) supposed to define an implementation, are not sufficient to ensure the VNF ecosystem can evolve.  Vendor strategies, even open ones, are likely to differ among vendors, particularly if large vendors see a chance to create a lock-in because they have a favored VNF that will pull through their approach.  This issue should be a priority with the ETSI ISG because nobody else is likely to take it up.

NFV demands a different vision of networking if it’s to succeed on a large scale.  VNF providers and operators alike are trying to drive their own revenue models without much regard for the other stakeholders in the game.  However, everyone in the VNF space has to understand that the operator is not compelled to migrate to NFV, and that sellers who insist that buyers accept unfavorable profit balances rarely prosper themselves.  It’s going to take time and effort to get all this shaken out, and accommodations on the VNF side are in my view inevitable.

The Metro Dynamic in Services and Infrastructure

At the service level we all know that mobile broadband gets most of the capex.  In topology terms, the big focus of capex is metro networking.  It’s so important it’s even been driving operator M&A.  If you look at “opex” in the most general terms, meaning non-capital expenses, you find that paying for backhaul and other metro services is for many operators the largest single element.  Finally, something that’s been true for years is even truer today—over 80% of all revenue-generating services transit less than 40 miles of infrastructure, so they are often pure metro.

The big question for metro, for operators, and for vendors is exactly how the metro impetus to capex can be supported and expanded without killing budgets.  That means generating ROI on incremental metro investment, and that’s complicated because of the dichotomy I opened with—there are services and there is infrastructure, and the latter is the sum of the needs of the former.

If we were to look at metro from a mobile-wireless-only perspective, things are fairly simple for now at least.  Mobile broadband is sort-of-subject to neutrality but the notion of true all-you-can-eat isn’t as established there and may never be.  Operators tell me that ROI on mobile investment is positive and generally running right around their gross profit levels (15% or so) but wireline ROI hovers in the near-zero area, even going negative in some places.

One telling fact in this area is AT&T’s decision to phase out U-verse TV (a TV-over-DSL strategy that those who’ve read my blog long enough know I’ve never liked) in favor of satellite.  Another is that Verizon has capped new growth in FiOS service area, and now focuses on exploiting current “passes” meaning households where FiOS infrastructure would already support connection.

A part of the problem with wireline is the Internet.  Neutrality rules make it nearly impossible for operators to offer residential or small business connectivity at a decent return.  Enterprises are interested in new connectivity-related services only if they lower costs overall.  Even the VPN and VLAN services enterprises consume are at risk to cannibalization by Internet-overlay SD-WANs and VPNs.

Every operator on the planet knows that there are “producers” and “settlers” in terms of service revenue potential.  The former are likely to do something revenue-generating and the latter are settling on legacy services that can only become less profitable to operators over time.  Today, to be a producer, you have to be a consumer of TV service over wireline (which, increasingly, means fiber or cable), a post-pay mobile broadband user or (preferably) both.  For the rest, the goal is to serve them at the lowest possible cost if you can’t rid yourself of them completely.

The ridding dimension is amply demonstrated by regulated telcos’ selling off of rural systems.  They know they can’t deliver TV there, they can’t run fiber, and so wireline there is never going to cut it.  Better to let these groups of customers go, presumably to rural-carrier players who qualify for subsidies.

The lowest-cost dimension is looking more and more like a form of fixed wireless.  While all the wireless broadband talk tends to center on eager youngsters who have iPhones, the advances in wireless technology combine with the growing appetite for mobility to generate the once-revolutionary idea of bypassing the wireline last mile completely.  That would mean that wireless and wireline would converge in infrastructure terms except where wireline broadcast video delivery is profitable.

Wireless rural services make a lot of sense, and so would the replacement of copper loops with wireless in urban areas where the plant’s age is starting to impact maintenance costs.  Wireless also dodges any residual regulatory requirements for infrastructure sharing, already under pressure in most major markets.

Wireless creates a whole different kind of metro topology.  In the 250-odd metro areas in the US, there are about 12,000 points of wireline concentration, meaning central offices and smart remote sites.  That equates to an average of about 50 offices per metro. There are more than 20 times that number of cell sites and they’re growing, which wireline offices are not.  We’re already evolving to where the mobile sites are supported by Evolved Packet Core and the infrastructure is not subject to sharing.  Net neutrality rules are also different, at least slightly, for mobile in nearly all the major markets.
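
For reference, here’s the back-of-envelope math behind those figures, using the inputs as stated above:

```python
# Back-of-envelope check of the metro numbers quoted above (inputs as stated).

metro_areas        = 250
wireline_offices   = 12_000                  # central offices + smart remote sites
offices_per_metro  = wireline_offices / metro_areas
cell_site_multiple = 20                      # "more than 20 times" wireline offices

print(f"wireline concentration points per metro: ~{offices_per_metro:.0f}")
print(f"cell sites per metro (lower bound):      ~{offices_per_metro * cell_site_multiple:.0f}")
# Roughly 48 offices and 960+ cell sites per metro, which is why mobile
# topology, not wireline, increasingly shapes the metro fiber build.
```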

NFV and IoT could do even more.  The distribution of feature-service-hosting data centers could add hundreds or even thousands of sites per metro area, and all of these would be points of service aggregation so they’d be fiber-served.  What we’re headed for is a fairly dense metro fiber fabric, perhaps linking a fair population of mostly-small data centers.

This is the future that the MEF wants to exploit with their Third Network concept.  It’s also the future that may determine whether SDN a la OpenFlow is ever going to mean anything in a WAN.  The reason is that we have three different models for what a metro network would be, and almost surely only one of them will prevail.

One model is SDN, which says that services should be based on virtualized-Level-1 technology for aggregation, traffic management, recovery, and service separation.  We’d then build higher-layer services by adding devices (virtual or real) that connect with these virtual wires.  This model would, if adopted, transform networking completely.

The second model is the MEF approach, which says that you build Ethernet networks at Level 2 and these networks are then the basis for not only Level 2 services but for Level 3 services.  With this model, switching is expanded and routing could in theory be containerized to virtual instances, perhaps even running in CPE.

The final model is the IETF model, which not surprisingly builds an IP/MPLS network that offers (in a switch of OSI thinking, to say the least) Level 2 over Level 3 among other things.  This network would retain IP and MPLS and rely even more on BGP.

You can see the common thread here, which is that we’re talking about competing models for metro infrastructure, an underlayment for services rather than services in themselves.  Implicit in that is the fact that whichever model is chosen will influence what level of technology gets purchased, and thus which vendors win.  The SDN model favors white-box plays, the MEF model favors Ethernet, and the IETF model favors routers.  Since metro technology is growing so much, under so many different pressures, there’s no meaningful incumbency to consider.  Anything could win.

Myths, Marketing, and How To Make Money on Network Services

There is absolutely nothing as important to a business as profits.  For large public companies in particular, profits are what drive stock prices and for the last decade they’ve also driven executive compensation.  If we want to understand businesses as a group, we need to understand how they profit.  Which is why some of the discussions about “new services” really frost me.  They presume buyers will work against profits, and so against their own interests.

The engine that drives enterprise technology purchasing is productivity.  I just saw a clip on LinkedIn that was talking about the high revenue per employee for tech companies like Microsoft and Facebook.  The baseline opportunity for high revenue per employee is set by industry (what you sell) but by making workers more productive you can raise the number, and that improves your profits.  Since the dawn of the computer age, we have had three waves of productivity revolution (clearly visible in the US in Bureau of Economic Analysis data) and these three waves coincide with periods when IT spending growth was high relative to GDP growth.  That’s what we’d like to see for our industry now, but that’s not the point here.

There are a lot of network services we can hypothesize, but the problem is that they still provide the same properties of connecting things as the old services.  Thus, they have the same benefits, and thus they can be beneficial only to the extent that they lower costs overall.  So when Cisco, for example, says that on-demand services are the future, they’re really saying that users would consume ad hoc bandwidth because it would save them money.  That, of course, means it would cost the service providers revenue.

The fact is that there is nothing that can be done to improve revenue from connection services.  Five, ten, twenty years from now the revenue per bit will be lower and that’s inevitable.  If we want new revenue we have to look somewhere else for it.  If we want new revenue from enterprises, we have to look to our productivity story.

Productivity isn’t helped by bandwidth on demand—cost is.  Productivity isn’t helped by hosting features in the network when the features are already available in the form of CPE, either.  We can’t toss around simplistic crap like “higher OSI layers”.  Earth to marketers—there are no “higher OSI layers” in the network above the IP layer.  The other layers are on the network, meaning they reside with the endpoints.

So what do we do?  In all of the past productivity waves, IT improved productivity through a single, simple paradigm.  It got IT closer to work.  We used to punch cards and process them after the transaction had long been completed in the real world.  We moved toward transaction processing, minicomputers, and PCs, and these let us put computing power directly in a worker’s hands.  So one thing that’s clear is that the next wave (if the industry gets off its duff and manages to do something logical) will be based on mobile empowerment.

Mobile empowerment means projecting IT through a worker’s mobile device.  If we presume that projection is going back to the simple issue of connecting workers and pushing bits to them, we’ve simply held back the sea at the dike for a bit longer.  What has to happen is that we create new services that live in the network.

Mobile networks already have live-in services.  IMS and EPC form the basis for mobility and subscriber management and so they’re essential to mobile networks.  They are also “services” in that they rely on network-resident intelligence that is used to enhance value to the service user.  Content delivery networks have the same thing; you enter a URL for a video (or click on it) and you’re directed not to the content source somewhere on the Internet, but to a cache point that’s selected based on your location.

What are the services of the future?  They’re the IMS/EPC and CDN facilitators to mobile empowerment and mobile-device-driven life.  That much we can infer from the financial and social trends, but in technology terms what are they?

A mobile worker is different from a fixed worker because they’re mobile.  They’re mobile-working because their task at hand can’t be done at a desk, or presents itself to be done at an arbitrary point where the worker might not be at the desk to handle it.  That says two things about our worker—they are contextual in consuming IT resources and they are event-driven.

Contextual behavior means reacting to conditions.  Your actions are determined by your context, and context means where you are, what you observe, what your perceived mission is, how you’re interacting with others, and so forth.  Every one of these contextual elements is a service, or at least a potential one.  Yes, you can use a phone GPS to figure out where you are in a geographic sense, and yes that can also provide you with some understanding of the real question, which is where you are relative to things that are important to you.  But suppose that you could find those things by simply relating your “receptivity” to a network service, and have that service now feed you the necessary information?
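
Here’s a rough sketch of what that “receptivity” handoff might look like, with all names and data hypothetical: the device publishes a small profile of location, mission, and interests, and a network-resident context service returns only what matches:

```python
# Sketch (names and data hypothetical) of the "receptivity" idea: instead of
# the device pulling raw GPS and guessing, the user publishes a small profile
# of what matters right now and a network-resident context service pushes back
# only the items that match.

from dataclasses import dataclass, field

@dataclass
class Receptivity:
    location: tuple                                  # (lat, lon) from the device
    mission: str                                     # e.g. "get to a 2pm meeting"
    interests: list = field(default_factory=list)    # ordered by priority

@dataclass
class ContextItem:
    kind: str
    detail: str
    relevance: float

def context_service(profile: Receptivity) -> list:
    """Stand-in for a network-hosted service that already knows the area:
    shops, traffic signals, transit, presence, and so on."""
    catalog = [
        ContextItem("navigation", "cross 38th St now, light turns in 20s", 0.9),
        ContextItem("commerce",   "coffee shop 40m ahead on your route",   0.7),
        ContextItem("commerce",   "camera store two blocks off-route",     0.3),
    ]
    wanted = {"navigation"} | set(profile.interests)
    return sorted((c for c in catalog if c.kind in wanted),
                  key=lambda c: -c.relevance)

me = Receptivity(location=(40.7549, -73.9840),
                 mission="walk downtown for a 2pm meeting",
                 interests=["commerce"])
for item in context_service(me):
    print(item.detail)
```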

This model also works for events.  If you’re walking from mid-town to downtown Manhattan (a nice walk if you want some exercise), and if you’re in a hurry, you might try to time out the lights so that you don’t have to wait.  Suppose your phone could tell you when to speed up, when to cross, etc.  OK, some are saying that IoT would let this all happen by providing sensors to read, but who deploys the sensors if everyone gets to read them for nothing?  Anyway, how would you know what sensors to query for your stroll downtown, and how to interpret results?

Social context presents the same situations.  You have a set of interests, for example, that might be prioritized.  You’re heading to a meeting—the work mission that has priority.  You are also looking for some coffee—next-highest priority.  You might have an interest in cameras or shoes.  You might also receive calls or texts, either relating to your mission or introducing other interests (“If you see a nice bottle of wine…”) and you need to field these events too.

This is where the money is, both for enterprises who want to enhance productivity and for network operators or others who want to get some additional revenue.  Make a worker’s day more efficient and you improve productivity.  Make sales processes more efficient and you can improve sales.

Network transformation isn’t going to happen to support on-demand connectivity.  If there’s a value to on-demand connectivity it’s not going to come from applying it to today’s applications, but to supporting one of these contextual/event applications that we’re still groping to accept.  NFV isn’t going to happen because of cloud firewalls or vCPE, it’s going to happen because somebody uses it to deploy contextual/event services.

We don’t know anything about these services today because we’re focused too much on myth-building.  None of the new things we’re talking about are going to transform worker productivity or save telco profit per bit.  Myths are nice for entertainment, but they can really hurt if you let them color your perception so much that the truth can’t get through.  Fearing the Big Bad Wolf can make you vulnerable to alligators, and waiting to win the lottery can blind you to the need for an investment strategy.

Telcos and vendors are equally at fault here.  It’s easier to sell somebody a modest evolution in capability because it’s easier for the buyer to understand and easier for the seller to promote.  But modest evolutions offer, at best, modest benefits.  We should be aiming higher.

What NFV Needs is “Deep Orchestration”!

If my speculation is correct and operations vendors do take the lead in NFV, what happens to all the grand plans the technology has spawned?  Remember that my numbers show the ROI on an OSS/BSS modernization to improve operations efficiency and service agility is much better than that of network modernization based on NFV.  Would we strand the network?  The answer may depend on what we could call “deep orchestration”.

“Orchestration” is the term that’s used today to describe the software-driven coordination of complex service processes.  In NFV, the term has been applied to the process of making VNF deployment and connection decisions.  The problem in the NFV sense is that a “service” includes a lot more stuff than VNFs, and in many cases operators tell me that some services they’d want to “orchestrate” might include no VNFs at all in some areas, or for some customers.

There is also an orchestration process associated with service-layer automation.  The TMF defines (in its Enhanced Telecommunications Operations Map) a complex model of processes associated with service offerings.  It even offered (in GB942, the business-layer stuff) a model that associates events with processes using CORBA, which qualifies as orchestration via a data model (the contract).  There were very few implementations of GB942, though.

The negligible commitment to data-model orchestration TMF style meant that NFV orchestration could have branched out to address the full spectrum of service and resource orchestration.  This approach has been supported at the product level by the six vendors I’ve been citing as those who could make the NFV business case (ADVA, Ciena, HPE, Huawei, Nokia, and Oracle after M&As are accounted for).  However, these six have not made enormous progress in actually building that uniform orchestration model.  Now Amdocs and Ericsson seem to be attacking orchestration at the service level, and because that could produce most of the benefits of NFV at a better ROI, these guys could end up not accelerating NFV but stalling it.

In theory we could see vendors at the OSS/BSS level actually add effective NFV orchestration to their model, meaning that they could extend their service-layer orchestration downward.  Amdocs has been looking for NFV test engineers, which suggests that they might want to do that.  However, the OSS/BSS guys are as hampered as the NFV vendors were by a single powerful force—organizational politics.  In an operator, the OSS/BSS stuff is run by the CIO and the network stuff by the Network Operations group, with the CTO group being the major sponsor of NFV.  That’s a lot of organizational chairs to align if you want an integrated approach.  So what could be done to save actual network modernization?

This is where “deep orchestration” comes in.  If there’s going to be service orchestration at a high level, through an OSS/BSS player, then at this point it will be difficult to convince CIOs to accept NFV-style orchestration of operations processes.  That means that getting CIO backing (and maybe CEO/CFO backing) will require tying into the OSS/BSS orchestration process in some way.

Right now, OSS/BSS systems manage devices.  The prevailing wisdom (??) for NFV is to make NFV look like a virtual device, which is why I call the approach the “virtual device model”.  The idea is that if you deploy virtual functions that mimic the behavior of devices, then you could present OSS/BSS systems with the virtual form of these devices and they’d be none the wiser.  This approach would work fine for both NFV and OSS/BSS so what’s wrong with it?

The answer is that it doesn’t promote NFV in any way.  Virtual devices accommodate NFV, they don’t facilitate it.  What we need to do, if we want NFV-driven modernization, is one of two things.  The first is for NFV vendors who can orchestrate operations processes to advance that notion aggressively and beat back the new efforts of the OSS/BSS players.  That, frankly, isn’t likely to happen because few of the NFV Super Six who can do full-spectrum orchestration have the credibility and connections to influence OSS/BSS decisions.  Those who do haven’t done it well up to now, and it’s unrealistic to think that’s going to change.  The second is to structure NFV orchestration to complement service orchestration.

Everything I’ve done on NFV (CloudNFV and ExperiaSphere) has recognized what operators have told me, which is that “services” and “resources” are two different domains, even politically.  The guiding principle of deep orchestration is recognizing that and providing a suitable boundary interface between the two that lets service orchestration have a more modern and (dare I say) intimate relationship with lower-level orchestration, including NFV.  But what’s different between deep orchestration and virtual devices?  The best place to start is what’s wrong with virtual devices.

The first problem with a virtual device model is that it represents a device.  In classic networks, you create services by coercing cooperative device behavior, but when devices are virtual they don’t have explicit behaviors.  Forgetting for a moment what happens as virtual is assigned to resources and becomes real, virtual devices limit operations because they are derived from appliances, which optimum new-age devices are not and should not be.

The virtual-device link to a real device creates the second problem, which is that of reflectivity.  If you have an issue in a real device you can assume it’s a real-device issue.  If it’s an issue in a virtual device you have to map the issue to a real-device MIB, which may not be easy if you’ve virtualized the resources and your firewalls now contain servers and IP networks.

The solution to the problem is to present not fixed devices but flexible abstractions to the OSS/BSS.  That means developing a more agile model than a “device”, a model that can represent an arbitrary set of features and connections and an arbitrary set of SLA properties.  One way to model this sort of thing is the “intent model” concept I’ve supported in other blogs, but it’s not the only way.  The key is to ensure that the OSS/BSS boundary with SDN and NFV be generalized so that old-network behaviors aren’t imposed on the new network.
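
As a minimal sketch of what such an abstraction might carry (the field names are illustrative, not any standard’s schema), the OSS/BSS would see features, connection points, and SLA targets, with no hint of whether a box, a VNF, or an SDN path sits underneath:

```python
# Sketch of the "flexible abstraction" idea: what the OSS/BSS sees is an
# intent-style object listing features, connection points, and SLA targets,
# with the realization hidden underneath.  Field names are illustrative only.

from dataclasses import dataclass, field

@dataclass
class IntentModel:
    name: str
    features: list                              # e.g. ["firewall", "sd-wan-endpoint"]
    connections: list                            # access/interconnect points
    sla: dict                                    # targets the abstraction commits to
    state: dict = field(default_factory=dict)    # status reported upward by the realization

    def report(self):
        """What crosses the service/resource boundary: SLA conformance, not device MIBs."""
        return {"name": self.name,
                "in_spec": all(self.state.get(k, v) <= v for k, v in self.sla.items())}

branch_service = IntentModel(
    name="branch-site-42",
    features=["firewall", "sd-wan-endpoint"],
    connections=["hq-vpn", "internet"],
    sla={"latency_ms": 40, "packet_loss_pct": 0.5},
)
branch_service.state = {"latency_ms": 22, "packet_loss_pct": 0.1}   # fed by whatever realizes it
print(branch_service.report())    # {'name': 'branch-site-42', 'in_spec': True}
```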

I still believe that the best approach to orchestration in the long run is to define a single modeling approach from the top to the bottom.  This would let service architects and resource architects build services downward from needs or upward from capabilities with the assurance that the operations and management practices could be integrated throughout.  The next-best thing would be to define a boundary point, which I think is the “service/resource” boundary that naturally fits between service (OSS/BSS/CIO) and network (NMS/NOC/COO) activities, and codify that in as general a way as possible.

Speed is also important.  Operators can realize about 80% of the agility and efficiency benefits of NFV simply by orchestrating service processes optimally.  That leaves a very small chunk of operations benefit to justify a network modernization, forcing you to define services with features not currently supported by network devices if you want to justify changing infrastructure.  We’ve not really worked out how those service/feature relationships would develop, and we don’t know whether regulators and lobbyists will cooperate.  I think this year is critical, because it will mark the point where operations takes the initiative unless true NFV vendors do something to recapture it, or at least ride the service orchestration wave.  Deep orchestration may be essential to NFV vendor survival.

The Real Story on SDN and NFV Security

There is probably no issue in technology that gets as much attention as security.  Nobody seems to think it’s good enough (and they’re probably right) which means that you can criticize nearly any technology, product, or vendor on the basis of security issues and get a lot of sympathy and attention.  So it is with SDN and NFV, both of which have been declared security black holes by various people.  The obvious question is “Are they?” and the answer is that it’s too early to say, so we’ll have to focus on potential here.

Security is a term nearly too broad to be useful, and a good starting point is to divide it into two categories—content security, which relates to malware, viruses, and things that people download accidentally and that then do them harm, and connection security, which relates to the ability of a network to connect those it’s supposed to and keep out everyone else.

Content security problems can arise from a number of sources, but the most prevalent by far is the Internet.  People go to the Internet for something, which could be information, content, software, whatever, and they get something bad instead.  Because content security is compromised by content from insecure sources and because the Internet is and will always be such a source, there’s not much that can be done in new SDN/NFV technology to deal with it.

Some vendors have pitched the idea that a smart software virtual function in the delivery path could on-the-fly explore content to bar the bad stuff, but that doesn’t seem practical.  There are already tools that can spot a malicious site, but it’s not realistic to assume we could detect malware on the fly from an ordinarily trusted source.  I got a contaminated link from a person I know just this week, but fortunately I’m a suspicious type.  The only contribution that network technology can bring to content security is keeping bad things off the network in the first place.

That leaves us with connection security, whose role of connecting the authorized and blocking everyone else has already been noted.  Connection security breaks down into three specific areas, each of which we’ll look at:

  1. Network admission control, meaning the ability to control who can join the community the network connects.
  2. Network interference prevention, meaning the ability to prevent anyone inside or outside the network from doing something that’s destructive to service overall.
  3. Network interception prevention, meaning the ability to prevent passive interception of traffic.

Network admission control is primarily a service management process in today’s networks.  When services are ordered or changed, they can be added to (or removed from) sites or, perhaps in some cases, users’ inventories.  If services permit plug-and-play joining, then this process could be automated and based on credentialing.  The risk SDN or NFV would bring in this category is the encouragement of self-service portals, which could make the admission-control process less secure.

Network interference prevention has two main dimensions.  One (the most commonly discussed) is denial-of-service prevention and the other is the decertification of maverick endpoints.  For other than public networks like the Internet or networks that share infrastructure, these tasks tend to converge and require both a determination mechanism (“finger-pointing”) and a cutoff mechanism.  SDN and NFV could impact this area positively or negatively.

Network interception prevention, if we step outside the area of intercepting physical media like the access line, is also a management and control task.  There are some SDN-or-NFV-facilitated activities like passive taps for testing that might be hijacked to create a new risk here.

Like all risks, security risks should be addressed incrementally.  If SDN or NFV have the same risk factors in some areas as we face, and accept, today then they aren’t changing the game in security.  If they improve or increase risk, then they are.  Let’s look at each technology based on that rule and see what falls out.

SDN, in the classic sense, is a centrally managed forwarding framework that eliminates adaptive routing based on inter-device communication.  If the central SDN controller is compromised, the entire network is compromised, but that’s true if network management is compromised in general.  It’s difficult, in my view, to argue that the centralization of forwarding control adds risk to the picture.

What would create a major security flap is the ability to intercept or hack the device-to-SDN-controller link.  SDN raises a point that’s going to be central to all of virtual networking, which is that there has to be a control path that is truly out-of-band to the data plane.  It’s not enough to just encrypt it because it could then still be attacked via denial of service.  You have to support an independent control plane, just like some old-timers will recall SS7 provided in voice.

On the other hand, central management that eliminates device topology and status exchanges makes it significantly more difficult for a device that’s been compromised to hijack the network.  Providing, that is, that the links between SDN domains for interconnect-related exchanges are secure.  That security would be easier to accomplish than securing networks from maverick devices—we’ve already seen that happen with false route advertisements.

The big point with SDN may be its ability to partition tenant networks into true ships in the night.  Centrally set forwarding rules create routes explicitly and change them explicitly.  The devices themselves can’t contaminate this process and because the tenant networks are “out of band” to the common control processes they can’t diddle with each other’s resources.  SDN networks could be significantly more secure.

On the NFV side, things are a lot more complicated.  What’s difficult about NFV is that what used to be locked up safely inside a box is now distributed across a shared resource pool.  Virtual functions that add up to a virtual device could be multi-hosted and interconnected, and this structure presents a two-dimensional risk.

The first dimension is the network connection among functions.  Just what network are the connections on?  If we presumed the ridiculous, which is that it was an open public network, then you’d be able to hack each functional component.  But suppose the network is simply the service data network itself?  You have still exposed individual functions to connections from the outside, which you now have to prevent.  Even if you can do that, you probably could not prevent denial-of-service attacks from the service data plane.

The second dimension is the vertical dimension between functions and their resources.  One of the complex issues of NFV is managing resources to secure an SLA, and the setup for doing that in the ETSI spec is a “VNF Manager”, at least a component of which might be assembled and run with the VNF.  Unless the VNF manager is expected to do its thing with no idea what the state of the assigned resources is, there is now a channel between a tenant software element (the VNF/VNFM) and a shared resource.  That could result in major security and stability issues.

Both these dimensions of risk could be exacerbated by the fact that NFV introduces the risk of agile malware that imbeds in the network.  A VNF, like a maverick device, could bugger a service.  It could also, if it has connection to either the service data plane or the shared resource MIBs, bugger the user’s applications and data or the shared resource pool.  Any flaw in a VNF, if it’s exploitable, could be a true disaster.

NFV could require a number of separate control planes or independent resource networks.  Every VNF, IMHO, should have a private IP subnet that its components live on.  In addition to that, you’ll need the service data plane for the user, and you’ll also need at least one management plane network for centralized control of the resource pool, and probably another for the service management links.  All of these will require virtualization along the lines of Google’s Andromeda, meaning that some elements will be represented in multiple virtual networks.
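
A simple way to picture that separation, with addressing and names purely illustrative, is as a membership map: every deployed element belongs to one or more virtual networks, and reachability exists only where memberships overlap:

```python
# Sketch of the plane-separation argument: every deployed element is a member
# of one or more virtual networks, and the private intra-VNF subnet, the user's
# service data plane, and the management planes never share an address space.
# All addressing and names are illustrative only.

PLANES = {
    "vnf42-private": "10.42.0.0/24",    # components of one VNF talk here only
    "service-data":  "customer VPN",    # what the user's traffic rides on
    "resource-mgmt": "172.16.0.0/16",   # centralized control of the resource pool
    "service-mgmt":  "172.17.0.0/16",   # per-service management links
}

MEMBERSHIP = {
    "vnf42-component-a": ["vnf42-private"],
    "vnf42-component-b": ["vnf42-private", "service-data"],   # the data-plane edge
    "vnf42-manager":     ["vnf42-private", "service-mgmt"],   # never resource-mgmt
    "nfvi-orchestrator": ["resource-mgmt", "service-mgmt"],
}

def reachable(a, b):
    """Two elements can exchange traffic only if they share a virtual network."""
    return bool(set(MEMBERSHIP[a]) & set(MEMBERSHIP[b]))

# A compromised VNF component should not be able to touch the resource pool:
print(reachable("vnf42-component-b", "nfvi-orchestrator"))   # False
print(reachable("vnf42-manager", "nfvi-orchestrator"))       # True, via service-mgmt only
```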

The common thread here is that the real security issues of both SDN and NFV are outside the traditional security framework, or should be, because they involve securing the control plane(s) that manage virtual resources and their relationships with applications and services.  We do have an incremental security risk in that area because we have incremental exposure issues.  We’re also exacerbating that risk by not addressing the real issues at all.  Instead we talk about SDN/NFV “risk” as though it’s an extension of the risks of the past.  It’s not, and the security strategies of the past cannot be extended to address the future.

There will be the same old service-security questions with SDN and NFV as with legacy networks, but the virtual security processes are different and we need to understand that it’s virtualization itself we’re securing.  Otherwise we’re going to open a true can of worms.

Is NFV Seeking a New Business-Case Leader?

You can’t have a market for next-gen tech without a business case for transformation.  As the famous saying from Project Mercury, the first step in the US space program, went: “No bucks, no Buck Rogers.”  The news out of MWC, recent LinkedIn posts, and other more direct metrics are all showing that vendors and operators alike are starting to realize this crucial point.  What’s interesting is that the focus of the NFV business case is a fairly narrow set of tools that we could call “full-spectrum orchestration”, while the costs and fruits of an NFV victory lie largely elsewhere.  That could create some interesting dynamics.

I’ve talked in past blogs about the nature of the NFV business case and the role that operations efficiency has to play in making it.  There are a half-dozen sub-categories inside what I’ve called “process opex” meaning the opex related to service/network processes and not to things like mobile roaming settlement, but all of them depend on applying a very high level of software automation to OSS/BSS/NMS processes, both today and as they evolve toward SDN/NFV.  This automation injection is literally that—it’s a fairly small quantity of very insightful software that organizes both operations/management processes and resource allocations.  You don’t rewrite all of OSS/BSS/NMS, and what you actually do doesn’t even have to be a part of either of these things, which is what’s so interesting.

The largest cost of NFV, and the largest pie vendors will divide, is the resources, what ETSI calls the NFV Infrastructure or NFVI.  The virtual network functions are a distant second.  My model says that NFVI will make up about 90% of total capex for NFV, with 8% from VNFs and 2% from that critical and central operations/management orchestration thing.  The core of NFV, the central management/orchestration stuff, wags a very large dog and yet represents where all the NFV proof points have to be developed and NFV resistance overcome.

Most vendors would love for somebody else to do the heavy NFV lifting as long as that someone wasn’t a direct competitor.  Resource-biased NFV is emerging, as I’ve noted recently, as vendors recognize that getting the big bucks can be as easy as riding a convenient orchestration coat-tail. The challenge is that NFVI without the rest of NFV isn’t going very far.

So the question for NFV today is whether the secret business sauce will come from a vendor or from an open activity, like OPEN-O or OSM.  That’s an important question for vendors because if there is no open solution for the orchestration part then an NFVI player is at the mercy of an orchestration player who has NFVI.  Such a player would surely not deliberately make a place for outsiders, and it’s likely that even operator efforts to define open NFV wouldn’t totally eradicate such a vendor’s home-court advantage.

We will probably have an open-source orchestration solution…eventually.  We probably won’t have one in 2016 and maybe not even in 2017.  That means all those hopeful NFVI vendors and VNF vendors will have to wait for something to coalesce out of the commercial space that can meet operator goals financially and not be so proprietary that they gag on their dinners.

The big hope for the NFVI faction is that there will be an orchestration-business-case giant who doesn’t sell NFVI.  While that might seem a faint hope at best, the fact is that of the six vendors who can currently make a business case for NFV, only one is clearly an NFVI incumbent (HPE).  All the rest are potential allies of any/many in the NFVI space, but so far none of these orchestration vendors has been willing to take on the incredibly difficult task of doing what’s needed functionally and at the same time wringing an open ecosystem from specifications that aren’t sufficient to assure one.

And then there’s the OSS/BSS side.  None of the OSS/BSS-specific vendors are among my six, but it’s obvious that both Amdocs and Ericsson are stepping up with an operations-centric solution to opex efficiency and service agility.  OSS/BSS vendors have good operator engagement, and new services and service operations start with OSS/BSS, after all.  With an NFV power vacuum developing, you could expect these guys to move.

Amdocs has been especially aggressive.  At MWC they announced some explicit NFV stuff, and they’ve been advertising for NFV test engineers.  The big news from them IMHO was their focus on “digital transformation”.  If you’re an operations vendor you gain little or nothing by tying yourself to an infrastructure trend.  You don’t sell the stuff.  What you want to do instead is what every vendor who doesn’t sell stuff in a new area wants to do—superset it.  You want to climb up above the technology, which tech vendors usually build and try to justify from the bottom up, and eat all the benefits before they get a chance to filter down.  Transformation is what operators have been trying to do for almost a decade, so revisiting the notion is appropriate.

It’s especially appropriate when the benefits that are needed to drive network transformation can be secured without much in the way of network investment, or even new infrastructure.  If you recall my blog on this, you can gain more cost savings by service-layer automation than by network modernization.  I’ve been working on the next-gen services numbers, and they also seem to show that most of the barriers lie above the network.  The ROI on service automation is phenomenal, and that’s both a blessing and a problem.

It’s a blessing if you’re Amdocs or maybe Ericsson, because you can absolutely control the way operators respond to their current profit squeeze.  No operator presented with a rational picture of next-gen services and operations efficiency driven by service automation from above would consider lower-level transformation until that service-automation process had been completed, at least not after their CIO got done beating on them.  Thus, an OSS/BSS vendor with some smarts could control network evolution.

Which makes it a curse to those who want a lot of evolving to be done, and quickly.  Remember those optimum, hypothetical, hundred thousand NFV data centers?  If you’re Intel or HPE or any platform vendor you want that full complement and you want it today.  Sitting on your hands while OSS/BSS vendors and operator CIOs contemplate the which-ness of why isn’t appealing.

What I think is most interesting about the picture of NFV and network transformation that’s emerged from MWC is that we seem to have lined up everyone who wants to benefit from NFV and sorted them out, but we’re still trying to enlist the players who can actually drive the deal.  I again want to refer to a prior blog, this one on the Telefonica Unica award.

The details aren’t out in the open, but it seems pretty clear that Telefonica found that NFV integration is a lot harder than expected.  Yes, this almost certainly comes out of a problem with the NFV ISG’s model, but whatever the source, the point is that fixing it now will either consume a lot of time (in a traditionally slow-moving open-source project) or require that a vendor offer an open solution even if it undermines their own participation in the NFVI money pit.

Who might that vendor be?  One of the “Super Six” who can make an NFV business case directly?  An OSS/BSS vendor who’s growing downward toward the network eating operations benefits along the way?  A new network or IT vendor who’s not a major player now?  Whoever it is, they have to resolve that paradox of effort and benefits, of orchestration and infrastructure, and they have to be willing to show operators that NFV can be done…from top to bottom.

What Cisco’s DNA Might Really Be

I hate to blog about vendors two days in a row, but it’s clear that significant stuff is happening at Cisco, who as a market leader in networking is also a major indicator (and driver) of industry trends.  Yesterday I talked about their cloud transformation and how it seemed to be hedging against network commoditization.  Today we have a chip vendor buy and their Digital Network Architecture announcement.  It all seems to me to tell the same story.

The chip deal was Cisco’s second M&A announcement this week.  This one involves Leaba Semiconductor, a company with a lot of skill in building big, complex communications chips.  There are a lot of engineers who tell me that this sort of technology would be essential in building a cheap “tunnel switch” and also in creating high-performance electrical-layer gear to groom optical paths.

If you believe that operators don’t want to buy expensive switches and routers any more, then there are only three possible reactions.  First, hunker down on your product line, push your salespeople to the point of hemorrhage, and hope.  Second, you could get out of the big expensive box business, perhaps into software instances of either.  Third, you could try to make the price/performance on your stuff a lot better.  My vote is that Cisco has picked Door Number Three here.

Actually the Leaba deal could position Cisco for the second option too.  I think the logical evolution of carrier networking is to virtual-wire underlayment as a means of simplifying L2/L3 almost to the point of invisibility.  While Cisco might not like that, the alternative of having somebody come along and do it to Cisco instead doesn’t seem attractive.

All of this stuff seems aimed at the network operator, and none of it really addresses another logical conclusion that you could draw about the network of the future.  If everything inside is commoditizing to virtual wires, then how do you sustain differentiation and margins even if you have great technology?  I’ve said all along that operations/management was the key and I think that’s true. So why isn’t Cisco pushing that?

Perhaps they are, with DNA, which in this context is the Digital Network Architecture I’ve already mentioned.  Cisco has a history of three-letter technology architectures of course, but DNA looks interesting for what it seems to be doing, which is to create a kind of higher-layer element that not only could easily be shifted to the operator side but even includes some operator-oriented technology already.

DNA’s principles could have been drafted for the carrier market.  There are five (remember when Cisco always had five phases—apparently that’s the magic marketing number) and they are (quoting the Cisco release) “Virtualize everything to give organizations freedom of choice to run any service anywhere, independent of the underlying platform – physical or virtual, on premise or in the cloud.  Designed for automation to make networks and services on those networks easy to deploy, manage and maintain – fundamentally changing the approach to network management.  Pervasive analytics to provide insights on the operation of the network, IT infrastructure and the business – information that only the network can provide.  Service management delivered from the cloud to unify policy and orchestration across the network – enabling the agility of cloud with the security and control of on premises solutions.  Open, extensible and programmable at every layer – Integrating Cisco and 3rd party technology, open API’s and a developer platform, to support a rich ecosystem of network-enabled applications.”

Why then push it out for the enterprise?  Well, to start with, Cisco can’t afford to be shilling one flavor of the future to the sellers of services in that future and another to the buyers.  If you’re going to try to do something transformational you need to reap every buck you can from the revolutionary upside, because as an incumbent you’re surely going to reap the downside.  But it’s also true that the carrier space is not where Cisco wants to lead the transformation to next-gen anything, because they have too much at stake there.  Enterprises offer Cisco a more controllable Petri dish.

It’s also true that the enterprise cares less about technology and more about results, which plays to Cisco’s evolutionary approach overall.  Enterprise NFV, for example, is about being able to host features/functions anywhere, meaning generally on Cisco devices.  It’s a kind of super-vCPE approach, and it wouldn’t work well in a carrier environment where you’d quickly run out of real estate.  For the enterprise it’s a good idea.

But the big value in an enterprise DNA push is that you can control the market by controlling the buyer.  Whatever Cisco can do to frame demand will force operators to consider Cisco when framing supply.  And by sticking a virtualization layer into the mix, Cisco can frame demand in such a way that it doesn’t force big changes (read big moves to trash existing Cisco gear) on the enterprise side.  Would we be surprised to find that same element in the supply-side version of DNA?

“Let’s go to bed,” said Sleepyhead.  “Let’s stay awhile,” said Slow.  “Put on the pot,” said Greedy Gut, “we’ll eat before we go!”  Networking has been in conflict among these goals for a decade now.  We have those who claim to want to move aggressively, but toward doing nothing different.  We have those who just want to be comfortable, and those who want to drain the current market before looking for new waterholes.  Cisco doesn’t want to be any more aggressive than its competitors—perhaps less aggressive, in fact.  Cisco also knows how vulnerable it is now, when everyone is trying to spend less on networking.  Some sort of transformation is essential, or we collapse into a hype black hole and commoditization.

Cisco’s remedy is cynical but useful nevertheless.  They have uncovered a basic truth, one that everybody probably knows and nobody talks about.  Virtualization has to start from the top because you can only abstract from top to bottom, not the other way around.  Further, once you abstract the top, what the bottom looks like becomes invisible.  Build an intent model of NaaS and equip it with the service features you want, then realize the model on your current equipment and adjust the realization at the pace of your own development.
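To make that concrete, here is a minimal sketch (in Python, with every name hypothetical and no relation to any actual DNA API) of what “abstract at the top, realize at the bottom” looks like: the intent model describes the service the buyer sees, and interchangeable “realizers” map that same intent onto legacy routers or an SDN overlay, so the bottom can change at whatever pace suits you.

    # Illustrative sketch only: a toy "intent model" for NaaS.  All names are
    # hypothetical.  The top layer states WHAT the service should be; pluggable
    # "realizers" decide HOW to map it onto whatever sits underneath.

    from dataclasses import dataclass
    from typing import Callable, Dict, List

    @dataclass
    class NaaSIntent:
        name: str             # e.g. "branch-to-dc-vpn"
        endpoints: List[str]  # abstract site names, not device interfaces
        bandwidth_mbps: int   # the SLA the buyer sees
        features: List[str]   # e.g. ["firewall", "encryption"]

    # Each realizer translates the same intent onto a different infrastructure.
    def realize_on_legacy_routers(intent: NaaSIntent) -> str:
        # In practice this would push CLI/NETCONF config; here we just describe it.
        return (f"Provision MPLS VPN '{intent.name}' across {intent.endpoints} "
                f"at {intent.bandwidth_mbps} Mbps using existing routers.")

    def realize_on_sdn_overlay(intent: NaaSIntent) -> str:
        return (f"Create overlay tunnels for '{intent.name}' via an SDN controller; "
                f"chain {intent.features} as hosted functions.")

    REALIZERS: Dict[str, Callable[[NaaSIntent], str]] = {
        "legacy": realize_on_legacy_routers,
        "sdn": realize_on_sdn_overlay,
    }

    if __name__ == "__main__":
        intent = NaaSIntent("branch-to-dc-vpn", ["branch-12", "dc-east"], 50,
                            ["firewall", "encryption"])
        # The buyer's view never changes; only the chosen realizer does.
        print(REALIZERS["legacy"](intent))
        print(REALIZERS["sdn"](intent))

The design point is that nothing in the intent references boxes, so swapping the realizer is invisible from above.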

DNA lets the sense of SDN and NFV work its way into networks at the enterprise level, and thus change both the nature of demand and, through it, the nature of supply—or so they hope.  That’s its strength.  Its weakness is that other vendors who really want to do something in either area can simply follow the Cisco path and accelerate the transition to NaaS as buyers come to recognize it.  Cisco is hoping this won’t happen and they might be right; it’s not like Cisco’s competitors have been accomplishing astonishing feats of positioning up to now.

Cisco is changing, under new leadership, from a company that denied the future to perhaps a company that’s determined to exploit the future as safely as possible.  That may not sound like much of a change, and it may not be, but if Cisco follows a top-down pathway to virtualization as earnestly as DNA suggests it might, and if it adds in some insightful cloud capabilities, it could be a real contender in both IT and networking, even if the future is as tumultuous as it might turn out to be.

What’s Behind Cisco’s Big Cloud-Management Buy?

Cisco’s acquisition of CliQr (I hate these fancy mixed-case names; they just make it harder for me to type and spell-check, so I won’t use the name from here forward!) raises a whole series of questions.  Foremost, at the industry-strategic level, is the matter of the value of the hybrid cloud and how that value might change IT.  At the vendor-competitive level, you have to wonder whether Cisco, instead of being the next IBM, is now focused on being the current one.  It’s possible they might succeed, and if they do it will say more about the industry than about either Cisco or IBM.

Rarely do earth-shaking changes in a market occur when buyer requirements are static and the benefit case driving purchasing is likewise.  We can see creeping trends but not revolutions.  On the other hand, a major shift in a paradigm will open a broad opportunity that didn’t exist before, and if some player can grab it, the results can be transformational.

Earth to marketplace: if Cisco spent over two hundred million dollars on buying what’s primarily a hybrid cloud management vendor, they don’t think the whole world is going over to public cloud.  Smart on Cisco’s part, because it’s not.  Even under the best of conditions, my model currently has the maximum share of IT spending represented by public cloud services not reaching 50%.  Since I agree with the hype (gasp!) that every enterprise and mid-sized business and about a third of small businesses will embrace public cloud, that means we are looking at a lot of hybrid cloud prospects.

At the 50% penetration level, and given that about half of that penetration will be in the form of pure-cloud, only-cloud applications, there is little risk to IT vendors that the cloud will eat their direct business sales.  On net, they’ll gain both hardware and software revenue in the transition, which of course impacts the industry and competitors at the same time.

Staying with the industry, a pure-cloud positioning could be risky for vendors if it’s cast as a vote for pure public cloud.  Enterprise IT has always been more against the cloud than for it, provided it’s public cloud we’re talking about.  Enterprise line management, who have often been for anything that seemed to cut through the labyrinthine IT bureaucracy and insufferable arrogance, are finding that IT from outsiders is even less responsive, and that somebody who’s delivering arrogance is at least delivering something.  Even line departments now think they’ll need their own data centers.

That doesn’t mean that populism in IT is dead, which is another reason why hybridization in general and cloud management in particular are important.  Line departments have tasted (illusory, to be sure) freedom and they aren’t going to give it all up.  Cloud management is important in hybrid models if you presume that line organizations are driving the bus.  If internal IT were driving, it would simply harmonize public and private in its own special (arcane) technical way.

If line organizations are going to run hybrid clouds, then internal IT processes will have to look more like a private cloud than like legacy applications with some cloud integration bolted on.  That has major implications for application vendors, users, network vendors, and just about everyone else.  We’re going to drive toward a model of true virtualization, where resources are resources no matter where they are.

This argues for building more cloud/networking tools into the OS than are there today, and doing so in a more agile way.  The PLUMgrid approach could be the secret sauce in this area: build out the OS to be more cloud-aware so that cloudiness isn’t an overlay that could be done inconsistently in public versus private cloud deployments.  It also argues for NaaS deployment coequal with cloud services, because you can’t keep today’s fixed-site network model when half of your spending is on virtual elements.

For vendors in general, the hybrid move is as important a guarantee of future revenue as you’ll see in this chaotic market, as I’ve already suggested.  That could relieve the stress of “bicameral marketing,” where vendors sell IT to enterprises as though the cloud were never happening while trying to sell to cloud providers as though nothing but the cloud matters.  Fifty-fifty is a fair split of opportunity, particularly when the cloud’s half is greenfield money and not stolen from CIO data center budgets.

That doesn’t mean every vendor is a winner, which is surely what’s behind Cisco’s move.  Cisco has no real data center incumbency; they’re primarily a network-linked server vendor.  That means they know they can step out quickly and safely while legacy IT vendors still have nagging worries about whether the cloud will hurt more than help, in the near term at least.  Cisco also knows that despite the fact that most of the cloud opportunity is new money, early cloud applications will evolve out of data center apps, and thus will demand more sophisticated integration.

Cloudbursting and failover are things that can be addressed as requirements now, even though they should in the end be automatic byproducts of resource independence.  That’s a horse Cisco can ride, because current applications aren’t agile in a resource sense, and a management system can go a long way toward making them cloud-ready.  Cisco, because they don’t have a big stake in the current IT paradigm, can focus on facilitating the transformation to the new one while incumbents in the data center of the past have to be more circumspect.
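As an illustration of the kind of help a management system could provide, here is a deliberately trivial sketch of a threshold-based cloudburst loop.  Everything in it is a hypothetical stand-in (the callables, the thresholds), not anyone’s actual product; the point is only that the bursting decision can live outside the application itself.

    # Toy sketch: management-layer logic that adds cloudburst/failover behavior
    # on top of applications that aren't resource-agile themselves.  The names
    # get_load, start_public_instance and stop_public_instance are hypothetical.

    BURST_UP = 0.80    # add a public-cloud copy above 80% utilization
    BURST_DOWN = 0.40  # retire it when the data center falls back below 40%

    def manage_capacity(get_load, start_public_instance, stop_public_instance,
                        bursted: bool) -> bool:
        """One control-loop pass; returns the new 'bursted' state."""
        load = get_load()  # fraction of data-center capacity in use
        if load > BURST_UP and not bursted:
            start_public_instance()   # cloudburst: add capacity outside
            return True
        if load < BURST_DOWN and bursted:
            stop_public_instance()    # fail back / scale in
            return False
        return bursted

    # Example pass with stubbed-in callables:
    state = manage_capacity(lambda: 0.9, lambda: print("burst out"),
                            lambda: print("scale in"), bursted=False)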

This is nice for Cisco right now.  It doesn’t stay nice.

Let’s say that hybrid cloud becomes the rule, and that it absorbs that 50% share of IT spending.  We have today a set of fixed sites linked to a set of fixed data centers.  We’re in a position to sell private networking to businesses because there’s a rigid IT structure that justifies those switches and routers.  What happens when there is no real focus of traffic because there’s no real focus of IT?  Gradually, in the world Cisco is betting on, private networking diminishes everywhere and in the WAN it is subducted into a completely virtual network vision.  WAN services transform to agile NaaS.

A completely agile NaaS is pretty hard to differentiate at the device level.  If white boxes have a future outside the data center, this is where that future would have to be.  And inside the data center, without any fixed LAN-to-WAN relationship to play on, there’s no reason to think white boxes couldn’t sweep that segment too.  At the very least, commoditization of networking seems the outcome.

So is Cisco stupid?  Most incumbents are, if you take the long view, but here we have to admit to another possibility.  If network commoditization is inevitable, then there’s no point worrying about what drives it.  The key is to get yourself positioned in an area where commoditization won’t happen, where differentiation remains.  Where problems need a combination of a new solution and a trusted vendor.  Where you can acquire somebody at a very rich price because you have a lot at stake.  Ring any bells?  I think it does.

What MWC Contributed Overall to the Sense of NFV

MWC generated a lot of ink, and some of the developments reported by Light Reading, SDx Central, or both create some nice jumping-off points for comments.  You’ll probably not be surprised that I have a different spin on many of the things I’ve chosen; I hope we can gain some overall sense of where things are going with NFV.

One thing that struck me was the continued tendency of the media to talk about an “NFV architecture” or “NFV strategy” even when what’s being discussed is at best just a piece of NFV functionality and at worst isn’t really any piece at all.  It’s frustrating, because I think operators who are trying to get the range of NFV are hampered by the lack of any organized placement of the offerings.

One example of a questionable use of the terms is the string of OpenStack-related announcements.  There is no question that OpenStack will dominate the deployment of VNFs, but OpenStack is not orchestration; it’s part of the Virtualized Infrastructure Manager (VIM).  So are any DevOps tools linked to NFV deployment, and so (IMHO) is YANG.  Because the ETSI ISG’s description of what the NFV Orchestrator does is rather vague (and seems to overlap with its descriptions of the VIM and the VNF Manager), and because deployment is orchestration at one level, we seem to be conflating OpenStack with orchestration and then with MANO.  That’s not good.
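To show where the line falls, here is a small sketch of the division of labor, assuming the openstacksdk Python library and a configured clouds.yaml; the cloud, image, flavor, and network names are all made up.  The orchestration-level decisions (which VNFs a service needs and in what order) are ordinary logic that OpenStack does not supply; OpenStack, acting as part of the VIM, only answers the narrow request to host a virtual machine.

    # Sketch of orchestration vs. VIM.  Requires openstacksdk; all resource
    # names are hypothetical and the lookups assume the artifacts already exist.

    import openstack

    def vim_deploy_vnf(conn, vnf_name, image_name, flavor_name, net_name):
        """The VIM-facing step: ask OpenStack to host one VNF instance."""
        image = conn.compute.find_image(image_name)
        flavor = conn.compute.find_flavor(flavor_name)
        network = conn.network.find_network(net_name)
        server = conn.compute.create_server(
            name=vnf_name, image_id=image.id, flavor_id=flavor.id,
            networks=[{"uuid": network.id}])
        return conn.compute.wait_for_server(server)

    def orchestrate_service(conn, service_descriptor):
        """The orchestration-level step: decide what gets deployed and in what
        order.  This logic is NOT something OpenStack provides."""
        for vnf in service_descriptor["vnfs"]:
            vim_deploy_vnf(conn, vnf["name"], vnf["image"], vnf["flavor"],
                           service_descriptor["network"])

    if __name__ == "__main__":
        conn = openstack.connect(cloud="hypothetical-cloud")
        orchestrate_service(conn, {
            "network": "vnf-net",
            "vnfs": [{"name": "vFirewall-1", "image": "vfw-image",
                      "flavor": "m1.small"},
                     {"name": "vRouter-1", "image": "vrtr-image",
                      "flavor": "m1.medium"}],
        })

Everything above vim_deploy_vnf is where the real MANO questions live; calling it “orchestration” just because OpenStack is underneath is the conflation I’m complaining about.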

On the NFV Infrastructure side we have a bit more logical positioning to report.  I noted last week that we’d seen the beginning of a separation of NFVI from the rest of NFV, because the hosting part of NFV is where most of the capital dollars will be spent.  One news item that expands on my point was the announcement by Red Hat and Amdocs that the Amdocs Cloud Service Orchestrator has been integrated with the Red Hat Linux stack.  It’s not rocket science to make a Linux app run on a specific version of Linux, but this shows that Red Hat wants a role in NFV’s success should there be a big adoption wave.  Amdocs is interesting as a partner because they’re really more of a player on the service side of the NFV story.  Were they unusually interested in getting some ink of their own, or does Red Hat’s move indicate that it thinks service integration and orchestration will be big?

Another example is the Telecom Infra Project spin-out from the Open Compute Project.  OCP is an initiative started by Facebook to drive hosting costs down by using a standard (commodity) design.  Facebook has blown some specific kisses at telecom, and certainly they would benefit if telecom costs were to be reduced to the point where it would drive down service prices, thus facilitating more Facebook customers.  I don’t think NFV in any form is likely to have an impact on consumer broadband pricing, however; certainly not in the near term.  This move could have an impact on NFV hardware vendors like Dell and HP, and since Intel is involved in the project it could be another data point on the path toward an Intel-driven attempt to get those optimum 100 thousand data centers deployed.  You can bet Intel chips will still be inside.

The cost angle raises the point that Orange has indicated some dissatisfaction with the cost-based justification for NFV.  My own contacts seem to think that the issue is not so much that cost-based justification doesn’t work as that capex reduction as a driver won’t work.  Opex reduction, say my contacts, is still much in favor at Orange (and everywhere else in the telco world), and most operators believe that the same service automation capabilities that would generate opex reduction would also be necessary to facilitate new services.  That’s what Orange is said to favor over cost reduction, and if the same tools do both opex and services, it makes sense to presume this is yet another repudiation of the simplistic save-on-capex slant on NFV justification.  That approach has been out of favor for two years.

The opex and services angle might be why we had two different open-source projects for NFV announced—OPEN-O and Open Source MANO (OSM).  Interest in the former is high in China (and of course among the big telco vendors), and interest in the latter is high in Europe.  While OSM demonstrated at the show, it’s not clear to me just what their approach looks like in detail, because their material doesn’t describe it at a low level.  OPEN-O has even less to show at this point than OSM.  The fact that there are two initiatives might be an indication that China wants to pursue its own specific needs, but it might also mean that nobody likes where we are and nobody is sure the other guy can be trusted to move off the dime.

Even given the mobile focus of MWC, there still seemed to be a lot of noise related to service chaining and services that use the concept.  In the main, service chaining is useful in virtual CPE applications where a virtual function set is replacing a set of appliances normally connected at the demarcation point, in series.  I have no doubt that service chaining and virtual CPE are good for managed service providers targeting businesses, but I’m not convinced that they have legs beyond that narrow market niche.  Unlike mobile infrastructure, vCPE doesn’t seem to pull through the kind of distributed resource pool that you need for NFV to be broadly successful.
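For readers who haven’t lived with the concept, here is a toy illustration (all functions and packets are made-up stand-ins, not any real VNF framework) of why service chaining maps so naturally to vCPE: the chain is just an ordered series of functions sitting where the old appliances sat, in series, at the demarcation point.

    # Toy vCPE service chain: order matters, exactly as it did with boxes in series.
    # Functions and "packets" are illustrative stand-ins only.

    def firewall(packet):
        # Drop anything flagged as blocked; otherwise pass it along.
        return None if packet.get("blocked") else packet

    def nat(packet):
        packet["src"] = "public-ip"
        return packet

    def wan_optimizer(packet):
        packet["compressed"] = True
        return packet

    VCPE_CHAIN = [firewall, nat, wan_optimizer]

    def traverse_chain(packet, chain=VCPE_CHAIN):
        for vnf in chain:
            packet = vnf(packet)
            if packet is None:      # a VNF dropped the traffic
                return None
        return packet

    if __name__ == "__main__":
        print(traverse_chain({"src": "10.0.0.5", "dst": "example.com"}))
        print(traverse_chain({"src": "10.0.0.6", "blocked": True}))

Note that nothing here demands a big distributed resource pool; the whole chain could sit in one box at the customer edge, which is exactly why vCPE alone doesn’t pull through broad NFV infrastructure.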

There was a lot of skepticism about whether the NFV focus on 5G was smart.  It’s not that people don’t believe NFV would be good for (and perhaps even necessary for) 5G success, but that 5G seems a long way off, and by the time it arrives NFV might have already succeeded (and not need 5G pull-through) or failed (and wouldn’t be helped then by 5G).  I think that operators are anxious to see how 5G would impact metro mobile backhaul, and they need to know that pretty quickly because they are already committed to upgrades there to support WiFi offload and additional 4G cells.  This may be a case where a little hype-driven forward thinking actually helps planners unload future risks.

It seems, when you look at the news out of MWC overall, that NFV is kind of splitting down the middle in terms of vendor positioning and operator comments.  On one hand, operators and some vendors are getting more realistic about what has to be done to make NFV a success.  That’s resulting in more realistic product positioning.  On the other hand, this industry hasn’t had much regard for facts or truth for years.  Technical media isn’t too far from national media, where coverage of nasty comments always plays better than coverage of issues.  Excitement is still driving the editorial bus, and that means that vendors who overposition their assets are still more likely to get ink than those who tell the truth.

The thing is that you can only argue about what UFOs are until one lands in the yard and presents itself for inspection.  We’re narrowing down the possible scenarios for NFV utility even as we’re broadening the number of false NFV claims.  I think that by the end of this year, we’ll start to see a narrowing of the field and a sense of real mission…finally.