Building a Technology and Regulatory Model for SDN/NFV “New Services”

Suppose that network operators and vendors accepted the notion that profitable “new” operator services had to be built in a network-integrated way, like IMS/EPC or CDN?  What would the framework of these services then look like?  How would they have to be offered and used?  It’s time to dig deeper into a network-operator-centric view of services, something not so much “over the top” as on the top—of the network.

We have to start this from a regulatory perspective because that frames how operators have to consume technology and price and offer services.  Right now, regulators tend to see services as “Internet” and “not-Internet”, with very different rules applying to the two.  The most significant truth is that Internet services are bill-and-keep and pretty much everything else is subject to normal global settlement practices—everyone in the service food chain shares the retail pie.  There are two factors that complicate this division—one favorably and the other unfavorably.

The unfavorable one is that regulators tend to warn operators against creating new services outside the Internet that offer Internet-like capabilities.  In the US, the FCC has warned explicitly against that behavior, and in Europe regulators have also signaled their concern in this area.  That means that it’s not feasible to think of a new service as being an outside-the-Internet application of IP technology.  If it can be done over the Internet and is offered to consumers, then it’s probably close enough to being an Internet service that separating it based on the mission and expecting no net neutrality rules to apply is a dream.

The favorable thing is that regulators have accepted some aspects of Internet service where separation of features from neutrality regulation is already in place.  Service control planes such as those offered in IMS/EPC are one example, and content delivery networks (CDNs) are perhaps a better one.  Most traffic delivered to Internet users is content, and most content delivered is handled by a CDN.  CDN providers (Akamai, for example) are paid for their service by content owners and sometimes by ISPs, so we have a model in CDNs where a network service feature is paid for by someone other than the Internet user who consumes the service the feature is a part of.  That, to me, establishes the CDN as the model for what a new service feature has to look like.

A CDN is a combination of a control-plane behavior (customized decoding of a URL) and a cache, which is a hosting/server behavior.  Think of a CDN as being a set of microservices.  One is used to get an IP address that represents a content element in a logical sense, not a resource that’s tied to a specific IP address, server, etc.  The other is simply a delivery engine for content elements.
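
The two-microservice view of a CDN can be sketched in a few lines.  This is purely conceptual Python, not any real CDN's API; the names (`resolve_content`, `deliver`, `CACHES`) and the "nearness" policy are my own illustrative assumptions.

```python
# Conceptual sketch of the two CDN "microservices": a control-plane
# resolver that maps a logical content URL to a concrete cache, and a
# data-plane delivery engine that just serves the bytes.

CACHES = {
    "cache-east": {"location": "nyc", "contents": {"/movies/alpha"}},
    "cache-west": {"location": "sfo", "contents": {"/movies/alpha", "/movies/beta"}},
}

def resolve_content(url_path, user_location):
    """Control-plane behavior: resolve a logical content element to a
    cache, preferring one near the requesting user."""
    candidates = [name for name, c in CACHES.items() if url_path in c["contents"]]
    if not candidates:
        return None
    # Trivial nearness policy: an exact location match wins, else first hit.
    for name in candidates:
        if CACHES[name]["location"] == user_location:
            return name
    return candidates[0]

def deliver(cache_name, url_path):
    """Data-plane behavior: the cache simply serves the content element."""
    if url_path in CACHES[cache_name]["contents"]:
        return f"content bytes for {url_path} from {cache_name}"
    return None
```

The key property is that the user asks for a content element, never for `cache-east` or `cache-west` directly; the binding to a resource is the control plane's business.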

Let’s now look at how this approach could be applied to a new opportunity, like IoT.  For the example, let’s assume you’re in a vehicle driving through a city and you want to get traffic status/guidance.  The CDN model says that you might make an information request (the content URL click analogy) for traffic data ahead of you (the cache analogy).  The control-plane element has to know where “ahead of you” is, and from that knowledge and perhaps some policies on how distant “ahead” might be (just because you’re heading north doesn’t mean you want to look at traffic in Greenland!) associate your request with the proper resource.
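
The "ahead of you, but not Greenland" logic amounts to a policy-bounded lookup.  Here's a minimal sketch under invented assumptions: traffic sensors keyed by mileage along a route, and a lookahead policy expressed as a distance cap.

```python
# Hypothetical control-plane logic for the driving example: select the
# traffic-data resources "ahead" of the vehicle, bounded by a policy on
# how far "ahead" may reach.  All names and figures are illustrative.

# Traffic-data resources keyed by position (miles along the route).
TRAFFIC_SENSORS = {5: "sensor-A", 12: "sensor-B", 40: "sensor-C", 300: "sensor-D"}

def resources_ahead(position, heading_positive=True, max_lookahead=50):
    """Return the sensors ahead of the vehicle, limited by policy so a
    northbound driver isn't shown traffic hundreds of miles away."""
    ahead = []
    for mile, sensor in sorted(TRAFFIC_SENSORS.items()):
        delta = mile - position if heading_positive else position - mile
        if 0 < delta <= max_lookahead:
            ahead.append(sensor)
    return ahead
```

A driver at mile 10 heading "up" the route would be associated with sensors B and C; sensor D, 290 miles away, is excluded by policy even though it's technically ahead.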

The approach here, both for CDN and my IoT driving example, can be described as “I know what I want and don’t know where it is”, which is a request for a service and not a resource.  That’s the critical point in a regulatory sense.  Resources are on the Internet; services are integrated with it.  It’s my view that explicit following of that model would be the pathway to solving the regulatory problems with advanced services.

Let’s look at the technical framework here.  We have two components to deal with—a kind of service control plane and a set of feature microservices that are used to compose overall service behaviors behind the scenes.  These two pieces are critical, I think, not only for positioning our network evolution and transformation in a regulatory sense, but also for framing what SDN and NFV should evolve to support.

Both SDN and NFV advocates push their wares in the context of current services; we create “new” services only in the sense that we use new mechanisms.  In a future of “on top” services we use new mechanisms but to support new service behaviors, behaviors that are more agile because they’re signaled dynamically.  In this model, we justify SDN and NFV to support that agility, to enable the two-tier control-and-feature-plane model.

This is not to say that we wouldn’t use SDN and NFV for current services, or couldn’t.  The goal would be to frame those services in the on-top model described here.  In short, what we’d be doing is creating a framework for services, a kind of next-gen architecture, that harmonizes with regulatory reality and at the same time builds a model for both current and next-gen services.

The service signaling part of this model can be based on the CDN model and/or on what I’ll call the “IMS model” meaning the model used in mobile networks for subscriber and mobility management.  What that means is that the signaling plane of a new network could be reached either because it was extended explicitly to the subscriber (phone registration, dialing, movement) or because it was “gatewayed” from the customer data plane (clicking a media URL).

The hosted feature plane of the model would differ from Internet models in that it would not be addressable on the customer data plane level in a direct way.  You can’t pick your own CDN cache point.  Instead, the service feature would be connected/delivered through a gateway, which we could visualize as a kind of virtual microservice.

Let’s look at IoT in this context.  We presume that there is a “traffic” service that is represented on the Internet.  The service offers users the ability to either assess a current route or to optimize a path to a destination from a specified (or current) location.  Our hypothetical driver would exercise this service to see what’s happening ahead, by clicking on what would look something like a content URL.  The service request would be gated into the signaling plane, where a suitable route analyzer can be found.  This gating process could involve access to the customer records, a billing event, etc.

The route analyzer would create a response to the request and return it to the service in the form of the results of the click, just like a CDN does, and the result is then available to either a mobile app or a web page display.  Any of the data paths (except to the customer via the Internet service) could be prioritized, built with SDN.  Any feature could be deployed with NFV.  So we have blended IMS and CDN principles into a single model, used our new network technologies, and created something that could dodge neutrality problems.

“Could”, because if operators or others were to adopt this model the wrong way (letting priority pathways bleed onto the customer data plane) they’d be at risk in at least some major regulatory jurisdictions.  You can’t get greedy here and try to re-engineer the Internet without bill-and-keep and with settlement for traffic handling.

The point of all of this is to demonstrate two things.  First, we can make “new services” work even with neutrality/regulatory barriers in place.  Second, we have to think about these new services differently to get us there.


SDN and NFV Pass the Torch to CORD

The interest being shown in ONOS’s CORD architecture (see this Light Reading piece) isn’t a surprise to me (see my own blog on it here) but it’s an indication that CORD might be even more influential because of a singular fact—integration and “packaged solutions” are much more in CORD’s DNA.  I don’t agree that it’s a simple cloud CO in a box, but it’s much closer to that than even most proprietary strategies would offer.  That could be important in popularizing cloud support of operator transformation.  I referenced my prior blog for those who’d like a digest of what CORD and related technologies (ONOS and XOS) are, so I can jump off from a technology summary without repeating it.

Redefining the CO in cloud terms, meaning applying virtualization principles to its infrastructure and services, is a useful way of positioning network evolution.  Adopting SDN or NFV might sound exciting, but for network operators you have to look at the end game, meaning what you’d end up with.  Enterprises have built their IT architectures from data centers, and operators have built their networks from “serving offices”, the most common of which are the central offices that form the operators’ edge.  That’s a big factor in CORD’s acceptance, but we’re starting to see another factor.

SDN and NFV are both incomplete architectures, meaning that they don’t define enough of the architectural framework to cement their own benefit case.  In fact, neither really defines enough of the management framework in which they’d have to operate to make operators comfortable with SLA management, and NFV doesn’t define the execution framework for virtual functions either.

The fashionable thing to worry about in that situation is that you’d end up with a “pre-standard” implementation.  In the real world of SDN and NFV the real risk is that you end up in a black hole of integration.  There are too many players and pieces to be fitted, and the chances of them forming a useful infrastructure that makes the benefit case are near zero unless somebody jiggles all the pieces till they fall into place.  That’s what CORD proposes to do, at least in part.

A cloud-adapted CO is by definition integrated; COs are facilities after all.  ONOS and CORD have made integrated structures an explicit goal, while the SDN and NFV standards groups have failed at integration almost entirely.  However, wishing won’t make it so.  CORD may make integration explicitly a goal, but that doesn’t get it realized, it only focuses people on it.  The first question now is how long it will take for that focus to mean something.  Not the last question, though.  We’ll get to that one in time.

What I’ve called VNFPaaS, the execution platform for VNFs that NFV needs desperately, is logically within CORD scope but CORD isn’t there yet.  It also needs to deliver the details in the implementation of the Infrastructure Manager intent-model concept that’s critical to resource independence.  Again, it’s an element that’s in-scope, which is more than we can say for SDN and NFV.

What might be helpful in getting to the right place is vendor interest in CORD as a way of packaging their own solutions.  Ciena’s promise of a turnkey CORD implementation is particularly interesting given that Ciena is one of the vendors with all the pieces needed to make an SDN/NFV business case.  Ciena alone could make a difference for SDN and NFV, even if its other five business-case-ready competitors don’t jump on CORD (which they should).

This is where the “how long…” question comes in, though.  Another Light Reading article illustrates the growing cynicism of operators.  Too much NFV and SDN hype for too long has created expectations in the CIO and CEO offices that technologists have not been able to meet.  At one level the cynics are right; both technologies have been mercilessly hyped and the hypothetical (dare we say!) potential has little chance of being met in the real world.  However, the success of something has to be measured, here as always, against the business case and not against conformance to fables full of sound and fury (as Shakespeare said, and perhaps tales told by idiots is also a fair parallel).  Have we so poisoned the well that it no longer matters whether we can make a business case because too much is expected?

That’s the second question I promised to get to.  Will SDN or NFV make the operators into OTTs?  That question is asked by the second Light Reading piece, but it’s not the right one.  Neither SDN nor NFV is needed to do that.  Anyone can be an OTT.  What’s hard is being a network operator.

Let’s forget the OTT goal and focus on reality.  Operators cannot leave their space or there’s no Internet to be on top of.  Operators cannot be profitable above the network they’re losing money in, while competing with others who have no such boat-anchor on their profits.  Google won’t be a network operator; why would they?  They’ll try to scare the media and operators into thinking they might, but they won’t.  So operators are what we have left to carry the water.

SDN and NFV are not about making operators into OTTs, they’re about making networks into something that, if it’s not profitable, is at least not a boat anchor.  What’s needed now is a transformation to improve profitability of network services.  A lot of that has to be cost management, opex efficiency.  Some can also come from redefining “services” to include higher-layer features (like IMS/EPC and CDN).  Very little will come from new models of selling connection services, which is why it’s fruitless to try to change connection technology without changing connection operations economics.  If it’s not cheaper, there’s not much in connection services that buyers value.

This brings us to the “best of CORD” because if we can’t create a service ecosystem into which optimized pieces can be introduced cheaply, nothing good is going to come out of either SDN or NFV.  The right way to do both, top-down, was not adopted and it’s clear that rising to the top of the problem is beyond both the ONF and the ETSI ISG.  All OPNFV has managed to do is create a platform for NFV to run on, without any feature value to make the business case.  ONOS is at least heading in the right direction, and every vendor and operator backer takes us closer to the point where we reach a critical mass of utility.

And yet we are not there at this point, and we’re running out of time.  SDN and NFV can still be redeemed—I still believe that we can extract significant value from both—but no technology is useful if it reaches its potential when its buyers have moved on.  The lesson of the two Light Reading pieces is that the buyers are moving on, reluctantly.  To CORD for now, which is at least SDN and NFV compatible.  Eventually, if CORD can’t harness the business value of transformed data centers, to something else.  In that case we will have spent a lot of time and money on nothing, and wasted an enormous opportunity.

The Strategy Behind SDN and NFV “Lite”

One of the questions that operators are asking at the end of the first quarter is “Just how much real SDN and NFV do we need?”  I pointed out in prior blogs that if you were to do a successful OSS/BSS modernization you could achieve more of the service agility and operations efficiency benefits than you’d get with infrastructure modernization.  The majority of operators are looking at this same sort of benefit-targeted evolution, even if most haven’t accepted the notion that it’s really an OSS/BSS shift.  What the majority are looking at is what we could call the “lite” version of SDN and NFV.

SDN in a formalistic sense is based on the substitution of central control of routes for the usual adaptive control found in today’s switch or router networks.  While it would be theoretically possible to simply make current networks work with OpenFlow, the benefits claimed for SDN rely on using cheaper white-box devices.  To transform infrastructure to this model would obviously involve a lot of money and risk, so operators have been looking for a different way to “software-define” their networks.
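
The central-control substitution can be made concrete with a toy sketch: the controller holds a global topology view, computes the route itself, and pushes next-hop forwarding entries to each device, which no longer runs an adaptive routing protocol.  This is conceptual Python, not actual OpenFlow messaging; the topology and function names are invented for illustration.

```python
# Conceptual sketch of SDN's central route control: the controller
# computes paths over its global view and installs per-switch
# forwarding entries, replacing per-device adaptive routing.
import heapq

TOPOLOGY = {  # switch -> {neighbor: link cost}, as the controller sees it
    "s1": {"s2": 1, "s3": 4},
    "s2": {"s1": 1, "s3": 1},
    "s3": {"s1": 4, "s2": 1},
}

def shortest_path(src, dst):
    """Dijkstra over the controller's global topology view."""
    heap = [(0, src, [src])]
    seen = set()
    while heap:
        cost, node, path = heapq.heappop(heap)
        if node == dst:
            return path
        if node in seen:
            continue
        seen.add(node)
        for nbr, w in TOPOLOGY[node].items():
            heapq.heappush(heap, (cost + w, nbr, path + [nbr]))
    return None

def install_route(src, dst):
    """Push a next-hop 'flow rule' to every switch along the path."""
    tables = {}
    path = shortest_path(src, dst)
    for here, nxt in zip(path, path[1:]):
        tables[here] = {dst: nxt}  # traffic for dst -> forward toward nxt
    return tables
```

The point of the sketch is the division of labor: all route intelligence lives in one place, and the "white boxes" need only match-and-forward, which is where the claimed hardware savings come from.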

NFV is in the same boat.  What the original Call for Action white paper (October 2012) envisioned was the replacement of fixed appliances with pool-hosted software instances of the equivalent features.  This is another infrastructure modernization, in short, and the targeted cost savings is highly dependent on achieving good economy of scale in that resource pool.  That means mucho resources, and a correspondingly high level of transitional risk.

You’ve all probably read the stories about SDN and NFV adoption, and you’d be justified in thinking these technologies were really taking off.  But in nearly all the cases, what’s taking off is a much narrower approach that exercises less of what we’d think of as the foundation standards.  Hence, SDN or NFV Lite.

The principle behind all these deployment models is to go after agility and, to a lesser degree, self-care.  Give the users a portal to order services that don’t require running a new access path, and then provide the minimal facilities needed to deploy these services using as much of current infrastructure as possible.   The goal is to bring a change to the customer experience itself, cheaply and quickly.

This may sound like my “operations first” model, but it’s actually very different.  Many of the operators have grabbed products that adapt current management capabilities to portal usage rather than even considering broader changes to OSS/BSS.  One operator who did this told me “We’re trying to be agile at the business level here, and our operations processes are nowhere near agile.”  What they’re ending up with is actually more OTT-like.

I think this is a current reflection of a trend that I’ve encountered as far back as three years ago, when operators’ own organizations were dividing over whether to modernize operations or redo it.  They seem to be settling on a creeping-commitment version of the latter, where they nibble at the edges of operations practices using customer portal technology and management adapters.

On the NFV side, we can see this same trend in a slightly different form.  You take a standard device that has a board or two that lets it host features, and you use the vendor’s management tools to maintain the features by controlling what’s loaded onto the board.  All of this can be driven by (you guessed it!) a customer portal.

All of this has mixed implications for SDN and NFV.  We’re taking steps that are labeled with the standards’ names, but not in the way the standards envisioned.  So will these early steps then lead (as the practices mature) to “real” versions of SDN and NFV or will they end up creating silos?  That’s an important question and one that’s very hard to answer right now.  There are two negatives to this Lite movement.  One is technical dilution and dispersion, and the other is low-apple benefit strangulation.

Obviously, a non-standard accommodation to SDN or NFV principles could very easily evolve into something that couldn’t be extended either functionally or in terms of open vendor participation.  A silo, in short, and a risk to both SDN and NFV.  For SDN, providing agile management interfaces could easily mean lock-in to a specific vendor.  Operators already fear Cisco is trying this with their Application-Centric Infrastructure (ACI) model.  The problem is that the SDN and NFV specifications have focused on the bottom of the problem, control of new devices and deployment of virtual functions, when the real issues are higher up.  These Lite models thus live above the standards, in the wild west of modernization.

The second problem may be the most significant, though.  You can get to the future from the present along a variety of paths, each of which has a kind of “benefit surface” that shows how it will impact cost and ROI over time.  These Lite strategies are appealing because they grab a big chunk of benefits on the table at a much lower cost than a full, standards-compliant, solution.  They also open a potentially interesting competitive dynamic.

The Lite model of SDN and NFV creates a portal-to-management pathway through which a lot of new services could be created.  This could potentially augment, or bypass, both service-level and resource-level orchestration.  That means operations vendors could jump on the approach to enhance OSS/BSS participation in transformation, or others could use it to minimize the need for operations participation in transformation.  Same with the current and “true” SDN and NFV vendors—they could either build their top-end processes based on a portal notion, or other vendors could use portals to offer NFV benefits without NFV standards.

The most interesting notion here might be the potential to use portals as the front-end of both SDN and NFV, even to the extent of requiring that OSS/BSS systems interface through them.  That would allow vendors of any sort to incorporate both service- and resource-level orchestration into their offerings and present “virtual services” to operations systems instead of virtual devices.  From the bottom, it would mean that SDN and NFV would be low-level deployment elements, with no responsibility for organizing multipart models.

A service and a resource are both, IMHO, model hierarchies, and so are multi-part.  That’s why you have to orchestrate in the first place—you have many players in a service or many parts to a resource deployment.  The Lite strategy could make orchestration into an independent product space, one that does all the modeling and orchestration and uses SDN, NFV, and OSS/BSS only for the bottom-level or business-level stuff.
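
The model-hierarchy idea can be sketched as a recursive decomposition: a service is a tree of parts, and orchestration is just a walk of that tree that hands each leaf to whichever low-level technology owns it.  The model content and handler names below are invented for illustration, not any real orchestrator's schema.

```python
# Sketch of the "model hierarchy" notion: services decompose into parts,
# parts may decompose further, and leaves dispatch to SDN, NFV, or
# OSS/BSS handlers.  All model content here is illustrative.

SERVICE_MODEL = {
    "name": "business-vpn", "parts": [
        {"name": "access", "parts": [
            {"name": "vFirewall", "handler": "nfv"},
            {"name": "access-path", "handler": "sdn"}]},
        {"name": "core-vpn", "handler": "sdn"},
        {"name": "billing-record", "handler": "oss"},
    ]
}

def orchestrate(node, handlers):
    """Recursively deploy a model node, dispatching each leaf element to
    the low-level technology responsible for it."""
    if "parts" in node:
        return [step for part in node["parts"] for step in orchestrate(part, handlers)]
    return [handlers[node["handler"]](node["name"])]

handlers = {
    "sdn": lambda n: f"SDN: provision {n}",
    "nfv": lambda n: f"NFV: deploy {n}",
    "oss": lambda n: f"OSS/BSS: record {n}",
}
```

In the Lite scenario, this recursive walk is the independent product: SDN, NFV, and OSS/BSS see only their own leaves, never the tree.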

The higher-level portal-to-orchestration link could be the offense to the “deep orchestration” defense I’ve described.  If you accept the notion that an operations-first strategy can deliver the most benefits, you need to adapt deeper SDN and NFV deployment technology to integrate with operations.  If you want to realize your own benefits without waiting for operations technology to mature, you might give SDN/NFV Lite a try.

Unraveling the VNF Opportunity

An important question in the NFV space is also a simple and obvious one; “What is a good VNF?”  This question is something like the rather plaintive question asked early in the NFV evolution; “Where do I start?” and it’s a close relative to the “What do I do next?” question that many vCPE pioneers are already asking.  Most of all, it’s related to the key sales question “Who do I get it from?”

A virtual function is something ranging from a piece of a feature to a set of features, packaged for unified deployment.  As far as I’ve been able to tell, all the VNFs so far offered are things that have been available for some time in the form of an appliance, a software app, or both.  Many of them are already offered in open-source form too.  Given all of this, it’s not a surprise that there’s already a conflict in pricing, as operators complain that VNF vendors are asking too much in licensing fees.  That’s simply a proof point that you can’t do the same thing in a better way unless that means a cheaper way.

In the consumer market today, the licensing fees for VNFs are probably indefensible across the board.  The consumer’s needs for the standard vCPE-type VNFs are basic; they can buy an appliance for less than 50 bucks that fulfills them all, and some CPE is necessary for consumer applications because you need in-home WiFi.  Even for enterprises, a half-dozen big network operators have told me that they can’t offer enterprises virtual function equivalents of security and connectivity appliances at much of a savings versus the appliances themselves.  Managed service providers who add professional services to the mix have done best with the vCPE VNFs so far.

Addressing the licensing terms is critical for VNF providers and also for NFV vendors who’ve created their own ecosystems.  The operators report two issues, and sometimes a given VNF provider has both of them.  The first is a very steep first tier price, often tied to minimum quantities.  This forces the operator to surrender a lot of their incentive to use hosted VNFs in the first place; they don’t have a favorable first-cost picture.  The second problem is lack of any option to buy the license for unlimited use at a reasonable price.  VNF providers say this is because an operator could end up getting a windfall; the problem is that the alternative to that can’t be that the VNF vendor gets one.
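
The two complaints are easy to see in back-of-envelope arithmetic.  All the figures below (tier minimum, per-instance fee, cap) are invented for illustration; the shape of the math, not the numbers, is the point.

```python
# Illustrative arithmetic for the two licensing complaints: a steep
# first tier billed at its minimum quantity kills the first-cost
# picture, and without an unlimited-use cap the per-instance fee
# never declines.  All figures are assumptions.

def license_cost(instances, first_tier_min=1000, per_instance_fee=50.0,
                 unlimited_cap=None):
    """Total license cost: the first tier is billed at its minimum even
    for a handful of instances; an optional cap models unlimited use."""
    billed = max(instances, first_tier_min)
    cost = billed * per_instance_fee
    if unlimited_cap is not None:
        cost = min(cost, unlimited_cap)
    return cost
```

Under these assumptions, ten early customers cost the operator as much in license fees as a thousand, which is exactly the "no favorable first-cost picture" problem, and at scale the absence of a cap hands the vendor the windfall instead of the operator.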

Licensing challenges like this are killing both residential and SMB VNF opportunity, slowly but surely.  They’re also hurting NFV deployment where it’s dependent on vCPE applications.  Operators have been outspoken at conferences and in interviews that something has to be done here, but perhaps the VNF providers think they have the operators at a disadvantage.  They don’t.

A technical approach to reducing the licensing issues would be supporting multi-tenant VNFs.  The problem with vCPE in particular is that the cost of licensing has to be added to the resource costs for hosting.  The smaller the user, the lower the likely revenue from that user, and the less tolerable it is to host that user’s functions on independent VMs.  Even containers won’t lower the cost enough to get to the mass-market opportunity.  VNF providers, though, are reluctant to provide multi-tenant solutions, perhaps again because they see dollar signs.
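
The multi-tenancy argument is also back-of-envelope math: divide the hosting resource across tenants and see what's left of per-user margin.  The revenue, fee, and server-cost numbers below are assumptions chosen only to show the shape of the curve.

```python
# Why multi-tenancy matters at the low end: a small user's monthly
# margin under different hosting densities.  All figures are invented
# for illustration.

def per_user_margin(revenue=20.0, license_fee=5.0, server_cost=400.0, tenants=1):
    """Monthly margin per user: revenue minus license fee minus the
    user's share of the hosting resource (tenants=1 models a dedicated
    VM; a few dozen, containers; hundreds, a multi-tenant VNF)."""
    return revenue - license_fee - server_cost / tenants
```

With a dedicated VM per user the margin is deeply negative; only when hundreds of tenants share the function does the resource share shrink toward noise, which is why containers alone don't close the gap for mass-market users.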

Progress in this area is almost certainly going to come too late.  For VNFs that represent basic security and connectivity, I don’t think the market will live up to expectations.  Operators have no incentive to work hard to sell something that’s only marginally profitable to them, and VNF software companies don’t want to let go of the notion that somehow this is their brass ring.  There is already a ton of good open-source stuff out there, and I think that there’s a lot of impetus developing from operators to make it even better.  In fact, I think that we’re long overdue in launching some initiative to figure out how to facilitate open-source ports of functionality to VNF form in an organized way, what I’ve called VNF Platform as a Service or VNFPaaS.

A good VNFPaaS approach could help non-open-source approaches, particularly those who want to sell premium data services to businesses.  The Internet is becoming the cost-preferred approach for private network building, and the evolution of SD-WAN could radically accelerate that trend.  If we were to see overlay VPNs/VLANs take over in terms of connection services, then any OTT provider could build in all the security, connectivity, and other useful features that the market could desire.  If that even starts to happen the high velocity of the OTT players could make it impossible for operators to catch up.

We’ll get VNFPaaS eventually, and when we do, open source will eat the enterprise VNF security/connectivity space.  So what about VNFs in other areas?

The most promising area for VNFs is also the most difficult for most vendors to address.  Mobile use of NFV is almost a slam dunk, which means that everything related to RAN, IMS, and EPC will eventually be turned into virtual functions.  The challenge is that the vendors with the inside track in this area are the vendors who already supply RAN, IMS, and EPC or who have compelling demonstrations of their experience and credibility.  There aren’t many of these.

In theory, it would be possible to create “vCPE-like” VNF opportunities for mobile users.  Obviously not by hosting at the point of connection, on a per-user basis, though.  Could IMS or EPC be equipped with hooks for mobile-user security?  Surely, but so far this has not been a populist opportunity either.  What operators should be asking for is a VNFPaaS interface with IMS/EPC so that they could use their (multi-tenant, fairly priced) VNFs with mobile infrastructure.

This could be of critical importance for broader opportunity realization because of the whole IoT-and-contextual services thing.  Even the currently dominant IMS/EPC players are at risk down the line if they fail to support embedded services to facilitate these two opportunity areas.  While data path security is a fair lock for the operators today, as we noted earlier it is feasible that shifting to Internet-based overlay VPN/VLANs could kill that edge.  For the mobile space, the “personal agent” model could do the same, because if a user is really communicating directly only with a personal agent, then whoever owns that agent can provide all the network features the user would see (and buy).

The notion of a VNFPaaS is obviously critical to the success of VNF vendors because without it the relationship between VNFs and the rest of the NFV process isn’t standardized enough to support agile development and wide-ranging features.  The ETSI specifications, which in fairness were not (at least originally) supposed to define an implementation, are not sufficient to ensure the VNF ecosystem can evolve.  Vendor strategies, even open ones, are likely to differ among vendors, particularly if large vendors see a chance to create a lock-in because they have a favored VNF that will pull through their approach.  This issue should be a priority with the ETSI ISG because nobody else is likely to take it up.

NFV demands a different vision of networking if it’s to succeed on a large scale.  VNF providers and operators alike are trying to drive their own revenue models without much regard for the other stakeholders in the game.  However, everyone in the VNF space has to understand that the operator is not compelled to migrate to NFV, and that sellers who insist that buyers accept unfavorable profit balances rarely prosper themselves.  It’s going to take time and effort to get all this shaken out, and accommodations on the VNF side are in my view inevitable.

The Metro Dynamic in Services and Infrastructure

At the service level we all know that mobile broadband gets most of the capex.  In topology terms, the big focus of capex is metro networking.  It’s so important it’s even been driving operator M&A.  If you look at “opex” in the most general terms, meaning non-capital expenses, you find that paying for backhaul and other metro services is for many operators the largest single element.  Finally, something that’s been true for years is even truer today—over 80% of all revenue-generating services transit less than 40 miles of infrastructure, so they are often pure metro.

The big question for metro, for operators, and for vendors is exactly how the metro impetus to capex can be supported and expanded without killing budgets.  That means generating ROI on incremental metro investment, and that’s complicated because of the dichotomy I opened with—there’s services and there’s infrastructure, and the latter is the sum of the needs of the former.

If we were to look at metro from a mobile-wireless-only perspective, things are fairly simple for now at least.  Mobile broadband is sort-of-subject to neutrality but the notion of true all-you-can-eat isn’t as established there and may never be.  Operators tell me that ROI on mobile investment is positive and generally running right around their gross profit levels (15% or so) but wireline ROI hovers in the near-zero area, even going negative in some places.

One telling fact in this area is AT&T’s decision to phase out U-verse TV (a TV-over-DSL strategy that those who’ve read my blog long enough know I’ve never liked) in favor of satellite.  Another is that Verizon has capped new growth in FiOS service area, and now focuses on exploiting current “passes” meaning households where FiOS infrastructure would already support connection.

A part of the problem with wireline is the Internet.  Neutrality rules make it nearly impossible for operators to offer residential or small business connectivity at a decent return.  Enterprises are interested in new connectivity-related services only if they lower costs overall.  Even the VPN and VLAN services enterprises consume are at risk to cannibalization by Internet-overlay SD-WANs and VPNs.

Every operator on the planet knows that there are “producers” and “settlers” in terms of service revenue potential.  The former are likely to do something revenue-generating and the latter are settling on legacy services that can only become less profitable to operators over time.  Today, to be a producer, you have to be a consumer of TV service over wireline (which, increasingly, means fiber or cable), a post-pay mobile broadband user or (preferably) both.  For the rest, the goal is to serve them at the lowest possible cost if you can’t rid yourself of them completely.

The ridding dimension is amply demonstrated by regulated telcos’ selling off of rural systems.  They know they can’t deliver TV there, they can’t run fiber, and so wireline there is never going to cut it.  Better to let these groups of customers go, presumably to rural-carrier players who qualify for subsidies.

The lowest-cost dimension is looking more and more like a form of fixed wireless.  While all the wireless broadband talk tends to center on eager youngsters who have iPhones, the advances in wireless technology combine with the growing appetite for mobility to generate the once-revolutionary idea of bypassing the wireline last mile completely.  That would mean that wireless and wireline would converge in infrastructure terms except where wireline broadcast video delivery is profitable.

Wireless rural services make a lot of sense, and so would the replacement of copper loops with wireless in urban areas where the plant’s age is starting to impact maintenance costs.  Wireless also dodges any residual regulatory requirements for infrastructure sharing, already under pressure in most major markets.

Wireless creates a whole different kind of metro topology.  In the 250-odd metro areas in the US, there are about 12,000 points of wireline concentration, meaning central offices and smart remote sites.  That equates to an average of about 50 offices per metro. There are more than 20 times that number of cell sites and they’re growing, which wireline offices are not.  We’re already evolving to where the mobile sites are supported by Evolved Packet Core and the infrastructure is not subject to sharing.  Net neutrality rules are also different, at least slightly, for mobile in nearly all the major markets.

NFV and IoT could do even more.  The distribution of feature-service-hosting data centers could add hundreds or even thousands of sites per metro area, and all of these would be points of service aggregation so they’d be fiber-served.  What we’re headed for is a fairly dense metro fiber fabric, perhaps linking a fair population of mostly-small data centers.

This is the future that the MEF wants to exploit with their Third Network concept.  It’s also the future that may determine whether SDN a la OpenFlow is ever going to mean anything in a WAN.  The reason is that we have three different models for what a metro network would be, and almost surely only one of them will prevail.

One model is SDN, which is a model that says that services should be based on virtualized-Level-1 technology for aggregation, traffic management, recovery, and service separation.  We’d then build higher-layer services by adding devices (virtual or real) that connect with these virtual wires.  This model would, if adopted, transform networking completely.

The second model is the MEF approach, which says that you build Ethernet networks at Level 2 and these networks are then the basis for not only Level 2 services but for Level 3 services.  With this model, switching is expanded and routing could in theory be containerized to virtual instances, perhaps even running in CPE.

The final model is the IETF model, which not surprisingly builds an IP/MPLS network that offers (in a switch of OSI thinking, to say the least) Level 2 over Level 3, among other things.  This network would retain IP and MPLS, and rely on BGP even more.

You can see the common thread here, which is that we’re talking about competing models for metro infrastructure, an underlayment for services rather than services in itself.  Implicit in that is the fact that any model chosen will influence what level of technology gets purchased, and which vendors win.  The SDN model favors white-box plays, the MEF model favors Ethernet, and the IETF model favors routers.  Since metro technology is growing so much, under so many different pressures, there’s no meaningful incumbency to consider.  Anything could win.


Myths, Marketing, and How To Make Money on Network Services

There is absolutely nothing as important to a business as profits.  For large public companies in particular, profits are what drive stock prices and for the last decade they’ve also driven executive compensation.  If we want to understand businesses as a group, we need to understand how they profit.  Which is why some of the discussions about “new services” really frost me.  They presume buyers will work against profits, and so against their own interests.

The engine that drives enterprise technology purchasing is productivity.  I just saw a clip on LinkedIn that was talking about the high revenue per employee for tech companies like Microsoft and Facebook.  The baseline opportunity for high revenue per employee is set by industry (what you sell) but by making workers more productive you can raise the number, and that improves your profits.  Since the dawn of the computer age, we have had three waves of productivity revolution (clearly visible in the US in Bureau of Economic Analysis data) and these three waves coincide with periods when IT spending growth was high relative to GDP growth.  That’s what we’d like to see for our industry now, but that’s not the point here.

There are a lot of network services we can hypothesize, but the problem is that they still provide the same properties of connecting things as the old services.  Thus, they have the same benefits, and thus they can be beneficial only to the extent that they lower costs overall.  So when Cisco, for example, says that on-demand services are the future, they’re really saying that users would consume ad hoc bandwidth because it would save them money.  That, of course, means it would cost the service providers revenue.

The fact is that there is nothing that can be done to improve revenue from connection services.  Five, ten, twenty years from now the revenue per bit will be lower and that’s inevitable.  If we want new revenue we have to look somewhere else for it.  If we want new revenue from enterprises, we have to look to our productivity story.

Productivity isn’t helped by bandwidth on demand—cost is.  Productivity isn’t helped by hosting features in the network when the features are already available in the form of CPE, either.  We can’t toss around simplistic crap like “higher OSI layers”.  Earth to marketers—there are no “higher OSI layers” in the network above the IP layer.  The other layers are on the network, meaning they reside with the endpoints.

So what do we do?  In all of the past productivity waves, IT improved productivity by a single, simple paradigm.  It got IT closer to work.  We used to punch cards and process them after the transaction had long been completed in the real world.  We moved toward transaction processing, minicomputers, and PCs, and these have let us put computing power directly in a worker’s hands.  So one thing that’s clear is that the next wave (if the industry gets off its duff and manages to do something logical) will be based on mobile empowerment.

Mobile empowerment means projecting IT through a worker’s mobile device.  If we presume that projection is going back to the simple issue of connecting workers and pushing bits to them, we’ve simply held back the sea at the dike for a bit longer.  What has to happen is that we create new services that live in the network.

Mobile networks already have live-in services.  IMS and EPC form the basis for mobility and subscriber management and so they’re essential to mobile networks.  They are also “services” in that they rely on network-resident intelligence that is used to enhance value to the service user.  Content delivery networks have the same thing; you enter a URL for a video (or click on it) and you’re directed not to the content source somewhere on the Internet, but to a cache point that’s selected based on your location.
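The CDN redirection step this paragraph describes can be sketched in a few lines.  This is purely illustrative (the cache names, coordinates, and the squared-distance metric are invented, not any real CDN’s logic), but it shows the principle: the network-resident intelligence picks the delivery point, not the user.

```python
# Hypothetical sketch of CDN request redirection; the cache sites and
# the distance metric are invented for illustration only.

CACHE_POINTS = {
    "nyc-edge": (40.7, -74.0),
    "chi-edge": (41.9, -87.6),
    "sfo-edge": (37.8, -122.4),
}

def select_cache(user_location):
    """Return the cache point closest to the user's (lat, lon)."""
    def dist(site):
        lat, lon = CACHE_POINTS[site]
        ulat, ulon = user_location
        return (lat - ulat) ** 2 + (lon - ulon) ** 2
    return min(CACHE_POINTS, key=dist)

# A request from a user near Chicago is steered to the Chicago edge,
# not to the content origin somewhere on the Internet.
print(select_cache((41.8, -87.7)))  # chi-edge
```

A real CDN would also weigh load, cost, and topology, but the value comes from the same place: a service feature living in the network, paid for by someone other than the user.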

What are the services of the future?  They’re the IMS/EPC and CDN facilitators to mobile empowerment and mobile-device-driven life.  That much we can infer from the financial and social trends, but in technology terms what are they?

A mobile worker is different from a fixed worker because they’re mobile.  They’re mobile-working because their task at hand can’t be done at a desk, or presents itself to be done at an arbitrary point where the worker might not be at the desk to handle it.  That says two things about our worker—they are contextual in consuming IT resources and they are event-driven.

Contextual behavior means reacting to conditions.  Your actions are determined by your context, and context means where you are, what you observe, what your perceived mission is, how you’re interacting with others, and so forth.  Every one of these contextual elements is a service, or at least a potential one.  Yes, you can use a phone GPS to figure out where you are in a geographic sense, and yes that can also provide you with some understanding of the real question, which is where you are relative to things that are important to you.  But suppose that you could find those things by simply relating your “receptivity” to a network service, and have that service now feed you the necessary information?

This model also works for events.  If you’re walking from mid-town to downtown Manhattan (a nice walk if you want some exercise), and if you’re in a hurry, you might try to time out the lights so that you don’t have to wait.  Suppose your phone could tell you when to speed up, when to cross, etc.  OK, some are saying that IoT would let this all happen by providing sensors to read, but who deploys the sensors if everyone gets to read them for nothing?  Anyway, how would you know what sensors to query for your stroll downtown, and how to interpret results?

Social context presents the same situations.  You have a set of interests, for example, that might be prioritized.  You’re heading to a meeting—the work mission that has priority.  You are also looking for some coffee—next-highest priority.  You might have an interest in cameras or shoes.  You might also receive calls or texts, either relating to your mission or introducing other interests (“If you see a nice bottle of wine…”) and you need to field these events too.
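One way to picture the prioritized-interest idea is as a simple event-driven agent.  Everything here is invented for illustration (the interest names and the priority scale are mine), but it captures how a work mission, errands, and incoming texts all become prioritized events to be fielded:

```python
# Illustrative sketch of prioritized contextual interests; names and
# priority values are assumptions made up for this example.

import heapq

class ContextAgent:
    def __init__(self):
        self._interests = []  # (priority, name) min-heap; lower = more urgent

    def register(self, name, priority):
        heapq.heappush(self._interests, (priority, name))

    def next_interest(self):
        """Pop the highest-priority interest to act on next."""
        return heapq.heappop(self._interests)[1]

agent = ContextAgent()
agent.register("meeting", 1)       # the work mission: top priority
agent.register("coffee", 2)        # next-highest priority
agent.register("camera-shop", 5)   # background interest
# An incoming text ("If you see a nice bottle of wine...") is just
# another event that registers a new interest mid-stroll.
agent.register("wine", 4)

first = agent.next_interest()
print(first)  # meeting
```

The contextual service’s job is to keep this queue fed from location, events, and communications, so the worker only sees what matters right now.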

This is where the money is, both for enterprises who want to enhance productivity and for network operators or others who want to get some additional revenue.  Make a worker’s day more efficient and you improve productivity.  Make sales processes more efficient and you can improve sales.

Network transformation isn’t going to happen to support on-demand connectivity.  If there’s a value to on-demand connectivity it’s not going to come from applying it to today’s applications, but to supporting one of these contextual/event applications that we’re still groping to accept.  NFV isn’t going to happen because of cloud firewalls or vCPE, it’s going to happen because somebody uses it to deploy contextual/event services.

We don’t know anything about these services today because we’re focused too much on myth-building.  None of the new things we’re talking about are going to transform worker productivity or save telco profit per bit.  Myths are nice for entertainment, but they can really hurt if you let them color your perception so much that the truth can’t get through.  Fearing the Big Bad Wolf can make you vulnerable to alligators, and waiting to win the lottery can blind you to the need for an investment strategy.

Telcos and vendors are equally at fault here.  It’s easier to sell somebody a modest evolution in capability because it’s easier for the buyer to understand and easier for the seller to promote.  But modest evolutions offer, at best, modest benefits.  We should be aiming higher.

What NFV Needs is “Deep Orchestration”!

If my speculation is correct and operations vendors may take the lead in NFV, what happens to all the grand plans the technology has spawned?  Remember that my numbers show the ROI on an OSS/BSS modernization to improve operations efficiency and service agility is much better than network modernization based on NFV.  Would we strand the network?  The answer may depend on what we could call “deep orchestration”.

“Orchestration” is the term that’s used today to describe the software-driven coordination of complex service processes.  In NFV, the term has been applied to the process of making VNF deployment and connection decisions.  The problem in the NFV sense is that a “service” includes a lot more stuff than VNFs, and in many cases operators tell me that some services they’d want to “orchestrate” might include no VNFs at all in some areas, or for some customers.

There is also an orchestration process associated with service-layer automation.  The TMF defines (in its Enhanced Telecommunications Operations Map) a complex model of processes associated with service offerings.  It even offered (in GB942, the business-layer stuff) a model to associate events to processes using CORBA that qualifies as orchestration via a data model (the contract).  There were very few implementations of GB942, though.

The negligible commitment to data-model orchestration TMF style meant that NFV orchestration could have branched out to address the full spectrum of service and resource orchestration.  This approach has been supported at the product level by the six vendors I’ve been citing as those who could make the NFV business case (ADVA, Ciena, HPE, Huawei, Nokia, and Oracle after M&As are accounted for).  However, these six have not made enormous progress in actually building that uniform orchestration model.  Now Amdocs and Ericsson seem to be attacking orchestration at the service level, and because that could produce most of the benefits of NFV at a better ROI, these guys could end up not accelerating NFV but stalling it.

In theory we could see vendors at the OSS/BSS level actually add effective NFV orchestration to their model, meaning that they could extend their service-layer orchestration downward.  Amdocs has been looking for NFV test engineers, which suggests that they might want to do that.  However, the OSS/BSS guys are as hampered as the NFV vendors were by a single powerful force—organizational politics.  In an operator, the OSS/BSS stuff is run by the CIO and the network stuff by the Network Operations group, with the CTO group being the major sponsor of NFV.  That’s a lot of organizational musical chairs to fill if you want an integrated approach.  So what could be done to save actual network modernization?

This is where “deep orchestration” comes in.  If there’s going to be service orchestration at a high level, through an OSS/BSS player, then at this point it will be difficult to convince CIOs to accept NFV-style orchestration of operations processes.  That means that getting CIO backing (and maybe CEO/CFO backing) will require tying into the OSS/BSS orchestration process in some way.

Right now, OSS/BSS systems manage devices.  The prevailing wisdom for NFV is to make NFV look like a virtual device, which is why I call the approach the “virtual device model”.  The idea is that if you deploy virtual functions that mimic the behavior of devices, then you could present OSS/BSS systems with the virtual form of these devices and they’d be none the wiser.  This approach would work fine for both NFV and OSS/BSS, so what’s wrong with it?

The answer is that it doesn’t promote NFV in any way.  Virtual devices accommodate NFV, they don’t facilitate it.  What we need to do, if we want NFV-driven modernization, is one of two things.  First, NFV vendors who can orchestrate operations processes could advance that notion aggressively and beat back the new efforts of the OSS/BSS players.  That, frankly, isn’t likely to happen because few of the NFV Super Six who can do full-spectrum orchestration have the credibility and connections to influence OSS/BSS decisions.  Those who do haven’t done it well up to now, and it’s unrealistic to think that’s going to change.  The second way is to structure NFV orchestration to complement service orchestration.

Everything I’ve done on NFV (CloudNFV and ExperiaSphere) has recognized what operators have told me, which is that “services” and “resources” are two different domains, even politically.  The guiding principle of deep orchestration is recognizing that and providing a suitable boundary interface between the two that lets service orchestration have a more modern and (dare I say) intimate relationship with lower-level orchestration, including NFV.  But what’s different between deep orchestration and virtual devices?  The best place to start is what’s wrong with virtual devices.

The first problem with a virtual device model is that it represents a device.  In classic networks, you create services by coercing cooperative device behavior, but when devices are virtual they don’t have explicit behaviors.  Forgetting for a moment what happens as the virtual is assigned to resources and becomes real, virtual devices limit operations because they are derived from appliances, which optimum new-age devices are not and should not be.

The virtual-device link to a real device creates the second problem, which is that of reflectivity.  If you have an issue in a real device you can assume it’s a real-device issue.  If it’s an issue in a virtual device you have to map the issue to a real-device MIB, which may not be easy if you’ve virtualized the resources and your firewalls now contain servers and IP networks.

The solution to the problem is to present not fixed devices but flexible abstractions to the OSS/BSS.  That means developing a more agile model than a “device”, a model that can represent an arbitrary set of features and connections, and an arbitrary set of SLA properties.  One way to model this sort of thing is the “intent model” concept I’ve supported in other blogs, but it’s not the only way.  The key is to ensure that the OSS/BSS boundary with SDN and NFV is generalized so that old-network behaviors aren’t imposed on the new network.
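As a hedged sketch of what such a flexible abstraction might look like (the field names are mine, not from any standard), an intent-model boundary object could expose features, connections, and SLA properties while hiding everything about how resources realize them:

```python
# Sketch of an "intent model" boundary object presented to the OSS/BSS.
# Field names and the example SLA are assumptions for illustration.

from dataclasses import dataclass, field

@dataclass
class IntentModel:
    name: str
    features: list          # arbitrary set of features, not device functions
    connections: list       # endpoints the abstraction connects
    sla: dict = field(default_factory=dict)  # SLA properties, not a device MIB

    def conforms(self, measured: dict) -> bool:
        """The OSS/BSS asks only whether the SLA is met; how resources
        below the boundary realize the intent stays hidden."""
        return all(measured.get(k, float("inf")) <= v
                   for k, v in self.sla.items())

vpn = IntentModel("branch-vpn",
                  features=["connect", "firewall"],
                  connections=["site-a", "site-b"],
                  sla={"latency_ms": 50, "loss_pct": 0.1})
print(vpn.conforms({"latency_ms": 32, "loss_pct": 0.05}))  # True
```

The point of the design is that nothing in this object ties it to a router, a firewall appliance, or any other legacy device, so new-network behaviors aren’t constrained by old-network models.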

I still believe that the best approach to orchestration in the long run is to define a single modeling approach from the top to the bottom.  This would let service architects and resource architects build services downward from needs or upward from capabilities with the assurance that the operations and management practices could be integrated throughout.  The next-best thing would be to define a boundary point, which I think is the “service/resource” boundary that naturally fits between service (OSS/BSS/CIO) and network (NMS/NOC/COO) activities, and codify that in as general a way as possible.

Speed is also important.  Operators can realize about 80% of the agility and efficiency benefits of NFV simply by orchestrating service processes optimally.  That leaves a very small chunk of operations benefit to justify a network modernization, forcing you to define services with features not currently supported by network devices if you want to justify changing infrastructure.  We’ve not really worked out how those service/feature relationships would develop, and we don’t know whether regulators and lobbyists will cooperate.  I think this year is critical, because I think this year will mark the point where operations takes the initiative unless true NFV vendors do something to recapture it, or at least ride the service orchestration wave.  Deep orchestration may be essential to NFV vendor survival.

The Real Story on SDN and NFV Security

There is probably no issue in technology that gets as much attention as security.  Nobody seems to think it’s good enough (and they’re probably right) which means that you can criticize nearly any technology, product, or vendor on the basis of security issues and get a lot of sympathy and attention.  So it is with SDN and NFV, both of which have been declared security black holes by various people.  The obvious question is “Are they?” and the answer is that it’s too early to say, so we’ll have to focus on potential here.

Security is a term nearly too broad to be useful, and a good starting point is to divide it into two categories—content security, which relates to malware, viruses, and things that people download accidentally and that then do them harm, and connection security, which relates to the ability of a network to connect those it’s supposed to and keep out everyone else.

Content security problems can arise from a number of sources, but the most prevalent by far is the Internet.  People go to the Internet for something, which could be information, content, software, whatever, and they get something bad instead.  Because content security is compromised by content from insecure sources and because the Internet is and will always be such a source, there’s not much that can be done in new SDN/NFV technology to deal with it.

Some vendors have pitched the idea that a smart software virtual function in the delivery path could on-the-fly explore content to bar the bad stuff, but that doesn’t seem practical.  There are already tools that can spot a malicious site, but it’s not realistic to assume we could detect malware on the fly from an ordinarily trusted source.  I got a contaminated link from a person I know just this week, but fortunately I’m a suspicious type.  The only contribution that network technology can bring to content security is keeping bad things off the network in the first place.

That leaves us with connection security, whose property of serving the authorized and blocking everyone else has already been noted.  Connection security breaks down into three specific areas, each of which we’ll look at:

  1. Network admission control, meaning the ability to control who can join the community the network connects.
  2. Network interference prevention, meaning the ability to prevent anyone inside or outside the network from doing something that’s destructive to service overall.
  3. Network interception prevention, meaning the ability to prevent passive interception of traffic.

Network admission control is primarily a service management process in today’s networks.  When services are ordered or changed, they can be added to (or removed from) sites or, perhaps in some cases, users’ inventories.  If services permit plug-and-play joining, then this process could be automated and based on credentialing.  The risk SDN or NFV would bring in this category is the encouragement of self-service portals, which could make the admission-control process less secure.
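A credential-based admission check of the kind a self-service portal would automate can be sketched as follows.  The shared secret and member names are hypothetical; the point is that automated plug-and-play joining still rests on verifiable credentials rather than on manual order processing:

```python
# Hedged sketch of credential-based network admission control; the
# secret, member IDs, and token scheme are invented for illustration.

import hashlib
import hmac

SECRET = b"shared-admission-secret"   # illustrative only

def issue_credential(member_id: str) -> str:
    """Issue a keyed-hash credential when the member is provisioned."""
    return hmac.new(SECRET, member_id.encode(), hashlib.sha256).hexdigest()

def admit(member_id: str, credential: str) -> bool:
    """Admit a member to the connection community only if its
    credential verifies against the admission secret."""
    expected = issue_credential(member_id)
    return hmac.compare_digest(expected, credential)

token = issue_credential("site-42")
print(admit("site-42", token))       # True: credentialed site joins
print(admit("intruder", token))      # False: wrong identity, same token
```

The self-service risk the text raises lives around this check, not inside it: if the portal that issues credentials is weakly protected, automating admission just automates the exposure.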

Network interference prevention has two main dimensions.  One (the most commonly discussed) is denial-of-service prevention and the other is the decertification of maverick endpoints.  For other than public networks like the Internet or networks that share infrastructure, these tasks tend to converge and require both a determination mechanism (“finger-pointing”) and a cutoff mechanism.  SDN and NFV could impact this area positively or negatively.

Network interception prevention, if we step outside the area of intercepting physical media like the access line, is also a management and control task.  There are some SDN-or-NFV-facilitated activities like passive taps for testing that might be hijacked to create a new risk here.

Like all risks, security risks should be addressed incrementally.  If SDN or NFV have the same risk factors in some areas as we face, and accept, today, then they aren’t changing the game in security.  If they improve or increase risk, then they are.  Let’s look at each technology based on that rule and see what falls out.

SDN, in the classic sense, is a centrally managed forwarding framework that eliminates adaptive routing based on inter-device communication.  If the central SDN controller is compromised, the entire network is compromised, but that’s true if network management is compromised in general.  It’s difficult, in my view, to argue that the centralization of forwarding control adds risk to the picture.

What would create a major security flap is the ability to intercept or hack the device-to-SDN-controller link.  SDN raises a point that’s going to be central to all of virtual networking, which is that there has to be a control path that is truly out-of-band to the data plane.  It’s not enough to just encrypt it because it could then still be attacked via denial of service.  You have to support an independent control plane, just like some old-timers will recall SS7 provided in voice.

On the other hand, central management that eliminates device topology and status exchanges makes it significantly more difficult for a device that’s been compromised to hijack the network.  Providing, that is, that the links between SDN domains for interconnect-related exchanges are secure.  That security would be easier to accomplish than securing networks from maverick devices—we’ve already seen that happen with false route advertisements.

The big point with SDN may be its ability to partition tenant networks into true ships in the night.  Centrally set forwarding rules create routes explicitly and change them explicitly.  The devices themselves can’t contaminate this process and because the tenant networks are “out of band” to the common control processes they can’t diddle with each other’s resources.  SDN networks could be significantly more secure.
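The ships-in-the-night separation can be illustrated with a toy central controller.  This is a sketch under my own assumptions, not OpenFlow itself, but it shows why explicitly and centrally written per-tenant forwarding tables leave tenants no path for touching each other’s routes:

```python
# Illustrative sketch of per-tenant forwarding under central control.
# Class and rule names are invented; this is not an OpenFlow API.

class SdnController:
    def __init__(self):
        self._tables = {}  # tenant -> {destination: output port}

    def set_route(self, tenant, dst, port):
        # Only the controller writes forwarding rules, explicitly,
        # into the table of exactly one tenant network.
        self._tables.setdefault(tenant, {})[dst] = port

    def forward(self, tenant, dst):
        # A lookup can only ever hit the requesting tenant's table;
        # there is no device-level exchange through which one tenant
        # could contaminate another's forwarding.
        return self._tables.get(tenant, {}).get(dst)  # None means drop

ctl = SdnController()
ctl.set_route("tenant-a", "10.0.0.5", "port-3")
ctl.set_route("tenant-b", "10.0.0.5", "port-7")
print(ctl.forward("tenant-a", "10.0.0.5"))  # port-3
print(ctl.forward("tenant-b", "10.9.9.9"))  # None
```

The same destination address forwards differently per tenant, and an address with no explicit rule simply isn’t forwarded, which is the “ships in the night” property in miniature.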

On the NFV side, things are a lot more complicated.  What’s difficult about NFV is that what used to be locked up safely inside a box is now distributed across a shared resource pool.  Virtual functions that add up to a virtual device could be multi-hosted and interconnected, and this structure presents a two-dimensional risk.

The first dimension is the network connection among the functions.  Just what network are those connections on?  If we presumed the ridiculous, which is that it was an open public network, then you’d be able to hack each functional component.  But suppose the network is simply the service data network itself?  You have still exposed individual functions to connections from the outside, which you now have to prevent.  Even if you can do that, you probably could not prevent denial-of-service attacks from the service data plane.

The second dimension is the vertical dimension between functions and their resources.  One of the complex issues of NFV is managing resources to secure an SLA, and the setup for doing that in the ETSI spec is a “VNF Manager”, at least a component of which might be assembled and run with the VNF.  Unless the VNF manager is expected to do its thing with no idea what the state of the assigned resources is, there is now a channel between a tenant software element (the VNF/VNFM) and a shared resource.  That could result in major security and stability issues.

Both these dimensions of risk could be exacerbated by the fact that NFV introduces the risk of agile malware that imbeds in the network.  A VNF, like a maverick device, could bugger a service.  It could also, if it has connection to either the service data plane or the shared resource MIBs, bugger the user’s applications and data or the shared resource pool.  Any flaw in a VNF, if it’s exploitable, could be a true disaster.

NFV could require a number of separate control planes or independent resource networks.  Every VNF, IMHO, should have a private IP subnet that its components live on.  In addition to that, you’ll need the service data plane for the user, and you’ll also need at least one management plane network for centralized control of the resource pool, and probably another for the service management links.  All of these will require virtualization along the lines of Google’s Andromeda, meaning that some elements will be represented in multiple virtual networks.
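A minimal sketch of the per-VNF private subnet idea, with invented address ranges, might carve each VNF’s components their own IP space out of a private block:

```python
# Illustrative per-VNF private subnet allocation; the address block,
# prefix sizes, and VNF names are assumptions for this sketch.

import ipaddress

class VnfNetworkPlanner:
    def __init__(self, private_block="10.128.0.0/16"):
        # Carve per-VNF /24 subnets out of a single private block.
        self._subnets = ipaddress.ip_network(private_block).subnets(new_prefix=24)
        self.assigned = {}

    def allocate(self, vnf_name):
        """Give each VNF a private subnet its components live on."""
        subnet = next(self._subnets)
        self.assigned[vnf_name] = subnet
        return subnet

planner = VnfNetworkPlanner()
print(planner.allocate("vFirewall"))   # 10.128.0.0/24
print(planner.allocate("vRouter"))     # 10.128.1.0/24
```

These private subnets are only one of the planes involved; the service data plane, the resource-pool management network, and the service management links would each be separate virtual networks layered on top, Andromeda-style, with some elements appearing in several at once.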

The common thread here is that the real security issues of both SDN and NFV are outside the traditional security framework, or should be, because they involve securing the control plane(s) that manage virtual resources and their relationships with applications and services.  We do have an incremental security risk in that area because we have incremental exposure issues.  We’re also exacerbating that risk by not addressing the real issues at all.  Instead we talk about SDN/NFV “risk” as though it’s an extension of the risks of the past.  It’s not, and the security strategies of the past cannot be extended to address the future.

There will be the same old service-security questions with SDN and NFV as with legacy networks, but the virtual security processes are different and we need to understand that it’s virtualization itself we’re securing.  Otherwise we’re going to open a true can of worms.

Is NFV Seeking a New Business-Case Leader?

You can’t have a market for next-gen tech without a business case for transformation.  As the famous saying from Project Mercury, the first step in the US space program, went: “No bucks, no Buck Rogers.”  The news out of MWC, recent LinkedIn posts, and other more direct metrics are all showing that vendors and operators alike are starting to realize this crucial point.  What’s interesting is that the focus of the NFV business case is a fairly narrow set of tools that we could call “full-spectrum orchestration”, and the costs and fruits of an NFV victory lie largely elsewhere.  That could create some interesting dynamics.

I’ve talked in past blogs about the nature of the NFV business case and the role that operations efficiency has to play in making it.  There are a half-dozen sub-categories inside what I’ve called “process opex” meaning the opex related to service/network processes and not to things like mobile roaming settlement, but all of them depend on applying a very high level of software automation to OSS/BSS/NMS processes, both today and as they evolve toward SDN/NFV.  This automation injection is literally that—it’s a fairly small quantity of very insightful software that organizes both operations/management processes and resource allocations.  You don’t rewrite all of OSS/BSS/NMS, and what you actually do doesn’t even have to be a part of either of these things, which is what’s so interesting.

The largest cost of NFV, and the largest pie vendors will divide, is the resources, what ETSI calls the NFV Infrastructure or NFVI.  The virtual network functions are a distant second.  My model says that NFVI will make up about 90% of total capex for NFV, with 8% from VNFs and 2% from that critical and central operations/management orchestration thing.  The core of NFV, the central management/orchestration stuff, is a small tail wagging a very large dog, and yet it represents where all the NFV proof points have to be developed and NFV resistance overcome.
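To make the split concrete, here’s the arithmetic on an assumed $1B NFV capex budget (the total is purely illustrative; the 90/8/2 shares are the modeled figures above):

```python
# Worked arithmetic for the modeled NFV capex split. The $1B total is
# an assumed figure for illustration; the shares are from the model.

total_capex = 1_000_000_000
split = {"NFVI": 0.90, "VNFs": 0.08, "orchestration": 0.02}

for item, share in split.items():
    print(f"{item}: ${share * total_capex:,.0f}")

# The ~$20M orchestration slice is where the business case must be
# proven, yet it steers where the other ~98% of spending goes.
assert abs(sum(split.values()) - 1.0) < 1e-9
```

That asymmetry is the whole dynamic: a vendor who wins the 2% controls the disposition of the 90%.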

Most vendors would love for somebody else to do the heavy NFV lifting as long as that someone wasn’t a direct competitor.  Resource-biased NFV is emerging, as I’ve noted recently, as vendors recognize that getting the big bucks can be as easy as riding a convenient orchestration coat-tail. The challenge is that NFVI without the rest of NFV isn’t going very far.

So the question for NFV today is whether the secret business sauce will come from a vendor or from an open activity, like OPEN-O or OSM.  That’s an important question for vendors because if there is no open solution for the orchestration part then an NFVI player is at the mercy of an orchestration player who has NFVI.  Such a player would surely not deliberately make a place for outsiders, and it’s likely that even operator efforts to define open NFV wouldn’t totally eradicate such a vendor’s home-court advantage.

We will probably have an open-source orchestration solution…eventually.  We probably won’t have one in 2016 and maybe not even in 2017.  That means all those hopeful NFVI vendors and VNF vendors will have to wait for something to coalesce out of the commercial space that can meet operator goals financially and not be so proprietary that they gag on their dinners.

The big hope for the NFVI faction is that there will be an orchestration-business-case giant who doesn’t sell NFVI.  While that might seem a faint hope at best, the fact is that of the six vendors who can currently make a business case for NFV, only one is clearly an NFVI incumbent (HPE).  All the rest are potential allies of any/many in the NFVI space, but so far none of these orchestration vendors has been willing to take on the incredibly difficult task of doing what’s needed functionally while at the same time wringing an open ecosystem from specifications that aren’t sufficient to assure one.

And then there’s the OSS/BSS side.  None of the OSS/BSS-specific vendors are among my six, but it’s obvious that both Amdocs and Ericsson are stepping up with operations-centric approaches to opex efficiency and service agility.  OSS/BSS vendors have good operator engagement, and new services and service operations start with OSS/BSS, after all.  With an NFV power vacuum developing, you could expect these guys to move.

Amdocs has been especially aggressive.  At MWC they announced some explicit NFV stuff, and they’ve been advertising for NFV test engineers.  The big news from them IMHO was their focus on “digital transformation”.  If you’re an operations vendor you gain little or nothing by tying yourself to an infrastructure trend.  You don’t sell the stuff.  What you want to do instead is what every vendor who doesn’t sell stuff in a new area wants to do—superset it.  You want to climb up above the technology, which tech vendors usually build and try to justify from the bottom up, and eat all the benefits before they get a chance to filter down.  Transformation is what operators have been trying to do for almost a decade, so revisiting the notion is appropriate.

It’s especially appropriate when the benefits that are needed to drive network transformation can be secured without much in the way of network investment, or even new infrastructure.  If you recall my blog on this, you can gain more cost savings by service-layer automation than by network modernization.  I’ve been working on the next-gen services numbers, and they also seem to show that most of the barriers lie above the network.  The ROI on service automation is phenomenal, and that’s both a blessing and a problem.

It’s a blessing if you’re Amdocs or maybe Ericsson, because you can absolutely control the way operators respond to their current profit squeeze.  No operator presented with a rational picture of next-gen services and operations efficiency driven by service automation from above would consider lower-level transformation until that service-automation process had been completed, at least not after their CIO got done beating on them.  Thus, an OSS/BSS vendor with some smarts could control network evolution.

Which makes it a curse to those who want a lot of evolving to be done, and quickly.  Remember those optimum, hypothetical, hundred thousand NFV data centers?  If you’re Intel or HPE or any platform vendor you want that full complement and you want it today.  Sitting on your hands while OSS/BSS vendors and operator CIOs contemplate the which-ness of why isn’t appealing.

What I think is most interesting about the picture of NFV and network transformation that’s emerged from MWC is that we seem to have lined up everyone who wants to benefit from NFV and sorted them out, but we’re still trying to enlist the players who can actually drive the deal.  I again want to refer to a prior blog, this one on the Telefonica Unica award.

The details aren’t out in the open, but it seems pretty clear that Telefonica found that NFV integration is a lot harder than expected.  Yes, this almost certainly comes out of a problem with the NFV ISG’s model, but whatever the source, the point is that fixing it now will either consume a lot of time (in a traditionally slow-moving open-source project) or require that a vendor offer an open solution even if it undermines their own participation in the NFVI money pit.

Who might that vendor be?  One of the “Super Six” who can make an NFV business case directly?  An OSS/BSS vendor who’s growing downward toward the network, eating operations benefits along the way?  A new network or IT vendor who’s not a major player now?  Whoever it is, they have to resolve that paradox of effort and benefits, of orchestration and infrastructure, and they have to be willing to show operators that NFV can be done…from top to bottom.

What Cisco’s DNA Might Really Be

I hate to blog about vendors two days in a row, but it’s clear that significant stuff is happening at Cisco, who as a market leader in networking is also a major indicator (and driver) of industry trends.  Yesterday I talked about their cloud transformation and how it seemed to be hedging against network commoditization.  Today we have a chip vendor buy and their Digital Network Architecture announcement.  It all seems to me to tell the same story.

The chip deal was Cisco’s second M&A announcement this week.  This one involves Leaba Semiconductor, a company with a lot of skill in building big, complex communications chips.  There are a lot of engineers who tell me that this sort of technology would be essential in building a cheap “tunnel switch” and also in creating high-performance electrical-layer technology to groom optical paths.

If you believe that operators don’t want to buy expensive switches and routers any more, then there are only three possible reactions.  First, you could hunker down on your product line, push your salespeople to the point of hemorrhage, and hope.  Second, you could get out of the big expensive box business, perhaps into software instances of switching and routing.  Third, you could try to make the price/performance of your current products a lot better.  My vote is that Cisco has picked Door Number Three here.

Actually the Leaba deal could position Cisco for the second option too.  I think the logical evolution of carrier networking is to virtual-wire underlayment as a means of simplifying L2/L3 almost to the point of invisibility.  While Cisco might not like that, the alternative of having somebody come along and do it to Cisco instead doesn’t seem attractive.

All of this stuff seems aimed at the network operator, and none of it really addresses another logical conclusion that you could draw about the network of the future.  If everything inside is commoditizing to virtual wires, then how do you sustain differentiation and margins even if you have great technology?  I’ve said all along that operations/management was the key and I think that’s true. So why isn’t Cisco pushing that?

Perhaps they are, with DNA, which in this context is that Digital Network Architecture I’ve already mentioned.  Cisco has a history of three-letter technology architectures, of course, but DNA looks interesting for what it seems to be doing, which is to create a kind of higher-layer element that not only could easily be shifted to the operator side but even includes some operator-oriented technology already.

DNA’s principles could have been drafted for the carrier market.  There are five (remember when Cisco always had five phases—apparently that’s the magic marketing number) and they are (quoting the Cisco release) “Virtualize everything to give organizations freedom of choice to run any service anywhere, independent of the underlying platform – physical or virtual, on premise or in the cloud.  Designed for automation to make networks and services on those networks easy to deploy, manage and maintain – fundamentally changing the approach to network management.  Pervasive analytics to provide insights on the operation of the network, IT infrastructure and the business – information that only the network can provide.  Service management delivered from the cloud to unify policy and orchestration across the network – enabling the agility of cloud with the security and control of on premises solutions.  Open, extensible and programmable at every layer – Integrating Cisco and 3rd party technology, open API’s and a developer platform, to support a rich ecosystem of network-enabled applications.”

Why then push it out for the enterprise?  Well, to start with, Cisco can’t afford to be shilling one flavor of the future to the sellers of services in that future and another to the buyers.  If you’re going to try to do something transformational, you need to reap every buck you can from the revolutionary upside, because as an incumbent you’re sure to reap the downside.  But it’s also true that the carrier space is not where Cisco wants to lead the transformation to next-gen anything, because they have too much at stake there.  Enterprises offer Cisco a more controllable Petri dish.

It’s also true that the enterprise cares less about technology and more about results, which plays to Cisco’s evolutionary approach overall.  Enterprise NFV, for example, is about being able to host features/functions anywhere, meaning generally on Cisco devices.  It’s a kind of super-vCPE approach, and it wouldn’t work well in a carrier environment where you’d quickly run out of real estate.  For the enterprise it’s a good idea.

But the big value in an enterprise DNA push is that you can control the market by controlling the buyer.  Whatever Cisco can do to frame demand will force operators to consider Cisco when framing supply.  And by sticking a virtualization layer into the mix, Cisco can frame demand in such a way that it doesn’t force big changes (read big moves to trash existing Cisco gear) on the enterprise side.  Would we be surprised to find that same element in the supply-side version of DNA?

“Let’s go to bed, said Sleepyhead—Let’s stay awhile said Slow.  Put on the pot said Greedy Gut, we’ll eat before we go!”  Networking has been in conflict among these goals for a decade now.  We have those who want to move aggressively—to doing nothing different.  We have those who just want to be comfortable, and those who want to drain the current market before looking for new waterholes.  Cisco doesn’t want to be any more aggressive than its competitors—perhaps less aggressive in fact.  Cisco also knows how vulnerable it is now, as everyone is trying to spend less on networking.  Some sort of transformation is essential or we collapse into a hype black hole and commoditization.

Cisco’s remedy is cynical but useful nevertheless.  They have uncovered a basic truth, one that everybody probably knows and nobody talks about.  Virtualization has to start from the top because you can only abstract from top to bottom, not the other way around.  Further, once you abstract the top, what the bottom looks like becomes invisible.  Build an intent model of NaaS and equip it with the service features you want, then realize the model on your current equipment and adjust the realization at the pace of your own development.
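That top-down intent-model idea can be made concrete with a small sketch.  This is purely illustrative (it is not Cisco’s DNA code, and all the class and service names are hypothetical): the NaaS intent captures only the service features the buyer cares about, while interchangeable “realization” back-ends map that same intent onto legacy gear today or SDN later, invisibly to the consumer of the model.

```python
# Illustrative sketch of an intent model for NaaS; names and structure
# are assumptions for demonstration, not any vendor's actual API.
from abc import ABC, abstractmethod

class Realization(ABC):
    """How an abstract intent is actually mapped onto infrastructure."""
    @abstractmethod
    def deploy(self, intent: dict) -> str: ...

class LegacyRouterRealization(Realization):
    """Realize the intent on current equipment (e.g. pushed CLI config)."""
    def deploy(self, intent: dict) -> str:
        return f"CLI config for service '{intent['name']}' on legacy routers"

class SdnRealization(Realization):
    """Realize the same intent via an SDN controller instead."""
    def deploy(self, intent: dict) -> str:
        return f"Flow rules for service '{intent['name']}' via SDN controller"

class NaasIntent:
    """Top-level abstraction: service features only, no implementation detail."""
    def __init__(self, name: str, endpoints: list, bandwidth_mbps: int):
        self.intent = {"name": name, "endpoints": endpoints,
                       "bandwidth_mbps": bandwidth_mbps}

    def realize(self, realization: Realization) -> str:
        # The same intent can run on today's equipment and be swapped to a
        # new realization later, at the operator's own development pace.
        return realization.deploy(self.intent)

vpn = NaasIntent("branch-vpn", ["hq", "branch-1"], 100)
print(vpn.realize(LegacyRouterRealization()))
print(vpn.realize(SdnRealization()))
```

The point of the pattern is exactly the one the paragraph above makes: once the top is abstracted, what the bottom looks like becomes invisible, so the realization can evolve without disturbing the service model.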

DNA lets the sense of SDN and NFV work its way into networks at the enterprise level, and thus changes both the nature of demand and, through it, the nature of supply—or so Cisco hopes.  That’s its strength, and its weakness is that other vendors who really want to do something in either area can simply follow the Cisco path and accelerate the transformation in how NaaS is recognized.  Cisco is hoping that this won’t happen, and they might be right; it’s not like Cisco’s competitors have been accomplishing astonishing feats of positioning up to now.

Cisco is changing, under new leadership, from a company that denied the future to perhaps a company that’s determined to exploit the future as safely as possible.  That may not sound like much of a change, and it may not be, but if Cisco follows a top-down pathway to virtualization as earnestly as DNA suggests it might, and if it adds in some insightful cloud capabilities, it could be a real contender in both IT and networking, even if the future is as tumultuous as it might turn out to be.