The Access Revolution: What’s Driving It and How Do We Harness It?

All networking reduces to getting connected to your services, which means access.  In the past, and in a business sense, the “divestiture” and “privatization” trends first split access from long-haul, then recombined the two.  The Internet has also changed access networking, creating several delivery models inexpensive enough to serve consumers.  Today, virtualization is creating its own set of changes.  So where is access going?

The first important point to make is that the notion that access is “heading to multi-service” is false.  It’s been there for decades.  The evolution of “the local loop” from analog voice to multi-service voice/data/digital started back in the era of ISDN.  Cable companies have confronted voice/data/video for over two decades.  The real question is less how many services there are than how those services are separated.

What is true is that consumer data services based on the Web generated new interest in multi-service access, because consumer data needs rapidly evolved to the point where voice traffic was an incidental load on a broadband path.  “Convergence”, meaning the elimination of service-specific silos, is just good business.  And consumer Internet and VoIP were the dawn of the real changes in access.

Many of us, and in many geographies most of us, use VoIP.  Most stream video as an alternative to channelized linear RF delivery.  The simple truth is that for the consumer market, the access revolution is a predictable consequence of the increased demand for data/Internet capacity.  The bandwidth available is exploitable under the Internet model by any service (hence Skype), and that drives a desire to consolidate services onto that new fat pipe, displacing service-specific access and its associated cost.

Business services haven’t moved as decisively.  In part, that is because businesses were always consumers of both voice (TDM) and data and there was no sudden change such as the one that drove consumer Internet demand.  Over time, though, companies increased their own data demand in support of worker productivity gains, and also put up ecommerce sites.

Where we are now with access relates to this business evolution.  Companies have typically retained service-specific access technology, though TDM voice access is rapidly being replaced by VoIP via SIP trunking.  At the same time, though, physical access media have begun to shift more to fiber, which means we first saw consolidation of access channels onto the same fiber trunks, and more recently access multiplexing has started to climb the OSI stack toward L2/L3.

It’s this ride up the OSI ladder that’s being driven increasingly by virtualization.  Network virtualization, NaaS, or whatever you want to call it doesn’t have to be classic SDN; it could be built on tunnels, or MPLS, or whatever.  The point is that if you have a service at a given OSI level, you can use some form of multiplexing below that level to create ships-in-the-night parallel conduits that share the access media.  You can do this multiplexing/virtualization at L2 if you have L3 services, and so forth.  You have multi-service at the level of service consumption, but you may have any service from OSI Level 1 through 3 down below as the carrier.
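
To make the layering point concrete, here’s a minimal sketch in Python (my own illustration; the tags, names, and numbers are assumptions, not any product’s interface) of several ships-in-the-night conduits sharing one access pipe through a lower-layer multiplexing tag:

```python
# Minimal sketch (illustrative only): multiple "ships-in-the-night" service
# conduits sharing one physical access medium via a lower-layer tag.
from dataclasses import dataclass, field

@dataclass
class AccessMedium:
    """One physical pipe (fiber, copper) carrying many tagged conduits."""
    capacity_mbps: float
    conduits: dict = field(default_factory=dict)   # tag -> service descriptor

    def add_conduit(self, tag: str, service_level: str, committed_mbps: float):
        # The tag (a VLAN ID, MPLS label, tunnel ID...) is the lower-layer
        # multiplexing handle; each service above it never sees its neighbors.
        used = sum(c["committed_mbps"] for c in self.conduits.values())
        if used + committed_mbps > self.capacity_mbps:
            raise ValueError("access pipe has no residual capacity")
        self.conduits[tag] = {"service_level": service_level,
                              "committed_mbps": committed_mbps}

access = AccessMedium(capacity_mbps=1000)
access.add_conduit("vlan-101", service_level="L3 VPN", committed_mbps=200)
access.add_conduit("vlan-102", service_level="Internet", committed_mbps=500)
print(access.conduits)
```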

Virtualization is a more dynamic solution than threading new fiber or copper, and the increased potential dynamism facilitates dynamic service delivery.  We all know that SDN, NFV, and the cloud all postulate ad hoc services, and if those services were indeed to grow in popularity and become significant consumers of bandwidth, they would tend to break the static-bandwidth access model of today.

Dynamism at the service level may drive access changes, but access changes then drive changes to services, even the basic connection services.  You can sell somebody extemporaneous capacity to ride through a period of heavy activity, but only if the access connection will support that incremental capacity.  Turning up the turbo-dial isn’t useful if you have to wait two weeks or more to get your access connection upgraded.

Our vision of elastic bandwidth, so far at least, is surpassingly shortsighted.  I’ve surveyed enterprises about how they’d use it, and in almost nine of every ten cases their answer boils down to “Reduce cost!”  They expect to cut back on their typical bandwidth and burst when needed above it.  If that doesn’t turn out cheaper for them, forget elastic bandwidth.  That means that business service access changes probably have to be driven by truly new, higher-layer, services rather than tweaks to basic connectivity.

Even new services like cloud-hosted applications or NFV-delivered virtual features can be inhibited by lack of access capacity.  If the new service or feature requires more bandwidth, it may be impossible to deliver suitable QoE over the same tired access path—unless the operator had the foresight to pre-deploy something faster.  Two operators, serving the same customer or even building, might compete as much on residual capacity as on “price” in a strict sense.  “Bandwidth in waiting”, capacity waiting for something new to deliver, is capacity waiting for operators to exploit for new revenues.  This is the trend now driving access evolution for business services.

The business service flagship, Carrier Ethernet, shows this trend most clearly.  The MEF’s Third Network concept is an attempt to define, first and foremost, an access line as a pathway for multiple virtual networks, and that’s the end-game of the current trends.  The Third Network redefines Carrier Ethernet, but at the same time redefines what it carries.  As ad hoc provisioning of services becomes possible, services that benefit from it become business targets for operators.  Further, if resolving access limitations through virtualization is necessary to make ad hoc services work, it follows that the limitations virtualization cannot address, chiefly the basic capacity of the pipe, have to be minimized somehow or service evolution suffers.

One thing this would do is drive much more aggressive fiber deployment to multi-tenant facilities.  Even the facility owners might want this sort of thing, and we already have some facilities in some areas served by multi-fiber bundles from several operators.  Imagine what will happen if we see more dynamic services, and if elastic bandwidth actually means something!

That means that “service multiplexing”, ad hoc services matched with ad hoc capacity, is the key.  Cloud computing, offsite data storage, anything that has additional capacity requirements as an offshoot of the service delivery, is the only credible type of driver for access virtualization changes on a large scale.  Any of these could produce a carrier revenue model based on ad hoc service sales dependent on ad hoc capacity availability.  That implies oversupply of access capacity.

The question is whether the revenue model for pre-positioning access assets could be made to work.  Certainly at the physical media level, as with fiber deployment, it makes sense.  However, physical media doesn’t become an OSI layer without something to do the conversion.  We’d need to think about how to either up-speed connections dynamically or meter the effective bandwidth of a superfast pipe unless the buyer pays to have the gates opened up.  We also need to think about how to utilize “services” that appear ad hoc on an access line.  How do you distinguish them from an elaborate hacking attempt?

That’s the forgotten point about access evolution, I think.  More often than not, we have static device and interface facilities feeding a static connectivity vision.  We’ll have to work out a way to harness dynamism by converging changes in service delivery and service consumption to match our new flexibility.  Otherwise access evolution could be just another trend that falls short.

SDN/NFV: We Don’t Need Everything We Think, but We DO Need Some Things We’re Not Thinking Of

Revolutions have their ups and downs.  “These are the times that try men’s souls,” said Tom Paine during the American Revolution, and the current times are probably trying the souls of many an SDN or NFV advocate.  For several years, we heard nothing except glowing reports of progress in both areas, and now we seem to hear little except statements of deficiency.  Both these conditions fail the market because they don’t conform to reality, which is somewhere between the glow and the gloom.  Obviously it would be nice to find a technical path that takes us there.

It’s easy to make progress when you’ve defined what “progress” means.  SDN and NFV both took the easy way out, starting down in the technical dirt and picking a little incremental piece of a very large and complicated pie.  That was dumb, to be sure, but it’s hardly fatal.  Even if we leave aside the ongoing question of business case and look to technology issues, we’re not really too deep in the woods…yet.

The biggest problem both SDN and NFV now face is that an effective operational umbrella and a complete picture of infrastructure have yet to be offered.  Given that services have a variable dependency on both operations and infrastructure depending on their specific features and targets, that means that it’s very doubtful that a vision for “total SDN” or “total NFV” could emerge from PoCs or use cases.

Absent an architectural vision to unify all these efforts as they progress, we’re already seeing what one operator characterized as “The Death of Ten Thousand Successes”, meaning an explosion of silos built around vendor and emphasis differences.  In many cases these silos will also be layered differently, approaching EMS, NMS, SMS, OSS, BSS, and whatever-SS differently.  It would do no good at this point to define a single unified approach; it’s too late for that.  What we need instead is to unify all our approaches in some way.  Fortunately that way has already presented itself in the core principle of virtualization: abstraction.

An abstraction, or a “black box” or “intent model”, is a functional concept or external behavior set that’s separated from its possible resources through a representation model.  “Black box” is a great term for this because it reflects the essential truth—you can’t see in, so you only know a black box by its properties, meaning its interfaces and their relationships.

The reason this is nice is that such a black box can envelop anything, including any SDN or NFV silo, any interface, any network or IT element or service.  No matter what you do, or have done, it can be reduced to a black box.

Looking from the outside in, black boxes anonymize the implementation of something.  Looking from the inside out, they standardize it.  You can wrap something in a black box and transform how it looks, including from the management perspective.  It’s easiest to understand this by going to my description of an intent model from earlier blogs, to the notion that intent models or black boxes representing services have an associated SLA.  You could think of this SLA as a local data model, since everything that’s variable and knowable about a service should be thought of as being part of or contributing to an SLA.

Let’s take this principle and apply it at the top.  If we have an SDN controller or an NFV domain, we could wrap it in an intent model or black box.  Once inside, it’s just an implementation of something, so a “service” created by either SDN or NFV is a realization of intent, an implementation inside the black box.  A legacy solution to the same service would also be such an implementation, so we could say that black box wrappings equalize the implementations.  And, if we assume that the black-box model of an SDN or NFV or legacy service has the same SLA (as it must, if it’s an implementation of the same abstraction) then we can say that the management variables of all this stuff are identical.

Now we get to the good part.  Because all of the implementations inside a black box are exposing the same management variables and features, they can all be managed in the same way.  Management is simply one of the properties abstracted by the black box or intent model.  It can also be “orchestrated” in the same way, meaning that everything that can be done to it on the outside can be done regardless of what’s inside.
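
Here’s a hedged sketch of what that equivalence could look like in software.  The class names and SLA values are mine, invented purely to illustrate the black-box idea; they don’t represent anyone’s actual implementation:

```python
# Every implementation (legacy, SDN, NFV) hides behind the same abstract
# interface and exposes the same SLA variables, so management code is common.
from abc import ABC, abstractmethod

class IntentModel(ABC):
    """External behavior only: an SLA plus interfaces, no visible internals."""
    sla = {"availability": 0.9999, "latency_ms": 20, "bandwidth_mbps": 100}

    @abstractmethod
    def deploy(self): ...
    @abstractmethod
    def read_sla_status(self) -> dict: ...

class LegacyRouterService(IntentModel):
    def deploy(self):
        print("configure existing routers via CLI/NMS")
    def read_sla_status(self):
        return {"availability": 0.99995, "latency_ms": 12, "bandwidth_mbps": 100}

class NfvHostedService(IntentModel):
    def deploy(self):
        print("instantiate virtual router VNFs and connect them")
    def read_sla_status(self):
        return {"availability": 0.9999, "latency_ms": 15, "bandwidth_mbps": 100}

def manage(service: IntentModel):
    # Identical management logic, whatever is inside the black box.
    status = service.read_sla_status()
    for key, target in service.sla.items():
        ok = status[key] <= target if key == "latency_ms" else status[key] >= target
        print(key, "OK" if ok else "VIOLATION")

for svc in (LegacyRouterService(), NfvHostedService()):
    svc.deploy()
    manage(svc)
```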

Harmony at any level, within SDN or NFV, could be achieved using this approach.  If an SDN controller domain is an intent model or black box, then its management properties can be the same regardless of who implements it, or the boxes underneath.  They’re also the same as the properties asserted by real switches and routers or NFV-deployed virtual switches and routers, and all of these are the same regardless of the vendor.  If an NFV resource pool is represented by a series of intent models for the type of things they can do, then any set of hosting and connection features that can be combined to do anything useful can be represented, and the implementation can be anything that works.

With this approach the arguments over what the “right” data model is for NFV are moot.  As long as you model intent, the way it’s expressed is a minor issue, one that software developers resolve regularly with transliteration processes for data and Adapter Design Patterns for interfaces.  The discussions on interfaces are moot too, because you can transform everything with these models.
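
As a tiny illustration of that transliteration point (both formats below are invented for the example, not real model specifications), mapping two different encodings into one internal form is routine adapter work:

```python
# Two hypothetical "intent" encodings mapped to a single internal form.
def from_format_a(doc: dict) -> dict:
    # hypothetical format A: nested {"service": {"name": ..., "sla": {...}}}
    return {"name": doc["service"]["name"], "sla": doc["service"]["sla"]}

def from_format_b(doc: dict) -> dict:
    # hypothetical format B: flat keys with an "sla_" prefix
    return {"name": doc["svc_name"],
            "sla": {k[4:]: v for k, v in doc.items() if k.startswith("sla_")}}

internal = [
    from_format_a({"service": {"name": "vpn1", "sla": {"latency_ms": 20}}}),
    from_format_b({"svc_name": "vpn2", "sla_latency_ms": 25}),
]
print(internal)   # both now share one internal representation
```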

What this represents for the SDN community, the NFV community, and the OSS/BSS community is profound.  You can pull this all together now, with a single stroke.  Further, if you focus orchestration on combining intent models into services, and focus management and operationalization on managing and operationalizing intent models, you can have one OSS/BSS/NMS model for everything.  All you have to do is make sure that what is inside an intent model translates between that model’s SLA goals and the internal steps it represents.

We hear a lot about orchestration and modeling today, and it’s clear that having a single approach that crosses SDN/NFV and legacy, OSS/BSS and NMS, would have helped.  We can still have it.  You could build such a product and apply it at the top, the middle, the bottom.  Best of all, at least some of the products already built seem to have a lot of that capability.

I’ve always said there were six vendors who could make a business case for NFV.  These also incorporate at least some SDN and legacy vision, but it’s not always clear how these work in detail.  In particular, it’s not clear whether the modeling and orchestration meets black-box or intent-model standards.  Based on public documentation that I’ve seen, I can say that Ciena and HP are almost certainly capable of doing what’s needed.  Overture probably can, and I think Alcatel-Lucent, Huawei, and Oracle are at least looking at the approach.  While all these vendor-specific solutions are invitations to silos, that doesn’t hurt much in an intent-modeled, black-box-seeking world.

What does hurt is a lack of acceptance of intent-model principles lower down.  Both SDN and NFV need to look at their interfaces in those terms, and while there is some black-box momentum in the standards processes for both these areas, it hasn’t yet fully percolated through the thinking.  I’d sure like to see the notion move quicker, because if it does then we would be closer to a deployable model of next-gen network services.

Finally, an Actual IoT Offering

I admit that in past blogs I have vented about the state of insight being demonstrated on IoT.  It would be far easier to provide a list of dumb things said and offered in the space than a list of smart things.  In fact, up to late last week I couldn’t put anything on the “smart thing” list at all.  Fortunately, things have changed because of information I’ve received from and discussions I’ve had with GE Digital.

Everyone who lives in even a semi-industrial society has likely been touched by GE at some point; the company is an enormous industrial and consumer conglomerate.  They created a new unit, GE Digital, to handle their growing software business, and it’s GE Digital that’s offering Predix, which they call “a cloud platform for the industrial Internet”.  Yes, it is that, but under the covers Predix is how the IoT should have been visualized all along.

If you recall my blogs on IoT, particularly the most recent one, you know that I’ve advocated visualizing IoT not as some “Internet extension” or “LTE opportunity” or even as a “network” but as a big data and analytics application.  My model of three ovals (a big one in the middle, with a smaller one on top and another at the bottom) reflects a view that real IoT will be a repository/analytics engine (the middle oval) connected to sensors and controllers at the bottom oval, and accessed by applications at the top.  This is essentially what Predix creates.

The central piece of Predix, the “Industrial Cloud”, is the repository and cloud platform plus a set of supporting applications that include analytics.  It’s fed from sensors and connected to controllers through a software element called a Predix Machine.  These can interface with (using proper adapters) any sensor/controller network technology, so this is my bottom oval.

You can have a hierarchy of Predix Machines, meaning that you can have one controlling a single device, a collection of them, a floor, a factory.  Each machine can do local analytics and respond to events based on locally enforced policies.  They can also generate events up the line, and this structure keeps control loops short and reduces central processing load, but the central repository can be kept in the loop through events generated or passed through.
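
The sketch below is not Predix code; it’s just my own conceptual rendering of that hierarchy, with each node applying a local policy, acting where it can, and escalating what it can’t handle:

```python
# Conceptual sketch of hierarchical event handling with short control loops.
class EdgeNode:
    def __init__(self, name, parent=None, local_limit=100.0):
        self.name, self.parent, self.local_limit = name, parent, local_limit

    def handle(self, event):
        # Local policy: small deviations are handled here (short control loop).
        if event["value"] <= self.local_limit:
            print(f"{self.name}: handled locally -> adjust controller")
        elif self.parent:
            print(f"{self.name}: escalating to {self.parent.name}")
            self.parent.handle(event)
        else:
            print(f"{self.name}: logging to central repository for analytics")

factory = EdgeNode("factory", local_limit=500.0)
floor = EdgeNode("floor-2", parent=factory, local_limit=250.0)
machine = EdgeNode("press-17", parent=floor, local_limit=100.0)

machine.handle({"sensor": "vibration", "value": 80.0})    # handled at the machine
machine.handle({"sensor": "vibration", "value": 300.0})   # escalates up the line
```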

Speaking of events, they could be generated by analytics operating on stored data, or on real-time streams through event recognition or correlation.  Events can change the repository and also change the state of the real systems, and in all cases they are processed by software that then decides what to do.  As I noted, some of that software can be strung along a Predix Machine hierarchy, some can be inside the Industrial Cloud, and some could be in external applications linked by APIs.

The top oval is a set of APIs available to GE Digital and developer partners to build either general or industry-specific applications.  There’s a Predix Design System that provides reusable components, developer frameworks to support specific developer types and goals, and a UI development environment based on Google’s Polymer (designed to build highly visual, interactive, and contextual user experiences).

Inside Predix there’s the concept they call the “Digital Twin”.  This is a kind of virtual object that’s a representation of a device, a system of functionally linked devices, a collection of devices of a given type, or whatever is convenient.  A model or manifest describes the elements and relationships among elements for complex structures, and the Digital Twin links (directly or through a hierarchy) to the sensors and controllers that actually represent and alter the real-world state the Digital Twin represents.  You can kind of relate a Digital Twin to social networks—you have individual profiles (Digital Twins of real humans or organizations that collect real humans) and you have any number of ad hoc collections of profiles representing things like demographic categories.  A profile would also be a network of “friends”, which means that it’s also representing a collection.

GE builds the “Digital Twin” of all its own technology, and you could build one for third-party elements or anything else as long as you provide the proper model data that describes what’s in the thing and how the innards relate to each other.  The Digital Twin provides a representation of the real world to Predix, collecting data, recording relationships, and providing control paths where appropriate.
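
Here’s a rough, purely illustrative rendering of the Digital Twin idea (my own class names and numbers, not GE’s implementation): a twin mirrors one real thing, joins a class-wide collection, and can be composed into larger twins through a simple parts manifest:

```python
# Illustrative sketch: a twin per real thing, plus class-wide correlation.
class DigitalTwin:
    registry = {}                       # class-wide collection, keyed by part type

    def __init__(self, part_type, serial, parts=None):
        self.part_type, self.serial = part_type, serial
        self.parts = parts or []        # manifest: sub-twins of this twin
        self.readings = []              # state fed from real-world sensors
        DigitalTwin.registry.setdefault(part_type, []).append(self)

    def ingest(self, reading):
        self.readings.append(reading)

    def fleet_average(self):
        # Correlate this thing with every other instance of its type.
        peers = DigitalTwin.registry[self.part_type]
        values = [r for p in peers for r in p.readings]
        return sum(values) / len(values) if values else None

blade_a = DigitalTwin("turbine-blade", "SN-001")
blade_b = DigitalTwin("turbine-blade", "SN-002")
engine = DigitalTwin("engine", "E-100", parts=[blade_a, blade_b])

blade_a.ingest(710.0)   # e.g., a temperature reading
blade_b.ingest(742.0)
print(blade_a.fleet_average())   # 726.0: class behavior informs the single instance
```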

One of the benefits of this Digital Twin approach is that Predix understands relationships or object context explicitly, and also correlations among objects.  If you look at a given profile in social media, you can see who it relates to.  The same is true of Digital Twins, but in more dimensions.  A piece of an engine is part of that engine, and also part of the broader collection of all instances of that piece across whatever other things contain it.  You can then gather information about that specific thing and about how its counterparts are behaving elsewhere, and predict what might happen based on the current state of that single real thing and on the behavior of what’s related to it.  GE Digital has a blog about this.

You can analyze things in a time series too, of course.  You can correlate across classes of things to follow Digital Twin paths, project conditions from the general class to specific objects, and project the result out into the future for essentially any period where the asset you’re talking about has value.  The modeling used to define the Digital Twins lets you contextualize data, and policies let you define access and usage of information for security and governance.

Another interesting principle of Predix that directly relates to the popular vision of IoT is the notion of closed-loop operation.  The concept of “M2M” has been extended by IoT enthusiasts to apply to the idea that refrigerators talk to sinks, meaning that two machines could interact directly.  Even a cursory look at that notion should demonstrate it’s not practical; every device would have to understand how to interpret events sourced from another and how to act on them.  In Predix, closed-loop feedback from sensor to control is handled through a software process intermediary that does the analysis and applies policies.

The notion of closed-loop feedback also introduces yet another Predix concept that I think should be fundamental to IoT, which is “Outcome as a Service”.  OaaS says that in the “thing systems” IoT would generate, the consumer of the system is looking for an outcome.  “Get me to the church on time!” is an example of an outcome, and it would be dissected into routes, traffic analysis, vehicle condition, size, and speed, driver proclivities based on past history, etc.  OaaS is probably the most useful single concept to come along in IoT because it takes the dialog up to the top, where the value to the user lives.

In an implementation sense, Predix is a cloud application.  Everything is implemented as microservices that combine to create an evolving PaaS that future elements (including those developed by third parties) can build on.  There are also DevOps tools to deploy applications and microservices, and “BizOps” tools to manage the cloud platform itself.  To say that Predix is an IoT cloud is no exaggeration.

Even in a blog over a thousand words long, I can’t do more than scratch the surface of Predix, and I don’t have any specific insight into what GE Digital might do to promote it widely or apply it to generalize IoT problems.  Their specific application is the “Industrial Internet”, remember.  But this application, which includes transportation, healthcare, energy, and manufacturing, has enormous IoT potential and could generate enormous benefits (in fact, it already has for early customers).  All of that would make Predix a great on-ramp to a broad IoT success.

IoT is like a lot of other things in our age, including SDN and NFV.  You can nibble at little pieces and control your risk, cost, and effort, but the results will be little too.  The trick is to find early apps that are so beneficial they can justify significant infrastructure.  In the four key verticals GE Digital is targeting, you can see how a Predix deployment around the core (GE-supplied and third-party) technologies could build a lot of value and deploy a lot of technology.  The incremental cost of adding in peripheral (and yes, pedestrian) things like thermostats and motion and door sensors would be next to nothing.  These applications then don’t have to justify anything in the way of deployment, and they are all pulled into a common implementation framework that’s optimized for hardware and software reuse and for operations efficiency.

I think GE Digital under-positions Predix, and that the material is far too technical to be absorbed by the market overall.  This reflects the “industrial” flavor of the offering.  GE Digital is also seeing this more as a service than as a product, which would make it difficult to promote broadly—to network operators to offer, for example.  All these issues could be resolved, and most very easily, of course.  In any event, even the success of one rational IoT framework could change the dialog on the IoT topic.

We need that.  There might not be more IoT hype than for technologies like SDN or NFV, but there’s darn sure worse hype, and IoT is a newer concept.  The barriers to becoming a strident IoT crier are very low; anything that senses anything and communicates qualifies.  We’ve made a whole industry out of nonsense, when the real opportunity for IoT to reshape just about everything in our lives is quite large, and quite real.  I hope Predix will change things.

A Look at the MEF’s “Third Network”

There are a lot of ways to get to the network of the future, but I think they all share one common concept.  Services are in the eye of the beholder, meaning buyer, and so all services should be viewed as abstractions that define the connectivity and SLA they offer but are realized based on operator policies and resource versus service topologies.  In the modeling sense this is the “intent model” I’ve blogged about, and in the service sense it’s Network-as-a-Service or NaaS.

SDN and NFV have focused on what might be called “revolutionary NaaS”, meaning the specification of new technologies that would change infrastructure in a fundamental way.  Last year, the Metro Ethernet Forum embarked on what it called the “Third Network”, and this defines “evolutionary NaaS”.  I noted it at the time but didn’t cover the details as they emerged because it wasn’t clear where the Third Network fit.  Today, with SDN and NFV groping for a business case, evolutionary NaaS sounds a lot better.  Some operators are looking at the approach again, so it’s worth taking some time now to have a look ourselves.

According to the MEF, the Third Network “combines the on-demand agility and ubiquity of the Internet with the performance and security assurances of Carrier Ethernet 2.0 (CE 2.0).  The Third Network will also enable services between not only physical service endpoints used today, such as Ethernet ports (UNIs), but also virtual service endpoints running on a blade server in the cloud to connect to Virtual Machines (VMs) or Virtual Network Functions (VNFs).”  The notion is actually somewhat broader, and in their white paper the MEF makes it clear that the Third Network could be used to create ad hoc connections even over or supplementing the Internet, for individuals.

Taking this goal/mission set as a starting point, I’d say that the MEF is describing a kind of virtual overlay that can connect physical ports, virtual ports, and IP (sockets?) and utilize as underlayment a broad combination of Ethernet, IP, SDN, and NFV elements.  The Third Network would be realized through a combination of operations/management elements that would orchestrate the cooperation of lower-level elements, and gateways that would provide for linkage among those underlayment pieces.

I mentioned orchestration above, and the MEF says that “embracing complete Lifecycle Service Orchestration” is the key to realizing the Third Network’s potential.  LSO is envisioned not as a singular “orchestrator” but as a hierarchy, with the operator who owns the retail relationship running a top-level LSO instance that then communicates with the LSO instances of the underlayment pieces.

This, in my view, is very much like the notion of an intent model hierarchy of the kind I’ve been blogging about.  Each “service” that an LSO level is working on is decomposed by it into lower-level things (real deployments or other subordinate LSOs), and any LSO levels above will integrate it as a subordinate resource.  There’s an SLA and connection points and an implied service description, again like an intent model of NaaS.  That’s good, of course, because NaaS is what this is supposed to be.
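
A toy sketch of that hierarchical decomposition (my own framing, not anything the MEF has defined in code) might look like this, with each domain treated as an opaque piece that either deploys directly or delegates further:

```python
# Hierarchical decomposition: the retail-level orchestrator delegates to
# subordinate domains, each of which realizes its own piece of the service.
class Orchestrator:
    def __init__(self, name, children=None, deploy_fn=None):
        self.name = name
        self.children = children or []     # subordinate orchestrators/domains
        self.deploy_fn = deploy_fn         # leaf-level realization, if any

    def realize(self, service):
        print(f"{self.name}: realizing '{service}'")
        if self.deploy_fn:
            self.deploy_fn(service)        # actual deployment in this domain
        for child in self.children:
            child.realize(service)         # delegate the rest down the tree

access = Orchestrator("access-domain", deploy_fn=lambda s: print("  provision CE port"))
core = Orchestrator("core-domain", deploy_fn=lambda s: print("  set up MPLS/SDN path"))
retail_lso = Orchestrator("retail-LSO", children=[access, core])

retail_lso.realize("E-Line between site A and site B")
```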

It doesn’t take a lot of imagination to see that the Third Network could be the “orchestrator of orchestrators” or higher-level process that unifies SDN, NFV, and legacy technology and also operator, vendor, and administrative domains.  The LSO white paper shows this graphically, in fact.  From that, one might reasonably ask whether LSO has the potential of being that unifying concept that I’ve said SDN and NFV both need.  Yes…and no.

The Third Network’s approach is the approach that both the NFV ISG and the ONF should have taken to describe their own management and orchestration strategies.  That a service is made up of a hierarchy of models (intent models in my terminology) should have been fundamental, but it wasn’t.  The good news is that the Third Network now creates such a hierarchy, but the problem is first that it doesn’t go low enough, and second that it doesn’t go high enough.

The MEF can (and does) define LSO as a top-level service lifecycle orchestrator, and it can (and does) subordinate SDN and NFV implementations and legacy management to it.  But it can’t retroject the service model hierarchy into these other standards processes.  That means that in order for LSO to define a complete orchestration model for a complex service made up of all those technology pieces, it has to model the service entirely and pass only the bottom-level elements to the other standards’ implementations.  Otherwise whatever limits in terms of service structure those other standards had, they still have.

It’s possible that the MEF could take LSO to that level.  Their diagrams, after all, show LSOs talking to LSOs in a hierarchy, and there’s no reason why one of those subordinate LSOs might not itself be top of a hierarchy.  But it’s a lot more complicated to do that, and it extends the work way beyond the normal scope of a body like the MEF.

There’s a similar issue at the top, where the LSO connects with the OSS/BSS.  The MEF’s diagrams show the LSO below the OSS/BSS, meaning that it would have to look to operations systems like a “virtual device”.  That’s not something unique to the Third Network approach; most NFV implementations and SDN implementations try to emulate a device to tie to management systems, but it can create issues.

A service lifecycle starts with service creation and ends with sustaining an operational service for a customer.  While there are events in a service lifecycle that have no direct OSS/BSS impact (a rerouting of traffic to correct congestion that threatens an SLA is one), many events do require operations system interaction.  The service lifecycle, after all, has to live where services live, which is really in the OSS/BSS.

It’s not clear to me from the Third Network material just how the MEF believes it could address the above/below orchestration points.  There is value in the kind of orchestration the MEF proposed, even if they don’t address the holistic orchestration model, because for business services like Carrier Ethernet, VLANs, and VPNs we have established management models.  However, if somebody does develop a full orchestration model, then the Third Network functions would duplicate some of the functions of that broader model.  It might then have to be treated as a “domain” to be integrated with OSS/BSS, SDN, and NFV orchestration through federation techniques.

LSO is good in concept, but it’s still in concept and so I can’t really say where the concept will go.  The MEF white papers outline a highly credible approach and even indicate the specific information models and specifications they plan to develop.  Even with a limited-scope target, this is going to be a formidable task for them.  It would be facilitated, of course, by a broad notion of how new-age operations and management looked from the real top, which is above current operations systems.

We really need a true vision of top-down services for next-gen networks.  You can see that vendors and operators are working on this and that, pieces of technology or standards that have real utility but only in a broad and operationalizable context that we’re still groping to identify.  The main signals of real progress so far are in some of the TMF Catalyst demonstrations, which for the first time are starting to look at both the service realization below and the operations modernization above.  Hopefully the vendors involved will push their efforts earnestly, because there’s a lot riding on the results.

Is Cisco Missing Two Big Opportunities it Already Knows About?

Cisco’s numbers for the quarter just ended were decent, but their guidance for the current quarter was a disappointment to many.  Yeah, Cisco did the usual dance about macro-economics and currency fluctuations, but you can see the Street is concerned that either technology shifts or operator pressure on costs (or both) was impacting Cisco.  The question, if these impacts are real, is what Cisco could do about it.  If you look at Cisco’s earnings call, you see what might be a Cisco strategy emerging, built around UCS.  Is it the right approach?

For a decade, my surveys have shown that the primary driver of network change is data center change, and the vendor who controls data center evolution tends to control network evolution.  IBM was the unchallenged leader among enterprises in everything data center for a long time, but in 2013 they slipped to parity with Cisco and they’ve declined a bit since then.  Part of the reason, I think, is that Cisco realized that data center evolution reflected IT evolution in general, and you need to get control of that process to be a big winner or even (gasp!) the next IBM.

The data center is shifting from being a small-ish number of multi-tasking large systems to a vast complex of blade servers, racks, and switches supporting virtualization or cloud computing.  When vendors like Cisco or Arista talk about cloud strategies and offerings, it’s this data center evolution they’re targeting.  The “cloud” has become a general term to describe IT based on a broadly deployed and loosely coupled pool of servers, connected by LAN/WAN switching and routing.  That networking is extended to users primarily through carrier virtual services, so most enterprise network equipment spending is focused in the data center.

So Cisco’s strategy is simply to win in the data center by offering a complete portfolio of products that address the migration from “mainframes” to racks of blade servers.  In doing so they have a great shot at controlling the data center network and through it, network evolution in the enterprise.  Nothing in the way of technology shifts is needed; it’s Sales 101.

It’s Sales Planning 101 to say that if you’re riding a wave to success you don’t try to jump off.  Cisco would gain little or nothing by pushing through big technology shifts in the data center, shifts like white box switching and SDN.  Their problem is that a news story that says “nothing is happening” isn’t clicked on much, so the media looks for big changes they can claim are imminent.  SDN news produces SDN interest in the enterprise, and that could threaten Cisco’s orderly harnessing of a single sound business trend.

Cisco’s strategy for that is to lance the boil.  You take the top end of SDN, the part that standards people always get to last anyway given their propensity to do bottom-up development, and you tie up the eventual benefits and business case in APIs and tools that deliver “software definition” to the stuff buyers already have and Cisco already makes.  Application-Centric Infrastructure (ACI) is a kind of sissified SDN, something that promises the end-game without the changes in technology.  It does some useful stuff, as all top-down approaches tend to do, but it’s a defense mechanism.

Nothing wrong with that, as long as you stay defensive where you need to be, and that’s where I think Cisco’s call and their ongoing strategy have some holes—two to be explicit.  One is in the area of NFV and the other in SaaS.

It’s really difficult to assess how much risk NFV presents to Cisco’s device business.  In theory, you can host anything in the cloud or on a multipurpose CPE box.  That’s as true today as it would be in an age of NFV, because most enterprise services based on virtual network functions have multi-year contracts.  It’s nice to talk about dynamic provisioning, but how many companies want their VPNs or firewalls turned on and off on a regular basis?  If hosted versions of network features haven’t hurt Cisco so far, it may be that they’re a limited threat.

In any event, what Cisco should be able to do is to capture the benefit case for NFV without actually doing it, just as they’ve done with SDN.  Nearly all the practical benefits of NFV will come not from displacing existing devices but by automating operations and management.  Well, Cisco had plenty of opportunity (and cash) to develop a service management and operations platform that could have delivered nearly all the service agility and operations efficiency of NFV without displacing a single appliance.  A creative program to facilitate field installation of firmware features could do most of the rest.

This approach could be combined with Cisco’s cloud thrust, and the combination could create an upside for UCS, draw the fangs of device-replacement advocates, and perhaps even generate some running room for Cisco’s device business by giving carriers lower TCO without lowering capex.  How did they miss this?

Then there’s SaaS.  On their call, Cisco says that their WebEx is one of the most popular SaaS applications, and in fact it’s one of the most pervasive.  Cisco’s had WebEx for a long time (since 2007) and what started as a collaborative resource for online presentations is…well…still pretty much a collaborative resource for online presentations.  With Cisco pushing technology frontiers (so said the call) with in-house startups, why have they failed to do anything with a premier SaaS opportunity?

And guess what the best example of a specific SaaS/WebEx miss is?  IoT.  Cisco has never been able to look at IoT as anything other than a source of traffic.  I guess if you’re a hammer, everything looks a bit like a nail to you.  Look deep into WebEx, though, and you see an important truth, which is that collaboration happens around something.  WebEx succeeded because it let you collaborate around slides.

IoT could provide a rich new source of stuff to collaborate around.  Think health care, think utilities and transportation, think just about every vertical market that has “things” in it.  Add new logic to WebEx to centralize communication around an arbitrary view of a “thing” or a collection of them, and you have a major new business opportunity.

There are no real technical barriers to Cisco taking advantage of these two opportunities, and I don’t think there’s any real risk to their core business either.  Cisco could be a big, even a giant, player in both spaces.  To me, this looks like an example of corporate tunnel vision, back to my nail-and-hammer analogy.  If they’d think outside the bit (a term that I hold a trademark on, by the way), they’d see that anything that generates real value in a distributed way generates traffic.  In contrast, hype to the media generates only ink and clicks.

I don’t know when the operators will start to act decisively on their revenue/cost-per-bit crossover, or whether some have already done that (some tell me they have).  That means I don’t know when Cisco’s “macro-economic” conditions will have to be updated to include “the buyer put the wallet away” as a condition.  Perhaps neither SDN nor NFV will really matter.  Perhaps regulators will mandate operator spending and tax the population to pay for the excess.  Or maybe Cisco can ask each employee to leave a tooth under their pillow.  Employees need to think about whether they could convince management to look at these two areas.  Or check their dental coverage.

Looking Deeper into “Practical IoT”

IoT could well go down in tech history as the most transformational concept of all time.  It will certainly go down as the most hyped concept.  The question for IoT, in fact, is whether its potential will be destroyed by the overwhelming flood of misinformation and illogic that it’s generated.  SDN and NFV have been hurt by the hype, which has almost eliminated any publicity for useful stuff in favor of just crazed vendor (and media) opportunism.  IoT could be an easier target.

The general view of IoT is that it’s an evolution of the Internet where devices (sensors, controllers, or both) talk to each other to create value for people overall.  The mere thought of billions of “things” appearing on the Internet and generating traffic causes Cisco to nearly faint in joy.  The thought of billions of things that have to be connected wirelessly has made both LTE vendors and mobile network operators salivate to near-flood proportions.  Every possible “thing” interaction (except those involving toilets) has been explored ad nauseam in the media.

At the same time, privacy and security advocates have pointed out that this kind of direct, open, “thing-exchange” creates enormous risks.  Who even knows if a given “thing” is what it purports to be?  Who knows the goal of the interactions—traffic routing, stalking, or terrorism?  Proponents of traditional IoT don’t have a problem with these objections—you just spend more to fix the problems.  Realists know that even without extra security problems to counter, having everything directly on the Internet would be so costly that it would make fork-lift transformation to SDN or NFV look like tossing a coin to a subway musician.

In past blogs, I’ve said that the “right” way to think of IoT is not as a network of anything, but rather as an enormous repository, linked with analytics and event-driven applications.  That’s because what all this “thing-ness” is about is making pretty much all of our environment a resource to be exploited by technology that’s aiding us to accomplish our goals.  If we were to look at the question of finding an optimal route, we’d naturally gravitate to the notion of having a route database with updates on conditions posted along each path.  It’s obvious that home control, industrial control, utility management, transportation optimization—everything that’s supposed to be an IoT application is in fact an application in the traditional sense.  It’s not a network at all, not an Internet of anything.  So why not think of it in application terms?

Workers and consumers do their thing in the real world, which establishes a physical/geographic context and also a social context that represents their interactions with others.  If our gadgets are going to do our will, they clearly have to understand a bit about what we’re trying to do.  They need that same pair of contexts to be fully effective.  Further, each of us (in worker or leisure roles) establishes their own specific context by drawing from conditions overall.  So making our “things” bend to our will means getting them to share our context, which means first and foremost making it sharable.

What IoT needs to do is assimilate context, which is very different from just connecting things.  Connect any arbitrary pair of things and you have next to nothing in terms of utility.  Assimilate what things can tell us, and you’re on your way to contextual understanding.

The right model for IoT, then, should have three layers.  In the middle, at the heart, is a repository, analytics engine, and event generator.  At the bottom is a distributed process that admits validated systems to be either information sensors or controllers and builds a repository of their state and capabilities, and at the top is a set of applications that draw on the data, events, and analysis of the middle layer.

An important part of that middle layer is a policy border.  Nothing gets in except from an authenticated source.  Nothing gets out except in conformance to policies set by the information owner, the provider of IoT service, and regulators at all levels.  So no, you can’t track your ex and then hack the traffic lights along the route to make sure nothing moves.  You can’t “see” control of lights at all, in fact, because of the policies.  The notion of a repository with a policy border is critical because it makes security and privacy achievable without making every IoT device into a security supercomputer.
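
A minimal sketch of that policy border, with invented sources and policies, shows the shape of the idea: writes are limited to authenticated contributors, and reads are filtered by the owner’s and regulator’s policies:

```python
# Illustrative policy border: authenticated ingest, policy-filtered query.
AUTHORIZED_SOURCES = {"traffic-sensor-17": "city-dot"}
READ_POLICY = {"public": {"traffic"}, "city-dot": {"traffic", "signal-control"}}

repository = []

def ingest(source_id, record):
    if source_id not in AUTHORIZED_SOURCES:
        raise PermissionError("unauthenticated source rejected at the border")
    repository.append({"owner": AUTHORIZED_SOURCES[source_id], **record})

def query(requester_role, record_type):
    if record_type not in READ_POLICY.get(requester_role, set()):
        raise PermissionError("policy forbids this access")
    return [r for r in repository if r["type"] == record_type]

ingest("traffic-sensor-17", {"type": "traffic", "speed_mph": 23})
print(query("public", "traffic"))        # allowed: traffic data is public
# query("public", "signal-control")      # would raise: light control is never exposed
```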

Contributing to realistic IoT is simpler too.  Anything that has information and is trusted can contribute.  Since it’s routine to create “logical repositories” that blend public and private data, you could retain your own IoT data in-house and integrate query access between it and the public repository.  An example is easy to visualize.  Today you might turn your home lights on or off at specific times of day.  Or you might use the level of ambient light.  With IoT you might say “Turn on my lights when a majority of my neighbors have theirs on” and off based on a similar majority vote.  Useful?  Yes, if you don’t want your home to stand out.
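
That neighbor-majority rule is just a query against the repository rather than device-to-device chatter; a toy version (with made-up data) could be as simple as:

```python
# The lighting decision comes from an analytic query, not from my lights
# talking directly to my neighbors' lights.
neighbor_lights = {"house-2": True, "house-3": True, "house-4": False,
                   "house-5": True, "house-6": False}

def majority_on(states):
    on = sum(1 for lit in states.values() if lit)
    return on > len(states) / 2

print("turn my lights", "on" if majority_on(neighbor_lights) else "off")  # on: 3 of 5 lit
```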

An analytic/event view of the world is useful in both social and work situations.  For example, a shopper might want an alert if they passed within 50 yards of a sale of a specific item.  A worker might want to know if they’re within the same distance of a junction box, or have passed a freight car with a specific carton inside.  You could argue that a conventional model of IoT could provide this, but how would anyone know what sensor to use or how to interpret the result geographically?  Does the store or boxcar sensor have to know where it is?  We’re back into supercomputer territory here.  But with the analytic/repository model, all this stuff is straightforward.

My proposed IoT model doesn’t mean that there are no new sensors, but it would suggest that current low-cost-and-power techniques for putting sensors online would be retained and expanded to build critical mass, control costs, and avoid sensor hacking.  There would still be new revenue for operators if they could establish a value to using cellular technology directly, which could be the case with higher-value sensors or controllers.  There would still be new traffic too, though most of it would likely come from applications of IoT and not from connecting “things”.

There’s a clear role for both SDN and NFV in this model too.  You could picture my central core element of IoT as a grand mesh of functions that have to be deployed and connected.  We would create a pervasive cloud of repositories and applications for analysis, digestion, and classification.  We’d then add in dynamic request-driven applications.

To me, it’s a mystery why something this obvious gets ignored, and the only possible answer is that what vendors and operators want is a simple short-term revenue boost.  Since the NASDAQ crash of 1999/2000 financial regulations have increasingly focused companies no further forward than the next quarter.  We’re not going to get IoT in any form with that sort of thinking, nor will we get SDN or NFV deployed to any significant level.  It’s also true that simple stories are easy to write, and you can fit them into the 350 words or so that publications are willing to commit.

Simple’s not going to cut it here, and so with IoT as with SDN and NFV we may have to depend on somebody big and smart stepping up.  That may be happening, and I’ll talk about it in a future blog.

How to Keep SDN/NFV From Going the Way of ATM

Responding to a LinkedIn comment on one of my recent blogs, I noted that SDN and NFV had to focus now on not falling prey to the ATM problems of the past.  It’s worth starting this week by looking objectively at what happened with ATM and how SDN and NFV could avoid that (terrible) fate.  We should all remember that ATM had tremendous support, a forum dedicated to advancing it, and some compelling benefits…all like SDN and NFV.  Those who don’t learn from the past are doomed to repeat it, so let’s try to do some learning.

ATM, or “asynchronous transfer mode”, was a technology designed to allow packet and TDM services to ride on the same paths.  To avoid the inevitable problem of having voice packets delayed by large data packets, ATM proposed to break traffic into “cells” of 53 bytes, and to prioritize cells by class of service to sustain fairly deterministic performance across a range of traffic types.  If you want the details on ATM technology you can find them online.

If you look at the last paragraph carefully you’ll see that ATM’s mission was one of evolution and coexistence.  The presumption of ATM was that there would be a successful consumer data service model and that model would generate considerable traffic that would be ill-suited for circuit-switched fixed-bandwidth services.  So you evolve your infrastructure to a form that’s compatible with the new data traffic and the existing traffic.  I bought into this view myself.  It’s at least a plausible theory, but it fell down on some critical market truths.

Truth number one was that while ATM was evolving, so was optical transport and Ethernet transport, and in any event even high-speed TDM trunks (T3/E3) could be used as trunks for packet services.  Further, these physical-layer parallel paths offered a cheaper way of getting to data services because they didn’t impact the cost of the rest of the network or commit the operator to a long period of evolution.

The second truth was that the whole issue of cells was doomed in the long term.  At low speeds, the delay associated with packet transport of voice mingled with data could be a factor, but the faster the pipe the less delay long packets introduced.  We have VoIP today without cells; QED.
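
The arithmetic behind that point is simple.  Here’s a quick calculation (illustrative link speeds) of the serialization delay a maximum-size data packet imposes on a voice packet queued behind it, at old trunk speeds versus modern ones:

```python
# Serialization delay of a 1500-byte packet at various link speeds.
def serialization_delay_ms(packet_bytes, link_mbps):
    return packet_bytes * 8 / (link_mbps * 1_000_000) * 1000

for mbps in (1.544, 45, 1000, 10000):     # T1, T3, 1 GbE, 10 GbE
    print(f"{mbps:>8} Mbps: {serialization_delay_ms(1500, mbps):7.3f} ms")
# ~7.8 ms at T1 speed (enough to matter for voice), ~0.001 ms at 10 GbE (irrelevant)
```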

The third truth was that vendors quickly saw all the media hype that developed around ATM and wanted to accelerate new opportunities of their own.  They pushed on stuff that might have supported their own goals, but they never addressed the big question, which was how (and whether) you could justify a transition to unified ATM versus partitioned IP/TDM.  They never made the business case.

It’s also worth noting that there was a time dimension to this.  In the early 1990s the Web came along, and with that we had the first realistic model for consumer data services.  Users were initially limited to dial-up modem speeds, so the fact is that consumer bandwidth for data services was throttled at the edge.  The market was there almost immediately, and realizing it with the overlay model was facilitated by the limited dial-up bandwidth.  But it was clear that consumer broadband would change everything, and it came along in at least an early form within about five years.  At that point, the window for ATM closed.

Few in the ‘80s doubted that something like ATM was a better universal network architecture than TDM, presuming you had a green field and a choice between them.  But that wasn’t the issue, because we had TDM already.  IP was, at the time, just as flawed (though in different ways) as ATM as a universal strategy.  What resulted in “IP convergence” and not “ATM deployment” was that IP had control of the application, the service.  The argument that one network for all would have been cheaper and better is probably still being debated, but the fact was (and is) that the differences didn’t matter enough to justify fork-lifting stuff.

I hope that the parallels with SDN and NFV are clear here.  If we were building a global network today where none had existed, we’d probably base it largely on SDN/NFV principles, but we did have IP convergence and so we have a trillion-dollar sunk capital cost and immeasurable human skills and practices to contend with.

My contention from the very first has been that capex would not be enough to justify either SDN or NFV, and operators I talked with as far back as 2013 agreed.  You need new service revenues or dramatic reductions in opex, or you can’t generate a benefit case large enough to reach critical mass in SDN/NFV deployment.  Without that mass we’re back to my operator contact’s “rose-in-a-field-of-poppies” analogy; you just won’t make enough difference to justify the risk.

There were, and still are, plenty of justifications out there, but there seem to be only two real paths that emerge.  One is to find a “Trojan App”, a service whose revenue stream and potential for transformation of user/worker behavior is so profound that it builds out a critical mass of SDN/NFV on its own.  The other is to harness the “Magic Benefit”, a horizontal change that displaces so much cost that it can fund a large deployment, and then sustain it.

The Magic Benefit of operations and management automation—or “service automation”—could deliver operator savings equivalent to reducing capex by over 40% across the board.  I believe that if, in 2013, the NFV ISG and the ONF had jumped on this specific point and worked hard to realize the potential, we could already be talking about large-scale success for both SDN and NFV and certainly nobody would doubt the business case.  Neither body did that.

We do have six vendors (Alcatel-Lucent, Ciena, HPE, Huawei, Oracle, and Overture) who could deliver the Magic Benefit.  I also believe that if in 2014 any of these vendors had positioned an NFV solution credibly based on service automation at the full scope of their current solution, they’d be winning deals by the dozens today and we’d again not be worried about business cases.  Never happened.

If we apply ATM’s lessons, then both SDN and NFV need a tighter tie to services; cost alone isn’t going to cut it.  I’m personally a fan of the Trojan App, but the problem is that there are only two that are credible.  One is the mobile/content delivery infrastructure I just blogged on and the other is the Internet of Things.  For the former, we have only a few SDN/NFV vendors who could easily drive the business case—of my six total-solution players, only Alcatel-Lucent and Huawei have credible mobile/content businesses.  IoT doesn’t even have a credible business/service model.  It’s hyped more than SDN and NFV, and to just as evil an effect.

There is no question that mobile and content infrastructure could be a tremendous help to SDN/NFV deployment because both are well-funded and make up a massive chunk of current capital spending.  If you get critical mass for SDN/NFV with mobile/content deployment, you get critical mass for everything and anything else.  No other success would be needed to lay the groundwork.  But there’s still the nagging question of whether SDN/NFV benefits services in any specific way.  At the end of the day, we’re still pushing the same protocols and bits.

All of the six NFV prime vendors could also tell a strong mobile/content story.  Metaswitch is one of the most experienced of all vendors in the NFV space, and their Project Clearwater IMS would be a strong contender for many mobile operators and a super strategy for a future where MVNOs did more of the service-layer control than is common today.  Any vendor could assemble open-source elements to create an IoT model, though it would be far easier if some big player with some market might got behind it.

IoT is the opposite, meaning that instead of having a lot of paths that risk being service-less, we have no credible paths because service-oriented IoT hasn’t been hot.  Everyone is focusing on the aspect of IoT that’s the most expensive and raises the largest security and public policy concerns: attaching new sensors.  We have billions of sensors already, and we have technologies to connect them without all the risk of an open network model.  What we need is an application architecture.

Interestingly, I heard HPE’s CTO give a very insightful talk on IoT that at least seemed to hint at a credible approach and one that could easily integrate both SDN and NFV effectively.  For some reason this hasn’t gotten much play from HPE in a broader forum; most operators tell me they don’t know about it.  Other NFV prime vendors could also play in an effective IoT model, though it would be easier for players like HPE or Oracle to do that because they have all the specific tech assets needed to quickly frame a solution.

The lesson of ATM is at the least that massive change demands massive benefits, which demand massive solutions.  It may even demand a new service model, because cost-driven evolution of mass infrastructure is always complicated by the fact that the cheapest thing to do is use what you already have.  I think that in the coming year we’re going to see more operators and vendors recognizing that, and more wishing they’d done so sooner.

What Does the SDN/NFV Success Track Through Mobile and Content Look Like?

I was talking yesterday with an old friend from the network operator space, a long-standing member of the NFV elite, and one of our topics was just what could pull through SDN and NFV.  Two specific notions came up: one was the Internet-of-Things opportunity I’ve mentioned a number of times in my blogs (yesterday, for example), and the other was content delivery.  I’ve already promised to look more deeply into the former, but content, and in particular mobile content, is also very credible.  Let’s take a look there first.

To set the stage, mobile services are the bright spot of the network operator space, if there is such a thing.  The margins are higher, there’s still, at least in some areas, a hope of increasing ARPU, and regulations in many markets are a bit lighter.  For a decade now, mobile capex has grown much faster than wireline capex.

Video isn’t the only driver of mobile, but it sure helps.  A bunch of research on video viewing from varied sources agrees on a key point, which is that channelized TV viewing isn’t falling.  Online video consumption largely supplements it, and the reason is that more and more online viewing takes place where channelized TV isn’t available—in the hand of the mobile user.

Mobile streaming is by far the fastest-growing user of bandwidth, and its importance to mobile users was demonstrated by T-Mobile’s decision to offer free streaming video as a competitive differentiator.  As I suggested in yesterday’s blog, this is a reflection of the fact that a small increase in capex to support additional capacity would be easily justified if customer acquisition and retention costs (the largest opex component for mobile operators) could be reduced significantly.

One corollary to this point is that it then behooves the operators to ensure that the capex increase associated with unfettered mobile streaming is small.  How that might be done is a jumping-off point that illustrates the complexity of the relationship between new technologies like SDN and NFV and real-world business issues for network operators.

Mobile networks’ video-carrying capacity is impacted by a number of things.  The first is the RF signal, which has a native capacity shared by the users within a cell.  You can increase this capacity by making the radio access network (RAN) faster (4G is pretty good at supporting large numbers of video users and 5G would be better), by making cells smaller so fewer users share the capacity (which means deploying more cells to cover the geography), or by using WiFi offload where possible to create what’s essentially a new parallel RAN.

Behind the RAN is the backhaul.  You can’t offer wireless video services without something to connect the cell sites (or WiFi sites) to video sources, and in the modern world that means running fiber.  Given that per-fiber capacity is quite high, things like 5G that increase per-cell capacity make sense versus running a bunch of new glass to support more cells.

The combination of RAN and backhaul, and the high cost of customer acquisition and retention in the mobile space, is making the notion of the mobile virtual network operator (MVNO) more interesting.  Giants like Amazon, Apple, and Google have either demonstrated MVNO interest or are rumored to be looking at it.  Cable companies have admitted they plan to become MVNOs at least on a trial basis.

When Google talked about being an MVNO, I pointed out that there was no future in having a mobile industry with three or so players and a dozen resellers.  All that happens in undifferentiated resale is that prices fall into the toilet.  Low-margin mobile does none of our aspiring MVNO players any good, nor does it exploit their strengths.  So we have to look at differentiated MVNO potential, and that’s where SDN and NFV come in.

Why have “virtual” in a name if you don’t virtualize anything?  It seems pretty obvious that if a “real” infrastructure-based mobile operator fully virtualized their infrastructure they could create a kind of mix-and-match inventory of capabilities that MVNOs could exercise at will, mixing in their own unique differentiators.  Comcast, for example, is at least considering being an MVNO and they have a strong content delivery capability already.  Why not combine it with RAN from somebody else?

While some of this virtualizing would impact the RAN and backhaul, most of it would probably fall in the metro and CDN zone.  The signaling and service intelligence of a mobile network resides there, inside IMS, EPC, and of course the CDN technology.  Virtualization at the SDN level could let operators partition real mobile infrastructure better for virtualized partners, but it would also let operators reconfigure their mobile and content delivery architecture to match either short- or long-term shifts in user behavior and traffic patterns.

On the NFV side, mobile and CDN signaling/service elements could be deployed, but the value of NFV to these long-lived multi-tenant components of infrastructure depends on how much of NFV’s benefits are drawn from agility/operations efficiency.  If all you do with NFV is deploy stuff, then something that deploys only once and then gets minimally sustained isn’t a poster-child app.  But if we start to imagine feature differentiation of mobile services and the integration of a true IoT model (not the vapid “let’s-move-sensors-to-LTE” junk), we can see how the same operator who offered virtual IMS/EPC/CDN might offer hosting to VNFs that MVNOs supplied for service differentiation.

CDN elements and IMS customer control and signaling are hosted, whether on specialized appliances or on servers.  The hosting could evolve to a more dynamic model, as I’m suggesting above, and with that dynamism it could promote a richer distribution of data centers in at least the major metro areas.  That would then establish hosting at a reasonable scale and reduce the barrier to deploying other incremental NFV applications/services.  Virtual CPE in any form other than edge-hosted probably depends on something like this pre-deployment of at-scale resource pools, and so do many other applications.

Many people think that mobile services and content delivery offer SDN and NFV opportunities, but there’s been precious little said about the specific opportunities that would arise or the specific way that SDN or NFV could address them.  Absent that sort of detail, we end up with people saluting the mobile/content/SDN/NFV flag without any actual collateral to play in the game, much less to drive it.

This is one of the true battlegrounds for SDN/NFV, with battle lines that aren’t shaped by either technology but by the high-level reality of selling real services to real users.  The union of Alcatel-Lucent and Nokia could create a true powerhouse in this area, a player who would then fight with Ericsson and Huawei for supremacy in the mobile/content space.  That fight is one business/market force that could then create a rich opportunity for both SDN and NFV—and of course for the three vendors who are duking it out.

Can We Find, and Harness, the Real Drivers of Network Change?

If you go to the website of a big vendor who sells a lot to the network operators, or read their press releases, you see something interesting.  The issues that these vendors promote seem very pedestrian.  We hear about things like “customer experience”, “unified services”, “personalizing usage”, “traffic growth”, “outages”, and even “handset strategies”.  Where’s the revolutionary stuff like the cloud, SDN, and NFV?  Or, at least, why isn’t that stuff getting highlighted?

The popular response to this is that it’s because of that bad old carrier culture thing.  These guys are dinosaurs, trapped in primordial sediments that are slowly fossilizing around them while the comet zooms in to generate mass extinction.  Others are playing the role of the mammals—small, fast, scurrying over the traps and destined to survive and rule.  You probably realize by now that it’s not that simple, but maybe not why that’s the case.

The vendor website that’s filled with these pedestrian terms isn’t trying to sell to the dinosaur population; it’s trying to sell to the buyers with money.  Equipment is generally purchased by operations departments, and these people don’t have a mission of innovation.  That mission belongs to the “science and technology” or CTO people (who, by the way, don’t have much money at all).  The operations people think in terms of service benefits relevant to current sales situations, and that’s why all those pedestrian topics come up.

Imagine yourself as a carrier sales type.  Your customer bursts through your door (virtually or in person) and shouts “I demand you fulfill me using SDN or NFV!”  You’d probably call the cops.  On the other hand, a customer demanding you accommodate traffic growth, unify their services, or control outages is pretty much the norm.

At a high level, this explains the business case issue.  We can’t sell technology changes to buyers of service, we have to sell the impact of technology change on those buyers’ own businesses or lives.  That’s what a business case must do.  But we’ve talked about business cases already, and I want to open another dimension to this.  What are the “priority attributes” that any element of network infrastructure will have to deliver on?

The CFO of every operator is the star of the company’s quarterly earnings call.  All the people on the call—meaning the CFO and the financial analysts—see networking as essentially a zero-sum game.  Revenue gains by me are revenue losses by someone else, which means that “new revenue” is more likely to be someone else’s old revenue than something that’s never been spent before.  Cost reductions have to target large costs with low-risk approaches.

Zero-sum revenue games mean you have to differentiate on something that 1) the customer values and 2) the salesperson can convey quickly and convincingly.  Simple technology changes fail on both counts, which is why that initial list of what might look like ancient clichés is so ubiquitous on vendor sites.  It might not be as obvious, but truly new services fail the second test.  How much time would it take for a salesperson to convince a buyer to adopt a different service model?  A long time, if it happened at all, and sales success is the real prerequisite to any revenue gains.

Interestingly, cost reduction discussions often end up sounding like new-revenue discussions.  The reason is that the largest operations/administration cost element is customer acquisition and retention, running 12 cents per revenue dollar.  When you consider that capex is only 20 cents per revenue dollar, you can see the point here.  This little fact is why wireless companies like T-Mobile can offer unlimited video streaming, eating the data costs.  Sure, it costs them some access capacity (which they can reduce through efficient use of CDNs), but if a little additional capex can make a big difference in the acquisition/retention cost, it’s worth it.
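To make that arithmetic concrete, here’s a minimal back-of-the-envelope sketch in Python.  The 12-cent and 20-cent figures are the ones above; the 5% capex increase and 15% improvement in acquisition/retention cost are purely my own assumptions for illustration:

```python
# Hypothetical operator economics, per dollar of service revenue.
# The 12-cent acquisition/retention and 20-cent capex figures come from the
# discussion above; the percentages below are assumptions for illustration.
acquisition_retention = 0.12   # largest opex component, per revenue dollar
capex = 0.20                   # capital spending, per revenue dollar

# Assume free streaming needs 5% more capex (extra access/CDN capacity) but
# improves retention enough to cut acquisition/retention cost by 15%.
capex_increase = 0.05 * capex                  # +1.0 cent
opex_saving = 0.15 * acquisition_retention     # -1.8 cents

net_benefit = opex_saving - capex_increase
print(f"Net change per revenue dollar: {net_benefit:+.3f}")   # about +0.008
```

With numbers anywhere in that range, the retention side of the ledger dominates, which is presumably the bet T-Mobile is making.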

Let’s take this simple truth and run with it.  If the largest benefit source for a new technology is its ability to reduce acquisition/retention charges, then what matters about the technology is how well it does that.  It’s not easy to make a connection between virtual pipes or virtual firewalls and better customer traction or lower churn.  You can assert that there is, or could be, one but most vendors would admit they have no idea how to prove it.  Worse, they could never construct a technology trial to validate their assertions.

This is why a bottom-up approach to both SDN and NFV was such a problem.  In a real, logical technology project you start with the benefits you’ll need to harness to get what you want, and you define the specific requirements and features that will deliver them.  You then build downward to implement or standardize those features.

What about IP convergence, you might ask?  Well, the fact is that the IP revolution came about because of a fundamental change in demand.  Two forces were in play.  Enterprise networking was built around host-centric architectures like IBM’s Systems Network Architecture (SNA), and there was no consumer data service potential at all.  Routers offered enterprises a cheaper way to push data traffic, and the Web offered a consumer data service model.  And so off we ran.

This is why the focus on the service status quo is a problem for SDN and NFV.  If we reproduce what we already have as our only revolutionary mission for new technology, we cut ourselves off from the only kind of benefits that has ever created a network revolution.  We are forced to rely purely on cost savings, and as I’ve pointed out in prior blogs it’s difficult to muster a lot of cost savings when you deploy a technology incrementally.

How much do you save by transitioning one percent of your network spending to SDN or NFV?  Clearly less than 1% in capex, since you’ll still spend something on the replacement.  On the opex side you may save nothing at all, because your SDN and NFV pearls are trapped in a lot of legacy seaweed (swine?) that still requires exactly the same operations and management practices.  And without new services you’re back to the problem of proving that customer acquisition and retention savings are possible.
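The one-percent arithmetic is just as easy to sketch; the 20% capex advantage assumed for the SDN/NFV gear is my own, and probably generous:

```python
# Move 1% of network spending to SDN/NFV; assume the new gear costs 20% less
# than what it replaces.  Both percentages are illustrative assumptions.
budget = 100.0               # index total network capex to 100
migrated_share = 0.01        # the 1% that moves to SDN/NFV
capex_advantage = 0.20       # assumed SDN/NFV cost advantage

capex_saved = budget * migrated_share * capex_advantage
print(f"Capex saved: {capex_saved:.1f} out of {budget:.0f}")   # 0.2 out of 100

# If the SDN/NFV island still needs the same legacy operations practices,
# the incremental opex saving is effectively zero.
print(f"Total saved: {capex_saved + 0.0:.1f} out of {budget:.0f}")
```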

I’ve noted in past blogs that the Internet of Things was something that could drive technology changes.  That’s because it’s a service-level change, something that like IP could be transformative because it transforms what the top-of-the-food-chain buyers spend their money on.  However, just as our conception of SDN and NFV has been shortsighted to the point of being stupid, so is our IoT conception.  Cisco thinks it’s all about more traffic (therefore, it’s about more router spending).  Verizon and other operators think it’s all about LTE-based sensors (therefore, about more mobile service spending).  It’s not about either one.

I’m going to be talking more about IoT in future blogs, but talking about what it really can be and not the stupid stuff.  Along the way, I’ll show how it can transform the network, the cloud, and how it could then pull through massive SDN and NFV investments.

We did dumb things with our current revolutions, and in doing so have almost killed their chances of being revolutionary.  I’d sure like us not to muck up IoT.

Taking a TMF/OSS View of NFV’s Business Case

I’ve pointed out in a number of my past blogs that of all the things needed from an SDN or NFV implementation to make the business case, none tops an effective service management automation approach.  I’ve also noted that the NFV ISG initially put end-to-end management out of scope, and that they also ignored the issues of federation of services across management domains.  The ISG seems to be reversing itself on these issues, but the architecture was laid out without them and retrofitting to put them in could take quite a while.  Other bodies might have to take up the slack.

The most logical would seem to be the TMF, which launched an activity called ZOOM (Zero-touch Orchestration, Operations & Management) to deal in part with SDN/NFV impact and in part with the broader issue of “modernizing” OSS/BSS.  That duality of mission, as you’ll see, carries over into even some vendor Catalyst presentations made in Dallas early in November.

HP’s Catalyst presentation has what should be the tag line for the whole topic:  “Combining NFV, SDN and OSS is not easy”, which it surely isn’t.  The presentation identifies three specific issues (paraphrasing):

  • Today’s OSSs are process silos that lack procedures to automate responses to service events, particularly fulfillment and assurance.
  • ETSI NFV specifications don’t consider ‘hybrid’ services that extend over both legacy and SDN/NFV infrastructure.
  • The general approach taken by the TMF and by OSS/BSS vendors is based on linear “waterfall” workflows that are more suitable for manual processes than for service automation.

HP’s Catalyst augments “standard” OSS/BSS service processes with NFV processes.  The effect appears to be the creation of a multi-level orchestration model that allows operators to orchestrate aspects of OSS/BSS while NFV MANO remains the standard for NFV elements.  They don’t go into the details of how this is done, which is a pity in my view, because HP has the most mature modeling approach for services and resources.  A key point that I think could have been made is that their service modeling would enable them to model services either as two interdependent classes—legacy and NFV—or as a single integrated class.

Huawei also presented a Catalyst, and there are some common threads between the two presentations.  One is that it’s critical to extend models for services across both legacy and SDN/NFV elements.  Another is that a closed-loop process for automating service lifecycles (both a normal and accelerated one) is critical.

The realization of these goals is described a bit more clearly in Huawei’s material.  They define a Management Control Continuum (MCC), which includes all of the components of OSS/BSS, NMS, and SDN/NFV management elements.  This (I think) is essentially the structure I suggested in my ExperiaSphere project, where all of the processes that support a service lifecycle are orchestrated explicitly, through a model.  Huawei appears to be calling all these little elements “microservices”.

It would appear that you could visualize a service as a model (my term: intent model) that is associated with a series of “function chains” that do specific things, and also (likely) policies that establish self-managed behavior of the stuff underneath the model.

If you link Huawei’s material with other presentations made by the TMF as a body, what you get is the impression that they see services as a series of intent models (which the TMF would say are nested customer- and resource-facing services) that can express SLA-and-lifecycle handling in terms of either policies or function chains.  Here’s the relevant quote from the Huawei presentation:  “Goal based policy is an important way of specifying the desired network states in systems that are largely autonomic, working in conjunction with standard ECA policy.”  Translating, this seems to me to say that service components are modeled as intent models and that policies define the way their SLA is met.  While HP doesn’t say as much about the detail, I think based on my analysis of their modeling that they could do this as well.
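To make the modeling idea a bit more tangible, here’s a minimal sketch of how a nested intent-model service with goal-based and ECA policies might be represented.  The structure and field names are my own illustration, not the TMF’s, HP’s, or Huawei’s actual schema:

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class IntentModel:
    """A customer- or resource-facing service element, described by what it
    must deliver (SLA and policies) rather than how it is realized below."""
    name: str
    sla: Dict[str, str] = field(default_factory=dict)
    goal_policies: List[str] = field(default_factory=list)   # desired end states
    eca_policies: List[str] = field(default_factory=list)    # event-condition-action rules
    function_chains: Dict[str, List[str]] = field(default_factory=dict)  # lifecycle steps
    children: List["IntentModel"] = field(default_factory=list)          # nested models

# A customer-facing service decomposed into resource-facing pieces.
vpn = IntentModel(
    name="BusinessVPN",
    sla={"availability": "99.99%", "latency": "<=30ms"},
    goal_policies=["hold-availability-at-target"],
    function_chains={"deploy": ["allocate", "connect", "activate"],
                     "assure": ["monitor", "remediate", "escalate"]},
    children=[
        IntentModel(name="vCPE", sla={"availability": "99.9%"}),
        IntentModel(name="MetroTransport", sla={"availability": "99.999%"}),
    ],
)
```

The useful property is that every node carries its own SLA and the policies that enforce it, so the orchestration above it can treat a legacy element and an NFV-hosted element identically.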

So what does this mean?  First, the TMF does have a program (ZOOM) that addresses the key factors in making a service/network management and operations driver work for NFV.  Second, there are demonstrations (Catalysts) roughly equivalent to the NFV ISG’s PoCs that address many of the points needed.  Third, ZOOM isn’t fully baked, and so the Catalysts are exploring what it might/should look like rather than what it currently does.  Finally, there’s still the question of implementation/evolution.

To my eye, HP and Huawei are both supporting a model of future services that fits the “intent model” structure I’ve blogged about.  They’re augmenting the basic notion that an SLA is an attribute of an intent model with the notion that policies (that are perhaps logically part of that SLA) are at least a means of communicating intent downward to influence resource behaviors.  In all of this, at least according to Huawei’s presentation, “we simply are following the likely evolution of ZOOM”.

Which means the TMF hasn’t slid across the plate yet, but they may be rounding third (if you care for US baseball analogies).  There are three barriers, in my view, to the TMF reaching its goal.

The first barrier is constituency.  The TMF is primarily an organization of OSS/BSS types.  On the vendor side, it’s arguably even more ossified than the network vendors are.  On the buyer/operator side, there’s as much consensus for the view that OSS/BSS systems need to be tossed out and something new created as there is for the view that they need to be modernized.  That’s not exactly a cheering crowd.

The second barrier is communication.  As a body, the TMF is almost a subculture.  They have their own terms, acronyms, issues, sacred cows, heroes, villains, and documents.  Most of their good stuff is not available to the public.  Because they try to describe the new (like ZOOM) in terms of the old (all their Frameworx stuff) rather than in current industry terms, they can’t communicate easily outside their own subculture, and that means they don’t have good PR.  Even within operator organizations, TMF/CIO types often have problems talking with the rest of the company.

The final barrier is containment.  The desire to preserve the old, the primary framework of the TMF’s work, leads it to try to limit the impact of new stuff.  SDN and NFV can be viewed as an alternative way to implement device functionality.  That could be accommodated simply by adding SDN/NFV processes below current device-level OSS/BSS processes—the “virtual device” model I’ve mentioned before.  The problem with that is that it encourages vendors to separate SDN/NFV virtual-device realization (which is what the NFV ISG MANO function focuses on) from the orchestration of the service overall.

You can perhaps see this in HP’s presentation charts, and it resolves potential conflicts between what the NFV ISG or the ONF might do for “management” or “operations” and what the TMF might do.  It creates two layers of orchestration, and the separation leads to the conclusion that you need to modernize OSS/BSS systems along event-driven or policy lines, and also implement SDN and NFV deployment and management that way.  From many, two.  Or maybe three, because if there are two levels of orchestration how do these levels then combine?

Modernization of OSS/BSS was one of the goals of the NGOSS Contract and GB942 work I’ve cited many times, work that was the foundation for both my CloudNFV and ExperiaSphere projects.  I didn’t see any reference to it in the Catalyst material, and since the NGOSS Contract work is explicitly about using data models to steer events to processes, it would seem it should have been seminal.  It may be that componentized, event-coupled OSS/BSS isn’t in the interest of the OSS/BSS vendors.
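For readers who haven’t dug into GB942, here’s a bare-bones sketch of the NGOSS Contract idea as I read it: the service’s data model steers events to the processes that handle them.  The event names and handler functions are purely illustrative, not GB942 content:

```python
# Sketch of data-model-steered events: the service contract maps lifecycle
# events to the operations processes that handle them, so components are
# coupled through the model rather than hard-wired to one another.
# Event names and handlers are illustrative assumptions, not GB942 content.

def activate_service(contract): print("activating", contract["service"])
def restore_service(contract):  print("restoring", contract["service"])
def bill_usage(contract):       print("rating usage for", contract["service"])

service_contract = {
    "service": "BusinessVPN-1234",
    "event_map": {
        "ORDER_ACCEPTED": [activate_service],
        "FAULT_DETECTED": [restore_service],
        "USAGE_RECORDED": [bill_usage],
    },
}

def handle_event(contract, event):
    """Steer an event to its processes using only the contract data model."""
    for process in contract["event_map"].get(event, []):
        process(contract)

handle_event(service_contract, "FAULT_DETECTED")   # -> restoring BusinessVPN-1234
```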

I think that the TMF has all the pieces of the solution to SDN and NFV’s problems.  I think that the real goal of ZOOM was (based on the goals document) and remains a form of fundamental OSS/BSS modernization.  Will OSS/BSS vendors and the operator CIOs drive that kind of change?  Would it have been easier to orchestrate both OSS/BSS and SDN/NFV with a common element external to both?  These questions probably can’t be answered at this point, and we also don’t know how long this process will take, either in the TMF or outside it in the NFV ISG, the OPNFV group, or whatever.

I’m mostly heartened by the TMF Catalysts, because we’re at least getting some field experience at the layer of the problem where the SDN and NFV business cases have to live.  The next big TMF event is in Europe in the spring, and there we may finally see something broad enough and deep enough to be convincing.