Oracle’s NFV Orchestration: Does it Stack Up?

NFV as a technology has captivated a lot of people.  For it to be a “revolutionary technology” it has to do something revolutionary to the way we create network services and purchase network infrastructure.  That obviously has to start with some set of NFV products, created by credible sources and delivered with a compelling vision that wins over risk-averse decision-makers.  Such a source has to find a benefit for themselves too, and this kind of win-win has been hard for vendors to frame so far.

Especially for software vendors.  There are three companies in the market that we would say are decisively on the software side of the vendor picture—IBM, Microsoft, and Oracle.  None of these guys have been powerhouses in NFV positioning; in fact all have been largely absent from the conversation, at least until now.  Oracle is now stepping up in NFV with an announcement it made yesterday, and the fact that underneath the covers Oracle is a major cloud provider, a major application provider, and even a provider of highly credible carrier-grade servers and operating system software makes it even more interesting.

Oracle laid out its basic NFV approach last fall, and since all NFV approaches map to a degree to the ETSI E2E architecture it wasn’t revolutionary.  What they are now announcing is the details on their Network Service Orchestrator.  To put that term into perspective (meaning ETSI perspective) it’s kind of a super-MANO NFVO, but that’s probably not the way to look at it.  The best part of Oracle’s talk is that they’ve put both NFV overall and their own stuff on a kind of vertical stack from OSS/BSS down to infrastructure, so let me use that reference as a starting point to talk about Oracle’s strategy.

In the Oracle vision, OSS/BSS and the TMF world guide a series of service lifecycle processes that focus on what gets sold and paid for.  They also coordinate craft activity, so things about service ordering that involve real humans doing stuff like installing CPE are up in this top layer.  In the TMF world, this layer builds toward the Customer-Facing Service, a term I’ve used a lot.  The TMF has recently seemed to be working around or away from the CFS concept, but it may be making a comeback with the TMF’s ZOOM modernization process.

Below CFS in Oracle’s stack is the Resource-Facing Service (RFS), which is also a TMF term.  According to Oracle’s model, ETSI is a means of realizing RFSs, so the ETSI processes start below the RFS.  In a diagrammatic sense, Oracle is saying that their OSS/BSS offerings cover the top of the service-to-resource stack, and that their new Network Service Orchestrator will cover the bottom, with a critical overlap point at RFS.  If you’ve followed my blogs and my work in ExperiaSphere you know that I believe in CFS/RFS and believe that service-to-resource boundary is critical, so I’m in favor of this positioning.
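
Since the CFS/RFS boundary is the pivot of this whole stack, here's a minimal sketch of how I'd visualize it as a data structure.  This is my own hypothetical illustration, not Oracle's model or a TMF artifact; all the names are made up.

```python
# Hypothetical illustration of the CFS/RFS boundary, not Oracle's actual model.
# A customer-facing service (CFS) is what gets sold; each resource-facing
# service (RFS) it contains is realized below the boundary by ETSI-style
# deployment artifacts (VNFs, connections), which is where an NSO would operate.

business_vpn_cfs = {
    "type": "CFS",
    "name": "BusinessVPN-Gold",          # what the customer orders and pays for
    "rfs": [
        {
            "type": "RFS",
            "name": "VPN-Core",          # OSS/BSS sees this as a resource commitment
            "realization": {             # everything below here is ETSI territory
                "descriptor": "vpn_core_nsd",     # hypothetical service descriptor
                "vnfs": ["vRouter", "vFirewall"],
                "vim": "openstack-region-east",
            },
        },
        {
            "type": "RFS",
            "name": "Managed-CPE-Access",
            "realization": {
                "descriptor": "vcpe_nsd",
                "vnfs": ["vCPE"],
                "vim": "openstack-edge-pool",
            },
        },
    ],
}
```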

Using this structure and this service-resource stack as their foundation, Oracle then applies service lifecycle management examples to explain what they’re up to, which again is something I do myself so I can hardly criticize it.  The end result is a strong presentation, positioning that is clear at the high level, and one that is anchored firmly in both the ETSI stuff and the TMF stuff.  Oracle is the only NFV player to offer a highly OSS/BSS-centric vision and to build a story from top to bottom that’s fairly clear.

Oracle also passes a couple of my litmus tests for sanity in VNF positioning.  They don’t say that OpenStack is NFV orchestration.  They have a generic VNF Manager (VNFM) and not VNF-specific ones.  They have analytics and policies.  There’s enough here to prove that Oracle isn’t just blowing smoke at the world.  The bad news is that I can’t validate a lot of the detail from their material, and there are questions that in my mind demand validation.  There’s not enough detail to prove it works.

Let’s start with NSO.  When Oracle introduced ETSI MANO as a concept they cited the notion that there were three layers of orchestration within it—MANO, VNFM, and VIM.  I don’t disagree that there are three “orchestrable elements” represented, but it’s not clear to me why we have three independent orchestrations going on there, or how those three orchestrations are coordinated in the Oracle approach.  My own vision is to model everything at all three levels using a common modeling language.  Since I don’t have any details on Oracle’s modeling approach, I can’t offer a firm answer on what they do, though their material suggests to me that there may actually be three different models and orchestrations here.

If you count the OSS/BSS service-level stuff there could even be four.  Oracle cites an example of a deployment that might involve a piece of CPE in some cases or a deployment of a virtual element in others.  Obviously if CPE has to be deployed in a truck roll, you’d have to manage the craft activity via OSS/BSS, but suppose there’s CPE there already, or that the service choice between a physical and virtual function is inside the network, a matter of whether a given user is in a zone where NFV is available or not?  Then operations really doesn’t have to know—it’s a deployment choice to parameterize CPE or to host a VNF.  And all of these decisions have to be modeled and orchestrated if you’re going to choose between rolling a truck and pushing functionality out to virtual CPE.  Is this orchestration option four, with yet another modeling and orchestration toolkit?  It would make more sense to consolidate orchestration here, I think.  If event-driven OSS/BSS is a goal then why not fold it into your functional-MANO stuff?  Maybe they do, but I can’t tell from the material.

The third point is the service/resource boundary, the overlap point for OSS/BSS and NSO.  There are a lot of powerful reasons why the service and resource domains have to be separated and a lot of benefits that could accrue once you’ve done that.  Some of these are truly compelling, significant enough to deserve mention and collateralization.  In fact, these benefits are the real glue that binds the lifecycle process set Oracle uses.  Yet they are not mentioned, much less collateralized.

But there is a lot of promise here.  Oracle’s cloud strategy is based in no small part on SaaS delivery, which means it has to face at the cloud level many of the same deployment and operationalization issues that operators face with NFV.  Solaris is a premier operating system for the cloud, with strong support for containers and big data built right in.  Oracle has database appliances that could be used to manage the collection, analysis, and distribution of operations information.  They even have servers, though their positioning suggests they’re not going to get hidebound on the question of using Oracle iron with Oracle NFV.  The point is that these guys could really do a lot…if all this stuff is as good in detail as it is in general.  So I’m not carping as much about their concepts as I am about their collateral.

Oracle has done the best job so far of positioning its NFV plans for public consumption.  Their slides are clear, they cover the right high-level points, and they demonstrate a grasp of the problem overall.  In that regard they’ve beaten out all the other players, including those whom I’ve given the nod as the top contenders in NFV.  However, NFV has to be more than just a pretty face.  My own standard for rating NFV implementations is that I have to have documentation on a point to fully validate it.  Oracle did offer me a deeper-dive briefing, but as I told them, I can’t accept words alone, and no further details in document form were provided.

I can’t give Oracle an NFV rating equal to HP or Overture, both of whom have given me enough collateral to be confident of their position.  I can’t even give them as much as I’d give IBM, whose SmartCloud Orchestrator is the only example of TOSCA modeling, which I think is the best approach for NFV.  They fit, in my view of the NFV space, where Alcatel-Lucent fits: a company that I believe has the stuff to go the distance but that hasn’t been able to collateralize its offering enough for me to say that for sure.

Of course, they don’t have to sell me, they have to sell the network operators.  I know they have decent engagement in some NFV PoCs, and Oracle is going to make a push for its NFV strategy at Mobile World Congress and again at the TMF meeting in Nice in June.  These events will probably generate more material, and I’ve asked Oracle to provide me with everything they can to explain the details.  If I get more on them from any of these sources, I’ll fill you all in on what I see.  In the meantime, it will be interesting to see if Oracle’s entry causes some of the other NFV players to step up their own game.  Singing a pretty song doesn’t make you a pretty bird in the tree, but absent the song nobody will know you’re there at all.

My Thermostat Doesn’t Want to Talk to You

OK, I have to admit it.  There is nothing on the face of technology today that fills me with the mixture of hope and admiration and disgust and dismay like the Internet of Things.  Even the name often annoys me.  It brings to mind the notion of my thermostat setting itself up a social network account.  The hype on this topic is so extreme, the exaggerated impacts so profound, that I’d despair and call it all a sham were it not for the fact that there is real value here.  We’re just so entrapped in crap that we’re not seeing it, and we’ll never prepare for the real parts if we can’t get past the exaggerations.

Home control is a big part of the IoT, right?  Suppose that we were to make every light, every outlet, every appliance, every motion sensor or infrared beam in every home a sensor.  How much traffic would this add to the Internet?  Zero.  None.  Suppose we were to place similar controls in every business.  How much traffic?  None.  Even if we were to add sensors to every intersection, add traffic cameras, add surveillance video to every storefront, we’d add little or nothing.  We already have a lot of this stuff and it generates no direct Internet impact at all.

Control applications aren’t about broadcasting everything to everybody, they’re about letting you turn your lights on or off without getting up, or perhaps even turning them on automatically.  You need to have sensors to do that, you need control modules too, and controllers, but you don’t need to spread your home’s status out across the Internet for all the world to see.  Sensors on sensor networks don’t do much of anything, and most controllers don’t do anything to traffic either.

How about the fact that “home control” can sometimes be exercised from outside the home?  There are times when you might want to turn your home lights on from your car as you drive onto your street.  There are times when you might want your home to call you (to tell you your basement is wet) or call the police or fire department.  The thing is, we do all of this already, and it’s not like your home is going to call you every minute even if it could.  The fact is that in a traffic sense the IoT is kind of a bust.  Similarly, it’s not going to add to the list of things that need IP addresses; most sensors and controllers work on non-network protocols and those that don’t use private IP addresses that aren’t visible on the Internet anyway.  If you call your home from your cell to turn your lights on, you’re likely only consuming a port on an IP address you already have.

Besides being an obviously great yarn to spin, does the IoT actually offer anything then?  I think it does, and I’ve blogged about a part of what it might offer in the past.  We could expect to see control domains (sensors and their controller) abstracted as big-data applications available through APIs for the qualified to query.  We could expect to see some of the real-time process control stuff, which is what self-drive vehicles are conceptually related to, generate “local” traffic that might even get outside our control network and touch the Internet.  But traffic isn’t going to be where IoT impacts things, nor will addresses or any of the stuff you always hear about.

The biggest impact of IoT on networking is in availability.  If I want to turn on my home lights as I drive up, I don’t need a lot of bandwidth but I darn sure need a connection.  If I were to find that most of the time my lights-on process failed, I’d be looking for another Internet provider.  If I’m expecting my car to turn left when it’s supposed to and it runs forward into the rear of a semi because it lost Internet connectivity, I’m going to be…well…upset.

The Internet is pretty reliable today, but most home alarm systems don’t use it, they call out either wireline or wireless on what’s a standard voice connection because that’s more likely to work.  Unless we want the Internet of Things to be phoning home (or out from home) in our future (how pedestrian is that?) we have to first be sure it can do the kind of stuff that’s already bypassing it for availability reasons.  But would we pay enough for that availability improvement?

A second big impact is latency.  My car is moving down the street and the sensors on the corners and lamp-posts tell the Great Something that another vehicle is approaching on the side street.  If it takes two or three seconds for the message to get to the controller, be digested, get back to my car, and be actioned, and I’m moving along at 60 mph, then I’m a couple hundred feet along before recommended action can be taken.  Even a hundred milliseconds is nine feet at my hypothetical speed.  I can hit somebody in that distance.
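
The arithmetic is worth checking; a quick sketch of the control-loop math, assuming a constant 60 mph:

```python
# Distance traveled during a control-loop delay, at 60 mph.
MPH_TO_FPS = 5280 / 3600          # 1 mph is about 1.4667 feet per second

def feet_traveled(speed_mph: float, delay_seconds: float) -> float:
    return speed_mph * MPH_TO_FPS * delay_seconds

for delay in (0.1, 1.0, 2.0, 3.0):
    print(f"{delay} s of loop delay at 60 mph = {feet_traveled(60, delay):.0f} feet")
# 0.1 s is about 9 feet; 3 s is 264 feet -- a couple hundred feet, as noted above.
```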

Related to this is the third impact, which is jitter.  Whatever the character of a control loop, the worst thing is a lack of stability in its performance.  You can design around something or avoid using it if it stays where it is, but if it jitters all over the place you find yourself stopping at the stop sign one minute and hitting the semi the next.  That sort of uncertainty wears you down and surely reduces the sale of self-drives.

Home controllers offer what I think is the logical offshoot of all these issues.  Why do we have them?  You have to shorten control loops where real-time responses are important.  Rather than try to create a sub-millisecond Internet everywhere, the smart way is to host the intelligence or the controller closer to the sensors and control elements.  So what we need to be thinking about with IoT isn’t traffic on the Internet or addressing or even Internet latency and jitter, it’s process placement.
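
Here's a rough sketch of what a process-placement decision might look like if you frame it around the latency budget of the control loop rather than around traffic; the hosting points and latency numbers are purely hypothetical.

```python
# Purely illustrative: pick the hosting point for a control process based on
# the round-trip latency budget of the loop it closes, not on traffic volume.

HOSTING_POINTS = [                      # hypothetical round-trip latencies
    ("home-controller", 0.002),         # 2 ms, local hub
    ("metro-edge-cloud", 0.020),        # 20 ms
    ("regional-cloud", 0.080),          # 80 ms
]

def place_process(loop_budget_seconds: float) -> str:
    """Return the most centralized site that still meets the loop budget."""
    candidates = [name for name, rtt in HOSTING_POINTS if rtt <= loop_budget_seconds]
    return candidates[-1] if candidates else "no feasible placement"

print(place_process(0.005))   # tight loop (light-switch feel): home-controller
print(place_process(0.100))   # lazy loop (thermostat schedule): regional-cloud
```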

Network standards won’t matter to IoT.  What matters is inter-controller communication, big-data abstraction APIs for raw information access, and the like.  My car controller has to talk to its own sensors, to local street sensors, to route processors, traffic analyzers, weather predictors, and so forth.  None of this is going to create as much traffic as a typical teen watching YouTube but it will create a need to define exchanges, or we’ll have cars running into areas where they can’t talk to the local control/sensor processes and whose drivers are probably watching YouTube too.

Making stuff smarter doesn’t necessarily mean it has to communicate, or communicate on a grand global scale.  What is different about “smart” versus “dumb” in a technology sense isn’t “networked” versus “non-networked”.  Being connected isn’t the same as being on the Internet.  The key to smartness is processes and the key to extending real-time systems over less than local space is process hierarchies.  Processes will be talking to processes, not “things to things” or “machines to machines”.  This is about software, like so much else in networking is.  It’s about the cloud, and it’s process hierarchies and cloud design that we need to be thinking about here.  The rest of the news is just chaff.

Our thermostats don’t need to talk to each other, but if yours has something to say about this feel free to have it call mine, and good luck to the both of you getting through.

From Service Management to Service Logic: SDN/NFV Evolution

Just as kids put teeth under their pillows and hope for quarters, or dig in their yards hoping to find pirate treasure, financial planners at the operators are looking for NFV and SDN to generate revenue.  Cost savings are fine as a means of delaying the marginalization of the network and bridging out of a difficult capex bind, but only new revenue can really save everyone’s bacon.  The question is whether the financial planners’ aspirations are any more realistic than those of our hypothetical quarter-and-treasure-seeking youth.

It’s really hard to say with confidence what services will be purchased in the future, even five years out.  Survey information on this sort of thing is totally unreliable; I know that from over 30 years’ experience.  I’ve made my own high-level guess in prior blogs—we have to presume that business IT spending growth will come from improvements to productivity gained by harnessing mobility.  Rather than tell you that again, I propose we forget for a moment the specifics and look instead at a very important general question.  Not “what” but how?

“Service agility” is the ability to present service offerings that match market requirements.  If we knew what those requirements were we wouldn’t need to be agile; we’d just prepare for the future in a canny way.  The fact is that even if we knew the next big thing, we’d have to worry about the thing after it.  Service consumers are more tactical than traditional service infrastructure.  What we need to know now is what the characteristics of the string of next-big-things-to-come are, so we can build SDN and NFV correctly (and optimally) to support them.

Sometimes it’s really helpful to engage in “exclusionary thinking” for something like this.  So, we’ll start with what they are not.  We are not going to get new revenue from selling the same stuff.  We are not going to get new revenue from selling new stuff whose only value is to allow customers to spend less.  We are not going to get new revenue from sources that could have given us new revenue all along had we but asked for it.  New revenue may not come from my guess on mobile empowerment but it’s going to come from something that is new.

If we follow this thread we can look with somewhat of a critical eye at the current NFV activities, at least insofar as supporting new revenue is concerned.  Virtual CPE is not new revenue.  Hosted IMS is not new revenue.  In the SDN world, using virtual routers instead of real ones or OpenFlow instead of adaptive routing isn’t new revenue either.  All of these things are worthy projects in lowering costs and raising profits, but cost management (and the benefits that depend on it) shrinks toward a vanishing point.  We have to see these as bridges to the future, but we have to be sure we can support the future as it gets closer.

“Old revenue” is based on two things—sale of connectivity and sale of connection-related features.  New revenue delivers stuff, provides information and decisions.  Thus, it’s much more likely to resemble cloud features than service elements.  Services will be integrated with it, and in particular we’ll build ad hoc network relationships among process elements that are cohabiting in a user’s information dreams, but we’re still mostly about the content rather than about the conduit.

If we look at SDN in these terms we realize that SDN isn’t likely to generate much new revenue directly. It’s important as a cloud enabler or as an NFV enabler, depending on what you see the relationship between these two things being.  NFV is definitely a support element too, but it’s supporting the dynamism of the features/components and their relationship with each other and with the user.  We’re using NFV to deploy VNFs, components, VNF-as-a-service, and so forth.  We’re using SDN to connect what we deploy.

What is that?  Not in terms of functionality but in terms of relationships?  There seem to be two broad categories of stuff, one that looks much like enterprise SaaS and the other that looks quite a bit like my hypothetical VNFaaS or what I’ve called “platform services”.  Amazon is building more and more value-added components in AWS to augment basic cloud IaaS, to the point where you could easily visualize a developer writing to these APIs as often as they’d write to common operating system or middleware APIs.  These combine to frame a model of service where composition of services is like application development.

This more dynamic model of service evolves over time, as companies evolve toward a more point-of-activity mission for IT.  Some near-term services will have fairly static relationships among elements, particularly those that support communities of cooperating workers or that provide “horizontal” worker services like UC/UCC.  Others will be almost extemporaneous, context- and event-driven to respond to conditions that change as fast as traffic for a self-drive car.

In general, faster associations will mean pre-provisioned assets, and as activities move from being totally per-worker-context-driven to a more planned/provisioned model we’ll move from assembling “services” from APIs and VNFaaS to assembling them in the more web-like sense by connecting hosted elements.  In between we’ll be deploying service components and instances on demand where needed and connecting them through a high-level connection-layer process that looks a lot like “overlay SDN”.

NFV’s MANO is about deployment instructions; it models the service’s functional relationships only indirectly.  As you move toward extemporaneous cloud-like services you are mapping functions and functional relationships themselves.  In many, likely even most, cases the “functions” don’t have to be deployed because they’re already there.  It’s a classic shift from service management to service logic, a boundary that’s been getting fuzzier for some time, though the shift has been gradual and largely unrecognized.
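
To illustrate the shift in my own terms (nothing here is an ETSI or vendor artifact), compare a deployment-oriented description with a functional one; in the functional view a binding may resolve to something that's already running, so nothing gets deployed at all.

```python
# Hypothetical contrast between a deployment-oriented description (MANO-style)
# and a functional one.  In the functional view, a binding may resolve to a
# component that is already running and multi-tenant -- nothing is deployed.

deployment_view = {
    "vnf": "location-service",
    "image": "locsvc-1.2",
    "flavor": "4vcpu-8gb",
    "vim": "openstack-metro-3",        # deploy-time instructions
}

functional_view = {
    "function": "LocationLookup",
    "consumers": ["navigation-app", "ad-context-engine"],
    "binding_policy": "reuse-if-available",   # use a shared instance if one exists
}

def realize(function_spec, already_running):
    """Toy resolver: only fall back to deployment if no shared instance exists."""
    if function_spec["binding_policy"] == "reuse-if-available" and already_running:
        return "bind to existing multi-tenant instance"
    return "hand off to MANO-style deployment"

print(realize(functional_view, already_running=True))
```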

This is a new kind of orchestration, something that’s more akin to providing a user with a blueprint that defines their relationship with information and processing, and the relationship between those two things and all of the elements of context that define personalization because they define (at least in a logical/virtual sense) people.  Think of it as an application template that’s invoked by the user on demand, based on context and mission, and representing the user through a task or even entertainment or social interaction.  Identity plus context equals humanity?  Perhaps in a service sense that’s true.

Both NFV and SDN need to be expanded, generalized, to support this concept but I’d submit that we were as an industry, perhaps unwittingly, committed to that expansion anyway because of the cloud.  NFV is the ultimate evolution of DevOps.  SDN is the fulfillment of the ultimate generalization of OpenStack’s Neutron.

Yes, Neutron.  OpenStack is probably more important in the model I’m describing here than SDN controllers, because the latter will be abstracted by the former.  TOSCA is more important than YANG because the latter is about network connections and the former is about cloud abstractions.  We’re not heading to the network in the future, but to the cloud.

Don’t let this make you think that I believe NFV is superior to SDN.  As I indicated above, both are subordinate to that same force of personalization.  Whatever services you think are valuable in the future, and however you think they might be created, they’ll be aimed at people and not at companies or sites.  Both SDN and NFV have blinders on in this respect.  Both see the future of networking as being some enhanced model of site connection when mobility has already proved that sites are only old-fashioned places to collect people.  It’s the people, and what they want and what makes them productive that will frame the requirements for NFV and SDN in the future.

Alcatel-Lucent’s Strategy-versus-Tactic Dilemma

Alcatel-Lucent released its quarterly numbers before the bell this morning, and their results illustrate the complexity of the business of networking these days.  Do we look at things tactically, as current financial markets do?  If we say “No!” to that, then can we agree on what a strategic look would offer us?  Let’s see.

Tactically speaking, what Alcatel-Lucent’s numbers showed was a company whose IP business did well while other sectors largely languished.  IP routing was up 15% in constant currency, IP transport (meaning largely optics) was up by 3%.  Given that many of Alcatel-Lucent’s rivals were seeing negative growth in these same areas, they didn’t do badly here.

Elsewhere things weren’t as bright.  Revenues in IP Platforms, the service and operations layer, were off 15%.  Access was off by 11% overall, and that includes wireless.  Revenues were down overall, and while the quarter was judged as good by the Street it’s because cost-cutting helped profit more than lower revenue hurt it.

The overall picture here is that Alcatel-Lucent is in fact delivering on its transformation, and I believe that’s true.  It’s also marching to the future one quarter at a time, and so it’s fair to ask whether the transformation is taking them to a future where numbers can continue to be positive.  That’s a much harder question to answer because it demands we look at the model of networking not today but perhaps out in 2020.

It is very clear that we will need bits in the future, so it’s clear that IP transport is not going away.  In fact, Alcatel-Lucent and the whole sector are delivering more bits every year.  They’re just not delivering a lot more bucks.  Optical transport doesn’t connect directly to service revenues, feature differentiation is difficult to sustain, and competitive price pressure can only grow.  IP routing is a smaller business for Alcatel-Lucent today than IP transport, so in order for the revenue numbers to turn around, IP routing has to grow significantly over that five-year period or Alcatel-Lucent has to find another growth area.

Where have we heard network operators telling us that their plan is to build more network routing?  Every major initiative operators have undertaken in network modernization has been directed at doing more with optics and less with IP.  IP is the layer where content traffic growth, regulatory changes, and complexity increases have created the biggest threats to return on infrastructure.  Can Alcatel-Lucent deliver radical new revenues from that space?  Not without something to push things along a different path.

That path would have to be buried somewhere in what the company classifies as IP Platforms.  The challenge IP has is that the Internet is not highly profitable today, is getting less profitable every year, and probably can’t reverse that trend any time soon.  And the Internet is where most IP equipment goes.  Operators, to invest more in IP rather than continue to try to invest less, would have to earn more on their investment, to turn around the converging revenue/cost-per-bit curves we’ve all seen.

I’ve said in earlier blogs that I believe the future of the network will be an agile opto-electric substrate at the bottom, the cloud at the top, and virtualized L2/L3 in the middle, tied to more agile optics below and hosted in the cloud above.  That model might well end up spending more on L2/L3, but not on routers as boxes.  It would mean spending more on virtual routers, virtual functions, and servers to host them on.  Alcatel-Lucent does not make servers, and that’s the company’s big strategic problem.  They have to face off in the new age of IP beyond 2020 with network competitors like Cisco who have servers and can win something through the transformation to hosted services.  They also have to face off against IT vendors who present hosting options and even virtual routers/functions and don’t even bother with network equipment.  That means that they have to do really, really well in SDN and NFV and the cloud, but do it without the automatic gains that being able to sell the servers would generate.  I commented in my analysis of how various classes of vendors would do in NGN that the network equipment types would have a challenge because of the natural loss they face if money shifts from funding physical network devices to hosting virtual ones.  Alcatel-Lucent faces that risk.

Which is why I find their performance in IP platforms troubling.  This is where the company needed double-digit growth, massive evidence that they were gaining traction in the service-software part of infrastructure where new revenues could be created, and from which symbiotic tendrils could be extended to pull through equipment.  So far they don’t have the financial signals to validate they’re getting traction there.

CloudBand is Alcatel-Lucent’s biggest hope, operations is next.  That’s because first and foremost we’re building revenues from future services by hosting stuff in the cloud, and second because what we’re not hosting there for revenue’s sake we’re operating from there to manage costs.  SDN and NFV are important because they represent technology and standards trends that define these hosting-to-service relationships and also frame the operations challenges that all future services will have to meet.

In the cloud/CloudBand area, Alcatel-Lucent has what I believe to be a strong product set and good capabilities, but their ability to describe what they have is extraordinarily weak.  Of the three vendors who I rate as likely being able to drive NFV to deployment, they are the only one for whom I have to stress the word “likely” because I just can’t get the details needed to offer an unqualified response.  And hey, Alcatel-Lucent, if you don’t sell servers you have to be able to present the cloud in some kind of awe-inspiring software package or you have little chance of being a player.

On the operations side, Alcatel-Lucent doesn’t match rivals Ericsson or Huawei in terms of OSS/BSS tools and capabilities.  They may have plans and capabilities to link the cloud layer to operations in a good or even compelling way, but those plans and capabilities are among the things I don’t have details on.

Could it be that Alcatel-Lucent is pushing on IP Routing as a segment because it’s where they have growth today, and holding back in areas like SDN and virtual routing that could be seen as a threat to their IP routers?  If that’s the case then they are betting that routers will carry the company in the future, and I have no doubt that this cannot happen.

From the time when Alcatel and Lucent became Alcatel-Lucent, I’ve groused over their insipid positioning and weak marketing.  I’m still grousing, and I think it’s past time for the company to deal with that problem.  People facing a major transformation of revenue and technology, as operators everywhere are, want to follow what they perceive as a leader.  For Alcatel-Lucent, the time to qualify themselves for that role is running short.  Routing can’t sustain them forever.

The FCC’s Neutrality Order: It Could be Worse

The FCC is converging on a net neutrality order, bringing “clarity” to the broadband regulation process.  In fact, the process has been murky since the Telecom Act, nothing in the current order (as we know of it from the FCC’s released “fact sheet”) un-murks much, and in any event this is something only the courts can hope to make final and clear.  Everyone will hate the new order, of course, and it doesn’t do what it might have done, but it’s not as bad as it might have been.

The high-level position the FCC is taking shouldn’t surprise anyone who knows what the FCC is and what it can and can’t do.  The question that’s been raised time and time again, most recently and decisively by the DC Court of Appeals on hearing the FCC’s previous neutrality order, is jurisdiction.  The position the Court took was that the FCC, having previously declared that the Internet was not a common-carrier service, cannot now issue regulations on it that are permitted only for services of that type.  And that is that; the FCC has no jurisdiction under its own rule.  Fortunately for the FCC, it is not bound by its own precedents, so it can reverse a prior ruling in a stroke, which is what Wheeler is going to propose.  Broadband, wireline and wireless, would be reclassified as Title II.

What this does not mean (as TV commercials and some comments have suggested it does) is higher fees and taxes imposed on broadband providers and paid by consumers.  The FCC has said all along that if it made the Title II decision it would then apply Section 706 of the Act to “forbear” from applying many of the Title II rules to broadband.  This would include fees and taxes but also requirements for wholesaling.

The combination of these rules generates as harmless a result as anything is in this political age.  Yes, the FCC can change its mind.  Yes, the courts could rule this is invalid too (I doubt they will).  Doomsayers notwithstanding, the legal foundation of this order is as good as we can get without going back to Congress, a fate I’d be reluctant to wish on the industry.  The biggest real issue neutrality had was jurisdiction of the FCC to treat violations and the order fixes that without generating much unexpected disruption with operators.

So what is the FCC going to do with its authority?  In many cases that’s at least semi-harmless.  It boils down to “no fast lanes” and “equal regulation for all broadband”.  The former bars paid prioritization of traffic.  The latter is the biggest change; the FCC is applying all its rules to wireline and wireless alike.  There will be gnashing of teeth on this, of course, but the truth is that everyone knew we were heading in that direction.  Deeper in the fact sheet the FCC released are some interesting points, though.  These could have significant consequences depending on just what the FCC does with the authority the order would give it.

At the top of my list is the fact that the FCC would hear complaints on ISP interconnect/peering.  This is consistent with the fact that the FCC has jurisdiction over common-carrier peering and tariffs, so at one level it’s not a surprise.  The question is how this new authority would be used, given that we’ve just had a flap between ISPs and content providers like Netflix, resulting in the latter paying a carriage charge to some access providers.

Video traffic is disruptive to network performance, because it demands fairly high capacity and at least stable if not low latency.  It swamps everything else and so if you let it run unfettered through the network it can congest things enough for other services to be impacted.  The FCC’s new order permits throttling traffic for management of stability and to prevent undue impact on other services.  If the FCC were to say that Netflix doesn’t have to pay for video carriage, operators could either suck it up and invest in capacity, further lowering their profit on broadband, or let things congest and try in some way to manage the consequences.

The FCC would, under the new order, have the power to impose settlement—rational settlement—on the Internet.  That could be the biggest advance in the Internet’s history but for one thing, that politically inspired and frankly stupid ban on paid prioritization.  With a combination of settlement and paid QoS, the Internet could become a limitless resource.  With paid prioritization off the table completely, we might see settlement but it won’t do much more than preserve the status quo, under which operators are already seeing critical revenue/price convergence.

I’m not sure whether the details of the order will shed light on this point.  In the past, the FCC has looked askance at provider-pays prioritization, but not at plans where the consumer pays.  The fact sheet doesn’t seem to make any distinction but the order might read differently.  We’ll have to see when it’s released.

The other interesting point in the fact sheet is that the FCC intends to ensure that special IP non-Internet services (including VoIP and IPTV) don’t undermine the neutrality rules, presumably by having operators move services into that category to avoid regulation.  This sort of thing, if it went far enough, could create a kind of growing “Undernet” that would absorb applications and services by offering things like paid prioritization.

The devil again will be in the enforcement details.  There’s a fine line between IPTV and streaming services on the Undernet.  The FCC could lean too far toward regulation and make IPTV seem risky, or too far away and encourage bypass of the Internet.  Will the order make its bias clear, or will we have to wait until somebody is ordered to do, or not do, something?

Waiting is the name of the game, regulation-wise, of course.  This order will be appealed for sure.  Some in Congress will propose to preempt it, reverse it, toss it out.  We probably will have years of “uncertainty”, but the good news is that we’ll probably know shortly after the order comes out, whether there is a reasonable risk that any of the reversing/undoing will succeed.

I believe that the order as summarized in the fact sheet is better than we’d likely get if Congress intervened.  The original Republican stance of a very light touch has been amended of late to include support for “no fast lane”, and that creates the classical problem of a part-regulated market.  A light touch is neither clearly hands-off nor a real push, and all the current Republican position really seems to do is give the FCC authority to do its regulating on neutrality without (in fact, barring) Title II regulation applying to ISPs.  I think the details of that would confound Congress, as telecom regulation always has.  The FCC was created to apply expert judgment to the details of regulating a technical industry, and we need to let it do that.

The thing that’s clear is that “no fast lanes” has become a flag-waving slogan for everyone, and fast lanes might have been the best thing ever for the Internet.  No matter what consumers think/want or regulators do, you can’t legislate or regulate an unprofitable operation for very long, and we’ve closed down the clearest avenue to profit for ISPs.  Not only that, we’ve closed down the only avenue that would have made generating bits a better business.  Cisco and others should be wearing black armbands on this one because it decisively shifts the focus of networking out of L2/L3 and either upward into the cloud or downward into optical transport on the cheap.

The administration should have stayed out of this; by making a statement on neutrality the President made it a political issue, and in today’s world that’s the same as saying we’ve foreclosed rational thought.  We can only hope the FCC will enforce its policies with less political bending and weaving than it’s exhibited in setting the policies in the first place.

The Role of “VNFaaS”

The cloud and NFV have a lot in common.  Most NFV is expected to be hosted in the cloud, and many of the elements of NFV seem very “cloud-like”.  These obvious similarities have been explored extensively, so I’m not going to bother with them.  Are there any other cloud/NFV parallels, perhaps some very important ones?  Could be.

NFV is all about services, and the cloud is all about “as-a-service”, but which one?  Cloud computing in IaaS form is hosted virtualization, and so despite the hype it’s hardly revolutionary.  What makes the cloud a revolution in a multi-dimensional way is SaaS, software-as-a-service.  SaaS displaces more costs than IaaS and requires less technical skill on the part of the adopter.  With IaaS alone, it will be hard to get the cloud to 9% of IT spending, while with SaaS and nothing more you could get to 24%.  With “platform services” that create cloud-specific developer frameworks, you could go a lot higher.

NFV is a form of the cloud.  It’s fair to say that current conceptions of function hosting justified by capex reductions are the NFV equivalent of IaaS, perhaps doomed to the same low level of penetration of provider infrastructure spending.  It’s fair to ask whether there’s any role for SaaS-like behavior in NFV, perhaps Virtual-Network-Function-as-a-Service, or VNFaaS.

In traditional NFV terms we create services by a very IaaS-like process.  Certainly for some services that’s a reasonable approach.  Could we create services by assembling “web services” or SaaS APIs?  If a set of VNFs can be composed, why couldn’t we compose a web service that offered the same functionality?  We have content and web and email servers that support a bunch of independent users, so it’s logical to assume that we could create web services to support multiple VNF-like experiences too.

At the high level, it’s clear that VNFaaS elements would probably have to be multi-tenant, which means that the per-tenant traffic load would have to be limited.  A consumer-level firewall might be enough to tax the concept, so what we’d be talking about is representing services of a more transactional nature, the sort of thing we already deliver through RESTful APIs.  We’d have to be able to separate users through means other than virtualization, of course, but that’s true of web and mail servers today and it’s done successfully.  So we can say that for at least a range of functions, VNFaaS would be practical.
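
A toy sketch of what a transactional, multi-tenant VNFaaS element might look like; tenant separation comes from keying every request on a tenant ID rather than from per-tenant virtualization.  The names, ports, and "API" here are all hypothetical.

```python
# Hypothetical multi-tenant "firewall policy" service exposed transactionally.
# One shared process serves many tenants; isolation comes from keying every
# request on a tenant ID, the way web and mail servers already separate users.

TENANT_POLICIES = {
    "tenant-a": {"blocked_ports": {23, 3389}},
    "tenant-b": {"blocked_ports": {23}},
}

def check_flow(tenant_id: str, dest_port: int) -> dict:
    """A REST-like handler: one transaction per query, no per-tenant VM."""
    policy = TENANT_POLICIES.get(tenant_id)
    if policy is None:
        return {"status": 404, "verdict": "unknown tenant"}
    verdict = "deny" if dest_port in policy["blocked_ports"] else "allow"
    return {"status": 200, "tenant": tenant_id, "port": dest_port, "verdict": verdict}

print(check_flow("tenant-a", 3389))   # deny
print(check_flow("tenant-b", 3389))   # allow
```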

From a service model creation perspective, I’d argue that VNFaaS argues strongly for my often-touted notion of functional orchestration.  A VNFaaS firewall is a “Firewall”, and so is one based on a dedicated VNF or on a real box.  We decompose the functional abstraction differently for each of these implementation choices.  So service modeling requirements for VNFaaS aren’t really new or different; the concept just validates function/structure separation as a requirement (one that sadly isn’t often recognized).
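
Here's how I'd picture that function/structure separation in a model (my own hypothetical notation, not a standard one): the abstract "Firewall" stays constant while the decomposition chosen at deployment time differs.

```python
# Hypothetical function/structure separation: one abstract "Firewall" object,
# three interchangeable decompositions.  The service model references only the
# abstraction; which structure is used is a per-site or per-order choice.

FIREWALL_DECOMPOSITIONS = {
    "physical": {"action": "parameterize", "target": "cpe-box-on-premises"},
    "vnf":      {"action": "deploy",       "target": "vfw-image via MANO"},
    "vnfaas":   {"action": "subscribe",    "target": "shared firewall web service"},
}

def decompose(function_name: str, choice: str) -> dict:
    assert function_name == "Firewall", "only one abstraction in this toy example"
    structure = FIREWALL_DECOMPOSITIONS[choice]
    return {"function": function_name, **structure}

for choice in ("physical", "vnf", "vnfaas"):
    print(decompose("Firewall", choice))
```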

Managing a VNFaaS element would be something like managing any web service, meaning that you’d either have to provide an “out-of-band” management interface that lets you ask a system “What’s the status of VNFaaS-Firewall?” or send the web service for the element a management query as a transaction.  This, IMHO, argues in favor of another of my favorite concepts, “derived operations”, where management views are synthesized by running a query against a big-data repository where VNFaaS elements and other stuff have their status stored.  That way the fact that a service component had to be managed in what would be, in hardware-device terms, a peculiar way wouldn’t matter.
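
A bare-bones sketch of what I mean by "derived operations", with made-up records: element status, however it was collected, lands in a common repository, and the management view is a query result rather than a direct poll of the element.

```python
# Toy "derived operations": status records from VNFs, VNFaaS elements, and
# hardware all land in one repository; the management view is a query result,
# not a direct poll of the element.

STATUS_REPOSITORY = [
    {"service": "acme-vpn", "element": "vRouter-3",       "source": "vnf",    "state": "up"},
    {"service": "acme-vpn", "element": "VNFaaS-Firewall", "source": "vnfaas", "state": "up"},
    {"service": "acme-vpn", "element": "edge-switch-9",   "source": "device", "state": "degraded"},
]

def derived_status(service_name: str) -> str:
    """Synthesize a service-level view from whatever records exist."""
    states = [r["state"] for r in STATUS_REPOSITORY if r["service"] == service_name]
    if not states:
        return "unknown"
    return "up" if all(s == "up" for s in states) else "impaired"

print(derived_status("acme-vpn"))   # impaired -- the switch drags the view down
```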

What we can say here is that VNFaaS could work technically.  However, could it add value?  Remember, SaaS is a kind of populist concept; the masses rise up and do their own applications defying the tyranny of internal IT.  Don’t tread on me.  It’s hard to see how NFV composition becomes pap for the masses, even if we define “masses” to mean only enterprises with IT staffs.  The fact is that most network services are going to be made up of elements in the data plane, which means that web-service-and-multi-tenant-apps may not be ideal.  There are other applications, though, where the concept of VNFaaS could make sense.

A lot of things in network service are transactional in nature and not continuous flows.  DNS comes to mind.  IMS offers another example of a transactional service set, and it also demonstrates that it’s probably necessary to be able to model VNFaaS elements if only to allow something like IMS/HSS to be represented as an element in other “services”.  You can’t deploy DNSs or IMS every time somebody sets up a service or makes a call.  Content delivery is a mixture of flows and transactions.  And it’s these examples that just might demonstrate where VNFaaS could be heading.

“Services” today are data-path-centric because they’re persistent relationships between IT sites or users.  If we presumed that mobile users gradually moved us from being facilities-and-plans-centric to being context-and-event-centric, we could presume that a “service” would be less about data and more about answers, decisions.  A zillion exchanges make a data path, but one exchange might be a transaction.  That means that as we move toward mobile/behavioral services, contextual services, we may be moving toward VNFaaS, to multi-tenant elements represented by objects but deployed for long-term use.

Mobile services are less provisioned than event-orchestrated.  The focus of services shifts from the service model to a contextual model representing the user.  We coerce services by channeling events based on context, drawing from an inventory of stuff that looks a lot like VNFaaS.  We build “networks” not to support our exchanges but to support this transfer of context and events.

If this is true, and it’s hard for me to see how it couldn’t be, then we’re heading away from fixed data paths and service relationships and toward extemporaneous decision-support services.  That is a lot more dynamic than anything we have now, which would mean that the notion of service agility and the management of agile, dynamic, multi-tenant processes is going to be more important than the management of data paths.  VNFs deployed in single-tenant service relationships have a lot of connections because there are a lot of them.  VNFaaS elements, as multi-tenant service points, have to talk to other process centers, but only edge/agent processes have to talk to humans, which shrinks the number of connections, in a “service” sense, considerably.  The network of the future is more hosting and less bits, not just because bits are less profitable but because we’re after decisions and contextual event exchanges—transactions.

This starts to look more and more like a convergence of “network services” and “cloud services”.  Could it be that VNFaaS and SaaS have a common role to play because NFV and the cloud are converging and making them two sides of the same coin?  I think that’s the really profound truth of our time, NFV-wise.  NFV is an accommodation of cloud computing to two things—flows of information and increasing levels of dynamism.  In our mobile future we may see both services and applications become transactional and dynamic, and we may see “flows” developing out of aggregated relationships among multi-tenant service/application components.  It may be inevitable that whatever NFV does for services, it does for the cloud as well.

Is NFV’s Virtual Network Function Manager the Wrong Approach?

I’ve noted before that the weak link in NFV is operations, or management if you prefer.  A big part of the problem, IMHO, is the need for the ISG to contain its efforts to meet its schedule for completing its Phase One work.  Another issue is the fact that the body didn’t approach NFV from the top down.  Management is a problem because so much of NFV’s near- and long-term value proposition depends on efficient operations.  Service agility means accelerating the service lifecycle—management.  Capex reductions are useful only if you don’t add on so much additional opex due to increased deployment complexity that you swamp the savings.

I’m not the only one who feels there’s a major issue here.  Last spring operators told me that they didn’t have confidence that they could make the business case for NFV and that management was the issue.  Some of their concerns are percolating into visibility in the industry now, and so I think we should do what the ISG didn’t and look at NFV management top-down.

To me, there are two simple high-level principles in play.  First, NFV infrastructure must, at the minimum, fit into current network operations and management practices.  Otherwise it will not be possible to replace physical network functions with virtual functions without changing operations, and that will stall early attempts to prove out benefits.  Second, to the extent that NFV is expected to deliver either service agility or operations efficiency benefits, it must provide improved operations practices that deliver sufficient efficiency impact overall.

If we step down from the first of these, we can see that the immediate consequence of harmony with existing practices is management equivalence between VNFs and PNFs.  I think this requirement was accepted by operators and vendors alike, and their response was the notion of the VNF Manager.  If you could collect the management data from the VNFs you could present it to a management system in the same form a PNF would have presented it.  Thus, if you bind a VNFM element into a set of VNFs you can fill my first requirement.

Sadly, that’s not the case.  The problem here is that virtualization itself creates a set of challenges, foremost of which is the fact that a PNF is in a box with local, fixed, hardware assets.  The associated management elements of the PNF know their own hardware state because it’s locally available to be tested.  If we translate that functionality to VNF form, we run the functions in a connected set of virtual machines grabbed ad hoc from a resource pool.  How does the VNF learn what we grabbed, how to interpret the status of stuff like VMs and hypervisors and data path accelerators and oVSs that were never part of the native hardware?  The fact is that the biggest management problem for NFV isn’t how to present VNF status to management systems, it’s how to determine the state of the resources.

The problem with resource management linkage has created a response, of course.  When vendors talk about things like “policy management” for NFV what they are often saying is that their architecture decouples resources from services explicitly.  I won’t worry about how a slew of servers and VMs look to a management system that expects to manage a physical CPE gateway because I’ll manage the resources independent of the service and never report a fault.  Great, but think of what happens when a customer calls to report their service connection is down, and your CSRs say “Gee, on the average we have 99.999% uptime on our virtual devices so you must be mistaken.  Suck it up and send your payment, please!”

There are services like consumer broadband Internet that can be managed like this, because that’s how they’re managed already.  It is not how business services are managed, not how CPE is managed, not how elements of mobile infrastructure are managed.  For them, I contend that the current approach fails to meet the first requirement.

And guess what.  The first requirement only gets you in the game, preventing NFV from being more costly and less agile than what we have now.  We are asking for improved operations efficiency, and that raises two new realities.  First, you can’t make meaningful alterations to opex by diddling with one little piece of a service.  Just like you can’t alter the driving time from NYC to LA by changing one traffic light’s timing.  Second, you can’t make meaningful alterations to even a piece of opex if you don’t do anything different there.  We have decoupled operations and network processes today and if we want service automation we have to make operations event-driven.

Event-driven doesn’t mean that you simply componentize stuff so you can run it when an event occurs.  Event-driven processes need events, but they also need state, context.  A service ordered and not yet fulfilled is in (we could say) the “Unactivated” state.  Activate it and it transitions to “Activating” and then becomes “Ready”.  A fault in the “Activating” process has to be remedied but there’s no customer impact yet, so no operations processes like billing are impacted.  In the “Ready” state the same fault has to do something different—fail over, invoke escalation, offer a billing credit…you get the picture.

What is really needed for NFV is data-modeled operations where you define a service as a set of functional or structural objects, assign each object a set of states and define a set of outside events for each.  You then simply identify the processes that are to be run when you encounter a given event in a given state.  Those processes can be internal to NFV, they can be specialized for the VNF, they can be standard operations processes or management processes.
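
In code terms the whole idea reduces to a table lookup.  This is a minimal sketch of that state/event structure, with made-up states, events, and process names:

```python
# Bare-bones state/event handling for a modeled service object.  The table is
# the model: (state, event) -> (process to run, next state).  Processes can be
# NFV-internal, VNF-specific, or ordinary OSS/BSS operations processes.

STATE_EVENT_TABLE = {
    ("Unactivated", "activate"):  ("run_deployment",       "Activating"),
    ("Activating",  "fault"):     ("retry_deployment",     "Activating"),   # no customer impact yet
    ("Activating",  "deployed"):  ("notify_billing_start", "Ready"),
    ("Ready",       "fault"):     ("fail_over_and_credit", "Recovering"),   # customer-visible: escalate, credit
    ("Recovering",  "restored"):  ("close_incident",       "Ready"),
}

def handle(state: str, event: str):
    process, next_state = STATE_EVENT_TABLE.get(
        (state, event), ("log_unexpected_event", state)
    )
    return process, next_state

print(handle("Activating", "fault"))   # ('retry_deployment', 'Activating')
print(handle("Ready", "fault"))        # ('fail_over_and_credit', 'Recovering')
```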

State/event is the only way that NFV management can work, and it makes zero sense to assume that every VNF vendor would invent their own state/event handling.  It makes no sense that every vendor would invent their own way of accessing the status of underlying resources on which their VNFs were hosted, or that operators would let VNF-specific processes control shared-tenancy elements like servers and switches directly.  We can, with a single management strategy, fulfill both the resource-status-and-service coupling needs of NFV (my first requirement) and the operations efficiency gains (my second).  But we can’t do it the way it’s being looked at today.

This shouldn’t be about whether we have a VNF-specific model of management or a general model.  We need a state-event model of management that lets us introduce both “common” management processes and VNF-specific processes as needed.  Without that it’s going to be darn hard to meet NFV’s operations objectives, gain any service agility, or even sustain capex reductions.  All of NFV hinges on management, and that is the simple truth.  It’s also true that we’re not doing management right in NFV yet, at least not in a standards-defined way, and that poses a big risk for NFV deployment and success.

Policies, Services, Zones, and Black Boxes

I blogged earlier about policy management and its role, but since then I’ve had a number of interesting discussions with operators and users that bring more light to the topic.  Some of the most interesting relate to the relationship between how you define a policy and what the specific utility of policy management would be.  To no one’s surprise I’m sure, there’s more than one perspective out there.  To the surprise of some, many of the open questions on policy management are replicated in the worlds of SDN and NFV, even if you consider the latter two in a no-policy-management implementation context.

According to the classic definition, a “policy” is a statement of intent, and so you’d probably be accurate if you thought of policies as statements of goals/objectives rather than of methods.  “I want to drive to LA” might be viewed as a high-level policy, for example, as something that constrains a function we can call “Route”.  It’s not prescriptive about how that goal might be realized.

Shifting gears to networking, we’re recognizing that there are many cases where a collection of technology creates a kind of “zone”, something that offers “service” to outsiders at edge interfaces and imposes some set of cooperative behaviors within to fulfill the needs of its services.  IP networks have zones and gateways, for example.  In SDN in OpenFlow form, or in any distributed-policy model, you could envision this zone process creating something like a classic organization chart, with “atomic” zones of actual control at the bottom and coordination and interconnection zones building up toward master-level control.

Given that where a service transcends multiple zones there would have to be some organized coordination end to end, it’s certainly convenient to visualize this as a policy management exchange.  In fact, that visualization is also useful for NFV.  You could see my route example as a service created across multiple providers or metro areas, where a high-level process picks the general path among the next-level elements, and so forth down the line.

The concept of my route to LA can be viewed as a policy-driven process, but it can also be viewed as what I’ll call a service-driven process.  If each of the metro areas or whatever’s just below my highest “route” awareness offers a “service” of getting me between edge points, you can stitch my LA path by binding those edge points.  The metro processes are responsible for getting me between edges in some mysterious (to the higher level) way.

Is there a difference between policy-driven and service-driven, then?  Well, this is where I think things can become murky.  IMHO, service-driven models require that each element at every layer in the hierarchy advertise a “service” between endpoints.  The highest layer has a conception of “route” and that conception is pushed down by successively decomposing the abstract elements we’re routing through.  Get me out of NJ?  That involves the “NJ” element.  That element might then have “South” or “North” Jersey sub-elements.  The primary constraint, always applied by the higher level to the lower when a service is decomposed, is the notion of endpoint and the notion of service, meaning things like driving speed, road type, etc.

If you look at policy models, they could work in a similar way, but what is missing from most is an explicit notion of “route” or “service”, because most policy models are meant to overlay a forwarding-table routing process.  The policies relate most often to handling rules that would make up an SLA.  This makes sense given the definition of “policy” that I cited; it’s not reflective of specifics but rather of goals.  “Traffic of this type should be afforded priority” is a good policy; “traffic of this type should go via this route to this destination” isn’t what most policies are about.  If you applied policies in this sense to my trip to LA, what you’d get is a rule something like “passage west with a given priority should take this route over that one”.  Policy models are most often applied to systems that have internal routing capability, meaning that the connectivity service is already in place.  You don’t need to tell a router to route.  Service-based models establish the service, which is why SDN is more service-based.
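A rough way to see the distinction in code: a policy constrains traffic the network already knows how to forward, while a service request asks for connectivity to be established in the first place.  The structures below are purely illustrative.

policy  = {"match": {"traffic_type": "voice"}, "action": "priority-queue"}   # overlays existing routing
service = {"endpoints": ("edge-A", "edge-B"), "sla": {"latency_ms": 30}}     # asks for a path to exist

def apply_policy(flow, policy):
    # the forwarding decision is already made; the policy only grades the handling
    matched = all(flow.get(k) == v for k, v in policy["match"].items())
    return policy["action"] if matched else "best-effort"

print(apply_policy({"traffic_type": "voice"}, policy))   # 'priority-queue'
print("service request:", service)                       # a service-based model must realize this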

If you look at the service-driven and policy-driven approaches, you can see that both have a common element, which is the notion of a domain abstraction as a black box.  We also saw something like this in the old days of SNA and ATM routing, where a domain received an entry on the route stack that described an “abstract service”, meaning a destination and service quality.  The domain edge popped that off, calculated its own specific route, and pushed that onto the stack instead.  The higher level in this example, and in our service-driven and policy-driven approaches, doesn’t know the details of how to get through the next-lower-level structure, and it doesn’t even know that structures below that next level exist.  Why?  Because it won’t scale otherwise.
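That pop-and-replace behavior is easy to show.  The sketch below is my own rendering of the route-stack idea, not SNA or ATM syntax; the only thing that crosses the domain boundary is an abstract service, a destination plus a quality.

def enter_domain(route_stack, internal_route_calculator):
    abstract = route_stack.pop()                    # {"dest": ..., "quality": ...} from the higher level
    hops = internal_route_calculator(abstract)      # domain-private path computation
    for hop in reversed(hops):                      # replace the abstraction with concrete hops
        route_stack.append(hop)
    return route_stack

# a toy per-domain calculator; outsiders never see how it works
calc = lambda svc: ["switch-1", "switch-2", "switch-3", svc["dest"]]
stack = ["next-domain-edge", {"dest": "far-edge", "quality": "gold"}]
print(enter_domain(stack, calc))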

SDN control is a good example of the need for hierarchy.  You obviously cannot scale a single central SDN controller to handle all of the Internet at a detail level.  You could subdivide the Internet into smaller zones and then collect those control zones into second-layer superzones, and so forth.  If you decided that you could manage, say, a hundred “route abstractions” per controller (or whatever number you like), you group devices until you get roughly a hundred, then group those centurion zones upward in the same quantity.  Two levels of that hierarchy cover ten thousand devices, three cover a million, four a hundred million, and five put you past ten billion.
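The arithmetic is worth a two-line check, because it’s the whole argument for hierarchy.  With a fan-out of a hundred per level (or plug in your own number), coverage grows as a power of the level count:

# fan-out of 100 is just the example figure above; use whatever one controller can handle
FANOUT = 100
for levels in range(1, 6):
    print(f"{levels} level(s): ~{FANOUT ** levels:,} devices under a single root")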

In NFV you can illustrate another value of hierarchy.  Suppose you have a “service” defined as a collection of four functions.  Each of those functions can be deployed anywhere the service is offered, and the infrastructure would probably not be homogeneous through all those locations.  So imagine instead a hierarchy of functions at the service level, linking down to a hierarchy at the location level.  A service “function” binds to a location-specific implementation of that “function” whose future decomposition depends on the equipment available or the policies in place.  Or both.
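Here’s what that binding might look like in a sketch; the catalog entries and location names are hypothetical, but the principle is that the service-level function never needs to know which implementation a given location will pick.

# a location-specific catalog decides how a service-level "function" is realized
LOCATION_CATALOG = {
    ("firewall", "metro-east"): "vFirewall on OpenStack",       # hosted VNF
    ("firewall", "metro-west"): "existing firewall appliance",  # legacy equipment
}

def bind(function, location):
    impl = LOCATION_CATALOG.get((function, location), f"default {function} VNF")
    print(f"service function '{function}' at {location} -> {impl}")

for loc in ("metro-east", "metro-west", "metro-north"):
    bind("firewall", loc)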

Abstraction is another important attribute of zone-based control.  A black box is opaque from the outside, so a zone should not be advertising its internal elements and asking higher-level functions to control them.  If you allow high-level elements to exercise control over what happens many layers down, you have to communicate topology and status from the bottom to the top, for every possible bottom.  Thus, we can say that policy management, SDN control, and NFV object modeling all have to support the notion of a hierarchy of abstract objects that insulate the higher-level structures from the details of the lower layers.  I think this principle is so important that it’s a litmus test for effective implementation of any of those three technologies.
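One last sketch, again with made-up names, to show what opacity buys you: the zone exports only its edge points and a rolled-up status, and the device inventory never crosses the boundary to the layer above.

class Zone:
    def __init__(self, name, devices):
        self.name = name
        self._devices = devices        # internal topology stays inside the zone

    def advertise(self):
        """The only view the higher layer ever gets."""
        return {
            "zone": self.name,
            "edges": [d for d in self._devices if d.startswith("edge")],
            "status": "degraded" if any("fault" in d for d in self._devices) else "ok",
        }

north_jersey = Zone("North-Jersey", ["edge-newark", "core-7", "core-8-fault", "edge-gwb"])
print(north_jersey.advertise())        # a summary, not a device inventory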

You might wonder how this notion relates to my “structural versus functional” modeling, a prior topic of mine on NFV implementation.  The answer is that it’s orthogonal to that point.  In both structural and functional models I think you need a hierarchy for the same reason that componentization of software doesn’t stop when you create a process object and an I/O object.  Service agility, meaning fast and easy creation of services, depends on being able to assemble opaque objects to do what you want.  You can’t impose a requirement to link low-level details or you’ve created a system integration task for every service you create.  That’s true whether we’re talking about policies, SDN, or NFV.

Policy-based and service-based control zones seem to me to be inevitable, and which of them is best will depend on whether you have intrinsic connectivity support within a zone.  If you do, then you need only constrain its use.  If you don’t, then you’ll have to explicitly establish routes.  But to me this isn’t the important feature; black boxes are opaque, so you don’t know how somebody did their job inside one.  What is important is that you stay with the notion of manageable control hierarchies and ensure that your abstraction, your black-box creation, is effective in limiting how much information has to be promulgated to higher levels.  If you don’t do that, you build a strategy that won’t scale, and it won’t matter what technology you’re using.

Looking Ahead to the New Business Model of the Cloud

Friday tends to be my recapitulation day, in no small part because there’s typically not much news on Fridays.  Today I’d like to touch on Apple’s results, VMware’s pact with Google on cloud features, and EMC’s overall pain and suffering.  They continue to paint a picture of transition, something that’s always fun because it creates both problems and opportunities.

What’s particularly interesting today is the juxtaposition of these items with the quarterly reports from Amazon and Google.  These companies both had light revenues, and that suggests that some of the high-flying sectors of the market are under pressure.  Google is said to be losing market share to Facebook, but what’s really happening is that online advertising is being spread around among more players and it’s a zero-sum game.  Amazon is proving that it will have to get creative in shipping unless it wants to keep discounting more every year, to the point where there are no margins left to it.  Moral: Google and Amazon need to be in a broader market.  Keep that in mind as we develop this blog, please!

Apple is clearly winning the smartphone wars, with once-arch-rival Samsung (Android) sinking further in both market and revenue terms.  One interesting thing I found when talking to international users was that the term “iPhone” is becoming synonymous with “smartphone” in many geographies, and even here in the US I’m seeing a pickup in that usage.  There’s also growing evidence that app developers favor either Apple’s platform or even Microsoft’s Windows 8.x Metro over Android.  Google’s decision (if the rumors are on target) to become an MVNO may well be a reaction to Apple’s dominance.

If that’s true, it could present a challenge for Apple in the cloud area.  I’ve always felt that Apple was lagging in the cloud-exploitation side of things.  A part of this is because cloudification of features or services tends to anonymize them, which is hardly what brand-centric Apple is seeking.  But Google’s MVNO move makes little sense unless you think they’re going to tie hosted features in with their handsets, and even propose to extend those features across into Apple’s world.

Suppose Google were to create a bunch of whiz-bang goodies as cloud-hosted mobile services extensions.  Suppose they then made these available (for money, of course) to Apple developers.  Does Apple then sit back and let Google poach?  Even if Google didn’t make cloud-hosted service features available to iPhone types, an MVNO Android-plus-cloud move could be the only way to threaten Apple.  Particularly if operators wanted to use NFV to deploy agile mobile-targeted services (which they tell me is exactly what they’d like to do).

The Google-of-the-clouds notion is interesting given that Google just did a deal with VMware to add some Google cloud services to VMware’s vCloud Air.  This is being seen by virtually everyone as a counterpunch against Amazon and Microsoft, both of whom have more cloud-hosted services available for their platforms.  I think this is important because it suggests that even in mainstream cloud computing we’re starting to see more emphasis on “platform services” beyond IaaS as differentiators and also as revenue opportunities.  A cloud platform service explosion could create mobile utility too, if it exploded in the right direction.

More important than even Google’s MVNO and VMware’s aspirations is the fact that platform services are the key elements in “cloud maturation”.  We’ve been diddling at the edges of cloud opportunity from day one, and we’ve achieved only about two-and-a-half percent penetration into IT spending as a result.  Worse, my model still says that IaaS and “basic PaaS” will achieve only about 24% maximum share of IT spending, and that well down the line.  But start adding a bunch of platform services that can integrate IoT data, mobile social frameworks, and context for point-of-activity productivity enhancement, and suddenly you can get a LOT more.

How much is “a lot?”  It’s tough to model all the zigs and zags here but it looks like the opportunity on the table for the cloud through optimized platform services could be 1.5 to 2.0 times what basic cloud would get.  Better yet, for providers at least, is the fact that the adoption rate on platform-service-based cloud could be almost twice as fast, getting cloud spending up to as much as 30% of IT spending by 2020.

You have a burst in the arctic fox population any time you have a burst in the lemming population, but when lemming counts drop back to normal so do fox counts.  For decades now, technology has been feeding on overpopulation of opportunity.  IP in business networks was created by the burst of distributed computing initiated by microprocessor chips.  OTT was created because online advertising was less costly than buying TV commercials.  But all these rich growth mediums have been used up by opportunistic bacteria at this point.  Now, to move forward, we’ll have to be more creative.

The telcos and the big IT companies and other “mature” businesses now have to face their own reality, which is that the OTTs and the cloud were never “competing” with them in a strict sense; they were just parallel players in the ecosystem, evolutionary co-conspirators in an attempt to exploit changing conditions.  However, what we’re seeing now is a convergence of business models created by the exhaustion of the market’s low apples.

Google cannot make easy money any more.  Neither can Amazon.  Both companies are now looking to “the cloud” as a solution, but the cloud involves much higher capital investment, better understanding of operations, and systemization in addressing some new benefit models to generate new sales and profits.  In heading for the clouds, Google and Amazon are heading for a place that’s darn close to where carriers have been all along—cash flow machines building services off massive infrastructure investments.  Both Amazon and Google would be trashed by the Street in minutes if they ever suggested they were doing that.

Google’s MVNO aspirations and fiber, and Amazon’s cloud, are all tolerable as long as they don’t generate a boatload of cost that would threaten the Street’s view that these companies are growth companies and not utilities.  Somehow these high flyers have to build services at a new layer, where capex and opex can be lower but where value can be higher.  Does that sound familiar, like what the telcos have to do in order to get beyond the bit business?  And guess what: the telco model of today is closer to the cloud model of tomorrow, at a business level, than either Amazon or Google is.  So the telcos don’t need business model transformation at all; their competitors are going to be rushing to the telco business model because there’s nowhere else to go.

I’m not saying that the telcos are going to win over Google and Amazon, or vice versa.  What I’m saying is that we’ve not been seeing OTT competition up to now, but we’re darn sure going to see it over the rest of this decade, and every signal in every quarterly report bears that out.  And that, friends, is going to produce some very interesting market shifts and opportunities as well as very dramatic changes in the very structure of networks, applications, and information technology.

Where Tactics Meets Strategy–Software

There’s probably no doubt in anyone’s mind that Wall Street and Main Street see things differently, particularly after the 2008 financial crisis.  Every quarter we get a refresher course in why that is, but sometimes the differences themselves are enough to blur the lesson.  To make it clear again, let’s look at some of this quarter’s results in networking.

Ericsson is one of the giants of the industry, which is interesting given that the company seems to make less and less gear every year.  Faced with plummeting margins on hardware, Ericsson elected to stake its future on professional services.  The theory, IMHO, was that equipment vendors were going to take root in current product/technology silos and refuse to embrace anything new for fear it would interfere with quarterly profits.  Given that, a gulf would grow between what vendors produced and operators needed, a gulf Ericsson would be happy to bridge through professional services for a nice billing rate.  With professional services to boost sales, Ericsson has fended off product-oriented competitors, even Huawei.

Huawei is every network vendor’s nightmare.  The Chinese giant has low equipment prices but at the same time has been investing heavily in R&D, as the company’s recent opening of an NFV lab in Xi’an demonstrates.  Huawei has also been improving its own professional services portfolio and reputation while sustaining its role as price leader even there.  I’ve seen Huawei’s credibility rise sharply in emerging markets, and also in Europe.

Ericsson’s tactical problem in this quarter reflects this, I think.  The US is the only area where Huawei is weak, and US operators underperformed, which hurt Ericsson where it should have been strongest.  The question, though, is whether this is some temporary setback or whether the US is leading the rest of the world into capex caution.

The strategic problem Ericsson faces is that professional services are gap-fillers.  You get integration specialists because you have to draw on multiple product sources for optimal deployment.  You get professional services/development projects to fix that disconnect between needs and products.  But product vendors and buyers aren’t stupid; everyone knows that as a new technology becomes mainstream it adapts itself to mainstream demand, which means the mainstream isn’t demanding professional services any more.

The Street’s focus with respect to Ericsson is (no surprise) on the short term.  “Ericsson Q4 Earnings Miss on Dismal North America Business” is a typical response.  Yes, an explosion in the US could have driven Ericsson up, but so could a sudden rush of orders from Mars or Titan, either of which was about as likely.  The big question for Ericsson is whether you can be a network company without anything significant in the way of product breadth.

Then we have Juniper, and their issues seem to be more internal than with competitors.  OK, I get the fact that they’ve had three CEOs in a year.  I get the fact that that old North American capex thing is hitting them too.  But the Street has liked them all along, particularly after this quarter’s report.  It’s not that Juniper did well—they were off in sales by about 11% year over year.  It’s not that they gave great guidance; they were cautious.  It’s almost like the same analysts who said that Ericsson’s problem was North American sales think that somehow those sales will recover for Juniper.

Again, let’s look deeper.  Juniper has focused on cutting costs, and on buying back stock.  You can only cut costs to the point where you have to outsource the CFO role on earnings calls.  You can buy back stock to sustain your share price only to where you have a company with one share of stock (and yes, you could sustain the price of that share at about twenty-four bucks) and a boatload of debt incurred to fund the buybacks.  You can build shareholder value in the near term by shrinking.  You can even see your stock appreciate if you buy back a lot and shrink a lot in costs.  But darn it, you’re getting rewarded for losing gracefully, for offering hedge funds a shot at making a buck from you while your real market opportunity drifts away with your costs.

Networking is in transition because revenue per bit is declining and network equipment is all about bits.  Unless you can do something to bring in new revenue, you are going to shrink.  No new revenue from bits will ever be seen again, so you have to go beyond bits.  But you can’t expect operators to all buy one-off solutions to their problem in the form of professional services.  They will buy solutions, which means they will buy software.  Software, then, should be the heart of both Ericsson’s and Juniper’s transformation, and a step in harmonizing the tactical and the strategic, the network marketplace with Wall Street.

Ericsson is actually more of a software company than most vendors, on paper.  They bought OSS/BSS giant Telcordia years ago, and operations has been a big part of their success.  Their challenge is that Telcordia was never rated as “innovative” in my surveys of operators, and since Ericsson took it over its innovation rating has declined significantly.

Juniper has never been a software company, and in fact a lot of Juniper insiders have complained that it’s never been anything but a big-iron router company.  Yeah, they’ve had this Junos thing as part of their positioning, but that’s about router software.  Juniper’s big opportunity came with its Junos Space product, which was actually (like, sadly, a lot of Juniper’s initiatives) a truly great insight that fell down on execution.  Space could have evolved to become the orchestration, management, and operations framework that sat between infrastructure and OSS/BSS.  They could have turned Ericsson’s OSS incumbency into an albatross and rocked Cisco.

Orchestration, management, and operations unification can create immediate benefits in operations costs.  That could for a time help network vendors to sustain capex growth in their buyer community.  In the long term, this trio is what creates new revenue opportunities, which handles the strategic issues.  Happy buyers, happy shareholders, what more can you ask?

So what now?  Well, darn it, the answer is clear.  Network vendors need to buy into software.  It’s hopeless for them to try to do internal development of a software position.  It’s also hopeless for Ericsson to try to rehabilitate Telcordia or for Juniper to bring back Space.  They need a new target, a new division left to manage itself and reporting directly to the CEOs to bypass as much politics as possible.  And they need to look at that management, orchestration, and operations stuff as the focus of that new area.  Otherwise, a tactical focus adopted to cover the absence of a strategic product plan will lead them to the abyss.

Culture hurts network vendors in attempting to move to software-centricity.  “Quarterly myopia” hurts any vendor who takes a short-term risk for a long-term payoff.  But it’s not just myopia any more, it’s delusion.  What IBM or Juniper spent on share buybacks could have bought them everything they needed to be strong again.  It’s one thing not to see danger on the horizon, but not even myopia can justify missing it when it’s at your feet.

Cisco has announced a new focus on software and the cloud, but hey, we’ve been here before, John.  If ever there’s been a company that epitomizes tactics over strategy, it’s Cisco.  But maybe it’s Cisco we need to watch now, because if Cisco is really signaling that it’s time for it to face software/cloud reality, then it’s darn sure time for everyone to face it.

How, though?  There’s more to software than licensing terms, more to the cloud than hosting.  The next big thing, or in fact the next big things, are staring us in the face but being trivialized by 300-word articles and jabbering about the next quarter.  We don’t need visionaries to lead us to the future, just people who don’t need glasses.