Looking Ahead to the New Business Model of the Cloud

Friday tends to be my recapitulation day, in no small part because there’s typically not much news on Fridays.  Today I’d like to touch on Apple’s results, VMware’s pact with Google on cloud features, and EMC’s overall pain and suffering.  They continue to paint a picture of transition, something that’s always fun because it creates both problems and opportunities.

What’s particularly interesting today is the juxtaposition of these items with the quarterly reports from Google and Amazon.  These companies both had light revenues, and that suggests that some of the high-flying sectors of the market are under pressure.  Google is said to be losing market share to Facebook, but what’s really happening is that online advertising is being spread around among more players in a zero-sum game.  Amazon is proving that it will have to get creative in shipping unless it wants to keep discounting more every year, to the point where there are no margins left for it.  Moral: Google and Amazon need to be in a broader market.  Keep that in mind as we develop this blog, please!

Apple is clearly winning the smartphone wars, with once-arch-rival Samsung (Android) sinking further in both market-share and revenue terms.  One interesting thing I found when talking to international users was that the term “iPhone” is becoming synonymous with “smartphone” in many geographies, and even here in the US I’m seeing a pickup in that usage.  There’s also growing evidence that app developers favor Apple’s platform, and even Microsoft’s Windows 8.x Metro, over Android.  Google’s decision (if the rumors are on target) to become an MVNO may well be a reaction to Apple’s dominance.

If that’s true it could present a challenge for Apple in the cloud area.  I’ve always felt that Apple was lagging in the cloud-exploitation side of things.  Part of this is because cloudification of features or services tends to anonymize them, which is hardly what brand-centric Apple is seeking.  But Google’s MVNO move makes little sense unless you think they’re going to tie hosted features in with their handsets, and even propose to extend those features across into Apple’s world.

Suppose Google were to create a bunch of whiz-bang goodies as cloud-hosted mobile services extensions.  Suppose they then made these available (for money, of course) to Apple developers.  Does Apple then sit back and let Google poach?  Even if Google didn’t make cloud-hosted service features available to iPhone types, an MVNO Android-plus-cloud move could be the only way to threaten Apple.  Particularly if operators wanted to use NFV to deploy agile mobile-targeted services (which they tell me is exactly what they’d like to do).

The Google-of-the-clouds notion is interesting given that Google just did a deal with VMware to add some Google cloud services to VMware’s vCloud Air.  This is being seen by virtually everyone as a counterpunch against Amazon and Microsoft, both of whom have more cloud-hosted services available for their platforms.  I think this is important because it suggests that even in mainstream cloud computing we’re starting to see more emphasis on “platform services” beyond IaaS as differentiators and also as revenue opportunities.  A cloud platform service explosion could create mobile utility too, if it exploded in the right direction.

More important than even Google’s MVNO and VMware’s aspirations is the fact that platform services are the key elements in “cloud maturation”.  We’ve been diddling at the edges of cloud opportunity from day one, and we’ve achieved only about two-and-a-half percent penetration into IT spending as a result.  Worse, my model still says that IaaS and “basic PaaS” will achieve only about 24% maximum share of IT spending, and that well down the line.  But if you start adding a bunch of platform services that can integrate IoT data, mobile social frameworks, and context for point-of-activity productivity enhancement, suddenly you can get a LOT more.

How much is “a lot”?  It’s tough to model all the zigs and zags here, but it looks like the opportunity on the table for the cloud through optimized platform services could be 1.5 to 2.0 times what basic cloud would get.  Better yet, for providers at least, the adoption rate on platform-service-based cloud could be almost twice as fast, getting cloud spending up to as much as 30% of IT spending by 2020.

You have a burst in the arctic fox population any time you have a burst in the lemming population, but when lemming counts drop back to normal so do fox counts.  For decades now, technology has been feeding on overpopulation of opportunity.  IP in business networks was created by the burst of distributed computing initiated by microprocessor chips.  OTT was created because online advertising was less costly than buying TV commercials.  But all these rich growth media have been used up by opportunistic bacteria at this point.  Now, to move forward, we’ll have to be more creative.

The telcos and the big IT companies and other “mature” businesses now have to face their own reality, which is that the OTTs and the cloud were never “competing” with them in a strict sense; they were just parallel players in the ecosystem, evolutionary co-conspirators in an attempt to exploit changing conditions.  However, what we’re seeing now is a convergence of business models created by the elimination of the market’s low apples.

Google cannot make easy money any more.  Neither can Amazon.  Both companies are now looking to “the cloud” as a solution, but the cloud involves much higher capital investment, better understanding of operations, and systemization in addressing some new benefit models to generate new sales and profits.  In heading for the clouds, Google and Amazon are heading for a place that’s darn close to where carriers have been all along—cash flow machines building services off massive infrastructure investments.  Both Amazon and Google would be trashed by the Street in minutes if they ever suggested they were doing that.

Google’s MVNO aspirations and fiber, and Amazon’s cloud, are all tolerable as long as they don’t generate a boatload of cost that would threaten the Street’s view that these companies are growth companies and not utilities.  Somehow these high flyers have to build services at a new layer, where capex and opex can be lower but where value can be higher.  Does that sound familiar, like what the telcos have to do in order to get beyond the bit business?  And guess what: the telco model of today is closer to the cloud model of tomorrow, at a business level, than either Amazon or Google is.  So the telcos don’t need business model transformation at all—their competitors are going to be rushing to the telco business model because there’s nowhere else to go.

I’m not saying that the telcos are going to win over Google and Amazon, or vice versa.  What I’m saying is that we’ve not been seeing OTT competition up to now, but we’re darn sure going to see it over the rest of this decade, and every signal in every quarterly report bears that out.  And that, friends, is going to produce some very interesting market shifts and opportunities as well as very dramatic changes in the very structure of networks, applications, and information technology.

Where Tactics Meet Strategy: Software

There’s probably no doubt in anyone’s mind that Wall Street and Main Street see things differently, particularly after the 2008 financial crisis.  Every quarter we get a refresher course in why that is, but sometimes the differences themselves are enough to blur the lesson.  To make it clear again, let’s look at some of this quarter’s results in networking.

Ericsson is one of the giants of the industry, which is interesting given that the company seems to make less and less gear every year.  Faced with plummeting margins on hardware, Ericsson elected to stake its future on professional services.  The theory, IMHO, was that equipment vendors were going to take root in current product/technology silos and refuse to embrace anything new for fear it would interfere with quarterly profits.  Given that, a gulf would grow between what vendors produced and operators needed, a gulf Ericsson would be happy to bridge through professional services for a nice billing rate.  With professional services to boost sales, Ericsson has fended off product-oriented competitors, even Huawei.

Huawei is every network vendor’s nightmare.  The Chinese giant has low equipment prices but at the same time has been investing heavily in R&D, as the company’s recent opening of an NFV lab in Xi’an demonstrates.  Huawei has also been improving its own professional services portfolio and reputation while sustaining its role as price leader even there.  I’ve seen Huawei’s credibility rise sharply in emerging markets, and also in Europe.

Ericsson’s tactical problem in this quarter reflects this, I think.  The US is the only area where Huawei is weak and US operators underperformed, which hurt Ericsson where they should have been strongest.  The question, though, is whether this is some temporary setback or whether the US is leading the rest of the world into capex caution.

The strategic problem Ericsson faces is that professional services are gap-fillers.  You get integration specialists because you have to draw on multiple product sources for optimal deployment.  You get professional services/development projects to fix that disconnect between needs and products.  But product vendors and buyers aren’t stupid; everyone knows that as a new technology becomes mainstream it adapts itself to mainstream demand, which means the mainstream isn’t demanding professional services any more.

The Street’s focus with respect to Ericsson is (no surprise) on the short-term.  “Ericsson Q4 Earnings Miss on Dismal North America Business” is a typical response.  Yes, an explosion in the US could have driven Ericsson up, but so could a sudden rush of orders from Mars or Titan, either of which was about as likely.  The big question for Ericsson is whether you can be a network company without anything significant in the way of product breadth.

Then we have Juniper, and their issues seem to be more internal than with competitors.  OK, I get the fact that they’ve had three CEOs in a year.  I get the fact that that old North American capex thing is hitting them too.  But the Street has liked them all along, particularly after this quarter’s report.  It’s not that Juniper did well—they were off in sales by about 11% year over year.  It’s not that they gave great guidance; they were cautious.  It’s almost like the same analysts who said that Ericsson’s problem was North American sales think that somehow those sales will recover for Juniper.

Again, let’s look deeper.  Juniper has focused on cutting costs, and on buying back stock.  You can only cut costs to the point where you have to outsource the CFO role on earnings calls.  You can buy back stock to sustain your share price only to where you have a company with one share of stock (and yes, you could sustain the price of that share at about twenty-four bucks) and a boatload of debt incurred to fund the buybacks.  You can build shareholder value in the near term by shrinking.  You can even see your stock appreciate if you buy back a lot and shrink a lot in costs.  But darn it, you’re getting rewarded for losing gracefully, for offering hedge funds a shot at making a buck from you while your real market opportunity drifts away with your costs.

Networking is in transition because revenue per bit is declining and network equipment is all about bits.  Unless you can do something to bring in new revenue, you are going to shrink.  No new revenue from bits will ever be seen again, so you have to go beyond bits.  But you can’t expect operators to all buy one-off solutions to their problem in the form of professional services.  They will buy solutions, which means they will buy software.  Software, then, should be the heart of both Ericsson’s and Juniper’s transformation, and a step in harmonizing the tactical and the strategic, the network marketplace with Wall Street.

Ericsson is actually more of a software company than most vendors, on paper.  They bought OSS/BSS giant Telcordia years ago, and operations has been a big part of their success.  Their challenge is that Telcordia was never rated as “innovative” in my surveys of operators, and since Ericsson took it over its innovation rating has declined significantly.

Juniper has never been a software company, and in fact a lot of Juniper insiders have complained that it’s never been anything but a big-iron router company.  Yeah, they’ve had this Junos thing as part of their positioning, but that’s about router software.  Juniper’s big opportunity came with its Junos Space product, which was actually (like, sadly, a lot of Juniper’s initiatives) a truly great insight that fell down on execution.  Space could have evolved to become the orchestration, management, and operations framework that sat between infrastructure and OSS/BSS.  They could have turned Ericsson’s OSS incumbency into an albatross and rocked Cisco.

Orchestration, management, and operations unification can create immediate benefits in operations costs.  That could for a time help network vendors to sustain capex growth in their buyer community.  In the long term, this trio is what creates new revenue opportunities, which handles the strategic issues.  Happy buyers, happy shareholders, what more can you ask?

Well, though, what now?  Well, darn it, the answer is clear.  Network vendors need to buy into software.  It’s hopeless for them to try to do internal development of a software position.  It’s also hopeless for Ericsson to try to rehabilitate Telcordia or for Juniper to bring back Space.  They need a new target, a new division left to manage itself and reporting directly to the CEO, to bypass as much of the politics as possible.  And they need to look at that management, orchestration, and operations stuff as the focus of that new area.  Otherwise, tactical focus to accommodate the absence of a strategic product plan will lead them to the abyss.

Culture hurts network vendors in attempting to move to software-centricity.  “Quarterly myopia” hurts any vendor who takes a short-term risk for a long-term payoff.  But it’s not just myopia any more, it’s delusion.  What IBM or Juniper spent on share buybacks could have bought them everything they needed to be strong again.  It’s one thing not to see danger on the horizon, but not even myopia can justify missing it when it’s at your feet.

Cisco has announced a new focus on software and the cloud, but hey, we’ve been here before, John.  If ever there’s been a company that epitomizes the tactic over the strategy, it’s Cisco.  But maybe it’s Cisco we need to watch now, because if Cisco is really signaling that it’s time for them to face software/cloud reality, then it’s darn sure time for everyone to face it.

How, though?  There’s more to software than licensing terms, more to the cloud than hosting.  The next big thing, or in fact the next big things, are staring us in the face but being trivialized by 300-word articles and jabbering about the next quarter.  We don’t need visionaries to lead us to the future, just people who don’t need glasses.

Overture Tells a Complete NFV Story

I’ve been critical of the state of NFV in most of my posts, not because I’m opposed to NFV (I’m a big supporter) but because the criticism has been deserved.  There is an avalanche of NFV hype and nonsense out there, so much so that it’s rare to be able to say anything that’s not a criticism.  Which is why I’m happy today; I finally have something I can write favorable things about.

Overture Networks isn’t the furthest thing from an expected source of NFV insight that you could find, but it’s certainly not central in the hearts and minds of the NFV aficionados.  They’re a small-ish Carrier Ethernet vendor who recently announced a CPE element that can host virtual functions.  There are probably a half-dozen of these, all of whom assert NFV credentials, but unlike the rest Overture has real substance.  In fact, they may have more substance than NFV giants.

NFV has a lot of moving parts, but the functional heart of NFV is what the ETSI process calls “MANO”, or Management/Orchestration.  It’s MANO’s responsibility to deploy a service based on some form of instructions—call it a “model” or a “descriptor” or whatever.  When a service is ordered or when events dictate runtime changes to the configuration of service components, it’s up to MANO to step in and orchestrate resources.  MANO is the most insightful contribution the NFV ISG has made, and MANO concepts are central to every one of NFV’s possible benefits.  Without MANO, NFV-wise, you have nothing.

The great majority of NFV stories around MANO boil down to having OpenStack support.  OpenStack isn’t even part of MANO in my view; it’s part of the Virtual Infrastructure Manager that rightfully belongs in NFV Infrastructure (NFVI).  You need something above OpenStack to organize the end-to-end service setup, not just something to stick VNFs somewhere.  It will be a very long time before we have services with no legacy elements in them (if ever), so you need some flexibility here.  Overture announced that last year with its Ensemble Service Orchestrator.  ESO is based on policies expressed in Drools and workflows in BPMN 2.0 (Business Process Model and Notation, an OMG specification).  These workflows and policies could be used to define services and service processes in detail, and at a higher level than OpenStack.  Overture, in fact, places OpenStack correctly as a VIM element in their presentation.
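Overture hasn’t published its internals beyond a slide deck, so the layering described above is easiest to see in a hypothetical sketch: a service-level orchestrator walks an end-to-end model and hands only the VNF-hosting pieces to a VIM (standing in for OpenStack), while legacy elements go to a network controller.  Every class and element name here is illustrative, not Overture’s.

```python
class Vim:
    """Stands in for OpenStack: knows only how to host virtual functions."""
    def deploy(self, vnf):
        return f"hosted:{vnf}"

class NetworkController:
    """Stands in for a legacy-device controller: provisions real boxes."""
    def provision(self, element):
        return f"provisioned:{element}"

class ServiceOrchestrator:
    """End-to-end layer above the VIM: owns the whole service model."""
    def __init__(self, vim, controller):
        self.vim, self.controller = vim, controller

    def deploy_service(self, model):
        results = []
        for element in model:
            if element["type"] == "vnf":
                # Only the hosted pieces are a VIM's business.
                results.append(self.vim.deploy(element["name"]))
            else:
                # Legacy elements still need provisioning; OpenStack can't do it.
                results.append(self.controller.provision(element["name"]))
        return results

service = [
    {"type": "vnf", "name": "firewall"},
    {"type": "legacy", "name": "ethernet-access"},
]
print(ServiceOrchestrator(Vim(), NetworkController()).deploy_service(service))
# → ['hosted:firewall', 'provisioned:ethernet-access']
```

The point of the split is that the orchestrator, not the VIM, owns the end-to-end view; the VIM sees only the fragments it can host.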

ESO gave Overture the distinction of being one of only three industry players that have met my tests for an NFV architecture that can effectively model services at both the functional and structural level.  They also announced a generalized network controller with ESO, so they could deploy on at least some legacy infrastructure.  However, they didn’t have a management story and so they’ve been a sort-of “Ohhhhh” provider rather than a “MAN…Oh” provider up to now.

“Up to now”, because they’ve now released a new capability that blends service definition and deployment with management.  It works in two parts: one is a set of advanced analytics applications, and the other (the Ensemble Service Intelligence piece) provides the framework that relates analytics results to resources and services and presents interfaces to management tools and other applications.

From Day One of NFV I’ve been an advocate of creating a repository intermediary between resources and “native” sources of management data on one side, and the management and operations tools and applications on the other.  That’s at the heart of Overture’s approach.  ESI is a big-data repository populated by the totality of the resource MIBs (for devices, servers, platforms, and even VNFs) and also by service-to-resource relationships created during deployment.  They extend the basic repository notion with a series of management applications that derive additional intelligence through analytics.  Analytics also provides the service-to-resource correlation necessary to make service management explicit and not just implicit.
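The repository idea is easy to sketch (my own illustration, not Overture’s API): telemetry and deployment-time service-to-resource bindings land in one store, so a management application can turn a resource fault into an explicit list of impacted services instead of guessing.

```python
class ManagementRepository:
    """Intermediary between resource telemetry and management applications."""
    def __init__(self):
        self.telemetry = {}   # resource -> latest MIB-style readings
        self.bindings = {}    # resource -> set of services using it

    def record_binding(self, service, resource):
        # Written at deployment time, when orchestration knows the mapping.
        self.bindings.setdefault(resource, set()).add(service)

    def ingest(self, resource, readings):
        # Native management data flows in; applications never poll devices.
        self.telemetry[resource] = readings

    def impacted_services(self, resource):
        # Analytics-style query: which services does this resource fault hit?
        return sorted(self.bindings.get(resource, set()))

repo = ManagementRepository()
repo.record_binding("vpn-acme", "server-7")
repo.record_binding("firewall-acme", "server-7")
repo.ingest("server-7", {"cpu": 0.97, "status": "degraded"})
print(repo.impacted_services("server-7"))
# → ['firewall-acme', 'vpn-acme']
```

Because the bindings are captured at deployment rather than reconstructed afterward, service management becomes explicit: the fault-to-service mapping is a lookup, not an inference.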

In their presentation, Overture includes a service lifecycle process description that builds a service (through a nice drag-and-drop GUI) and then takes it through deployment and management, including autoscaling under load.  This is managed by one of those ESI applications, and the approach demonstrates the value of the repository/analytics approach to management integration.  It appears to me that ESI applications and management data could be used in conjunction with BPMN-described state-event workflows implementing state/event tables in Titan models.  That could allow Overture to integrate management and operations processes into the lifecycle, which would create event-driven management and operations, pretty much the holy grail of OSS/BSS/NMS.

Overture also has a VNF ecosystem, and a trio of service-edge solutions ranging from an Overture-augmented server and a kind of “Dmarc-plus” Overture device to pure software.  Not surprisingly given Overture’s Carrier Ethernet positioning, they integrate these elements into NFV quite well, making the edge and the cloud both elements of NFVI and allowing VNFs to migrate from one place to the other as needed.  They have a decent number of VNFs available, more on the way.

There have been only three vendors who have shown me critical mass in an NFV platform—HP, IBM, and Overture (I think Alcatel-Lucent likely has a good solution but they’ve not provided collateral on the details so I can’t assess it fully).  Overture’s approach doesn’t have the legacy-network integration and OSS/BSS connection maturity offered by HP or the cloud affinity of IBM’s TOSCA-based approach.  But HP and IBM don’t have the same level of resource-to-service management coupling detail as Overture can provide.  What HP and IBM do have, though, is mass and buyer credibility.

Overture’s ESO/ESI adds up to an utterly fascinating NFV implementation, one so complete that you’d think it came from a network giant.  The fact that it doesn’t may be its only real limitation.  Overture has never seemed to push its “NFV strategy” as a general NFV strategy, preferring to see it as an extension of Carrier Ethernet.  They tell a great story to somebody who wants the whole story (their slide deck is fifty slides, the most information anyone has shared with me on NFV), but their positioning still seems to stop short of shouting MANO supremacy from the rooftops.  That I think would be out of their comfort zone.

That raises an interesting point because ordinarily a Carrier Ethernet packaging for VNFs and NFV tools would risk creating a silo, which everyone (including me) opposes.  In this case, you’d have to wonder whether instead of creating a silo, Overture is creating an on-ramp.  An operator with a strong Carrier Ethernet position and early opportunities to augment basic services with VNF-based security and other functional add-ons might conceivably start out with Overture’s ESO/ESI combination and their virtual endpoints and find out they could grow out of that position to broader services in which they’d never have seen Overture as a player.

Of course, an operator who doesn’t want to focus early field trials and deployment on Carrier Ethernet customers might find Overture a lot less appealing, and Overture might not be enthralled by the opportunities these non-Ethernet plays present either.  NFV is a consultative sell at best, and something has to pay for all that effort.  If at the end of the day the operator involved has little use for Overture hardware, will the pot be sweet enough for Overture to hang in?  So if ESO/ESI is an on-ramp, it’s not very well marked.

Somebody could always buy them, of course, and it’s also possible that somebody would step up to establish a partnership with Overture that flows value from small (Overture) to large rather than in the large-to-small direction represented by most of the VNF partnerships today.  Should we think of this as a MANO partnership?  The point is that there is really nothing out there quite like this.  I hate to paraphrase a line from a movie I didn’t even see, but these guys could be contenders.  With some work and positioning it could be a really great story in NFV overall, and in 2015 as I’ve said there are a lot of operators who need some of the stuff that ESO/ESI can provide.  At the least, this may inspire others to step up and tell an NFV story instead of erecting an NFV billboard pointing toward the nearest reporter.  Substance, by the second half of this year, is going to matter a lot, and substance we have here.

The “New TMF”, the “Old Ericsson”, and Kainotophobia

Out with the old, in with the new, as they say.  The TMF has a new head, Peter Sany, and he’s making statements that sound on target (see this interview in the New IP).  Ericsson struggled in revenues for the quarter as it tried to contend with the capex trend in the carrier industry.  Transformations abound, which of course gives us lots of things to think about.  And apparently a lot of things to fear.  “Kainotophobia” is “fear of change”, and it’s putting the kibosh on everyone’s transformation plans.

I’ve worked with the TMF in a lighthearted way for quite a while, and the body is certainly interesting.  The essential concept that John Reilly presented half-a-dozen years ago with “NGOSS Contract” was a factor in my own design for CloudNFV and ExperiaSphere, and the notions of “Customer-Facing” and “Resource-Facing” services were also ingredients in my model.  I’ve cited these TMF inspirations regularly and I put a lot of value on the thinking.  It proves to me that the TMF can do good stuff.

What it’s not been good at is what I think the market itself has proven unable to cope with, and that’s “revolution”.  Through the last ten years we’ve been seeing a fundamental deterioration of the business model of technology in general and of networking in particular.  I’ve cited some statistics on this—operators have seen opex climb from a quarter of TCO toward being two-thirds and heading north.  They’ve seen revenue per bit plummet at rates of 50% per year.  ARPU in nearly all market sectors is plateauing and in many markets the customer base is saturating.  We had all the evidence in the world for these shifts even ten years ago, but everyone ignored them.

It was seven or eight years ago that the issues really started to come to the fore.  Almost immediately operators started grousing that vendors were not supporting their transformation goals.  By 2008 every single operator in my survey was rating vendor support of transformation as “unsatisfactory”.  At about that same time, the TMF was absorbing another body that had been effectively focusing vendor and operator efforts to develop a different model of service operations.  The initiative (the IPsphere Forum) died in the TMF.

What was behind both vendor intransigence and TMF glaciation was trying to take an evolutionary view of revolution.  If you want to transform a technology or an industry you have to start with what you’re trying to get to, not where you are already.  If we want to revolutionize network infrastructure, paint a picture of what the ideal would look like.  Revolutions in opex have to start by defining the perfect system for automated operations.  From the goal, we can then follow tendrils of possibilities back toward the present and order approaches by cost and risk to pick one.

The TMF is a political body and an organism focused largely on its own survival, which is what vendors are as well.  They could have answered every single point that their new leader raised in that interview I cited as far back as 2008 and could have decisively addressed the points in 2013.  They didn’t do that not because they weren’t exposed to the right approach—they already had fielded the cornerstone strategies needed—but because they got tangled up in politics and weak leadership and shallow thinking.

Vendors are similarly tangled.  Do the big-name network vendors seriously think that operators would invest growing sums in infrastructure that was yielding diminishing ROI?  These are the guys who were cutting their own costs to make the Street happy.  If you want somebody to increase the “I” you have to increase the “R”.  As important as cost management is, it’s nothing more than a means of refining a positive revenue model.  Only benefit gains can ultimately fund profit growth and infrastructure investment.  Yet vendors have focused not on fixing their market problem but on capitalizing on the symptoms.  No products to support transformation?  Ericsson’s answer was to focus on integration and professional services.  Well, there is no way that one-off solutions to ROI problems are going to cut it as far as the industry is concerned, Mr. Ericsson.  Products systematize solutions, lower investment overall, and improve profits.  Band-Aids simply blot up the blood.

We know today that services in the future will be highly personalized.  That has to be true because the consumer market is fad-driven and because mobile broadband has linked technology to moment-by-moment living and not to life planning.  We know that this means “agility”, but what the heck does “agility” mean in a tangible way?  It means compositional architectures, so it should start with very effective modeling.  Who talks about their service models in SDN or NFV?  It means event-driven processes, but nobody talks about service states and state/event tables.  It means resource fabrics and process fabrics that combine to form the cloud, “information fields” created by things like IoT.  Where are the products for this?
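To make the state/event point concrete, here’s a minimal sketch of a service lifecycle driven by a state/event table; the states, events, and process names are all invented for illustration.  The whole point is that lifecycle handling becomes data, not hard-wired code.

```python
# Each (state, event) pair names the operations process to run and the
# next state; unlisted pairs would be errors or no-ops in a real system.
TABLE = {
    ("ordering", "activate"): ("deploy_resources", "deploying"),
    ("deploying", "ready"):   ("start_billing",    "active"),
    ("active", "fault"):      ("redeploy",         "deploying"),
    ("active", "cancel"):     ("tear_down",        "terminated"),
}

def handle(state, event):
    """Look up the process to invoke and the state to move to."""
    process, next_state = TABLE[(state, event)]
    return process, next_state

# Walk one service through order, deployment, a fault, recovery, and teardown.
state = "ordering"
log = []
for event in ["activate", "ready", "fault", "ready", "cancel"]:
    process, state = handle(state, event)
    log.append(process)
print(log, state)
# → ['deploy_resources', 'start_billing', 'redeploy', 'start_billing', 'tear_down'] terminated
```

Notice that the fault path reuses the same deployment and billing processes as the original order; that reuse is what makes event-driven operations automatable.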

If you read the stories of SDN and NFV and the cloud, you’d think we have already met and defeated the future.  The cloud is so entrenched in the media culture that people who are talking about private IT think that has to mean “private cloud” because after all the cloud has won.  Won, if winning is securing 2.4% of global IT spending.  SDN is transforming every company and every network, and yet the actual processing power expended on SDN Controllers today is less than that of a single graphics arts firm.  NFV is on every operator’s lips, but canny CFOs say that they can’t prove the business case with current activities.

The TMF can solve its problems in six months by simply starting at the top and defining what operations for the new age should look like, forgetting all the old-timers that this would offend and all the conservative thinkers whose coffee breaks would be ruined.  Ericsson could buy some critical startups and assemble a complete SDN and NFV story that, when supplemented by professional services, would make them winners even over Huawei.  Every piece of the SDN and NFV pie that’s needed to meet current and future needs is out there already, waiting to be exploited.  Every design and plan needed for transformed operations and “service agility” has already been proposed by somebody, often by multiple people.

Does Ericsson want to fall forever behind?  Huawei is increasing revenues by 20% annually while competitors struggle just to stay even.  Does this sound like you’re winning the game you insist on playing, Dear Vendors?  The TMF has spent more time talking about its ZOOM transformation than would have been required to do a truly effective demonstration using running software.  Does this sound like “focusing on demonstrating progress and then driving consensus, not the other way around” as new TMF president Peter Sany said in the interview?

We are in the industry that has been responsible for more change than any other in modern times.  Why have we allowed ourselves to become so fearful of change now?  We should fear the status quo instead, because it’s not doing well for most of us.

There’s a Revolution In SDN (If We Can Dig It Out!)

One of the biggest issues I have with companies’ positioning is that they are postulating a totally revolutionary impact for something that differs in no significant sense from the status quo.  If a new technology is going to change the world of networking, don’t you think it should do something significantly different?  Perhaps the “revolutionary technology” that’s most impacted by this problem is NFV (and I’ve ranted on it there), but SDN has the same problem and perhaps with less reason.  It’s easy to make claims of an SDN revolution credible, even though most of them aren’t now.

Packet networking is all about packet forwarding.  You can’t connect users if you can’t get them the traffic that’s addressed to them.  In a very simple sense, Layer 2 and Layer 3 network technologies (Ethernet and IP) manage packet forwarding via three processes.  One is addressing, which appends a header to a packet to identify where it’s supposed to go.  The second is route determination, which uses a series of “adaptive discovery” exchanges among devices to determine both the connection topology of the network’s devices and the location of addressed users.  The third is the forwarding process itself—how does a route get enforced by the collective forwarding behavior of the devices?

My opening principle says that SDN has to do something different in this process to make a difference in the market.  The difference can’t be in addressing, or nothing designed for current network services would work with the new ones, which means the difference has to lie in the forwarding process, route determination, or both.

OpenFlow proposes to eliminate adaptive routing behavior by replacing it with centralized control of forwarding on a per-device basis.  The devices’ forwarding tables are updated not as a result of adaptive discovery but by explicit commands from the SDN Controller.  Two models of device-to-controller relationship are possible.  In one, the controller has a master plan for routes and simply installs the correct forwarding entries according to that plan.  The devices get all they need from the Controller when the network (or a device) is commissioned.  The second model is a “stimulus” model where a device that receives a packet for which it has no forwarding instructions queries the SDN Controller for a “mother-may-I”.
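A toy sketch of the two device-to-controller models (class and method names invented here for illustration) might look like this.  The “stimulus” switch queries the controller on a table miss and caches the answer; the preconfigured model installs the plan up front:

```python
class Controller:
    def __init__(self, master_plan):
        self.master_plan = master_plan  # per-switch: dest -> forwarding rule
        self.queries = 0

    def preconfigure(self, switch):
        # Model 1: install the whole plan when the switch is commissioned.
        switch.table.update(self.master_plan[switch.name])

    def mother_may_i(self, switch, dest):
        # Model 2: answer a table-miss query for one destination.
        self.queries += 1
        rule = self.master_plan[switch.name].get(dest, "drop")
        switch.table[dest] = rule  # cache the decision in the switch
        return rule

class Switch:
    def __init__(self, name, controller):
        self.name, self.controller, self.table = name, controller, {}

    def forward(self, dest):
        if dest not in self.table:  # table miss: ask the controller
            self.controller.mother_may_i(self, dest)
        return self.table[dest]

ctl = Controller({"S1": {"10.0.0.2": "port2"}})
s1 = Switch("S1", ctl)
s1.forward("10.0.0.2")  # first packet triggers a controller query
s1.forward("10.0.0.2")  # later packets hit the cached rule, no query
```

The stimulus model trades controller load for up-to-the-moment policy control; the preconfigured model trades flexibility for controller quiet.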

It is possible to eliminate adaptive behavior through this process.  An SDN Controller can define failure modes and quickly install rules to restructure routes around something that’s gone bad.  It’s possible that security could be better in this situation because you could hypothesize a device that would pass user requests for any packet handling to a controller for validation and instructions, which would mean no connectivity to anything would exist until the controller validated the relationship being requested.  This could be a pretty significant behavioral twist in itself.

The difficulty that an SDN revolution based on the central model brings is the classic difficulty of central models, which is the performance and availability of the controller.  If the controller goes south, you have the network frozen in time.  If the controller is overwhelmed with requests, you have a network whose connectivity lags more and more behind current demands.  Logically you’d need to establish practical control zones in SDN and federate controller zones to divide up responsibility.  There are a bunch of ways this could be done, and some advocate pressing protocols like BGP into service.  I advocate defining the ideal solution to the problem and then seeing if current protocols like BGP can serve.  If not, you do something new.
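As a rough illustration of the zone idea (a purely hypothetical structure, not a real federation protocol like BGP): intra-zone routes are decided by one controller, while cross-zone routes require the zone controllers involved to federate a decision:

```python
class ZoneController:
    def __init__(self, zone, members):
        self.zone, self.members = zone, set(members)

    def owns(self, device):
        return device in self.members

def route(src, dst, controllers):
    # Find the zone controller responsible for each endpoint.
    src_ctl = next(c for c in controllers if c.owns(src))
    dst_ctl = next(c for c in controllers if c.owns(dst))
    if src_ctl is dst_ctl:
        return [src_ctl.zone]            # intra-zone: one controller decides
    return [src_ctl.zone, dst_ctl.zone]  # inter-zone: federate the decision

east = ZoneController("east", ["e1", "e2"])
west = ZoneController("west", ["w1"])
assert route("e1", "e2", [east, west]) == ["east"]
assert route("e1", "w1", [east, west]) == ["east", "west"]
```

Partitioning this way bounds both the load on any one controller and the blast radius of a controller failure, which is the whole argument for control zones.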

The packet forwarding element of SDN is where the real potential value lies.  Even today, when SDN is (in my view, gratuitously) limited to MAC/IP/port address recognition, you can envision forwarding structures that don’t map to a classic IP or Ethernet service.  Some of them could be very useful.

Example—the security angle I just mentioned.  Suppose we designed an SDN network that was a set of three layers—edge, metro, core.  Suppose that we had all these layers divided into control zones that made each zone look like a kind of “virtual OpenFlow switch”.  In the metro and core, we’d be focusing on providing stable performance and availability between any metro zone and any other, either directly or via the core.  In the edge zone we’d focus on mapping user flows to forwarding rules for the traffic we wanted to carry—explicit connectivity where permitted.  The central two layers would be operated in preconfigured-route mode and the edge in stimulus mode.  All of this is within the capabilities of OpenFlow today.

Another thing we could do with OpenFlow today is to reconfigure networks to respond to traffic, either based on time of day or on traffic data made available to the controller.  OpenFlow networks are not going to be impacted by old Ethernet bridging/route restrictions or router adjacencies in IP; you can have as many paths as you like and engineer every packet’s path if that is what’s needed (obviously that would take a lot of controller horsepower but the point is that it could be done).

With some simple extensions we could do a lot more with OpenFlow SDN, and a bunch of these have already been proposed.  Three very logical ones are support for more general DPI-based matching of flows to rules, enhancements to what can be done when a match occurs (especially packet tagging in the rule itself), and the use of “wild-card” specifications for matching.  If you had these capabilities you could do a lot that standard networks don’t do well, or at all.

One thing is intrinsic load-balancing.  You could, at any point on any route, initiate a fork to divide traffic.  That would let you “stripe” loads across multiple trunks (subject as always to the question of dealing with out-of-order arrivals).  You could prioritize traffic based on deeper content issues, diving below the port level.  You could implement one of the IP schemes for location/address separation.  You could mingle L2/L3 header information, including addresses, to manage handling, and handle traffic differently depending not only on where it’s going but on where it came from.  You could authenticate packets and tag them to reduce spoofing.
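Here’s a minimal sketch of what that extended match-and-process handling could look like, with invented field names and actions: wildcard matching, a tag action, and a load-balancing fork.

```python
import random

rules = [
    # (match dict, where "*" is a wildcard) -> (action, argument)
    ({"proto": "video", "dst": "*"},        ("fork", ["trunk1", "trunk2"])),
    ({"proto": "*",     "dst": "10.0.0.5"}, ("tag", "authenticated")),
]

def matches(match, packet):
    return all(v == "*" or packet.get(k) == v for k, v in match.items())

def handle(packet):
    for match, (action, arg) in rules:
        if matches(match, packet):
            if action == "fork":   # stripe the load across multiple trunks
                return ("out", random.choice(arg))
            if action == "tag":    # tag the packet to reduce spoofing downstream
                packet["tag"] = arg
                return ("out", "default")
    return ("drop", None)          # no rule, no connectivity

print(handle({"proto": "video", "dst": "10.9.9.9"}))  # striped onto a trunk
print(handle({"proto": "voice", "dst": "10.0.0.5"}))  # tagged, default path
```

The “match anything, then run an arbitrary process” pattern is the elastic rule handling the next paragraph describes; today’s OpenFlow constrains both the match fields and the action set.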

The point here is that there is no reason why an IP or Ethernet service has to behave traditionally, other than that the technology offers no practical alternative.  What OpenFlow SDN could provide is totally elastic match-and-process rule handling.  We could build routers the way we build software: by defining processes, defining triggers, and initiating the former based on the latter.  And because the mechanism would be protocol-independent, it would never be obsolete.  This is what OpenFlow and SDN should be, could be.

Why isn’t it?  Some university researchers have proposed most if not all of the extensions I’ve mentioned here, and many of the applications.  The challenge is turning all of this into product, and there the problem may be that the vendors aren’t interested in true revolution.  VCs who used to fund stuff that was revolutionary now want to fund stuff that’s claimed to be revolutionary but doesn’t generate much cost, change, or risk—only “flips” of the startups themselves.  I think the ONF should face up to the challenge of SDN revolution, but given who sponsors/funds the body that may be an unrealistic expectation on my part.  If it is, we may wait a while for SDN to live up to its potential.

The Cloud, NFV, their Relationship, and Opportunity

Everyone who’s followed NFV knows that there is a relationship between NFV and the cloud.  Logically there would have to be, because public cloud services host applications on a per-tenant basis with tenant isolation for security and performance stability.  That’s exactly what network features for individual customers need, so it would be illogical to assume a totally new multi-tenant approach would get invented when NFV came along.

The thing is, this simple justification would lead you to believe that not only was there a relationship between the cloud and NFV, but that the two were congruent.  As you’ll see, I think it’s very likely that NFV and the cloud will evolve into a common organism, but we’re not there yet.  That current separation is something proponents of both NFV and the cloud need to work to close, and something a lot of NFV marketing is exploiting in a cynical and negative way.  Thus, we need to understand just what the relationship between the cloud and NFV is, and what the differences mean right now.

A good discussion of differences should start with similarities, if for no other reason than to prove that convergence of NFV and the cloud is not only inevitable, it’s already happening.  Cloud computing is a computing architecture that allows components/applications to be hosted on shared servers in a highly separated (multi-tenant) way.  The obvious advantage of this shared hosting is that the cost of the servers is amortized across more applications/users, so the per-user cost is less.  This is analogous to the “capex reduction” benefit of NFV.

The problem is that pooled, shared resources don’t become infinitely more efficient as they become infinitely large.  There’s a curve (derived from the Erlang C distribution) showing that utilization efficiency grows quickly as the resource pool gets bigger, but the growth tapers off to a plateau, and further increases in the pool, even large ones, make little difference.  The biggest savings occur early on.  What that means is that enterprises with large data centers already approach the efficiency of cloud providers, which means that public cloud services couldn’t save them much in the way of capex.  Note that operators have quietly shifted away from a pure capex-driven NFV value proposition.
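To see the shape of that curve, here’s a small Python sketch that uses the Erlang C formula to find the smallest pool keeping the probability of a request waiting under an arbitrary 5% target, and reports the utilization that pool achieves.  Utilization climbs quickly at small scale, then flattens:

```python
def erlang_c(load, servers):
    """Probability of waiting for an offered load (in erlangs) on a pool."""
    b = 1.0
    for k in range(1, servers + 1):   # iterative Erlang B recursion
        b = load * b / (k + load * b)
    return servers * b / (servers - load * (1 - b))

def utilization_at_target(load, target=0.05):
    servers = int(load) + 1           # start just above the offered load
    while erlang_c(load, servers) > target:
        servers += 1
    return load / servers             # fraction of the pool kept busy

for load in (5, 50, 500, 5000):
    print(load, round(utilization_at_target(load), 3))
```

The gap between successive utilization figures shrinks fast as the pool grows, which is the quantitative core of the “biggest savings occur early” claim.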

Fortunately for cloud providers and NFV proponents there’s another force at work.  Most SMBs and even some enterprises have found that the cost of supporting IT infrastructure is growing faster than the cost of the infrastructure.  For many, TCO is already two-thirds opex to one-third capex.  By adopting a cloud model of IT (particularly SaaS, which outsources most support) a business can transfer operations to a player who can get access to skilled labor and use it efficiently.  If we look at the savings side of cloud computing’s benefits, opex reduction is now the most compelling story.  And that is also the same with NFV.

Cost reduction eventually runs out, though.  You can’t justify major technology revolutions through cheapness, because at some point the percentage reductions in expense that you can present won’t fund the technology changes any more.  ROI based on cost management always declines over time, so you need to have something else—new benefits.  For cloud computing, this means turning the cloud into a new-age application architecture that can do things that were never done with traditional IT.  Amazon’s growing repertoire of cloud services, or my notion of point-of-activity empowerment, are examples of a benefit/revenue-driven cloud.  For NFV, this is the “service agility” argument.

What’s the difference between NFV and the cloud, then?  The first answer is that because NFV targets large sophisticated buyers, it has to do a better job of harnessing benefits from its incremental costs or there will be no movement to NFV at all.  NFV is in many ways a kind of super-DevOps, an architecture to automate the processes of deployment and management to the point where every erg of possible extra cost has been wrung out, every inefficiency of utilization eliminated.  First and foremost, NFV is a cloud-optimizing architecture.

Because NFV addresses today, for its prospective buyers, the problems the whole cloud will face down the line, “cloud” approaches look a lot like NFV approaches if you just look at one application or service in one limited test.  Most of the NFV PoCs, for example, really look more like cloud hosting than like dynamic, agile, flexible NFV.  This has allowed virtually every vendor who purports to have an NFV story to over-promote what’s really a cloud value proposition.  You can replace many custom network appliances not with virtual functions but with cloud components, and that’s particularly true for functions that have very long lives.  Where NFV becomes critical is when you have to do deployments a lot, often, and for short intervals.

NFV’s “service agility” benefit depends largely on evolving how services are built to generate more of this dynamism.  This point gets missed a lot, in no small part because vendors are deliberately vague about the details.  If we need to rethink service creation, we necessarily have to spend some time considering the new architecture.  It’s a lot easier to say that we’ll cut provisioning time from two months to two days, which is great for time-to-revenue, right?  But if the customer didn’t want the service in two days but had two months’ notice (as opening a new office would likely offer) we have less chance of any revenue gain.  If the customer has the service already we get nothing; you can’t accelerate revenue you’re already collecting.

Here in facing the service dynamism issue, interestingly, the cloud may be leading NFV.  I believe that Amazon knows darn straight well that its cloud service future lies in being able to build cloud applications that are cloud applications, totally unsuitable for premises execution.  I also believe that we are seeing, in the mobile revolution, more and more situations where these new cloud applications could be a major benefit to users and a major revenue source.  That’s where my trillion dollars a year in incremental revenue for point-of-activity empowerment comes from.

NFV has led the cloud in recognizing that DevOps has to grow a lot to manage the scale of computing that cloud success would imply, and to manage the increasing dynamism that exploiting the cloud’s special characteristics would create.  But NFV has been totally unable to deal with the issue of how dynamism is realized, how application components that create dynamic experiences become service components.  The OTTs and the cloud developers are thinking more about that than the network operators and the NFV advocates.

Neither NFV nor the cloud can now succeed without the other.  Without NFV, the cloud’s growth will expand its cost of operations in a non-linear way until there’s no further benefit that can be realized.  Without the cloud and an understanding of the notion of cloud-specific services/applications, NFV will never realize a nickel from service agility and stall out when operations costs can’t be reduced further.  The question is which constituency—cloud/OTT or network operator—is going to get smart first and address the issues that the other constituency today is handling a lot better.  It may be that if the “cloud” wins, OTTs win and operators are forever consigned to public utility business models no matter how they’re regulated.  If NFV wins, then operators have a darn good chance of making the OTTs of today into the CLECs of tomorrow.


What Google’s MVNO Plans Could Mean for Operators

A number of independent rumor sources say that Google is finally going to make the MVNO move, striking reseller deals with Sprint and T-Mobile to become a Mobile Virtual Network Operator (MVNO).  This is what I thought Amazon should have done with its Fire phone, and what I still think would be a possible move for Apple.  It’s a move that promises to reposition Google in the network market, in the cloud, with advertisers, and with users, and one that threatens its traditional competitors and creates some new ones.

This isn’t likely to be a simple “make-more-money” play.  MVNOs are typically companies who offer lower-cost services with some restrictions or priority limits on access relative to the customers of their parent carriers.  Given that Sprint and T-Mobile are price leaders already it may be difficult for Google to discount much and still retain any profit margin at all.  That suggests that Google may have other plans to leverage the relationship through add-ons.  If not, if Google is looking at a “premium MVNO” that would charge more, they’ll still have to justify that extra cost somehow.

For the industry, this could be a radical move.  While other players like Amazon and Apple have not yet pulled the MVNO trigger, it’s likely that a move by Google could spur them to action.  Even if there’s no immediate response, the threat here is clear.  A major handset player (even an indirect one, via Android) and a cloud player and an ad giant becomes an MVNO?  A lot of carrier planners are going to get heartburn on that one, even if there were no other risks, which there are.

One thing at risk here is further disintermediation by handset vendors.  Most customers have as much or more loyalty to their devices as to their carriers.  Many mix service and device features up, in fact.  Operators have been increasingly concerned about the power of the device vendors, particularly Apple, in controlling service adoption and upgrades.  There was a lot of operator interest in Mozilla’s phone project, which was a lightweight platform intended to act as a portal to carrier-hosted features rather than be a device with a lot of local smarts.  It never took off, but it was an indicator that operators were taking handset vendors seriously, risk-wise.  They’ll surely be even more concerned now.

This is old news, of course, and what I believe will be the real risk here is at a higher level.  Mobile services, as I’ve pointed out before, are unique in that they reset the relationship between users and information by supporting what users are doing rather than helping them plan.  You may research stuff online from your living room or den, but you make immediate purchase decisions from your phone—because it’s with you when you’re in a buying mode.  What I’ve called “point-of-activity empowerment” is a potential powerhouse new set of benefits, something that could drive almost a trillion dollars a year in new revenue.

With both Android and an MVNO position, and with content, ad, and cloud resources aplenty, Google could frame a bunch of new services targeting the mobile user.  Those services could help Google make a transition from being dependent on advertising (a two-thirds-of-a-trillion dollar space even if all forms of advertising are counted) to paid-for services that could bring in four trillion dollars or more in total.  They could also help operators monetize their infrastructure investment better, but not if Google gets the money.

The mobile/behavioral services tie in nicely with some of Google’s other interests, like self-driving cars.  These are gimmicks now, but a lot of what would have to stand behind the vehicles, in the way of route knowledge and even IoT integration, could be useful to human drivers and pedestrians.  There’s also a strong ad tie-in with integrating the user’s movement, social framework, and “intent” as expressed by the questions or searches they launch.  All of this could be the basis for a series of services, both to advertisers/retailers and to users.

A new giant MVNO like Google and the prospect for Amazon and Apple to follow suit generates a lot of potential changes in the mobile operator’s plans.  There are already examples being reported of MVNO grooming using SDN, and that would be more likely if big names like Google get into the game.  Even more radical changes could come in the IMS, EPC, and NFV areas.

Mobile service-layer technology has been overly complex, costly, and high-touch.  Vendors like Metaswitch have already introduced lighter-weight IMS technology that would be ideal for an MVNO, depending on the technical integration offered by the parent operators.  Google could base its service on a simpler stack.  Beyond these basics, Google would be likely to jump to a different voice and messaging model (think Google Voice and Gmail, perhaps, or Hangouts overall), and that would put pressure on operators to find a voice/SMS platform that’s more agile and cheaper.  If we find out that Google’s deal is for broadband/data only, we’ll know something very important—classical mobile voice and SMS is dead.

EPC is an issue because most of what EPC does is accommodate mobility and provide assured paths for premium services.  If Google takes a complete OTT voice and IM model, there’s nothing for EPC to do other than to follow users when they move from cell to cell.  Other location-independent routing approaches have been proposed; might Google try one?  At the least, we might be looking at a future where “Internet offload” offloads everything, which makes the concept a bit damp.

For NFV, this could be the goad that finally generates useful action, or the straw that breaks the camel’s back.  Carrier standards processes have long missed the mark when applied to software-driven functionality of any sort.  The ETSI NFV work (with its first phase just completed and published) has a very powerful, even revolutionary concept (MANO), but it’s buried in a rigid and complex framework that doesn’t cover the full service spectrum in infrastructure terms and isn’t fully integrated with operations or applications.  Vendors at this point are certain to build around the edges of the spec, both to take advantage of its limited scope and to differentiate themselves.  In doing so they might propel NFV to a place where it could actually help operators build agile services—services to do what Google is likely now signaling it plans to do.

It’s my view that the Google move will propel SDN, NFV, application architectures for mobile empowerment, and a bunch of other things.  The propulsion will be focused on vendors, though, and not on the standards processes.  There is simply no time left now to diddle with consensus.  Once Google gets in the game, once Amazon and Apple are stimulated to accelerate their own MVNO positions, it’s every vendor for themselves.

The Cloud: Just Sayin’…

IBM reported its numbers which, in terms of revenue and guidance at least, were not happy.  I’ve talked about the opportunities IBM still has in some prior blogs, and speculated on some of the marketing factors and other decisions that may have led them to where they are.  What I’d like to focus on today is something a bit different, something that has impact not only on IBM but also on every other tech company that sells to business.  Call it the “politics of the cloud.”

IBM has for years been a master of the national account, the large-scale buyer whose IT activity justified a team dedicated to them.  This is a natural evolution from the early days of IT when the only accounts that could afford computers were large accounts.  I’ve argued that over the last decade, IBM has steadily divested itself of the hardware that had populist appeal, making itself more and more into the giant who targeted giants.  I think their current quarter is certainly consistent with that, but it’s not my story here.

You sell giant IT to giant companies, but most of all you sell it to professional IT types within those giant companies.  IBM’s reliance isn’t just on large buyers, it’s on technically sophisticated buyer organizations—CIOs with nice big staffs and budgets.  This is important because while most big companies aren’t very interested in becoming little, more and more of them appear interested in weaning away from those big IT organizations.  That’s where the cloud might come in.

If you slice and dice the numbers as you’re all aware I’m fond of doing, you come to the conclusion that business IT is a trillion dollar market annually, worldwide.  The most optimistic numbers I’ve seen for public cloud services peg the space at just south of thirty billion, which is less than 3%.  Further numbers-dicing brought me to the conclusion that the benefits of public cloud versus private IT in a pure economic sense would limit the public cloud market to about 24% of IT spending, roughly ten times what it is now.   The question that IBM may be raising is whether there are benefits that aren’t in the realm of pure economics, drivers beyond financial that dip into the realm of company politics.

Many, perhaps even most, of the big companies I’ve surveyed and worked with had considerable internal tension between IT and line departments.  One company I worked for went through regular cycles of decentralized versus centralized IT as the angst the line people felt overcame financial efficiency, then in turn was overcome by that efficiency down the line.  We’ve all heard stories about departments wanting IT to run itself as a profit center, competing with other outside options.  All of this made good media fodder but nothing really came of it.  Till the cloud.

The cloud, as a decentralized way of getting application support, has perhaps the first opportunity in the whole of the information age to unseat central IT.  Suppose that line departments played on that cloud characteristic and rolled their own per-department strategies?  Obviously you can’t have everyone doing their own thing to the exclusion of regulations or information integration, but you could darn sure reduce the role of IT in buying stuff and focus them on integration.

In this model, what happens to the IT giants like IBM who have bet on selling to the CIO and IT organizations, or at least have sold things that it takes centralized IT to consume?  Likely nothing good.  IBM saw good growth in its cloud services, but their annual run rate of $3.1 billion compares to over $22 billion in total revenue for this quarter alone.  First, how much of this is offsetting losses in sales of IT elements, and second how much of it depends on organized IT to buy it?

For the cloud overall, that’s the question.  We can’t get beyond about 24% penetration into IT spending for public cloud services unless we find drivers beyond the economic efficiencies that the IT organizations would recognize.  SMBs justify the cloud in large part through presumed reductions in technical support costs.  Might a larger company take the step of downsizing IT and end up in the same situation as an SMB—dependent on software services and not internal IT resources?

I don’t know the answer to that, frankly, and I’m not confident it would be possible to survey to find it out.  I also don’t know whether further development of cloud infrastructure by network operators and others (like Amazon) might create a new set of application services and big data analytics that would then tend to make mobile productivity support increasingly an exploitation of third-party services.  Might that trend bend further application development toward the cloud, and might that then combine with push-back against IT to build a larger trend?  I don’t know that either.

The mainframes of old were justified by two truths—you couldn’t buy a cheap small computer, and with IT serving the mission of capturing retrospective business data (remember data entry clerks?) it made sense to stick all the compute power in one place.  Cheaper systems begat distribution of IT resources, and even though we still have central data centers and repositories, we’re not done with the transformation created by distributing processing outward.  Workers with mobile devices, wearables…where does it stop?

Certainly, if there’s any validity to this position, SaaS is what matters, cloud-wise.  The migration of current applications to the cloud isn’t what a line department revolution would drive, and mobile worker productivity enhancement based on new software architectures clearly doesn’t require IaaS hosting of old stuff.  Amazon’s movement to “platform services” of cloud-resident software features and Microsoft’s and IBM’s enhancements of basic IaaS with PaaS features are probably valid stepping-stones, but perhaps more valuable to providers than to users themselves.  So a line-department cloud drive is going to require a different kind of marketing, one that stresses applications and not just platforms, ecosystems not products.

All of this is speculation, of course, and there are always counter-trends just as there are today and have been in all of IT’s history.  I don’t think that IBM will be killed by the cloud, that centralized IT will fall in some giant line-department coup.  I do think that the factors I’ve cited here will change how businesses use IT and how applications are linked to workers and their productivity.  As that happens, the basic rules of IT spending will change.  When they do, I have to admit that my 24% number will be harder to justify.  It might turn out to be right, but the value was derived by assessing relative cloud economics with current application models.  A major change to those models could create a major change in IT spending, cloud penetration, and vendor fame and fortune.

We are almost certainly underplaying the revolution we call “the cloud”, and in particular we’re discounting the impact that the cloud could have on the relationship between IT as a kind of universal staff function and the line organizations.  Nike and Reebok, as I used to say in my seminars, make sneakers, not networks.  The purpose of business is not to consume IT, and many users in my surveys complain that IT has become an end in itself.  You need information technology, of course, just as you need transportation, power, natural resources, capital, and labor.  The relationship among the last three of these has always been filled with tension.  Maybe that tension is expanding.  If it is, then we may see very different drivers emerge.

Different winners too.  The cloud is a unique partnership between hardware and software.   Of the two, it’s software that shapes the business value and the economic tradeoffs.  So can IBM or another “hardware vendor” build a software position strong enough to make hardware unimportant?  Is a cloud partnership inevitably one with a network operator or CSP and not an enterprise?  Can an IT vendor become a CSP by leveraging their current strength?  More questions I can’t answer, but that will likely be answered decisively by real-world company politics before the decade is out.

The NGN Bridge: Drivers, Trends, and Carpentry

You’ve probably noticed by now my enthusiasm for the metro space.  I think that enthusiasm is vindicated by the recent Street speculation on Verizon’s next-gen metro program, which the Street analysts say will go primarily to Ciena with a nod toward Cisco.  The thing is, there are other fundamental numbers in play that have been validating metro as a market for years now.  We’re just starting to see the results, and we’re not done.  Some of the trends weave through our ever-popular themes of NGN, SDN, and NFV.

If you look at BEA data you find that between 1990 and 2013 (the last posted year), growth in consumer spending on communications services has run about 30% ahead of growth in personal spending overall.  But what’s in this category tells the real story.  Spending on telecom services has lagged overall spending growth by about 10% through that period, spending on postal/package services has fallen by more than half, and spending on Internet access has grown three hundred times faster than personal spending overall.

Despite what people may think, consumer Internet access is a metro service.  If this service is the only driver of increased ARPU to speak of in the whole of communications, which is what the data (from the Bureau of Economic Analysis) says, then the only area where we can expect to see much enthusiasm for additional investment is in the metro.

Here’s some other interesting stuff.  If we use the same 1990 baseline, growth in business spending on IT overall has been about 30% slower than growth in equipment investment overall.  For networking it’s been only about 17% slower, but slower nevertheless.  So what we’re saying is that the consumer is the real driver of networking, and likely the real driver of IT.  Consumers, we must note, do nothing in terms of site-to-site communication; they want experience delivery.  And with the drive to mobility, their experiences are increasingly local.

Profitable, mass-consumed video is delivered from metro caches.  Popular web content in general is cached in a metro area.  Ads are cached, and as we start to see mobile-behavioral services emerge we’ll see that those services are fulfilled from cloud infrastructure that’s local.  NFV is going to host virtual functions proximate to the service edge in the metro, so NFV cloud resources will also be metro.  Are you getting the picture here?

The Street analysis of Verizon’s metro update spending is interesting because 1) it’s relevant to what is for sure the future of capex and 2) it demonstrates that metro is more about fiber transport than about IP.  Ciena is the projected big winner here, but in technology terms it’s optics that’s the big winner.  My model says that metro spending, which is now running about 40% optical, will shift through the rest of this decade to nearly 60% optical.  IT elements, which make up less than 5% of metro spending today (mostly caching/CDN, IMS, etc.) will grow to make up almost 17% of metro spending by 2020.  That means that all network gear other than optical, which today accounts for more than half of metro spending, will account for only 23% by 2020.

There’s plenty of precedent for a focus on experiences and hosting.  In our golden analysis period here (1990-2013), the telecom industry has lagged businesses overall in spending on fixed assets, but information processing services have seen six times telecom’s growth rate in investment, and that was with no significant contribution yet from the cloud.  That means that we are generating more spending growth on infrastructure outside telecom.  To be sure, broadcasting and telecom have about four times the spending of the information processing services sector, but that’s obviously changing given the difference in growth rates.  In the last five years, in fact, the two sectors added about the same number of dollars in capital investment even though, as I said, information processing is a quarter the size of telecom/broadcasting.
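That “same dollars added” point follows from simple arithmetic: a sector a quarter the size must grow its investment about four times as fast to add the same dollars.  A minimal sketch, in which only the 4:1 spending ratio comes from the discussion above and the dollar amounts and growth rate are hypothetical:

```python
# Illustrative check: equal dollar additions despite a 4:1 size gap.
# Only the 4:1 ratio is from the text; the base figures are hypothetical.
telecom_base = 400.0   # $B, telecom/broadcasting capital spending
info_base = 100.0      # $B, information processing services (a quarter the size)
telecom_growth = 0.02  # hypothetical annual growth in investment
# To add the same dollars, the smaller sector must grow 4x as fast.
info_growth = telecom_growth * (telecom_base / info_base)
assert abs(telecom_base * telecom_growth - info_base * info_growth) < 1e-9
```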

The net trend is obvious.  The cloud is going to shift more and more information services to metro.  NFV is going to do the same, and as a result of this we’ll be seeing most data center communications become metro services.  Given that companies network sites and not people, and that sites are not increasing by any significant percentage, you can see that all of the “services” that have upside for the future are metro services.  And all of these metro services are bending, in infrastructure terms, toward fiber and IT.

The big vendors in the IT space should be in the catbird seat here, and certainly some of them (like HP) are.  The big network vendors need to have a fiber position or a server position to be well-set.  Many, including Alcatel-Lucent, Ciena, and Infinera, have fiber, and Cisco has both fiber and servers.  NSN, Ericsson, and Juniper have neither to speak of, so these are the players that face the biggest transformation.  Ericsson has already signaled its intention to rely more on integration and professional services, NSN is looking at a merger with Alcatel-Lucent according to the Street, and Juniper has wanted somebody to buy them for a long time.

Why has this sort of thing not received much attention?  We are looking at the future of networking through past-colored glasses.  We are presuming that success of the Internet means success of routing, that success of the cloud means success of traditional data center switches, and that network investment will be spread out over the globe and not focused on metro-sized pockets.  As a result, we’re missing a lot of the real trends and truths.

Our industry trends strongly suggest that carrier infrastructure is trending toward a polarization between a bottom fiber layer and a top layer consisting of virtual elements in the form of software, hosted on servers and sited in data centers close to the point of user attachment (the metro, of course).  It’s not likely in my view that we’ll see a lot of fiber guys getting into the server space, nor will we be seeing server kingpins launching fiber optic transport and agile optics.  But this polarization does put pressure on the optical people because software is almost infinitely differentiable and fiber is definitely not.  That suggests that an SDN and NFV strategy and software partnerships and relationships (of the type Ciena is attempting) may be absolutely critical for fiber players to sustain margins.

Virtualization is what develops the new-age binding between optics and IT elements, and that’s a software construct.  Anyone can be a software player; even optical giants could do prodigious work there if they wanted to.  So it is likely that virtualization software in the manifestations of both SDN and NFV will frame how the two polarized ends of future infrastructure will join, and who will do the carpentry.

It’s also worth noting that the statistics I’ve been citing suggest that SDN and NFV are not “driver technologies” forcing change, but rather are steps being taken to accommodate broader market sweeps.  If that’s the case (and I believe it is) then it further validates the view that SDN and NFV could be opportunities for vendors who face “fundamentals-driven disintermediation” to establish a new model that could survive into the post-2020 future.  Certainly it means that foot-dragging on SDN and NFV is more likely to do harm than good, because preserving the current market paradigm would mean reversing macro trends no vendor can hope to control.

To me, all of this means that software is critical for the current electrical-layer incumbents whether they see it or not.  Even though these guys are by my reckoning all sliding into a pit as the polarization between optics and IT develops further, they have a chance to redeem themselves (albeit at a lower level of sales) through software.  Even this is probably not a surprise; nearly all the network vendors have been promising a greater focus on software for years.  SDN and NFV are simply the latest technologies to represent this long-standing trend, and they probably won’t be the last.

They will be critically transformational, though.  Economics is what transforms networking in a force sense; technology only guides the vector through which the force is applied.  We have some positive outcomes still available, some catbird seats still open.  We just need to see who sits in them.

Tech Future: IBM’s in Trouble and Maybe You Are Too

It’s always interesting to listen to or read about what’s happening in the tech market.  You get the impression that the industry is a vast river that’s dragging everyone to a common destination.  We have systemic this and technology-trend that and it’s all pretty much relentless.  There are obviously systemic trends, and I’ve certainly talked a lot about them.  There are also individual trends, things that by accident or design have been propelling companies into different orbits than the market overall.  Fridays are a good time to talk about them.

Intel announced its numbers, and it had record revenues.  The fact that yesterday was a bad day for tech drove down shares more than the results raised them, but from an industry perspective it’s pretty obvious that microprocessors are doing well.  One reason is that there has not been the total shift away from PCs that’s been predicted since tablets came along.

I have an Android tablet, and just about a month ago I replaced it with an ultrabook that had the ability to fold into various configurations to make it more convenient to use.  One reason was that for me, the tablet could not really replace a PC because of my business use.  When I go on vacation, I have to presume that something might come up that would force me to do a bit of work.  No, it’s not that people contact me and ask me to (I’d just say “No!”) but that I might want to extend the trip and push it into a period when I’d already agreed to deliver something.  Anyway, I can use an ultrabook like a PC and also like a tablet so it makes more sense.

That’s what I think is happening in the PC world.  We are learning that the exciting revolution of the tablet is exciting but not necessarily compelling.  Many people, I think, are finding that a slightly different PC is really a better choice for them, and that’s making the tablet more a supplement to the PC than a replacement for it.  I’ve noted before that many will still make the tablet transition because all they really do is go online, but more people need PCs than we think.

IBM is about to announce its numbers, and the Street is lining up on the bearish side of the room for Big Blue, where sadly they also find me.  IBM has weathered more industry transitions than any other tech company and it would be sad to think it might now be in deep trouble, but all the indications are there.  IBM has been slowly shifting out of the more competitive x86 space, first by selling off PCs and then all its x86 servers.  This looked smart to many (including IBM, obviously) but it had an unexpected consequence.

Tech is commoditizing overall.  Apple consigned itself to being a second-class PC company for decades because it had no acceptance in the business market.  Today, Apple is moving into business despite its consumer focus, and IBM, king of business, is trying to ride its coat-tails.  The hidden truth there is that you can’t be a computer/software success selling only to big companies.  IBM killed its own brand by getting out of all the product areas that the masses could buy.  Now it finds it difficult to deal with competition in the Fortune 500 when the Everybody Ten Million is safely in someone else’s camp.

HP is in the opposite situation.  The fact that PCs have outperformed has helped HP, who by some measures is gaining market share faster than anyone else.  HP is also dealing with the industry revolution in a saner way, not by jumping on the tablet bandwagon like it’s the only savior of western culture but by creating a PC/server business polarization that lets the company play both sides of the opportunity, or the same side from two different directions.

The point here is that exaggerated measures are often induced in response to exaggerated trends, and we all know that there’s nothing that can happen in tech these days that’s not going to be blown way out of proportion.  A company that avoids throwing itself out a tenth-story window to escape a puff of vapor, whether it’s smart enough to see the vapor for what it is or just lucky enough not to notice it, has a better future than the one that jumps.

In networking, we are facing the same thing.  Everything that’s happening in networking today, and everything that’s happening in business IT as well, is driven by one common truth: we have largely used up the benefit case for more spending.  We have empowered workers as well as current IT architectures can; we’ve connected people and information as well as anyone is willing to pay for.  If you want more cost, you need more benefits.  This means that “jumping” as IBM has done is a bad idea, because there’s nowhere to land that’s any better.

What’s frustrating (to me, at least) is that there’s no reason to believe the benefits are exhausted.  My models have shown that businesses could spend nearly $400 billion per year on network/IT services if mobile empowerment of workers were harnessed optimally to improve productivity.  They also show that another $600 billion annually in consumer spending, largely on services, could accrue if we were to give mobile users the kind of things that even current trends show they want.

Cisco is widely seen as having led us into the IP age, but that’s not the case.  Cisco was blundering along in IP, and IBM, which owned business networking, killed its own opportunity with an overweight vision of “SNA”.  Buyers fell into Cisco’s arms.  The key point, though, is that this seismic change came about first and foremost because we were having a revolution in connection cost.  A lot of new things were empowered by that revolution, including IP and distributed computing and the Internet.

We’re still having a cost revolution.  The cloud is a market response to the lifting of constraints on information delivery and process distribution that has arisen because of that same declining cost per bit that’s vexing the operators today.  The problem the cloud faces, and IBM and HP and Cisco and even Intel all face, is that we’re not seeing this for what it is.  If a resource becomes cheap, the best strategy is to exploit it more.  OTTs arose because operators didn’t deal with their own market opportunity, but the OTT model isn’t perfect either.  They’ve picked the low apples, things like ad-sponsored services where barriers to entry were limited and ROI high.  The real tech of the future is what PCs were in their heyday and what mobile devices are now: a mass market.  What does a mass-market cloud-coupled IT world look like?  What software does it depend on, what platforms does it make valuable, what services does it both drive and then facilitate?

Answer that question and you’re the next Cisco.