HP Results Show a Clear Path, but Not a Will

HP reported their numbers, and they were somewhat par for the tech earnings course this season, meaning they missed in revenue, met guidance on profits, and were tepid in their expectations for the coming year.  The Street has been all over the place on HP stock since the call, but I guess the trend is down.  HP also replaced their Enterprise Group leader, which may or may not be related to the quarterly numbers—take your pick!

I said yesterday that HP needed some aggressive offense, but what the call showed me was a company playing defense and losing a couple yards every play.  A little loss is better than a big one, but everyone who’s watched football knows that you can’t win games by losing only a little ground on each play.  I think it’s time for HP to trot out the classic “Hail Mary” pass (if they ever get to offense), but it’s still a question of whether they know how.

HP’s software numbers were better than expected, which suggests to me that HP still has enough software DNA to roll out some good cloud-related tools aimed at boosting enterprise productivity rather than just cutting costs.  Sadly, cost-cutting seems to be behind all the other successes they cited on the call.  If you look at one of their hits, Moonshot, it’s a low-cost server underneath the skin.  If you look at converged cloud, it’s OpenStack and cost-based IaaS.  On the call, HP acknowledged the importance of mobility and cloud.  That’s not enough; they need to differentiate in both areas, not just lay down an easy bet.  If you extend their priorities along the lines of “empowermentware” as I suggested, you can get to some specific areas HP needs to address.

First, you have to link explicitly to mobility, because it’s what a mobile worker needs that creates the opportunity for a new productivity paradigm.  Out in the trenches, as they say, things are totally different from the prevailing needs at the desktop.  Can HP, which has failed to create a strong mobile device story, still create a mobile empowerment story?  Who knows?  They talked about it, but not much more.

The second critical point is that the cloud has to step out—beyond IaaS, beyond the basic precepts of OpenStack.  HP did OK in software, as I said, beating expectations.  Why not take some of their software and make it into an extensible platform, an added dimension to OpenStack?  It would take very little for HP to create a framework to add platform and even software-as-a-service features to OpenStack, and doing that could be a differentiator for them.  In addition, it’s obvious to most (even within the OpenStack community) that Neutron’s approach of tying new service-model extensions to the release cycle just won’t cut it.  If network virtualization is like all other virtualization—an abstraction/instantiation process—then there should be an easy path to defining new abstractions, new service models.  HP should know that, and should be stepping up to address the issue.
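To make that concrete, here’s a minimal sketch of the abstraction/instantiation idea in Python (all names are hypothetical, not any real OpenStack API): a new service model is declared as data and registered at runtime, rather than waiting for a platform release to hard-code it.

```python
from typing import Callable, Dict

class ServiceModelRegistry:
    """Maps named service abstractions to instantiation functions."""
    def __init__(self) -> None:
        self._models: Dict[str, Callable[[dict], dict]] = {}

    def define(self, name: str, instantiator: Callable[[dict], dict]) -> None:
        # Registering a new abstraction is a runtime operation, not a release.
        self._models[name] = instantiator

    def instantiate(self, name: str, params: dict) -> dict:
        # Instantiation maps the abstraction onto concrete resources.
        return self._models[name](params)

def elastic_vpn(params: dict) -> dict:
    # Hypothetical instantiator: a real one would call cloud/network APIs;
    # this one just returns a resource plan.
    return {"type": "elastic-vpn",
            "endpoints": params["endpoints"],
            "bandwidth_mbps": params.get("bandwidth_mbps", 100)}

registry = ServiceModelRegistry()
registry.define("elastic-vpn", elastic_vpn)  # new service model, no new release
print(registry.instantiate("elastic-vpn",
                           {"endpoints": ["siteA", "siteB"], "bandwidth_mbps": 500}))
```

The point isn’t the code; it’s that adding a service model becomes a registration act rather than a standards cycle.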

The final point is application architecture.  We have an architected application-driven model for software today in SOA, and the problem is that we achieved application goals at the expense of flexibility.  We have a highly flexible model for binding components into services in the web or RESTful model, but it’s a model that makes virtually everything that should be fundamental (like security and governance) into an optional extra.  We have emerging technologies that can model data and processes much more flexibly, and these models should be the foundation for the new age of empowerment, because they can handle the scale, the personalization, the diversity of resources and multiplicity of missions…you get the picture.  Why is HP not leading with those technologies?  We will never get the cloud totally right without them.

Competitors will eventually get this right, HP.  If you want to revolutionize the IT model, look to Amazon.  With razor-thin retail margins driving its planning, it’s hard to find even network applications that don’t look good by comparison.  IaaS, which is a death trap for even many common carriers, can look like a rose garden to Amazon.  Imagine how mobile could look.  If Amazon becomes an MVNO, which many say it intends to do down the line, it could not only get a boost in TAM by leveraging all its connected devices, it could create a mobile-friendly PaaS cloud platform of its own, one that would be a big headache for HP because it would already be in place, an incumbent approach HP would have to batter out of the way.  If HP moves now, they could still get out in front here, and that’s essential if they’re to gain full benefit from their stated goals and their own technology strengths.

But there’s a lesson for HP in EC2 as well.  IaaS cloud services are a terrible business and HP shouldn’t be in them at all.  Getting street cred in the cloud is one thing, but killing your margins to get it (even if you succeed) is another.  Maybe that’s the final lesson for HP: you have to pick your street.  Play to Wall Street and you’re as volatile as stocks are today; play to Main Street and you may have a long slog but you have a good finish.  Step out, HP, and step out boldly.

HP’s Only Path to Survival

HP is scheduled to report earnings tonight after the US markets close, and in keeping with my practice with Cisco, I’m going to focus today on what HP should be doing, uncluttered by the way the numbers fall.  Again as with Cisco, I may blog a bit about what actually happened.

HP is an IT company, to no one’s surprise, and so it’s falling victim to the general malady of IT—too little growth in the benefit case to drive sufficient revenue and profit growth.  You need to fuel purchasing through benefits, and when you can’t claim many more you can’t buy much more.  The only exception is the consumer space, where benefits are less financially tangible, and even there HP and the industry have issues.

In the consumer space, HP was at one time a leader in hand-held technology but they were anemic in how they pursued it because they were afraid it would undermine the HP PC position, which was more profitable overall.  There was a time when HP had the only portable device worth having, and look at them now.  They also followed the Microsoft lead in defending rather than attacking, and fell victim to the smartphone and tablet craze.  What can HP do about that?  That’s the first question.

Then we have the business space.  HP has both network equipment and servers, which should give it a leg up on competition.  It had that advantage up until Cisco brought out UCS, but it was never really able to exploit it because, like many companies, HP is a bunch of silo business units who compete with each other for internal management favor as much as (or more than) they compete in the broad market with Cisco or IBM.  Servers were once brand-buy items; you picked a name vendor if you were a big company.  Now servers are almost commodities.  Remember that NFV was targeting the running of network functions on “commercial off-the-shelf servers.”  If there’s such a thing as COTS then HP and every server company has a challenge.

Is that challenge the cloud?  HP should have a credible cloud story, but their positioning of their cloud assets has been tentative, and I think that behind that is the same old issue of fear of overhanging your primary products’ sales.  The market theory is that cloud computing is a cost-based substitution of hosted capacity for owned capacity.  The savings would have to come from needing less capacity in the cloud to do the same work—thus the cloud is a net loss of servers.  Only that’s not true.  Yes, IaaS is an economy-of-scale play, but IaaS isn’t the cloud market that matters.  If you look at the cloud at the highest level, it could be the largest single source of new server deployment, and a net win.  And how to get their cloud game going is HP’s second question.

I think that the answer to the two questions lies in recognizing that they’re one.  “The cloud” is a nicely fuzzy term we’re using to paper over a true revolution in information technology, a revolution that focuses on what I’ve been calling point-of-activity empowerment.  People with mobile devices want mobile-friendly information resources and information presentation.  We weave portable stuff into our lives in very different ways than we weave in things that sit on our desk or in our living room.  The differences allow us to build new dependencies, gain new benefits, and those new dependencies and benefits justify more spending.  We will get more from the cloud, and we’ll pay more to get it.  This is the real message of the cloud, and it’s the message that HP should have been crying from the rooftops for five years now.  The good news is that so should everyone else in the cloud game, and they all dropped the ball.  So HP still has a chance.

The perfect data center for this agile point-of-activity cloud is different from the typical corporate or even cloud data center.  I’m not saying the differences are profound, but they’re more than enough to justify a differentiated positioning.  The hardware design needs to be different, but most importantly the software needs to be different.  That’s where HP’s cloud strategy is failing.  When I look at the advances to OpenStack that are in progress at some level, I see HP as a bit player, not the driver of the bus.  HP should be moving heaven and earth to expand and extend OpenStack, but they should also be building a layer of “empowermentware” on top of open-source cloud technology to embody the new value proposition for the cloud.

The same can be said in the appliance space.  The basic architecture of the PC has been around since the early 1980s, when nearly all PCs were running in splendid isolation.  Even IBM had to play catch-up as others (remember the Irma Board?) provided early connectivity.  What does a machine designed to be a thin client for an empowerment cloud look like?  I doubt it looks like a laptop or even a tablet or smartphone.  Things like wearable tech should fit into an architecture, an architecture that HP could have defined (and still could).

So here’s the net-net for HP.  What happens to them tonight on their earnings call and tomorrow in the market is a side show.  The important question is whether they are ready to go flat out toward the empowermentware goal, and then fit their hardware strategies to host empowerment tools and terminate empowerment flows.  If they can do that, and quickly, they will emerge from this process a lot stronger.  If they can’t then they are in for a slow decline.  HP is a kind of consolidated entity—HP, Compaq, and DEC all contribute DNA to the current company.  We’ll be looking for fragments of HP DNA in other companies if HP doesn’t move, and move now.

Channelized versus OTT Video: More Data

Some of the latest data on TV providers seems to reinforce the notion that the video world is changing, and of course that’s true.  The problem for TV delivery strategists is that it’s hard to tell just what factors are driving the changes, and without that key insight you can’t easily address the problems that change might create for your bottom line.

The market data for the spring, drawn from the quarterly earnings numbers, suggests that subscription television services are losing ground more rapidly than usual.  Spring is typically a bad time for TV because of a combination of movement to summer homes (or away from winter homes) and the return of students from college.  This spring does seem a bit worse than usual, but I’m not sure how much of the loss can really be recovered.

TV viewing is about households, not people, because households are what subscribe.  Data I’ve cited before suggests that when people establish a multi-person household they tend to gravitate toward traditional viewing patterns regardless of how committed to iPhones/iPads and YouTube or Netflix they might have been.  Thus, one important point is that any time you have a lack of growth in the number of households you have a loss of stimulus for subscriber gains in subscription TV overall.  And guess what?  We’re in a period when record numbers of adult children are not leaving the nest for economic reasons.  Nothing TV can do is likely to push these kids out, other than perhaps hiring them all.  Even then, I think there’s compelling data to suggest that young adults value the added disposable income they get from living at home more than the independence they lose.

Another interesting thing about spring, astronomically speaking, is that it’s going to lead to summer, meaning summer reruns.  People traditionally flee subscription TV in the summer because of the dearth of good material.  Remember, the largest reason people don’t watch subscription TV is that they’re not home, and the second largest is that there’s nothing worthwhile on.  Even “good” summer shows often fail to attract a dedicated audience because the shows can be preempted for sporting events, and because in the summer it’s more likely that viewers will be out somewhere.

I think anyone who’s gone through their share of summers and TV viewing knows that in fact the quality of summer material is better this year, and has been getting better for the last couple of years.  The number of people who tell me that they have shifted to on-demand or OTT viewing for lack of something to watch is down by almost 30% this summer versus last, a big change.  Most of this gain is due to the fact that cable channels are increasingly taking up the slack and even major networks are running summer-season shows.  This is smart because viewing habits are just that—habits, and you don’t want to train your viewers to go elsewhere.

That raises some questions on the TV Everywhere concept.  Is the use of OTT video to supplement channelized viewing a good thing or a bad thing?  There I think the jury is still out.  My data says that people who can’t watch something in its regular slot would rather watch it via on-demand viewing unless it’s a sporting event.  For sporting events, the preference is to view it in a social/hospitality environment (a bar comes to mind) if you can’t view it at home.  So it’s not clear whether having the game or the show available “live” helps much, and it’s pretty likely that having it available on-demand on a different device doesn’t move the ball very much.

Where TV Everywhere is good for providers is in holding channelized subscribers who are likely to be mobile a good chunk of the time.  In general this means the young independents, people whose viewing habits seem destined to change anyway as soon as they establish a true household, with a partner and perhaps children.  But just as this year reveals some gains in summer-viewing loyalty, it illustrates weakness in the traditional fall-through-spring prime TV season.  Remember the almost-30% improvement in summer viewing I cited?  Almost 20% said they were significantly less happy with prime-season programming.  Too much reality TV, they said, and also too short a season for new material.  TV Everywhere could give viewers access to material that would substitute for this poorer fare.

“Could” is the operative word.  People watch more on-demand these days because they miss more regular program times.  On-demand breaks the cycle of dedicated viewers who schedule their lives around programming, and when that cycle is broken the viewer becomes increasingly interested in just getting something that suits their momentary fancy.  That’s as easily done with OTT.  Yes, you need prime-time on-demand, but there’s no question that over time this is weakening the bonds that hold us to traditional viewing.

IMHO, people want virtual channels, with specific shows slotted into times convenient for them and with material selected based on what they like and what their peers recommend.  I think Apple and Google would like to see this model prosper, but the problem for both is that the commercials just aren’t as valuable in that model.  We’ve gotten a bit better at leveling the per-minute value of TV and OTT video advertising—it’s gone from TV being worth 33 times as much to only 28 times as much—but we’re still a long way from being able to fund new material, and if you distill all of what I’ve said about video, you see that it’s the material that makes channelized TV stand or fall, material and demographics.

A shift to OTT viewing would also have profound consequences for broadband delivery, not so much to the home (U-verse proves that you can give the average household enough capacity to view TV even over copper) but in the metro aggregation network.  Instead of feeding linear RF programming to head-end sites to serve tens of thousands of customers, you now have to deliver thousands of independent video streams to every CO to reflect the diversity of viewing there.  And with revenue per bit in the toilet, how exactly do you build out to make that happen?  So until we can answer the dual questions of paying for programming and paying for delivery, I don’t think we’re heading for an OTT revolution.

How “Open” is My Interface?

Carol Wilson raised an interesting point in an article in Light Reading on SDN and NFV—that of collaboration.  I’m happy that she found the approach CloudNFV has taken to collaboration and openness credible, but I think some more general comments on the topic would be helpful to those who want to assess how “open” a given SDN or NFV approach is, and even whether they care much whether it’s open or not.

An important starting point for the topic is that the network devices and servers that make up “infrastructure” for operators are going to have to be deployed in a multi-vendor, interoperable way, period.  Nothing that doesn’t embrace the principle of open resources is likely to be acceptable to network operators.  However, “open” in this context is generally taken to mean that there exists a standards-defined subset of device features which a credible deployment can exercise across vendor boundaries.  We know how this works for network hardware, but it’s more complicated when you bring software control or even software creation of services into the picture.

If you step up to the next level, I believe there are three possibilities:  an “open” environment, an “accommodating” environment, and a “proprietary” environment.  I think everyone will understand that “proprietary” means that primary resource control would operate only for a single vendor.  Vendor “X” does an SDN or NFV implementation, and it works fine with their own gear, but the interfaces are licensed, so it won’t work, and can’t be made to work, with equipment from other vendors.  Today, with software layers, “proprietary” interfaces are usually private because they are licensed rather than exposed.

The difference between “open” and “accommodating” is a bit more subtle.  To the extent that there are recognized standards that define the interfaces exercised for primary resource control, that’s clearly an “open” environment because anyone could implement the interfaces.  I’d also argue that any environment where the interfaces are published and can be developed to without restriction is “open”, even if that framework isn’t sanctioned, but some will disagree with this point.  The problem, they’d point out, is that if every vendor defined their own “open” interfaces it would be extremely unlikely that all vendors would support all choices, and the purpose of openness is to facilitate buyers’ interchanging components.

This is where “accommodating” comes in.  If in our resource control process for SDN or NFV we define a set of interfaces that are completely specified and where coding guidance is provided to implement them, this interface set is certainly “open”.  If we provide a mechanism for people to link into an SDN or NFV process but don’t define a specific interface, we’re accommodating to vendors.  An example of this would be a framework to allow vendors to pull resource status from a global i2aex-like repository and deliver it in any format they like.  There is no specific “interface” to open up here, but there is a mechanism to accommodate all interfaces.
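Here’s a minimal sketch of what an “accommodating” mechanism could look like (hypothetical, not a real i2aex implementation): the repository exposes its data without restriction, and each consumer renders it in whatever format their tooling expects.

```python
import json
from typing import Callable, List

class ResourceRepository:
    """Holds resource-status records; access to the data is unrestricted."""
    def __init__(self) -> None:
        self._records: List[dict] = []

    def post_status(self, record: dict) -> None:
        self._records.append(record)

    def query(self, predicate: Callable[[dict], bool]) -> List[dict]:
        return [r for r in self._records if predicate(r)]

# No interface is mandated; vendors bring their own delivery adapters.
def to_json(records: List[dict]) -> str:
    return json.dumps(records)

def to_csv(records: List[dict]) -> str:
    return "\n".join(f"{r['resource']},{r['status']}" for r in records)

repo = ResourceRepository()
repo.post_status({"resource": "server-42", "status": "up"})
repo.post_status({"resource": "link-7", "status": "congested"})
degraded = repo.query(lambda r: r["status"] != "up")
print(to_json(degraded))  # one vendor's format...
print(to_csv(degraded))   # ...and another's, from the same mechanism
```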

Let’s look at this through an example.  In theory, one could propose to control opaque TDM or optical flows using OpenFlow, and in fact there are a number of suggestions out there on how to accomplish that.  IMHO it’s a dumb idea because packet-forwarding rule protocols don’t make sense where you’re not dealing with packets.  Suppose that instead we created a simple topology description language (we have several: NED, NET, NML, Candela…) and we expressed a new route in such a language, using some simple XML schema.  We have a data model but no interface at all.

Now suppose we support passing the data from the equivalent of a “northbound application” to the equivalent of the OpenFlow controller, where it’s decoded into the necessary commands to alter optical/TDM paths.  If we specify an interface for that exchange that’s fully described and has no licensed components, it’s an “open” interface.  If we express no specific interface at all but just say that the data model can be used to support one, we have an “accommodating” interface.
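To put that in software terms, here’s a minimal sketch (the schema, names, and transport are my own invention, not any standard): the route is expressed as a simple XML data model, and the “open” variant is nothing more than a fully described, license-free way of handing that document to the controller.

```python
import xml.etree.ElementTree as ET

def build_route(route_id: str, hops: list) -> str:
    """Express a new optical/TDM path as a simple XML data model."""
    route = ET.Element("route", id=route_id)
    for hop in hops:
        ET.SubElement(route, "hop", node=hop["node"], port=hop["port"])
    return ET.tostring(route, encoding="unicode")

def submit_route(xml_doc: str) -> None:
    # The "open" version: a published, license-free exchange with the
    # controller (say, an HTTP POST of the document), stubbed as a print.
    print("POST /routes\n" + xml_doc)

# The "accommodating" version stops at build_route(): the data model is
# defined, and anyone may wrap it in whatever interface they choose.
doc = build_route("lambda-17", [{"node": "roadm-a", "port": "3"},
                                {"node": "roadm-b", "port": "1"}])
submit_route(doc)
```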

My point here is that we need to be thinking about software processes in software terms.  I think that “open” interfaces in software are those that can be implemented freely, using accepted techniques for structuring information (XML, for example) and transporting information through networks (TCP, HTTP, XMPP, AMQP, whatever).  I think “standard” interfaces are important as basic definitions of functional exchange, but hard definitions define fixed structures.  In the current state of SDN and NFV it may be that flexibility and agility are more important.

We likely have the standards we need for both SDN and NFV interfaces in place, because we have standards that can be used to carry the needed information already defined—multiple definitions in most cases, in fact.  Where we have to worry about openness is in how providers of SDN or NFV actually expose this stuff, and it comes down not so much to what they implement but what they permit, what they “accommodate”.  I think that for SDN and NFV there are two simple principles we should adopt.  First, the information/data model used to hold resource and service order information should be accommodating to any convenient interface, which means it should not have any proprietary restrictions on accessing the data itself.  Second, the interfaces that are exposed should be fully published and support development without licensing restrictions.

This doesn’t mean that functionality that creates a data model or an interface can’t be commercial, but it does mean that a completely open process for accessing the data and the exposed interfaces is provided.  That’s “open” in my book.

Tech: Does it Crouch or Does it Stand Tall?

It should be clear to everyone in networking this week that the industry is moving to a very different place.  There was a time when the limiting factor in entertainment, communications, and empowerment was delivery.  We spent like sailors (as they say) to get better delivery because we had a business case for what we could deliver and it was hampered by the limits of network capacity and connectivity.  Practically nobody bothered with a strict business case.  I remember when financial analysts valued an Internet customer at $50 thousand, and valued Internet companies by multiplying that by the subscriber base.  The good old days.

This morning on one of the financial channels, a tech markets specialist was talking about an interview with Cisco’s CEO, and he commented that he didn’t understand how Cisco could harmonize an industry that was supposedly getting better, according to Chambers, with an “oh, by the way, we’re cutting staff.”  I think the answer is pretty clear; Cisco knows darn well that the industry’s not going to sustain their growth expectations.  Business cases have caught up with networking…with tech, in fact.

We are starting to see in networking what we’re seeing in computers, which is an exhaustion of the value proposition for further investment.  Clearly we’re not going to toss servers out and go back to high stools and accountants wearing green eye shades poring over ledgers, but we are already looking for cheaper servers just as Chambers is looking for lower human costs at Cisco.  ROI is the ratio between return and investment, and if you can’t eke out any more return you’ve got only one way to make ROI better.

In the computer space, we heard this week that Dell had gained on both IBM and HP, increasing its server sales when rivals saw theirs dropping.  I think that underneath all the disorder created by a potential going-private deal or getting raided, Dell is facing the transformation in computing as well as anyone could.  In fact, in some ways, the current mess sets Dell up with some ashes from which its phoenix can rise.  All they need to do is to face the future squarely.

Which is what?  In my terms, it’s the “supercloud”, an architecture for computing, networking, service creation, and service delivery that focuses on agility through virtualization.  Right now, we as an industry are stuck in conventionalism.  It’s like giving a kid a chance to play doctor, or lawyer, or police officer or maybe soldier, and they decide to play “wage slave” instead.  What good is imagination if all you can do with it is replicate the current reality?  Cloud computing, SDN, and NFV are all simply applications of virtualization: the realization of flexibility through the creation of a dynamic system of “abstraction-creation” and a generalized way of converting abstractions into reality by making dynamic resource assignments.  That’s what we should be doing, and we’re not.

The cloud lowers costs, they say.  SDN lowers costs, they say.  NFV will lower costs, they say.  All this lowering is just cutting the heart out of the industry, and for no reason.  We have, by my reckoning, almost three trillion dollars per year of available benefits to harness.  Dell, I think, sees that pie (or a piece of it) and is thinking about how to get it.  Who, in networking, can say the same?

Ironically, the vendors that seem to have the best grasp of the current reality are the vendors who for one reason or another would seem the least likely to have any grasp at all.  Alcatel-Lucent is a company that’s struggled with multiple personality disorder from its formation, and that’s never been able to tell an exciting story except maybe over a drink.  Huawei is a company that’s going to win in the current networking game because all its competitors are too dull to recognize that if there is no revolution in networking’s future, the future will be price-based commoditization that only Huawei can hope to win, a game Huawei is already winning.  Yet Alcatel-Lucent has taken some concrete steps toward the supercloud model with CloudBand, and Huawei is really looking at software-defining the network.

Network operators believe in the supercloud concept even if they’d not likely call it that.  They believe that by moving away from static appliances into virtual elements they can improve agility and service creation, and thus address opportunities faster and better.  They believe that the OTTs have nothing on them except the ability to exploit “free” services, and ad sponsorship is both not-free and inherently limited.  The whole global advertising budget is about one quarter of that available revenue upside for the industry.  Operators know how to sell services; OTTs know how to give them away.  In the end, which model wins?

I think the industry could have an exciting future, but as long as enterprises have the CIO reporting to the CFO instead of the CEO and as long as “transformation” means radical cost-cutting, we’re not going to reach it.  And this is the fault of the vendors, because there has never been a free market that succeeded by demanding that the buyer go out and evangelize to convince the seller to produce something.  Innovation and revolution don’t work by demanding that buyers take the risks.  Watch “Jobs” if you don’t agree with that.

People tell me that SDN isn’t going to work, and I agree.  Same for NFV, and for the cloud.  But my agreement isn’t based on a conviction that these concepts can’t work, but on the conviction that the goal we’ve set for them—the goal of wringing some pathetic chump change from re-execution of the tired junk we’ve relied on for decades—can never inspire the market.  We have tied virtualization, as a concept, to a tree and then criticized it for not getting around.  Well, we’re the problem here, and so we as an industry have got to fix it by fixing ourselves.

Cisco Proves Our Point

Well, if you thought that a telecom spending spree or an enterprise market flush with unexpected cash were going to drive Cisco and the industry to unprecedented growth, Cisco’s call likely disabused you of that notion.  While Cisco beat most estimates very slightly, its guidance was considered tepid, and certainly the continued references to a “challenging macroeconomic environment” on the call didn’t boost confidence.

Telecom spending is certainly ramping up a bit, and mobile is the biggest focus of that incremental spending; Cisco’s gains in mobile infrastructure are consistent with the market reality that you invest where you get the biggest return.  Enterprise spending is down a bit, and that’s consistent with the fact that networking has become a cost center and not an opportunity at the enterprise level.  Cisco’s success with UCS was impressive, but as I said yesterday it’s not enough to pick the low apples of server opportunity here.  Cisco needs to drive the bus to the future, not ride along as a passenger.

An area I think is a missed-so-far opportunity is metro.  Yes, it’s true that mobile gets the lion’s share of investment, but mobile investment is transforming metro networking.  There are at least 20 to 100 times as many mobile POPs in a metro as there are wireline POPs, and with LTE each of these will have more data feed than a whole CO did fifteen years ago.  Seamless handoff between WiFi and LTE also creates changes in topology, and CDN evolution is pushing caching out toward the edge.  What we need is a new metro infrastructure where all of the services that we create for mobile and content are created virtually from a mixture of SDN and NFV technologies.  I’m talking about “MaaS” or “Metro as a Service”, in effect.  If Cisco is doing well in mobile, they need to be talking MaaS to leverage that success.

The data center is another example.  Even leaving the cloud aside for the moment, we have this data-center-as-a-service thing going on in the media, which is for the most part vacuous hype and nonsense.  Underneath it, though (as is the case with most vacuous hype) is a kernel of truth that could be profound if anyone took the time and effort to deal with it.  We need to be thinking about a future where security, access rights, application coupling, big data, governance, business policies, and all that good stuff are attributes of the platform and not glue-on-and-hope-it-works extensions.  We need to replace data busses with knowledge busses, replace “big data” with “big insights”.  We’ve gotten so used to the idea of managing complexity that we’re risking slipping into creating more of it just so it can be managed.  The network is a necessary arbiter of information exchange, an element in every horizontal and vertical flow and every IT-driven productivity process.  Cisco is in a strong position to make a name for itself by creating DCaaS that means something.

And then, of course, there’s the cloud.  Let me spell things out for you here, Mister Chambers.  If you look at all of the things that could drive cloud data center deployment incremental to current deployments the only one that sticks its head above the statistical noise level is Network Functions Virtualization.  When we run our model on this, as I’ve said before, we find that an optimum NFV deployment would add between 14 thousand and 35 thousand metro-cloud data centers (small to mid-sized to be sure, but still data centers) in the US alone, about three times that globally.  If you want to succeed in the cloud, all you’d need to do is to win big in the optimum NFV space and you’re there.

Cisco has an NFV strategy they’ve been quietly developing in dialogs with Tier One carriers.  They need to expand it, to embrace the full potential of function hosting not only for the current services built on current middle-box technology but also for future services where there are no current box models to draw on.  The wonder of virtualization as an abstraction/instantiation process is that you can create new abstractions and instantiate them with many of the components you already have.  That would provide service agility to operators, and better top-line growth.  Top-line growth equates to bottom-line cost tolerance—more spending on infrastructure.

Cisco knows about cost management.  They’re cutting 4,000 jobs to improve their profits in a market where they believe they can’t raise revenues enough to provide the needed boost.  Why would they think their operator or enterprise customers are dancing to a different financial drummer?  Don’t they also have to face quarterly calls?  Won’t they respond to a lack of convincing revenue growth by cutting costs, and isn’t buying Cisco gear a “cost” to those buyers?  It’s time to pull your head out of the sand here.  Arguing that an Internet of Things or immersive telepresence or whatever is going to drive up traffic inexorably, and force operators to build out even at a loss, is as stupid as arguing that the need for employment drives Cisco to employ regardless of whether they can earn a profit on the result.

Networking is commoditizing, Mister Chambers.  Your buyers, in both the carrier and enterprise space, would be happy to spend more to get more return, but they won’t spend more while getting less.  It’s your responsibility, Cisco’s responsibility, to make it possible to justify more spending, and to justify it in a solid financially sensible way—the way you justify your own.

Where Will Cisco Find TAM?

Cisco reports its earnings tonight, so you might wonder why I’d blog about Cisco before the Big Event.  The answer is that I’m not blogging about what Cisco did in the last quarter, but about what it’s facing in 2014 (I may get to that last quarter on my blog on Thursday!).

Chambers has been very clear in his statements that Cisco’s future lies in becoming the top IT company, meaning that Cisco wants to grow out beyond networking.  Its UCS servers have in fact been instrumental in placing Cisco at the forefront of the cloud revolution, and certainly they’ve helped Cisco expand its total addressable market (TAM).  The challenge Cisco faces is that servers aren’t enough.

According to my surveys, Cisco’s UCS successes have been in the best possible areas—network-related server applications like cloud computing.  That’s helped Cisco to gain market share in the server space by leveraging its network incumbency and also helped to sustain reasonable margins on its servers, no mean feat given that servers are commodities in the broad market.  The thing is, Cisco’s aspirations commonly lead to the statement that it wants to be the “next IBM”, and taken literally that would be a bad thing.  IBM is suffering now, suffering from loss of strategic engagement created in part by poor articulation and in part by the diminishing role of computer hardware.  Dell gained market share on both IBM and HP, demonstrating that servers don’t have the old-time strategic shine.  And UCS, stripped of the Cisco cachet, is just servers.

I think that most of the smart strategists at Cisco—and there are many—realize that the “next IBM” path has to take Cisco through the cloud, not through the server.  In fact, cloud computing anonymizes servers because it virtualizes them.  However, “the cloud” can’t mean cloud stack software because the fact is that no new proprietary strategy has a ghost of a chance of competing with current cloud stacks.  Surely the cloud can’t mean “applications” to Cisco; they’d have to buy a bunch of companies to be credible.  So what does it mean?

One path, which Cisco has sort-of-taken with WebEx, is the “cloud-means-cloud-services” track.  Cisco’s biggest customers (network operators, government, and big enterprises) are generally happy to buy software as a service, and the former group would be happy to resell cloud-based services from others too.  The problem with cloud services is that margins on IaaS are very thin, PaaS is something for platform/OS providers to offer, and SaaS means you either have to own software or have a killer developer strategy.  I think this approach could ice some cloud cake if Cisco could get the lower layers baked, but it’s not going to fill the pan on its own.

Another path would be to look at the cloud-to-network boundary.  Software-defined networking is IMHO evolving into a two-layer architecture with a software connectivity framework on top and a hardware layer driven more by policy than by centralized controllers at the bottom.  Cisco has done a number of SDN-related acquisitions but it’s not really articulating a cohesive two-layer strategy at this point.  Might Cisco meld all its stuff into a new software-overlay model that’s designed from the first for this new two-layer SDN approach?  That would be a smart move in some ways.
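For illustration only, here’s a minimal sketch of that two-layer split (entirely hypothetical, not Cisco’s or anyone’s actual architecture): connectivity lives as software state in the overlay, while the underlay is steered by coarse traffic policies rather than per-flow commands from a central controller.

```python
from typing import Dict, List, Tuple

class Overlay:
    """Software connectivity layer: tenant endpoints and virtual links."""
    def __init__(self) -> None:
        self.links: List[Tuple[str, str]] = []

    def connect(self, a: str, b: str) -> None:
        self.links.append((a, b))  # pure software state; no device is touched

class Underlay:
    """Hardware layer driven by coarse policy, not per-flow central control."""
    def __init__(self) -> None:
        self.policies: Dict[str, str] = {}

    def set_policy(self, traffic_class: str, treatment: str) -> None:
        self.policies[traffic_class] = treatment

overlay = Overlay()
overlay.connect("tenant1-vm-a", "tenant1-vm-b")  # agile, per-service change
underlay = Underlay()
underlay.set_policy("tenant-overlay", "low-latency-path")  # stable, coarse
print(overlay.links, underlay.policies)
```

The design tension the text describes falls out of the split: the overlay can change per service without touching hardware, which is exactly what commoditizes the underlay.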

The only problem with it is that any virtual-overlay model of SDN tends to make the underlay part, the hardware, into a commodity even faster than before.  If hardware is where you make your bucks, that’s not good news.  I think it’s possible to meld the layers to create a whole greater than the sum of the parts, but I don’t know whether Cisco would be willing to take the risk…I don’t know that I would if I were in their place.

That leaves the last approach, which is to create a cloud platform from the network side, what I’ve called a “supercloud”.  OS players have a lock on PaaS now because most PaaS is a step up the stack from bare metal, meaning bare server.  Why not think about PaaS as a step above bare resources in a more general way?  Extend both hosting and networking in a common model, with the features exposed as platform APIs?

This is a notion with some precedent; Alcatel-Lucent’s CloudBand is in fact a model like this, though the company doesn’t position it that way (at least in a public sense).  And if you look at CloudBand’s architecture you find a cloud-plus-network framework in another way—the Carrier Platform as a Service is an overlay on the cloud that adds network functionality.  That’s where Alcatel-Lucent proposes to host network functions virtualization, for example.

The fact that a competitor is doing something like this could be a stimulus for Cisco or a negative.  Cisco likes to be a “fast follower” rather than a leader, letting others take the big risks, but they also don’t like to look like they’re embracing a competitor’s approach.  Alcatel-Lucent doesn’t represent any major threat to Cisco’s core router/switching business (nobody does) but they are a force to be reckoned with in the carrier space.

What might bring Cisco’s ultimate plan out into the open is NFV, and in particular the proof-of-concept initiative that’s developing in the NFV ISG itself.  PoCs aren’t supposed to be competitive; they’re supposed to validate or refine elements of the NFV specification process.  But it’s hard to see how a highly visible and highly credible PoC from a big network vendor wouldn’t become a standard against which other vendors would be judged.  Can Cisco keep its own approaches under wraps if somebody like Alcatel-Lucent jumps out and does something highly credible?  And remember, CloudBand is already an NFV platform as well as a platform for SaaS.

Cisco needs to make a move here, and soon.  There are only a few opportunities to grab onto a credible “supercloud” mission, and other little grasping fingers are already brushing the prize.

Lessons from Blackberry

We have a bit of an interesting counterpoint story this morning, involving the future of networking and how we navigate transitions.  Let’s open with Blackberry, which has announced it will form a group to seek bids for JVs, acquisitions, or whatever.

Blackberry was the darling of the mobile phone space prior to the iPhone, and while it’s tempting to think that what won for Apple and lost for Blackberry was the coolness, the truth is that Apple won because they realized that the smartphone market would be won by somebody who could sell the phones to consumers.  A mass market, when it exists, subsumes all the specialized markets, and iPhone success drove Blackberry out of its little niche.

I think everyone realizes now that the right move for Blackberry was to be more aggressive in the consumer space; in fact, they should have realized all along that this sort of thing would happen, and even who’d be the player likely to make the move.  Instead, Blackberry (RIM, then) hunkered down to defend its old position and withered there.

We now have a similar transition going on in networking.  There was a time when a third of all network revenue was attributed to business services and when businesses were the only consumer of broadband.  A typical T1 access connection (one end, not including the interexchange part) cost about nine hundred bucks per month.  Now we have services in the hundreds of megabits per second for a fraction of that price.  Networking overall has gone mass-market.

Business networking was about connectivity, and consumer networking is about experience delivery.  Everyone knows that if you’re getting something delivered, all you care about is that it gets there in one piece, and it follows that experience networking doesn’t offer a lot of opportunity for differentiation for the delivery part.  Bits are bits—ones and zeros, so differentiate that!  But it also follows that if experiences are what you care about, producing them is where the money is, and will be.

This is what’s driving the business of networking.  We are in an era where what matters is cheap transport and valuable experiences.  That’s what the revolution to the real “next-generation network” will have to contend with.  Produce bits as cheaply as possible and then toss them out like Johnny Appleseed to pave the way for profitable overlays.  The telcos, like RIM, made a mistake by not realizing this and grabbing some of those profitable overlays when they had the chance.

For the network vendors, the people who make money pushing bits, we’re at a similar crossroads.  The bits get cheaper and the experiences get better in our future, so you can try to capitalize on either or both of these moves.  The trends that impact these two directions are embodied in the concepts we call SDN and NFV.  Vendors have to accept that one or both of these options will dominate their future, or they go the way of RIM.

SDN is IMHO a pure defense, a bet that commoditizing bits will provide a mechanism to retool networks to simplify operations and so reduce their cost, and also to specificize services to the exact connection model needed for delivery, tossing out the more generic architectures of the IP and Ethernet communities that have dominated our thinking for decades.  I don’t disagree that it could be something that helps some vendors gain market share, but it’s still a less-than-zero-sum game here.  The bit-pushing pie is going to get smaller over time, more dominated by cost-based leaders like Huawei.  If you’re honest with yourself you can see this already.

NFV is more complicated.  Yes, the initial Call for Action white paper issued nearly a year ago was all about cutting costs by substituting general-purpose servers for more expensive appliances, but I think the founding operators knew all along that the architecture that hosted the virtual functions could rule the world later on, so to speak.  NFV is the potential union of all the networking concepts of the modern age, all directed at framing that elusive NGN model.  Whoever wins in NFV wins in NGN.

So who is that?  A recent report suggests it will be Cisco and NSN, but I’m not convinced.  Cisco does have the best inventory of NFV-related assets because it has (alone among the major network vendors) a strong server position.  But even a strong server position isn’t a durable asset in an initiative that’s targeting commercial off-the-shelf servers.  Cisco is going to get a RIM-like lead in the NFV space, an opportunity to shape its total vision based on unique and early access to a subset of the market.  We don’t know how to do NFV now, and somebody with all the pieces that could be used might well figure it out.  But they still have to be able to exploit that knowledge in a durable, profitable way.  And they have a big problem, which is that network transformation always hurts the incumbent the most.  That’s why RIM thought they should stick their head in the sand with consumer smartphones.

Cisco shares that risk of overhanging old products with NSN.  NSN’s “Liquid” stuff is smart positioning, but it’s not SDN and it’s not NFV, and if NSN were to make a big thing out of either of these technologies it would lead it down a path that would demand more network equipment, some of the very stuff NSN has been shedding, in fact.  NFV and SDN collide and cohabit in the metro, and NSN is more about mobile than metro.

Huawei is the Apple of networking, not because they have the consumer integration smarts but because they have the key asset in the network equipment space—cost leadership.  All they have to do is not-lose and they win.  For the rest of the vendors, the time has come to step up and take some proactive steps, before the Fate of RIM befalls them.

T-Mobile Teaches an (Indirect) NGN Lesson

T-Mobile has been shaking up the mobile industry with its no-contract-and-phone-upgrades plans, and they’ve worked to gain post-pay customers for the carrier, not unprecedented in the industry but certainly a first for T-Mobile in recent memory.  The problem is that this “good news” really isn’t more than a “transient good” because this model isn’t sustainable.

Suppose you’re in the milk business.  Milk is tough to differentiate on features and you don’t want a milk price war, so you decide to offer people a free glass with their milk, in return for their committing to buy from you for the next six months.  Your competitor will quickly match that, and maybe raise it to a two-glass set.  Next thing you know you’re offering a complete set of dishes and a table, and competitors are quickly matching every stride you take.  All that happens is that everyone’s profit margins go down the tube.

The answer, as we can likely see from this example, is to find a way to feature-differentiate what you’re actually selling.  As simple as this may seem, the telecom industry has done a truly awful job of facing the obvious, for a variety of reasons.  Biggest of these is that it takes time to find service differentiation, and operators are like every other business these days—focused on delivering the next quarter.  And network equipment vendors are in the same boat.

I talked to some sales people at some of the giant equipment vendors over the last month, and one thing I found interesting was that all of them said that their company was committed to strategic selling.  The problem was that they were also dedicated to meeting quota for the current period, and you get paid for the latter and not for the former.  Hint to sales managers:  Sales people sell what you commission them to sell, not what you tell them to sell.

Logically, the solution to this situation would be for vendors to build future service growth into current product sales, so that new stuff was suitable to build differentiation and ARPU.  The problem in that picture is that it’s extremely difficult to do much in the way of future-proofing at zero marginal cost.  With Huawei out there cutting margins for everyone as it is, who would want to go into a head-to-head with Huawei with a five or ten percent cost disadvantage created by adding features needed in the future but not in the present?  Sounds like a gold-edged milk cap to me!

This is background for the whole SDN and NFV thing, IMHO.  Underneath it all, what operators are trying to do is transform their business model in a way that doesn’t require them to pay too much forward—they have to control “first cost” and return on infrastructure.  In the surveys I do, I see a clear signal that they want to transform in two phases—a phase that delivers next-gen infrastructure justified by current-period cost reduction, and another phase that exploits that infrastructure to create differentiated services.

SDN was a failure as a strategy here.  First, it doesn’t deliver revolutionary savings.  Second, it is a connection-plane model so it’s not going to create higher-level feature value-add, and at the end of the day pushing a bit with Technology A isn’t much different to the bit-pushee than it would be with Technology B.  Now the torch has been passed to NFV, and NFV has the advantage of being primarily an IT technology rather than a network technology.  You can say that it’s a “virtualization” approach, but it’s really more a software approach.  With NFV you say goodbye to boxes and run applications that replicate the features those boxes provided.  If you demand sanctioned, standardized interfaces and functionality for each virtual element then it’s easy to play best-of-breed and lowest-price with competing vendors, so your costs are managed.  That’s where the first-phase benefit comes.
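A toy example makes that first-phase economics plain (the vendors and prices here are invented): once every virtual function type has a sanctioned, standardized interface, conforming implementations are interchangeable, and supplier selection reduces to price.

```python
from typing import Dict, List

# Competing implementations of the same standardized virtual function types.
catalog: Dict[str, List[dict]] = {
    "firewall": [{"vendor": "A", "price": 900}, {"vendor": "B", "price": 700}],
    "nat":      [{"vendor": "A", "price": 400}, {"vendor": "C", "price": 350}],
}

def cheapest(function_type: str) -> dict:
    """Conforming implementations are interchangeable, so price decides."""
    return min(catalog[function_type], key=lambda offer: offer["price"])

service_chain = ["firewall", "nat"]
selection = [cheapest(f) for f in service_chain]
print(selection, sum(offer["price"] for offer in selection))  # total: 1050
```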

For the second phase, you’ve got to be more of a visionary.  The cloud is the future of IT, so the cloud is the future of NFV.  If NFV can create an architecture that can deploy and manage services created by cooperative virtual functions, it can deploy multi-component cloud computing applications, and hybrids of these two things.  This means that the operator gets a future-proof service layer as a byproduct of getting the benefits of capital cost reduction.  No first-cost risk any more.

Vendors, of course, are largely un-thrilled with this model.  While it’s possible that it might create as much or more trouble for Huawei as it does for the rest of the field, it creates a whole new host of competitors and it also eliminates the proprietary lock that appliance-based services tend to create.  At any rate, the vendors are all very careful to guard the boundaries of whatever NFV they’re thinking about in an attempt to protect what they have while chasing what their competitors might have.  But with NFV established as a common goal of network operators, can vendors ignore it?  The first guy who conforms gets an advantage, so in the end the vendors will follow along just like the other operators followed T-Mobile’s lead in plans and phones.

The challenge for NFV in meeting its goals, and those of the operators, is partly technical and partly procedural.  On the technical side, the management of this new “supercloud” is something out of the twilight zone in terms of issues and changes.  Attempts to solve the management problem of NFV infrastructure through traditional means risk compromising opex and the whole NFV business case.  On the procedural side, the big question is whether the ISG can sustain an open model at the three layers of NFV—the virtual network functions that replace appliances, the ecosystem that deploys and manages them, and the infrastructure they run on.  Implementations will be the final proof of this point, and we’ll likely see many more coming this fall.

Can NSN Change More than Its Name?

NSN has changed ownership but not its name; it’s simply changed what the acronym decodes to.  “Nokia Solutions and Networks” is actually kind of appropriate at two levels.  First, obviously, it conveys Nokia’s sole ownership of the venture.  Second, it conveys that what NSN has been turning into is a “solutions” or professional services player.  By shedding more product lines, NSN has actually anticipated a general trend among the giant telecom vendors—spin off or sell off the low-margin stuff where you can’t differentiate on features and abandon those fields to Huawei or ZTE.

If you ran a plumbing supply store, you might find yourself with a mixture of pipes and fittings on which you earned maybe 15% gross margins, some fixtures like toilets that gave you 30% margins, and some gilt-edged toilet seats that delivered 100% margins.  You might also have an installation department that delivered 40% margins.  If customers were looking for a one-stop shop, your profits on an overall plumbing job could be decent.  The problem comes when they start shopping around for all the pieces, acting as their own “general contractor”.  Then you’re forced to stop carrying the low-end stuff because you can’t compete on price, and that low-margin basement product set doesn’t pull through the good things.

The question is whether, once you’ve done this, you see enough deals.  When a customer wants a new toilet they likely want a new seat, and if you sell only seats they get used to seeing the guy who sells both seat and toilet.  They then go to that supplier when they need a seat alone.  The moral is that you can’t shed pieces of a natural ecosystem.  NSN wants to supply mobile technology, which at one level seems like an ecosystem but which is really a part of a larger system called “metro”.  That makes them vulnerable.

You probably know by now that I’m a champion of metro networking as the foundation for carrier spending and vendor opportunity, but I have good reasons.  First, everybody invests where they get return, and metro return is in some cases ten times core return.  Nearly all the profitable traffic operators carry never gets out of the metro network.  Second, metro is being transformed by ecosystemic pressure, which means that the combination of metro toilets and metro seats (so to speak) is highly valuable to buyers.

One ecosystemic pressure on metro is SDN, which is a technical response to the problem of low return on access bandwidth.  Operators want to flatten the OSI layers to reduce both capital and operations costs, and SDN offers a theoretical way of doing that.  I emphasize the “theoretical” because there have been few presentations of SDN that actually live up to potential.  Operators today tell me that there has been no significant improvement in metro economics other than the consistent reduction in unit cost we’ve seen all along.  Price is falling faster, so that’s not enough.

Another ecosystemic pressure on metro is created by NFV.  NFV is a kind of one-two punch in terms of impact.  In the near term, NFV evolution is justified by the reduction in capital spending on custom devices.  In the middle term, the key is the operationalization of cloud-hosted services and features, and in the long term it’s the agility that software-based networks offer in responding to market trends.  But whatever drives NFV deployment, the impact of it is to create a different style of metro network, almost an overlay or parallel structure.  The “metro cloud” is a highly meshed collection of NFV-justified data centers, some as small as perhaps a rack or two of gear and some large enough to change local weather.  Metro today is an aggregation network; this future metro cloud becomes the way we deliver things like content and even the way we manage mobility.  There’s a continuous connection between it and the old aggregation structure, and that creates this highly dynamic ecosystem.  If you can supply the metro of the future, then you win.  You can supply the metro of the future if you can supply the key elements in the metro cloud.  The rest of the infrastructure is just those low-margin pipes and fittings.

So where does that leave NSN?  NSN apparently believes that because mobile networking, meaning things like IMS and the RAN, is getting more capital attention than the rest of the network, they can leverage a mobile win and control enough accounts.  My view is different: NSN wants the toilet seat to pull through the toilet.  In order to gain conclusive control of metro without having all those cheap pipes and fittings, NSN has to command the metro cloud, and that’s a very tall order for a company without a shred of cloud DNA.

Even that won’t be enough, though.  Anyone who has ever shopped for bathroom fixtures knows that advertising is a big part of any victory a seller might gain.  Nobody becomes a retail giant by building a store somewhere and then hoping people will stop in on their way by.  You have to draw people in, and that’s something network vendors are truly awful at in general, but that the Eurogiant vendors have been particularly bad at.  Does NSN have an SDN, NFV, and cloud position?  Not really; they said they were addressing “business issues first and technology issues second”.  The problem is that buyers in the metro space are looking for technology that transforms their business, and the NSN articulation seems to be a set of goals without any realizations (see http://www.lightreading.com/packet-core/nsn-unveils-its-technology-vision/240156505).

NSN didn’t change their acronym, and that’s a good thing for people like me who would otherwise struggle for six or more months getting used to a new name.  The real question is whether they changed anything, and that’s the question I now want them to answer…for me and for the market.