What Winds are Blowing the Cloud?

The Light Reading story on Verizon’s comments regarding cloud adoption by enterprises is yet another example of the issues that face a revolutionary development in an age of instant knowledge gratification.  It’s a decent story, and it frames the cloud dilemma of our time.  There is no question that the cloud is revolutionary, but there’s a big question about our readiness to project its value effectively.

Almost 20 years ago, CIMI started modeling buyer behavior based on a decision model, and that process obviously required us to survey buyers to determine how they made decisions and what factors influenced them.  There are obviously differences in how revolutionary stuff is justified versus evolutionary stuff, and this survey process made it clear that for revolutionary developments buyers pass through some distinct phases.

The first phase is the buyer literacy development phase.  You can’t make a business case for something if you can’t begin to assess the costs and benefits.  When something like the cloud bursts on the scene, the first thing that happens is that businesses start grappling with the question “what good is it?”  In the last fifteen years, this grappling process has been hampered by the fact that objective information on value propositions is increasingly difficult to obtain.  In the spring survey, buyers told us that there were no truly credible sources for business-level validation of a revolutionary technology.  Vendors over-promote, the media trivializes, and analysts are seen as either agents for the vendors or simple-minded regurgitators of the standard market party line—everything new is good.

When buyer literacy reaches about 33%, meaning when about a third of a market can at least frame the business value proposition, the market can sustain itself naturally.  When literacy is below that, the problem is that buyers can’t readily assess the technology, and can’t validate it through conversations with their peers.  We have yet to reach anything like that threshold with the cloud, and so the market is struggling, and Verizon’s original comment about slower-than-expected adoption is right on.

Obviously, a sub-par market literacy rate isn’t the same as a totally illiterate market.  Within the mainstream enterprise market are pockets (verticals) where literacy exceeds the required threshold, and in those sectors it’s possible to move things along.  We are in fact seeing some of that movement today, and so when Verizon says that the cloud’s adoption rate is picking up that statement is also correct.

The real problem isn’t the inconsistency of Verizon’s view of the cloud, it’s the question of just what “value proposition” buyers are becoming literate with.  Stripped of all the hyperbole, the value proposition of the cloud is that hosted resources can be more efficient in addressing IT needs than dedicated resources, because the latter will likely be under-utilized and present lower economies of scale for both capital/facilities costs and operations costs.  That statement is true for the SMB space, because smaller IT users never achieve good IT economies.  It’s also true for some enterprises, where distribution of IT resources has already created pressure for “server consolidation”.  It is not true for mainstream IT functions, because large enterprises achieve nearly as good an economy of scale as a cloud provider would.  Furthermore, cloud pricing strategies and the inherent dependence of cloud applications on network connectivity create issues in price/performance, and most companies will be reluctant to cede mission-critical apps to a cloud player (look at Amazon’s recent outage as a reason for this).  So where Verizon is not correct is in inferring that we are seeing a gradual but uniform acceptance of the cloud.  What we’re seeing is increasingly literate buyers recognizing what makes a good cloud application and what makes a lousy one.

If your app runs on a rack-mount server in a big data center with backup power and a staff of techs keeping it going, you are not likely to gain much by taking this app to the cloud.  If the app requires maintaining and accessing a couple terabytes of data per day, you’re not likely to get anything out of the cloud either.  More resource utilization, less cloud value.  More effective in-house operations support, less cloud value.  Higher data and access volumes, less cloud value.
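Those three rules of thumb can be condensed into a toy scoring sketch.  To be clear, the weights and scale below are invented purely for illustration; they come from no survey or model of mine:

```python
# Toy "cloud suitability" score: value falls as utilization, in-house ops
# maturity, and daily data volume rise. Weights are invented for illustration.
def cloud_value_score(utilization, ops_maturity, daily_data_tb):
    """utilization and ops_maturity are in [0, 1]; daily_data_tb is TB/day.
    Returns a rough 0-100 score where higher means a better cloud candidate."""
    score = 100.0
    score -= 40 * utilization             # well-utilized servers gain little
    score -= 30 * ops_maturity            # strong in-house ops gain little
    score -= min(30, 10 * daily_data_tb)  # heavy data movement is penalized
    return max(0.0, score)

# An under-utilized SMB server vs. a busy, well-run enterprise app:
print(cloud_value_score(0.2, 0.3, 0.1))  # high score: good cloud candidate
print(cloud_value_score(0.9, 0.9, 2.0))  # low score: poor cloud candidate
```

The point isn’t the particular numbers; it’s that every one of these inputs pushes in the same direction, which is why “uniform acceptance” is the wrong model.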

Of course, all of these truths (and truths they are, regardless of vendor or media hype) are predicated on an important assumption that might not be true itself.  The assumption is that the value of the cloud is in its ability to lower costs.  That value proposition, in my surveys, has consistently led to the conclusion that about 24% of current IT spending could be cloudsourced.  Less than a quarter, and not including the prime business core apps.  But suppose we went deeper.  We could gain nearly four times this level of cloud adoption by fully realizing the potential of cloud computing to support point-of-activity empowerment, the framing of IT power at the service of a worker wherever that worker is and whatever that worker is doing.  But to do this we have to rethink the way we empower workers, we have to rethink the way we build applications, how we organize and store data, how we couple processes to business activities…everything.

Which is why you don’t hear about this from cloud players.  Who wants to sit back and live off the in-laws while you wait for your prospects to restructure their whole IT plan?  Why not just go out there with a simple “get the cloud to save money” story, and hope that somehow enlisting media and analysts to support the cloud as being a kind of divinely mandated future will pay off for you?  Earth to cloud providers—you will never achieve your goals if you don’t help your prospects set and achieve reasonable goals for themselves.

Two Winds Blow Changes

Out with the old, in with the new.  Sounds like New Year’s Day, and so clearly inappropriate to the season in a calendar sense.  It also works with networking, though, and we have a couple of the OWTOIWTN items in news today.

First, financial sources are saying today that Juniper is dropping MobileNext, its architecture for mobile metro infrastructure and connectivity.  One of the themes of Juniper’s future success, at least as touted by analysts and on some earnings calls, was that MobileNext would take off and increase Juniper’s share of the mobile capex pie, the only pie getting bigger in all of the carrier world.  So what now?

Actually, Juniper hasn’t been setting the world on fire with MobileNext, so in one sense it’s probably not losing much in a tangible sense if the story proves true.  Operators have tended to look to vendors with a bigger piece of the total mobile infrastructure picture in their configuration diagrams, particularly RAN and IMS.  Juniper didn’t have that and so was always a bit on the outs, and it was late to market to boot.  What Juniper is losing is whatever intangible upside a bigger chunk of mobile spending might bring, but that’s where things get complicated.

Mobile metro infrastructure, notably the Evolved Packet Core (EPC) stuff, is high on operators’ list of things they want network-function-virtualized.  Nearly everyone is already offering, or close to offering, a virtual MME, the signaling-plane part of the process, but operators are looking for hosted versions of the critical PGW and SGW elements that handle traffic.  Given that, it really doesn’t make a lot of sense to expect big windfalls from an EPC-focused product line.

But then there’s the flip side.  If you are a vendor who wants mobile capex bucks in your own pocket, and if you know you lack the full mobile ecosystem, why not jump out with your own totally virtual solution to mobile metro infrastructure?  Particularly when operators tell me that they believe the big players in EPC will not surrender their lucrative business by exposing their assets in virtual form.  The operators want virtual EPC, not just cloud-hosted discrete pieces of 3GPP diagrams, but a single metro-black-box that exposes external EPC-ish interfaces and does everything under the covers, but using new technology—like SDN, which Juniper has.

Poison the well here, Juniper.  If you can’t make EPC money, you should create a virtual EPC, a whole virtual metro core, and make sure nobody else makes that money either.  That would twist competitors’ noses, make your customers happy, and make this whole MobileNext thing look less like a gigantic case of burying your head in the sand as the industry moved on past.

The other news item is the suggestion that Australia could walk away from its ambitious plan to push fiber broadband to over 90% of Australian homes and businesses.  The new proposal, which some say would cut the cost of the NBN by more than half, would cover just a fifth of homes with FTTH and the remainder via FTTN, with a DSL hybrid making the final connection.  While this sounds like a typical political bait-and-switch game, the real news is that the NBN has so far reached only a small fraction of the target deployment.  That’s what I think raises the big question.

Australia, like some areas of the US, suffers from low “demand density,” the revenue per square mile that can justify broadband deployment by generating ROI.  The solution there was to bypass the incumbent (Telstra) and drive a publicly funded not-for-profit access business.  Run by?  Well, a former vendor CEO for one.  When this came about, I was highly skeptical, given that every politician loves to talk about giving the public free or nearly free Internet at extravagant levels of performance, at least till the actual deployment has to start and the actual bills have to be paid.  Creating what was just short of outright nationalization of a public company’s assets to protect a scheme that had no credible foundation to prove it could ever succeed wasn’t a good move.

It probably was an inevitable one, though.  The lesson of Telstra is that you have to let access carriers and ISPs make money.  If you don’t, they don’t invest.  If you have an unfavorable geography or customer base, such that the ROI needed to offer great services isn’t likely available, you can do some taxing or subsidy tricks to make things better, but you’d darn well better think carefully about giving the whole problem to the government to solve.  Incumbent carriers know how to run on low ROI.  If we did any credible measurement of “government ROI,” where do you suppose the numbers would fall?  In the toilet, in most cases.

Telecom regulation has to be an enlightened balance between consumer protection and “the health of the industry”.  We have passed over onto a decidedly consumeristic balance in Europe and under Genachowski the US has done some of the same.  Australia simply extended this bias to its logical conclusion.  If you can’t get companies to do what you want for the price you want because it’s not going to earn respectable ROI, you let the government take over in some form.  Maybe you force consolidation, maybe you quasi-nationalize.  In all cases, you’ve taken something you had regulated and made work as a public utility, and turned it into a business experiment.  Australia shows that experiment can have a bad outcome.  The government shouldn’t be running networks, and they can’t micromanage how they’re run if they want anyone else to run them either.

Shrinking Big Data and the Internet of Things

If you like hype, you love the cloud, and SDN, and now NFV.  You also love big data and the Internet of things.  I’m not saying that any of these things are total frauds, or even that some of them aren’t true revolutions, but the problem is that we’ve so papered over the value propositions with media-driven nonsense and oversimplification that it’s unlikely we’ll ever get to the heart of the matter.  And in some cases, “the heart of the matter” demands some convergence of, or mutual support among, these technologies.

The “Internet of things” is a good example of a semi-truth hiding some real opportunity.  Most people envision the Internet of things to mean a network where cars, traffic sensors, automatic doors, environmental control, toll gates, smart watches, smart glasses, and maybe eventually smart shoes (what better to navigate with?) will reside and be exploited.  How long do you suppose it would take for hackers to destroy a city if we really opened up all the traffic control on the Internet?

The truth is that the Internet of things is really about creating larger local subnetworks where devices that cooperate with each other but are largely insulated from the rest of the world would live.  Think of your Bluetooth devices, linked to your phone or tablet.  The “machines” in M2M might use wireless and even cellular, but it’s very unlikely that they would be independently accessible, and in most cases won’t have a major impact on network traffic or usage.

These “local subnetworks” are the key issue for M2M.  Nearly all homes use the private address range 192.168.x.x, which offers over sixty-five thousand addresses in total but only 256 per subnet.  There are surely cities that would need more, but even staying in IPv4 the next private range (172.16.x.x) offers over a million addresses, and there’s a single Class-A-sized private range (10.x.x.x) with over 16 million.  Even though these addresses would be duplicated in adjacent networks, there’s no collision, because the device networks would be designed to contain all of the related devices, or would use a controller connected to a normal public IP address to link the subnets of devices.
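For anyone who wants to check the address arithmetic, Python’s standard ipaddress module makes it easy (the private ranges here are the RFC 1918 blocks):

```python
# Sizes of the RFC 1918 private address blocks, via the stdlib ipaddress module.
import ipaddress

ranges = {
    "192.168.0.0/16 (old Class C space)": ipaddress.ip_network("192.168.0.0/16"),
    "172.16.0.0/12  (old Class B space)": ipaddress.ip_network("172.16.0.0/12"),
    "10.0.0.0/8     (old Class A space)": ipaddress.ip_network("10.0.0.0/8"),
}

for label, net in ranges.items():
    # num_addresses counts every address in the block.
    print(f"{label}: {net.num_addresses:,} addresses")

# A home router typically carves one /24 out of 192.168.0.0/16:
home = ipaddress.ip_network("192.168.1.0/24")
print(f"Typical home subnet: {home.num_addresses} addresses")
```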

What this is arguing for is a control architecture, and that’s the real issue with M2M or the Internet of things.  If we have local devices like wearable tech, the logical step would be to have these devices use a local technology like WiFi or Bluetooth to contact a controlling device (a phone or tablet).  The role of this controlling device is clear in a personal-space M2M configuration; it’s linking subordinate devices to the primary device.  In sensor applications of M2M, this controller would provide the central mediation and access control, the stuff that lets secure access to the network happen or that provides for control coordination across a series of sensor subnets.

To me, what this is really calling for is something I’ve already seen as a requirement in carrier and even enterprise networks—“monitoring as a service”.  The fact that you could monitor every sensor in a city from a central point doesn’t mean that you have to or even want to do it all at the same time.  In a network, every trunk and port is carrying traffic and potentially generating telemetry.  You could even think of every such trunk/port as the attachment point for a virtual tap to provide packet stream inspection and display.  But you couldn’t make any money on a network that carried all that back to a monitoring center 24×7, or even generated it all the time.  What you’d want to do is to establish a bunch of virtual tap points (inside an SDN switch would be a good place) that could be enabled on command, then carry the flow from an enabled tap to a monitor.  Moreover, if you were looking for something special, you’d want to carry the flow to a local DPI element where it could be filtered to either look for what you want to see or at least filter out the chaff that would otherwise clutter the network with traffic and swamp NOCs with inspection missions.
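A minimal sketch of the idea, with every name invented for illustration (no real SDN controller API is implied): virtual taps that are dormant by default, generate nothing until enabled, and apply an optional DPI filter right at the tap point:

```python
# Toy model of "monitoring as a service": dormant virtual tap points that
# only forward (filtered) traffic when explicitly enabled. All names are
# illustrative, not drawn from any real SDN or monitoring product.

class VirtualTap:
    def __init__(self, port_id):
        self.port_id = port_id
        self.enabled = False
        self.dpi_filter = None          # optional predicate applied at the tap

    def enable(self, dpi_filter=None):
        self.enabled = True
        self.dpi_filter = dpi_filter

    def disable(self):
        self.enabled = False
        self.dpi_filter = None

    def observe(self, packet):
        """Return the packet for the monitoring center, or None."""
        if not self.enabled:
            return None                 # dormant taps generate no telemetry
        if self.dpi_filter and not self.dpi_filter(packet):
            return None                 # local DPI drops the chaff
        return packet

# Enable one tap out of many, filtering for a specific flow:
taps = {p: VirtualTap(p) for p in ["sw1/1", "sw1/2", "sw2/7"]}
taps["sw1/2"].enable(dpi_filter=lambda pkt: pkt.get("dst_port") == 443)

captured = [t.observe({"dst_port": 443, "len": 1400}) for t in taps.values()]
# Only the enabled tap with a matching filter returns anything.
print([c for c in captured if c])
```

The design point is that the cost is paid only when someone asks a question, not continuously on every trunk and port.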

This to me is a good example of what we should be thinking about with virtual networking.  If networking is virtual, then network services should be defined flexibly.  Who says that IP or Ethernet forwarding are the only “services”?  Why not “monitoring as a service” or even “find red car in traffic?”  NFV, in particular, defines network functions as hosted elements.  Cloud components and all manner of traffic or sensor-related functionality are all hosted elements too, so why not envision composite services that offer both traffic handling and device control (as most networks do) but also offer functional services like monitoring-as-a-service or “find-red-car?”

In at least the monitoring applications of M2M, “big data” may be more an artifact of our lack of imagination in implementation than an actual requirement.  OK, some people might be disappointed at that conclusion, but let me remind everyone that the more complicated we make the Internet of things, the more expensive it is and the less likely it is to ever evolve because nobody will pay for it.  We have to rein in our desire to make everything into an enormous tech spending windfall because there is absolutely no appetite for enormous tech spending.

SDN and NFV cooperate through network services, too.  Simplistic NFV design might use SDN only to create component subnetworks where virtual functions live.  But why stop there?  Why not think about all the services that completely flexible packet forwarding could create?  And then why not combine SDN connection control with NFV function control to produce a whole new set of services, services truly “new” and not just hype?  Could we perhaps find an exciting truth more exciting than exaggeration?  Well, stranger things have happened.

Two Tales, One Cloud

If you’re a cloud fan, which I am in at least the sense that I believe there’s a cloud in everyone’s future, it’s been a mixed week for news.  VMware has announced its Nicira-based NaaS platform, aimed I think at cloud data centers, and the move has gained a lot of traction among the “anybody-but-Cisco” crowd.  Meanwhile, a major Amazon outage has made more people wonder how cloud reliability can be better when cloud outages seem a regular occurrence.

On the VMware side, I think that there’s an important move afoot, but not one as revolutionary as VMware might like to portray.  If you look at software-overlay SDN, the space Nicira launched, it’s evolving in two dimensions.  First, it’s spreading toward a complete network architecture by embracing more end-to-end capability.  Second, it’s becoming a more formal “network-as-a-service” framework, focusing on what it delivers more than on how it’s delivered.

The challenge for VMware is that anything that’s designed to be a piggyback on virtualization is going to be inhibited with respect to both these evolutions and for the same reason—users.  Making NaaS work inside a data center or even at the heart of a cloud isn’t all that difficult, but the challenge is that you’re either focusing on NaaS services that are horizontal inter-process connections or you’re doing one half of a network—the half where the application or servers reside—and not the end where users connect.  With limited geographic scope you can’t be a total solution.

I think it’s very possible to construct a model for enterprise network services wherein each application runs in a subnet, each category of worker runs in a branch subnet, and application access control manages what connections are made between the branches and the applications.  VMware could do this, though I admit it would force them to create branch-level software SDN tools that would necessarily rely on software agents in end-system devices.  But would VMware’s “excited” new partners jump on a strategy that threatened network equipment?  “Anybody but Cisco” has more partner appeal than “Anything but routers!”

The thing is, all of this protective thinking is inhibiting realization of SDN opportunity by limiting the business case.  SDN isn’t one of those things that you can cut back on and still achieve full benefits.  The less there is of it, the less value it presents, the less revolution it creates.  For VMware and its partners, the big question is whether SDN in their new vision is really “new” enough, big enough, to make any difference.  What it might do is set Cisco up to turn the tables on them, because nobody will like little SDN in the long run.  Go big or go home.

With respect to Amazon, I think we’re seeing the inevitable collision of unrealistic expectations and market experiences.  Let me make a simple point.  Have twenty servers spread around your company with an MTBF of ten thousand hours each, and you can expect each server to fail on average about once every fourteen months, but there’s a pretty good chance that at least one of them will fail in any given month, so something will be down often.  Put the same twenty servers in a cloud data center behind a single cloud network with perhaps twenty devices in it and you have a whole new thing.  We can assume the same server MTBF, but if the network works only when a half-dozen devices all work, the MTBF of the network is a lot lower than that of the servers, and when the network fails all the applications fail, something that would have been outlandishly improbable with a bunch of servers spread around.
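The arithmetic is easy to check, assuming independent failures at a constant rate (the usual exponential model, which real hardware only approximates):

```python
# Back-of-the-envelope check of the failure arithmetic above, assuming
# independent failures at a constant (exponential) rate.
MTBF_HOURS = 10_000
SERVERS = 20

per_server_rate = 1 / MTBF_HOURS            # failures per hour, one server
fleet_rate = SERVERS * per_server_rate      # failures per hour, whole fleet
hours_between_failures = 1 / fleet_rate     # mean time between fleet failures

print(f"One server fails about every {MTBF_HOURS / 24 / 30.4:.1f} months")
print(f"Somewhere in the fleet: about every {hours_between_failures:.0f} hours "
      f"(~{hours_between_failures / 24 / 7:.1f} weeks)")
```

Twenty servers at 10,000 hours each means a failure somewhere roughly every 500 hours, which is why “something will be down often” even though each individual box seems reliable.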

My point here is that the cloud is not intrinsically more reliable than discrete servers, it simply offers more and better mechanisms to control mean time to restore, or MTTR.  You may fail more often in the cloud but you’ll be able to recover faster.  If one of our 20 servers breaks it might take hours or days to get it back—you send a tech and replace it, then reload all the software.  Amazon was out less than an hour.  Could Amazon have engineered its cloud better, so the outage was shorter?  Perhaps, but it would then be less profitable and we have to get away from our childish notion that everything online or in the cloud is a divine right we get at zero cost.  Companies either make money on services or they don’t offer them.
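The standard steady-state formula makes the point: availability is MTBF divided by (MTBF plus MTTR), so a system that fails more often can still deliver more uptime if it restores much faster.  The cloud-side numbers below are invented purely to illustrate the tradeoff:

```python
# Steady-state availability: fraction of time a system is up.
def availability(mtbf_hours, mttr_hours):
    return mtbf_hours / (mtbf_hours + mttr_hours)

# A dedicated server: rare failures, but a truck roll and reload to fix.
dedicated = availability(10_000, 48)
# A hypothetical cloud service: failures five times as frequent, but
# automated recovery in half an hour.
cloud = availability(2_000, 0.5)

print(f"dedicated: {dedicated:.5f}, cloud: {cloud:.5f}")
```

Under these (invented) numbers the cloud service is up a larger fraction of the time despite failing far more often, which is exactly the MTTR-versus-MTBF point.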

The fault here isn’t Amazon, it’s ours.  We want to believe in the cloud, and we don’t want to fuss over complicated details like the formulas for calculating MTBF and MTTR for complex systems of devices.  The details of cloud computing are complicated, the suppliers of the services and the vendors involved have little to gain by exposing them, and so if buyers and the media don’t demand that exposure we’ll never get to those complexities, and never really have an idea for how reliable a given cloud service is, or could be.

The other point I think is interesting about the Amazon cloud outage is that we’ve had several of these now and the big news is the number of social-network sites that are out, not the companies who have lost all their compute capabilities.  It’s not that company outages aren’t important, but that many of Amazon’s biggest cloud customers are likely social-network startups.  That’s not a bad thing, but it does mean that those who use Amazon’s cloud growth as a measure of cloud adoption by enterprises may be mixing apples and oranges.

Two tales, and both suggest that we’re not getting the most from the cloud because we’re not trying hard enough.

Microsoft’s Problems: More than One Man Deep

Probably a lot of people, both now and in the future, are going to say that Steve Ballmer’s departure from Microsoft was “the end of an era”.  Certainly in a management continuity sense that was true; Ballmer was Gates’ heir apparent after all, so he was a continuation of the original Microsoft.  What’s not certain is whether the original Microsoft has ended.  If it hasn’t, then Microsoft is in big trouble.

Another thing a lot of people are going to say is that Microsoft wasn’t “innovative,” and that’s not true IMHO.  Microsoft had all the pieces of the mobile-device pie in place, had an early insight into the cloud, and had street cred in all of the critical innovations of our time, at least as early as some who are now known for “developing” those spaces.  The thing that hurt Microsoft was that classic need to trade off some of the present to seize the future.  Behind every Microsoft innovation was the shadow of Windows, and Microsoft could never get out of it.

I don’t see much value in reprising how Microsoft got to where they are, except in a narrow sense to weave the tale of how they might get out of it.  If you spent too long looking out the Windows (so to speak), then you’ve got to turn away and do something else, and that raises two questions.  First, what else?  Second, is it too late for any steps to save Microsoft?

If you polled the masses for where Microsoft needed to go, you’d likely get “the cloud” and “wearable tech” as your answers.  I think that underneath it all, these are two faces of the same coin.  As technology becomes more portable, it’s not surprising that you’d start to take it with you.  If you do, then you’re likely to weave it into your life more intimately than you’d weave something that was nailed to your desk somewhere.  If you do that, it becomes an information conduit to some broader infrastructure—the cloud—and it’s also helpful to have those tech elements integrated in some way with what you normally have and wear while romping about.

The point here is that smart appliances create a new behavioral revolution, tech-wise, and it’s that revolution that Microsoft has to play.  What happens to how we live, entertain ourselves, work, play, think, decide, when we have an awesome agent clipped to our belt or in our pocket or on our wrist or nose?  This is the stuff Microsoft needed to be thinking about, and still needs to plan for in some way.  The PC was a more distributable form of computing than the mini, which was a more distributable form of the mainframe.  We still have mainframes and minis, but as we move to smaller and more portable devices we shift our behavior to accommodate a more intimate interaction with technology.  Microsoft wanted to see the future as the PC versus these things, and had IBM done that when PCs first came along they’d likely have been bought by somebody else by now.  Which could happen to Microsoft, in the extreme.

So what does Microsoft need to do?  Start with the behavior and not with the device.  How exactly will people change their lives based on portable technology?  We know that whatever it is, it will present itself as device agents for cloud knowledge and power.  That means a new software architecture, a new network architecture, new devices.  If I have a phone and a tablet and a watch and glasses that are all empowered, do I have to contort myself into a knot to look at them all in quick sequence?  Imagine walking down the street in a crowd where everybody’s doing that; it boggles the mind what Times Square might look like.  So you likely have to see wearable tech as a dynamic ecosystem.  That’s a space where the Apples and Googles haven’t got all the answers yet, so Microsoft could do something there too.  All of these behavioral impacts create opportunities, but the opportunities don’t endure forever.  It’s too late to have a tablet success, or a phone success, Microsoft.  You need to have a behavior success.

All of this is true for the rest of the IT and network industry as well.  For Intel and other chip makers, we’re moving into a time when the big-money items will be on the polar extremes of the tech space—little chips that run at lower power and can be integrated into devices, and big behemoth knowledge-crunching technology suitable for a cloud data center.  The new model makes a lot of middle ground go away and there’s nothing that can be done to save it.

In networking we know that the most critical development will be a “subduction” of the Internet into an agent-cloud model.  That was already happening with icons and apps and widgets and so forth.  Nobody can effectively use behaviorally empowering technology if they have to spend ten minutes entering search terms.  They have to have shortcuts that link whim to fulfillment.  That’s interesting because it reshapes the most basic notion of the Internet itself—a link between an address and a desired outcome.  You go to a site for something, but if you can’t really “go” in a direct sense, what happens network-wise?  And how is it paid for, given that ads displayed on watches don’t seem to offer much potential?

The world is changing because of tech, which is no surprise (or shouldn’t be) because it’s been changing since the computer burst on the commercial scene in the 1950s.  Microsoft’s success in the future, and the success of every network operator, network vendor, and IT vendor, will depend on its ability to jump ahead of the change, not try to replicate the steps that have driven it along.  The past already happened; it won’t happen again.

NFV Savings and Impacts: From Both Sides

I’ve been reading a lot of commentary on network functions virtualization (NFV) and I’m sure that you all have been too.  Most of it comes from sources who are not actively involved with NFV in any way, and since the NFV ISG’s work isn’t yet public it’s a bit hard for me to see how the comments are grounded in real insight.  It’s largely speculation, and that’s always a risk, particularly at the high level when the question of “what could NFV do to infrastructure” is considered.  Sometimes the best way to face reality is to look at the extremely possible versus the extremely unlikely and work inward from both ends till you reach a logical balance.

If you think that we’re going to run fiber to general-purpose servers and do optical cross-connect or even opto-electrical grooming, be prepared to be disappointed.  General-purpose servers aren’t the right platform for this sort of thing for a couple of reasons.  First, these applications are likely highly aggregated, meaning that breaking one breaks services for a bunch of users.  That means very high availability, the kind that’s better engineered into devices than added on through any form of redundancy or fail-over.  Second, the hardware cost of transport devices, amortized across the range of users and services, isn’t that high to begin with.  Bit movement applications aren’t likely to be directly impacted by NFV.

On the other hand, if you are selling any kind of control-plane device for any kind of service and you think that your appliance business is booming, think again.  There is absolutely no reason why these kinds of applications can’t be turned into virtual functions.  All of IMS signaling and control is going to be virtualized.  All of CDN is going to be virtualized.  The savings here, and the agility benefits that could accrue, are profound.

Let’s move inward a bit toward our convergence.  If we look at middle-box functionality, the load-balancing and firewalls and application delivery controllers, we see that these functions are not typically handling the traffic loads needed to stress out server interfaces.  Most middle-box deployment is associated with branch offices in business services and service edge functions for general consumer services.  The question for these, in my view, is how much could our virtual hosting displace?

If we presumed that corporate middle-boxes were the target, I think that the average operator might well prefer to host the functions at their network’s edge and present a dumbed-down simple interface to the premises. Customer-located equipment can be expensive to buy and maintain.  Since most service “touch” is easily applied at the customer attachment and much harder to apply deeper, it’s likely that virtual hosting could add services like security and application delivery control very easily.  Based on this, there would be a strong pressure to replace service-edge devices with hosted functions.

On the other side, though, look at a consumer gateway.  We have this box sitting on the customer premises that terminates their broadband and offers them DHCP services, possibly DNS, and almost always NAT and firewall.  Sure, we can host these functions, but these boxes cost the operator perhaps forty bucks max and will probably be installed for five to seven years, giving us a rough amortized cost of six dollars and change per year.  Hosting these functions in a CO could require a lot of space, and the return on the investment would be limited.
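The amortization behind that “six dollars and change” is simple enough to show (assuming straight-line amortization of a forty-dollar box):

```python
# Straight-line amortization of a consumer gateway: a box the operator
# buys for about forty dollars, installed for five to seven years.
BOX_COST = 40.0

for years in (5, 6, 7):
    print(f"{years} years in service: ${BOX_COST / years:.2f} per year")
```

At the six-year midpoint that’s about $6.67 a year, which is the hurdle any hosted replacement has to beat, including its share of CO space and power.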

This edge stuff is the current “NFV battleground state”.  You can already see box vendors addressing the risks by introducing “hosting” capability into their boxes.  A modern conception of an edge device is one that has basic packet-pushing combined with service feature hosting, which essentially makes a piece of the box into an extension of NFV infrastructure (provided that NFV deployment can actually put something there and manage it).  You can also see software vendors looking at how they could create better economy of scale for middle-box functions that support consumer or SMB sites and thus have relatively low CPE costs.

If we move up from our “unlikely side” the next thing we encounter is the large switch/router products.  These products, like transport optics, are likely doing a lot of aggregating and thus have availability requirements to consider, and high data rates create a major risk of swamping general-purpose technology with traffic, even using acceleration features.  If we were to presume that the network of the future was structurally 1:1 with that of the present, having all the layers and devices in either virtual form or real form, I think we could declare this second transport level to be off-limits.

But can we?  First, where aggregation devices are close to the network edge, in the metro for example, we probably don’t have the mass traffic demand—certainly nothing hopelessly beyond server capability.  Second, even if we presume that a device might be needed for traffic-handling or availability management, it’s possible that NFV could get an assist from SDN.  SDN could take the functions of switching or routing and convert them into a control/data plane behavior set.  The former could be NFV-hosted and the latter could be hosted in commodity hardware.  That would make a victory of legacy device technology at this first level of aggregation a pyrrhic victory indeed.  All that needs to happen is that we frame the notion of aggregation services in the metro in a nice service-model-abstraction way so that we can set up network paths as easily as OpenStack Neutron sets up subnets to host application components.
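To illustrate the service-model-abstraction point, here’s a minimal sketch of what “set up network paths as easily as Neutron sets up subnets” might look like in software.  All the names here (MetroPath, PathController) are hypothetical, invented for illustration; the point is that the requester deals only with an abstract model, and the controller is free to instantiate it with legacy devices, SDN flows, or hosted functions.

```python
# Sketch of a service-model abstraction for metro aggregation paths.
# Hypothetical names throughout; not an actual OpenStack or SDN API.

from dataclasses import dataclass

@dataclass
class MetroPath:
    """Abstract service model: describes what we want, not how to build it."""
    ingress: str
    egress: str
    bandwidth_gbps: float

class PathController:
    """Instantiates abstract path models.  How the path is realized
    (legacy router, OpenFlow rules, hosted functions) is hidden here."""
    def __init__(self):
        self.paths = []

    def create(self, model: MetroPath) -> str:
        path_id = f"path-{len(self.paths)}"
        self.paths.append((path_id, model))
        return path_id

ctl = PathController()
pid = ctl.create(MetroPath("edge-CO-12", "metro-dc-1", 10.0))
print(pid)
```

The value of the abstraction is exactly that the caller never learns, and never needs to learn, which technology answered the request.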

This is the key point to our middle ground, the key factor in deciding how far from the “good side” of potential NFV applications we can really expect NFV to go.  If you look at the technology in isolation, as middle-box hosting, then the impact is limited.  If you look at NFV as edge hosting then there are a number of very logical steps that could make NFV much more broadly successful.  And the more successful it is, the more of metro networking (which is where edges and aggregation are located, after all) gets translated into NFV applications.  And NFV applications are cloud applications where traffic is pre-aggregated by data center switching.  That means you could consume optics directly, and you’d end up with a metro network consisting of NFV data centers linked with lambdas, fed by a thin low-cost access network.  If you believe in NFV revolution, this is what you have to believe in, and the big step in getting there is a service-model partnership between SDN and NFV.

HP Results Show a Clear Path, but Not a Will

HP reported their numbers, and they were somewhat par for the tech earnings course this season, meaning they missed in revenue, met guidance on profits, and were tepid in their expectations for the coming year.  The Street has been all over the place on HP stock since the call, but I guess the trend is down.  HP also replaced their Enterprise Group leader, which may or may not be related to the quarterly numbers—take your pick!

I said yesterday that HP needed some aggressive offense, but what the call showed me was a company playing defense and losing a couple yards every play.  A little loss is better than a big one, but everyone who’s watched football knows that you can’t win games by losing only a little ground on each play.  I think it’s time for HP to trot out the classic “Hail Mary” pass (if they ever get to offense), but it’s still a question of whether they know how.

HP software numbers were better than expected, and that to me suggests that HP still has enough software DNA to be able to trot out some good cloud-related tools aimed at boosting enterprise productivity and not cutting costs.  Sadly, cost-cutting seems to be behind all the other successes they cited on the call.  If you look at one of their hits, Moonshot, it’s a low-cost server underneath the skin.  If you look at converged cloud, it’s OpenStack and cost-based IaaS.  On the call, HP acknowledged the importance of mobility and cloud.  That’s not enough; they need to differentiate in both areas, not just lay down an easy bet.  If you extend their priorities along the lines of “empowermentware” as I suggested, you can get to some specific areas HP needs to address.

First, you have to link explicitly to mobility, because it’s what a mobile worker needs that creates the opportunity for a new productivity paradigm.  Out in the trenches, as they say, things are totally different from the prevailing needs at the desktop.  Can HP, who has failed to create a strong mobile device story, still create a mobile empowerment story?  Who knows?  They talked about it, but not much more.

The second critical point is that the cloud has to step out—beyond IaaS, beyond the basic precepts of OpenStack.  HP did OK in software, as I said, beating expectations.  Why not take some of their software and make it into an extensible platform, an added dimension to OpenStack?  It would take very little for HP to create a framework to add platform and even software-as-a-service features to OpenStack, and doing that could be a differentiator for them.  In addition, it’s obvious to most (even within the OpenStack community) that the Neutron approach to having almost release-based service model extensions just won’t cut it.  If network virtualization is like all other virtualization—an abstraction/instantiation process—then there should be an easy path to defining new abstractions, new service models.  HP should know that, and should be stepping up to address the issue.
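If network virtualization really is an abstraction/instantiation process, the “easy path to defining new abstractions” could be as simple as a registry where a new service model is a data definition plus an instantiator, added without waiting for a platform release.  This is a hypothetical sketch, not Neutron’s actual extension mechanism.

```python
# Sketch of an extensible service-model registry: new abstractions are
# registered alongside the built-in ones, with no new software release.
# Hypothetical design, not an actual OpenStack Neutron API.

SERVICE_MODELS = {}

def service_model(name):
    """Decorator that registers an instantiator for a named abstraction."""
    def register(instantiator):
        SERVICE_MODELS[name] = instantiator
        return instantiator
    return register

@service_model("subnet")
def build_subnet(spec):
    return {"type": "subnet", "cidr": spec["cidr"]}

@service_model("optical-lambda")   # a brand-new abstraction, added the same way
def build_lambda(spec):
    return {"type": "lambda", "endpoints": spec["endpoints"]}

def instantiate(name, spec):
    """Abstraction in, instantiated service out."""
    return SERVICE_MODELS[name](spec)

print(instantiate("optical-lambda", {"endpoints": ["A", "B"]}))
```

The design choice that matters is that “subnet” and “optical-lambda” are peers: nothing privileges the models the platform shipped with.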

The final point is application architecture.  We have an architected application-driven model for software today in SOA, and the problem is that we achieved application goals at the expense of flexibility.  We have a highly flexible model for binding components into services in the web or RESTful model, but it’s a model that makes virtually everything that should be fundamental (like security and governance) into an optional extra.  We have emerging technologies that can model data and processes much more flexibly, and these models should be the foundation for the new age of empowerment, because they can handle the scale, the personalization, the diversity of resources and multiplicity of missions…you get the picture.  Why is HP not leading with those technologies?  We will never get the cloud totally right without them.

Competitors will eventually get this right, HP.  If you want to revolutionize the IT model, look to Amazon.  With razor-thin retail margins driving its planning, it’s hard to find even network applications that don’t look good by comparison.  IaaS, which is a death trap for even many common carriers, can look like a rose garden to Amazon.  Imagine how mobile could look.  If Amazon becomes an MVNO, which many say it intends to do down the line, it could not only get a boost in TAM by leveraging all its connected devices, it could create a mobile-friendly PaaS cloud platform of its own, one that would then be a big headache for HP because it would be in place, an incumbent approach that HP would then have to batter out of the way.  If HP moves now, they could still get out in front here, and that’s essential if they’re to gain full benefit from their stated goals and their own technology strengths.

But there’s a lesson for HP in EC2 as well.  IaaS cloud services are a terrible business and HP shouldn’t be in them at all.  Getting street cred in the cloud is one thing, but killing your margins to get it (even if you succeed) is another.  Maybe that’s the final lesson for HP: you have to pick your street.  Play to Wall Street and you’re as volatile as stocks are today; play to Main Street and you may have a long slog but you have a good finish.  Step out, HP, and step out boldly.

HP’s Only Path to Survival

HP is scheduled to report earnings tonight after the US markets close, and in keeping with my practice with Cisco, I’m going to focus today on what HP should be doing, uncluttered by the way the numbers fall.  Again as with Cisco, I may blog a bit about what actually happened.

HP is an IT company, to no one’s surprise, and so it’s falling victim to the general malady of IT—too little growth in the benefit case to drive sufficient revenue and profit growth.  You need to fuel purchasing through benefits, and when you can’t claim many more you can’t buy much more.  The only exception is the consumer space, where benefits are less financially tangible, and even there HP and the industry have issues.

In the consumer space, HP was at one time a leader in hand-held technology but they were anemic in how they pursued it because they were afraid it would undermine the HP PC position, which was more profitable overall.  There was a time when HP had the only portable device worth having, and look at them now.  They also followed the Microsoft lead in defending rather than attacking, and fell victim to the smartphone and tablet craze.  What can HP do about that?  That’s the first question.

Then we have the business space.  HP has both network equipment and servers, which should give it a leg up on competition.  It had that advantage up until Cisco brought out UCS, but it was never really able to exploit it because, like many companies, HP is a bunch of siloed business units that compete with each other for internal management favor as much as (or more than) they compete in the broad market with Cisco or IBM.  Servers were once brand-buy items; you picked a name vendor if you were a big company.  Now servers are almost commodities.  Remember that NFV was targeting the running of network functions on “commercial off-the-shelf servers.”  If there’s such a thing as COTS then HP and every server company has a challenge.

Is that challenge the cloud?  HP should have a credible cloud story but their positioning of their cloud assets has been tentative, and I think that behind that is the same old issue of fear of overhanging your primary products’ sales.  The market theory is that cloud computing is a cost-based substitution of hosted capacity for owned capacity.  The savings would have to come from needing less capacity in the cloud to do the same work—thus the cloud would be a net loss of servers.  Only that’s not true.  Yes, IaaS is an economy of scale play, but IaaS isn’t the cloud market that matters.  If you look at the cloud at the highest level, it could be the largest single source of new server deployment, and a net win.  And how to get their cloud game going is HP’s second question.

I think that the answer to the two questions lies in recognizing that they’re one.  “The cloud” is a nicely fuzzy term we’re using to paper over a true revolution in information technology, a revolution that focuses on what I’ve been calling point-of-activity empowerment.  People with mobile devices want mobile-friendly information resources and information presentation.  We weave portable stuff into our lives in very different ways than we weave in things that sit on our desk or in our living room.  The differences allow us to build new dependencies, gain new benefits, and those new dependencies and benefits justify more spending.  We will get more from the cloud, and we’ll pay more to get it.  This is the real message of the cloud, and it’s the message that HP should have been crying from the rooftops for five years now.  The good news is that so should everyone else in the cloud game, and they all dropped the ball.  So HP still has a chance.

The perfect data center for this agile point-of-activity cloud is different from the typical corporate or even cloud data center.  I’m not saying the differences are profound, but they’re more than enough to justify a differentiated positioning.  The hardware design needs to be different but most importantly the software needs to be different.  That’s where HP’s cloud strategy is failing.  When I look at the advances to OpenStack that are in progress at some level, I see HP as a bit player, not the one driving the bus.  HP should be moving heaven and earth to expand and extend OpenStack, but they should also be building a layer of “empowermentware” on top of open-source cloud technology to embody the new value proposition for the cloud.

The same can be said in the appliance space.  The basic architecture of the PC has been around since the early 1980s, when nearly all PCs were running in splendid isolation.  Even IBM had to play catch-up as others (remember the Irma Board?) provided early connectivity.  What does a machine designed to be a thin client for an empowerment cloud look like?  I doubt it looks like a laptop or even a tablet or smartphone.  Things like wearable tech should be fitting into an architecture, an architecture that HP could have (and still could) define.

So here’s the net-net for HP.  What happens to them tonight on their earnings call and tomorrow in the market is a side show.  The important question is whether they are ready to go flat out toward the empowermentware goal, and then fit their hardware strategies to host empowerment tools and terminate empowerment flows.  If they can do that, and quickly, they will emerge from this process a lot stronger.  If they can’t then they are in for a slow decline.  HP is a kind of consolidated entity—HP, Compaq, and DEC all contribute DNA to the current company.  We’ll be looking for fragments of HP DNA in other companies if HP doesn’t move, and move now.

Channelized versus OTT Video: More Data

Some of the latest data on TV providers seems to reinforce the notion that the video world is changing, and of course that’s true.  The problem for TV delivery strategists is that it’s hard to tell just what factors are driving the changes, and without that key insight you can’t easily address the problems that change might create for your bottom line.

The market data for the spring, drawn from the quarterly earnings numbers, suggests that subscription television services are losing ground more rapidly than usual.  Spring is typically a bad time for TV because of a combination of movement to summer homes (or away from winter homes) and the return of students from college.  This spring does seem a bit worse than usual, but I’m not sure how much of the loss can really be recovered.

TV viewing is about households, not people, because households are what subscribe.  Data I’ve cited before suggests that when people establish a multi-person household they tend to gravitate toward traditional viewing patterns regardless of how committed to iPhones/iPads and YouTube or Netflix they might have been.  Thus, one important point is that any time you have a lack of growth in the number of households you have a loss of stimulus for subscriber gains in subscription TV overall.  And guess what?  We’re in a period when record numbers of adult children are not leaving the nest for economic reasons.  Nothing TV can do is likely to push these kids out, other than perhaps hiring them all.  Even then, I think there’s compelling data to suggest that young adults value the added disposable income they get from living at home more than the independence they lose.

Another interesting thing about spring, astronomically speaking, is that it’s going to lead to summer, meaning summer reruns.  People traditionally flee subscription TV in the summer because of the dearth of good material.  Remember, the largest reason people don’t watch subscription TV is because they’re not home, and the second largest is that there’s nothing worthwhile on.  Even “good” summer shows often fail to attract a dedicated audience because the shows can be preempted for sporting events, and because in the summer it’s more likely that viewers will be out somewhere.

I think anyone who’s gone through their share of summers and TV viewing knows that in fact the quality of summer material is better this year, and has been getting better for the last couple of years.  The number of people who tell me that they have shifted to on-demand or OTT viewing for lack of something to watch is down by almost 30% this summer versus last, a big change.  Most of this gain is due to the fact that cable channels are increasingly taking up the slack and even major networks are running summer-season shows.  This is smart because viewing habits are just that—habits, and you don’t want to train your viewers to go elsewhere.

That raises some questions on the TV Everywhere concept.  Is the use of OTT video to supplement channelized viewing a good thing or a bad thing?  There I think the jury is still out.  My data says that people who can’t watch something in its regular slot would rather watch it via on-demand viewing unless it’s a sporting event.  For sporting events, the preference is to view it in a social/hospitality environment (a bar comes to mind) if you can’t view it at home.  So it’s not clear whether having the game or the show available “live” helps much, and it’s pretty likely that having it available on-demand on a different device doesn’t move the ball very much.

Where TV Everywhere is good for providers is in holding customers as channelized subscribers where the viewer is likely mobile a good chunk of the time.  In general this means the young independents, people whose viewing habits seem destined to change anyway as soon as they establish a true household, with a partner and perhaps children.  But just as this year reveals some gains in summer-viewing loyalty, it illustrates weakness in the traditional fall-through-spring prime TV season.  Remember that almost 30% told me that they were happier with summer shows?  Almost 20% said they were significantly less happy with prime-season programming.  Too much reality TV, they said, and also too short a season for new material.  TV Everywhere could give viewers access to material that would substitute for this poorer fare.

“Could” is the operative word.  People watch more on-demand these days because they miss more regular program times.  On-demand breaks the cycle of dedicated viewers who schedule their lives around programming, and when that cycle is broken the viewer becomes increasingly interested in just getting something that suits their momentary fancy.  That’s as easily done with OTT.  Yes, you need prime-time on-demand, but there’s no question that over time this is weakening the bonds that hold us to traditional viewing.

IMHO, people want virtual channels with specific shows slotted into their own convenient slots and with material selected based on what they like and what their peers recommend.  I think Apple and Google would like to see this model prosper, but the problem for both is that the commercials just aren’t as valuable in that model.  We’ve gotten a bit better in leveling TV per-minute advertising and OTT video advertising—it’s gone from TV being worth 33 times as much to only 28 times as much—but we’re still a long way from being able to fund new material, and if you distill all of what I’ve said about video, you see that it’s the material that makes channelized TV stand or fall, material and demographics.

A shift to OTT viewing would also have profound consequences for broadband delivery, not so much to the home (U-verse proves that you can give the average household enough capacity to view TV even over copper) but in the metro aggregation network.  Instead of feeding linear RF programming to head-end sites to serve tens of thousands of customers, you now have to deliver thousands of independent video streams to every CO to reflect the diversity of viewing there.  And with revenue per bit in the toilet, how exactly do you build out to make that happen?  So until we can answer the dual questions of paying for programming and paying for delivery, I don’t think we’re heading for an OTT revolution.
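A rough calculation shows why the metro aggregation problem bites.  All the numbers below are illustrative assumptions of mine (per-stream rate, households per CO, peak concurrency), not data from the post; the point is only the shape of the comparison against a single shared linear RF feed.

```python
# Illustrative unicast-vs-linear arithmetic.  Every number here is an
# assumption for the sake of the example, not operator data.

STREAM_MBPS = 5          # assumed per-stream HD video rate
HOUSEHOLDS = 20_000      # assumed households served by one CO
CONCURRENCY = 0.4        # assumed fraction watching at peak

# Linear RF: one shared feed regardless of audience size.
# Unicast OTT: one independent stream per concurrent viewer.
unicast_gbps = HOUSEHOLDS * CONCURRENCY * STREAM_MBPS / 1000
print(f"peak unicast demand at the CO: {unicast_gbps:.0f} Gbps")
```

Even with modest assumptions, tens of Gbps of per-CO delivery capacity replaces what a single broadcast feed used to carry, which is exactly the revenue-per-bit squeeze described above.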

How “Open” is My Interface?

Carol Wilson raised an interesting point in an article in Light Reading on SDN and NFV—that of collaboration.  I’m happy that she found the approach CloudNFV has taken to collaboration and openness credible, but I think some more general comments on the topic would be helpful to those who want to assess how “open” a given SDN or NFV approach is, and even whether they care much whether it’s open or not.

An important starting point for the topic is that the network devices and servers that make up “infrastructure” for operators are going to have to be deployed in a multi-vendor, interoperable way, period.  Nothing that doesn’t embrace the principle of open resources is likely to be acceptable to network operators.  However, “open” in this context is generally taken to mean that there exists a standards-defined subset of device features which a credible deployment can exercise across vendor boundaries.  We know how this works for network hardware, but it’s more complicated when you bring software control or even software creation of services into the picture.

If you step up to the next level, I believe there are three possibilities:  an “open” environment, an “accommodating” environment, and a “proprietary” environment.  I think everyone will understand that “proprietary” means that primary resource control would operate only for a single vendor.  Vendor “X” does an SDN or NFV implementation and it works fine with their own gear but the interfaces are licensed and thus it won’t work and can’t be made to work with equipment from other vendors.  Today with software layers, “proprietary” interfaces are usually private because they are licensed rather than exposed.

The difference between “open” and “accommodating” is a bit more subtle.  To the extent that there are recognized standards that define the interfaces exercised for primary resource control, that’s clearly an “open” environment because anyone could implement the interfaces.  I’d also argue that any environment where the interfaces are published and can be developed to without restriction is “open”, even if that framework isn’t sanctioned, but some will disagree with this point.  The problem, they’d point out, is that if every vendor defined their own “open” interfaces it would be extremely unlikely that all vendors would support all choices, and the purpose of openness is to facilitate buyers’ interchanging components.

This is where “accommodating” comes in.  If in our resource control process for SDN or NFV we define a set of interfaces that are completely specified and where coding guidance is provided to implement them, this interface set is certainly “open”.  If we provide a mechanism for people to link into an SDN or NFV process but don’t define a specific interface, we’re accommodating to vendors.  An example of this would be a framework to allow vendors to pull resource status from a global i2aex-like repository and deliver it in any format they like.  There is no specific “interface” to open up here, but there is a mechanism to accommodate all interfaces.
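Here’s a minimal sketch of that accommodating mechanism.  The repository contents and the two rendering formats are invented for illustration; what matters is that there is one unrestricted way to get at the data itself, and each consumer renders it into whatever interface format it prefers.

```python
# Sketch of an "accommodating" resource-status mechanism: a shared,
# i2aex-like repository with a single neutral access point, and
# per-consumer renderers.  All data and formats here are illustrative.

import json

REPOSITORY = {
    "server-7": {"state": "up", "cpu_pct": 42},
    "vnf-fw-2": {"state": "down", "cpu_pct": 0},
}

def query(resource_id):
    """The only 'interface': unrestricted access to the data itself."""
    return REPOSITORY[resource_id]

# One consumer wants JSON; another wants simple key=value lines.
def render_json(resource_id):
    return json.dumps({resource_id: query(resource_id)})

def render_kv(resource_id):
    status = query(resource_id)
    return " ".join(f"{k}={v}" for k, v in sorted(status.items()))

print(render_json("server-7"))
print(render_kv("vnf-fw-2"))
```

Neither renderer is “the” interface; the mechanism accommodates both, and any future one, without anyone opening up anything new.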

Let’s look at this through an example.  In theory, one could propose to control opaque TDM or optical flows using OpenFlow, and in fact there are a number of suggestions out there on how to accomplish that.  IMHO it’s a dumb idea because packet-forwarding rule protocols don’t make sense where you’re not dealing with packets.  Suppose that instead we created a simple topology description language (we have several: NED, NET, NML, Candela…) and we expressed a new route in such a language, using some simple XML schema.  We have a data model but no interface at all.

Now suppose we support passing the data from the equivalent of a “northbound application” to the equivalent of the OpenFlow controller, where it’s decoded into the necessary commands to alter optical/TDM paths.  If we specify an interface for that exchange that’s fully described and has no licensed components, it’s an “open” interface.  If we express no specific interface at all but just say that the data model can be used to support one, we have an “accommodating” interface.
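To ground the example, here’s what “a data model but no interface at all” might look like.  The XML schema below is invented for illustration (it is not NED, NET, NML, or Candela); the point is that any transport that can carry this document—an HTTP POST, a message queue, even a file drop—can serve as the interface.

```python
# Sketch of an optical/TDM route expressed as a data model only.
# The schema is illustrative, not any of the real topology languages.

import xml.etree.ElementTree as ET

route_xml = """
<route id="lambda-17">
  <endpoint node="ROADM-A" port="3"/>
  <endpoint node="ROADM-B" port="1"/>
  <constraint type="wavelength" value="1550.12nm"/>
</route>
"""

route = ET.fromstring(route_xml)
# An "accommodating" approach stops here: the receiver decodes this
# into device commands, and no specific interface is mandated.
endpoints = [(e.get("node"), e.get("port")) for e in route.findall("endpoint")]
print(route.get("id"), endpoints)
```

Fully specify the exchange that carries this document and you have an “open” interface; publish only the data model and you have an “accommodating” one.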

My point here is that we need to be thinking about software processes in software terms.  I think that “open” interfaces in software are those that can be implemented freely, using accepted techniques for structuring information (XML, for example) and transporting information through networks (TCP, HTTP, XMPP, AMQP, whatever).  I think “standard” interfaces are important as basic definitions of functional exchange, but hard definitions define fixed structures.  In the current state of SDN and NFV it may be that flexibility and agility are more important.

We likely have the standards we need for both SDN and NFV interfaces in place, because we have standards that can be used to carry the needed information already defined—multiple definitions in most cases, in fact.  Where we have to worry about openness is in how providers of SDN or NFV actually expose this stuff, and it comes down not so much to what they implement but what they permit, what they “accommodate”.  I think that for SDN and NFV there are two simple principles we should adopt.  First, the information/data model used to hold resource and service order information should be accommodating to any convenient interface, which means it should not have any proprietary restrictions on accessing the data itself.  Second, the interfaces that are exposed should be fully published and support development without licensing restrictions.

This doesn’t mean that functionality that creates a data model or an interface can’t be commercial, but it does mean that a completely open process for accessing the data and the exposed interfaces is provided.  That’s “open” in my book.