Alcatel-Lucent Tries a Fresh API Slant

In this blog I’ve noted many times that service providers have been searching for years for a true “service layer” architecture, one that could help them enter the new age of OTT or hosted services.  Network equipment vendors have been curiously backward in supporting this transformation, even though many of them have most of the tools needed.  As a result, operators have begun to break with vendors and chart their own path, including Telefonica, whose Digital operation is a showcase for other operators looking for inspiration in NGN services.

Telefonica’s Tu Me platform is essentially a set of APIs, and some network vendors have more than a fair share of APIs.  One such vendor is Alcatel-Lucent, whose notion of the “high-leverage network” seems very congruent with operator direction.  The problem with Alcatel-Lucent has been less a lack of technology than a lack of effective articulation.

Well, Alcatel-Lucent may be trying to fix their problem.  Last week the company announced a program (professional services plus product elements) designed to help operators become players in what Alcatel-Lucent calls the “digital economy”.  The program, from a positioning perspective, is all about APIs as the composable elements of the services of the future.  This API thrust seems to be linked to a much broader effort to bring some of the Alcatel-Lucent strong points out from the oblivion of poor articulation.  It may be that effort, more than the new program, that helps Alcatel-Lucent down the line.

I’ve always been of the view that one element of Alcatel-Lucent’s Open API Program, the Service Composition Framework (SCF), was the key to its success—if success it will have.  SCF is the exposure and orchestration piece of the puzzle, the way that APIs get linked to create some unified experience.  Alcatel-Lucent has had this element for a year or more but it’s not been visible; it’s now a key part of their website positioning of Open API, and thus of HLN.

Alcatel-Lucent has also now acknowledged one of the key points of service-layer development for operators; you have to figure out how to expose your network assets, and that means more than QoS.  Their new professional services activities are aimed in large part at the process of creating durable service assets out of existing infrastructure and systems.  These assets, presumably exposed via APIs, could then be composed with SCF or simply made part of an API inventory, something Alcatel-Lucent has also obtained tools (through acquisition) to sustain.
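
To make the composition idea concrete, here is a minimal Python sketch of what “composing” two exposed network assets into a single experience might look like.  The asset names, fields, and the composed service are all invented for illustration; this is not Alcatel-Lucent’s actual SCF interface.

    # Minimal sketch of service composition: two hypothetical exposed network
    # assets (location and messaging) combined into one "experience".
    def locate_subscriber(msisdn):
        """Stand-in for an exposed location asset."""
        return {"msisdn": msisdn, "lat": 40.4168, "lon": -3.7038}

    def send_message(msisdn, text):
        """Stand-in for an exposed messaging asset."""
        print(f"to {msisdn}: {text}")

    def arrival_greeting(msisdn):
        """The composed service: one exposed asset feeds another."""
        loc = locate_subscriber(msisdn)
        send_message(msisdn, f"Welcome!  We see you near ({loc['lat']}, {loc['lon']}).")

    arrival_greeting("+34600000000")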

There’s no question that this is progress, and there’s little question that a serious drive by Alcatel-Lucent on the service layer will generate competitive responses.  None of the major vendors have gotten much service provider respect for their service-layer plans; of the vendors who had any influence to speak of, only Cisco gained in our spring operator survey, and they gained only a little.  The players we’re most interested in watching are the other mobile giants, Ericsson and NSN, because they share a problem with Alcatel-Lucent—a problem beyond articulation, though they have that one too.

The issue is IMS.  Outside the IMS community itself there are few who believe that IMS can play a role in data services these days, and yet the Big Three mobile players are all committed at least pro forma to IMS as the mobile service solution.  One truth about the modern service layer is that there IS no “mobile service solution”, only a “service solution”.  Alcatel-Lucent seems to be moving toward an API-zation of IMS, and if that’s true they might be preparing to position IMS as one of those exposable assets.

NSN’s position on services is cloud-friendly but it’s lacking in detail on IMS.  Instead NSN seems to be positioning its service layer along TMF lines, which I think is curious given that Ericsson just bought Telcordia and thus has a stronger position in both the TMF and OSS/BSS spaces.  Are they counter-punching?  Could be.  Ericsson has Telcordia, of course, but they are also prime sponsors of an open-source high-availability framework, the Open Service Availability Framework or OpenSAF.  There are a lot of good things in OpenSAF and there are even some links between it and modern issues like SDNs, but like Alcatel-Lucent, Ericsson has been pretty ineffective in pushing its service-layer vision or exposing its credentials.  Maybe now both NSN and Ericsson will be forced to take a more aggressive position.  Maybe they’ll also have to address the service-to-IMS connection better.

Spectrum and U-Verse, Apple and Cable

The DoJ cleared the Verizon deal with cable companies for spectrum in return for joint marketing, but with conditions.  The primary fear of regulators at the DoJ was that Verizon might not roll out FiOS as aggressively under a deal that let them co-market cable services.  FiOS, like all FTTH, is very sensitive to demand density and so might be marginal in some geographies; Justice wants Verizon to have as much incentive as possible to deploy FiOS.

Cable operators are, under the deal, free to bundle Verizon’s wireless services with their own cable offerings in areas where FiOS is deployed, and this is presumably another condition to encourage Verizon to differentiate itself with FTTH.  It’s clear that the DoJ is pushing the deployment of fiber as a matter of public policy.

Fiber to the home is a pet notion for FCC Chairman Genachowski, of course, and that makes it pretty likely that he’s going to get the deal blessed by the FCC, still a requirement for final approval.  Genachowski says he’s prepared a draft order for consideration by the full Commission, and I’m not hearing of serious objections being raised at the FCC.  The interesting thing is that the 4G spectrum parts of the deal may be less important to the Verizon/AT&T competitive dynamic than the implications of cable resale.

The focus on FiOS here obscures the fact that Verizon is free to market cable in areas where there’s no FiOS, thus allowing Verizon to break from DSL as a secondary broadband strategy.  The story is that Verizon and most operators have become convinced that DSL is not an alternative to fiber or cable in delivering video, AT&T’s U-verse notwithstanding.  People tell me that the cost of a DSL-IPTV approach isn’t going to be competitive and that the onrush of HD and possibly 3DHD will stress DSL loops to the point of unreliability or unsuitability.

I’ve never been a fan of U-verse and the slotted-DSL approach to video simply because I don’t believe you can push enough bandwidth on the average loop to support household demands, even leaving aside the cost of the IPTV infrastructure.  The cable deal with Verizon thus sets up the possibility that telcos would cede areas to cable completely, or would settle on DSL as the low-end “universal service” of broadband.  There’s even the possibility, which I hear is being discussed at both AT&T and Verizon, of using a form of cable to replace copper loop in areas where fiber still won’t pay back.

All of this maneuvering is due to the simple fact that cable broadband costs less per household passed than either DSL at reasonable speeds or fiber, and that pretty much any form of broadband access needs to support video or it’s not likely to pay back on investment.  US operators have been losing DSL broadband customers to cable anyway.  They’ve also been losing traditional voice customers even for “home phones”, some to their own VoIP and some to other VoIP providers.  Thus, at least a few telco planners now believe it would be better to throw in the towel on wireline (in copper loop form) and embrace cable and fiber depending on the opportunity density.

Watch AT&T here.  A major access rebuild would cost them a ton of money and put their wireless build-out at risk.  They’re almost forced to try to do their own deal with cable in most geographies and to deploy fiber where they can.  Either way this puts more pressure on non-wireless capex because money goes into access build-out.

Another video story is that Apple is trying to get MSO consent to include STB features in its Apple TV product.  This is a story I have some reservations about, because there’s a program from the FCC to open up the STB space and it’s far from clear to me that Apple would want to court MSOs to accomplish what the FCC is trying to do with a set of Orders.  The story has one logical element, though; the goal of Apple is to create an integrated channel guide that would let it present iTunes content as virtual channels.  All my work suggests that this is the real key to making streaming video a regular part of TV viewing.

But is all this negotiating and effort smart, or even necessary?  It’s possible, with a tiny infrared emitter, to tune an STB from an external device.  Most of the Windows Media Center PCs have come equipped with that feature, in fact.  Users don’t like it and rarely bother, but certainly Apple could figure out a way of making the thing look stylish (think Nexus Q).  That would bypass the MSO consent issue, and I really have a problem believing that the MSOs would want to consent on this one; they have nothing to gain.

What Apple needs to be working on is ad return for streaming video.  Unless you believe that pay TV will replace ad-sponsored TV, you have to believe that commercials will yield enough to pay for production and distribution whether you stream or view linear channels.  They clearly do pay for linear production and distribution, but streaming video commands 4% or less of the ad value.  My model says that before you could cover even a third of the COST of current material, much less any profit, you’d have to stick so many ads in streaming video that users would rebel.  The challenge is that all anyone has worked on in the way of streaming ad differentiation is the proposition that it provides better targeting.  That translates to “lower adspend” to advertisers, which is how streaming ad value got compromised.  Can anyone make ads more effective while streaming?  Apple would be my bet, but they’ve not done it yet.
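
A quick back-of-envelope check, in Python, shows why the math is so daunting.  The 4% figure comes from the estimate above; the linear ad load per hour is my own assumption, purely for illustration.

    # Back-of-envelope: how many streaming ads would it take to cover just a
    # third of linear TV's ad revenue, if a streaming ad is worth ~4% of a
    # linear ad?  (Ad load per linear hour is an assumed figure.)
    linear_ad_value = 1.0        # normalize a linear ad slot's value to 1
    streaming_ad_value = 0.04    # ~4% of linear, per the estimate above
    ads_per_linear_hour = 16     # assumption: typical linear ad load

    target_revenue = (ads_per_linear_hour * linear_ad_value) / 3
    ads_needed = target_revenue / streaming_ad_value
    print(ads_needed)            # ~133 ads per streamed hour, far past the rebellion point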

Is Cisco a Follower Waiting for a Leader?

Cisco reported its numbers, and they were pretty much on target with expectations.  The company announced a large increase in its dividend, and the Street is saying that Cisco is continuing to progress toward becoming a “mature company”.  I think that’s a bit overstated, but there’s no question that Cisco is facing that sort of change.  It’s the “How?” that’s still an open question.

Issue one is margins and differentiation.  IBM was the creator of “sum-of-the-parts differentiation”, an ecosystemic drive to sell products not as parts but as the whole.  To be such a seller you need astonishingly high levels of buyer strategic influence, and IBM had that.  They had people there on scene, credible people, who could lead companies through tech planning with confidence.  Naturally these people were going to plan for IBM’s toys, and so naturally those products sold and their margins held up.

In networking, only Cisco has enough strategic credibility to be an IBM.  The problem is that even Cisco is still holding levels of influence well below IBM’s 1980 numbers.  Back then, IBM had enough influence to drive deals even against competitive pressure (which shows today in IBM’s survival).  Cisco never had that level of influence and doesn’t have it now.

Cisco has always wanted to be a broad-market player but they want to get there on the safe trail.  Don’t take major risks, they say, be a “fast follower”.  Let somebody set the trends and test the waters and then step in with your favored account influence and eat those leaders for lunch.  That works if there are leaders, and I think that Cisco’s maturity dilemma today is really hinging on how they respond to that issue.  Network vendors have been universally unwilling to face the future.  There’s nobody to “fast follow”.  Cisco must now decide whether to continue to wait for a patsy to take point, or to risk leadership.

If Cisco is really signaling a desire to become networking’s IBM, then they’re settling into maturity before they’ve grown any facial hair.  The commoditization issue, the value submergence in networking, isn’t a done deal yet.  The easiest way to counter the account control and influence of a competitor is to demonstrate they’re dinosaurs controlling the swamps at Ground Zero of the asteroid event.  Revolution is what sweeps aside evolution, and what Cisco has really said this quarter is that they aren’t going to be revolutionary.  Will someone else?  Of the remaining players, HP probably has the best chance, but I’m afraid that HP is still too mired in trying to be what it was a decade ago.  The PC won’t lead anyone into the future.  Today, it’s the cloud, mobility, and the ubiquitous broadband that connects these concepts.  Cisco has nearly all the pieces.  So does HP.  It’s doubtful that any other vendor will be able to pick them up but it’s certain that the unifying message that would benefit both Cisco and HP can be countered by disunion—pick apart the ecosystem and the ecosystemic seller has no story.  A network vendor can still spoil Cisco’s party.

There are undercurrents of illogic in the picture of global tech that Cisco and others are painting too.  Service providers have not traditionally changed capex much in response to macroeconomic trends; why would they be doing it now?  Enterprises are being offered a network-mediated resource consolidation process we call “cloud computing” to reduce costs.  Why would they not embrace it in difficult economic times?  We are not suffering an economic suppression of tech as much as we are suffering an inspirational suppression of tech.  Slouch, let your hair fall into disorder, shuffle along, and you’re a bum.  Wait too long for a leader to fast-follow and you’re a slacker.  And if the best candidate won’t run, the best candidate who does run will win.

The Tablet, the Cloud, the Future: Hand in Hand?

Consumerism is raising its head again in networking, and yet at the same time we’re seeing some new initiatives aimed at businesses.  The temptation here is to say that these seeming contradictions are unified by the search for profit, but there may also be some technical factors that are either driving changes or perhaps being driven by them.

The rise of Android in phones is a good example of consumerism.  Yes, iPhones are way cooler than standard devices (even though Android’s “Jelly Bean” release is said to be a considerable improvement) but they’re also way more expensive.  In a mass market, the guy who does best with the masses wins.  Samsung is riding Android smartphones to victory regardless of patent issues, and that demonstrates that Apple is vulnerable in a market it created.

In tablets, Apple seems to be widening its share, and you have to wonder what the difference is.  I think one difference is that Android is so fragmented.  Not only are there a bunch of different tablets from different vendors with different features, there’s also the problem of different Androids.  Tablets run everything from version 2 to version 4 these days, and many vendors have said that they will not be advancing their older models (even a year old) onward to new versions as they come along.  That makes life harder for developers and it also discourages users who feel they may be stranded.

Tablets are in the long run more important than smartphones, though.  A tablet is a real information appliance, and that makes it a threat to a lot of different business models.  The business implications are clear: workers can be given something portable and can thus take work with them, as long as they have a connection.  Which is where tablets get really interesting; most of them don’t have 3G/4G connectivity, they rely on WiFi.  Tablets are the biggest driver of change in metro networking.

Standard wireless treats the Internet as a nuisance traffic source; we hear all the time about “Internet offload”, a strategy to get free traffic off premium paths.  With tablets, it’s the Internet/OTT stuff that’s valuable, and of course WiFi is a non-premium path almost by definition.  It’s also more affiliated with wireline (a hub off a local broadband connection) than with wireless (towers and RANs).  So as tablets get more popular, they tend to push “mobile” investment more to wireline, and they create issues like mobility or at least portability in wireline services.

I think that tablet consumerism is ultimately going to homogenize metro infrastructure, gradually pulling all of the wireless/wireline differences into virtual networks on common technology, at least where ownership issues allow.  Tablets will also redefine business mobility, with a focus on roaming intra-facility or within company properties where most mobile workers really roam.  Eventually, tablet consumerism will redefine cloud.

Apple’s challenge is to hold the lead here.  With Jelly Bean, Android may have reached a level of stability and ease of use where it could challenge Apple.  If Google can somehow make all their Android-vendor clocks chime at once with upgrades, they could hope to develop a real tablet counter to the iPad.  Right now, Apple by controlling tablets can control a lot of the evolution of user experiences and even network infrastructure.  Google takes over if they can make Android for tablets what it already is for phones.

The cloud dimension is the one I think is most important.  Tablets, as devices with richer interfaces, are capable of supporting richer services.  It makes no sense to host these services on the device, particularly if the services require a lot of information digestion to arrive at a simple answer.  Apple with Siri and Google with Knowledge Graph are taking steps to separate searches from questions in handling, and I think the latter are going to the cloud.  Thus, the cloud becomes both a requirement for success and a defense against competitors.  Apple is still lagging in its cloud conception, and also in its deployment of cloud assets.  Google could win here, and that could both magnify any tablet drive Google undertakes and shift the focus from devices (where Apple is a clear winner) to cloud, where Google is the master.

Tablets, in short, are really important, and so we need to watch the trends there carefully.

Tom to Alcatel-Lucent and Competitors…If You Want to Live, Think Cloud/SDN!

S&P added its voice to Moody’s and downgraded Alcatel-Lucent to a negative outlook.  The rating agency cited the combination of slower capex growth among telcos and increased competition.  They’re right about the problem, but I remain convinced that Alcatel-Lucent and the others in the industry have created this mess for themselves by sticking their heads in the sand.

The recent push to usage pricing is an admission by operators that they’ve lost control over their average revenue per user (ARPU).  If users are paying Netflix for video, that’s money they could have paid telcos.  There isn’t a single OTT service offered today, whether funded by ads or by payments, that telcos could not have supplied.  Why didn’t they?  In part it was because they were thinking of these services in the light of a service architecture, and they expected their vendors to provide them with the tools to take that next step.  And their vendors did not.  Now, for over a year in fact, operators have been moving away from vendor solutions, looking instead to startups or to their own integration.  That focuses innovation above the network, and that reduces the ability of network vendors to differentiate based on features.

Nobody in network equipment is going to get rich on service-layer tools; as a percentage of revenues they’ll never pay the mortgage.  But nobody will sell a differentiated network product unless they have something at the service layer to provide the differentiation.  While Alcatel-Lucent actually had a better story in this regard than their competitors, they have never told it with conviction.  Underneath it all, these guys are all box-and-bit people, and those kinds of people make poor salespeople for a service story.

At some point this problem becomes unsolvable.  The Titanic could have been saved by prompt action in the first couple of hours, but past that point nothing was going to happen other than move the demographics of the survivors around a bit.  Same here.  We are running out of time for vendors to adopt the right approach, and if they don’t there will be no course of action that will save them all from a date with the rating agencies down the line—all except of course Huawei.

There’s still a lot of innovation opportunity to address, though.  The “cloud” is almost totally misperceived, for example.  All the talk is about IaaS, and that’s dumb.  First, all cloud services ultimately have to produce SaaS to the end consumer, so that’s where the top of the food chain has to be.  Everything else is just prey.  Second, are we really thinking that the model of virtualization that was aimed at consolidating single-application discrete servers will live forever?  Will there never be apps written for the cloud?  If there are, why would those apps want to base themselves on virtualization?  It’s needed only to separate legacy stuff, and let me tell you right now that most legacy stuff is not going to the cloud at all, not in “legacy” form.  We will build the cloud’s usage by building its value, which is done by building cloud software.  Look there for benefits, people!

If we have the cloud wrong in the sense of expecting too little change, then we have the network wrong too.  SDN is like the cloud, it’s a label we’re attaching to a set of transformations that will extend beyond what people would have originally thought was coming, what the term originally meant.  The future of networking is tied directly to the cloud, and yet we are in the infancy of the concept of cloud networking and what little has been done is simply overlay nonsense.  There will be a role for overlays in the future.  Do you know what it is?  Have you read about it?  Probably not, but the fact is that overlay virtualization will work only with fabrics.  Think about it: a fabric has no internal network structure, so there’s no benefit to hosting virtualization on each node; there aren’t any nodes to host it on.  So if we’ve missed this point, how much else is out there waiting to be seized?  Maybe some vendors will figure it out before they all face that ratings-agency demon.

Could Finding the “Right SDN” Help?

Cisco will be reporting its earnings this week, something that everyone in networking and on the Street will surely be anticipating.  By any measure, Cisco is a giant in the space and most certainly it’s the leader in IP technology, the layer where the Internet and most business networking is done.  To paraphrase Sinatra, “If you can’t make it there, you can’t make it anywhere!”  If margins and sales won’t hold up in IP then there’s little hope of escaping from commoditization.

It’s not like buyers don’t have issues they’d like their vendors to resolve.  Among service providers and enterprises, my surveys have found a surprising level of continuity in terms of expectations.  Operators say that they face significant reductions in revenue per bit and need to reverse this trend or they can’t invest in the network.  Enterprises say that their business case for IT and networking hasn’t been improving and that this forces them to continually “improve” by reducing costs to meet corporate targets.

I think one reason why vendors are a bit deaf to these pleas is that they see the solution lying up in the application or IT processes and not in the network.  Yes, it’s true that you need applications to empower workers and to create new online experiences, but you also need connectivity.  The notion of making the network a better partner with applications has been around a long time, and it represents what might be called the “sane approach” to the future of networking.  You can see attempts to reach this partnership goal developing from the application side, in activities like the Quantum virtual network abstraction of OpenStack or the DevOps activities of cloud deployment.  From the network side, the path arguably lies through software-defined networking (SDN).

Most of the focus of SDN discussion has been on high-level SDN services, and this despite the fact that I firmly believe that you can’t software-define a network with a technology that doesn’t touch its nodal behavior.  Nicira and similar above-Level-3 technologies are software-defined endpoints, not software-defined networking.  But does this mean that I’m waiting for somebody to define IP networks through SDN principles?  Nope.

As you rise higher in the OSI stack and enter the realm of large numbers of users and network devices, you make the process of creating a service conception around packet forwarding more difficult.  OpenFlow tells a node how to push a packet, and absent a bunch of intelligence on topology, status, and signaling/control with adjacent smart devices, this isn’t going to get you anywhere.  Push OpenFlow and SDN down even one layer (Level 2) and you reduce the scope of the problem because processes there are simpler.  OSI network layers are known by what they expose to their partners above.  That’s obviously a lot more complicated at Level 3.
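
To see what “OpenFlow tells a node how to push a packet” means in practice, here is a toy Python sketch of a match/action flow table.  It is not a real controller API or the OpenFlow protocol itself, just the bare concept.

    # Toy match/action flow table: the node does only what it has been told.
    FLOW_TABLE = [
        ({"in_port": 1, "dst_mac": "00:00:00:00:00:02"}, ("output", 2)),
        ({"in_port": 2, "dst_mac": "00:00:00:00:00:01"}, ("output", 1)),
    ]

    def forward(packet):
        """Apply the first matching rule; without a controller that knows
        topology and status, the node has nothing else to fall back on."""
        for match, action in FLOW_TABLE:
            if all(packet.get(k) == v for k, v in match.items()):
                return action
        return ("drop", None)  # no rule, no knowledge

    print(forward({"in_port": 1, "dst_mac": "00:00:00:00:00:02"}))  # ('output', 2)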

OpenFlow and SDNs are a movement to turn the network upside down, in that they demand focusing on lower layers to add value by implication rather than on higher layers to add it explicitly.  You can make an SDN “underlayment” that envelops both Levels 1 and 2 of the stack, and have this new structure present a fail-safe connective mesh that can look to any higher-level device or user however it would be convenient for it to look.  You can then simplify the operation and even the protocols and products at these higher layers because a lot of what they look for, like failures, is either going to be handled below or can’t be handled at all.

Is the problem here one of excessive defense?  Are vendors like political parties—they’d rather face the past than face the truth?  Is the industry just not holding them to a reasonable standard?  Light Reading published a list of SDN startups and I’d be hard-pressed to find any vendor on that list who met that to-me-critical basic SDN requirement of being present IN the network and not just riding on it.  Maybe we’re creating both SDN and political problems from the same behavior—apathy.

Extrapolating Yahoo Opportunity from Market Opportunities

Yahoo surprised investors by backpedaling on returning cash to shareholders from the exit of its position in China’s Alibaba Group.  Though Mayer hasn’t said what she’d do with the money instead, one obvious possibility is M&A.  There has been talk about having Yahoo develop or acquire platform service capabilities that would exploit social and mobile trends, so that may be where the target of an acquisition would come from.

On one hand this could be a decent move for Yahoo.  I’m never a fan of tech-company programs to buy back shares or pay out increased cash to shareholders; they tell me the company can’t better use the funds to innovate.  To me, that means they’re not in the real tech space any more, they’re selling appliances with chips in them.  Certainly there’s a potential for an explosive opportunity in the mobile/behavioral symbiosis.  Two questions remain: does Yahoo have any notion of what those opportunities are, and does it have any chance of exploiting them from its current market position?

On the “what do they know” question, it’s encouraging (if true) that mobile/social platforms are an interest because it’s mobile and social that form the leading edge of this whole behavioral trend.  Communications devices imply communication, meaning that it’s logical that if people change behavior to accommodate new mobile broadband options, social behavior would be among the first of the behavioral components to be changed.  However, who these days isn’t saying mobile and social trends are the keys to the kingdom?  We won’t know whether Mayer has any real insight here until something concrete happens.

The “can they do it” question is a harder one.  Yahoo’s position in the market is declining.  Their desire to restore their search and email services to their former luster is logical, but look at Google and Apple with their increasingly “answer-my-question” and “speak with me” bias and you see that you can’t target getting something back in this market, you have to target GETTING AHEAD.  I think that getting ahead today means setting the trend; Apple taught us that.  There is no natural sweep of the consumer market, it’s a vast disorderly Brownian movement of fads and hang-ups and whims and pouts until somebody steps in and makes it otherwise.

This is why I said before that Yahoo needs to partner with the telcos.  They can’t influence mobile/behavioral dynamics on their own simply because it’s nearly impossible for them to start off now and try to overtake giants like Apple and Google who have mobile devices, clouds, and a lot of user hearts and minds.  But while every user surely knows whose phone they have and whether it’s Android or whatever, they also know what carrier supports them.  There’s no other class of player in the mobile space who has a clear incumbency but who doesn’t have a clear position in exploiting that mobile/behavioral future.  So why not unite?  Think on this carefully, Marissa, or you’ll be sorry.

There is another area where Yahoo might shine, and that’s security.  The Woz comments on the cloud and the recent highly publicized breaches of cloud security combine to illustrate that we don’t have the cloud under control in terms of security risks.  What’s needed in security is partly what Apple got with AuthenTec and partly a formalized notion of security inheritance that falls into the general topic of “federation”.  When services connect on behalf of their users there has to be strong management of how that’s done.  Right now that’s not the case.

Do we know how any online account manages the linkages to the others?  Those linkages have to be there or we couldn’t link Facebook with Twitter or Gmail with our other email systems.  I think it’s very possible that Apple plans to do something about security, but whatever it does is likely to focus on further sustaining the closed Apple ecosystem, which means Yahoo could look at solving the problem in a more general way.  Is Woz’s comment about the cloud a coincidence, or is he, as an Apple icon, aware of something?  And by the way, there’s no class of player in the market who understands federation better than a carrier.

We should be realizing now that the Internet, or online services, or cloud services, are just too important to be as casually attended as they are.  Nobody wants to rain on the online parade by trotting out all the skeletons, but stories like those that have surfaced will surely taint the growth of both cloud and online services, social and otherwise, if we don’t face the issues more systematically.  Listening, Marissa?

HP and Network Vendors: Facing Change

Facing up to change isn’t easy; human nature seems to have deeply entrenched the notion that the safest and best is what’s already been done.  Maybe the reason is that new things often produce bad things, particularly for established markets and players.  Today we have two examples of change-facing, one where a company is adapting and another where an industry is still holding back.

HP, while it hasn’t yet reported its earnings, has announced an EPS upside attributable to cost-cutting, along with a major restructuring.  HP is the largest computer company but it’s also spread all over the place, with exposure to virtually every segment of a market that’s been unable to sustain profit margins for players in most areas.  With so many guaranteed misses, it’s hard to find a hit that matters.

It’s hard not to see what HP is doing as akin to sticking old shirts in leaking seams.  Consolidating printing and PCs, for example, doesn’t do anything to raise the sales of either product area, or even reverse the generally negative trends.  It manages costs, and if your commitment to a specific market area can be sustained only by continual reductions in your overhead, you’re facing inevitable exit.  Why not exit now and spend the management cycles on something helpful?

HP is also looking a bit bicameral on services.  On the one hand they acknowledge that EDS didn’t do what they expected it to do, and they’re writing off a lot of service deals.  In that regard, they seem to be following a broader industry trend.  A lot of companies jumped on services without realizing that there are good-returning services and bad, just like products.  Now HP seems to be in the position of saying that they know there are good and bad service areas, but they’re not exactly sure which category a given deal might fit into.  If you can’t make profits on equipment OR services, where exactly do you expect to make them?  The cloud?  We have seen some solid technology options from HP there, and some preliminary positioning too, but not anything that conveys dazzling insight.  They need dazzling insight at this point, and if they think it’s coming then cost management can keep things going till it arrives.  If it doesn’t arrive (or doesn’t exist) then this is the start of a long and dark process.

Our other item comes from a content delivery conference, where DeepField Networks commented that video is changing the Internet, changing its very topology.  I’ve made this comment before too; the fact is that video valuable enough to care about has a large enough audience in a given area to merit caching.  This tends to focus more and more traffic on metro destinations, which means less and less of the traffic has to transit the “core” or peering structure of the Internet.  In addition, the crummy ROI on Internet bandwidth is forcing nearly all the smaller players out of the market, so the Internet is becoming increasingly (as DeepField says) a conclave of giants.

This is another of those two-level issues, but this time for network vendors.  The current mix of trends is undermining the hierarchical nature of traffic, which undermines the gadgets that have been largely responsible for creating that aggregation hierarchy—routers.  Some recent semiconductor data suggests that Cisco and Juniper might be selling less in the way of routers these days.  What replaces them?  In a metro world, investment is strongly slanted toward fiber and switching (Ethernet).  Backhaul, of course, is inherently metro so that exacerbates the shift.  The result is a gradual movement to two product groups that are already less profitable than routing and where router incumbents face even more competition.  You could argue that Cisco’s seeming fixation on videoconferencing is an attempt to promote traffic sources that can’t be cached; live stuff has to go end to end.  The problem is that “video” overwhelmingly means syndicated mass-market content that CAN be cached, and so video growth is driving the very trend that Cisco and other vendors fear.

Most network vendors have, like Cisco, responded to this in part by working to reduce costs.  As always, this is great as a way of getting some runway to use in taking off for a brave new vision of the market.  Is there one, though?  I probably sound like a broken record with this point, but in order for big network vendors to succeed their big buyers need a stronger network benefit case, rooted in benefits that are provably those of the network.  Since I think that nearly everyone would agree the cloud is the vision of a future tight cooperative union between IT and networking, it would follow that forging the tools to support that union is the big opportunity.  That’s even true for HP.

DevOps Dawn or Ostrich Posture?

If you’ve followed my blogging here, you know that I’m a special fan of the class of system tools called “DevOps”.  These tools are designed to automate application deployment by transferring information from the development process into a form of template, container, or script that can then be used to install the application when it’s needed.  BMC Software has made an acquisition (VaraLogix) in the DevOps space, adding to its StreamStep deal last year.  It’s clear that BMC intends to make this space its own, and it’s important to understand why.

Tactically, DevOps is already an opportunity.  Today’s applications are increasingly deployed as complexes, with layers of applications cooperating.  A typical web-based app, for example, would have a front-end web server, a back-end application server, and a database server.  If there’s component redundancy built in, you can see that the structure gets big fast, and that means that it would be fairly easy to make a mistake in deployment that would send it all crashing down, especially when a failure event tested some thinly-reviewed logic path.
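
Here is a minimal Python sketch of the DevOps idea: capture the deployment as data produced during development, then replay it mechanically.  The component names, images, instance counts, and dependencies are invented for illustration.

    # Hypothetical deployment descriptor for the three-tier app described above.
    WEB_APP = {
        "web_frontend": {"image": "shop-web:3.2", "instances": 2, "needs": ["app_server"]},
        "app_server":   {"image": "shop-app:3.2", "instances": 2, "needs": ["database"]},
        "database":     {"image": "shop-db:3.2",  "instances": 1, "needs": []},
    }

    def deploy(plan):
        """Start components in dependency order so nobody has to remember
        (or get wrong) the sequence by hand."""
        started = set()
        def start(name):
            if name in started:
                return
            for dep in plan[name]["needs"]:
                start(dep)
            for i in range(plan[name]["instances"]):
                print(f"starting {name} #{i + 1} from {plan[name]['image']}")
            started.add(name)
        for component in plan:
            start(component)

    deploy(WEB_APP)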

Application integration in the form of worker-specific GUIs is another area where problems with deployment can already be seen.  In the good old days, you ran an application.  Today, increasingly, you run a screen that’s scraped from a couple of applications.  More and more, users have GUI-integrated views of apps that are assembled by exercising functional APIs and not application-specific GUIs at all.

The best part of DevOps, though, is that it plays to the broadest mission for the cloud.  Whatever issues exist in deploying applications on static servers in data centers, they pale by comparison to deployments in the cloud.  Add in hybridization, application integration across the public/private boundary, failover public cloud services, and you have a formula for something that no one will ever get right if it’s done manually.  But even that’s not the end.

We’re about to launch ourselves into the age of composable applications and experiences, a period when the whole notion of what an application is shifts from static to dynamic.  This change, which makes “provisioning” and “orchestration” of components two sides of the same coin, takes us forever out of the world of manual deployments.  BMC is smart; they’re taking a lead in a race that most of their competitors don’t even know is being run.

Then there’s the fact that there simply isn’t any logical way to address network connectivity in a dynamic cloud other than to presume that provisioning resources includes provisioning network resources.  OpenStack makes that explicit assumption, for example.  However, I also think that you can’t presume that applications themselves are the “software” in “software defined networking”.  It’s anarchy to let everyone serve themselves at the connectivity counter without mediation.  At the least, what you do to deploy apps has to be reflected in the connectivity of the application system as a whole.  DevOps software, then, is the “software” in SDN.  That might give BMC an enormous leg up in the world of SDNs.
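
As a rough sketch of that point, the same deployment description can drive connectivity requests, so the DevOps tool rather than the application asks for network resources.  The NetworkController interface below is a hypothetical stand-in for whatever SDN control API is actually exposed.

    # The deployment step mediates connectivity instead of letting every
    # application serve itself at the "connectivity counter".
    class NetworkController:
        def request_path(self, src, dst, bandwidth_mbps):
            print(f"provisioning {bandwidth_mbps} Mbps path: {src} -> {dst}")

    PLAN = {  # same shape as the deployment descriptor sketched earlier
        "web_frontend": {"needs": ["app_server"], "bandwidth": 200},
        "app_server":   {"needs": ["database"], "bandwidth": 500},
        "database":     {"needs": []},
    }

    def deploy_with_network(plan, net):
        for name, spec in plan.items():
            for peer in spec["needs"]:
                net.request_path(name, peer, spec.get("bandwidth", 100))
            print(f"deploying {name}")

    deploy_with_network(PLAN, NetworkController())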

Or not.  The problem is that BMC has never been known particularly for agile DevOps, nor have they been known for flexible DevOps thinking.  Their release on their VaraLogix deal doesn’t anchor things to the cloud at all, and it’s hard to see why the company would waste the bully pulpit of the announcement by not trumpeting their superior strategy…if they knew they had one.

What this says is that BMC (like, let’s face it, so many market leaders today) is focusing on the purely tactical and ignoring the real trends driving DevOps from a backwater scripting activity to the mainstream of the cloud and SDN.  If that’s the case, then competitors might now pick up some other startups and run away with a market that BMC could have owned.  If that happens, I sure hope that others in the software and networking space take note.  BMC isn’t the only one playing ostrich at the market zoo.

Forget Big Data, It’s Time for Big Query!

JP Morgan put out an interesting financial-markets note on “big data”, illustrating the changes it could bring and the value it could create for vendors.  I agree with both those points in principle, but I think there’s a bit more to the problem (and thus to the realization of the opportunity) than the note suggests.

“Big data” is the term we use to describe the mountains of “uncategorized” or “unstructured” information that businesses generate.  Unlike the business-transactional data that’s neatly packaged into RDBMS tables, big data is…well…just out there.  It’s tempting to say that if we analyzed it thoroughly and correctly, we could make better decisions.  It’s also likely correct, but that doesn’t go far enough.

There’s insight buried in virtually every form of information, but is it “valuable” insight?  That depends on the net of what the knowledge you can extract is worth and what it costs to extract.  My argument is that all our big data hype tends to presume that extraction is almost cost-less and that the value is fairly easy to recognize.  On a broad scale, that’s not easy.  On some narrower scales, it’s tantalizing because it offers us a vision of what harnessing big data might really take.

Take health care as an example.  Everyone says that we can improve patient care by exercising big data, but in most cases the examples that are cited (including the ones in the JP Morgan report) are really not about “big data” but about better use of regular data.  There are mountains of prescription information, patient information, etc., and it’s often fairly well categorized, but it’s separated into chunks by administrative boundaries—pharmacies have some, insurance companies have some, and doctors have some.  Could we get things out of the combination we can’t extract from the pieces?  Sure, but how do we cross those company borders?  Nobody is going to give the other guy the data to keep, nor are they going to pay to store others’ data in duplicate on their own systems.

I think what we really need to be thinking about isn’t so much big data but “big query”.  We already have a cloud model (Hadoop) that’s designed to distribute questions based on where the data needed to answer the questions might reside.  The problem is that these systems presume that the data can be independently analyzed in each of the repositories in which it’s stored.  If we want to analyze the correlation of two variables, we need both of them in the same place, and that might well mean that we have to suck all the big data into one massive virtual database to analyze, which is going to be enormously costly.  Further, we could have done this in the old days by sending tapes, and even then it was unwieldy.  We need to make technology work better for us in this problem.

Big query would have to be a technique to perform culling on big data in its distributed form, to extract from it the elements that might meet the correlative criteria on which most data analysis has always depended.  That means sending screening queries to each data node, having them reply with results, and then correlating them centrally.  This is almost a data topology problem in one sense; based on the results from each data node we could pick the cheapest place to join them.  It may also be a probabilistic problem, like finding the God Particle.  We could apply screening criteria with the knowledge that each level of screening would increase the risk we’d miss something by excluding it but decrease the burden of the analysis needed.  So maybe we say “I want a three-sigma or less risk of exclusion” and see what data volumes and costs result, then maybe increase that risk to two sigmas if we can’t afford the results.
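
Here is a rough Python sketch of that “big query” pattern: run a screening query locally at each data node, move only the survivors, and do the correlation centrally.  The data layout and the screening thresholds are invented for illustration.

    # Screen locally, correlate centrally; tighter screens move less data but
    # raise the risk of excluding something relevant.
    import random

    NODES = {
        "pharmacy":  [{"patient": i, "rx_cost": random.uniform(0, 500)} for i in range(10000)],
        "insurance": [{"patient": i, "claims": random.randint(0, 20)} for i in range(10000)],
    }

    def screen(records, predicate):
        """Runs at each data node; only survivors travel to the center."""
        return [r for r in records if predicate(r)]

    rx_candidates    = screen(NODES["pharmacy"],  lambda r: r["rx_cost"] > 300)
    claim_candidates = screen(NODES["insurance"], lambda r: r["claims"] > 10)

    # Central correlation over the (much smaller) screened sets.
    claims_by_patient = {r["patient"]: r["claims"] for r in claim_candidates}
    joined = [(r["patient"], r["rx_cost"], claims_by_patient[r["patient"]])
              for r in rx_candidates if r["patient"] in claims_by_patient]
    print(f"records correlated centrally: {len(joined)} of {len(NODES['pharmacy'])}")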

I agree big data is a great opportunity, as I said.  I think we need to start thinking about the specifics of how to address it and not sweep them aside in an orgy of enthusiasm.  That’s the only way to make sure that the big data wave doesn’t become just a hype-wave flash in the pan.