Is Akamai/Cotendo a Sign of a New CDN Age?

Akamai has reportedly won a bidding battle with AT&T and Juniper for mobile-content-optimizer Cotendo, a company the media describes as a “cloud” player in keeping with the current notion that anything that’s not cloud is not newsworthy.  This is interesting not for the “cloud” aspect but because it speaks volumes about the dynamics of video.

First, this demonstrates that everyone in the industry realizes there’s a fundamental tension between OTT delivery and the ROI on the infrastructure that creates the bandwidth.  They realize that growth in video is going to generate network congestion unless operators put usage caps and premium overages in place.  That will bridle the unbridled expansion of OTT, so nobody really wants it.  The use of specialized techniques to optimize video might either reduce the problem through compression or dodge it in some way (the latter, I think, is unlikely).  Net-net, we need to use capacity smarter, because eventually we’re going to pay for it incrementally.
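Just to size the issue (every number here is assumed for illustration, not taken from any operator’s data), a back-of-envelope run shows why even modest compression gains matter once caps and overages arrive:

    # Back-of-envelope only; all figures are assumed for illustration.
    hours_per_month = 30        # video viewing per subscriber (assumed)
    bitrate_mbps = 2.5          # typical stream rate (assumed)

    gb_per_month = hours_per_month * 3600 * bitrate_mbps / 8 / 1000
    print(f"{gb_per_month:.0f} GB/month at {bitrate_mbps} Mbps")   # ~34 GB

    # A 30% compression gain at the same quality cuts what counts
    # against a usage cap proportionally:
    print(f"{gb_per_month * 0.7:.0f} GB/month with 30% better compression")

Against a hypothetical 50 GB cap with premium overages, that difference is the difference between comfortable and billable, which is exactly the kind of “using capacity smarter” I mean.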

The second thing this demonstrates is that Akamai and the other content networks are under enormous profit pressure of their own.  Everyone who wants to be a content player needs a CDN component in their story, which makes the current market a breeding ground for Akamai competitors.  The network operators are going into CDNs at a record pace; 2011 saw more operator CDN projects launched than any prior year.  These guys are going to be formidable competitors, particularly because they have a tolerance for low ROIs.  Akamai needs secret sauce, and Cotendo might have it.

Finally, this demonstrates that CDNs’ scope is expanding.  I recall a meeting a couple of years ago with a major competitor of Akamai regarding trends in the space, in which I mentioned that the CDN of the future would likely have to run applications.  They weren’t impressed.  Bet they are now.  Operators have been saying that all their content strategies revolve around CDNs, but those strategies demand more than traditional CDNs deliver.  That’s good news for the others who might have wanted Cotendo, because even Cotendo isn’t truly on content’s leading edge.  There’s still room for innovation here, and a LOT of deals to be done.  We’ll be talking a bit more about CDN dimensional change in January’s issue of Netwatcher.
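To make the “CDNs will have to run applications” point concrete, here’s a minimal sketch (Python, with an entirely made-up API, not any vendor’s actual platform) of the difference between a classic cache-and-forward edge and an edge that executes application logic, of the device-aware sort Cotendo trades in:

    # Hypothetical sketch: a CDN edge that runs application logic rather
    # than just caching bytes.  All names here are illustrative.

    cache = {}   # url -> response body

    def classic_edge(url, fetch_origin):
        # Traditional CDN behavior: serve cached bytes or fetch from origin.
        if url not in cache:
            cache[url] = fetch_origin(url)
        return cache[url]

    def application_edge(url, device, fetch_origin):
        # "CDN of the future": per-request application logic at the edge.
        body = classic_edge(url, fetch_origin)
        if device == "mobile":
            body = optimize_for_mobile(body)   # e.g., transcode, minify
        return body

    def optimize_for_mobile(body):
        # Stand-in for real adaptation logic (image transcoding, script
        # stripping, harder compression).
        return body.replace("  ", " ")

The caching function is pure cost reduction; the application function is a service, and services are where the margins are.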

Is Verizon’s Outage a Symptom of Service-Layer Tension?

Verizon’s network suffered another of those large-scale outages, and there’s no official word on just what caused the problem.  One story I’ve heard is that it’s the same as last time: an overload of the IMS-based signaling system.  If true, the problem illustrates one of the major challenges for both vendors and operators.

IMS has been seen by telephony purists as the natural utopian goal to which all mobile networks aspire.  To web guys, it’s kind of like innovation with both arms and legs tied up.  Underneath the religious war is a simple truth: modern broadband services based on the web model are provably scalable, and that’s not yet true of IMS.  Many believe it never will be; some (myself included) believe we shouldn’t be trying to find out in the first place.

IMS is based on a session-managed notion of communication in which connectivity is authorized.  I remember seeing a diagram of the signaling proposed for accessing a website via IMS, and it made ISDN call setup look like blowing a kiss to a passing car.  The issue, then, is less how IMS works than how it’s applied.  For broadband web-like apps, we need a web model.  However, web guys have been just as intransigent as IMS guys; the latter refuse to come up with a logical web-service model, and the former refuse to come up with anything useful for pay-for services.  Operators don’t want to give mobile broadband away, and they don’t want to kiss off their current mobile voice revenues, so they need a transition strategy.
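To see why that diagram was so sobering, consider a deliberately oversimplified sketch.  The classes and message names below are illustrative stand-ins, not real SIP/Diameter signaling (which involves far more exchanges than I show), but they capture the structural difference between session-managed and stateless access:

    # Oversimplified contrast: session-managed (IMS-style) vs. stateless
    # (web-style) access.  All names are illustrative stand-ins.

    class FakeIMSCore:
        # Each method stands in for one or more signaling round trips.
        def register(self, user):       return {"user": user}
        def authorize(self, reg, url):  return dict(reg, url=url)
        def setup(self, auth):          return auth
        def send(self, session, url):   return "content of " + url
        def teardown(self, session):    pass

    def ims_style_fetch(core, user, url):
        reg = core.register(user)        # signaling round trip 1
        auth = core.authorize(reg, url)  # round trip 2
        session = core.setup(auth)       # round trip 3 (often several)
        body = core.send(session, url)   # the actual request, at last
        core.teardown(session)           # plus cleanup signaling
        return body

    def web_style_fetch(user_token, url):
        # State rides with the request; the network just moves packets.
        return "GET " + url + " with Authorization: " + user_token

Every step in the session-managed path is network state that has to be created, held, and torn down.  Under load, that state is exactly where a signaling overload of the kind rumored at Verizon would bite.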

Vendors have implicitly divided themselves on this issue, with traditional players like Alcatel-Lucent, Ericsson, and NSN taking a pro-IMS stance and IP players like Cisco and Juniper taking a pro-web stance.  Note that I didn’t say “anti-IMS”; it’s risky to put yourself in the path of a carrier cash cow heading for the monetization barn.  Both camps seem frozen into functional immobility.  My view is that the future will be a kind of “IMS-plus” world, where IMS manages voice and registration and data services are handled another way.  What way?  That’s what vendors and operators need to hammer out.  Verizon’s problem may stem from trying to accommodate a transition nobody really wants to face.

 

Reading Oracle’s Results

Tech giant Oracle reported, and it wasn’t pretty.  For the first time in a very long time, Oracle disappointed in both performance and guidance.  Pretty much every aspect of its business was weaker than the Street expected, but the hardware guidance (off 4% to 14% next year) was considered dreadful by many.  This is one of the hottest players in tech, so the question everyone is asking is “Is this a sign?”  Yes, obviously; but of what?  In my view, all of this is due to the deadly combination of “structuralism” as a driver and uncertain economic conditions.

Hardware sales were the problem, in guidance as I’ve already noted but also in current-quarter performance.  Software was not stellar but certainly in range.  The thing about hardware is that it’s just something you run stuff on.  There’s no “benefit” in a business sense, only the ability to realize a benefit created by something else.  So in times when economic conditions are uncertain, hardware is the deal that gets put off or called off.  Even “consolidation” projects spend cash in the present to get overall reductions in future cost, and right now, spending today to realize value later doesn’t look like a prudent step.  And it’s all because we can’t create PRESENT value.

I’ve noted before that tech is evolving without any significant increase in total benefits.  It’s like we’ve wrung what we can in the way of productivity enhancement and this is where we’ll be forever.  Under those conditions, only changes in tech that lower the cost line can be promoted, and the problem with cost-based justification is that buying lower-cost stuff always looks more expensive up front than staying with the stuff you have.  Structural change demands more stability on the business side, in other words.  We don’t have that these days, so Oracle and other tech players face the challenge of justifying change when staying the course looks safer.  It looks safer because there’s no upside; the best you can hope for is that future costs are no higher than present ones.  Benefits are off the table.

The other challenge Oracle has is that in a structurally driven market under economic pressure, the broader you are, the harder you fall.  It’s impossible to shield the broad market from broad impact.  Think about it: replacing one specialized product in bad times can be justified more easily than replacing everything.  If you’re Oracle and you make everything, you’re going to get wet when the negative economic tide comes in, no matter how artfully you try to dodge.

If this is all true (and I’m convinced it is) then there’s a collateral issue everyone faces: market share.  Obviously, in a market under pressure to stay the course there’s no meaningful market-share gain to be had; you make less money even if you gain a little share in a market that’s contracting.  However, it’s also true that without benefits to increase overall spending it’s hard to have structural expansion of tech without favoring incumbents.  The player who needs new benefits the most is the one who wants to gain share on competitors.  It follows that if you’re an incumbent who wants to put away your opponents forever, figuring out new benefits to drive spending is the way to do it.  Do that, and your competitors are locked out of their best chance…forever.
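To put rough numbers on the share point (the figures are mine, purely for illustration): suppose a market contracts 10% while you claw two points of share from a rival.

    # Illustrative arithmetic only: a share gain in a contracting market.
    market_before = 100.0                # total market revenue, arbitrary units
    market_after = market_before * 0.90  # market contracts 10%

    my_share_before, my_share_after = 0.20, 0.22   # gain two share points

    print(market_before * my_share_before)  # 20.0 before
    print(market_after * my_share_after)    # 19.8 after; you "won" and still shrank

You win the share battle and still make less money; only new benefits that grow total spending change that arithmetic.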

 

What Now: For AT&T and for the Cloud

AT&T has decided to drop its attempt to acquire T-Mobile, citing regulatory opposition.  Some on the Street are putting a good face on the deal’s breakup, saying that it will encourage AT&T to spend more on infrastructure and help vendors.  Alcatel-Lucent is up in early trading in Europe as an expected recipient of all this extra spending.  I’m not sure about this one across the board.

There’s not much question that conditions in the mobile industry favor consolidation; ARPUs are expected to peak late next year, for example, and a reduction in the number of competitors generally improves overall economics.  On the other hand, it does appear that the second-tier players in US wireless have been more price-competitive, and AT&T’s customer satisfaction ratings have been at the bottom for several survey seasons now.

What DT will do now with T-Mobile is unknown, and that’s what makes me uncertain about the prediction of vendors sitting under the money tree.  If T-Mobile has to be cost-trimmed in order to be dealt away, its capex could drop and offset any gains from AT&T (which are speculative in any case).  It seems pretty likely to me that DT would try to arrange another US deal, but a merger with Sprint is about all that’s left on the table; regulators would probably like a Verizon merger even less than one with AT&T.

It would be helpful for mobile evolution if we could get all of this out of the way, though.  Carriers tend to be more conservative with capex during periods of M&A and consolidation; they want to see how the deals shake out, and they hold back some cash just in case.  Management is also preoccupied, which tends to delay projects driven at the executive-committee level, including all of the monetization stuff.  Hopefully that won’t happen with the US operators this year; they may be somewhat immune because their own projects have advanced further than the global average.

Savvis is reporting that cloud customers are getting more demanding, according to Light Reading.  That fits with our survey results from the fall, and it can be attributed to the fact that as cloud projects progress, buyers find more things they didn’t expect and demand more information and clarification.  In our surveys, buyers’ own measure of their “cloud literacy” has followed an interesting pattern.  At the start of the process, over 80% say they’re cloud-qualified.  By the middle of their pilot testing they rate their current cloud literacy at half the level they started with, and they also say they “knew nearly nothing” when they started, reducing their original score in retrospect.

I really think the numbers haven’t changed a whit.  Users have consistently told me that about 24% of their current IT spending could be cloudsourced.  Their battle now is first to figure out just how to accomplish that, but increasingly a second question is whether the number is even true.  Cloud prospects have disqualified more applications than they’ve qualified so far, and of course they started with the application demographics that were most favorable.  They’re not throwing rose petals as much these days, which is probably a good thing.  Reality always sells better in the boardroom.

 

Is Analytics Leading Us to the New Age?

IBM is telling Barron’s that analytics is the next big thing, and they’ve got enough history in the “correct” column of the tech ledger that we have to take them seriously.  IBM and Northwestern are even going to launch an academic program focused on analytics.  The only problem I have with the claim is that the examples IBM’s Palmisano used to illustrate it are a little behind the curve.  The intelligence that lets computers win Jeopardy, so the theme runs, makes better decisions for us all.  The truth is more complicated.

There is an issue in drawing actionable conclusions from masses of data, to be sure.  There have been a number of recent stories on how various strategies for recognizing correlations might move us in new directions.  As somebody who’s been modeling market behavior for three decades now, I’m well aware that finding patterns in data is complicated.  Certainly we could use improvements here, and certainly we could make better business decisions if we had better answers to the old questions.  Isn’t that what Jeopardy is about, after all?

The question is whether the old questions are the right questions, I think.  Paradigm shifts often don’t really shift paradigms, we just say they do.  The business data we’ve collected may not be the right data; the business framework that applies the decisions may be less than optimum.  In these cases, have we shifted the paradigm?  Analytics may solve the wrong problems faster, but if they are the wrong problems there’s a limit to how helpful that will be.
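Here’s a toy example of the trap (the data are invented): two series that both happen to trend upward will show a near-perfect correlation whether or not they have anything to do with each other, so better pattern-finding on the wrong data just finds the wrong pattern faster.

    # Toy illustration: correlation without causation.  Data invented.

    def pearson(xs, ys):
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = sum((x - mx) ** 2 for x in xs) ** 0.5
        sy = sum((y - my) ** 2 for y in ys) ** 0.5
        return cov / (sx * sy)

    years = range(8)
    it_spending = [100 + 5 * t for t in years]            # steadily rising
    coffee_sales = [40 + 3 * t + (t % 2) for t in years]  # also rising

    print(round(pearson(it_spending, coffee_sales), 3))   # close to 1.0

No amount of analytic horsepower turns that correlation into a decision worth making.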

And help is needed if tech, in business applications, is to resume its normal trajectory of growth.  We’ve now gone a full decade without a new kicker in tech spending, without a new paradigm to link IT and productivity in some novel way that would free more benefits to justify more costs, meaning more computers and networks.  It would sure be nice if IBM were right and it were only a matter of diving into our facts a little deeper and finding new meaning.  But I think we’re beyond that.

We have never before had a ten-year period in which IT spending growth was pegged at the bottom of its historical relationship with GDP growth.  We always swung up immediately after we turned down…always, that is, until ten years ago.  I think the fact that we’ve broken a pattern that lasted through the whole computer age is an indication that looking deeper, meaning looking backward, isn’t the complete solution.  To create a new dimension in business technology we need to empower the worker in more insightful ways.  Fortunately, a vehicle is presenting itself.

I think the new paradigm has more to do with Apple than with IBM.  If everyone has a gadget that’s their literal window on the world, then we can see a lot we could never see before.  We can do things we could never have done.  That’s the key transformation.  Yes, that will open a need for a new vision of analytics because we’ll be asked questions that, like a Jeopardy match, demand an immediate answer.  Why?  Because those questions will be asked at the instant the answer is needed.  If tech is integrated into our lives, then it advances at the pace of life, and life’s pace is relentless and accelerating.

We here at CIMI Corporation are entering our third decade as a company, as a source of strategic market insight, and as a publisher of a network strategy journal.  Truth be told, I’d never have guessed that I’d see a Netwatcher with a Volume 30 designation.  But if you’d told me thirty years ago that a computer would eventually beat a Jeopardy champion, I’d have believed it.  There’s a lesson here.  The things that are hardest to touch are the things that touch us personally, regularly, daily.  Technology’s real revolution, the one that’s underway now, is the revolution that merges tech and us into virtual and real worlds that coalesce in a thousand wonderful ways.  That new world will empower new analysis, but analysis won’t be the driver of that world.  We will be, and from us the drive will extend through our personal tech appliances/windows and change every aspect of how we live and work.  Right beside us, right in our pockets, is the leading edge of that new age.

IBM sees a glimpse, and so does Apple.  The question for 2012 will be “Who sees enough?”  Can a company grab onto this massive wave of change and ride it to what could be a whole new level of success?  Who do you think it will be?  IBM, or maybe Apple, or Microsoft, or Cisco or Alcatel-Lucent?  Somebody off the grid?  I suspect that we’ll answer that question next year.

 

Knocked Over by the Winds of Change

RIM turned in a truly ugly quarter and announced its new handset family would be late.  The company is a poster child for the biggest challenge in technology, which is how a company that’s very successful under a given market paradigm can confront a major paradigm shift.  RIM was king of the smartphone when the smartphone wasn’t that smart, and they failed to recognize that the iPhone truly changed the picture.  Once you get caught behind the wave, change-wise, there is little or nothing you can do to catch up.

In the phone/tablet space, of course, RIM isn’t the only guy looking at the dust generated by the Leaders of the Pack.  Microsoft has fiddled through more firestorms in their market than I can count, and only their massive market power has kept them from disaster, at least so far.  Interestingly, the tablet that was the proximate cause of RIM’s problems is also the cause of Microsoft’s.  Yes, phones changed radically, but it’s the tablet that really changes things.  By creating a portable device that’s less than a PC and more than a phone, you not only capture a bigger chunk of PC apps, you create behavior that translates down-size to a phone more easily.  The tablet is thus a bridge that eases big-screen fans’ way onto little screens.  A tablet failure makes a phone failure all the harder to survive.  RIM is learning that now.  The question is whether Microsoft will learn it.

Apple, responding perhaps to Amazon’s Kindle Fire, is now rumored to be prepping a 7-inch tablet.  That’s going to put another stepping-stone on the bridge from large to small, and only magnify the extent to which the workers and consumers of the world become dependent on “point-of-activity intelligence”.  We are transforming the whole of the communications market and the evidence is literally all around us every day.

In network equipment, I’d assert we have a similar (and related) shift underway.  The value of networking can’t be expressed in “convergence” any more; we converged.  Pushing bits has become less profitable every year, and operators are naturally focusing on what’s MORE profitable.  If you hark back to the golden age of telephony, you’ll recall that custom local area signaling services (CLASS) like call waiting were the real cash cows, and they were LOGIC features, not BIT features.  UBS analyzed some recent private-equity buys in the networking space and noted that there’s typically a pretty significant software contribution in their makeup.  To quote the (dry) words of their summary, those being bought “had a higher relative degree of software orientation vs. the broader comm equip industry, and we believe this [is] reflected in the [margin] profiles”.  That’s because software creates features at a lower cost and a faster pace, which is just what you want if your bit-intensive business is trapped in the market shifts caused by things like smartphones and tablets.

UBS also downgraded Juniper, citing secular demand issues (the bit market sucks), a slow ramp on QFabric (positioning has been uninspired except in a few narrow verticals), and management defections to Cisco.  The last point is, I think, doubly troubling.  First, it’s never a good thing when your arch-rival can pull good people out of your organization; it suggests they have a better opportunity trajectory than you do.  Second, most companies face change not from the top down or the bottom up, but from the middle.  Thought leaders there can get an audience with the top people and at the same time stay connected with the implementations.  That’s where Juniper has been losing most of its people, at a time when it needs software insights that are real and integrated.  Of course, the Street being what it is, another analyst firm upgraded Juniper yesterday.  You can take your pick.

RIM took its pick, and believed the mindless supporters of the status quo.  The whole of network equipment, faced with exactly the same pressures for change from exactly the same forces, has to make a choice now, and every one of them is doing just that.  Some just don’t realize they are.  If you want to find an equipment winner, Dear Buyer, look for people who are not just standing their ground.  Soon there will be no ground for them to stand on.

 

Clouds and Chips

Alcatel-Lucent is working to improve its position in the enterprise with an OmniSwitch story that links to its carrier cloud story, a combination it calls “Mesh”.  The step is a smart one because the cloud is the largest driver of data center change, but the market timing is a bit late: enterprises rated this sort of thing more important six months ago than they do today.  It’s not that the cloud isn’t important, but that enterprises have found they have to do a lot more to get from cloud hope to cloud realization.  In our fall survey we found that cloud futures were slightly less likely to influence buyers of data center switching than in the spring survey.

Data center switching procurement isn’t the only place where enterprise cloud thinking is shifting.  Enterprises are rethinking just what they’d do with cloud computing and how much of it is likely to happen.  The leading edge of this position is taken by companies who are deciding that if they use cloud technology to create application mash-ups, they can personalize stuff for workers, add new cloud-hosted productivity elements, and preserve much of their current IT infrastructure, software and hardware alike.  In fact, the average enterprise no longer believes that “private clouds” mean buying something new or changing their data center architecture.  The consistent broadening of cloud positioning has something to do with this, I’m sure; you can’t say that everything is a cloud (in effect) without enveloping the present as well as the future.  But it’s also true that early hybrid cloud projects have demonstrated that it’s hard to create applications that elastically move between data center and cloud unless you use componentized software and SOA.  If you’re SOA-ized already, you can likely do it without changing your current IT at all.
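A minimal sketch of why componentization matters (all the names and the endpoint below are hypothetical, invented for the example): if the application consumes a component through an abstract interface, moving that component from the data center to a cloud host is a binding change, not a rewrite.

    # Minimal sketch, all names hypothetical: componentized (SOA-style)
    # software makes data-center-to-cloud relocation a binding change.

    from abc import ABC, abstractmethod

    class PricingService(ABC):
        # The abstract contract the application codes against.
        @abstractmethod
        def quote(self, sku: str) -> float: ...

    class LocalPricingService(PricingService):
        # Implementation running in the enterprise data center.
        def quote(self, sku):
            return 100.0   # stand-in for a local database lookup

    class CloudPricingService(PricingService):
        # Same contract, hosted behind a (hypothetical) cloud endpoint.
        def __init__(self, endpoint="https://cloud.example.com/pricing"):
            self.endpoint = endpoint
        def quote(self, sku):
            return 100.0   # stand-in for a remote service call

    def checkout(pricing: PricingService, sku: str) -> float:
        # The application neither knows nor cares where the component runs.
        return pricing.quote(sku) * 1.08

    print(checkout(LocalPricingService(), "sku-1"))
    print(checkout(CloudPricingService(), "sku-1"))

If your software already looks like this, hybridizing is cheap; if it’s a monolith, the “private cloud” is mostly a new name for the data center you have.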

In the carrier space, Verizon has announced that it’s upgrading FiOS infrastructure to be 100G-capable in about a half-dozen cities.  This doesn’t necessarily mean that FiOS to the customer will be any faster, and interestingly, the same day this news came out, another story reported that UK users aren’t adopting the highest broadband service speeds available.  For services over 25 Mbps, only 4% have subscribed even though such services are offered to nearly 60% of the market.  This is fairly consistent with US experience, where the evidence is that offering fast broadband may be a competitive advantage but few will actually buy it.  Operators in our survey tell us that customers are now, and have always been, clustered at the low end of the broadband service range.  The lowest price is the best answer.  I think that also argues against the view that offering premium handling for broadband traffic like video has much of a future; people will simply take best-efforts unless it’s truly awful, in which case they’ll change providers.

Consumerization is hitting the tech space overall, and the impact is drifting down the food chain toward the semiconductor space.  Lam Research is buying Novellus; both make technology used in chip production.  The goal is to provide a framework for creating faster, better, but predominantly cheaper chips as technology focuses more on inexpensive consumer devices that can’t sustain high semi prices.  So is this consolidation?  Sure, though it’s also likely a near-term competitive push-back against Applied Materials, which has been a strong player in the space.

 

A Transformational Rumor

If we had to pick the parameters of a really big rumor in networking, it would be hard to top the one that started circulating Monday afternoon.  Verizon, says the rumor, may buy Netflix!  Here’s a telco, one with FTTH and telco TV, picking up an OTT video player!  Is this the beginning of the Great Era of All-Internet-Everything?  Well, you can probably guess that while I’m not ruling out the rumor, I’m skeptical about that interpretation.

I’ve been blogging for a couple of days now about the issues with wireline services in general and Verizon’s business shifts in particular.  The sum of my position is that only TV viewing can justify a loop, that DSL can’t support TV viewing all that well, and that Verizon’s deal with the cable consortium (spectrum in return for resale rights) may indicate that Verizon itself is thinking about exiting the DSL business.  Instead it would focus on FiOS where FiOS is already deployed, let the copper plant rust in place, and try to shift copper users to remarketed CATV.

OK, you can see how this latest story might play into that view.  Look at Friday’s blog: first you develop your own media properties into something you can leverage outside your own footprint, essentially disintermediating other LECs.  That gives you a plausible revenue stream with lower capex.  How do you suppose Netflix would fit into that model?  This is what I’ve been worried about.  If broadband ROI falls and OTT opportunity rises, then operators start thinking “Hey, I could be a contender if I were an OTT!”  I don’t think Verizon plans to exit the physical part of the network, but it darn sure sounds like they plan to exit some of it, and to confound their Great Rival AT&T by selling Netflix over AT&T pipes.  Of course, the rumor would first have to be real, and second it would have to end in a deal that passed regulatory muster.  Neither is assured.

If you give me credit for predicting this one, then perhaps you’ll indulge me on my view of the evolution of “networking”.  We are at the end of the push-bits-for-a-profit period and at the beginning of the “fulfill user desires directly for a profit” period.  What vendors produce that serves the latter is what will keep their lights on.  I’m not saying the network isn’t an essential pipe for delivering stuff, but I am saying that you can’t be in the delivery business while others fulfill the retail demand.  If the rumor is true, that’s the message, loud and (hopefully) clear.  Profit in network equipment has to come from forging a more affirmative link between the network and the source of profit; carrying the water (or bits) isn’t enough.  The vendors have to climb the stack up to where profits are generated, which is hard.  They also have to link their successes in the service-layer space back to their network technology so they can move the big iron and generate revenues.  Service layers are differentiators, but they’re not, for the box players of today, replacements for the old transport/connectivity market.

 

Adtran and HP: Wrong Moves?

Adtran is going to buy NSN’s wireline broadband business, a move that I think says a lot about the space overall.  Obviously you don’t make a compelling offer to sell a business that’s got nowhere to go but up, so even outside the US, wireline broadband is in trouble.  I wonder if Adtran isn’t implicitly admitting that the business is in DEEP trouble here in the US too.  The company hasn’t been turning in bad numbers, but the bloom has long been off the DSL rose here unless you presume continued upgrades to the outside plant to shorten the loop.  Some operators have told me they’ve concluded that running DSL at rates approaching 50 Mbps incurs most of the cost of FTTH, largely because you can’t leverage the loop plant by shortening it unless the initial copper aggregation point is pretty close to the home.

The big problem is that you have to put a DSLAM where there are loops to attach to it, and if the loops were initially thousands of feet long they likely don’t converge at any convenient point short of their current point of concentration, which is usually too far out for high-speed DSL.  If you try to redirect the loops you’re pulling copper along new paths, which is crazy given that you could pull fiber instead.  If you accept lower loop density per connection point, you have to deploy too many fiber-fed DSLAMs at too low a utilization.  The point is simple: only TV delivery is profitable in wireline, and on long-loop DSL we can’t easily deliver TV even in HD, much less 3D, and still do broadband.  So NSN is smart to dump its wireline assets, and Adtran may be signaling some real problems.
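A rough, purely illustrative calculation of the density problem (every figure below is assumed for the example, not operator data): shortening loops to raise DSL speed multiplies the number of fiber-fed DSLAM sites while dividing the subscribers each one serves.

    # Purely illustrative arithmetic; every figure is assumed.
    import math

    subscribers = 10_000        # in one serving area (assumed)
    original_reach_ft = 15_000  # loop reach from the current CO (assumed)

    for max_loop_ft in (15_000, 3_000, 1_000):  # shorter loops -> faster DSL
        # Sites needed scales roughly with the ratio of the areas served.
        sites = math.ceil((original_reach_ft / max_loop_ft) ** 2)
        print(f"{max_loop_ft:>6} ft loops: ~{sites:>4} DSLAM sites, "
              f"~{subscribers / sites:,.0f} subscribers each")

    # 15,000 ft:   1 site,  10,000 subscribers
    #  3,000 ft:  25 sites,    400 subscribers each
    #  1,000 ft: 225 sites,    ~44 subscribers each

Each of those sites needs fiber backhaul, power, and real estate, which is how 50 Mbps DSL ends up costing most of what FTTH does.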

Some will say that the solution to all of this is streaming TV, but I’m a skeptic here.  If you have high-quality broadband you can surely stream TV, but if getting high-quality broadband means FTTH then you can deliver channelized TV too and that’s more dependably profitable.  The goal of TV isn’t to justify IP deployments, after all, it’s to entertain people.  Most of them want a multi-channel experience.

HP has some news too; it’s discontinuing its current TouchPad line as promised, but it’s making WebOS open-source and says it will release a WebOS-based tablet in 2013.  That sounds like “pie in the sky by and by” to me: safely beyond the current planning horizon, and possibly beyond Whitman’s tenure at HP.  Why?  Because nothing she can do will restore HP’s luster fast enough.  Betting on open-source WebOS rather than Android is just crazy unless you think you’re somehow going to sponsor WebOS better than Google sponsors Android.  After Google’s MMI buy?

The data center is where HP has to shine; Leo was right about that.  Wrong about how to be a success there, though.  A single app doesn’t make you a data center player.  HP has a good asset story in the data center but their lack of a convincing cloud strategy has really hurt them.  That lack would be understandable if everyone had jumped out early and claimed all the good defensible positions, but in our fall survey we found that only 11% of enterprise technologists said they could confidently articulate ANYONE’s cloud strategy fully.  Furthermore, almost 90% said that they believed that the popular view of the cloud was inaccurate.  So here’s the cloud, the most important philosophical concept in all of IT, with a wide-open space where vendor engagement and articulation should be, and HP decides to be a lukewarm follower?  Gosh, “clouds are artifacts of alien zombies” would have been more exciting, or at least press-worthy, and that’s what HP needs now.  You can’t lead a market from a position of invisibility.