Is Cord-Cutting REALLY Real?

Data for the last quarter shows that cable and satellite TV providers lost a significant number of customers, and while the media is declaring this a victory for OTT video, I think that’s an oversimplification.  There are major changes in video consumption, some driven by technology, some by economics, and some by population demographics.  We need to look at all of them to understand what’s happening, and then we need to explore the consequences.

TV subscribership is a household affair; people buy subscriptions for independent households, and particularly for families with children.  Every year we create new households as young people strike out on their own, as couples separate, and through immigration.  Every year, marriage, death, and other factors dissolve households.  The total household count is dynamic, but more significant is how that count is divided among viewing segments, particularly households headed by the 18-25 age group.

Our surveys and our modeling of industry data have shown that TV behavior is fairly static in the over-25 population, meaning that the percentage of households with a TV subscription is fairly consistent.  We see a slight dip post-2008, reflecting the economic impact on those with marginal financial resources, but in general TV is so important as an entertainment vehicle that most people will skimp elsewhere to sustain their access.

The one place where we do see a shift is in what we could call “transitional” households.  The households headed by the 18-25 age group are normally the most economically stressed, and they have also had less time to develop a “TV dependency”.  For the period when young adults sustain their own households, single or married, and before children arrive, they are four times as likely NOT to have a TV subscription as the normal population.  While that’s alarming to cable and satellite companies, even that number hasn’t changed all that much.

What HAS changed?  First, young graduates are far less likely to live independently now, so we’re creating fewer households.  Second, people have children later in life, and households without children are less likely to have a TV subscription.  Finally, the mobile broadband generation has learned a different kind of entertainment, one driven first by social interaction and second by viewing.  In their entertainment model, they share and talk about YouTube clips, not TV shows.  That behavior carries over, and as long as they’re not using the TV to babysit, they are 20% less likely to subscribe to TV than their non-broadband-generation predecessors.

What I’m saying here is that cord-cutting is a great story, but there’s still no hard data to show that it’s having a significant impact on television consumption.  If it were, one symptom would be an increased reliance on VoD versus scheduled TV viewing, and the cable companies report that’s not happening.

While the notion of a cord-cutting generation creating an explosion in streaming video warms the hearts of network equipment vendor CFOs, the truth is that any major shift of that sort would likely be a disaster.  We already see that usage pricing is coming, but so far it’s limited to the 5-8% of users who really DO consume a lot of streaming video.  For most Internet users, broadband is still usage-free, and as long as that’s the case the Internet will continue to grow pretty much as before.  If TV keeps its role as the primary video entertainment medium, then the Internet can keep its role as the framework for social and entertainment innovation.

 

 

Verizon’s Strike: Last Gasp of Wireline?

Strikes against telcos aren’t anything new, but Verizon’s current strike may be of special significance because it’s coming at a time when the company is wrestling with a question no one ever believed would be asked: is there any future in telephony?  While Verizon has been profitable, the profits aren’t extravagant by OTT-giant standards, and the profits that do exist are almost entirely on the mobile side.  In wireline, Verizon faces the dilemma of all network operators: how to leverage the loop.

Just short of a third of households don’t even have a wireline phone any more, and that trend is accelerating because nearly all new households (created by recent graduates) have only cellular phones.  On the other hand, TV and broadband are still normally delivered by wires (or fibers), and this means cable MSOs have an advantage over telcos.  CATV can deliver video and broadband and voice at high capacity (1 Gbps) while copper loop is limited to somewhere between about 25 and 60 Mbps depending on the loop length and condition.  The limitations of the local loop are critical because operators have to ask whether to try to leverage it (as AT&T’s U-verse does), replace it with fiber (as Verizon FiOS does), or just toss it and exit wireline.
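
To put those loop numbers in perspective, here’s a rough sketch of how many concurrent HD video streams each access option could carry.  The per-stream bitrate and household overhead are illustrative assumptions on my part, not measured figures:

```python
# Rough capacity sketch: concurrent HD streams each access technology can
# carry alongside a baseline of other household traffic.  The per-stream
# bitrate and the overhead figure are illustrative assumptions.

HD_STREAM_MBPS = 6.0   # assumed bitrate of one HD stream
OVERHEAD_MBPS = 3.0    # assumed web/VoIP/background traffic

access_options = {
    "Long copper loop (25 Mbps)": 25.0,
    "Short copper loop (60 Mbps)": 60.0,
    "CATV/fiber (1 Gbps)": 1000.0,
}

for name, capacity_mbps in access_options.items():
    streams = int((capacity_mbps - OVERHEAD_MBPS) // HD_STREAM_MBPS)
    print(f"{name}: roughly {streams} concurrent HD streams")
```

Under even these generous assumptions, a long copper loop is a three-stream household while CATV or fiber is effectively unconstrained, and that gap is the crux of the leverage-or-replace decision.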

The broadband policy debates worldwide make the issue more complicated, because there’s tremendous government pressure to deliver higher-speed broadband despite the fact that virtually all users choose cheaper over faster where the options exist.  Policies like net neutrality also close off the easy path to monetization: selling your own content at better QoS.  Almost half of the major operators have had internal discussions on whether there is a long-term future in wireline.  Almost a quarter think there isn’t, even on the fiber side.

The implications are significant for equipment vendors because almost 75% of deployed infrastructure supports wireline services, which, given that wireline isn’t anywhere close to 75% of revenue, illustrates the dilemma nicely.  For some operators like Verizon, subscriber economic density is high enough to justify a successor plan for the local loop, but for many operators there’s simply no way fiber can return the investment.  Without fiber, it’s going to be an increasing challenge to deliver video and broadband together, and without that there’s nothing to keep mobile-infatuated users from dropping the only wireline service that’s still profitable.

Look at the labor dispute against this background and you see it’s a risky play for everyone, because any negative trend in wireline costs, including and perhaps especially in the labor component, tends to push operators toward dropping wireline completely and staying in wireless.  The question “What would people who wanted their old phone service do?” is just as irrelevant as “What would people do for home broadband?”  What do people who want a BMW for a hundred bucks do?  Cope.  Operators don’t want to make this critical decision right now; labor doesn’t want them to make it at all.  We’ve seen plenty of destructive face-offs in our world recently; this could be another.

The mobile broadband explosion is also troubling to some strategists on the operator and equipment side.  The question is whether users are becoming tuned to what might be called the “tablet experience model”.  You don’t watch an hour-long TV show, you watch a bunch of three-minute snippets.  That doesn’t consume much bandwidth by wireline standards, but under mobile usage caps it’s another story; the average wireline user with 5 Mbps service, transplanted onto a capped mobile plan, would likely hit the cap in as little as a week.  I’ve said for years that mobile broadband and behavior were creating a kind of hysteresis; changes in one change the other, which in turn changes the first again.  It may well be that behavioral apps are going to be the real revenue stream of the future, that cloud hosting of features that enhance our decisions and lives will be the real “content”.
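
A quick back-of-envelope shows how little wireline-style viewing it takes to exhaust a mobile cap.  The cap size, daily viewing time, and stream bitrate below are illustrative assumptions, not any carrier’s actual plan:

```python
# Back-of-envelope: how quickly would even light, wireline-style video
# viewing exhaust a mobile usage cap?  All three figures here are
# illustrative assumptions, not a real plan's terms.

CAP_GB = 2.0           # hypothetical monthly mobile cap
MINUTES_PER_DAY = 20   # assumed daily video viewing
STREAM_MBPS = 2.5      # assumed effective stream bitrate

gb_per_day = STREAM_MBPS * MINUTES_PER_DAY * 60 / 8 / 1000  # megabits -> GB
print(f"{gb_per_day:.2f} GB/day; cap exhausted in "
      f"{CAP_GB / gb_per_day:.0f} days")
```

Twenty minutes of video a day, a fraction of normal TV viewing, burns a hypothetical 2 GB cap in under a week.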

 

 

Is Ethernet Going Sour?

With Brocade’s cut in guidance on its earnings call, the company joined what seemed a parade of network equipment vendors who’ve called the future of network spending into question.  Most Wall Street analysts have suggested that Ethernet is coming under pressure and that corporate IT spending is likely to be weak.  Both are likely true, but I think the Street is (as usual) content to catalog symptoms rather than address problems.

In the enterprise space, our spring survey found that enterprises were still generally holding their capital plans but were slow-rolling project spending.  A part of the reason was concern over economic conditions, which was a visible issue even before the harsh political face-off that’s virtually killed market confidence this summer.  Another part was some concern over their cloud plans, concern that arose from getting more insight into cost and benefit as they got deeper into the topic.  Both these issues appear to have grown over the summer, and I expect our fall survey to show that.

Data center modernization is the only real driver of network change in today’s market.  Nobody has demonstrated any direct productivity gains out of network change, despite Cisco’s attempts to make telepresence the water-carrier for network expansion.  The problem is that virtualization as a driver for data center modernization appeared to have tapered off even this spring.  It’s not that people weren’t doing it anymore, but rather that the network change part was largely baked and they were back-filling into pre-existing plans.  Cloud computing was the big driver remaining, and the cloud has proved more complex than enterprises had expected.

In the service provider space, I’m seeing the result of five years of declining revenue per bit.  But the thing that’s really hitting now is a more subtle structural issue.  Content, which everyone knows means “video,” is the driver of traffic in both wireline and wireless, to the point that you could almost neglect other growth sources in planning.  But content isn’t “Internet” traffic as most would know it.  More and more content is served out of metro cache points, and so it’s metro capacity that’s consumed.  Metro means Ethernet, and the growth of Ethernet to support content delivery has shifted operators from IP-dominated capital planning and spending to Ethernet-dominated planning.  That process at first tended to favor premium players and products, because early metro/aggregation Ethernet was an expansion of the previous business-focused Ethernet services infrastructure.  In most metro areas today, according to our operators, the impetus for Ethernet growth is consumer video, and that’s the worst service in terms of ROI.  Thus, price pressure on Ethernet is inevitable.

The economic situation enters the picture in two ways.  On the enterprise side, uncertainties about the revenue line will encourage most businesses to hold back, to delay spending as much as possible.  I think it’s pretty likely that Q3 will be soft for that reason no matter what happens at this point with the economic picture, simply because it’s not possible to wash all the uncertainties out of the market by the end of September.  The question will be whether project budgets take the larger hit, which they normally do.  That’s important because nearly all IT spending growth comes from the project side, because that’s where new benefits are typically introduced.  But in addition to representing the incremental spending for this year, the 2011 project budgets reset the baseline for 2012.  If those dollars are not spent this year, then the later spending that depended on those projects is also curtailed.  That’s the risk, and it’s particularly acute given that enterprises will start their fall tech planning for next year in only a month.

The Ethernet shift for operators will be critical because it illustrates that spending is increasingly focused on places where the only differentiation is cost.  Ethernet features are virtually impossible to make meaningful, and “cost” beyond the direct cost of equipment has gotten tied up in conflicting vendor claims, none of which have been compelling to the buyer.  I asked operators what role vendor studies and operations-savings figures played in their selections, and they told me that these were used where it was convenient to build a management justification for the choice they already wanted to make, but almost never actually influenced that choice.  Thus, Ethernet is incredibly subject to pricing/commoditization pressure, and that’s what we’re going to see.

Cisco’s earnings are due on Wednesday, and the Street seems to think they’ll roughly meet guidance.  If they fail to beat the estimates nicely or if they are also cautious in guidance, it will be an indication that the networking industry is in for a very tough patch in 2012.  Switching will be the place to watch too.  The word is that Cisco has been aggressively discounting its own Ethernet products and UCS servers as well, and that’s not helped the industry’s margins in the Ethernet space.  More discounting will confirm my thesis, I think.

 

 

News Buffet

A number of small but interesting news items have emerged in the tech world, and so we’ll do a quick analysis of them before tackling the ugly economic picture.  We’re going to range from network capex to virtualization, but perhaps in the opposite order.

VMware has decided that maybe its pricing was more of a problem than it believed.  As I said in our blog on the change, our analysis showed that most users could expect to pay quite a bit more under the original model, particularly users who had just designed virtualization-ready data center architectures.  The new model doesn’t penalize virtual memory as much as the old.

I think that customer angst here was expected; the big driver is likely to be the sudden softening of the economic picture, attributable to the political deadlock in the US and the sovereign debt problems in Europe.  Virtualization is a cost-management approach and could see a pick-up in bad times, but people want to save money on the means of saving money, so a price increase at the wrong time could hurt VMware’s market share.  Our model suggests that while some users will still see higher prices, the critical new-data-center players will likely find the new pricing model neutral to even slightly favorable versus the old.

DirecTV’s US subscriber numbers were disappointing to say the least (though its profits were up on Latin American growth), and the company has admitted it does have an interest in Hulu, thus in the concept of using OTT streaming as a means of augmenting a channelized model.  Comcast has said that for its part it’s not interested in using OTT to extend its service reach, and the difference in perspective here is what’s interesting.

DirecTV is fighting the natural disadvantages of satellite delivery: the lack of personalization and interactivity.  But that disadvantage set arises out of the fact that they don’t supply broadband Internet.  An OTT supplement is a logical step for them because it could help address their limitations while consuming somebody else’s bandwidth.  Comcast, like all cable operators, has a market-share limit set by the FCC.  It can grow only through something like OTT, but trying that would validate a model that might create a kind of OTT explosion, which works against any broadband provider.

Credit Suisse has issued the third of its operator capex reports, and there’s really nothing new in this most refined prediction.  They think that 2H11 numbers will generally be up over 1H11, and that’s the only place where I part company.  My sources are suggesting that there will be a serious hold-back now because of economic conditions, and I’m tentatively modeling 2H11 as just slightly over 1H11 and down significantly from 2010 levels.  They continue to stress that the ROI on capital projects will be the determinant; money will flow only to where it earns provable returns.

In that regard, I still see the operators’ big problem as the lack of solution-oriented vendor positioning of their wares.  Operators struggling to get “new money” can’t expect to do that by deploying boxes under old paradigms.  That’s pretty obvious, but if it’s true then vendors have to offer some new ones.  What raises ROI?  How does it do it?  Some of my clients are getting so poor a set of monetization-project responses that they can’t even cost out the build-out needed because they can’t find any reasonable assumptions on what products can do now, what they will do in the near future, and what it will cost to augment that to meet monetization goals.

Well, we’ve avoided the point as long as we can, so let’s take up the economy.  The big stock dump of the last couple of days (depending on what time zone the exchange is in) shows first and foremost the problem I predicted in our 2008 analysis.  Europe has a common currency and divided government, and so it was unable to mount an effective stimulus program.  That led to reduced economic growth, which in turn led to lower tax revenues and higher safety-net costs.  The sovereign debt crisis is the result of that, pure and simple, as I’ve said all along.

What’s changed is that the US now has the same thing.  If this isn’t a classic example of divided government in a unified economy, I couldn’t suggest where one could be found.  The impasse, even though corrected at the last minute, has sapped the confidence of both consumers and business, and that’s enough by itself to slow economic growth.  It did.  It’s not over.  Slower growth here means less support for EU growth through exports to us, and that exacerbates their growth problem, which exacerbates their sovereign debt problem.  Which reduces their ability to buy our exports, which makes our growth slower.  You get the picture.

The Washington Post said it beautifully yesterday:  “The forecasts and models created by agencies such as the International Monetary Fund emphasize the point: Miss a revenue or spending target and the numbers look a little worse; miss the growth forecast and debt spirals out of control.”  Our debt fight simply guaranteed our loss of control, and we now join Europe in playing catch-up from behind on a problem we once had control over.  The committee to implement the debt compromise hasn’t formed yet, but the politicking is already underway.  Do you think this is going to go better?  If it does, it will only be because politicians on both sides are staring into that death spiral the Post described.  The employment numbers today were unexpectedly good; the confidence crisis isn’t pushing us to disaster quite yet.  We have perhaps two months to fix this, and then we’re in another recession that could easily be as bad as 2008’s.  Remember, doubters: I bucked the positive trend early in that crisis and said it would be the worst since WWII.  I could be right here too.

Cisco’s Video Changes

Cisco is consolidating its video activities into a single unit and its Videoscape head is leaving.  The decision seems an odd one to me if you look at things from a market perspective.  Videoscape was arguably the most complete suite of content delivery elements available from anyone, but the sheer scope of the product seemed to confound the sales process and especially customers.  But is a single video unit the best way to promote video delivery?

Operators have all been eager to monetize video, but while it’s been easy to set objectives for these projects and at least possible to outline functional requirements, operators are still having a tough time mapping all the functional blocks onto products they can buy or software they can build or contract for.  In theory, Videoscape could have been the mechanism to support that effort, since it has all of the blocks.  For example, Videoscape even includes a service bus architecture that would serve admirably as the technical foundation for a service layer aimed at content monetization.  The problem is that it was never presented effectively.  We’ve seen Cisco presentations that either raised issues and never addressed them, or that praised features without putting them in a value context.

Creating a single end-to-end vision of video, one that includes both streaming/channelized and collaborative, is in one way interesting and potentially highly useful, and in another way likely to further dilute messaging.  Yes, video monetization has to embrace any delivery model.  Yes, streaming and collaborative video have much in common in terms of service-layer elements (they fall out of a single approach in our current application-note monetization example).  But if Cisco was never able to make Videoscape sing as a solution, how does making the orchestra bigger really help?

Maybe it helps by creating a unit that could be sold or spun out.  One possibility here is that Cisco is preparing to divest itself of the whole video area, and of course having the whole video area under one organizational roof would make that easier.  Furthermore, a video-centric subdivision might be attractive to a bunch of players, from Apple to Microsoft to Google to even IBM and Oracle.  More buyers, more bidders, more shareholder value.

The FCC Does it Again!

The FCC released its first fairly detailed study of Internet performance in promised-versus-delivered form, and while it has some interesting stuff in it, there’s also a rather substantial internal contradiction in the whole study that is troubling for our ability to set broadband policy.  It seems the government has ignored the whole basis for IP networking.

In the good old days of TDM, the “speed” of the network was set by the rate at which the network interface clocked data in and out.  A T1 line delivers 1.544 Mbps all the time, or it’s broken.  That capacity is available whether traffic is being passed or not, and since most data applications don’t actually consume 100% of capacity, the rest is wasted.  When packet services evolved in the 1980s, their cost savings were based on their ability to divide information into chunks (packets) and intermix them so that idle periods on a trunk were used effectively.

IP is a packet protocol, and IP savings over TDM are based on that same principle.  The packet concept saves money by using wasted space, so the corollary truth is that if there’s no wasted space there’s no money saved.  A traffic source that uses all the capacity rather than chunks of it leaves no gaps for other traffic to fill.  In effect, that source is consuming bandwidth like TDM did.

The speed at which the packet source can present data when it has it is the speed of the interface.  Any synchronous digital interface “clocks” data in and out at a fixed speed, which is its rated speed.  Think of it as creating a fixed number of bit-buckets, each of which can hold a bit if there’s one presented.  Traffic from a group of interfaces, like cable modems or DSL lines, is aggregated upstream, and the aggregation trunks fill the gaps in one user’s data with information from another, so the trunks’ aggregate speed is not the sum of the speeds of the individual interfaces.  That’s why we can sell consumer broadband service for under twenty bucks a month when 64 kbps of TDM would cost five times that amount.
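
Here’s the trunk-sharing arithmetic in miniature; the subscriber count, duty cycle, and trunk size are illustrative assumptions chosen only to show the shape of the economics:

```python
# Toy oversubscription model: an aggregation trunk sized far below the sum
# of its subscribers' interface speeds.  Subscriber count, duty cycle, and
# trunk size are illustrative assumptions.

SUBSCRIBERS = 500
INTERFACE_MBPS = 20.0  # each user's advertised "clock" speed
DUTY_CYCLE = 0.03      # assumed fraction of time a user is actually sending
TRUNK_MBPS = 1000.0    # the shared upstream trunk

tdm_style_capacity = SUBSCRIBERS * INTERFACE_MBPS  # sizing as if this were TDM
expected_load = tdm_style_capacity * DUTY_CYCLE    # packet-style sizing
print(f"Sum of interface speeds: {tdm_style_capacity:,.0f} Mbps")
print(f"Expected offered load:   {expected_load:,.0f} Mbps "
      f"on a {TRUNK_MBPS:,.0f} Mbps trunk")
print(f"Oversubscription ratio:  {tdm_style_capacity / TRUNK_MBPS:.0f}:1")
```

That ten-to-one sharing, under these assumptions, is exactly where the twenty-dollar price point comes from.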

So how does this impact our dear FCC and its study of Internet speeds?  Well, they’ve determined that most Internet users don’t get 100% of the advertised speed, meaning the clock speed of the interface.  But 100% of the interface speed 100% of the time would be TDM, which nobody has.  They have a nice chart that shows who does “best” and who does “worst”.  The problem is that all they’re measuring is the degree to which the operators’ aggregation networks are utilized.  FiOS does best, so does that mean it somehow guarantees users TDM bandwidth?  No, it means that FiOS isn’t currently utilized to its design level, so users see less aggregation congestion.  By the FCC’s measure, the operator with the best Internet would be the one with no customers to congest it.
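
A toy model makes the point: what a “percent of advertised speed” test captures is essentially the user’s fair share of the aggregation trunk among concurrently active users.  The interface and trunk figures below are illustrative assumptions:

```python
# Toy model of what a "percent of advertised speed" measurement captures:
# the user's fair share of a shared aggregation trunk.  The interface and
# trunk speeds are illustrative assumptions.

INTERFACE_MBPS = 20.0  # the advertised speed (interface clock rate)
TRUNK_MBPS = 1000.0    # the shared aggregation trunk

for active_users in (10, 50, 100, 200):
    fair_share = TRUNK_MBPS / active_users
    measured = min(INTERFACE_MBPS, fair_share)
    print(f"{active_users:3d} active users: {measured:5.1f} Mbps "
          f"({measured / INTERFACE_MBPS:.0%} of advertised)")
```

A lightly loaded network scores 100% and a well-utilized one doesn’t, which is precisely the ranking the FCC produced.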

There are two problems with all of this.  First, you can’t make good public policy with data that demonstrates nothing useful or even relevant.  Second, we’re again encouraging a model of consumer broadband, and an expectation set, that are totally unrealistic.  The only way to give users 100% of interface speed all the time is to give every one of them a dedicated path to every content resource.  Making it look like the most uncongested (likely meaning lowest-populated) network is “best” encourages churn that then populates that network and makes its realization of interface speed less than 100%.

 

Reading the iCloud

Apple’s iCloud is advancing quickly to production status, and with the progress comes more clarity into what the service will offer.  Three things about iCloud caught my eye: Windows integration, the pricing for data storage, and the potential competition with Microsoft’s Live strategy.

I’ve noted in the past that one of the biggest issues in cloud computing adoption, and one that is virtually never mentioned, is the cost of storage.  Standard storage pricing from market leaders in the space would put the cost of a terabyte of storage at over a thousand dollars a year, which is more than ten times the cost of buying a terabyte drive and twenty times the marginal cost per terabyte for many data center disk arrays.  With typical installed lives of three years, the annualized cost of internal storage is closing in on ONE PERCENT of the cloud cost.  Apple’s iCloud pricing is even steeper: at $100 a year for 50GB, a terabyte would cost two thousand dollars a year.

It doesn’t take rocket science to see that we’re pricing cloud storage an order of magnitude or more beyond the equivalent cost in the data center, and many cloud services also charge for outbound delivery.  Those delivery rates could double the effective storage cost just by churning that terabyte once a month.  Thus, current cloud pricing policies would discourage the deployment of mission-critical enterprise apps by pushing storage costs way above any possible point of justification.  We’re creating cloud computing for the masses, but not for masses of data.
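
Here’s that arithmetic worked through, using the figures cited above; the array marginal cost and the per-GB delivery charge are my illustrative assumptions:

```python
# Annual cost per terabyte: cloud versus amortized internal storage, plus
# the effect of outbound delivery charges.  The array marginal cost and
# the per-GB delivery rate are illustrative assumptions.

CLOUD_PER_TB_YEAR = 1000.0                 # "over a thousand dollars a year"
ICLOUD_PER_TB_YEAR = 100.0 * (1000 / 50)   # $100 per 50GB -> $2,000 per TB
ARRAY_MARGINAL_PER_TB = 50.0               # assumed one-time marginal cost
INSTALLED_LIFE_YEARS = 3.0
DELIVERY_PER_GB = 0.08                     # hypothetical outbound charge

internal_per_year = ARRAY_MARGINAL_PER_TB / INSTALLED_LIFE_YEARS
churn_per_year = 1000 * DELIVERY_PER_GB * 12  # 1 TB delivered once a month

print(f"Internal array:  ${internal_per_year:8.2f} per TB-year")
print(f"Generic cloud:   ${CLOUD_PER_TB_YEAR:8.2f} per TB-year")
print(f"iCloud pricing:  ${ICLOUD_PER_TB_YEAR:8.2f} per TB-year")
print(f"Monthly churn:  +${churn_per_year:8.2f} per TB-year in delivery fees")
```

Under those assumptions, one full churn of the terabyte per month roughly doubles the generic cloud figure, which is the point.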

The Windows connection with iCloud shows that Apple sees the service more like iTunes than like the App Store.  iTunes is a profit center, while the App Store is a feature of iOS that helps build value for the devices it supports.  iCloud is going to be a money-maker in itself, and that demands that Apple open a path to the largest installed base of PCs, which is Windows.  But even this factoid demonstrates something interesting: Windows dominates PCs, not appliances, and so iCloud must have a strong value proposition for PCs as well as for tablets and smartphones.  It has to leverage local resources more, because on PCs there are more local resources to leverage.

Which brings me to Live.  Microsoft has wanted Live to launch it into online success, but it’s never been able to create a compelling value proposition for Live given the resources available to Microsoft users on Windows and its application base.  That’s in large part due to the fact that Microsoft was so worried about creating something that would take users away from Windows or Office that they forgot that Live had to do that to some degree to have any utility.  They hunkered down on defense, and there’s only a small distinction between battlements designed to serve as a springboard for attack and those designed for your Last Stand.

 

 

Huawei Goes in For the Kill

Huawei, who has been gaining influence by leaps and bounds simply because it’s the network industry price leader, showed real gains in strategic insight in our most recent survey.  Now, Huawei is demonstrating that it intends to keep up its “build-a-strategy” trend by naming a kind of “Chief Security Officer”.  The mainstream thought is that this is intended to alleviate fears by government agencies that Huawei is in some way a spying conduit for the PLA or something.  It’s not.

You don’t have to be a genius to figure out that a company’s naming of a CSO wouldn’t make that company itself less of a threat.  What’s the goal here, then?  It’s to build on Huawei’s growing lead in the networking market as a strategy leader and start to move into specific areas where early opportunity exists.  Security is a major issue for consumers and businesses as well as for service providers, and in the latter case the issue cuts both in the direction of self-protection and in the direction of managed services opportunity.

Our survey of enterprises found that the cloud computing statement they identified with most was “Only the network can secure the cloud”.  If operators selling network services like VPNs would add a cloud security offering to those VPNs, it would likely sell well with enterprises even if it were positioned separately from a cloud offering by that operator.  That’s critical because operators today have a minuscule share of the cloud market, and enterprises are very likely to dawdle a bit on cloud planning until they fully grasp the implications.  On the security side, they already know.  Not only that, a cloud security offering could grease the sales skids in positioning cloud services.  Who better to buy a cloud service from than the provider of your network security services?

For competing vendors this is another example of fiddling, this time while opportunity burns.  All of the major vendors offer some security tools, but none of them have created effective cloud security positioning, even those who have offerings arguably directly aimed at the cloud, including Cisco and Juniper.  And here’s Huawei, who vendors have historically seen as little more than a street peddler complicating a sweet sales deal by standing outside the Macy’s window, moving aggressively and effectively to make something of the opportunity.  Yet another “shame-on-you-for-your-turtle-pace” moment.

Network equipment isn’t a growth market any more.  A major Street research firm has terminated coverage of ten network equipment vendors, and we’ve noted in past issues that more and more analysts are saying that network equipment spending in the service provider space is now monetization-limited.  The only hope of the network vendors was to create a killer service-layer strategy to fend off Huawei’s aggressive competition.  That’s now increasingly unlikely to happen because most don’t have a framework for a service layer, a platform productization of such a framework, or any idea how to build monetization applications.

On the latter point, we’ve undertaken an effort within our ExperiaSphere project to create an application note that describes how, based on a presumed ExperiaSphere model of a service layer, operators could build a solution to their monetization needs.  We’ve drawn the requirements from two critical operator use cases, content and telepresence, and we plan to publish a detailed implementation map.  We have received strong comments of support from big operators on that effort, and when we finish our document (likely to be 12,000 words or more, with a dozen illustrations) we will make it available freely on our ExperiaSphere website.  We hope that operators will use it to decipher the complexities of content and telepresence monetization, the principles of a reusable-component-based model of a service layer, and a foundation for some very specific vendor RFI/RFP activity.  We have to tell our operator friends that we believe only they can drive the service layer fast enough to make a proof-of-concept trial possible by this time next year.

 

 

Street Signals, Wrong Signals?

Financial analysts and investors seem to have decided that networking as a sector is in trouble, but most seem to have missed the point on why that’s the case.  Yesterday, the markets sent Alcatel-Lucent down about the same 20% that Juniper fell on the day before, suggesting that they believed both companies faced identical headwinds.  In point of fact, Alcatel-Lucent’s revenue line was good but the company had higher costs, reflecting among other things expanded R&D (Juniper’s stock continued to fall yesterday, over 3%).  Today, the pundits are mostly focusing on how both companies need to “cut costs”.

If a company faces temporary outside forces that limit buyer interest, then it makes sense to cut costs while those forces are acting, and to then expand when they disappear.  The problem is that networking isn’t facing “temporary outside forces”.  Let me quote from Credit Suisse, one of the research firms that’s gotten it right:  “We expect the ongoing disconnect between revenue growth and bandwidth economics to drive an ongoing shift in carrier capex to specific projects focused on revenue generation or cost savings—such as wireless backhaul, cloud/data center build-outs, and extension of VoIP infrastructure—that will benefit certain product markets and vendors while posing challenges to others.”

We’re seeing a fundamental problem with bandwidth economics.  Bits are less profitable every year, and people want more of them.  There’s no way that’s a temporary problem; something has to give, and it’s capex.  In wireline, where margins have been thinning for a longer period and where pricing issues are most profound, operators have already lowered capex year over year.  In mobile, where profits can still be had, they’re investing.  But smartphones and tablets are converting mobile services into wireline, from a bandwidth-economics perspective.  There is no question that over time mobile will go the same way.  In fact, it’s already doing that.

To halt the slide in revenue per bit, operators would have to impose usage pricing tiers that would radically reduce incentive to consume content.  If push comes to shove, that’s what they’ll do.  To compensate for the slide, they can take steps to manage costs but most of all they can create new sources of revenue.  That’s what all this service-layer stuff is about, of course.

The three big network vendors who have done badly in their quarters, from the Street’s perspective, are Alcatel-Lucent, Cisco, and Juniper.  Others in the network layer, like Ciena and Tellabs, have also taken a hit.  Produce bits and you don’t support profit; you only help operators provision to a new level of ROI marginalization.  In contrast, we have players who are winning, like Acme Packet, who have held on to some pretty decent share prices by focusing on things that fit the more-profit-justifies-more-spending mold.  Session border control and deep packet inspection are hardly unique to them, but the big vendors are big because they have big bit-pushing gear commitments.

Alcatel-Lucent deserved a better fate from Wall Street because it was able to grow its IP business by nearly 30%, which demonstrates that it was able to leverage higher-layer differentiation into the IP layer.  That’s what everyone in the IP and lower-layer networking equipment world needs to do, but in order to do it you have to get that higher-layer differentiation, and nobody seems to be taking that seriously.  So Wall Street may be right here, or it may be pre-judging.  It’s pre-judging if the vendors get their heads out of the bitpipe and see reality; it’s right if the vendors stay the course.  Ironically, the Wall Street cry to cut costs favors the wrong path, because you can’t innovate in the service layer without any R&D to innovate with.

 

 

Is Half a Loaf Enough for Alcatel-Lucent?

Alcatel-Lucent announced its numbers this morning, and while their results met expectations on the revenue side they fell short of Street estimates on the profit line.  That sent their shares skidding pre-market, making them another telecom equipment casualty.  The financial analysts are calling this a second-half market weakness, but of course it’s more than that.  Some of the big research firms correctly pointed out a couple weeks ago that capex was in decline and that the only redemption would be improved monetization.  A few even pointed out that monetization was the focus of those projects most likely to get funding.

What makes Alcatel-Lucent interesting is that they are one of the strategic-influence winners.  Their revenue line suggests that their mobile services strength was indeed enough to get them good engagement.  The thinner profits suggest that they need a better and broader monetization link to sustain margins in competition with the increasingly credible and aggressive Huawei.

The concept of Application Enablement that’s been a foundation of Alcatel-Lucent positioning for several years is a good one: make the network a partner to applications and you provide a means for operators to monetize new services by creating new network-enabled applications.  The company’s work on the API and developer end of the story has also been strong; they have, in fact, the strongest and most credible developer program aimed at creating developer-enhanced high-level services.  Their issue has been that they lack an articulated framework for authoring those enabled applications.  This is the same problem that everyone else in the space has been grappling with, the one we said Juniper had to solve in our blog on them yesterday.

What is interesting is that we KNOW that at least one of Alcatel-Lucent’s competitors has such an architecture, but the question is whether it’s been “articulated”.  NSN did a preso at the Dublin TMF World meeting, and in it they showed the outline of a service-layer approach that’s completely consistent with the picture we draw and completely compatible with the content and mobile monetization frameworks we’ve published over the last two months in Netwatcher.  They even fit the cloud monetization model that’s scheduled for publication this month.  What’s missing there is, first, an open and public release (analyst material isn’t public unless we’re told it is, and in any case buyers may or may not have seen it), and second, the details on how the architecture can be used to build services.

It almost seems like the race here is to tell people about something already done rather than to do it, and I’m confused over the “why” of holding back.  I know from both surveys and from project reviews of operator content monetization activity that Alcatel-Lucent’s details aren’t making it to the buyer level, and in most cases the buyers don’t know NSN even has details to share.  I guess the guy who sings first is going to win this one.

Alcatel-Lucent’s 28% growth in IP revenue should be of concern to its competitors because it seems to me a pretty convincing indicator that you can sell more routers if you can show buyers how routers make money and not just carry traffic.  Routers out-grew their mobile stuff, in fact.  If this growth trend continues, then Alcatel-Lucent poses a major threat to any of Cisco’s back-to-basics intentions.  It would also put Juniper on notice that loss of influence in services could translate to loss of market share to Alcatel-Lucent, who in my view is already Juniper’s biggest threat.