Wireline: Too Many Hands, Not Enough Pockets

Some of the changes in networking that are looming on the horizon may start a lot closer to the couch if you read the right hints from current trends.  The battle between Google and Apple for TV position may be driving some other players to change their plans.  For example, Verizon has been offering more streaming-media-friendliness in FiOS and there are now indications that it may also be planning to push a Hulu- or Netflix-like offering outside its territory.  It may be that we’re seeing the entertainment video space at the beginning of a period of major change, but that’s not necessarily going to mean a victory for IPTV or streaming media.

All of this starts with the fact that wireline is already seriously unprofitable and likely to get worse.  Operators face the stark choice of a major plant revamp, either shortening the loops radically or replacing them with fiber, something that some operators could manage in a pinch but that others simply can’t contemplate.  Verizon’s cable deal suggests that it may have a middle-ground approach, which is to cede to cable the customers it can’t afford to upgrade to fiber.  The FCC’s regulations this year (neutrality and, perhaps more important, Connect America) put operators in a bind to improve broadband just when copper and DSL are likely to be stretched.  So our change starts with pipe issues.

Then there’s the Google and Apple stuff.  Operators are doubly injured, from their perspective, by the launch of what are effectively Internet TV broadcasters.  The operators carry the traffic with no incremental compensation, and their own media opportunities are undermined.  Streaming will not likely match the content of traditional linear TV for a decade or more, but it will siphon off premium channel subscriptions, extra rooms, and so forth, not to mention the more price-sensitive tier.

So the reaction of operators is simple, at least in plan.  First, you develop your own media properties into something you can leverage outside your own footprint, essentially disintermediating other LECs.  That gives you a plausible revenue stream with lower capex.  Second, you deploy premium loop, meaning fiber, where you can justify it, and elsewhere you partner with cable companies, hoping that the partnership gives you a retail markup on their services and convinces your marginal customers to flee to them.

The obvious question here is whether all of this creates a sustainable broadband market.  I’ve noted for some time that DSL is not competitive with CATV cable for multi-service delivery.  U-verse is a dumb idea that eventually has to founder on capacity-versus-cost problems.  At this point, I don’t think common carriers can refresh their plant with cable any more than with fiber, even though the pass cost for CATV is lower.  Thus, what we are surely going to see is operators actively trying to flee the bottom of the DSL customer list and making up the revenue loss in mobile or out-of-area streaming.  A focus on streaming only exacerbates the capacity problems and low revenue per bit of Internet broadband, and that could discourage plant upgrades, which would lead to greater congestion.

The point I’m making here is that the problems of the online ecosystem are real; they’re already impacting very basic business decisions that will govern how the Internet works in the future.  Regulations can’t compel businesses to lose money; they’ll just fold up.  Connect America can’t subsidize everything; there can be no contributions into the program if nobody can offer service without being subsidized by the very program they’re expected to fund.  Will carriers pull loops out of the ground?  No, but what’s surely going to happen is that more unprofitable areas will be sold off to other players who are small enough to qualify for RUS subsidies.  That again builds up the pool of carriers who take rather than contribute.  In short, this isn’t a good thing we’re seeing.

A Warning Signal on Mobile/Carrier Capex

I’ve talked about the profit problems of network operators for years now, simply because they’ve talked to me about those problems for years.  In rough terms, revenue per bit has declined by 50% per year for five and a half years running, and this decline means that profits on infrastructure have dropped sharply even though operators have put price pressure on vendors.  All of the challenges of networking that we hear about, including Ciena’s miss yesterday, can be directly attributed to the simple fact that you can’t pay more to produce something that keeps selling for less and less.
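
Just to put that decline in perspective, here’s a quick back-of-the-envelope illustration.  The 50%-per-year figure is the one quoted above; treating it as a straight compound decline over five and a half years is my own simplification, purely to show the scale of the problem.

```python
# Back-of-the-envelope compounding of a 50%-per-year decline in revenue per bit.
# The 50%/year figure comes from the text above; treating it as a simple annual
# compound decline over 5.5 years is an assumption made purely for illustration.

annual_retention = 0.5        # fraction of revenue per bit retained each year (a 50% decline)
years = 5.5

remaining_fraction = annual_retention ** years
print(f"Revenue per bit remaining after {years} years: {remaining_fraction:.1%}")
# Roughly 2% of the starting value, which is why flat-to-rising cost per bit
# translates into collapsing infrastructure profit.
```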

Today, Morgan Stanley bucked the shaky consensus that wireless profits, too, were under pressure but still above ground, saying that their model shows post-pay ARPU for the US giants likely trending negative by the end of 2012.  With seven-plus years of financial inertia to contend with, operators can draw the curves themselves from their inside numbers, and it’s clear they can see where they end up.  The AT&T drive to consolidate via T-Mobile is, as I’ve said, an indication that economies of scale are now required in wireless, which isn’t the sign of a market with profit growth still baked in.

Another interesting data point is Verizon’s deal with the cable guys.  The latest speculation on the Street is that what Verizon will do next will be a shocker; they’ll start reselling cable broadband and TV even inside their own region!  Why?  Because the profit on DSL is simply not there, and FiOS can’t be made profitable for any more of their footprint than the current 20-million-homes-passed target, at least under the current price scenarios.  I’ve noted for some time that the real competition between carrier and cableco wasn’t fiber versus cable but cable versus DSL, and pressure from the FCC to upspeed the baseline broadband rates is only going to make DSL harder to justify.

Some on the Street speculate that Verizon might then shift to using its extra spectrum to replace DSL with wireless, but I think that’s a misunderstanding of intent.  It would only hasten the price commoditization of wireless.  What I think Verizon is doing is replacing “wire” wireline, meaning copper and DSL, with…nothing.  They’d love all those customers to shift to cable because they can’t serve them profitably.  After all, hasn’t Verizon already been selling off areas where copper loop had limited upside?

From an equipment vendor perspective this is alarming, not that equipment vendors haven’t been reading the tea leaves on this for half a decade already.  The drive to exploit marginally free bandwidth has created enormous profits for a few VCs and early investors, but it’s destabilized the underlying bit industry.  Operators have seen this coming and worked to gain traction in those new higher-layer services, but without any real vendor support.  Now, our survey says, it’s right on the edge of too late, and the Street’s indicators seem to back this up.  We are going to see a major shift in capex next year, a shift toward cloud and IT equipment and away from transport and connection.  What will be hit the most, says our model with striking irony, is routing.  Most valuable Internet traffic today is metro traffic, which rides on optics and switching.  Yet Ciena shows that’s no great business either.  So where do operators invest?  Where it makes them money, which is up above the network and not in it.  Our model says this will be visible even next year and a juggernaut trend in 2013.

A Duel of Cloud Visions

Cisco is making a big play in the cloud, or for the cloud, with CloudVerse, an architecture that in some ways has the same goals as the one that Alcatel-Lucent announced last month (CloudBand).  Everyone is trying to get a place in the cloud, hoping in part no doubt that the inherent fuzziness of the cloud will admit them.  Cisco, though, has more credentials in the space than most and so you have to take their approach seriously.

In functional ingredient terms, everyone has to build the same cloud.  Clouds are a system for creating a sharable (multi-tenant, in public cloud instances) pool of resources, for assigning those resources to applications/users, and for managing the process to ensure efficiency and cost-effectiveness.  In practical terms, what separates cloud players is what they can offer that’s special within this mix and how broadly they can cover the missions.  The former gives them differentiation and may assign a service target (one that values the special something), and the latter gives comfort to buyers who doubt their own skills.  These days, that’s pretty much everyone.
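
To make those three ingredients a bit more concrete, here’s a deliberately minimal sketch of a shared pool with tenant assignment and a management-style utilization check.  It’s my own toy illustration, not any vendor’s architecture, and all the names and numbers are invented.

```python
# Toy model of the three cloud ingredients named above: a sharable resource pool,
# assignment of resources to tenants/applications, and management for efficiency.
# Illustrative sketch only; not any vendor's actual product or API.

class ResourcePool:
    def __init__(self, total_units: int):
        self.total_units = total_units
        self.assignments = {}                      # (tenant, app) -> units assigned

    def assign(self, tenant: str, app: str, units: int) -> bool:
        """Assign capacity to a tenant's application if the pool can cover it."""
        if self.used() + units > self.total_units:
            return False                           # a real manager would queue or grow the pool
        key = (tenant, app)
        self.assignments[key] = self.assignments.get(key, 0) + units
        return True

    def release(self, tenant: str, app: str) -> None:
        self.assignments.pop((tenant, app), None)

    def used(self) -> int:
        return sum(self.assignments.values())

    def utilization(self) -> float:
        """The 'management' view: how efficiently the shared pool is being used."""
        return self.used() / self.total_units

pool = ResourcePool(total_units=100)
pool.assign("tenant-a", "crm", 30)
pool.assign("tenant-b", "analytics", 50)
print(f"Pool utilization: {pool.utilization():.0%}")   # -> 80%
```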

Cisco’s secret sauce in the cloud is completeness of solution, in my view.  Cisco has every piece of technology you need for cloud-building, including the servers.  Where Alcatel-Lucent focuses on data center interconnect because it can’t populate a data center, Cisco can articulate a simple story.  Clouds start with big resource pools that we can build, and grow out through big networks that we can also build.  Yes, Cisco needs to articulate the cloud story better to align its capabilities with the real business trends, but they have an easy story to create and sell.  If they get good at it, it will put a LOT more pressure on competitors to do something smart on their own, and precious few network vendors can realistically match Cisco’s scope.  With HP in management ruins and IBM out of networking at least in terms of having their own products, Cisco may be the only one-stop shop in the cloud mall.

Arch-rival Juniper is taking the other approach, the “cloud secret sauce” with its new drive to promote data center fabrics as the logical heart of the cloud.  This story also has inherent strength since clearly a cloud has to start with a resource pool, but like Alcatel-Lucent’s CloudBand the Juniper tale needs more collateral to be as effective as a potential Cisco story.  The challenge for Juniper is articulating a cloud position that their fabric can be the centerpiece for, as Alcatel-Lucent’s is to articulate the overall scheme of cloud services that justifies and empowers their interconnect vision.  What this says is that cloud positions have to be holistic; if they aren’t naturally complete in a product sense then they have to be complete in a vision sense.  That fits with the fact that cloud prospects, particularly network operators or other public cloud-builders, want a strategy that they can drop in with a minimum of extra integration effort and delay.

I think everyone in the cloud space is minimizing the most important truth about the cloud, which is that the cloud isn’t about an alternative to enterprise data centers at all.  The real notion of the cloud has been transformed by usage and expectation, transformed into a vision of a flexible, elastic, and universally available pool of knowledge and compute power.  Yes, in theory, that new pool could be used to implement the old stuff, but more than that it could transform how we use computing and information, coupling it more tightly to our behavior.  Yes, I know this is my usual “mobility/behavior” theme, but the fact is that where work and workers can be concentrated into buildings, the notion of ubiquitous availability of IT resources is rather lame; the resources are effectively ubiquitous already because their use has been confined to those buildings.  What makes the cloud different is that it doesn’t require massing humans together to justify it; it presumes they’re distributed, and that’s clearly the case.

The jury is still out on just what “distribution vehicle” will drive all of this, or whether a single driver is even necessary.  You can say this is about smartphones or tablets or game consoles or anything else and be wrong, I think.  This is really about the notion of a general set of information appliances that become more integrated with our lives and work practices, and that are agents for a new model of information empowerment.  This shift will generate a lot of winners, and losers too.  Which camp will you be in, Dear Vendor?

Facing Up to Transformative Times

It’s something of a day of change in technology, particularly if you believe some of the speculation that’s floating around.  Certainly 2011 has been a challenging year for vendors, and it may be that the challenges will be only more formidable in 2012.  Given that, it wouldn’t be surprising if we saw some dramatic market shifts and vendor actions.

Microsoft’s move with Verizon to make the Xbox into an STB is pretty obviously a move to make the box more populist.  Kinect has certainly helped make the Xbox a leading game platform, but there’s a limited market for games, particularly in terms of demographics.  Microsoft wants the Xbox to be an entertainment hub, something that would validate the fond hope of everyone in the PC space that a computer would become a fixture in every living room.  A big part of this move is that Microsoft knows gaming is shifting more to portable devices, and Microsoft certainly has its own aspirations with phones and tablets.  Why create cross-currents that could erode your own product lines?  Get one product family safely out of the way of evolution for the other.

Staying with tablets and cross-currents for the moment, Dell has dropped its Slate family after having laid a tablet egg with it.  The problem I think Dell had (and continues to have) is that it’s so worried about its PC sales that it’s apologetic about positioning anything that might impact the PC even a little bit.  On the one hand, it didn’t want to miss the tablet revolution, but on the other it didn’t want to overhang its laptop business.  So it fumbled with both hands, I guess.  They never really did anything with Slate other than say they had it, which was hardly an inspirational marketing tactic.  Now they’re out of tablets and they’ve called their marketing into question.  They also are looking a bit like HP in terms of fumbling, which isn’t a role model I think they wanted to adopt.

I’m hearing a lot about how network vendors might be facing their own future changes this week.  NSN started a broader reconsideration of strategy with its decision to pull itself into a kind of marketing spear focused at the heart of mobility.  That aligns the company with the current capex focus, at least on the network equipment side.  Ericsson has arguably been gradually doing the same thing for years, focusing on mobile and professional services where margins are higher.  Now at least some Street analysts are suggesting that maybe Alcatel-Lucent is going to have to follow suit.

Ciena is kind of a poster child for the current issue set, ironically; it’s not a broad player but rather a player who adds capacity to a world that’s supposed to be starving for it.  So why then is it getting dissed by the Street?  Because margins, meaning profits, stink.  Yes, users want more Internet bits, but they won’t pay more for them, which means there’s enormous price pressure on the producers of those bits and on the equipment they use.  The trend in networking, as I recount in more detail in the Annual Technology Forecast issue of Netwatcher this month, is to push traffic handling and capex downward to the lower layers because it’s cheaper there.  The whole of networking, in fact, is commoditizing.  Ciena, even though it sits in a spot that can’t be eliminated—you need bits—can’t be very profitable there.  So that’s why Alcatel-Lucent, Ericsson, and NSN can’t be bit-pushers either.

For Alcatel-Lucent, this could mean a spin-out of some of the lower-level technologies that are building revenue and engagement but not profit, but the problem with that is that Alcatel-Lucent’s strategic influence with buyers has literally been in a class by itself because of its product breadth.  Can they sustain that influence in the face of a constriction of product offerings?

The cloud is the common ground in this story.  NSN, by focusing on mobility, may have effectively abandoned it.  Ericsson has abandoned it.  Alcatel-Lucent just embraced it, but not as fully as I believe they need to.  The cloud is likely to be the first new capex focus in networking, but it’s a focus that’s not as much on network equipment as on IT—servers and software and storage.  Network vendors need to have a place in the cloud or they all end up squabbling over RAN scraps, and mobile bit-pushing will commoditize as inevitably as wireline has.

While Alcatel-Lucent’s cloud strategy could be interesting, Cisco and Juniper are really the players to watch here.  They’re the guys with the most at stake, after all.  IP, the mainstay of both companies, is commoditizing and traffic is migrating downward in terms of handling, toward Ethernet and optics.  Something has to be done for these guys.  Cisco has servers and all of the pieces of the cloud, and so could be the vendor who manages to create a cloud story that’s network-empowering.  Juniper has no servers and it has the narrowest product portfolio of all of the network vendors.  That’s hurting the company in terms of strategic engagement.  In our most recent survey, Juniper stands alone at the bottom of that chart.  So the question is first whether a network-friendly vision of the cloud can be created at all.  If it can, will Cisco have a better shot given that it has all the pieces, or will Juniper have a better shot given that it has all the motivation (or should)?


SAP and the Cloud

SAP announced it was buying a US web/cloud software company (SuccessFactors) that provides human resource management and productivity tools, a move to boost its total cloud repertoire and thus be more competitive as a cloud provider.  The decision shows that the SaaS business is starting to create problems with product scope that didn’t exist for hosted software, and that’s an important shift in my view.  It shows that the adoption model for SaaS is one of one-stop-shop even when the same buyers have rejected that notion as unnecessary in their data centers.

Our surveys suggest that there are a couple of reasons for this change.  First, nearly everyone in the buyer space says that the fewer cloud providers you have, the better.  It’s not a matter of economy of scale, it’s a matter of minimizing integration and support issues, they say.  The second point is that there is a different buyer constituency for SaaS than for installable software.  Two-thirds of all SaaS buys are directly driven by operations personnel and not by IT, and these people frankly don’t give a hoot what earlier IT practices were.

SuccessFactors has a cloud-coupled architecture and a core product set, and it’s reasonable to wonder whether SAP intends to roll some of its own stuff into the SuccessFactors framework, to roll a version of SuccessFactors into hosted form, or to just keep the two elements separate.  It seems to me that simply succeeding in human resources planning doesn’t leverage the cost of the M&A enough.  Making the software hosted moves backward from the benefit, so it’s likely that over time SAP intends to make this the packaging for a broader set of tools targeted (as SuccessFactors is) primarily at SMBs.  That would set SAP up for the growth opportunity the cloud represents.

That’s a reflection of something I’ve also picked up in surveys, which is that the important market in cloud computing isn’t IaaS even though that’s where more than 60% of current spending is focused.  Higher-layer services do more for the user and so justify higher prices and profits.  You can’t replace IT and support functions with IaaS, only hardware servers.  You can replace both with SaaS, and that’s what users are now finding interesting.

This suggests that in the long run the operational benefits of the cloud may be more significant than the equipment economies.  This again favors PaaS and SaaS services because it’s hard to create cloud operational benefits when the cloud provider doesn’t supply the operating software.  It also means that service-layer tools for the cloud will have to do more to enhance operations.  Support economies of scale are something cloud hardware/software providers haven’t really spent much time promoting, likely because they’ve not spent much time addressing them!

Alcatel-Lucent and HP Buff Up Their Cloud

Alcatel-Lucent continues to develop their cloud vision, announcing a partnership with HP that demonstrates Data Center Network Connect and CloudBand.  From a business perspective it’s a win for both companies.  HP needs (badly needs, in fact) a relevant public cloud position that integrates networking and computing, because of its competition with Cisco.  Alcatel-Lucent needs a cloud strategy for operators, one that can provide cloud services in the traditional way but also host features and content—on computers.

I noted in Netwatcher, our technology journal, last month that network vendors needed to take a more affirmative position in the cloud.  Not only is it the core driver for enterprise network investment, it’s the biggest new focus of capex and infrastructure planning for network operators.  Most vendors have at least some product foundation for a cloud position, and at least two (Cisco and Juniper) have more pieces of the cloud puzzle than Alcatel-Lucent.  So why is it Alcatel-Lucent that has managed to get its story out?  We even wrote about a possible Juniper cloud position back in the spring, one that would have leveraged their QFabric and PTX, and nothing came of it.  Could it be that Alcatel-Lucent is starting to think holistically?

In our fall survey, operators and enterprises both complained that their vendors were stuck selling piece parts instead of solutions.  When buyers are faced with major transformational pressure on their own revenue side, they want products that combine to address those pressures, and they don’t want to have to guess whether the blue blocks connect with the green ones.  So Alcatel-Lucent’s biggest victory here may be that it’s finally talking the right talk.  If the company can now link their cloud story to Application Enablement in a convincing way, they could be promoting the whole ecosystem.

They also need to drive the bus on the HP relationship because HP is hardly the darling of the corporate world when it comes to business strategy these days.  HP has data center switching but nothing that measures up to what Cisco or Juniper can offer, and that creates a vulnerability for Alcatel-Lucent’s strategy of data center connection.  I’ve already noted that Juniper could have done something pretty interesting in this space almost nine months ago, and in theory they could still work quickly to make a counter-splash.  Cisco, having the computer technology as well as switching, could do even better.  That means that Alcatel-Lucent can’t stand still with this story.  Good job so far; now don’t blow the goal-line play.


What’s Happening in Cloudsourcing, Really?

Enterprises are coming to terms with the cloud, according to my most recent survey, but they are also recognizing that there are things that “classical wisdom” says they can do in the cloud that simply don’t work out.  This is going to change the risk/benefit profile for cloud computing for the providers because eventually the facts of cloud cost and benefit are going to get out.  In the near term, there’s an opportunity for cloud providers to step up and support the real world, getting a jump on competitors, and that creates a similar opportunity for vendors who can support cloud infrastructure in some way.

What everyone has ignored in cloud computing, say enterprises, is the cost of data storage.  The applications that meet business-case requirements so far are typically those that require relatively little storage, hardly the mainstream of IT.  Most responsible studies of cloud adoption (in which, modestly, I’ll include my own) have always expected that penetration of cloud computing into the enterprise would be under 30% because of pricing.  What business buyers are learning is that basic virtual-machine hosting a la IaaS doesn’t address application support costs, and so that strategy cannot be expected to gain much in savings from support economies either.  This more than any other factor is moving buyer consideration up the food chain toward PaaS and SaaS.

Still, according to enterprises, the big question is what will happen to data costs.  Cloud services exact a double penalty for cloud data; you pay for storage and then you pay for access.  Large databases that are churned regularly end up costing a mint in the cloud.  Transactional applications can’t deal with summary data (they create the data you’re hoping to summarize) so they are difficult or impossible to cloudsource.  IBM and other vendors have been pushing the use of summarized repository abstractions as the basis for BI applications in the cloud, and this notion is a good one because most BI doesn’t need to deal with detail-level data.  The result is that BI apps have outrun just about every other mission-critical app class; they dodge the data cost problem.
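
To make that double penalty concrete, here’s a minimal cost sketch.  The per-gigabyte prices and the workload numbers are hypothetical placeholders invented for the example, not quotes from any provider.

```python
# Minimal sketch of the "double penalty" on cloud data: you pay to store it and
# you pay again to access it.  All prices and volumes below are hypothetical.

STORAGE_PER_GB_MONTH = 0.10   # $/GB-month, assumed
ACCESS_PER_GB = 0.12          # $/GB read back or transferred, assumed

def monthly_data_cost(stored_gb: float, monthly_churn_fraction: float) -> float:
    """Storage cost plus the cost of re-reading a fraction of the data each month."""
    storage = stored_gb * STORAGE_PER_GB_MONTH
    access = stored_gb * monthly_churn_fraction * ACCESS_PER_GB
    return storage + access

# A small, summarized BI repository versus a large, heavily churned detail database:
print(f"Summary store: ${monthly_data_cost(200, 0.10):,.0f}/month")
print(f"Detail store:  ${monthly_data_cost(50_000, 1.50):,.0f}/month")
```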

In the SMB space the picture is brighter in some regards.  SMBs are finding that the cloud’s biggest benefit is dodging technical support costs, which are higher for the SMB because of limited career path for good techs and often limited tech resources available in the local labor pool.  National recruiting is outside the scope of most SMB hiring.  SMBs also have literally an order of magnitude less data to store and move around, which means that their cloud costs are more easily covered by savings.  It’s hard to get a good estimate for SMB cloud potential in a realistic survey because SMBs are less likely to have a handle on either their current costs or their potential savings, but it looks to us like well over half of SMB IT spending could be cloudsourced based on our fall survey.  That says that the primary target for cloud services has to be the SMB…or even the consumer.

We’re starting to see some interesting strategies to try to make all this work better even at the enterprise level.  The most promising approach is to focus the cloud data market on backup or deeper storage, which means creating a transparent storage hierarchy that starts on-prem and moves into the cloud, with software that moves information among the tiers based on policy.  IBM, Oracle, and Microsoft are all said to be proposing this sort of thing, and a few startups have productized it in the form of storage appliances that virtualize such a hierarchy.  The tools aren’t perfect yet, but progress is being made, and this may be the most important technology evolution in terms of maximizing the cloud opportunity.
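
For a sense of what policy-driven tiering looks like in the simplest possible terms, here’s a toy sketch.  The tier names and age thresholds are invented for illustration; real appliances obviously apply far richer policies.

```python
# Toy illustration of policy-based placement in an on-prem-to-cloud storage
# hierarchy, in the spirit of the appliances described above.  The tiers and
# thresholds are invented; this is not any vendor's actual policy engine.

from dataclasses import dataclass

@dataclass
class DataSet:
    name: str
    size_gb: float
    days_since_last_access: int

def choose_tier(ds: DataSet) -> str:
    """Keep hot data on-prem, move cooler data to cloud storage, archive cold data."""
    if ds.days_since_last_access <= 7:
        return "on-prem"          # active working set stays local
    if ds.days_since_last_access <= 90:
        return "cloud-standard"   # warm data: cheap storage, tolerable access cost
    return "cloud-archive"        # cold backup and deep storage

for ds in [DataSet("orders-current", 500, 0),
           DataSet("q3-reports", 120, 30),
           DataSet("2009-archive", 4000, 400)]:
    print(f"{ds.name} -> {choose_tier(ds)}")
```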

Does FiOS-for-Xbox Green-Light IPTV?

Microsoft’s Xbox will now feature the ability to stream 26 FiOS channels and to access VoD content (presumably from the same network).  The step isn’t a technological revolution, because AT&T’s U-verse has been a streaming IP video service from the first (and it’s also available on Xbox in much the same form; FiOS for Xbox is channel-limited for now).  It may be at least a step in a kind of service revolution, though.  The operators themselves are mixed on this topic.

Streaming content is often seen as a problem for “the Internet,” but most streaming content is never really on the Internet at all.  Popular content, even many YouTube videos, is cached near the access edge and often rides only a short distance to the user.  Globally, operators tell us that “commercial-grade” content travels on average only 11 miles from point of storage to point of delivery, and that all of that distance is beyond the Internet peering point.  It’s CDNs, in short, that control content traffic, not the Internet.  The explosion in video content doesn’t really drive up “Internet” traffic; it drives up access network traffic.

So where’s the beef?  Operators who don’t have a viable financial scenario under which to deploy FTTH are concerned that access congestion could truly kill their infrastructure.  If you push glass to the edge you have a relatively unfettered upgrade path because fiber capacity is formidable.  If you have copper, the problem is that you run up against hard technical limits.  A third of operators say they can’t support IPTV on at least 50% of their plant.

There’s also a question of neutrality policy that concerns operators when they look at streaming video.  With buffering you can make any IP path deliver content, but the buffering delay quickly turns off consumers.  Getting someone to wait even a full minute for video to start is a challenge, and even ten minutes of buffering (which no one would wait for) on a 60-minute show can still produce stutters in the stream if there are significant variations in the traffic congestion along the path.  Thus, video should in theory have some sort of priority.  While neutrality rules don’t necessarily rule out video priority, they do leave questions about how it could be paid for and whether priority paths for video could be “gamed” for use in other applications.
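
A little arithmetic shows why buffering alone doesn’t solve the problem.  The bitrate and congested-throughput numbers below are illustrative assumptions, not measurements; the ten-minute buffer is the example from the paragraph above.

```python
# Rough arithmetic behind the buffering point: a pre-filled buffer only lasts
# until the delivery shortfall drains it.  All numbers are illustrative assumptions.

video_bitrate_mbps = 4.0      # assumed stream rate
buffered_minutes = 10.0       # pre-buffered content (the example in the text)
delivered_mbps = 3.0          # assumed throughput under congestion, below the stream rate

shortfall_mbps = video_bitrate_mbps - delivered_mbps
buffered_megabits = buffered_minutes * 60 * video_bitrate_mbps
seconds_until_stall = buffered_megabits / shortfall_mbps
print(f"Buffer empties after about {seconds_until_stall / 60:.0f} minutes of playback")
# With a 25% throughput shortfall, even ten minutes of buffer is gone roughly
# 40 minutes into a 60-minute show, hence the case for some form of priority.
```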

Interestingly, operators weren’t that worried about the classic issues of video revenue per bit on streaming services; they see more of a problem with “non-commercial” video like YouTube, which can pull video further through their infrastructure and thus create more capacity problems in more places.  The alternative for them is to cache more video, and for customer-loaded material it’s hard to do that except reactively.  Mobile video creates issues for them because it’s often stored and streamed at a resolution the device can’t fully display anyway (it’s down-sampled at the endpoint, in effect) yet it still uses up the full bandwidth along the path.  That means transcoding, and when access or downstream points get congested it means adaptive transcoding.  You’re getting the picture here.

So why is Verizon doing this, and why has AT&T already done it?  Answer: because they can.  AT&T’s situation is easy; U-verse is already IPTV.  Verizon’s situation is relatively easy too; FiOS has the capacity.  More to the point, the incremental cost of supporting streaming to the Xbox or something else isn’t high enough to invalidate an access business model that meets ROI goals.  Add channelized TV to broadband in a sensible way and you can still make money where customer densities are reasonably high.  The moral is that every market is different, and announcements of services in Japan or Germany or even the US Northeast don’t mean that everyone will eventually see these services.

On that matter, we’re looking at the enormous FCC “Connect America” order now, the replacement for the old Universal Service rules with an excursion into intercarrier compensation.  We’ll be blogging on this for our TMT Advisor premium blog when we’re done.

If Huawei Wins in the Enterprise, Who Loses?

Wall Street is speculating on Huawei’s future in enterprise networking as the company prepares to make what everyone (including Huawei) says will be a major push there.  In some ways, the move is a validation of the thesis that service provider networking is not a growth market; constraints on transport ROI are simply too great to allow for explosive growth there even if some service-layer enhancements to revenue could be found.  One operator responding to our fall survey said “Sure we could use cloud or other service profits to subsidize transport, but why would we?  We’d just be helping competitors ride for less.”

Enterprise networking isn’t any more immune to bit commoditization than service provider networking is, but for a company with no enterprise exposure at all it’s a new market area with a new set of opportunities.  Huawei knows that the enterprise market is, if anything, more price-sensitive than the service provider market.  Its recent strategy of using price leadership to gain an entrée and then demonstrating insight to lock down the deal fits very well with enterprise buyers.

Huawei’s entry into the enterprise space is going to pull at least 15% market share out of the current-vendor pie.  The Street is speculating that Cisco would suffer the most based on the logical if somewhat simplistic presumption that the guy with the largest market share has the most to lose.  Yes, but that party also likely has the greatest account control.  Our surveys suggest that price-based competition is initially most successful when it’s aimed at buyers who are somewhat strategically adrift, meaning that they are not being influenced by a vendor plan for network advance that makes sense.  Heck, says the price leader, nobody really understands this stuff so why pay so much for it?  We think that the players that could be most at risk are those with limited scope and strategic influence, which would suggest that HP and Juniper might have a bigger problem in the near term.  HP is already reeling from loss of influence arising out of confusion over its long-term direction.

Operators Rethink Service Priorities?

The retail holiday season got off to a good start in the US according to all reports, with significant gains over last year and a particular focus on consumer electronics.  One hot space is the tablet market, where Apple cut iPad prices and where Android devices continue to gain overall market share.  A new group of tablets based on Nvidia’s Tegra 3 quad-core technology is expected shortly, and of course the new Android 4 (“Ice Cream Sandwich” or ICS) is also expected to begin rolling out, though likely not in time for this season.

Tablets aren’t the cause of the network operators’ angst this holiday season, but in our just-completed survey of operators we found that 4 out of 5 said that they believed that tablets would be their greatest future challenge.  The reason is simple; the devices have a large enough screen to be credible platforms for entertainment video.  While tablets are primarily WiFi devices and most operators believe they’ll stay that way, they still are expected to increase overall streaming traffic.  This traffic is more likely to come in through hotspots, making it a problem for the wireline network rather than for mobile backhaul.

What’s perplexing perhaps about tablets is that they don’t appear to figure as much in what I’ve been calling the “mobile/behavioral transformation” as smartphones.  The reason is that while smartphone users report their devices are with them an average of 82% of their waking hours, tablets are with their users only 28% of the time.  Smartphones are turned on for effectively 100% of the time they’re with their owners, while tablets are on only a little more than a fifth of the time.  All this means that tablets can’t be used for instant gratification; they can’t be wired into our lives as intimately simply because they’re not “handy”, meaning available.  The fact that tablet trends are decisively shifting to the larger 9/10-inch form factor is only exacerbating that issue; you can’t walk around holding one easily.
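
Combining those survey numbers gives a rough sense of the availability gap.  The multiplication, and the reading of the “fifth of the time” figure as a fraction of the time the tablet is with its owner, are my own interpretation of the data above.

```python
# Effective availability = (fraction of waking hours the device is with you)
#                        x (fraction of that time it is actually on).
# The 82%/100% and 28%/~21% splits are from the survey figures quoted above;
# combining them this way is an interpretation, not a reported statistic.

smartphone_availability = 0.82 * 1.00   # with owner 82% of waking hours, effectively always on
tablet_availability     = 0.28 * 0.21   # with owner 28% of the time, on a bit over a fifth of that

print(f"Smartphone effective availability: {smartphone_availability:.0%}")   # ~82%
print(f"Tablet effective availability:     {tablet_availability:.0%}")       # ~6%
```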

The reason all of this is creating operator hassle is that the two premier revolutionary appliances are really hitting different parts of the network and generating different risk/opportunity balances.  Operators feel they are somewhat in control in the smartphone space; nearly all phones are sold under carrier service contracts of some sort.  In the tablet space, four out of five are sold retail over the counter and WiFi, as we’ve said, dominates.  The problem is that virtually every tablet buyer is a smartphone owner, and companies like Apple are already working to strengthen the symbiosis between the device classes.  How then do operators create integrated stuff?  It probably gets back to the service layer, but how?  I had a recent illustration of the issues with the use of Verizon’s FiOS apps for Android; I can’t control my TV with an Android tablet because Verizon wants the device’s “phone number” to register it.

The tablet/smartphone dichotomy may be one of the reasons why operators told us that they were “rethinking” their mobile service and even content strategy.  In the most recent survey we found that nearly all Tier One operators now believed that they needed an integrated service-layer approach, which is only a slight change, but this time nearly all also said that they believed that mobile, content, and cloud had to advance more in parallel.  In the past, most operators said they were prioritizing content monetization; that’s no longer true.

Operators also said they were refocusing their capex, much as some Wall Street firms had suggested, on revenue enhancement and cost management.  That doesn’t tell the story fully, though.  Cost management strategies, like improving mobile backhaul efficiency, are the focus of the operations budget planning process and revenue enhancement is the focus of the board-level monetization-team projects.  In short, we’re seeing operator procurement split more affirmatively in terms of sustaining transport/connection structure on the one hand, and enhancing “services” beyond connectivity on the other.  Vendors are finding this split hard to deal with at the sales level, in part because they can’t always get access to the monetization teams and in part because they’re simply not used to selling products in support of a mission beyond moving bits.  We did see some noticeable changes in vendor influence, and those will be included in our December Netwatcher issue as well as provided in hour-long presentations on a consulting-call basis to our clients.