Netflix and the Health of OTT Video

Netflix may be showing us something about the streaming video market with its decision to raise the prices for “combination packages” of DVD-by-mail and streamed video services.  The new policy is to price the two independently rather than discount the combination, a move that raises the monthly price by about 50% and affects nearly 60% of Netflix customers.  It’s certainly a big step, so why take it?  Because Netflix has to, for three reasons.

The big problem with OTT video is that it’s totally dependent on access to content, but it’s unlikely, now or ever, to create the revenue stream needed to generate all that content on its own.  It has to license material from others.  Because any library of video will eventually pall for users unless it’s continually refreshed with new material, that licensing process is ongoing.  And because old material is cheap to license and new material typically more expensive, each refresh pushes the OTT player toward newer and more expensive material, and thus raises its costs.
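
To see that treadmill in numbers, here’s a toy model of a fixed-size library where each refresh swaps aging titles for newer, pricier ones.  All of the costs and refresh rates below are hypothetical placeholders, not licensing data; the point is only the direction of the curve.

    # Toy model of the licensing treadmill: each refresh replaces the 20
    # stalest titles with newer, more expensive ones. All costs and rates
    # are hypothetical placeholders, not actual licensing figures.
    old_title_cost = 1.0      # relative cost of an older catalog title
    new_title_cost = 4.0      # hypothetical premium for fresh material
    library = [old_title_cost] * 100

    for refresh in range(1, 6):
        library = library[20:] + [new_title_cost] * 20
        print(f"refresh {refresh}: library licensing cost = {sum(library):.0f}")
    # Cost climbs every cycle (160, 220, ... 400) even though the library
    # never gets any bigger.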

A second problem is that the people Netflix is licensing content from are the same people it’s competing against.  TV and movie companies want you to watch their material in the traditional way, not suck it over your broadband connection.  If broadband streaming is truly supplemental in terms of use, then it’s not a threat but rather a potential incremental revenue source (hence TV Everywhere).  If cord-cutting really does become an issue, then the first response of the content producers will be to make up their losses in traditional channels by charging more for material.

The third issue is Netflix’s need for growth.  Netflix has followed the classic “all-you-can-eat” Internet pricing scheme, which means it can grow revenue only by adding subscribers or by raising prices.  Competition will make even sustaining its current base more difficult over time, and that leaves only the second option.

It’s also likely that Netflix will face higher costs beyond content licensing.  Greater competition will mean more spent on marketing.  Any growth in the number of videos streamed per user will drive up its server or caching costs, and network operators are increasingly demanding some settlement for the traffic Netflix is generating.  This means the company’s financial performance would tend to sink over time unless revenue growth were even higher, which is problematic for all of the reasons cited.

It’s not unlikely that Hulu’s owners are interested in selling in part because the streaming business model is problematic even if you have some of your own content to contribute.  The reason is that unless each content owner involved in Hulu charges a fair market rate for its material, it’s undermining its own revenue stream and hurting its own shareholders.  There can be no free ride for online video.  Which, of course, means it can’t be free.  Which, of course, means that the notion that the Internet is going to be our free pass to everything we want in the future is nonsense.  Which means we need to figure out how to make money for everyone in the food chain before we upset the whole market dynamic.

Three Steps to Rational Neutrality…and Cisco Woes

The EU is the focus of a lot of things these days, and we can now add net neutrality to the list.  The EC hearings on the issue, launched late in June, produced the predictable results: people are alarmed at the risk of loss of innovation, privacy, and competitiveness, but they have no practical contributions to make.  What’s happening now is that a group of EU operators and equipment vendors is preparing its own report in response to the EU activity.  The report will conclude that the interests of the public, and the aggressive 100 Mbps broadband goals set by the EC, would best be met if operators had the latitude to explore business models beyond the best-efforts, peering-agreement model of today’s Internet.

I have to tell you that the issue of business model is to the Internet what entitlement reform is to the budget process.  Yes, the current peering model, which doesn’t provide for settlement among ISPs or between ISPs and content sources, is flawed.  In fact, it’s probably broken.  The challenge is that we’ve created a whole industry based on the broken model, and unraveling it without scarring the players involved may not be as easy as just saying “Now’s the time”.  My view is that we need to transition to a new business model in stages (a sketch of how settlement might phase in across the stages follows the list):

  • Stage number one is to allow the operators to create premium services and sell them to either consumers or content providers, with the proviso that these services not degrade best-efforts Internet.  These services should include not only enhanced handling but also higher-layer services.
  • Stage number two is to establish settlement among ISPs for all such premium handling as a mandatory element in any peering agreement, but again independent of best-efforts Internet.
  • Stage three is to extend settlement agreements to include best-efforts traffic, starting with situations where that traffic is delivered through premium subscription services and moving to more general applications.
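
Here’s the promised sketch, a minimal model of how settlement obligations would widen stage by stage.  The flow classes, stage rules, and per-gigabyte rate are all hypothetical assumptions for illustration, not a proposed standard.

    from dataclasses import dataclass

    @dataclass
    class Flow:
        gigabytes: float
        premium_handling: bool      # sold with enhanced handling (stage-one services)
        premium_subscription: bool  # best-efforts bits of a paid subscription service

    def settlement_due(flow: Flow, stage: int, rate_per_gb: float = 0.01) -> float:
        """Inter-ISP settlement owed for a flow at a given transition stage."""
        if stage == 1:
            return 0.0  # premium services exist, but settlement isn't mandatory yet
        if stage == 2:
            # settlement is mandatory, but only for premium handling
            return flow.gigabytes * rate_per_gb if flow.premium_handling else 0.0
        # stage 3: settlement extends to best-efforts traffic, starting with
        # traffic delivered through premium subscription services
        if flow.premium_handling or flow.premium_subscription:
            return flow.gigabytes * rate_per_gb
        return 0.0

    video = Flow(gigabytes=40.0, premium_handling=False, premium_subscription=True)
    for stage in (1, 2, 3):
        print(f"stage {stage}: settlement = ${settlement_due(video, stage):.2f}")

Note that the same best-efforts subscription flow owes nothing in stages one and two and becomes settled only in stage three, which is the gradualism the staging is meant to provide.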

If we don’t offer network operators ways to recover the cost of enhanced Internet usage and performance, we won’t get either one.  That will not only destabilize the operators, it will hurt the network equipment space rather badly.

Cisco, premier provider of network equipment, is the subject of escalating rumors about job cuts, the latest being that the total could run to ten thousand jobs.  What seems to be happening is that Cisco is considering a range of options, some of which include the sale or spin-off of some of its businesses, notably the Scientific Atlanta property.  The most radical cuts would come from exercising one of those choices.  However, it does seem likely that the company will actually cut over four thousand jobs.

There is no company in the industry that better embodies what I believe to be the “intransigence of incumbency” than Cisco.  Networking has undergone revolutionary changes, changes that Cisco products helped to bring about.  Despite this, Cisco has gone forward with a marketing, strategic, and product approach that presumes the most simplistic of all possible futures: the one that’s nothing more than a bigger form of the present.  With arguably the best intellectual property portfolio in the industry, the stuff carriers would kill to be able to deploy effectively, Cisco has failed to show those very carriers what effective deployment would involve.  “Buy a router today because traffic demands are growing exponentially!” says the sales guy.  Hey, I’m tired of hearing about exponential growth, and so are the buyers.  Traffic doesn’t mandate investment; return on investment does.

Video-Chat Wars

As some things change, others stay the same.  That’s about how I see things fresh from two weeks in Brazil, a place on which I’ll also comment here.  We’re seeing changes in the networking business space as Google vies anew with Facebook and Twitter, and yet the moves raise the same issues we’ve faced all along.  In the economic world, it almost seems like Groundhog Day.

Google+ is definitely a revolution, a step toward social networking as many believe it should have been all along.  Because it avoids most of the privacy problems that seem inherent to Facebook’s simple model of “friends”, it could potentially be used more effectively without putting its members at risk.  Because it’s built around communication, it would establish Google not only as a social network leader but also as a player in the web-based communications space that will eventually displace the old PSTN we’ve come to know.  And behind it all looms the old Google/Microsoft face-off, this time regarding the Microsoft acquisition of Skype.

Make no mistake, Google wanted to counter the Skype deal probably as much as or more than it wanted to be a social networking player.  Skype, in Microsoft’s hands, could become a powerful force to integrate Microsoft cloud software into people’s lives.  Skype could also be the foundation for social communities, of course, and having Microsoft in a position to exploit Skype at its leisure wouldn’t serve Google’s interests.

The fact that Facebook went running to Skype for a deal is interesting too.  They can’t now expect to buy the company, after all, and they’ve effectively admitted either that they never thought of the communication-based social network (unlikely) or that they can’t toss enough money and time at creating one to counter Google’s move.  Facebook’s weakness, as I’ve pointed out, is its off-market trading and correspondingly high valuation.  They can’t afford to keep going to the well for more capital, and they can’t be perceived as losing ground, though they are.

All of this comes at a time when the Street is newly aware of the eroding credibility of carrier capital budget planning.  To quote Credit Suisse, “We expect the ongoing disconnect between revenue growth and bandwidth economics to drive an ongoing shift in carrier capex to specific projects focused on revenue generation or cost savings”.  Network spending focused on cost is an open invitation to Huawei, and spending on revenue generation is clearly not going to focus on creating more of the low-value bits that put carriers in the disconnect to begin with.  This is the issue that raised our concerns about Alcatel-Lucent’s FP3 chip announcement.  The world doesn’t need a way to push more bits until we can figure out how to make bits pay, and right now everything happening in the industry is further disintermediating the operator.  Alcatel-Lucent, we’d note, continues to champion IMS as the basis for mobile broadband “services” when the Google/Facebook brouhaha makes it clear that it’s going to be tough to make even IMS voice work effectively against OTT P2P competition.

With bit-pushing going out of fashion, Cisco seems unable to break out of the bit-and-box marketing mold and is instead looking to cut costs by cutting headcount.  The company’s reported early-out package expired in late June and there’s no official word on how many people took advantage of it, but we did hear that as many as three thousand more jobs were still under review for elimination.  That could push the total cuts above the 4,000 that were rumored.  Cisco’s intransigence with respect to the service layer is creating an opportunity for its competitors, who could not only gain market share on Cisco’s fall from grace but also gain an early lead in the service layer.  So far, though, nobody is stepping up with a good story, and we’d not be surprised to see any improved positioning saved for early September, timed to the carrier strategic technology planning cycle that ends around November first.

On the economic front, the question of whether Greece can avoid a technical default seems almost answered in the negative, but at the same time it’s eclipsed by what Italian sources say is a speculator-driven attack on Italy’s debt.  This is the issue the EU needed to avoid: a house-of-cards attack on weaker countries, generated because speculators believe that the strong (notably Germany) won’t accept a rescue package that keeps debtors out of default.  A default would trigger credit-default-swap payments, and CDSs are the instruments of speculators.

Alcatel-Lucent’s FP3: Good Evolution but Not Revolution

Alcatel-Lucent did its own ballyhoo this week, with an announcement the company had promised would make the Internet faster.  I’m not big on ballyhoo, and I have to admit I have mixed feelings about the announcement.  I want to be fair, so I’ll start with the perspective I bring to the issue of “advancing the performance of the Internet”.

I’m a strategy analyst, someone who surveys and models markets.  My goal isn’t to find out what people want, but rather to find out what’s going to happen, and what’s not.  People want gigabits for nothing, but that’s not going to happen.  We could give people faster broadband today in a technical sense but the decision would fail for financial reasons.  You could argue that anything that reduces the cost of the much-for-nothing goal is an advance, but it’s doubtful that any single development could create a cost revolution, and that is the dilemma Alcatel-Lucent’s announcement poses.  They have made an impressive technical stride that I’m not confident is a significant market stride.

What Alcatel-Lucent announced was a quite revolutionary achievement in special-purpose network semiconductor design and fabrication, its FP3 chip.  This chip, says Alcatel-Lucent, is faster (by 4x), smarter, and greener (50% less power per bit) than anything before it, and the company says it will accelerate the adoption of 100GigE from edge to core.  Certainly the chip can support multiple 100G interfaces or a future 400G interface, but the question of capacity in my view isn’t one of technical performance as much as of financial performance.

Operators we survey are watching 100G Ethernet, to be sure, for the time when it would be economically justified.  They don’t think they’re at that point.  The problem they have is that revenue per bit is already plummeting.  A chip four times faster presumably implies a network with four times the capacity.  Our model says that consumers will pay 17% more for 4x speed; operators estimate 20% more on average.  The uptake of premium speed tiers is low, and FCC data shows that broadband users in the US cluster at the low-cost end of the service range.  So how does making the network capable of higher performance change things fundamentally?
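
It’s worth running that arithmetic explicitly.  The sketch below uses only the figures cited above (4x capacity, a 17-20% price uplift, 50% less power per bit); it’s illustrative, not a cost model.

    capacity_gain = 4.0              # FP3: 4x faster
    power_per_bit = 0.5              # FP3: 50% less power per bit
    uplifts = {"our consumer model": 1.17, "operator estimate": 1.20}

    for label, uplift in uplifts.items():
        revenue_per_bit = uplift / capacity_gain
        print(f"{label}: revenue per bit falls to {revenue_per_bit:.0%} of today's")

    # Power per bit is halved, but 4x the bits still doubles total power.
    print(f"total power at full utilization: {power_per_bit * capacity_gain:.1f}x")

Even on the operators’ own more optimistic uplift estimate, revenue per bit drops to under a third of today’s, which is exactly the financial-performance problem.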

Something for nothing, or something for next-to-nothing, may be appealing but it can’t be delivered.  The thing is, people are willing to pay for stuff; it’s just that they’re not willing to pay for Internet bandwidth.  Apple’s success with apps demonstrates that people will shell out millions for just the convenience of using an app for what they could get from a website.  If there’s a revolution in the market, it has to come from allowing the people who build the networks to participate in this higher-level part of the food chain.  They want to do that; they’ve told me that explicitly in surveys for four years now.  They want to add services to their networks, to add a service layer to their network layer.

Which brings us to “smarter”, the remaining claim for the FP3.  What exactly does that mean?  Alcatel-Lucent says that smarter means delivery of personalized services and content, massive IPv4/IPv6 scale for the future, and full programmability.  But unless we believe in 100G to the user, the access network will have to be able to do all of that or the user will never see the service, and the FP3 won’t be out there.  We all know that supporting most of the “smart” things is likely an edge role; the power of the Internet was to avoid awareness of individual users or flows deeper inside the network, because that awareness doesn’t scale.  And what exactly is the FP3 programmable to do?  Yes, the number of VPLS and VPRN instances is doubled, along with the number of queues and (almost) the number of routing table entries.  The question is how exactly this creates monetization, the revenue per bit that operators need to push up if they’re to push capacity up fourfold.

Alcatel-Lucent did offer a couple of ideas on services, one on the general evolution of the service experience and one on the video distribution process.  I agree with the points in both; what I still have a problem with is what role the FP3 plays beyond moving the bits around.  I doubt that Alcatel-Lucent proposes to add customer or service-flow awareness to deeper aggregation products that have the traffic scale to justify 100G.  It doesn’t scale.  Are they proposing some intermediate “not-aware-but-sentient” role for the network?  I’d love to hear about that.

I didn’t hear it in this announcement.  It’s possible that the FP3 could play a role in binding the services of the future to the network of the future, but Alcatel-Lucent doesn’t say that.  It’s possible that Alcatel-Lucent intends to meld its Application Enablement and Open API themes downward into the network and create a multi-layer profit partnership, but they don’t say that either.  The FP3 is specifically faster, specifically greener, and un-specific about how it’s smarter.  And it’s smarts that will revolutionize the Internet; smarts that generate bucks and not just bits.  For a company that has gained router market share because of its success in the mobile and content service layers, it’s disappointing they would omit a strong TECHNICAL service-layer tie here.  It would be more disappointing if there isn’t one.

Without monetization in a service sense, traffic can’t be profitable even at current prices, and the FP3 presumes a fourfold traffic gain.  Long before we reached that point, the current market model would collapse into usage-based pricing, which would limit traffic growth and with it the growth of the Internet.  We have to create a healthy ecosystem here, and the FP3 picked three credible points about that ecosystem: speed, smarts, and power efficiency.  It validated two of the three.

So that’s my dilemma.  I think the engineering is impressive, in fact VERY impressive.  I think it could reduce the cost of high-capacity devices, and just the fact that Alcatel-Lucent announced it may suggest it’s planning to go higher on the router capacity tree.  I just don’t think Alcatel-Lucent has proved that it revolutionizes the Internet, because nothing is going to do that except something that revolutionizes the Internet business model.  The capacity play they’ve made for the FP3 is dangerously close to following Cisco down into the “bandwidth at any cost” abyss that I warned about when Cisco announced its ASR enhancements.  The FP3 shouldn’t have been about speeds and feeds, but about dollars and cents.  The “smarts” point of the FP3 launch is the one that had to be the strongest and that was instead the weakest.  Might Alcatel-Lucent plan to correct that down the line?  Perhaps, in which case I’ll take another look when they announce it.  For now, this is a strong evolution, but it’s not a revolution.

NSN and Demand-Side Networking

The fate of NSN is now even more unclear than usual as the WSJ reports that talks to sell a stake in the venture to private equity firms have failed.  Nokia and Siemens are said to be looking to restructure the deal, but Nokia has said publicly that several options for the venture remain open.  What happened here?  NSN was, and is, one of the larger players in the space.  The problem, I think, is a combination of market pace and vendor inertia, and NSN is hardly the only guilty player.

Networking used to be a supply-side business.  Operators bought gear that created services, and their plans for forward service evolution set infrastructure needs.  People did two- and three-year plans for releasing new voice switch software, for example, because the pace of service evolution was driven by the operators and was totally predictable.  The thing is, while there are plenty of workers in telcos worldwide who probably still think in glacial terms, the market is now driven by people like Apple and Google, who have planning cycles measured in months and not years.  The pace of market change has become more like an avalanche than a glacier.

About four years ago, network operators worldwide awoke to the problem and began to demand support for new initiatives to capture some of the opportunity that the agile handset and OTT players were getting.  This was at first called “transformation” and then “monetization”, and from the first the network vendors balked at the move.  Operators speculated to us that their vendors were afraid a rethinking of the operator business model would delay sales, and so the vendors pressed operators to spend more NOW rather than building the framework that would have justified MUCH MORE spending later.

The gap between buyer and seller was greatest in the US, where market pressures from OTT and handset players were greatest, but it has spread to most other markets.  As it did, it posed more problems for vendors.  With the exception of Huawei, the strategic influence of EVERY network equipment vendor fell as buyers became frustrated with the lack of monetization support.  NSN wasn’t the worst in strategic influence results; its wireless credentials have kept it a contender.  It actually improved in the spring survey on wireless strength, in fact.  But NSN is a conservative company, one that doesn’t understand marketing or the changes that have come about.  It has the assets but doesn’t promote them.

For example, NSN has a “Service Delivery Framework” architecture for the new operator service layer.  It’s fairly complete, as much as any vendor architecture we’ve seen, and it’s positionable and credible.  The problem is that the slides that describe it were marked “Confidential” when we got a copy.  We can’t talk about it.  They don’t talk about it.  The reason, we believe, is that NSN is trying to use the framework within a professional-services context rather than as a product.  That has kept the company out of four of the five content monetization deals we’ve seen in detail, simply because it doesn’t have visible assets to get itself an at-bat.

Ericsson has a similar problem, in my view.  They also have a good “SDF” diagram that they don’t seem to share in public.  They also have a strong professional-services bent, and as a result they want to sell monetization projects rather than monetization products.  Thus they are missing a key opportunity, just like NSN.

NSN could be a valuable property.  Nokia and Siemens don’t have to sell a stake in it if they can push NSN to take the strategic and marketing steps it’s capable of taking.  If they don’t, then any buffing-up-type “restructuring” isn’t going to help.  Thus, my open letter to both Nokia and Siemens: make NSN into what it’s capable of being and reap the rewards yourselves.  The alternative isn’t going to be pretty.  With the wireless position NSN has, a service-layer story could make the company a compelling partner.  It has one; it’s just not catching on.

Google, Monetization, and Carrier Clouds

Google looks like it’s facing more anti-trust angst; the FTC is reported to be launching an investigation into Google’s advertising and search business, and former and current CEOs Schmidt and Page have decided they don’t want to appear before a Senate Committee hearing on roughly the same topic.  All of this comes as Google, the largest of the large in terms of OTTs, faces a combination of competition from Apple (bleeding through its iPhone and iPad successes) and increasing difficulties in creating sources of new revenue.

Advertising online is coming under more pressure for a number of reasons.  First, as I’ve noted in past blogs, advertisers are interested in it only to the extent that targeting can reduce their overall costs by reducing “overspray”.  It’s clear that their ideal approach would be to show ads only to those who would then buy based on them.  That has focused online advertising on “interdiction” approaches: getting into the track of the buyer between the decision to buy and the execution of that decision.  Search ads can work for this, but the trouble is that companies normally try to game natural results with optimization strategies first.  In any case, it’s clear that the search ad business is mature at this point.  Banner and in-video ads have been problematic from the first; my own research has shown that buyers have almost no recall of ads in banner or site-pre-roll form, and limited recall even of in-video ads.  The latter appears to be caused by the unique ability of buyers to tune out dross while on a computer; run the same videos on a TV and they see the ads fine.  All of this pressures the industry to increase ads’ “value”, and that can lead to even more problems.

A recent report suggests that the behavioral targeting practices of the industry, for example, are leading participants to a place where they risk becoming “adware” and generating consumer and possibly regulatory backlash.  The irony is that BT is a refinement on targeting, which is a refinement on a process whose goal is to REDUCE spending.  Thus, the industry could be said to be eating itself on one hand while risking regulatory wrath on the other.

The simple truth is that ads can’t sponsor everything; they can’t even sponsor very much.  Total global adspend wouldn’t pay the total global capex of providers, and of course online ads won’t capture all of that adspend, and the providers don’t get much of online ad revenue.  This is why many operators see “monetization” as important.  They have one major asset: a market where people pay for something.  Maybe they don’t pay enough, or with the right model (usage versus unlimited, for example), but they pay.  Monetization strategies are strategies to present something that people are willing to pay for.  In Netwatcher in June we offered a content monetization architecture; a mobile one is coming in July.  The goal is to demonstrate that it’s possible to build a service structure that can earn revenue.  If broadband access is a conduit for delivering paying services that the operator can offer, then it’s OK for it to have a minimal ROI.
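
The chain-of-shares argument is worth making explicit.  Every number below is a hypothetical placeholder rather than market data; the point is that the fractions multiply down whatever adspend pool you start with.

    adspend_pool = 1.0       # take total global adspend as the unit
    online_share = 0.20      # hypothetical: share of adspend that goes online
    provider_share = 0.05    # hypothetical: share of online ad revenue that
                             # ever reaches the access providers

    provider_take = adspend_pool * online_share * provider_share
    print(f"providers' take: {provider_take:.1%} of total adspend")
    # Even if total adspend matched total provider capex (it doesn't),
    # providers would capture only about 1% of it under these assumptions.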

Verizon has decided to keep its cloud offerings under the Terremark brand, which I think is a good idea because the cloud and cloud-hosted services fall outside the traditional structure of a telco.  Operations in a telecom provider means the network, and the science-and-technology organization is not typically involved in monetization projects.  While internal IT (OSS/BSS) has IT expertise, most operators for now are keeping their internal IT and cloud IT separate until they’re sure the two can share an infrastructure without creating security or performance issues for either one.

Cloud monetization is the third pillar of operator profit-building (after content and mobile).  Operators here are envisioning a rather convoluted evolution, with service feature technology (the evolution of the stuff hosted today on SDPs), OSS/BSS, and cloud services all being relatively independent at first and then converging over time.  One likely instrument of this convergence is the “feature cloud”.  Managed security services and other services offered from a cloud platform are little different from content or mobile service features hosted on a cloud, and over time the difference will likely disappear.
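
As a minimal sketch of the “feature cloud” idea (the class and feature names here are mine, purely illustrative), the convergence amounts to retail cloud services and internal service features landing on one hosting platform:

    class FeatureCloud:
        """One hosting platform for both retail services and service features."""
        def __init__(self):
            self.deployed = {}

        def deploy(self, name: str, kind: str):
            # 'kind' distinguishes retail services from internal features,
            # but both land on the same infrastructure -- which is the
            # convergence described above.
            self.deployed[name] = kind
            print(f"deployed {name} ({kind})")

    cloud = FeatureCloud()
    cloud.deploy("managed-security", "retail service")
    cloud.deploy("content-recommendation", "service feature")
    cloud.deploy("mobile-presence", "service feature")
    # Over time the 'kind' label stops mattering; they're all just
    # cloud-hosted features.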

Signposts to a Video Future

Sometimes I get frustrated by surveys and research because they never seem to make the distinction between things that are correlated and things that are causal.  The example today is some new research on teen mobile behavior.  It lists all the usual things: they watch the least TV, they use mobile video the most, they use social networking the most, they SMS instead of calling.  All of this is presented as implying a sweeping change in the market: as these youths age, their habits become the market; TV is toast, and so is voice.  Well, I think this is a vast oversimplification.

Babies do things differently from adults too, but just because they have different behavioral norms doesn’t mean they keep cooing and batting mobiles or sleeping in cribs as they age.  People’s role in life, the social framework they inhabit, sets their basic patterns of behavior.  If you look at “youth-in-transition”, the people who have left school, entered a stable relationship, and so forth, what you see is an almost immediate shift in focus away from a hide-from-supervision model.  Nobody is going to yell at you in your own home (well, perhaps your partner).  Yes, it’s true that many still indulge in some avoidance of supervision or surveillance, but it’s not the single driving force for most.  The best example is that I’m unable to find any statistically reliable indication that young people who start a family today are any less likely to have a TV and use traditional multi-channel TV resources than those who started one ten years ago.

The net of this is that youth is different primarily because youth isn’t adult, isn’t personally responsible.  Many of the doomsday scenarios being painted, and many of the radical behavioral shifts being postulated, are simply not going to be as dramatic as most seem to think.

The Hulu story is perhaps a poster child for this.  Hulu was hailed by many (most, in the media) as the harbinger of the new age of video, where television is relegated to homes for the aged and everyone chuckles over “I Love Lucy” reruns because currently created content has moved to a new online form.  But now it’s pretty clear that Hulu is on the block, and the question is why.

One possibility is that it’s not making money.  OTT in-video advertising rates are about 3% of TV commercial rates, and advertisers seem to be more interested in using online targeting to reduce their costs than to engage their prospects better.  The other possibility raised today is that it’s being too successful.  The theory is that the owners of the Hulu JV (Disney, Comcast, News Corp) have a major disagreement on whether the service is undermining paid channelized TV services.

Speaking of tensions of mission, Alcatel-Lucent has announced something designed to take the inherent contradictions out of in-home or hospitality broadband.  Right now the practice is to deploy WiFi in these locations and tap it to offload the wireless network.  Femtocells have always been the alternative: an operator would instead site femtos in the home and in cafes and airports, and these would give operators their offload without creating a need to support devices that can jump onto any hotspot and thus escape operator control.  The theory I’ve been hearing is that femtos would also push back against players like Apple and Google, who are trying to get more control of the mobile market, and possibly even of mobile services, by limiting operator subsidies to devices that are 3/4G only.  If that’s the idea, it’s dead on arrival in my view.  There’s no way operators can make the appliance guys, particularly Apple, blink on this one.  But there is a way to make the operator’s services more valuable by creating a 3/4G conduit that works even in the home.  The trick is to figure out how to do that without making the user pay for femto airtime, and so far that’s been an issue for operators.  The Alcatel-Lucent/Broadcom reference design for femtos is a good step for operators if they’re willing to accept the framework in which femtos can succeed: as a captive-but-free alternative to WiFi.

Some sort of offload is important for wireless video, obviously, particularly given that tablets, according to some research, are already consuming more video per device than PCs on average.  Having both options available is smart, provided the buyers are willing to take the consequences either way.

Appliance Wars May Be Restarting!

Today’s market is rife with rumors about new mobile devices: the new model of the iPhone and the first true Amazon Kindle tablet.  The aficionados will revel in details like processor and resolution, but the big point here is that the future of mobile services is being decided by mobile devices, not by mobile operators or infrastructure.

Apple’s new iPhone is likely to have the usual upgrades in speed and resolution and a bigger and better camera, but the big question is whether it will have the SIM-less design that some rumors have identified.  That would be important because it would suggest that Apple may be working to break away from the traditional way mobile phones are sold: by operators, in a subsidy relationship that demands early exclusivity.  Given that the iPhone would be a new model and not a new device, there’s probably no risk of exclusivity pressure at this point, so a SIM-less model would likely indicate a proactive plan on Apple’s part, a wide-ranging move to become a truly independent mobile provider that might extend into becoming an MVNO.

Amazon’s Kindle tablet could be one reason why Apple would look at a new business model for its mobile devices.  Ebook players like Amazon can field low-ball-priced appliances because these gadgets can lock users to proprietary book formats and guarantee future sales to make up any loss-leader pricing on the device.  Barnes & Noble already has the Color Nook, a rather good and extremely cheap Android tablet, but B&N doesn’t have Amazon’s marketing clout.  If Amazon does field a Kindle tablet, though, B&N will certainly ramp up its own Color Nook support.  The company has about 300 apps now; tens of thousands are available for Android at large, and the Color Nook is still a couple of Android versions behind the market.

A battle for the subsidized reader-tablet market between Amazon and B&N would almost certainly force Apple to look at that same approach.  The problem is that while iTunes and the App Store could have been used as the revenue engine for a low-priced appliance, Apple didn’t work that way when the i-stuff was launched, and they’re probably reluctant to surrender the margins now unless they can get them back elsewhere.  That’s why I’ve been speculating that the iCloud we see is a shadow of the iCloud to come.  With some new service revenue stream, Apple could then offer lower-priced appliances and still keep the revenue line growing.

All of this demonstrates the issues that network operators face, of course.  The subsidy relationship between phones and services has been a way for them to influence the ever-more-powerful Apple and Google, and a subsidy-independent device model would trash that strategy.  More competition in the space, particularly competition on services and content, puts more pressure on operator service-layer plans, too.

In other news, all of the recent hacking has generated a rise in corporate tension over security, and in our spring survey of enterprises we’d already found security a hot-button issue.  Juniper has been working to promote its network-based security model in this atmosphere, something we believe should be done aggressively given that enterprises believe “only the network can secure the cloud”.  One partnership in the cloud space illustrates the potential of Juniper’s Pulse security client as a point-of-interaction security outpost, and I’m glad to see this kind of positioning emerging because Juniper has under-played it up to this point.  Even now, I’d like to see more aggressive positioning.

Caps, Content, and Clouds

Verizon is now rumored to be preparing to move to tiered pricing for mobile data in early July, and there’s a growing conviction among operators that everyone will be charging for usage on mobile networks by the end of this year and that everyone will be charging for wireline usage by the end of 2012.  The move to usage pricing isn’t being heralded as a victory; operators would prefer to get their profits from other sources.  The problem is that they’re becoming convinced that they cannot roll out a monetization strategy in time to sustain infrastructure investment without a bandwidth-cost kicker.

Mobile data use is growing by 50% per year worldwide, and that’s creating not only backhaul costs but also a multiplication of towers as cells become congested with more users and more traffic per user.  The whole metro infrastructure is stressed by the mobile traffic load, and of course wireline streaming video traffic is contributing as well.  The rumors that Google will offer a YouTube-branded streaming multi-channel “cable replacement” service aren’t making operators happy either.  In fact, streaming competition for TV is doing more to generate the conviction that operators need usage pricing than concerns about the pace of their monetization progress are.  Only usage pricing can stem the exploitation of incrementally free bits by OTT players, as operators see the situation.
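
It’s worth running the compounding, because 50% per year sounds tamer than it is.  This quick sketch uses only that growth figure:

    growth = 1.5          # 50% per year, per the figure above
    traffic = 1.0
    for year in range(1, 6):
        traffic *= growth
        print(f"year {year}: {traffic:.1f}x today's mobile traffic")
    # Traffic roughly doubles every 21 months (~7.6x in five years), which
    # is why cell sites and backhaul costs multiply rather than creep.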

The Google offering, if it’s real, will be a test of the complex relationship between content cost, content cost recovery, and content quality.  Any streaming service that purports to replace channelized TV will have to offer viewers something equivalent in quality.  For Google, whose service is rumored to involve only 10 to 20 channels, the issue is even more significant, because the traditional answer to things like summer reruns has been to let customers flee to cable channels.  What happens when there aren’t any, or if the prime networks are never there?  You can’t just stream old episodes, because they’re already available online through a variety of sources, from Amazon to Netflix and from many networks themselves, and in on-demand form to boot.

The pricing model relates to content quality.  If you have first-run movies and original series, like HBO or some other pay channel, you can charge the consumer directly and forgo advertising.  If you want to draw on network TV or older content, you’ll likely need advertising revenue at least as a supplement, and the challenge there is that the per-impression rate for ads in streaming video is about a thirtieth that of a TV commercial.  And you have to pay for content, whether you produce it or license it, out of the total of what customers and advertisers pay you.
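
A quick sketch of what that one-thirtieth ratio means in practice.  The TV CPM below is a hypothetical placeholder; only the ratio is the figure cited above.

    tv_cpm = 25.00               # hypothetical TV cost per thousand impressions
    streaming_cpm = tv_cpm / 30  # roughly a thirtieth, per the ratio above

    print(f"streaming CPM: ${streaming_cpm:.2f}")
    print(f"impressions needed to match TV revenue: {tv_cpm / streaming_cpm:.0f}x")
    # An ad-supported streaming service needs ~30x the ad impressions per
    # viewer-hour, or subscription fees to close the gap -- and the content
    # has to be produced or licensed out of that combined total.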

Video in TV and movie form doesn’t always work, and those models of content consumption have been with us for decades.  We’ll have to see whether a streaming model can work; Google may demonstrate the answer, positive or negative.  We’ll also see how it relates to usage pricing.

Moving on into the cloud, BT has made some interesting comments that suggest that it at least isn’t necessarily seeing itself as a cloud competitor, but rather perhaps as a cloud orchestrator.  That view isn’t broadly held among the larger operators we’ve surveyed; most believe that they will be deploying their own cloud infrastructure and offering a range of services from IaaS through hosted managed security services, all living on the cloud.  What BT is suggesting is that they might have some such infrastructure and services, but that they see themselves as a kind of prime cloud contractor, integrating multiple cloud providers into a single seamless service that then links to the enterprise.  This single service would also likely include network transport and connectivity, meaning likely a VPN.

What may be behind this is the fact that most network operators have a relatively narrow geographic sweet spot.  BT is strongest in the UK, obviously, just as Verizon is strongest in the northeastern US.  While both operators have international service trunks, neither is likely to want to maintain data center facilities globally, nor are they likely to want to backhaul all their global cloud customers to their home regions.  For some low-margin services, like IaaS, they’d be better off playing the federated role BT advocates.  For higher-margin services they might actually want to federate hosting rights from a partner in various geographies.  That’s what’s going to take some time to work out, because international standards in this area have failed to keep pace with the market.
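
Here’s a sketch of that placement logic as I read the BT comments.  The provider names, regions, and margin threshold are all hypothetical; the point is the decision structure, not the values.

    HOME_REGION = "UK"
    PARTNERS = {"US-East": "PartnerCloudA", "APAC": "PartnerCloudB"}

    def place_workload(margin: float, region: str) -> str:
        """Decide where a federated cloud service should be hosted."""
        if region == HOME_REGION:
            return "own cloud, home region"
        partner = PARTNERS.get(region, "spot partner")
        if margin < 0.20:
            # Low margin (e.g., IaaS): don't backhaul; hand off to a local partner.
            return f"federated to {partner}"
        # High margin: federate hosting *rights* -- run our own service
        # stack on the partner's facilities in the customer's geography.
        return f"own service stack on {partner} facilities"

    print(place_workload(0.10, "US-East"))  # low-margin IaaS
    print(place_workload(0.40, "APAC"))     # high-margin managed service
    print(place_workload(0.40, "UK"))       # home region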

Broadband Twins Not So Alike?

Good news for providers of wireline broadband services and their equipment vendors: mobile isn’t going to destroy wireline!  Actually, nobody who takes the time to think about the reality of mobile infrastructure believed that to begin with, but the comment does offer an opportunity to look at two aspects of broadband evolution.  “Two aspects” is the relevant phrase, because it’s become clear we have two interdependent broadband futures, not just one.

Mobile services are an RF-based extension of metro infrastructure.  You pipe bits to a tower and beam them into space, and because of fundamental physics you can’t push infinite bits over RF.  That means you need multiple towers to serve a large population of users, and that even then a large concentration of users in one geography will limit what you can deliver to them.  An LTE cell might support 100 Mbps of aggregate bandwidth, which means 20 streaming HD videos, even in a smallish form factor, would totally consume the capacity.  To supply more, it’s more antennas and more backhaul.  To illustrate the point, if you wanted to offer the hypothetical 100 Mbps per home using wireless, you’d have to give every user an antenna and a backhaul, and the result would look suspiciously like wireline broadband with a WiFi in-home network.
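
The arithmetic behind that claim, using the 100 Mbps aggregate cell figure above and assuming roughly 5 Mbps per smallish-screen HD stream (the per-stream rate is my assumption):

    cell_capacity_mbps = 100   # aggregate LTE cell capacity, per the figure above
    hd_stream_mbps = 5         # assumed smallish-form-factor HD stream rate

    print(f"HD streams per cell: {cell_capacity_mbps // hd_stream_mbps}")  # -> 20

    # Wireline-like delivery over RF: 100 Mbps to each of 1,000 homes
    homes, per_home_mbps = 1000, 100
    cells = homes * per_home_mbps / cell_capacity_mbps
    print(f"cells (each with its own backhaul) needed: {cells:.0f}")
    # One cell per home -- i.e., you've recreated wireline broadband,
    # antenna by antenna.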

So we aren’t going to replace our streaming 3D TV wireline feeds with mobile.  But obviously we’re not going to drag fiber optics behind us as we commute or go out to dinner either.  Fortunately, most users report a sharp division in what they do with mobile broadband and what they do with wireline.  We don’t expect to watch feature films on smartphones, and we don’t think that getting directions to a store or restaurant before we leave for the place is exactly progress either.  Microsoft Streets and Trips revisited?  Anyway, communication fits into the context of living, and the sharpest contrast in how we live is the contrast between the at-home and away-from-home behavior models.  Since teens practice supervision avoidance by staying away, this contrast is also the largest reason why youth behavior online differs from adult behavior.

From an infrastructure perspective, though, you can see that delivering mobile bits and delivering wireline bits is the same up to the last mile, or at least it could be.  The migration to LTE is important for mobility because it’s a migration to an architecture that provides IP dialtone rather than one that pushes online bits into a voice channel.  True LTE migration is designed to equip the mobile user to be a user of the Internet, drawing on the Internet service range.  That, of course, is the rub in terms of revenues.  The revenue per bit on Internet dialtone in the wireline world is very low.  If that same low-revenue model translates into the mobile world, operators lose much of their incentive to offer mobile broadband in the first place, or they’re incented to apply usage pricing to gain back what they’d lost.  The service world is where operators hope to take up the slack, the place where they avoid per-bit charging because they’ve successfully become purveyors of some of those things in what I just called the “Internet service range”.

The challenge is that just what might be in that range is very difficult to predict.  It’s clear that some content, meaning video, is going to be delivered in mobile form, particularly because of the tablet revolution.  People are going to increasingly view “TV” on tablets, substituting personal viewing for collective TV-watching, because TV lacks the compelling star power it once had, the power that got the family all seated in one place at one time.  Now they all want to watch different things, because programming has had to diversify to stay relevant.  Tablets might well become the personal viewing platform for the home, particularly as summer reruns push viewers out of traditional TV into VoD, where finding the best thing to watch as a family may present insurmountable obstacles.  Might that in turn create more interest in tablets as an out-of-home viewing platform?  Sure thing.  But there’s more to mobile than just mobile content.

We already know that mobile advertising is different from wireline advertising, even for the same websites.  People on the move who use search are more likely to be looking for an immediate purchase.  People on the move are more offended by irrelevant ads.  People sitting in a social setting away from home might be more amenable to ads, though.  LBS is important; mobile messaging and social features are important for roving users.  We’ve only now started to see how much might be done to leverage mobility and behavior.  Telcos want to cash in, but so do Apple and Google.  It’s the next arena of the arms race.