New Business Models, New Market Issues

The international tensions of the week haven’t ended; far from it, in fact.  Japan has raised its nuclear incident to a “5” on the international scale, between Three Mile Island and Chernobyl.  The UN, with both China and Russia abstaining, voted to approve any measures short of invasion in Libya, though at this point it may be too late to save the rebellion.  However, world markets see the moves as expected, and anything expected carries less risk.  That’s sent stocks generally higher.

Libya has said it will halt its attacks, a move that’s sent futures up sharply, but of course we don’t know whether the promise will be kept and whether rebels will respond in kind.  Nobody seems to doubt that the world community wants the government of Khadafy toppled, and thus the move by the government in Libya may work against international interests unless it allows the rebels to win, which would mean that Libya’s cease-fire would have to be unilateral—hardly likely.

The big tech news is that Cisco intends to begin paying a dividend.  The decision has been long expected, and the dividend of six cents per share was enough to get the stock trading up a bit in pre-market action.  However, it’s little more than a percent, which hardly makes Cisco an income stock.  IBM’s yield is higher, for example.  Still, the move is important for Cisco in transitioning the company into a more mature model.  The question now is whether Cisco will be able to emulate IBM and continue to not only hold on to current incumbencies but also develop new ones (what the company calls “adjacencies”).

Continuing in tech, HP continues to talk up its own business shift, with WebOS, software, and the cloud being the big features.  None of that is surprising, but there’s not a whole lot of order and architecture coming out of the company.  What does appear to be true is that HP will be pushing WebOS in both the consumer and business space, hoping to make it a true cloud partner.  Some analysts and media have speculated that the company would focus on business cloud users for WebOS, but I don’t think that’s likely.  The business cloud space is a good one for HP, to be sure, but first, they’d not have made WebOS a part of every PC had a business-only mission been their goal.  Second, the overall success of an appliance based on WebOS will depend on some consumer success; they can’t make a go of it on business opportunities alone.

Another interesting trend we’re seeing is the shifting of ad bidding from the book-space-annually “up-front” market of TV toward a more real-time bidding model.  RTB today doesn’t really mean real-time in the sense that, when the ad is served, a group of bidders collects and bids to be the one serving it.  Generally, it means that bidders have provided bidding policies to an engine that applies the policies to generate a bid for a given impression, then awards the impression to the highest bidder.  The goal is to provide better targeting and better ad yield at the same time.
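To make the mechanics concrete, here’s a minimal sketch of that policy-driven auction in Python.  It’s purely illustrative; the names (Impression, BidPolicy, run_auction) and the scoring scheme are my assumptions, not any real exchange’s API.

    from dataclasses import dataclass

    @dataclass
    class Impression:
        user_segments: set            # targeting attributes known about the viewer
        site: str

    @dataclass
    class BidPolicy:
        bidder: str
        target_segments: set          # segments this advertiser values
        base_bid: float               # bid when at least one segment matches
        bonus_per_match: float        # extra value per matching segment

        def bid_for(self, imp: Impression) -> float:
            matches = len(self.target_segments & imp.user_segments)
            return self.base_bid + matches * self.bonus_per_match if matches else 0.0

    def run_auction(imp: Impression, policies: list) -> tuple:
        """Apply every stored policy to the impression; the highest bid wins."""
        bids = [(p.bidder, p.bid_for(imp)) for p in policies]
        return max(bids, key=lambda b: b[1])

    policies = [
        BidPolicy("brandA", {"sports", "male_25_34"}, 1.50, 0.40),
        BidPolicy("brandB", {"travel"}, 2.00, 0.25),
    ]
    imp = Impression(user_segments={"sports", "travel"}, site="news.example.com")
    print(run_auction(imp, policies))   # ('brandB', 2.25)

The point of the structure is that the bidders aren’t present at serve time at all; only their stored policies are.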

The tracking aspects of RTB certainly collide with the federal interest in do-not-track, which may well be why the topic has generated so much negative buzz.  Knowledge is money in the ad game, and when you consider that the total global ad budget is less than a quarter of total world communication service revenues you realize that only a successful few can hope to make a ton of money here.  RTB is a way to maximize yield, and thus it’s a driver to collect information that makes a given target valuable to advertisers.  The trend toward RTB will inevitably lead to higher monetization for social networks, which can gather information more easily, to pressure on ISPs to accept DPI-based intelligence-gathering, and to much more tracking by more people, with the inevitable publicized privacy breaches.  Regulation here is coming, largely because the industry won’t exercise collective discipline.  The browser players’ DNT technologies are only a gesture; more is needed and more is coming.  It’s only a question of who provides it.

Finally, the New York Times is following others into the world of the paywall, beginning yesterday for Canada and later this month in the US.  The decision is another indication that people are realizing that content is valuable and that those who can produce it have a right to expect some monetization, particularly when some of the use of news content makes money for the user rather than for the producer of the material (hence the tension with news aggregator sites).  While I certainly sympathize with the desire for free stuff, I also understand that this trend isn’t going to stop with the Times.  Everyone in an ecosystem needs to live to play their own role, and that’s something we’re just now learning about the Internet ecosystem.  It’s not going to be an easy lesson for many.


More Video Thoughts, Some Economic Hope

Yesterday wasn’t a happy day for global markets, but it’s already looking like sanity might prevail.  Most on the Street realize that Japan isn’t a large enough chunk of the global economic pie to cause a major disruption even if we presumed that their economy was wrecked—which it’s not.  There will be short-term dislocations as Japan’s production and consumption patterns try to restore themselves to normal, and in the longer term Japan will almost surely have a major increase in economic activity as the country rebuilds.

One factor that might have an impact on worldwide investment patterns is a possible selling of foreign assets as Japan funds its rebuilding.  Even that’s not likely to have a major impact both because a significant selloff isn’t likely and because Japan isn’t the primary holder of critical foreign financial assets—China is.

In tech, an interesting set of developments comes in the online video space.  Netflix is bidding to produce an original series, its first, and is rumored to be willing to pay big to get the deal done.  At the same time, some networks have objected to Time Warner Cable’s plan to stream video to an in-home iPad.  The complaint is that the application isn’t within the covered syndication rights negotiated between the parties.

The TWC application has proved so popular with consumers that the service stalled on excess traffic and has had to be phased back and restructured.  This is proof positive, in my view, that the tablet really does change everything in video.  It’s big enough to create a good video experience for an individual user, and it can be carried around.  It’s clearly the expansion of the TWC program beyond the living room that the networks fear, but it may be that the big risk in video is illustrated not just by the TWC deal but by the combination of this and Netflix.

TV sucks these days, so say most users in my survey (of course, many have said that in the past).  There is definitely a growing angst over the quality of primary-network TV and a growing flight of viewers to specialized cable networks.  There’s also a flight out of the traditional channelized pattern to VoD and to streaming online video.  All of this started off as a flight from perceived loss of quality, combined with increased irritation over the growing intrusion of commercials.  However it started, though, the first result of this flight is a change in consumer viewing: a demand for that just-right piece instead of a willingness to watch the least of N evils, where N is the number of regularly viewed channels.  Increasing consumer intolerance for all but the just-right exacerbates the flight risk from traditional viewing.

The problem with all of this is that there’s only a certain amount of content available in libraries that can satisfy the new demanding consumer.  How many movies on Netflix can someone stream before they’ve used up all that interest them and exhausted their tolerance for re-viewing?  The cable networks realize that they have a pretty large library of more timeless (and less viewed) material, and Netflix realizes that even this won’t be enough.  What makes Netflix different is that they have subscription revenue right now, and they can use some of it to produce content and possibly pyramid themselves into being an online “network”.  They need a hot show with some hot stars to start the process, but recent cable network drama and action series demonstrate that you can make an appealing show without well-known players as long as you have a bully pulpit to launch it from.

All the online video pressure created by the intersection of the more demanding user and the more tablet-mobilized user base is certain to create major network stress, particularly in mobile broadband.  I think that the genius of Alcatel-Lucent’s lightRadio scheme is its direct application to the simple truth that broadband wireless will have a very different usage-density pattern than normal wireless.  People will watch tablet video where they can stop and smell the roses (or the traffic, or whatever) rather than while roaming about.  That focuses video consumption where sedentary behavior is likely, which is in turn focused on waiting areas, restaurants, hotel rooms, etc.  Those areas will need much more capacity, and while you could cover them with femtocells or WiFi hotspots, those investments might not translate into as good an ROI as microcells on streets near a row of popular cafés.

All of this could have a major impact on the CDN world, too.  If users demand more personalized video, then the notion that 10% of the videos get 90% of the traffic will cease to be valid.  That means that traditional VoD models that rely on local servers alone will be stressed, both because there’s too much video to serve locally and because tablet mobility is making caching forward toward the user essential to preserving metro capacity and backhaul resources.
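To see why, consider a toy popularity model.  The sketch below (plain Python, with purely assumed parameter values) computes the share of requests a cache holding the top 10% of titles can serve under a Zipf popularity curve; as the curve flattens, meaning tastes get more personalized, the same cache serves a smaller share.

    def cache_hit_rate(num_titles: int, cache_fraction: float, zipf_s: float) -> float:
        """Fraction of requests served from a cache holding the most popular titles."""
        weights = [1.0 / (rank ** zipf_s) for rank in range(1, num_titles + 1)]
        cached = sum(weights[: int(num_titles * cache_fraction)])
        return cached / sum(weights)

    for s in (1.0, 0.8, 0.6, 0.4):
        print(f"Zipf s={s}: top-10% cache serves {cache_hit_rate(10_000, 0.10, s):.0%}")

At s=1.0 the top 10% of a 10,000-title library captures roughly three-quarters of requests; at s=0.4 it’s closer to a quarter.  That’s the local-server stress in numbers.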

Video is getting complicated!


New Looks at Old Dynamics

The world’s markets continue to oscillate as speculation, risk, and policy questions all collide to generate uncertainty.  I don’t think that the fundamentals are at risk, but there’s enough perception of a problem to invite attempts to exploit uncertainty for profit.  The nuclear situation in Japan and the aftermath of the earthquake are creating risk, and so is the tension in the Middle East, most notably now in Libya and Bahrain.  There’s not much more to say on these issues other than to acknowledge that we’re likely in for a period of increased market swings, and that I believe the long-term trends will be positive.

In the US, everyone is trying to jump on the hot 2012 issue, which is perceived to be jobs.  One challenge in the job market is that state and local governments are strapped for money owing to falling tax revenues.  Budgets have to be balanced somehow, and for most governments that means reducing payrolls.  The net job loss here could make the jobs recovery lag the economic recovery because it offsets private-sector growth.  The problem is that it’s virtually impossible for states or local governments to avoid layoffs.

The much-publicized situation in Wisconsin was a Republican reaction to the state government dilemma, and many now believe it will end up hurting Republican chances in the 2012 elections.  The population at large, according to polls, doesn’t favor the union-busting mindset.  Even if all the move does is mobilize the Democratic base, that would virtually guarantee Republican defeat, and possibly on a large scale.  Other states have tried a different tactic, working to improve the state climate as a home for businesses and thus boost tax revenues.  It will probably take a year to see what works, which puts the results right at the time of the 2012 elections.

Copyright reform is also something that’s gaining traction, with the Administration taking a surprisingly activist role on what one might think would be a pro-business and thus Republican issue.  The Administration wants to reform a number of aspects of copyright law regarding media distribution, including authorizing wiretaps for felony infringement and possibly authorizing DPI to look for infringing exchanges.  While these reforms are far from promoting surveillance of everyone to prevent infringement, they might well trap people who are infringing.  For those who believe that file exchange of copyrighted material is a right, that would be a blow.  But the US is unusually dependent on intellectual property development for its economic health, and with jobs a priority, you know where we’re likely headed.

One example of the infringement trend is the assertion that usage pricing might curtail piracy by making it expensive to host P2P copies of copyrighted material for exchange.  I do think you can make a case that rather than establishing usage caps and pricing increments for downloads, you might want to consider incremental pricing on uploads.  Most responsible studies suggest that as much as 80% of upload traffic is P2P exchange of copyrighted material, and it hardly seems fair to charge users overall to subsidize traffic that’s technically illegal.
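A quick, entirely hypothetical bill calculation shows why the upload side is the better lever.  The traffic profiles and prices below are invented for illustration; no carrier’s actual plan is being modeled.

    def monthly_bill(base, down_gb, up_gb, down_cap, down_rate, up_cap, up_rate):
        """Base price plus increments for usage above each directional cap."""
        bill = base
        bill += max(0.0, down_gb - down_cap) * down_rate
        bill += max(0.0, up_gb - up_cap) * up_rate
        return bill

    # Plan A: 150 GB download allowance, $1/GB over, uploads free.
    # Plan B: downloads free, 20 GB upload allowance, $1/GB over.
    users = {"streamer": (120, 5), "P2P seeder": (120, 300)}

    for name, (down, up) in users.items():
        plan_a = monthly_bill(40, down, up, 150, 1.0, float("inf"), 0.0)
        plan_b = monthly_bill(40, down, up, float("inf"), 0.0, 20, 1.0)
        print(f"{name}: download-capped ${plan_a:.0f}, upload-priced ${plan_b:.0f}")

The ordinary streamer pays $40 under either plan, but the host of P2P copies pays $40 versus $320; upload pricing lands on the party generating the exchange traffic rather than on users overall.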

Staying with our video theme, we now have some tests starting to determine how in-VoD advertising might be accepted by users.  Cable companies and telcos that support VoD have been interested in ad insertion in their material, mirroring the patterns of normal broadcast television.  There’s no question that users would prefer free programming without ads, but that’s hardly the issue and isn’t likely practical in the long run anyway.  The question is how many ads users will accept, and whether they’ll respond negatively if they can’t fast-forward through VoD material to skip the ads.  Many users are watching VoD for lack of anything interesting on channelized TV or to escape seemingly exploding advertising there, so striking a balance is important.

At the heart of the issue here is whether we’re in a kind of deadly spiral.  Lower appreciation for channelized content causes flight to online streaming, where ad revenues are lower.  The lower ad contribution reduces the available dollars to produce programming, which results in more flight. Over time, the stock of already-produced-and-not-seen content for each viewer is exhausted, and the incoming stream of new content isn’t sufficient to fulfill the needs of the audience.  Do they then go back to living-room charades, play Monopoly, or what?  Interesting question.


Tensions, Plans, and Stories

The nuclear situation in Japan is now starting to rattle markets that were previously prepared to shrug off the disaster in the context of global economic recovery.  At this point, I still believe that the issue is short-selling by hedge funds rather than any indication that the disaster will impact global economics in the longer term.  The human side of this picture is awful, but the economic side is manageable.  The challenge is that speculators are trying to profit from uncertainty by driving shares down, hoping to make money as fearful investors sell to protect profits, and then again as prices rise.

In the Middle East, our problem is that we have international bickering creating uncertainty.  There’s no question that virtually everyone would like to see Libya change governments (Russia, China, and some authoritarian states are the exception).  Even the Arab League wants Khadafy to go.  However, absent actual military action like a no-fly zone, it seems likely that the rebels will lose, and while diplomats argue over whether to establish such a zone, the situation in Libya may be moving past the point where the decision will even matter.  At the same time, we have more tension in Yemen and now in Bahrain, and all of that seems to put oil at risk.  The problem is that oil prices are actually falling now, supposedly because of the impact of Japan’s crisis on domestic oil consumption.  Is that the reason, or are speculators simply moving out of oil and into stocks?

In the tech world, HP’s CEO is promising a more cloud-engaged HP, which seems a smart move given that it’s clear that cloud computing will in some way be the driver of virtually all data-center-centralized IT consumed in the next five years.  I’m not suggesting everything migrates to public clouds with Google hosting banking applications or something; what’s going to happen is a gradual hybridization of private and public cloud architectures.  That means any company with server aspirations had better get on board, and HP has had a good set of tools but no blueprint.  Of course, the same can be said for most network vendors, and even about Oracle.  Microsoft and IBM get top marks from both enterprises and service providers for their cloud strategies; Cisco still wins among the network vendors.

XO is getting into the cloud services business too, hardly the first carrier to do that, and it says it will be focusing on the SMB space.  That’s a much more ticklish proposition than most are willing to admit.  My surveys do show that SMBs are likely to cloudsource a larger portion of their total IT spending, but the problems are first that their total IT spending is smaller than that of large enterprises, and second that the cost of sales and support for SMB customers is much higher.  My model says that companies like XO can sell services to their current base, but that it will be difficult for them to expand beyond that.  With a relatively small target audience, it’s then a question of whether XO can gain enough economy of scale to be an effective cloud player.  Their situation is reflective of the cloud market overall; you’re either a big player or an inefficient and therefore marginal one.

A new report is suggesting that the problem with piracy of copyrighted material is created by the greed of the producers, whose high prices encourage piracy.  I’m not really convinced here, I have to say.  Yes, it’s true that you could reduce piracy by lowering the price of the real goods and thus the incentive to pirate.  That’s not the question, though.  You could reduce auto theft by giving cars away, too.  The issue here is creating the largest possible value from the economic ecosystem, setting the price that produces the optimum revenue flow.  As long as pirates can make a profit at a given price, and as long as they’re largely unconcerned about being caught, we’re going to have piracy.  The notion that it’s the fault of the producers doesn’t pass the sniff test.

In fairness, the study is suggesting that high local prices in third-world locations are creating a piracy incentive, and that the resulting pirate supply may then be redirected even toward markets where prices are lower.  OK, that I can buy to a degree, but even here the problem is that we’ve already created pirates, and they’re thriving in many countries.  Changing local pricing isn’t going to put them out of business internationally, or likely even locally, at this point.  But it might do nothing at all; most third-world countries aren’t consuming Rolex or Gucci knock-offs locally and never did, so how the real thing might have been priced isn’t relevant.

In the ongoing usage-pricing debate, the question now that AT&T has made its move is whether the cable competitors will go along.  Comcast does have a usage cap, but it’s set much higher than AT&T’s DSL cap (at the same level as the U-verse cap: 250 GB per month), and other operators have experimented with (and withdrawn) usage-price plans in the past.  Recent studies show that cable has a higher EBITDA than even OTT players as a group, but of course most of the profit comes from TV and not broadband.  Furthermore, the cable companies have little incentive to push broadband usage when most of the usage they’d be pursuing is video that threatens their TV business.  There’s surely going to be a push for regulators to kill the concept of caps and incremental billing, but even the current neutrality order wouldn’t forbid that, and it’s very unlikely in my view that Congress would order the industry to fix prices or pricing policies; that would kill investment.  Republicans would never agree anyway.
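For context on what a 250 GB cap actually buys, here’s some rough arithmetic; the bitrates are generic assumptions, not any carrier’s published figures.

    CAP_GB = 250
    BITRATES_MBPS = {"SD stream": 2.0, "HD stream": 5.0}

    for name, mbps in BITRATES_MBPS.items():
        gb_per_hour = mbps * 3600 / 8 / 1000      # megabits/sec to gigabytes/hour
        print(f"{name}: ~{gb_per_hour:.2f} GB/hour, "
              f"cap allows ~{CAP_GB / gb_per_hour:.0f} hours/month")

At roughly 5 Mbps for HD, the cap works out to about 110 hours of streaming a month, which is why caps at this level mostly bite on heavy video use rather than on web browsing or email, and why they protect the TV business.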


Japan, Usage Pricing, and You

The earthquake in Japan and the related threat of nuclear reactor problems are surely the headline items for today.  The human scale of the disaster isn’t even known at this point, but even many of those who survived uninjured face a difficult future.  Homes, businesses, and infrastructure have been severely damaged, and the consequences of the quake will linger for years to come.

Economically, the markets are having a hard time predicting what will happen.  The quake will impact Japan’s economy and production in the near term, and it may cause a shift toward production for internal reconstruction.  In the longer term, reconstruction will likely boost the Japanese economy, which had been threatened with deflation.  Short-term negative impact in Japan seems inevitable, though Tokyo seems determined to release a flood of money to counter any immediate liquidity risks.  The nuclear issues will almost surely curtail the country’s aggressive plans to increase the percentage of total electricity that nuclear plants generate from the current 33% to a goal of over 50%.  While near-term energy consumption will be down because of the reduction in driving, a switch from nuclear to fossil fuels may have some impact on energy costs.  It’s also been suggested that the reconstruction in Japan may increase sales of technology and other products, and that insurance companies might dump Treasuries to pay claims.  However, the turmoil seems to have boosted Treasury demand, so any sales by insurance companies may well be quickly taken up with little net movement in price/yield.

The problems in Japan will certainly have an impact on how companies view disaster recovery.  The scale of the problem in Japan makes it clear that where there is a realistic risk of natural disasters (as there is on the US west coast, for example) it’s critical to plan for disruptions that could have a considerable geographic scope, and for substantial loss of power and communication in the impacted area.  That, in my view, will push companies more and more toward the concept of cloud backup, not only for data but for data centers or at least mission-critical apps.  I think the big beneficiaries in this could well be the service providers, who are in a better position to distribute compute assets than most pure-play backup types and have the financial credibility to make enterprises comfortable.

Broadband policy issues are moving into a new period; AT&T has announced it will be imposing usage caps on both DSL and U-verse customers.  Initially they’ll not be charging if people go over their cap only a few times, but it’s obvious that the carrier is joining many of its fellow ISPs worldwide and getting ready for usage pricing above a certain cap point.  The caps are set generally where cable competitors set theirs, so it’s likely there will be little impact on the carrier’s churn.  The question of whether access is profitable or not has been debated for years now, but if you look at operator financials and financial practices the answer is clear.  We’re now going to get a broader practical demonstration of the consequences of poor Internet settlement policy.  Yes, I know I’ve been complaining about that for a decade or more, but that’s the whole problem.  It’s not like this issue has snuck up on us; we’ve just been determined to stick our heads in the sand.

Alcatel-Lucent has been pushing its Application Enablement and Open API Program as solutions to the revenue problem; most recently they’ve touted the integration of a Scandinavian carrier’s billing with Angry Birds in-app purchasing.  It’s progress to be sure, but as I’ve noted before, operators can’t monetize the network by selling billing and customer care.  They have to be direct players in higher-layer services, and that requires more than developer exposure of carrier IT assets; it requires the creation of service features in an exposable, manageable context.  That’s the goal of a new-age service layer, something that’s just as overdue as settlement.


Google, Juniper, and the Cloud

We’ve got a number of interesting points to end our week, and monetization issues are at the heart of them.  Let’s get moving!

Google is taking some steps that show how hard it is to be a portal player.  First, it’s letting users block sites in search results, and second, it’s classifying Gmail to facilitate users’ discarding of stuff they think is unimportant at best or junk at worst.  Anything that hurts ad revenues anywhere could be argued to be counter to Google’s interest, but the problem is that undisciplined exploitation of any resource tends to discourage users.  Search is hurt by having junk results, with junk being defined as either something that games SEO for unwarranted placement (which Google has been working to eliminate) or as simply something the user doesn’t like.

In the best of all possible worlds, advertising would go only to people who might be receptive, but since it’s impossible to know exactly who might be, the tried-and-true approach has been saturation bombing.  That the bomb-ees are getting tired of the attention is clear, and you can see a similar trend in the TV commercials area.  In my surveys, users cite disgust with commercials nearly as often as they cite disgust with programming as reasons to cut the cord.

The underlying problem here is that we have too many people looking to ad sponsorship to drive their business models and not much actual ad revenue to do the sponsoring with.  As media conduits try to raise revenues by reducing ad prices to attract new players (often by cutting the per-commercial time), they have to spawn more ads, and their revenue growth goals create the same pressure.  So we have people who want their lives ad-sponsored but don’t want to see the ads that are paying for the stuff.  Clearly this isn’t going to work for long.
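The arithmetic of that spiral is simple and ugly.  Using invented numbers:

    def required_volume(current_ads, price_change, revenue_growth):
        """Ad volume needed when per-ad price changes and revenue must still grow."""
        return current_ads * (1 + revenue_growth) / (1 + price_change)

    ads_today = 1000                  # spots aired per week, hypothetically
    needed = required_volume(ads_today, price_change=-0.20, revenue_growth=0.10)
    print(f"{ads_today} -> {needed:.0f} spots ({needed / ads_today - 1:.0%} more ads)")

Cut per-ad prices 20% while chasing 10% revenue growth and you must run about 38% more ads, which is precisely the kind of escalation that drives viewers away.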

On the equipment side, Juniper provided analysts, including me, with some more color on their QFabric security integration.  There are two dimensions to this: first, it illustrates in some detail what a “service” is in QFabric terms; second, it illustrates the exploding complexity of security in a virtualization and cloud environment.  The combination of these things positions QFabric much more directly as an element in a cloud computing strategy, which is how I’ve viewed it from the first.  Since I think anything truly cloud-enabling is important, we’ll look at this a bit here.

In the QFabric pitch, Juniper’s CTO made a point of saying that QFabric offered both virtual networks and “services” as its properties.  Security was such a service, and the way a service is created is by routing a flow through a service “engine”.  Because the flow’s route explicitly intersects with a security process, the flow route is secure without external intervention.  You can see that this sort of engine/service thing could easily be applied to other stuff like application acceleration, though Juniper hasn’t said anything about what might be a service of QFabric beyond security.  Interestingly, by making security appliances into QFabric service engines, you effectively virtualize security and increase the number of machines you can secure with a given device.  It’s like the old test of a good compiler: can it compile itself?  An offering to secure virtualized resources has to be a virtualized resource.
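Here’s a minimal sketch of that engine/service idea: a flow is “secured” simply because its route passes through a chain of service engines.  This is my illustration of the concept, not Juniper’s QFabric implementation; all names are invented.

    from typing import Callable, List

    Packet = dict                      # e.g. {"src": ..., "dst": ..., "port": ...}
    Engine = Callable[[Packet], Packet]

    def firewall_engine(pkt: Packet) -> Packet:
        if pkt.get("port") == 23:      # block telnet, as a stand-in policy
            raise ValueError("blocked by security policy")
        return pkt

    def accel_engine(pkt: Packet) -> Packet:
        pkt["accelerated"] = True      # stand-in for application acceleration
        return pkt

    def route_flow(pkt: Packet, chain: List[Engine]) -> Packet:
        """The flow's route explicitly intersects each service engine in order."""
        for engine in chain:
            pkt = engine(pkt)
        return pkt

    flow = {"src": "vm1", "dst": "vm2", "port": 443}
    print(route_flow(flow, [firewall_engine, accel_engine]))

Because the service is just another hop on the route, adding one more device to the chain secures (or accelerates) every flow steered through it, which is the virtualization effect noted above.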

On the complexity-management side, the problem Juniper is solving here is the fact that it’s impossible to build a static-appliance security framework around a virtual or cloud data center.  Even if application-to-server assignment isn’t totally dynamic as it would be in a cloud computing application, it’s still easy to move virtual machines around with management commands.  In fact, the VM tool players are falling all over themselves to make the process of moving VMs easy, which only makes security hard.  All those problems are multiplied in the cloud, where resource assignment is dynamic by definition.

Juniper’s idea is to combine in-fabric security and in-VM security (through its Altor acquisition) to build a truly secure virtual data center.  By adding support for cloud “director” functionality, Juniper will presumably then extend this to the more dynamic resource assignment of the cloud.  As it is, VMs carry their security credentials as metadata, so cloud partners who have Juniper products on their end will be able to interpret and apply the policies even in a public cloud.
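A sketch of what policies-as-metadata might look like; the schema below is invented for illustration and isn’t Juniper’s or Altor’s actual format.

    import json

    vm_manifest = {
        "vm_id": "web-frontend-07",
        "security_metadata": {
            "zone": "dmz",
            "allowed_inbound": [{"port": 443, "proto": "tcp"}],
            "allowed_outbound": [{"port": 3306, "proto": "tcp", "to_zone": "db"}],
        },
    }

    def apply_policy_on_arrival(manifest: dict) -> None:
        """What a policy-aware destination does when the VM lands there."""
        policy = manifest["security_metadata"]
        print(f"placing {manifest['vm_id']} in zone {policy['zone']}")
        for rule in policy["allowed_inbound"]:
            print(f"  permit inbound {rule['proto']}/{rule['port']}")

    # The manifest travels with the VM between data centers or cloud partners:
    apply_policy_on_arrival(json.loads(json.dumps(vm_manifest)))

The essential property is that the policy moves with the workload, so a move or a cloud-burst doesn’t leave the VM outside its security perimeter.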

We’re starting to see a glimmer of what security would look like in a cloud-enabled future, or at least how Juniper sees it.  The question now is whether competitors will present their own holistic model of cloud/virtual security, or whether they’ll simply try to counterpunch pieces of this.  A good industry debate on the issues here would be helpful for everyone, but especially for service providers, who need a secure model of cloud services if they ever hope to sell them to anyone but startups and hobbyists.  As I’ll get to in a minute, clouds are really critical for operators’ future monetization plans, which makes them critical for network spending and for the future of broadband services.

It’s good to see that Juniper is building a cloud story here in the security space, and in fact their security story is pretty holistic and strong from a cloud perspective.  That particular aspect of the announcement would have been stronger had the QFabric/PTX symbiosis been played up, and had both that and security been linked very explicitly to Junos Space.  Yes, it’s hard to make a big launch work, but rather than try to do a big launch made up of five or ten deeply technical pieces (as they did a couple of years ago), they could do a strategy launch and then instantiate it.  Detailed announcements invite defeat in detail, and the cloud is no place to take that risk.

Mobility is an increasing driver for change, and some Google predictions show how profound the changes might be.  Google asserts that as many as half of all retail transactions and two-thirds of retail purchase dollars will be initiated on mobile devices.  A big chunk of this is the presumed shift from credit cards to near-field communications and phones as credit devices, a Google push, and frankly I doubt the number no matter how they arrive at it.  Still, it does appear likely that mobile devices, ubiquitous broadband, and new consumer behavior patterns will create new market opportunities.  That brings us back to the question of broadband policy and monetization.  In the best of all possible worlds, the big ISPs would be offering their own higher-layer services and would thus be encouraged to look at the access connection as a delivery asset they can exploit.  As it is, they see it as a delivery asset only their competitors can exploit.  Mobile services are where the change in that myopic view will have to arise, because that’s the focus of investment and change right now.

The cloud is the operator conception of its future IT infrastructure, the basis for content delivery and feature hosting.  I’m so interested in the cloud because I believe that effective operator exploitation of cloud potential is the only way to avoid commoditized broadband and anemic broadband investment.


Tech Revolutions?

Nobody can say that the tech space isn’t moving at frightening speed, and that may be true on the business side as much as with the technology itself.  We’ve got two stories of stunning shifts in the tech business landscape, and while neither is confirmed at this moment, there’s certainly a level of plausibility that can’t be ignored.

The first of the stories, one that’s been percolating below the surface for a bit, is that the “wholesale 4G” hopeful LightSquared is looking to dump its plans to build out a network and instead piggyback on Sprint’s LTE modernization plans.  This particular tune goes back at least six months; I’ve been skeptical from day one about a pure MVNO-host model for LightSquared given the plummeting margins in mobile broadband.  In fact, I told a vendor then that I’d not put much stake in the LightSquared build-out plans.

The obvious question now, presuming the rumor is true, is what this does to NSN.  Given that in my view they never had a shot at this because it was never going to be financially feasible, the answer is “nothing” in an objective sense.  Subjectively it obviously would hurt NSN because it looks like another slip from a company that seems to be slipping all too often.  NSN’s Motorola-pieces acquisition has had its closing delayed again, for example, and the company seems unable to shake a marketing and positioning paralysis that’s almost a legend in the space.

The least obvious question, and the most important one, is whether this is a clear sign that the mobile broadband space is already circling the drain before any significant LTE deployment and before the full wave of tablets hits the market and drives revenue per bit down even further.  If there’s any sanity in the current neutrality flap here in the US (since it’s politics, such sanity would be a happy accident), the core truth is that we’re attempting to use regulations to prop up a business model that can’t be sustained at the financial level.  That would suggest the right approach is minimalist: guarantee non-discrimination and stay out of issues like compensation and settlement, premium handling, etc.  Give the industry no exit path but quitting, and they might LightSquared en masse.

At the root of the problem here is the lack of industry foresight into the impact of mobile broadband.  A full year ago I started pushing the concept that mobile broadband, flexible appliances, and consumer behavior were a big interdependent feedback loop (premium blog clients have access to the presentation, “Mobility, Behavior, and the Transformation of Telecom”).  Recent research by comScore shows that smartphones and tablets are already transforming behavior simply because you can use them everywhere you go and not just at home or in the office.  If checking Facebook is your thing, it’s likely that mobile checking will come to dominate simply because you can do it all the time.  Habitual behaviors that can be universally fed by one technology and only episodically by another will migrate to the place where they’re satisfied most of the time.  That means that wireline quickly becomes linked primarily to things that are necessarily anchored, like viewing stuff on a HDTV.  Tablets that size would really jam up the subways, after all.

In other tech news, the ITU has decided firmly to go its own way with an MPLS-Ethernet management standard, an issue it’s been fighting with the IETF over for a year or more.  This is another of those issues that I think is being covered badly and analyzed perhaps even worse.  The core of the dissent here, as operators tell me, is that the IETF wants transport MPLS to be managed as an IP service alone, and the operators want it to be managed at least optionally as an extension of the carrier Ethernet management standards.  Why?  Because you may recall that the whole T-MPLS/MPLS-TP thing was really spawned by a desire to cut off a set of Ethernet enhancements known as either PBB-TE or PBT.  Where MPLS-TP is intended to supplement Ethernet metro infrastructure, it makes sense to manage it using Ethernet standards (hence the ITU’s Y.1731 standard).  That would provide end-to-end management capability, and it’s not clear to me why anyone thinks that providing E2E management as an option in carrier Ethernet deployments of MPLS is a bad thing.  I guess I don’t understand standards-religious wars.

But then, tech is just a subset of society.


“Free” May Not Be!

Politicking over the net neutrality rules continues, with the House holding a hearing on the matter.  It’s pretty hard for the House to overturn an FCC order without enacting legislation, and that’s not going to pass the Senate or survive a Presidential veto, so the whole thing is clearly an exercise.  The real test for the order will come in the courts, and it’s virtually impossible to say how long that might take to work through.  But the debate shows the depth of idiocy associated with the whole process.

The FCC’s position can be reduced to “consumers lack enough choice in broadband providers to allow them to vote against site blocking with their feet”.  True, but that’s not really the part of the order that most people object to.  You could simply say “no blocking of sites or discrimination in traffic handling based either on the site or the traffic type” and be done with it.  The FCC didn’t do that; instead they took excursions into things like whether offering premium handling was tantamount to a kind of blocking-one-by-exalting-another relativism.  Even the question of whether a type of traffic could be blocked is in my view kind of moot; as long as operators can’t apply different rules in different geographies, no provider could impose unusual handling restrictions without facing immediate competitive disaster.  But the real problem is whether the FCC has any authority at all in this matter, and that’s what the courts will decide.

Meanwhile new developments in the market continue to raise the stakes for operators.  Facebook’s deal with Warner Brothers to stream movies is an example of how many different kinds of players are emerging to treat “the Internet” as a kind of inexhaustible free spectrum to be used to broadcast anything at near-zero cost.  But the “near-zero cost” really means “near-zero price” because the operators are forced to carry the traffic and often with no compensation at all.  Which opens my “technical” (as opposed to jurisdictional) objection to the order.  We need settlement for the Internet, period.  The bill-and-keep-and-laissez-faire-peering model is just too prone to arbitrage and that can kill investment incentives in key parts of the ecosystem.  The Republicans are right in fearing that, but they’re not raising the settlement issue either because it’s not politically popular.

What’s interesting here is that everybody’s pandering to voters’ silly lack of understanding of how the Internet works (and has to be sustained from a business perspective) is heading toward a point where the worst-case solution is the only one left to apply.  You can’t stop usage pricing with regulations, even from Congress.  You can’t order a company to invest unprofitably.  Of all the options available to deal with the explosion in incrementally free traffic, the worst is charging the user.  We need broadband fat pipes to deliver premium services, and we can’t get them by essentially declaring that there can’t be any premium services (no special handling) and that everyone can send for nothing.  Usage pricing, here we come, unless Congress or the FCC gets smart, and if that were possible they’d have done it already.

The media’s not helping here, and neither is the industry.  OFC is the usual bandwidth-is-the-main-course love feast, even though at least one optical component vendor is signaling the industry that forward demand is looking really problematic.  The lack of a way to monetize incremental broadband traffic is an almost-certain fatal disincentive to develop it, and yet we’re prattling on about 100 Gbps Ethernet as though that were the solution to the problem.  It’s not capacity that’s a problem, it’s making capacity valuable.

In the IT world, an interesting development is that two major computer vendors (HP and Asus) plan to offer something other than Windows on laptops or netbooks.  HP will be making every one of its PCs dual-boot between Windows and WebOS, something that could be a very big move toward popularizing its Palm-acquired mobile OS.  Asus will be offering, according to a rumor, Android and/or MeeGo netbooks.  The decision by some major players to move to a non-Windows OS even as an option could be huge.  On the one hand, it could be valuable for small-form-factor PCs to stave off tablet competition, but on the other it could be a major problem for Microsoft and could also create market backlash from users who don’t understand that the new “laptops” may not run the apps they’re used to having.  Securing two OSs in a dual-boot situation is its own support problem, of course, for the enterprise.

All of this testifies to the enormous problem that tablets combined with ubiquitous broadband and cloud computing could create for the market.  If we had, in theory, a completely ubiquitous “cloud” to tap into, then the only thing a user would ever need is an HTML5 browser glued onto a minimalist OS (does this sound like Google Chrome?).  The problem is that what we may be doing is creating another set of industry convulsions whose impetus depends on that inexhaustible broadband virtual spectrum.  If we don’t solve the problem of creating a survivable industry ecosystem here, we risk stranding billions in investment in a paradigm of computing that won’t work without the current bandwidth pricing model, even as that model breaks down.

See?  Free isn’t any more free than freedom.


For the Week: March 7th, 2011

It’s obvious that the big question this week, politically and economically, will be what happens in Libya.  Politically, the situation poses a kind of double threat.  First, it’s a continuation of a kind of Middle-East-domino problem that might or might not result in a democratic sweep of the region.  Second, the turmoil puts western governments in a quandary, balancing the hope of reducing the loss of life against the risk of effectively entering the conflict on the side of the rebels.  Economically, the problem is the rising cost of oil and its effect on consumer prices (gas and goods) and the recovery.

There really isn’t a supply problem with oil; the Saudis have pumped enough to make up for the Libyan loss.  The issue is speculative purchase of futures contracts, which is what’s driven up oil most of the times it’s jumped in the recent past.  Some curbs on the financial industry (which certainly needs curbing overall) could help the situation more than actions like releasing oil from the strategic reserves, but the administration knows that a credible threat to release reserves could curb speculation and help pricing.  It hasn’t helped so far this morning.

In tech, we’re counting down to the RIM tablet and wondering how competitors will manage the new iPad 2 in their plans for the fall.  The challenge for them all at this point is the sense that there’s still got to be another generation of Android tablets to catch up, which means that the current generation may be obsolete even before it’s released.  Not only does that hurt sales, it could even discredit a complete product line by stomping on its launch and limiting early interest and market share.  It’s the first announcement that gets the most ink.

Enterprises are also starting to work through the issues of tablet-based collaboration, and interestingly that’s one of the things that RIM is expected to try to exploit.  A tablet is most valuable as a collaborative tool for “corridor warriors”, in what my research identified as “supervisory intervention” applications rather than team activities.  In supervisory collaboration, a worker seeks approval or answers on a particular issue, an issue normally represented as a document or an application screen.  The process demands that the supervisory/support person share the document/application context and simultaneously discuss the problem.  Thus, you need voice and data together.  Some tablet vendors and media types have suggested that video collaboration is the answer; tablets have the cameras, after all.  The problem is that video takes a lot of capacity, people don’t like random video calls that intrude on their current context, and there’s no evidence that video helps pairwise relationships be more productive.  Voice is the answer, but how exactly do we use collaborative voice with tablets?  RIM’s answer is likely to be to create a tight link between the tablet and a BlackBerry, and that may be a good approach.  We’ve noted this issue in some enterprise comments on the difference between iPhones on AT&T and the same phone on Verizon; the collaborative multi-tasking support is better on the first than on the second, obviously, because AT&T’s network supports simultaneous voice and data while Verizon’s CDMA network does not.

In the service provider space, I’m seeing renewed activity on service-layer projects, but so far no conclusive sign of forward progress.  We’ve been working with five operator projects in monetization, and two of the five are now looking like they’ll actually start doing something in the next three or four months.  The barrier is still the question of how to ensure that assets created to monetize a specific opportunity like content delivery are compatible with the monetization of other opportunities that may or may not be targets of projects at the moment.  The need to repurpose assets across services is clear to the operators, and while it’s becoming clear to at least some vendors (IBM has been pushing this with increasing effectiveness), it’s not universally recognized.

The thing that seems to be catalyzing the service layer is the cloud.  Network operators see cloud computing as a revenue opportunity, and they also realize that cloud-compatible infrastructure is a good platform for OSS/BSS and feature-generating software—even for content delivery.  IBM’s renewed push into the service layer is coming through its Cloud Service Provider Platform, which it explicitly touts as reusable as a service-layer framework in addition to hosting retail or wholesale cloud computing services.  How far this sort of thing gets is hard to predict, though.  It might be that by this fall real projects will be committed, and real money spent.


Cloud Futures

The most interesting set of tech developments today relates to cloud computing positioning and services.  At the Enterprise Connect conference, Salesforce and Global Crossing both made cloud announcements, and both had what I see as a common thread: create a SaaS cloud service and build a platform-as-a-service offering around it.  Salesforce did this based on a social-network-integrated customer service platform (Service Cloud 3), and GC did it based on an integrated UC-friendly model that bundles the cloud with SIP trunking.

We don’t need more cloud acronyms (or hype), but there’s some substance here, in the trend at least and possibly in both offerings as well.  Enterprises and SMBs have sharply different appetites for cloud services, with the former preferring IaaS/PaaS services targeted at backup and overflow management and the latter preferring SaaS.  The primary reason for the difference is that most enterprises already run internally most of the applications they need, so there’s an issue of tossing one model and embracing another, something that gets assigned a high risk premium in cost-benefit terms.  But UC and social customer service aren’t typically implemented internally at this point, so there’s less pushback.  That could converge the sales models for all business sizes, and not only create a better total market picture but also create broader reference accounts to accelerate deployment.

There were also a number of service provider cloud announcements this week, beyond the GC one, and it’s becoming clear that the providers plan to be major contenders in the cloud space.  IBM and Microsoft are both positioning actively to support provider cloud plans, with the former stepping up their game in what we’ve called the “Symbiotic Cloud”, the model of provider cloud that combines internal IT (OSS/BSS), feature hosting and monetization, and cloud service offerings into one infrastructure.  Obviously this trend, and the fact that GC is already calling its cloud “network-centric”, means that network vendor cloud plans will have to mature in a hurry if they want to be doing something other than holding the coats of the IT giants.

The SaaS model is interesting to operators because it displaces more cost and thus justifies a higher price (that’s also true of PaaS versus IaaS).  Early indications are that operators are most interested in getting into SaaS via partnerships or through services like UC/UCC, where they believe they have a natural market.  Our research consistently shows that network operator cloud services are more credible when presented through a sales force than when presented through a retail portal.  It appears that some of the portal disadvantage could be overcome through effective marketing, but of course “network operator” and “effective marketing” are hardly synonymous even in areas where the operator has some incumbency.  Partnerships thus seem likely to rule here.

Most infrastructure players are not looking to partner in the cloud, largely because it reduces the profit margin.  Where operators have a potential advantage is that their internal rates of return are low, their ROI expectations are more easily met, and thus they can be profitable on a relationship with tighter margins.  Operators can also normally create what’s perhaps the best economy of scale in capital infrastructure and operations of anyone in the market, particularly if they involve their own applications to build up their cloud base.

Economic recovery is going to help the cloud market, I think.  We’re going to see demand grow faster than confidence, and that means that there will be at least an early tendency to take a service-based solution to incremental computing demand rather than to commit to a capital project.  In total revenue, this absolutely will not be the “Year of the Cloud” but it may be the “Year of the Cloud Paradigm” for those who want to sell the services.  Positioning an offering is likely to get a lot harder in 2012.