Japan, Usage Pricing, and You

The earthquake in Japan and the related threat of nuclear reactor problems are surely the headline items for today.  The human scale of the disaster isn’t even known at this point, but even many of those who survived uninjured face a difficult future.  Homes, businesses, and infrastructure have been severely damaged, and the consequences of the quake will linger for years to come.

Economically, the markets are having a hard time predicting what will happen.  The quake will hurt Japan’s economy and production in the near term, and may shift output toward internal reconstruction.  In the longer term, reconstruction will likely boost the Japanese economy, which had been threatened with deflation.  Short-term negative impact in Japan seems inevitable, though Tokyo seems determined to release a flood of money to counter any immediate liquidity risks.  The nuclear issues will almost surely curtail the country’s aggressive plans to increase the percentage of total electricity that nuclear plants generate from the current 33% to a goal of over 50%.  While near-term energy consumption will be down because of the reduction in driving, a switch from nuclear to fossil fuels may have some impact on energy costs.  It’s also been suggested that reconstruction in Japan may increase sales of technology and other products, and that insurance companies might dump Treasuries to pay claims.  However, the turmoil seems to have boosted Treasury demand, so any sales by insurance companies may well be quickly absorbed with little net movement in price/yield.

The problems in Japan will certainly have an impact on how companies view disaster recovery.  The scale of the problem in Japan makes it clear that where there is a realistic risk of natural disasters (as there is on the US west coast, for example) it’s critical to plan for disruptions that could have a considerable geographic scope, and for substantial loss of power and communication in the impacted area.  That, in my view, will push companies more and more toward the concept of cloud backup, not only for data but for data centers or at least mission-critical apps.  I think the big beneficiaries in this could well be the service providers, who are in a better position to distribute compute assets than most pure-play backup types and have the financial credibility to make enterprises comfortable.

Broadband policy issues are moving into a new period; AT&T has announced it will be imposing usage caps on both DSL and U-verse customers.  Initially they’ll not be charging if people go over their cap only a few times, but it’s obvious that the carrier is joining many of its fellow ISPs worldwide and getting ready for usage pricing above a certain cap point.  The caps are set generally where cable competitors set theirs, so it’s likely there will be little impact on the carrier’s churn.  The question of whether access is profitable or not has been debated for years now, but if you look at operator financials and financial practices the answer is clear.  We’re now going to get a broader practical demonstration of the consequences of poor Internet settlement policy.  Yes, I know I’ve been complaining about that for a decade or more, but that’s the whole problem.  It’s not like this issue has snuck up on us; we’ve just been determined to stick our heads in the sand.
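
To put the cap-and-overage mechanics in concrete terms, here’s a minimal back-of-the-envelope sketch in Python.  The cap level, overage block size, and prices are illustrative assumptions of mine, not AT&T’s published terms; the point is simply how a cap converts heavy usage into incremental revenue.

# Hypothetical usage-cap arithmetic -- the numbers below are illustrative
# assumptions, not AT&T's actual terms.
CAP_GB = 150          # assumed monthly cap
BLOCK_GB = 50         # assumed overage block size
BLOCK_PRICE = 10.00   # assumed price per overage block, in dollars

def monthly_bill(usage_gb, base_price=30.00):
    """Return the total bill for a month's usage under a cap-plus-overage plan."""
    if usage_gb <= CAP_GB:
        return base_price
    overage = usage_gb - CAP_GB
    blocks = -(-overage // BLOCK_GB)   # ceiling division: partial blocks still bill
    return base_price + blocks * BLOCK_PRICE

for usage in (100, 160, 300):
    print(f"{usage} GB -> ${monthly_bill(usage):.2f}")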

Alcatel-Lucent has been pushing its Application Enablement and Open API Program as solutions to the revenue problem; most recently they’ve touted the integration of a Scandinavian carrier’s billing with Angry Birds in-app purchasing.  It’s progress to be sure, but as I’ve noted before, operators can’t monetize the network by selling billing and customer care.  They have to be direct players in higher-layer services, and that requires more than developer exposure of carrier IT assets; it requires the creation of service features in an exposable, manageable context.  That’s the goal of a new-age service layer, something that’s just as overdue as settlement.

 

Google, Juniper, and the Cloud

We’ve got a number of interesting points to end our week, and monetization issues are at the heart of them.  Let’s get moving!

Google is taking some steps that show how hard it is to be a portal player.  First, it’s letting users block sites in search results, and second it’s classifying Gmail messages to make it easier for users to discard what they consider unimportant at best or junk at worst.  Anything that hurts ad revenues anywhere could be argued to be counter to Google’s interest, but the problem is that undisciplined exploitation of any resource tends to discourage users.  Search is hurt by having junk results, with junk being defined either as something that games SEO for unwarranted placement (which Google has been working to eliminate) or as simply something the user doesn’t like.

In the best of all possible worlds, advertising would go only to people who might be receptive, but since it’s impossible to know exactly who might be, the tried-and-true approach has been saturation bombing.  That the bomb-ees are getting tired of the attention is clear, and you can see a similar trend in the TV commercials area.  In my surveys, users cite disgust with commercials nearly as often as they cite disgust with programming as reasons to cut the cord.

The underlying problem here is that we have too many people looking at ad sponsorship to drive their business models and not much actual ad revenue to do the sponsoring with.  As media conduits try to raise revenues by reducing ad prices to attract new players (often by cutting the per-commercial time) they have to spawn more ads, and their revenue growth goals create the same pressure.  So we have people who want their lives ad-sponsored but don’t want to see the ads that are paying for the stuff.  Clearly this isn’t going to work for long.

On the equipment side, Juniper provided analysts, including me, with some more color on their QFabric security integration.  There are two dimensions to this; first that it illustrates in some detail what a “service” is in QFabric terms, and second that it illustrates the exploding complexity of security in a virtualization and cloud environment.  The combination of these things positions QFabric much more directly as an element in a cloud computing strategy, which is how I’ve viewed it from the first.  Since I think anything truly cloud-enabling is important, we’ll look at this a bit here.

In the QFabric pitch, Juniper’s CTO made a point of saying that QFabric offered both virtual networks and “services” as its properties.  Security was such a service, and the way a service is created is by routing a flow through a service “engine”.  Because the flow’s route explicitly intersects with a security process, the flow route is secure without external intervention.  You can see that this sort of engine/service model could easily be applied to other things like application acceleration, though Juniper hasn’t said anything about what QFabric services might follow beyond security.  Interestingly, by making security appliances into QFabric service engines, you effectively virtualize security and increase the number of machines you can secure with a given device.  It’s like the old test of a good compiler: can it compile itself?  An offering to secure virtualized resources has to be a virtualized resource.
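
To make the “service engine” idea a little more tangible, here’s a minimal Python sketch of flow steering through an in-fabric service chain.  The classes and names are my own invention, not anything Juniper has published; the point is only that the flow’s route itself traverses the security process rather than relying on a bolted-on external appliance.

# Illustrative sketch only: models a fabric that routes flows through
# "service engines" (e.g., a firewall) before delivery.  Names are hypothetical.

class ServiceEngine:
    def __init__(self, name, process):
        self.name = name
        self.process = process        # callable: packet -> packet, or None if dropped

    def handle(self, packet):
        return self.process(packet)

def firewall(packet):
    # Drop anything not on the allowed-port list; pass the rest unchanged.
    return packet if packet["dst_port"] in (80, 443) else None

class Fabric:
    def __init__(self):
        self.services = {}            # flow id -> ordered list of ServiceEngines

    def attach_service(self, flow_id, engine):
        self.services.setdefault(flow_id, []).append(engine)

    def forward(self, flow_id, packet):
        # The flow's route *is* the service chain: every packet traverses it.
        for engine in self.services.get(flow_id, []):
            packet = engine.handle(packet)
            if packet is None:
                return None           # blocked in-fabric, no external appliance
        return packet                 # delivered to the egress port

fabric = Fabric()
fabric.attach_service("web-flow", ServiceEngine("fw", firewall))
print(fabric.forward("web-flow", {"dst_port": 443}))   # passes the security engine
print(fabric.forward("web-flow", {"dst_port": 23}))    # dropped by the security engine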

On the complexity-management side, the problem Juniper is solving here is the fact that it’s impossible to build a static-appliance security framework around a virtual or cloud data center.  Even if application-to-server assignment isn’t totally dynamic as it would be in a cloud computing application, it’s still easy to move virtual machines around with management commands.  In fact, the VM tool players are falling all over themselves to make the process of moving VMs easy, which only makes security hard.  All those problems are multiplied in the cloud, where resource assignment is dynamic by definition.

Juniper’s idea is to combine in-fabric security and in-VM security (through its Altor acquisition) to build a truly secure virtual data center.  By adding support for cloud “director” functionality, Juniper will presumably then extend this to the more dynamic resource assignment of the cloud.  As it is, VMs carry their security credentials as metadata, so cloud partners who have Juniper products on their end will be able to interpret and apply the policies even in a public cloud.
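
Here’s a rough illustration of what “VMs carry their security credentials as metadata” could mean in practice.  The descriptor format and field names are assumptions of mine, not Juniper’s or Altor’s actual schema; what matters is that the policy travels with the workload, so the destination can enforce it without manual reconfiguration.

# Hypothetical illustration: a VM descriptor that carries its security policy
# as metadata, so the destination (even a public-cloud partner) can apply it.

vm_descriptor = {
    "name": "billing-app-03",
    "image": "billing-app-v2.qcow2",
    "security_metadata": {               # travels with the VM when it moves
        "zone": "pci",
        "allowed_inbound": [443],
        "allowed_outbound": [1521],
        "inspection": "stateful",
    },
}

def apply_policy_on_arrival(descriptor):
    """What a policy-aware destination might do when the VM lands."""
    policy = descriptor.get("security_metadata")
    if policy is None:
        raise ValueError("refusing to start a VM with no attached policy")
    print(f"placing {descriptor['name']} in zone {policy['zone']}")
    print(f"opening inbound ports {policy['allowed_inbound']}, "
          f"outbound {policy['allowed_outbound']}, "
          f"inspection mode {policy['inspection']}")

apply_policy_on_arrival(vm_descriptor)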

We’re starting to see a glimmer of what security would look like in a cloud-enabled future, or at least how Juniper sees it.  The question now is whether competitors will present their own holistic model of cloud/virtual security, or whether they’ll simply try to counterpunch pieces of this.  A good industry debate on the issues here would be helpful for everyone, but especially for service providers, who need a secure model of cloud services if they ever hope to sell them to anyone but startups and hobbyists.  As I’ll get to in a minute, clouds are really critical for operators’ future monetization plans, which makes them critical for network spending and for the future of broadband services.

It’s good to see that Juniper is building a cloud story here in the security space, and in fact their security story is pretty holistic and strong from a cloud perspective.  That particular aspect of the announcement would have been stronger had the QFabric/PTX symbiosis been played up and had both it and security been given very explicit links to Junos Space.  Yes, it’s hard to make a big launch work, but rather than try to do a big launch made up of five or ten deeply technical pieces (as they did a couple of years ago) they could do a strategy launch and then instantiate it.  Detailed announcements invite defeat in detail, and the cloud is no place to take that risk.

Mobility is an increasing driver for change, and some Google predictions show how profound the changes might be.  Google asserts that as many as half of all retail transactions and two-thirds of retail purchase dollars will be initiated on mobile devices.  A big chunk of this is the presumed shift from credit cards to near-field communications and phones as credit devices, a Google push, and frankly I doubt the number no matter how they arrive at it.  Still, it does appear likely that mobile devices, ubiquitous broadband, and new consumer behavior patterns will create new market opportunities.  That brings us back to the question of broadband policy and monetization.  In the best of all possible worlds, the big ISPs would be offering their own higher-layer services and would thus be encouraged to look at the access connection as a delivery asset they can exploit.  As it is, they see it as a delivery asset only their competitors can exploit.  Mobile services are where the change in that myopic view will have to arise, because that’s the focus of investment and change right now.

The cloud is the operator conception of its future IT infrastructure, the basis for content delivery and feature hosting.  I’m so interested in the cloud because I believe that effective operator exploitation of cloud potential is the only way to avoid commoditized broadband and anemic broadband investment.

 

Tech Revolutions?

Nobody can say that the tech space isn’t moving at frightening speed, and that may be true on the business side as much as with the technology itself.  We’ve got two stories of stunning shifts in the tech business landscape, and while neither is confirmed at this moment, there’s certainly a level of plausibility that can’t be ignored.

The first of the stories, one that’s been percolating below the surface for a bit, is that the “wholesale 4G” hopeful LightSquared is looking to dump its plans to build out a network and instead piggyback on Sprint’s LTE modernization plans.  This particular tune goes back at least six months; I’ve been skeptical from day one about a pure MVNO-host model for LightSquared given the plummeting margins in mobile broadband.  In fact, I told a vendor then that I’d not put much stake in the LightSquared build-out plans.

The obvious question now, presuming the rumor is true, is what this does to NSN.  Given that in my view they never had a shot at this because it was never going to be financially feasible, the answer is “nothing” in an objective sense.  Subjectively it obviously would hurt NSN because it looks like another slip from a company that seems to be slipping all too often.  NSN’s Motorola-pieces acquisition has had its closing delayed again, for example, and the company seems unable to shake a marketing and positioning paralysis that’s almost a legend in the space.

The less obvious question, and the more important one, is whether this is a clear sign that the mobile broadband space is already circling the drain before any significant LTE deployment and before the full wave of tablets hits the market and drives revenue per bit down even further.  If there’s any sanity in the current neutrality flap here in the US (since it’s politics, such sanity would be a happy accident), the core truth is that we’re attempting to prop up with regulations a business model that can’t be sustained at the financial level.  That would suggest the right approach is minimalist: guarantee non-discrimination and stay out of issues like compensation and settlement, premium handling, etc.  Give the industry no exit path but quitting, and they might pull a LightSquared en masse.

At the root of the problem here is the lack of industry foresight into the impact of mobile broadband.  A full year ago I started pushing the concept that mobile broadband, flexible appliances, and consumer behavior were a big interdependent feedback loop (premium blog clients have access to the presentation, “Mobility, Behavior, and the Transformation of Telecom”).  Recent research by comScore shows that smartphones and tablets are already transforming behavior simply because you can use them everywhere you go and not just at home or in the office.  If checking Facebook is your thing, it’s likely that mobile checking will come to dominate simply because you can do it all the time.  Habitual behaviors that can be universally fed by one technology and only episodically by another will migrate to the place where they’re satisfied most of the time.  That means that wireline quickly becomes linked primarily to things that are necessarily anchored, like viewing stuff on an HDTV.  Tablets that size would really jam up the subways, after all.

In other tech news, the ITU has decided firmly to go its own way with an MPLS-Ethernet management standard, an issue it’s been fighting with the IETF over for a year or more.  This is another of those issues that I think is being covered badly and analyzed perhaps even worse.  The core of the dissent here, as operators tell me, is that the IETF wants transport MPLS to be managed as an IP service alone, and the operators want it to be managed at least optionally as an extension of the carrier Ethernet management standards.  Why?  Because you may recall that the whole T-MPLS/MPLS-TP thing was really spawned by a desire to cut off a set of Ethernet enhancements known as either PBB-TE or PBT.  Where MPLS-TP is intended to supplement Ethernet metro infrastructure, it makes sense to manage it using Ethernet standards (hence the ITU Y.1731 standard).  That would provide end-to-end management capability, and it’s not clear to me why anyone thinks that providing E2E management as an option in carrier Ethernet deployments of MPLS is a bad thing.  I guess I don’t understand standards-religious wars.

But then, tech is just a subset of society.

 

“Free” May Not Be!

Politicking over the net neutrality rules continues, with the House holding a hearing on the matter.  It’s pretty hard for the House to overturn an FCC order without enacting legislation, and that legislation isn’t going to get through the Senate or survive a Presidential veto, so the whole thing is clearly an exercise.  The real test for the order will come in the courts, and it’s virtually impossible to say how long that might take to work through.  But the debate shows the depth of idiocy associated with the whole process.

The FCC’s position can be reduced to “consumers lack enough choice in broadband providers to allow them to vote against site blocking with their feet”.  True, but that’s not really the part of the order that most people object to.  You can simply say “no blocking of sites or discrimination in traffic handling based either on the site or the traffic type” and be done with it.  The FCC didn’t do that; instead it took excursions into things like whether offering premium handling was tantamount to a kind of blocking-one-by-exalting-another relativism.  Even the question of whether a type of traffic could be blocked is in my view kind of moot; as long as operators can’t apply different rules in different geographies, no provider could impose unusual handling restrictions without facing immediate competitive disaster.  But the real problem is whether the FCC has any authority at all in this matter, and that’s what the courts will decide.

Meanwhile new developments in the market continue to raise the stakes for operators.  Facebook’s deal with Warner Brothers to stream movies is an example of how many different kinds of players are emerging to treat “the Internet” as a kind of inexhaustible free spectrum to be used to broadcast anything at near-zero cost.  But the “near-zero cost” really means “near-zero price” because the operators are forced to carry the traffic and often with no compensation at all.  Which opens my “technical” (as opposed to jurisdictional) objection to the order.  We need settlement for the Internet, period.  The bill-and-keep-and-laissez-faire-peering model is just too prone to arbitrage and that can kill investment incentives in key parts of the ecosystem.  The Republicans are right in fearing that, but they’re not raising the settlement issue either because it’s not politically popular.
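
To see why bill-and-keep invites arbitrage, consider a deliberately toy piece of arithmetic.  Every number below (flat-rate price, per-gigabyte delivery cost, settlement rate, traffic levels) is an assumption chosen only to show the shape of the problem, not a measured figure.

# Toy economics of bill-and-keep vs. settlement.  Every number here is an
# illustrative assumption, not real operator data.

FLAT_RATE = 40.00            # assumed monthly retail broadband price, dollars
DELIVERY_COST_PER_GB = 0.05  # assumed incremental cost to carry a gigabyte

def access_margin(traffic_gb, settlement_per_gb=0.0):
    """Monthly margin for the access ISP on one subscriber."""
    revenue = FLAT_RATE + settlement_per_gb * traffic_gb
    cost = DELIVERY_COST_PER_GB * traffic_gb
    return revenue - cost

for gb in (50, 300, 800):    # traffic grows as streaming video takes off
    print(f"{gb:4d} GB  bill-and-keep: {access_margin(gb):7.2f}   "
          f"with settlement: {access_margin(gb, settlement_per_gb=0.02):7.2f}")

As the assumed traffic grows, flat-rate revenue stays put while delivery cost climbs; even a token settlement keeps the access margin from vanishing.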

What’s interesting here is that everybody’s pandering to voters’ silly lack of understanding of how the Internet works (and has to be sustained from a business perspective) is heading toward a point where the worst-case solution is the only one left to apply.  You can’t stop usage pricing with regulations, even from Congress.  You can’t order a company to invest unprofitably.  Of all of the options available to deal with the explosion in incrementally free traffic, the worst is charging the user.  We need broadband fat pipes to deliver premium services, and we can’t get them by essentially declaring that there can’t be any premium services (no special handling) and that everyone can send for nothing.  Usage pricing, here we come, unless Congress or the FCC gets smart, and if that were possible they’d have done it already.

The media’s not helping here, nor is the industry.  OFC is the usual bandwidth-is-the-main-course love feast, even though at least one optical component vendor is signaling the industry that forward demand is looking really problematic.  The lack of a way to monetize incremental broadband traffic is an almost-certain fatal disincentive to develop it, and yet we’re prattling on about 100-Gigabit Ethernet as though that were the solution to the problem.  It’s not capacity that’s the problem; it’s making capacity valuable.

In the IT world, an interesting development is that two major computer vendors (HP and Asus) plan to offer something other than Windows on laptops or netbooks.  HP will be making every one of its PCs dual-boot between Windows and WebOS, something that could be a very big move toward popularizing its Palm-acquired mobile OS.  Asus, according to a rumor, will be offering Android and/or MeeGo netbooks.  The decision by some major players to move to a non-Windows OS even as an option could be huge.  On the one hand, it could help small-form-factor PCs stave off tablet competition, but on the other it could be a major problem for Microsoft and could also create market backlash from users who don’t understand that the new “laptops” may not run the apps they’re used to having.  And for the enterprise, securing two OSs in a dual-boot configuration is its own support problem.

All of this testifies to the enormous problem that tablets combined with ubiquitous broadband and cloud computing could create for the market.  If we had, in theory, a completely ubiquitous “cloud” to tap into, then the only thing a user would ever need is an HTML5 browser glued onto a minimalist OS (does this sound like Google Chrome?).  The problem is that what we may be doing is creating another set of industry convulsions whose impetus depends on that inexhaustible broadband virtual spectrum.  If we don’t solve the problem of creating a survivable industry ecosystem here, we risk stranding billions in investment in a computing paradigm that can’t work once the current bandwidth pricing model breaks down.

See?  Free isn’t any more free than freedom.

 

For the Week: March 7th 2011

It’s obvious that the big question this week, politically and economically, will be what happens in Libya.  Politically, the situation poses a kind of double threat.  First, it’s a continuation of a kind of Middle-East-domino problem that might or might not result in a democratic sweep of the region.  Second, the turmoil puts western governments in a quandary, balancing the hope of reducing the loss of life against the risk of effectively entering in on the side of the rebels.  Economically, the problem is the rising cost of oil and its effect on consumer prices (gas and goods) and the recovery.

There really isn’t a supply problem with oil; the Saudis have pumped enough to make up for the Libyan loss.  The issue is speculative purchase of futures contracts, which is what’s driven up oil most of the times it’s jumped in the recent past.  Some curbs on the financial industry (which certainly needs curbing overall) could help the situation more than actions like releasing oil from the strategic reserves, but the administration knows that a credible threat to release reserves could curb speculation and help pricing.  It hasn’t helped so far this morning.

In tech, we’re counting down to the RIM tablet and wondering how competitors will factor the new iPad 2 into their plans for the fall.  The challenge for them all at this point is the sense that it will take yet another generation of Android tablets to catch up, which means that the current generation may be obsolete even before it’s released.  Not only does that hurt sales, it could even discredit a complete product line by stomping on its launch and limiting early interest and market share.  It’s the first announcement that gets the most ink.

Enterprises are also starting to work through the issues of tablet-based collaboration, and interestingly that’s one of the things that RIM is expected to try to exploit.  A tablet is most valuable as a collaborative tool for “corridor warriors”, in what my research identified as “supervisory intervention” applications rather than team activities.  In supervisory collaboration, a worker seeks approval or answers on a particular issue, an issue normally represented as a document or an application screen.  The process demands the supervisory/support person share the document/application context and simultaneously discuss the problem.  Thus, you need voice and data together.  Some tablet vendors and media types have suggested that video collaboration is the answer—tablets have the cameras after all.  The problem is that video takes a lot of capacity, people don’t like random video calls that intrude on their current context, and there’s no evidence that video helps pairwise relationships be more productive.  Voice is the answer, but how exactly do we use collaborative voice with tablets?  RIM’s answer is likely to be by creating a tight link between the tablet and a Blackberry, and that may be a good approach.  We’ve noted this issue in some enterprise comments on the difference between iPhones on AT&T and the same phone on Verizon; the collaborative multi-tasking support is better on the first than on the second, obviously.

In the service provider space, I’m seeing renewed activity on service-layer projects, but so far no conclusive sign of forward progress.  We’ve been working with five operator projects in monetization, and two of the five now look like they’ll actually start doing something in the next three or four months.  The barrier is still the question of how to ensure that assets created to monetize a specific opportunity like content delivery are compatible with the monetization of other opportunities that may or may not be targets of projects at the moment.  The need to repurpose assets across services is clear to the operators, and while it’s becoming clear to at least some vendors (IBM has been pushing this with increasing effectiveness) it’s not universally recognized.

The thing that seems to be catalyzing the service layer is the cloud.  Network operators see cloud computing as a revenue opportunity, and they also realize that cloud-compatible infrastructure is a good platform for OSS/BSS and feature-generating software—even for content delivery.  IBM’s renewed push into the service layer is coming through its Cloud Service Provider Platform, which it explicitly touts as reusable as a service-layer framework in addition to hosting retail or wholesale cloud computing services.  How far this sort of thing gets is hard to predict, though.  It might be that by this fall real projects will be committed, and real money spent.

 

Cloud Futures

The most interesting set of tech developments today relates to cloud computing positioning and services.  At the Enterprise Connect conference, Salesforce and Global Crossing both made cloud announcements, and both had what I see as a common thread: create a SaaS cloud service and build a platform-as-a-service offering around it.  Salesforce did this based on a social-network-integrated customer service platform (Service Cloud 3), and GC did it based on an integrated UC-friendly model that bundles the cloud with SIP trunking.

We don’t need more cloud acronyms (or hype) but there’s some substance here in the trend, at least, and possibly both offerings as well.  Enterprises and SMBs have sharply different appetites for cloud services, with the former preferring IaaS/PaaS services targeted at backup and overflow management and the latter preferring SaaS.  The primary reason for the difference is that most enterprises already run most applications they need internally, and so there’s an issue of tossing one model and embracing another, something that gets assigned a high risk premium in cost-benefit terms.  But UC and social customer service aren’t typically implemented internally at this point, so there’s less pushback.  That could converge the sales models for all business sizes, and not only create a better total market picture but also create broader reference accounts to accelerate deployment.

There were also a number of service provider cloud announcements this week, beyond the GC one, and it’s becoming clear that the providers plan to be major contenders in the cloud space.  IBM and Microsoft are both positioning actively to support provider cloud plans, with the former stepping up their game in what we’ve called the “Symbiotic Cloud”, the model of provider cloud that combines internal IT (OSS/BSS), feature hosting and monetization, and cloud service offerings into one infrastructure.  Obviously this trend, and the fact that GC is already calling its cloud “network-centric”, means that network vendor cloud plans will have to mature in a hurry if they want to be doing something other than holding the coats of the IT giants.

The SaaS model is interesting to operators because it displaces more cost and thus justifies a higher price (that’s also true of PaaS versus IaaS).  Early indications are that operators are most interested in getting into SaaS via partnerships or through services like UC/UCC, where they believe they have a natural market.  Our research consistently shows that network operator cloud services are more credible when presented through a sales force than when presented through a retail portal.  It appears that some of the portal disadvantage could be overcome through effective marketing, but of course “network operator” and “effective marketing” are hardly synonymous even in areas where the operator has some incumbency.  Partnerships thus seem likely to rule here.

Most infrastructure players are not looking to partner in the cloud, largely because it reduces the profit margin.  Where operators have a potential advantage is that their internal rates of return are low, their ROI expectations are more easily met, and thus they can be profitable on a relationship with tighter margins.  Operators can also normally create what’s perhaps the best economy of scale in capital infrastructure and operations of anyone in the market, particularly if they involve their own applications to build up their cloud base.

Economic recovery is going to help the cloud market, I think.  We’re going to see demand grow faster than confidence, and that means that there will be at least an early tendency to take a service-based solution to incremental computing demand rather than to commit to a capital project.  In total revenue, this absolutely will not be the “Year of the Cloud” but it may be the “Year of the Cloud Paradigm” for those who want to sell the services.  Positioning an offering is likely to get a lot harder in 2012.

 

We Try to Position Juniper’s PTX

Juniper made a second major announcement in two weeks, this time its PTX MPLS-optical supercore switch.  The product’s roots probably lie in early interest (“early” meaning the middle of the last decade) by Verizon in a new core architecture for IP networks that would eliminate the transit routing that was common in hierarchical IP cores.  Since then, everyone from startups (remember Corvis?) to modern players like Alcatel-Lucent, Ciena, and Cisco has been announcing some form of optical-ized core.  What makes Juniper different?

Good question, and it’s not easy to answer from the announcement, but I’d say that the differentiator is the chipset.  Junos Express appears to be the same basic chip used in the recently announced QFabric data center switch.  Thus, you could say that the PTX is based on a low-latency MPLS switching architecture that’s more distributed than QFabric.  Given what we perceive as a chipset link between the products, I’m creating a term to describe this: Express Domain.  An “Express Domain” is a network domain that’s built using devices based on the Express chipset.  A PTX network is an Express Domain in the WAN, and QFabric is an Express Domain within a data center.

If you look at the PTX that way, then what Juniper is doing is creating an Express Domain linked by DWDM and likely running (at least initially) in parallel with other lambdas that still carry legacy TDM traffic.  It becomes less about having an optical strategy than about creating a WAN-scale fabric with many of the deterministic features of QFabric.  Over time, operators would find their TDM evolving away and would gradually migrate the residual to TDM-over-packet form, which would then make the core entirely an Express Domain.  The migration would be facilitated by the fact that the latency within an Express Domain is lower (because packet handling can be deterministic, as it is with QFabric) and by the fact that the lower level of jitter would make TDM-over-packet technology easier to make work.  Overall performance of the core would also improve.  In short, we’d have something really good for none of the reasons that have been covered so far in the media.
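
The jitter point deserves a quick worked example.  In circuit emulation, the playout buffer has to be deep enough to ride out the network’s delay variation, and that buffer depth becomes added latency on every emulated circuit; a low-jitter Express Domain therefore translates directly into a tighter TDM-over-packet service.  A minimal sketch, with the jitter figures assumed purely for illustration:

# Rough circuit-emulation buffer arithmetic.  Jitter values are assumed
# for illustration; the relationship, not the numbers, is the point.

def playout_buffer_ms(peak_to_peak_jitter_ms, margin=1.5):
    """Buffer depth needed to ride out delay variation, with some headroom."""
    return peak_to_peak_jitter_ms * margin

for label, jitter_ms in (("low-jitter fabric", 0.5), ("ordinary IP core", 10.0)):
    buffer_ms = playout_buffer_ms(jitter_ms)
    print(f"{label}: ~{jitter_ms} ms jitter -> ~{buffer_ms:.1f} ms of added "
          f"playout delay on every emulated TDM circuit")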

This (if my interpretation is right) is a smart play for Juniper; create an MPLS-based virtual domain that can be mapped to anything from a global core to a data center.  Recall that I noted in the QFabric announcement that Juniper had indicated that QFabrics could be interconnected via IP/MPLS.  Clearly they could be connected with PTXs, and that would create a supercloud and not just a supercore.  What would make it truly revolutionary, of course, would be detailed articulation of cloud-hosting capability.  I think that capability exists, but it’s not showing up at the right level of detail in the positioning so far.  In any event, if you add PTX to QFabric in just the right way, you have a cloud—probably the best cloud you can build in today’s market.

If Juniper exploits the Express Domain concept, then the PTX and QFabric combine to create something that’s top-line valuable to the service providers.  Yes, there are benefits to convergence on packet optical core networks, but those benefits are based on cost alone, and cost management isn’t the major focus of operators right now—monetization is.  You can’t drive down transport cost per bit enough for it to be a compelling benefit in overall service pricing, nor enough to make low-level services like broadband Internet acceptably profitable.  Furthermore, achieving significant capex savings for the operator means lower total sales for the vendor.  That’s the old “cost-management-vanishes-to-a-point” story.  But you can do things at the service layer that were never possible before, drive up the top line, and sell more gear overall rather than less.  Or so I think.  We’ll be asking for clarification on these points, and in our March Netwatcher we’ll report on what we find.

iPad 2 and Beyond

The big news today is Apple’s new iPad announcement, an event whose usual Apple drama was upstaged by a surprise visit by Steve Jobs.  The essence of the announcement was familiar: iPads are making us smarter, healthier, richer, better looking, and so forth, and that’s from the first version.  Now look what’s going to happen!

What is going to happen?  Well, 2011 is the “Year of the Copycats” according to Jobs, but Apple isn’t resting on its laurels.  The iPad 2 is based on a new dual-core chip that’s twice as fast, with nine-times-faster graphics, front- and rear-facing video cameras, a built-in gyro, and a body that’s 33% thinner (thinner than an iPhone 4)—you get the picture.  The new model will source HDMI at up to 1080p, which makes it a logical companion to HDTVs and probably presages more Apple work there.  Network-wise, it’s not breaking any ground yet—still 3G and WiFi and no LTE or WiMAX.  Pricing is the same, starting at about five hundred bucks.  Overall, it’s a major upgrade in performance and a modest improvement in features—the improvement being the dual cameras.

The new iPad 2 will certainly make things harder for the Android guys, particularly those who, like Motorola, have just announced their own tablets.  The current Android lot are at best about equal to the iPad, though most are significantly heavier and thicker, and the new iPad 2 trumps that form factor.  There’s a lot of clever engineering in the gadget, even down to magnetic catches on the cover that are sensed by the device and used to trigger a power-up when the cover is removed.  But you really don’t expect to see a cover demonstration on video at a launch event.  Apple is rising to the challenge of competition, but it’s also showing that even its own dramatically innovative culture can’t create a revolution every year.  The biggest bison can still be dragged down by a large pack of even little wolves.

But in the meantime, we do have a clear trend to follow.  Appliances are going to get lighter and more convenient but also more powerful, with better and better video.  That’s going to make enterprises look even harder at using tablets for worker empowerment, and it’s going to make tablets a more and more attractive way to consume video, making multi-screen strategies all the more important.  And most of all, we’re seeing yet again that the market is in the hands of the consumer device vendors.  Nobody else is making any real progress at being exciting.  Without excitement there’s no engagement with the media.  Without media engagement there’s no ability to market.

In the mobile space, Verizon has decided to eliminate its unlimited-usage iPhone plan in favor of usage pricing, and if anyone thinks that usage pricing isn’t going to be universal for mobile broadband now and wireline broadband soon, they’re delusional.  Already the media is lamenting the death of the “bandwidth fairy” and beating its breast about the impact this will have on consumers and on the Internet.  Hey, I want a free iPad, and a nice Audi A8 for twenty bucks, and I could really use a Nikon D3 with a 70-200 VR lens (just ship it; no need to send a note to see if somebody already sent one because I can use as many as you provide!).  The market’s not made up of wants but of exchanges of goods or services for dollars.  There has to be a willingness to exchange.

AT&T, which has been into usage pricing for mobile broadband for some time, is also becoming a major carrier proponent of cloud services, and announcements are expected from other providers through the spring.  Cloud computing is a perfect space for network operators because they’re experts at providing services with a low ROI, and that means better pricing and faster market uptake.  In fact, it’s a testament to the problems of revenue per bit on broadband access and Internet services that cloud computing is considered a profit opportunity.  Cloudsourcing applications have to be significantly (22-35%) cheaper to be credible.  What makes network operators so interested is that their own cloud infrastructure (for OSS/BSS and feature/content hosting) will create formidable economies of scale if it’s done right.  That makes the operator a cost leader in a cost-driven market.

You have to wonder whether everything technical is going to become either a consumer plaything or a service of a giant telco, simply because we’re losing the ability to communicate with the market.  Jobs, even on medical leave, has more star power than anyone else in tech, maybe more than everyone else combined.

 

Take a Lesson From Cable/Retail

The Internet has proved disruptive to a lot of traditional business models, and possibly none more than the retail model.  Recent numbers from Forrester say that online retail sales will hit nearly $280 billion by 2015, and I think they could easily top $350 billion.  While this is small potatoes in absolute terms, the online model has also changed the pricing and margins of retailers.  Anything that’s expensive-ish and that has a model number is going to be priced online even if the consumer comes into the store to see it first.  That changes the whole way that buying behavior has to be manipulated, or it turns retail storefronts into involuntary live catalogs for people who visit the store but buy on Amazon.

The role of the Internet in buying stuff combines with social media to generate about 80% of all online time spent by consumers, with video making up nearly all that’s left.  People do little or nothing, comparatively speaking, to further their education, manage their finances, improve their health, or any of the other things that broadband proponents say are the reasons why everyone needs better/faster/some broadband.  With the exception of video (which, remember, is only about 20% of online time) none of these applications are bandwidth-intensive.  Mobile video is a bandwidth hog in mobile terms, but a mobile video stream is small potatoes in the spectrum of wireline broadband, where nearly everyone who has broadband at all can get at least 6 Mbps.
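
A quick sanity check on that claim, with the stream bitrates assumed for illustration rather than measured:

# Back-of-the-envelope check: assumed bitrates, not measured ones.
WIRELINE_MBPS = 6.0
streams = {"mobile-quality video": 0.7, "SD web video": 1.5, "HD stream": 4.0}

for name, mbps in streams.items():
    share = mbps / WIRELINE_MBPS * 100
    print(f"{name}: {mbps} Mbps -> {share:.0f}% of a 6 Mbps wireline pipe")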

The question of how much broadband you need has implications beyond public policy.  Vendors would love to visualize the future as one where video-greedy consumers demand more and more capacity while network operators draw on somehow-previously-concealed sources of funding to pay for the stuff.  The fact is that’s not going to happen, of course.  Recently the cable industry offered us some proof of that.  If you cull through the earnings calls and financial reports of cable providers, you find that they, like the telcos, are focused on content monetization and not on carrying video traffic.  The difference is significant; monetization means figuring out how to make money on content, where traffic-carrying is simply providing fatter pipes.  For cable, the difference is whether they utilize DOCSIS 3.0 to provide some new video services or to expand broadband capacity, and they’re voting to do the former.

The fact that all kinds of network operators are looking for monetization beyond bit moving may explain why the big IT vendors like IBM are working to be seen more as a cloud partner to these players than as a cloud service competitor.  Microsoft alone of the big vendors seems focused on going its own way with its Azure cloud offering, and that’s likely because Microsoft is focused on competition from Google.  I’ve been hearing rumors that Oracle has decided against a hosted cloud offering and decided instead to focus on service provider cloud opportunities.

The complexity of the cloud market is shown in the latest IDC numbers, which give IBM the leading spot again.  What’s interesting is that IBM outgrew the x86 commodity server space, in large part because of its strength in mainframe and mini-frame non-x86 products.  In fact, growth in that area doubled the server industry average.  What this shows is that enterprises were telling me the truth when they said that there were really two models of IT evolution: virtualization-centric, based on x86 and Linux, and service-centric, largely based on other OS platforms and using SOA for work distribution.  IBM’s strength could be its ability to harmonize these two worlds, though so far that’s not how they’re positioning themselves.  But then the media doesn’t recognize that the two groupings exist, so what can we expect?

In economic news, Fed chairman Bernanke said that he expected there would be a small but not worrisome rise in inflation, and it does seem as though the basic strategies for economic recovery are working.  Wall Street is also showing it’s less concerned about a major problem with the oil supply, though obviously oil prices are up on the risk so far.  It’s important to note that oil, like nearly every valuable commodity, is traded.  That means that speculative buying of oil contracts drives up prices even though none of those speculators actually intends to take delivery on oil, and thus there’s no actual impact on supply or demand.  They’re betting on future real price increases at the well-head or on more demand, and we pay for the profits on their bets.  It’s an example of how financial markets influence the real world, and sadly there’s more of that kind of influence today than there is of cases where the real world influences financial markets.

Monday, Monday

The weekend brought more disorder to the Middle East, particularly Libya, but while the initial turmoil there had knocked stock prices down a bit, the decline has not been alarming and it was reversed on Friday.  Today futures and the European exchanges both suggest an up market again.  Even cooler-than-expected growth in US consumer spending isn’t hurting, and some suggest that Buffett’s bullish letter to investors may be the cause.

In the tech world, Cisco’s stock-price woes continue; the company has been largely flat since its earnings call while competitors Alcatel-Lucent and Juniper have been on a bit of a tear.  Fundamentals aren’t much of a motivation for stock movement these days, but it is clear that investors in the main believe that the latter two stocks have a potential for an upside and Cisco doesn’t have that same potential.  Objectively, I think that’s all true.  Cisco needs to work through some very real product issues as well as redefine its internal sales-driven (as opposed to “value-driven”) culture.  Alcatel-Lucent and Juniper both need to learn how to sing better, but both have made what could be very significant product announcements in the last couple of weeks.

OK, Cisco is in the dog house for now, but I still have to point out in fairness that the company could largely eliminate its problems in a stroke with some lightweight M&A and some heavyweight repositioning and strategizing.  The service layer, which means the cloud-to-network binding for both enterprises and service providers, is the sweet spot of the future market.  Own it and you can hope to pull through your solutions en masse.  It’s still open territory.

There may be cloud architecture competition emerging from new quarters.  F5 today announced it had worked with IBM to develop a reference architecture for the cloud.  The architecture clearly covers the creation of private clouds based on virtualization, and F5 promises that it will be extended to envelop public cloud components to hybridize them with private clouds.  We see no reason why the architecture (which looks much like Eucalyptus, and that’s no accident according to F5) can’t be used for public cloud applications, including service provider clouds.  IBM has specific aspirations in the service provider space, and the reference architecture may be a step in helping prospective SP clients build cloud services that can then easily hybridize with enterprises.  It seems to us that the approach would also support SOA applications, but that’s not a specific part of the release.

Staying in the cloud, Verizon is planning to offer UCaaS, hoping to capture a share of business buyers who want unified communications and collaboration that includes users on mobile devices.  Generally, businesses embrace the notion of service-based pricing as opposed to building their own solutions because they like the cash flow better and because they may fear making a capital investment in a space that’s undergoing major change.  However, carriers have for years lost market share with hosted communications options relating to voice services, and it seems to me that this offering would be all too easy for OTT giants like Google to counter if they feel like getting into the space.

Moving to consumer social networks, JP Morgan says it’s going to take a stake in Twitter, and speculation is that this will happen by buying out some existing investors.  The deal is said to value Twitter at over $4 billion, and it’s the sort of thing that already has the SEC concerned that private equity is circumventing the protections created by public corporation status while keeping the companies private in name.  I’ve got major reservations about any strategy that has the effect of empowering the “professional” investors and not the general public, which this would surely seem to do.  Further, I wonder whether we’re not creating another opportunity for bubbles by creating a whole new set of exit strategies; companies don’t sell out, they don’t go public, but they sell pieces off privately to pay off early investors.  How do we avoid collapse when eventually the public has to bail out the last of the “private” investors like JP Morgan?

The murky regulatory area isn’t getting less murky.  Republicans have recently signaled that they’re not prepared to compromise on their rejection of any sort of net neutrality principles. While that doesn’t mean there won’t be any (Democrats can block any attempts to un-fund or weaken the FCC’s position here), it does mean that if the courts throw out the FCC’s latest order (which I think is likely) then there’s no option to create comparable rules through legislation.  That would mean market forces would decide what happens, always a risk but perhaps not as great a risk as bad explicit policy.  The current FCC order isn’t bad in my view, but I think there’s less than a 30% chance it will stand.

Another semi-regulatory issue is raised by Comcast’s announcement that it would not be offering paid streaming video service to non-subscribers, something at least one satellite TV rival says it’s preparing to do.  That may raise an issue with regulators who think that Comcast must make at least NBCU content available to competitors on the same basis as it offers that content internally.  Does not offering separate streaming video satisfy that condition?  Comcast may have another reason to appeal the FCC’s order—which is already a target of appeal by other players.  The Comcast/Level 3 dispute may even join the parade here!

Tech, overall, is in a bit of a state of flux, which may be why it’s off today when the Dow is up.  Good economic conditions overall don’t guarantee tech company success these days, and since bad economic conditions guarantee failure in most tech sectors, the industry may be headed for some whipsawing as investors try to price out the current muddy trends.