Will Cisco’s Umi Break Networks, Policies, or Both?

Cisco released its expected home videoconference solution with the somewhat cutesy name of Umi, which to make things worse is supposed to have a horizontal accent line (a macron) over the “u” to indicate a “you” pronunciation.  Whatever the spelling and character set, it’s potentially a significant product.  Umi brings HD videoconferencing to the home TV, and while it doesn’t have some of the social/chat features that Cisco promised, it could still be a game-changer in a number of ways.

Free Internet video calling is already available from a number of sources in PC-to-PC form, but Umi promises a friendlier form, from the living room and on a big screen.  If adoption meets Cisco’s hopes, it could popularize video calling and generate a ton of new traffic.  For a router vendor that already has a big market share, organic growth of that sort is a good thing—maybe.

The “maybe” here is that it’s very possible that a strong showing for video calling in any form could push operators over the edge into metered usage pricing, which would be a bad thing for router vendors, Internet users, and frankly just about everyone.  There are many who believe that it’s inevitable (we’re among them) but extravagant video growth would certainly hasten the day, and in particular it could push a pricing change as early as 2011 for some markets, particularly the cable MSOs.  Because these guys have constrained upstream capacity, applications like video calling that source as much as they sink, bit-wise, are particularly challenging.  It could also polarize the current public policy debates on net neutrality, mixing billing/cost issues with neutral carriage issues.  It could be a destructive mix, and we’re likely to see the impact sooner rather than later.

More Color on Alcatel-Lucent’s Strategy

Alcatel-Lucent had an invitation event for industry analysts yesterday, and since the group was small relative to normal events there was a good opportunity for discussion and engagement.  The goal was to give us an idea of where Alcatel-Lucent was going in the near term and in a more strategic sense, and I think they accomplished the goal overall.

It’s clear that Alcatel-Lucent is still having a bit of an identity crisis—several, in fact.  They’re still apologizing for the aftermath of the merger, which looks to be finally accomplished in fact and not just in name.  They’re also having a bit of a confidence crisis, even though their articulation is strong and their strategic credibility numbers lead the network equipment vendor space by a pretty decent margin.  They’ve been battered a bit by Wall Street and by the internecine struggles of the past, and they kind of need a hug.

In a tangible sense, the big news out of the event was that Alcatel-Lucent has a much broader capability set in Open API than was first apparent.  Yes, the program is linked to applications and developers and the smartphone universe, but it’s really more than that.  Open API is a federation engine that absorbs multiple APIs, orchestrates unions, and exposes the results.  It could be used to federate CDNs (which is something Alcatel-Lucent says it’s working on, though they didn’t say if the Open API was part of the work), cloud computing, and even multiprovider service provisioning of the type that TMF/IPSF has been involved in.  How far they’ll take this capability probably depends on operator traction, but watch the space for some action later this year as a possible signal.

It’s also clear that they’re betting heavily on LTE and still doubling down on IMS, which is logical given their LTE focus.  I still think there are a few too many IMS references; yes, we know they have it, and we know operators will leverage it if they deploy it.  What we need to know is what else they will have in the way of enablers for their Open API to expose.

The Alcatel-Lucent challenge, in fact, is to rise above legacy, including IMS, without turning their back on it.  Part of the secret of Alcatel-Lucent’s high strategic credibility is their broad engagement.  They can’t sustain their whole portfolio forever, but they need to exploit the parts of it that continue to involve them in the broad strategic sweep of the service provider space.  At the same time, they have to stop making every application look like IMS in a brown paper bag, or every benefit come down to offering QoS.  The future is built on the past, and present, but that doesn’t mean the three march in lockstep.

What Verizon’s Datacenter Spending Portends

Verizon said it would be making a major investment in data centers for, among other things, cloud computing.  The result will be added space for more than 5,000 servers and an expansion to about 200 data centers worldwide, including sites in Australia and the UK.  While “the cloud” gets a lot of play on this deal, it’s really more about enhanced services and a shift in Verizon’s profit model from selling bits to selling experiences.  A couple of decades ago, new services meant new network equipment.  These days, it means servers and software, and it’s being driven today by a rush to create a meaningful strategy for content delivery and monetization.  While that’s the hottest issue in the market, it’s an example of the broader issue of generating revenue in an age where transport and connection matter a lot less.

The next generation of carrier “services” will be experiences.  The foundation of experiences is software, running on a connected set of data centers—a cloud.  But media hype about the value of the cloud has been several miles wide and a lot less than an inch deep, and most operators would echo the Pacnet exec quoted in a recent article; he’s glad he’s not the only one working through the fog of the cloud.  For operators, in particular, the imprecision of “cloud” is a challenge because they want an architecture on which to build their infrastructure plans.  They had that for networks, and now they need it for clouds.  All those data centers need to be filled with gear, and how that will work and how it will make money are now the critical questions for operators—and vendors.

HP’s New CEO: Best Available but Not Perfect

HP picked former SAP CEO Leo Apotheker as its new CEO, a move that surprised many in the industry but that doesn’t particularly surprise us.  The criticism of Apotheker stems largely from the fact that his tenure at SAP was hardly stellar; the company lost market share to Oracle throughout and he was unable to stem the tide.  But truth be told, the problems at SAP were more related to the conservatism of SAP’s marketing processes and board, something that Apotheker had little chance of changing.

This is a software age in IT, and also an age where software and hardware form a one-stop ecosystem.  Who should HP have picked?  A hardware guy from within?  That’s bad on two counts, given companies’ tendency for internecine warfare and given that hardware isn’t where it’s at.  Who has succeeded against Oracle?  Nobody.  Apply both those truths and you have no candidates at all.  Apotheker actually understands the role of software well, and understands the relationship between software and all of the competitive hardware platforms.  He’s the best choice they had, in our view, and a good choice besides.

He’s not a perfect choice.  The missing skill is networking, the understanding of which is critical to position the hardware/software ecosystem for virtualization in the data center and beyond it, in the cloud.  We think that networking is the card that HP will need to play against Oracle and against IBM as well.  True, Apotheker has no evil preconceptions in the space, but he doesn’t have a broad grasp of the current market issues, or at least hasn’t demonstrated that grasp.  Given that Cisco’s entry into the data center IT space has already started to force competitors like IBM to give more thought to a networking mission, and given that HP has networking products already, exploiting networking is not only essential, it’s something that has to be done in the short term.  Can Apotheker do that?  We don’t know at this point, and if he can’t do it or find someone to delegate it to, then HP will face some challenges.

Net Neutrality–Again?

Rep. Waxman, the House sponsor of an attempt to pass legislation to direct the FCC’s decisions on net neutrality, has withdrawn the bill for lack of support.  This ends, at least for the moment, another of Congress’ attempts to create telecom policy through explicit legislation.  It’s not the first time bills have been dropped; since 1996 virtually every attempt to change policy has died without coming to a vote.

I still believe that Title II classification with reasonable wholesale rates (the Canadian model, for example) is workable, and in fact might be more logical given the range of services that are likely to migrate to IP without being part of the Internet.  IPTV and carrier voice, as well as enterprise services, fit that model.  The FCC has to weave a complicated ruling to protect both the Internet and the business model for IP-converged services.  Title II is the best way to do that.

Welcome to Our New Blog Format!

Welcome to CIMI Corporation’s new blog.  This is the format for all of our future blogging, and it’s the same structure as we use on our TMT Advisor premium blog-based information service.

This blog is visible to everyone, but unlike our previous blog, this one will also support registration of users and some commenting.  We encourage you to comment with care, because commenting will make your username and information at least somewhat visible.  If you want to comment despite the visibility issues, you’ll need to register.  To do that, send an email to inquiries@cimicorp.com and ask for registration.  You must provide your name, your email address (not a Hotmail, Gmail, or similar free-mail address), and a company name and company URL to register.  You’ll be notified when you’re registered, or if we deny registration for any reason.