Weekly Economic Update

We’re coming into the heart of earnings season and into a new round of economic numbers this week, which should give us a better idea of where the economy might be headed.  So far the recovery looks to be on track, even though employment is lagging, in the US in particular.  But there’s still the question of whether consumers are really putting their hearts into the holiday period or simply saving in one place to spend in another, with less overall spending growth as a result.

The mortgage situation is a troubling factor.  The flood of foreclosures encouraged a flood of shortcuts in processing, and it now appears that the required process was violated.  Since that process is a statutory prerequisite for foreclosing, a problem here creates a cascade effect, and some of the lower-tier impacts could hurt.

The top of the pyramid isn’t bad; foreclosures might be limited or halted for the time it takes to get the paperwork legally straight.  That would delay the entry of homes onto the market, keep people in their homes longer, and otherwise do fairly good things.  But for homes already foreclosed, it clouds the title the banks can convey to purchasers.  That means title insurance may be impossible, making mortgages difficult.  That could slow the housing process down, given that foreclosed homes make up nearly a quarter of current-period sales in some areas.

Another question is whether the combination of the credit crunch and stalled spending plans might be reducing liquidity in the economy overall, thus contributing to what’s called “deflation”, an effective decrease in the cost of goods and services caused by the appreciation of money.  Deflation typically stagnates economic growth, in part by encouraging consumers to defer spending to a future time when things will be even cheaper.

If we (somewhat simplistically) say that inflation is excess liquidity in the economy and deflation is insufficient liquidity, then the classic mechanism to fix deflation is to do something inflationary, like reducing the Fed rate to increase the money in circulation.  When that’s not possible because pre-existing reductions have taken the rate to near zero, the next step is “quantitative easing”, in which the Fed prints money (in effect) and uses it to buy debt, usually Treasury bonds.  That gets new money into circulation and, hopefully, increases overall liquidity.

Some of today’s liquidity problem is likely created by cautious consumer spending and tight credit, and the flight of investment dollars offshore to developing markets with higher returns is another factor.  All of these factors would ease with a resumption of growth, but that’s the classic Catch-22, and it’s what the Fed hopes to break with additional asset purchases.  This is a good time to act, I think, because it corresponds with the holiday period, when there’d normally be a bit of an uptick in consumer spending.  But it’s clear the Fed isn’t 100% sure how much easing, if any, is needed, or it would have acted already to get ahead of the curve.

There is some information to suggest that businesses may be holding to their 2010 IT spending plans, but the 4Q pattern will tell the tale, since there’s a larger-than-normal unspent budget excess coming out of 3Q according to the survey data we’ve collected so far (which isn’t complete, so it’s suspect).  That excess will likely be released if economic trends continue to show some improvement, and it will also influence budget planning for next year.  In short, if we win in 4Q we’ll win in 2011 overall.  This week’s data may tell the tale.

A New Case for Agility

It’s always fascinating to listen to network operators and even large enterprises talk about their infrastructure projects, and I’m just starting to analyze the first round of our fall strategy survey so I’m getting that chance on a large scale.  It’s too early to say how everything is going to come out, but one thing that does strike me at this point is the discrepancy between how real network-builders see their networks and how vendors see them.

In the service provider space, the classic vision of networks is the hierarchy—access to aggregation/metro to core.  The goal is supposed to be the creation of uniform connectivity and good bandwidth economy of scale, and the practice dates from the earliest days of packet networking.  It’s also nearly always at odds with what’s going on in the real world.  Today, hierarchy is being replaced with delivery.  It would be safe to say that were video content the only traffic source (or even the overwhelming majority of traffic) the Internet would look more like a CDN than a hierarchical network; all traffic would be user-to-cache and the only “core” traffic would be to populate caches.

In the enterprise space, people are starting to realize that the applications that consume a mass of incremental bandwidth aren’t universally distributed through the business.  Enterprises tell us that 73% of all their data center traffic and 81% of their collaboration traffic moves less than 10 miles.  The data center network is growing very fast as inter-process and storage traffic multiply, and the LAN traffic in major headquarters facilities is growing nearly as fast, but branch traffic is growing at a much slower rate.  You need only reflect on what happens in a local sales office or branch bank to understand why.  Most remote-office traffic is transactional and thus doesn’t expand with anything but business activity.  Who does a bank teller collaborate with if not the teller in the next station, who’s hardly a communications destination?  Who does a real estate office manager or manufacturers’ rep telepresence with?

It’s never a good idea to be disconnected from market reality.  We can’t build optimum networks for anyone without understanding what their networks are really doing, and it’s not a problem confined to the network space.  Enterprises tell us that vendors are proposing data mining benefits across an employee population whose job activity doesn’t involve any of the data they’re proposing to mine.  Providers tell us that vendors are suggesting operations cost savings that exceed their total operations budgets because they don’t understand how much is really spent on operations, or even what the word means to the network operators themselves, or what new services will do to the space overall.

The appliance players like Apple, Google (via Android), Microsoft, and the host of tablet and phone vendors are creating a more responsive consumer playground to cater to instantaneous trends, or to launch those trends where possible.  With a low-inertia alternative developing, we could end up with traditional networking and even IT simply getting out-danced.  You can see already that metro networking is becoming Ethernet networking.  Alcatel-Lucent’s recent super-switch shows that vendors realize that transport/connection is getting pushed to lower OSI layers, but do they realize that differentiation/innovation is harder to deliver there?  The more we want to dazzle the consumer, the more we need to create new in-network value, the more we need to examine a new conception of what a network is and what it contains.

Everything Old is Even Older

The rumor that AOL and some private equity firms have been in talks with Yahoo regarding an acquisition appears to have originated with the WSJ, but some insiders tell us that it’s true.  They also say there are still some significant points of dispute on the proposed deal, and that the odds against success are still better than even.

At one level, something like this is inevitable.  AOL and Yahoo are both brands past their prime, victims of change (in the first case) and of competition and complacency (in the second).  We think that’s the big issue here: we’ve got a marriage of inconvenience, a union being proposed where neither party brings really strong assets to the table.  AOL died in all but name when broadband replaced dial-up, and Yahoo is dying not only from Google but also from the fact that more users are abandoning search in favor of social networks.

The big revolution online is yet to come.  Just as we didn’t want to be informed by the Internet when there was an entertainment alternative, so we don’t want to be social only while sitting in front of our computers.  Appliances like iPhones and iPads are now driving the bus for the Internet’s future.  Applications not only anonymize information, they anonymize the Net itself.  You push a button and you get your answer; magic might as easily deliver it as search.  We’re mashing social networks into something that increasingly looks like SMS.  In the drive to socially connect, we’re pulling everything that could be differentiable out of the process of being online.

Social networking and apps are their own worst enemies; they’re creating a world where there’s no room for billboards, and they’re funded by the expectations of those who want those billboards in everyone’s line of sight.  This conflict will self-limit the whole social network process, and so invalidate anyone’s deal for Facebook.  It will also reshape the market so thoroughly that older brands have no meaning, making the Yahoo/AOL alliance kind of like ignoring the lifeboat as the Titanic goes down and grabbing onto another drowning victim instead.  All these guys need to examine the future more carefully, because the present is becoming the past.

Sorry, Microsoft

Microsoft has launched its Windows Phone 7 initiative, and from what’s been revealed so far and how operators have reacted in conversations with us, the new mobile strategy is fatally flawed.  In fact, unless Microsoft makes truly radical changes or has some literally unprecedented success, it’s probably the end of Microsoft in the mobile space.

With Phone 7, operators have only limited ability to create their own services or their own “brand” with the handsets they have to subsidize.  Microsoft justifies this with the notion that the user experience is seamless everywhere, but that’s not what operators want—they want an experience that differentiates them.  Even Apple was smart enough to realize that they couldn’t tell operators that they were “partnering” without providing some sort of unique service-to-phone tie.  Alcatel-Lucent recognized that a seamless experience had to start with a mobile operator ecosystem, largely because without it there would be great resistance to any kind of roaming data plan, or any cross-operator feature set.  They thus established their Open API program as a federation.

Federation is what Microsoft needs to be looking at; the operators want to build a global ecosystem of national or even regional players.  Microsoft had a chance to bring something tangible to the table in that space, something broader and more powerful than even Alcatel-Lucent proposed.  Instead, they’ve extended their past CSF strategies into their present mobile offering, and compromised it fatally.

Is Facebook Showing Us Something with Groups?

With its new implementation of Groups, Facebook has revamped its way of mapping relationships into something multi-dimensional instead of the classical “star” configuration.  At least they’ve offered the option of doing that; whether users will bother is another matter.  What’s more interesting to us is what drove the change in the first place, what it says about social behavior online, and the services that might be based on it.

There are always a certain number of people on social networks who measure themselves by the number of “friends” they have.  On business-based LinkedIn, for example, there are whole groups of users who do nothing but race to establish tens of thousands of contacts.  The problem with this approach is that it tosses out the whole notion of a relationship in a social sense.  Studies show that people don’t have that many real “friends” or “contacts”.  About 80% of the average person’s social interactions take place in a group of fewer than 100 people.  In real life as in cyberspace, we have to balance how many such relationships we create against the difficulty of sustaining them.

Social communications has to reflect the value we place on each individual, but social networks have to build community mass.  We can’t use the latter principle to establish frameworks to support the former; social communications has to be personal.  Will you ask your whole community of Facebook friends for advice on a car?  Perhaps you’d ask the question, but you know that a) most wouldn’t respond and b) you wouldn’t weigh the responses equally.  How we watch television, or buy cars, or plan projects may all involve social interactions and thus appear to map to social networks, but it’s not a simple, direct matter of building a flat community that centers on each of us.  Groups “friend” other groups, they have fuzzy boundaries, and they interact through filters.  In short, they’re not Facebook groups, even now, and we’re entering a time when the difference between real social groups and online groups will matter a lot to us.

Wireline social networks are free and relatively controllable, but that’s not true for mobile.  Constant tweets or status updates are a lot more distracting because I’m living when I’m mobile and I’m sitting when I’m on wireline broadband.  Mobile broadband is going to change our notion of groups and friends from our current Facebook simplicity because it’s going to force us to decide between living our own lives and paying with airtime and distractions for how others want us to think they’re living theirs.  If you want to look for a truth that Microsoft could exploit to get into the mobile market, look there.

Will Cisco’s Umi Break Networks, Policies, or Both?

Cisco released its expected home videoconference solution with the somewhat cutesy name of Umi, which to make things worse is supposed to have a horizontal accent line over the “U” to indicate a “you” pronunciation.  Whatever the spelling and character set, it’s potentially a significant product.  Umi brings HD videoconferencing to the home TV, and while it doesn’t have some of the social/chat features that Cisco promised, it could still be a game-changer in a number of ways.

Free Internet video calling is already available from a number of sources in PC-to-PC form, but Umi promises a friendlier form, from the living room and on a big screen.  If adoption is what Cisco hopes, it could popularize video calling and generate a ton of new traffic.  For a router vendor that already has a big market share, organic growth of that sort is a good thing—maybe.

The “maybe” here is that it’s very possible that a strong showing for video calling in any form could push operators over the edge into metered usage pricing, which would be a bad thing for router vendors, Internet users, and frankly just about everyone.  There are many who believe that it’s inevitable (we’re among them) but extravagant video growth would certainly hasten the day, and in particular it could push a pricing change as early as 2011 for some markets, particularly the cable MSOs.  Because these guys have constrained upstream capacity, applications like video calling that source as much as they sink, bit-wise, are particularly challenging.  It could also polarize the current public policy debates on net neutrality, mixing billing/cost issues with neutral carriage issues.  It could be a destructive mix, and we’re likely to see the impact sooner rather than later.

More Color on Alcatel-Lucent’s Strategy

Alcatel-Lucent had an invitation event for industry analysts yesterday, and since the group was small relative to normal events there was a good opportunity for discussion and engagement.  The goal was to give us an idea of where Alcatel-Lucent was going in the near term and in a more strategic sense, and I think they accomplished the goal overall.

It’s clear that Alcatel-Lucent is still having a bit of an identity crisis—several, in fact.  They’re still apologizing for the aftermath of the merger, which looks to be finally accomplished in fact and not just in name.  They’re also having a bit of a confidence crisis, even though their articulation is strong and their strategic credibility numbers lead the network equipment vendor space by a pretty decent margin.  They’ve been battered a bit by Wall Street and by the internecine struggles of the past, and they kind of need a hug.

In a tangible sense, the big news out of the event was that Alcatel-Lucent has a much broader capability set in Open API than was first apparent.  Yes, the program is linked to applications and developers and the smartphone universe, but it’s really more than that.  Open API is a federation engine that absorbs multiple APIs, orchestrates unions, and exposes the results.  It could be used to federate CDNs (which is something Alcatel-Lucent says it’s working on, though they didn’t say if the Open API was part of the work), cloud computing, and even multiprovider service provisioning of the type that TMF/IPSF has been involved in.  How far they’ll take this capability probably depends on operator traction, but watch the space for some action later this year as a possible signal.
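For readers who think in code, the federation idea can be sketched abstractly: a layer that absorbs multiple member APIs, fans requests out to them, and exposes the union of their results as one interface.  This is purely an illustrative sketch of the concept; every name and member API here is hypothetical, and none of it reflects Alcatel-Lucent’s actual Open API interface.

```python
# Hypothetical sketch of a federation engine: it "absorbs" provider
# APIs, orchestrates a union of their results, and exposes one call.
class FederationEngine:
    def __init__(self):
        self.providers = {}  # federation-wide name -> callable provider API

    def absorb(self, name, api):
        """Register a member API under a federation-wide name."""
        self.providers[name] = api

    def expose(self, request):
        """Fan the request out to every absorbed API and union the results."""
        results = {}
        for name, api in self.providers.items():
            try:
                results[name] = api(request)
            except Exception as exc:  # one failing member shouldn't break the federation
                results[name] = {"error": str(exc)}
        return results

# Two hypothetical federation members, e.g. regional CDN operators.
engine = FederationEngine()
engine.absorb("cdn_eu", lambda req: {"cached": True, "edge": "paris"})
engine.absorb("cdn_us", lambda req: {"cached": False, "edge": "dallas"})

federated = engine.expose({"content": "video-123"})
```

The point of the sketch is the shape, not the detail: each member keeps its own API, and the federation layer is where cross-operator results get composed, which is the kind of multi-provider orchestration (CDN federation, cloud, TMF/IPSF-style provisioning) described above.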

It’s also clear that they’re betting heavily on LTE and still doubling down on IMS, which is logical given their LTE focus.  I still think there are a few too many IMS references; yes, we know they have it, and that operators will leverage it if they deploy it.  We need to know what else they’ll have in the way of enablers for their Open API to expose.

The Alcatel-Lucent challenge, in fact, is to try to rise above legacy, including IMS, without turning their back on it.  Part of the secret of Alcatel-Lucent’s high strategic credibility is their broad engagement.  They can’t sustain their whole portfolio forever, but they need to exploit the parts of it that continue to involve them in the broad strategic sweep of the service provider space.  At the same time, they have to stop making every application look like IMS in a brown paper bag, or every benefit come down to offering QoS.  The future is built on the past and the present, but that doesn’t mean the three march in lockstep.

What Verizon’s Datacenter Spending Portends

Verizon said it would be making a major investment in data centers for, among other things, cloud computing.  The result will be an addition of space for over 5,000 servers and an expansion to about 200 data centers worldwide, including sites in Australia and the UK.  While “the cloud” gets a lot of play on this deal, it’s really more about enhanced services and a shift in Verizon’s profit model from selling bits to selling experiences.  A couple of decades ago, new services meant new network equipment.  These days, it means servers and software, and it’s being driven today by a rush to create a meaningful strategy for content delivery and monetization.  While that’s the hottest issue in the market, it’s an example of the broader issue of generating revenue in an age where transport and connection matter a lot less.

The next generation of carrier “services” will be experiences.  The foundation of experiences is software, running on a connected set of data centers—a cloud.  But media hype about the value of the cloud has been several miles wide and a lot less than an inch deep, and most operators would echo the Pacnet exec quoted in a recent article; he’s glad he’s not the only one working through the fog of the cloud.  For operators, in particular, the imprecision of “cloud” is a challenge because they want an architecture on which to build their infrastructure plans.  They had that for networks, and now they need it for clouds.  All those data centers need to be filled with gear, and how that will work and how it will make money are now the critical questions for operators and vendors alike.

HP’s New CEO: Best Available but Not Perfect

HP picked former SAP CEO Leo Apotheker as its new CEO, a move that surprised many in the industry but that doesn’t particularly surprise us.  The criticism of Apotheker stems largely from the fact that his tenure at SAP was hardly stellar; the company lost market share to Oracle throughout and he was unable to stem the tide.  But truth be told, the problems at SAP were more related to the conservatism of SAP’s marketing processes and board, something that Apotheker had little chance of changing.

This is a software age in IT, and also an age where software and hardware form a one-stop ecosystem.  Who should HP have picked?  A hardware guy from within?  That’s bad on two counts, given companies’ tendency for internecine warfare and given that hardware isn’t where it’s at.  Who has succeeded against Oracle?  Nobody.  Apply both those truths and you have no candidates at all.  Apotheker actually understands the role of software well, and understands the relationship between software and all of the competitive hardware platforms.  He’s not only the best choice they had, in our view, but a good choice in his own right.

He’s not a perfect choice.  The missing skill is networking, the understanding of which is critical to position the hardware/software ecosystem for virtualization in the data center and beyond it, in the cloud.  We think that networking is the card that HP will need to play against Oracle and against IBM as well.  True, Apotheker has no evil preconceptions in the space, but he doesn’t have the broad grasp of the current market issues, or at least hasn’t demonstrated that grasp.  Given that Cisco’s entry into the data center IT space has already started to force competitors like IBM to give more thought to a networking mission, and given that HP has networking products already, exploiting networking is not only essential, it’s something that has to be done in the short term.  Can Apotheker do that?  We don’t know at this point, and if he can’t do it or find someone to delegate it to, then HP will face some challenges.

Net Neutrality–Again?

Rep. Waxman, the House sponsor of an attempt to pass legislation to direct the FCC’s decisions on net neutrality, has withdrawn the bill for lack of support.  This ends, at least for the moment, another of Congress’ attempts to create telecom policy through explicit legislation.  It’s not the first time bills have been dropped; since 1996 virtually every attempt to change policy has died without coming to a vote.

I still believe that Title II classification with reasonable wholesale rates (the Canadian model, for example) is workable, and in fact might be more logical given the range of services that are likely to migrate to IP without being part of the Internet.  IPTV and carrier voice, as well as enterprise services, fit that model.  The FCC has to weave a complicated ruling to protect both the Internet and the business model for IP-converged services.  Title II is the best way to do that.