Does Apple’s Lion Strategy Threaten More Disintermediation?

Apple’s moves to converge its iOS and MacOS platforms over time and to create a unified developer environment across its disparate devices are smart; they respond to the reality of the market and competitive environment. The question is how far Apple will go and what impact the effort will have on the appliance space, the developer community, and even the service provider market.

The iPhone launched the smartphone revolution, which in turn launched the applet/widget revolution, which in turn raises the question of whether device-resident intelligence will play a commanding role in shaping what the buyer/user perceives as “services”. The iPad has had a similarly transforming effect in the tablet space, and competitors have already established themselves in smartphones, primarily via Android-based phones in the broad market and via RIM building on its enterprise incumbency. Competition is also increasing from both sources in the tablet space, with pretty much the same cast of competitive characters.

What creates Apple’s platform dilemma is that broader installed bases beget greater developer opportunity, and thus a larger application community. As I’ve noted before, this was one of the factors behind Apple’s loss of its early lead in the PC market to the IBM-compatibles. An open framework attracts support because it is open, but it also reduces the originator’s ability to control and monetize its own market, which is why Apple has traditionally rejected such an open approach. But marrying its Mac operating system with the OS used for its appliances, and harmonizing the development environment across both, would increase Apple’s developer mass.

The challenge is that it will also almost certainly cause Google to prioritize Android as a tablet OS, thus exacerbating the competition between these two industry giants.  The further the Android OS goes in terms of supported hardware, the harder it will be for Apple to sustain itself as an appliance walled garden.  Some gestures of openness exist through the developer program, but Apple’s long-standing feud with Adobe over Flash illustrates where walled-garden thinking can take you and how it can create a lot of gratuitous enemies.

On the service provider side, the competition between Apple and Google (through its Android proxies) creates yet another path to disintermediation.  Ceding service-creation innovation to OTT players was a problem in wireline, and ceding it to smart device vendors and developers in the wireless space only makes things worse.  The so-far-ill-fated Microsoft phone strategy has been toying with hosted services, but probably more as a means of getting Microsoft into the OTT feature business than as a means of empowering operators.  Can operators respond with an approach of their own, and in time?  Their service-layer revenue future may depend on it.

Beware of Free

The Facebook scandal, where popular application providers shared private data without user permission, is only the latest in a series of targeting-related breaches of privacy and violations of “policy”. The FTC has been of two minds regarding the issue, with some there believing regulation is necessary to protect consumers and others believing the industry can regulate itself. The current direction appears to be toward self-regulation, despite the mounting evidence that the industry is unwilling and unable to do that. What’s going on here, systemically?

First, to understand targeting you have to understand motivation. The goal here is NOT to get the right ad in front of the right person; it’s to get ads in front of fewer wrong people. The ad industry knows that TV commercials blast ads so broadly that it’s unlikely any possible consumer misses them. Thus, a well-targeted ad isn’t any more likely to be seen by the prospective buyer than one that’s simply broadcast. What is likely is that a well-targeted ad will be seen by far fewer people who aren’t likely prospects. Even if you pay more per targeted ad, you waste less on “overspray”.
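
To put numbers on the “overspray” point, here’s a toy calculation; every figure in it (audience size, prospect rate, CPMs, targeting precision) is a hypothetical assumption, not market data:

```python
# Toy model of ad "overspray": hypothetical numbers, not market data.
# A broadcast ad reaches everyone; a targeted ad reaches mostly prospects.

AUDIENCE = 1_000_000        # total people reachable
PROSPECT_RATE = 0.02        # assume 2% are actual potential buyers

BROADCAST_CPM = 10.0        # cost per 1,000 impressions, broadcast
TARGETED_CPM = 40.0         # targeting costs more per impression...
TARGETING_PRECISION = 0.50  # ...but half of targeted impressions hit prospects

prospects = AUDIENCE * PROSPECT_RATE

# Broadcast: to reach all prospects you must blanket the whole audience.
broadcast_cost = AUDIENCE / 1000 * BROADCAST_CPM

# Targeted: impressions needed so the prospect-directed share
# covers the same prospect pool.
targeted_impressions = prospects / TARGETING_PRECISION
targeted_cost = targeted_impressions / 1000 * TARGETED_CPM

print(f"prospects reached either way: {prospects:,.0f}")
print(f"broadcast spend: ${broadcast_cost:,.0f}")   # $10,000
print(f"targeted spend:  ${targeted_cost:,.0f}")    # $1,600
```

On these assumptions the targeted campaign pays four times the CPM and still spends 84% less, because it buys far fewer wasted impressions; the advertiser’s win comes from cutting spend, not from reaching more buyers.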

This goal of advertisers sweeps into a consumer market with an appetite for free stuff.  Nobody wants to pay for anything if they don’t have to, and so nobody really wants to pay for content, or Internet services, or online applications, or whatever.  They’re therefore likely to surrender a certain amount of privacy to secure what they want, believing that the cost to them is less than the benefit.  Since one could argue that the goal of regulation is to protect the public interest, it would seem illogical to regulate consumers out of the benefits they’ve elected to trade for.

But that presupposes they’ll actually get what they want, which is the big fallacy of targeting. As I noted, all you need to do is run the numbers for online ads versus TV commercials to see that what’s happening isn’t a flight to quality in terms of consumer targets but a flight away from the non-engaged. That flight is motivated by cost savings, not additional revenue, and thus it’s necessarily a less-than-zero-sum game. And that means that success in targeting funds not more experiences but fewer. Consumers are giving away their secrets to lose, rather than gain, and that’s something regulators should be dealing with.

Regulators need to be thinking more about the future of the industries they regulate; the financial crisis proves that point.  Those in the industry need to think a bit too, because the real opportunities in the long run are created by the long-term money flows.  Cost conservation never leads to anything but commoditization, no matter what part of the network food chain you’re in.

Reading the Apple-IBM Tea Leaves

Chicken Little got everyone upset by spreading the rumor that the sky was falling.  Apple and IBM both reported their earnings yesterday, and there were immediate calls for a Chicken-Little-like response.  Duck and cover?  It’s not that simple.

Both Apple and IBM beat estimates, for starters. It’s nothing new for Wall Street to sell off companies that meet and even beat expectations. The classic “buy on the rumor, sell on the news” mindset is exaggerated these days because hedge funds don’t hold stocks as investments; they trade them to take advantage of either price increases or dips. Once earnings have been reported, prices tend to settle and news is scarcer, so it’s fashionable to sell off and move to something else. All this means that sudden stock movements don’t necessarily mean anything, which is comforting for Apple, whose shares dipped over 5% after hours.

We need to see how Apple’s or IBM’s outlook might reflect on 4Q.  The holidays are critical, not only to draw down inventory but also because consumer attitude can’t easily be turned around except in November and in May.  If we miss the current opportunity, we’ll have six months of slow growth and 2011 won’t measure up.  If we meet or exceed holiday expectations, our model says we’ll have an upside surprise next year in tech and in the economy overall.

Weekly Economic Update

We’re coming into the heart of earnings season and also into a new round of economic numbers this week, which will give us a better idea of just where things might be headed for the economy. So far it looks like recovery is on track even though employment is lagging, in the US in particular, but there’s still the question of whether consumers are really getting their hearts into the holiday period or simply saving in one place to spend in another, with less overall positive spending growth as a result.

The mortgage situation is a troubling factor. The flood of foreclosures created a flood of shortcuts in paperwork handling, and it now appears that the required process was violated. Since that process is a statutory requirement for foreclosing, a problem here creates a cascade effect, and some of the lower-tier impacts could hurt.

The top of the pyramid isn’t bad; foreclosures might be limited or halted for the time it takes to get the paperwork legally straight. That would delay the entry of homes onto the market, keep people in their homes longer, and otherwise do fairly good stuff. But for homes already foreclosed, it clouds the title the banks can give to purchasers. That means title insurance may be impossible to obtain, making mortgages difficult. That could slow the housing market down, given that foreclosed homes make up nearly a quarter of current-period sales in some areas.

Another question is whether the combination of the credit crunch and stalled spending plans is reducing liquidity in the economy overall, thus contributing to “deflation”, an effective decrease in the cost of goods and services caused by money appreciation. Deflation typically stagnates economic growth, in part by encouraging consumers to defer spending to a future time when goods will be even cheaper.

If we (somewhat simplistically) say that inflation is excess liquidity in the economy and deflation is insufficient liquidity, then the classic mechanism to fix deflation is to do something inflationary, like reducing the Fed funds rate to increase the money in circulation. When that’s not possible because pre-existing reductions have taken the rate to near zero, the next step is “quantitative easing”, in which the Fed prints money (in effect) and uses it to buy debt, usually government bonds. That gets new money into circulation and hopefully increases overall liquidity.
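
As a worked illustration of that simplification, here’s a toy sketch using the textbook equation of exchange (MV = PQ); the figures are invented, and the mapping to the liquidity argument above is approximate:

```python
# Toy sketch of the post's simplification via the equation of
# exchange (M*V = P*Q). All figures are invented for illustration.

def price_level(money_supply, velocity, real_output):
    """P = M*V / Q: the price level implied by the equation of exchange."""
    return money_supply * velocity / real_output

M, V, Q = 10_000, 2.0, 20_000                  # baseline economy: P = 1.00
print(f"{price_level(M, V, Q):.2f}")           # 1.00

# Consumers hoard and credit tightens: velocity (liquidity) falls,
# so the price level falls -- deflation.
print(f"{price_level(M, 1.8, Q):.2f}")         # 0.90

# "Quantitative easing": expand the money supply M to offset the
# velocity drop and restore the price level.
print(f"{price_level(M * (2.0 / 1.8), 1.8, Q):.2f}")  # 1.00
```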

Some of the problem with liquidity today is likely created by cautious consumer spending and tight credit, and the flight of investment dollars offshore to developing markets with higher returns is another factor. All of these factors can be reduced by a resumption in growth, but that’s the classic Catch-22, and it’s what the Fed hopes to break with additional asset purchases. This is a good time, I think, because it corresponds to the holiday period, when there’d normally be a bit of an uptick in consumer spending. But it’s clear the Fed isn’t 100% sure how much, if any, is needed, or it would have acted already to get ahead of the curve.

There is some information to suggest that businesses may be holding to their 2010 IT spending, but the 4Q pattern will be decisive, since there’s a larger unspent budget excess than normal coming out of 3Q according to the survey data we’ve collected so far (which isn’t complete, so treat it as tentative). That excess will be released if economic trends continue to show improvement, and it will also shape budget planning for next year. In short, if we win in 4Q we’ll win in 2011 overall. This week’s data may tell the tale.

A New Case for Agility

It’s always fascinating to listen to network operators and even large enterprises talk about their infrastructure projects, and I’m just starting to analyze the first round of our fall strategy survey so I’m getting that chance on a large scale.  It’s too early to say how everything is going to come out, but one thing that does strike me at this point is the discrepancy between how real network-builders see their networks and how vendors see them.

In the service provider space, the classic vision of networks is the hierarchy: access to aggregation/metro to core. The goal is supposed to be the creation of uniform connectivity and good bandwidth economy of scale, and the practice dates from the earliest days of packet networking. It’s also nearly always at odds with what’s going on in the real world. Today, hierarchy is being replaced with delivery. It would be safe to say that were video content the only traffic source (or even the overwhelming majority of traffic), the Internet would look more like a CDN than a hierarchical network; all traffic would be user-to-cache and the only “core” traffic would be to populate caches.
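
A toy calculation makes the point; the delivery volume and cache hit ratio below are assumed values, chosen only to show the shape of the traffic:

```python
# Toy sketch: why a cache-dominated network has almost no "core" traffic.
# Hypothetical numbers; the point is the shape, not the values.

delivery_gbps = 1000.0   # assumed user-facing video delivery in a metro
cache_hit_ratio = 0.95   # assumed share of requests served from metro caches

edge_traffic = delivery_gbps                          # user-to-cache, stays in metro
core_traffic = delivery_gbps * (1 - cache_hit_ratio)  # cache-fill only

print(f"edge (user-to-cache): {edge_traffic:,.0f} Gbps")   # 1,000 Gbps
print(f"core (cache fill):    {core_traffic:,.0f} Gbps")   # 50 Gbps, 5% of delivery
```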

In the enterprise space, people are starting to realize that the applications that consume a mass of incremental bandwidth aren’t universally distributed through the business. Enterprises tell us that 73% of all their data center traffic and 81% of their collaboration traffic moves less than 10 miles. The data center network is growing very fast as inter-process and storage traffic multiply, and the LAN traffic in major headquarters facilities is growing nearly as fast, but branch traffic is growing at a much slower rate. You need only reflect on what happens in a local sales office or branch bank to understand why. Most remote-office traffic is transactional and thus doesn’t expand with anything but business activity. Who does a bank teller collaborate with if not the teller in the next station, who’s hardly a communications destination? Who does a real estate office manager or manufacturers’ rep telepresence with?

It’s never a good idea to be disconnected from market reality. We can’t build optimum networks for anyone without understanding what their networks are really doing, and the problem isn’t confined to the network space. Enterprises tell us that vendors are proposing data mining benefits across an employee population whose job activity doesn’t involve any of the data they’re proposing to mine. Providers tell us that vendors are suggesting operations cost savings that exceed their total operations budgets, because the vendors don’t understand how much is really spent on operations, what the word even means to the network operators themselves, or what new services will do to the space overall.

The appliance players like Apple, Google (via Android), Microsoft, and the host of tablet and phone vendors are creating a more responsive consumer playground, catering to instantaneous trends or launching those trends where possible. With a low-inertia alternative developing, we could end up with traditional networking and even IT simply getting out-danced. You can see already that metro networking is becoming Ethernet networking. Alcatel-Lucent’s recent super-switch shows that vendors realize that transport/connection is getting pushed to lower OSI layers, but do they realize that differentiation and innovation are harder to deliver there? The more we want to dazzle the consumer and create new in-network value, the more we need to examine a new conception of what a network is and what it contains.

Everything Old is Even Older

The rumor that AOL and some private equity firms have been in talks with Yahoo regarding an acquisition seems to be rooted mostly in the WSJ, but some insiders tell us that it’s true. They also say there are still some significant points of dispute on the proposed deal, and that the odds against success are still better than even.

At one level, something like this is inevitable. AOL and Yahoo are both brands past their prime, victims of change (in the first case) and of competition and complacency (in the second). We think that’s the big issue here: we’ve got a marriage of inconvenience, a union being proposed where neither party brings really strong assets to the table. AOL died in all but name when broadband replaced dial-up, and Yahoo is dying not only from Google but now from the fact that more users are abandoning search in favor of social networks.

The big revolution online is yet to come. Just as we didn’t want to be informed by the Internet when there was an entertainment alternative, so we don’t want to be social only while sitting in front of our computers. Appliances like iPhones and iPads are now driving the bus for the Internet’s future. Applications not only anonymize information, they anonymize the Net itself. You push a button and you get your answer; magic might as easily deliver it as search. We’re mashing social networks into something that increasingly looks like SMS. In the drive to socially connect, we’re pulling everything that could be differentiable out of the process of being online.

Social networking and apps are their own worst enemies; they’re creating a world where there’s no room for billboards, yet they’re funded by the expectations of those who want those billboards in everyone’s line of sight. This conflict will self-limit the whole social networking process, and so invalidate anyone’s deal for Facebook. It will also reshape the market so thoroughly that older brands have no meaning, making the Yahoo/AOL alliance kind of like ignoring the lifeboat as the Titanic goes down and grabbing onto another drowning victim instead. All these players need to examine the future more carefully, because the present is becoming the past.

Sorry, Microsoft

Microsoft has launched its Phone 7 initiative, and from what’s been revealed so far and how operators have reacted in conversations with us, the new mobile strategy is fatally flawed.  In fact, unless Microsoft makes truly radical changes or has some literally unprecedented success, it’s probably the end of Microsoft in the mobile space.

With Phone 7, operators have only limited ability to create their own services or their own “brand” with the handsets they have to subsidize. Microsoft justifies this with the notion that the user experience is seamless everywhere, but that’s not what operators want; they want an experience that differentiates them. Even Apple was smart enough to realize that they couldn’t tell operators that they were “partnering” without providing some sort of unique service-to-phone tie. Alcatel-Lucent recognized that a seamless experience had to start with a mobile operator ecosystem, largely because without it there would be great resistance to any kind of roaming data plan, or any cross-operator feature set. They thus established their Open API program as a federation.

Federation is what Microsoft needs to be looking at; the operators want to build a global ecosystem out of national or even regional players. Microsoft had a chance to bring something tangible to the table in that space, something broader and more powerful than even Alcatel-Lucent proposed. Instead, they’ve extended their past CSF (Connected Services Framework) strategies into their present mobile offering, and compromised it fatally.

Is Facebook Showing Us Something with Groups?

With its new implementation of Groups, Facebook has revamped its way of mapping relationships into something multi-dimensional instead of the classical “star” configuration. At least it’s offered the option of doing that; whether users will bother is another matter. What’s more interesting to us is what drove the change in the first place, what it says about social behavior online, and the services that might be based on it.

There’s always a certain number of people on social networks who measure themselves by the number of “friends” they have. On business-oriented LinkedIn, for example, there are whole groups of users who do nothing but race to amass tens of thousands of contacts. The problem with this approach is that it tosses out the whole notion of a relationship in the social sense. Studies show that people don’t have that many real “friends” or “contacts”; about 80% of the average person’s social interactions take place within a group of fewer than 100 people. In real life as in cyberspace, we have to balance how many such relationships we create against the difficulty of sustaining them.

Social communication has to reflect the value we place on each individual, but social networks have to build community mass. We can’t use the latter principle to establish frameworks that support the former; social communication has to be personal. Will you ask your whole community of Facebook friends for advice on a car? Perhaps you’d ask the question, but you know that a) most wouldn’t respond and b) you wouldn’t weigh the responses equally. How we watch television, buy cars, or plan projects may all involve social interactions and thus appear to map to social networks, but it’s not a simple matter of building a flat community centered on each of us. Real groups “friend” other groups, have fuzzy boundaries, and interact through filters. In short, they’re not Facebook groups, even now, and we’re entering a time when the difference between real social groups and online groups will matter a lot to us.
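
A minimal sketch of the contrast, with entirely hypothetical names, groups, and trust weights; this isn’t Facebook’s data model, just the star-versus-group idea in code:

```python
# Minimal sketch of "star" friends versus group-mediated relationships.
# All names, groups, and trust weights are hypothetical.

# Flat "star" model: one user at the center, every friend equal.
star = {"me": ["ann", "bob", "carl", "dana", "edie"]}

# Group model: each relationship carries a group context and a trust
# weight, and groups can relate to other groups through "filters".
groups = {
    "family":    {"ann": 0.9, "bob": 0.8},
    "coworkers": {"carl": 0.6, "dana": 0.5},
    "car_club":  {"edie": 0.7},
}
group_links = {("family", "car_club"): 0.4}   # fuzzy, filtered boundary

def ask_advice(topic_group, min_trust=0.6):
    """Return only contacts whose weight makes their answer worth weighing."""
    return [who for who, w in groups[topic_group].items() if w >= min_trust]

print(star["me"])              # flat list: everyone, weighted equally
print(ask_advice("car_club"))  # ['edie'] -- the one contact worth asking
```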

Wireline social networks are free and relatively controllable, but that’s not true for mobile.  Constant tweets or status updates are a lot more distracting because I’m living when I’m mobile and I’m sitting when I’m on wireline broadband.  Mobile broadband is going to change our notion of groups and friends from our current Facebook simplicity because it’s going to force us to decide between living our own lives and paying with airtime and distractions for how others want us to think they’re living theirs.  If you want to look for a truth that Microsoft could exploit to get into the mobile market, look there.

Will Cisco’s Umi Break Networks, Policies, or Both?

Cisco released its expected home videoconferencing solution with the somewhat cutesy name of Umi, which, to make things worse, is supposed to have a horizontal accent line over the “u” to indicate a “you” pronunciation. Whatever the spelling and character set, it’s potentially a significant product. Umi brings HD videoconferencing to the home TV, and while it doesn’t have some of the social/chat features Cisco promised, it could still be a game-changer in a number of ways.

Free Internet video calling is already available from a number of sources in PC-to-PC form, but Umi promises a friendlier form, from the living room and on a big screen. If adoption matches Cisco’s hopes, it could popularize video calling and generate a ton of new traffic. For a router vendor that already has a big market share, organic growth of that sort is a good thing, maybe.

The “maybe” here is that a strong showing for video calling in any form could push operators over the edge into metered usage pricing, which would be a bad thing for router vendors, Internet users, and frankly just about everyone. Many believe metered pricing is inevitable (we’re among them), but extravagant video growth would certainly hasten the day, and it could push a pricing change as early as 2011 in some markets, notably the cable MSOs. Because these operators have constrained upstream capacity, applications like video calling that source as much as they sink, bit-wise, are particularly challenging; the rough arithmetic below shows why. A strong showing could also polarize the current public policy debates on net neutrality, mixing billing/cost issues with neutral-carriage issues. That would be a destructive mix, and we’re likely to see the impact sooner rather than later.
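
Some back-of-envelope numbers make the upstream problem concrete; the node capacities and per-call bitrate below are rough assumptions for illustration, not DOCSIS specifications:

```python
# Back-of-envelope sketch of why symmetric video stresses cable upstream.
# Capacities and bitrates are rough, illustrative assumptions.

node_down_mbps = 400.0   # shared downstream on a cable node (assumed)
node_up_mbps = 30.0      # shared upstream -- far smaller (assumed)
hd_call_mbps = 3.0       # an HD call sends AND receives ~3 Mbps (assumed)

calls_limited_by_down = node_down_mbps / hd_call_mbps
calls_limited_by_up = node_up_mbps / hd_call_mbps

print(f"calls the downstream could carry: {calls_limited_by_down:.0f}")  # ~133
print(f"calls the upstream can carry:     {calls_limited_by_up:.0f}")    # ~10
```

On these assumptions the upstream, not the downstream, sets the ceiling, and it sets it an order of magnitude lower; that asymmetry is exactly what a source-as-much-as-you-sink application collides with.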

More Color on Alcatel-Lucent’s Strategy

Alcatel-Lucent held an invitation event for industry analysts yesterday, and since the group was small relative to normal events, there was a good opportunity for discussion and engagement. The goal was to give us an idea of where Alcatel-Lucent is going in the near term and in a more strategic sense, and I think they accomplished that goal overall.

It’s clear that Alcatel-Lucent is still having a bit of an identity crisis (several, in fact). They’re still apologizing for the aftermath of the merger, which looks to be finally accomplished in fact and not just in name. They’re also having a bit of a confidence crisis, even though their articulation is strong and their strategic credibility numbers lead the network equipment vendor space by a pretty decent margin. They’ve been battered a bit by Wall Street and by the internecine struggles of the past, and they kind of need a hug.

In a tangible sense, the big news out of the event was that Alcatel-Lucent has a much broader capability set in Open API than was first apparent. Yes, the program is linked to applications and developers and the smartphone universe, but it’s really more than that. Open API is a federation engine that absorbs multiple APIs, orchestrates unions, and exposes the results. It could be used to federate CDNs (which is something Alcatel-Lucent says it’s working on, though they didn’t say whether Open API is part of that work), cloud computing, and even the multiprovider service provisioning of the type that TMF/IPSF has been involved in. How far they’ll take this capability probably depends on operator traction, but watch the space for some action later this year as a possible signal.
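
To make the pattern concrete, here’s a minimal sketch of a federation engine in the sense described above; all class names and interfaces are hypothetical illustrations, not Alcatel-Lucent’s actual Open API:

```python
# Sketch of the federation pattern described above: absorb several
# member APIs, merge ("union") their results, expose one interface.
# All names here are hypothetical, not Alcatel-Lucent's.

class MemberAPI:
    """One federated provider, e.g. an operator's CDN or messaging API."""
    def __init__(self, name, capabilities):
        self.name = name
        self.capabilities = capabilities   # capability name -> callable

    def invoke(self, capability, request):
        return self.capabilities[capability](request)

class FederationEngine:
    def __init__(self):
        self.members = []

    def absorb(self, member):
        """Take a member API into the federation."""
        self.members.append(member)

    def invoke(self, capability, request):
        """Orchestrate a union: call every member offering the
        capability and merge the results into one exposed response."""
        results = {}
        for m in self.members:
            if capability in m.capabilities:
                results[m.name] = m.invoke(capability, request)
        return results

# Hypothetical usage: federate two operators' "locate_cache" capability.
fed = FederationEngine()
fed.absorb(MemberAPI("operator_a", {"locate_cache": lambda r: "cdn-a.example"}))
fed.absorb(MemberAPI("operator_b", {"locate_cache": lambda r: "cdn-b.example"}))
print(fed.invoke("locate_cache", {"content": "video-123"}))
```

The design point is that the federation layer, not any one member, owns the exposed interface; that’s what would let it span CDNs, cloud, and provisioning alike.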

It’s also clear that they’re betting heavily on LTE and still doubling down on IMS, which is logical given their LTE focus. I still think there are a few too many IMS references; yes, we know they have it and that operators will leverage it if they deploy it. We need to know what else Alcatel-Lucent will have in the way of enablers for Open API to expose.

The Alcatel-Lucent challenge, in fact, is to rise above legacy, including IMS, without turning their back on it. Part of the secret of Alcatel-Lucent’s high strategic credibility is their broad engagement. They can’t sustain their whole portfolio forever, but they need to exploit the parts of it that continue to involve them in the broad strategic sweep of the service provider space. At the same time, they have to stop making every application look like IMS in a brown paper bag, or every benefit come down to offering QoS. The future is built on the past and present, but that doesn’t mean the three march in lockstep.