Clouds and Chips

The IT world has provided us with a number of interesting developments this week, starting with a Google suit filed over a proposed Department of the Interior messaging system award to Microsoft.  Google feels that its own Apps could have been used for this, and that it should have been given the opportunity to demonstrate compliance with federal security requirements and bid on the contract.  Thus, the lawsuit.

Some in the DoI have suggested to us that the problem with Apps is the same one facing users of any online competitor to Office, Google’s included: the features these tools provide are a subset of those already in use rather than the full set.  What’s not totally clear is whether the missing features are actually used at DoI, but in some ways you have to be sympathetic with the department; how easily could they find out whether all or only some features were used?  The suit may thus be an important one for cloud services in general.  Many (probably most) cloud-based alternatives to popular installed software tools are functionally more limited than the stuff they’re intended to replace.  That’s also true of most open-source tools.  I’ve tried Google’s document tools and they won’t properly process either our spreadsheets or our presentations, and they create problems with some publication/paper styles as well.  Same for OpenOffice.  But there’s no question that you could do most of what I do in either Apps or OpenOffice if you started from scratch.  So a ruling that cloud applications are acceptable if they offer relatively full functionality, even if delivered differently, or if they offer at least some way of doing what buyers actually do rather than everything they could do, would be a real boost for the cloud.  Without that kind of ruling, it may be hard to promote the cloud version of many apps unless the cloud providers step up and fully duplicate capabilities.  Frankly, that’s what they should do.  You can’t sue your buyer into submission as a long-term business strategy.

The other interesting development is in the chip space, and the two vendors making the news were Intel and Oracle.  Intel abandoned a long practice of keeping its fabs to itself by doing a deal with FPGA specialist Achronix for 22nm capacity.  The actual volume of fabrication here is small, but what may be interesting is that Achronix is perhaps the speed king of FPGAs, which are field-programmable chips that can be used for fast responses to market needs or for applications where volumes won’t justify a custom ASIC.  It’s not hard to see that such chips might be very valuable in the consumer device market, which could mean either that Intel may want a stake in Achronix later on, or that it may itself be thinking about getting into the consumer space on a larger scale.  Recall that Intel has its own mobile OS and that it’s often been said to have aspirations of being a player in the retail space.  What better one than devices?

Oracle’s move is if anything even more interesting; they’ve taken a stake in Mellanox, which is one of the key providers of chips for InfiniBand data center switches.  Mellanox has been a partner of Sun’s and also supplies components for Oracle’s storage appliances, but as we’ve noted before, Oracle is the only data center player with no position in networking, and nowhere is networking more important than in the data center.  InfiniBand is superior to at least the current generation of Ethernet in terms of latency and capacity, and were Oracle planning a big flat fabric for the data center, Mellanox would be a likely part of that plan.  It’s also interesting to note that the deal includes Mellanox supporting Solaris as one of its host OSs.  That suggests that Oracle may be planning to continue to field Solaris as an alternative to Linux.  We think that’s smart; Solaris has a good following, and for specialty applications like OLTP we think it’s the best OS out there.  Could Oracle be planning a major data center move?  It certainly could be.

Hopeful Economic Signs?

Economically speaking, it would be hard to characterize last week as great, and yet it was better than expected and certainly better than many had feared.  The critical number, 3Q GDP, came in above last quarter’s level, and that pretty much laid the double-dip recession theory to rest.  Far from showing wild swings of volatility, the stock market was remarkably stable, varying only about 250 points on the Dow between its lowest and highest levels and closing only about 70 points lower.

This week, of course, the elections in the US will likely drown out any economic data released.  The campaign has been among the most bitter in memory, with negative ads souring virtually all of the voters polled.  Democrats hold a significant edge in voter registration, so it’s very likely that were turnout to be high they’d hold on to most if not all of their seats.  The challenge for them is that the party that wins a Presidential election in the US nearly always loses seats in Congress in the mid-terms.  The question is how many seats, whether the losses would be enough to give Republicans a chance at putting forward their own agenda, and whether Democrats would work with Republicans to support at least some sort of legislative progress.

Republican priorities, such as they’ve been hinted at, seem to be focused on show.  Repeal of the financial reforms or health care is next to impossible without veto-proof majorities in both House and Senate, and nobody is predicting that level of Republican win.  Democrats really haven’t articulated any substantive agenda either, in my view; likely they don’t think they’ll be in a position to promote one.  Thus, we can’t expect much but reactive politics no matter who wins.

For the economy in general, and for tech in particular, that might not be bad.  We need better financial reform than we got; hedge funds that only millionaires can invest in manipulate the markets, and their “bets against the market” are really bets against the average investor; we know who’s been losing.  We needed better healthcare reform too.  We have the most expensive healthcare system of any industrial nation, and yet we aren’t anywhere close to the healthiest or longest-lived among them.  But neither of these areas is going to be fixed further, and so having at least a stable framework is better than being in a constant state of flux.  The economy will now likely recover slowly, but we believe that restoration of “normal” employment levels may take five years, if it ever comes.  The US is shifting away from being a producer economy because productivity gains aren’t keeping more expensive US labor competitive with emerging economies.  As we’ve noted before, IT spending aimed at enhancing productivity hasn’t kept pace with historical levels since 2001.  That has to change if jobs here are to increase.

An article (http://www.businessinsider.com/m2-velocity-suggests-a-stronger-q4-gdp-2010-10) has correctly noted that trends in the M2 money supply correlate reasonably well with economic conditions.  M2 is a broad measure of money supply, and when it sinks sharply it’s an indication of money being hoarded.  It did shrink during the downturn, and it’s now expanding again, which is a good sign.

We must point out, though, that our own chart on the downturn, which was published in our special report in the fall of 2008, illustrates that “wealth growth” in the broadest economic sense tends to create bubbles if it’s not accompanied by GDP growth.  We also note that neither wealth nor GDP growth correlates well with how consumers feel.  Yes, a big downturn will create a corresponding dive in sentiment, but often upturns in sentiment come during or after downturns in wealth/GDP.  The mindset of the consumer is more complicated than simple charts can show, and it’s going to be the consumer that gets us out of this eventually.

Tech, of course, is both directly and indirectly linked to an economic recovery.  Most companies will spend more when they make more, and that’s also true of households.  More broadly, belief in future progress tends to fuel current spending.  We hope that the M2 upswing is an indication of feel-good behavior, and the fact that 3Q GDP growth was fueled largely by consumer spending is a good sign.

Not Chicken Little Time…Yet!

This week saw what’s become the usual push and pull of supply- and demand-side issues, and perhaps a bit more than the usual confusion in the markets (financial, enterprise, and consumer) about the net outcome.  It wasn’t the wild week of stock swings that could have happened had the economic news been bad, but at the same time there wasn’t much that could be called a big upside of hope either.  In all, tepid probably says it best.

I’ve commented several times this week on broadband issues, many arising out of what are increasingly clearly misleading or bad numbers about broadband deployment.  It’s not surprising that broadband would become a political football in this most political of all recent election years, but it’s bad for the industry because it’s pulling everyone’s eye off the real ball.  Despite continuous evidence that economic density is the most decisive factor in broadband market effectiveness, we continue to ignore it.  Despite the lack of any clear indication that broadband has the societal value claimed for it, we continue to assert that it does.  A real plan, based on exploiting what we know and objectively studying what we don’t, could get the market moving.

Meanwhile, the mobile space is showing us the shape of the future.  4G is going to bring usage pricing to mobile, and it will leak back into 3G and into wireline eventually, at least in markets where economic density is low and access profits likewise.  Smartphones are reported by one analyst firm to be creating a mobile market owned by handset giants like Apple and Google rather than by the operators.  While that’s clearly an exaggeration, it’s true that smartphones are disintermediating operators in mobile just as the OTT players disintermediated them in wireline.  Operators fled wireline for mobile to escape low ROI.  If mobile gives them the same low ROI, can they then flee to telepathy or something?  Hardly likely; they’ll simply have to accept a tailing off of revenues, which means a tailing off of capex.  Big telco Verizon and the cable industry overall have both shown us that the Street will punish those who let capex rise as a percentage of sales.

Enterprises have had their own challenges.  We’ve seen that spending on some hardware and software has been strong through the year, but that strength has been created in large part by pent-up demand: the past economic crisis suppressed orderly upgrades of baseline IT infrastructure, and those upgrades are now being made.  You can only catch up for so long; after that, growth will depend on exploiting new productivity paradigms, and the market hasn’t been very good at doing that since 2001.

I’m not playing Chicken Little here; the industry isn’t going to crash.  In fact, it’s likely that by 2012 it will prosper, because any time demand overwhelms the insight of the sellers, a new crop of leaders gets created.  Incumbents in all areas of tech have gotten too comfortable with old paradigms, and new players are the ones agile enough to seize the opportunities.  Those “new players” aren’t likely to be startups, VCs having fled the equipment space for social networking and other areas with more potential for bubble-creation economics.  Instead they’ll be smaller vendors, often public companies.  Watch F5 and some of the deep-packet-inspection companies; they are looking to skim the networking cream.  In IT, watch Oracle; software has the most direct link to productivity, so software companies can transform to build new cost/benefit paradigms most easily.

Bad Numbers Mean Bad Decisions

Anyone who’s followed my writing knows that I’m no fan of the National Broadband Plan.  My main issue is with the data that’s been presented to back that plan, and some recent work I’ve been doing is making me even more skeptical—if that’s possible.

What started me off was a comment by a White House science type.  He said that he was sure that there were billions to be gained in productivity and jobs if broadband were more available, though he admitted he didn’t know exactly how those benefits were calculated or would be realized.  OK, I said, let’s then take a look at broadband versus economics and see if there’s a correlation.  The FCC has data that shows, by zipcode, how many broadband providers are available.  Other agencies provide household income data, also by zipcode.  Suppose we correlated the two?

If broadband availability is in fact an economic benefit, we should see some correlation between the number of providers and the household income of consumers.  We do, but it’s the wrong kind.  The data shows that the correlation runs overwhelmingly in the opposite direction: the areas with the most broadband providers available are the areas with the lowest household income.
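
For readers who want to try this kind of cross-check themselves, here’s a minimal sketch of the approach.  The file names, column names, and CSV layout are hypothetical stand-ins for the FCC provider-count data and a household income dataset; only the join-and-correlate step is the point.

```python
# Hypothetical sketch: correlate broadband provider counts with household income by zipcode.
# File and column names are illustrative, not the actual FCC/Census dataset layouts.
import pandas as pd

# One row per zipcode: zipcode, number of broadband providers reported
providers = pd.read_csv("fcc_providers_by_zip.csv", dtype={"zipcode": str})

# One row per zipcode: zipcode, median household income
income = pd.read_csv("household_income_by_zip.csv", dtype={"zipcode": str})

# Join the two datasets on zipcode so each row carries both measures
merged = providers.merge(income, on="zipcode", how="inner")

# Pearson correlation between provider count and income; a negative value
# means more providers tend to appear where incomes are lower
corr = merged["provider_count"].corr(merged["median_income"])
print(f"Correlation (providers vs. income): {corr:.3f}")

# A rank correlation is less sensitive to a handful of very rich zipcodes
rank_corr = merged["provider_count"].corr(merged["median_income"], method="spearman")
print(f"Rank correlation: {rank_corr:.3f}")
```

Run against the real datasets, a negative sign on either number is what the pattern described above would predict: more providers where incomes are lower.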

Let me illustrate with an example from my own area: two suburban communities in southern NJ.  One, a solidly upper-middle community, has 11 providers according to the FCC.  The second, which is arguably the richest community in the area, has only 10.  Grab a random residential zipcode from across the river in Philadelphia, where household income is a quarter that of the first community and a sixth that of the second, and you find it has 12 providers!

Now I’m not saying broadband is making people earn less, though in fact that’s a more supportable view given the data than the contrary assertion that it would help them earn more.  I’m not even saying that the urban poor generally have better broadband, the FCC’s rhetoric notwithstanding.  What I’m saying is that even a simple review of the data we’ve collected fails to support the popular views on the role and value of broadband Internet, or the views the FCC is presenting in its National Broadband Plan.

The data also seems to suggest that geographic factors like population density are by far the most significant forces in determining where broadband competition will develop.  Even in very poor zipcodes we see a lot of providers, more than in most of the richer ones.  Why would operators focus their efforts on places where household income is the lowest, if not because those places have population densities that overcome even four-to-six-to-one income disparities?  That proves our long-standing point that demand density means everything.

We’re also concerned that the FCC’s data includes non-facility providers of broadband, which in our view distorts the picture considerably.  The only way to get broadband to the user is to deploy infrastructure.  Riding as a wholesaler on someone else’s infrastructure doesn’t create new options, only “new” providers.  In fact, there’s every reason to believe that the multiplication of wholesale players might erode margins and further limit investment.  It certainly distorts the figures; most people where I live couldn’t name more than two wireline and four wireless providers, a total of six that leaves at least four or five of the FCC’s count unaccounted for.  Who are these providers, one must wonder?

The biggest problem here is the lack of clarity in the data, or the reliance on incomplete or just plain bad data; it’s hard to say which.  The FCC appears to have gathered a lot of its information through third parties, and also appears to have muddled its own data collection.  As I noted, it’s hard to say whether this was ineptitude or deliberate obfuscation.  What’s easy to say is that bad policies are inevitable if bad data fuels them.

A “European Approach” for Us All?

Speaking yesterday at the Broadband World Forum (BBWF), Alcatel-Lucent CMO Stephen Carter talked about the need to create a “European approach” to 4G broadband.  Some of the specific points in the talk weren’t new; we need to move beyond all-you-can-eat pricing, we need to add some specific partnership and settlement processes, and we need to recognize the intrinsic differences among the major markets.  What is interesting to me is that all of this is coming to a head right now.  Why that is might be the most interesting thing of all.

The reason is mobile 4G services, and the fact that these services are being driven by smartphones, tablets, and even e-readers: appliances.  Mobile disintermediation via appliances is a real risk, and 4G bandwidth levels mean that there is truly an opportunity to create a new model of the user’s relationship to the network.  The risk that this new model might end up being a reprise of the OTT-dominated wireline broadband market is very real now.  Further, 4G deployment offers operators a chance to reset the pricing and service relationships, at least to a point.  Operators either have to take that opportunity and level-set 4G differently, or they have to avoid 4G investment as something unlikely to pay off for them in ROI terms.

Which of course is Alcatel-Lucent’s issue here.  Arguably, companies like Alcatel-Lucent have been most successful in the wireless area, and an operator trend toward stagnation of wireless investment would be a major barrier to Alcatel-Lucent’s future profitability.  But the truth is that they aren’t the only ones with a bet in the 4G game.  With the exception of Cisco, whose ambitions for revenue growth are spreading to markets adjacent to networking, every one of the major network vendors is a slave to wireless capex growth, because wireline growth won’t even sustain their current numbers.

What is clear to me is that everyone in the broadband game realizes that 4G is the watershed issue, the place where we either get control of network evolution in an economic sense or admit we can never control it.  In the latter case, it’s clear that we’ll see sharp capex declines beginning (according to our model) in 2012 as ROI pressure on operators constrains network investment.  In the former case, we could see the very thing Carter says we need: immersive broadband that touches all of us in all aspects of our lives, because it can profitably be made to do so.  It’s not a glamorous vision for the US market, because we want to believe everything’s free.  It’s not simplistic like Cisco’s vision of driving infrastructure investment simply by forcing more traffic onto the network regardless of the ROI.  But it’s a true vision, and Alcatel-Lucent is perhaps the best in the industry at articulating it.

But can they deliver it?  The principles of Application Enablement are surely relevant to creating what Carter hopes for, but they’re not a sufficient condition as they stand.  There are too many holes in the story of the “European way” when the rubber meets the road.  Potholes are in a way a bigger threat to ROI than the current disorder, because without a clear path to investment everyone will hunker down and wait for that path to become clear.  That could hurt capex even earlier.  Four vendors (Alcatel-Lucent, Ericsson, Juniper, and NSN) have the assets to build the kind of future Carter talks about, and not just for Europe.  Which one will come through?  We’ll likely know by spring.  Carter’s speech is proof that the issue is too acute to be ignored any longer.

Ecosystemic Security

Juniper announced a mobile security suite, building on its Junos Pulse agent/client software that operates across a wide variety of mobile and PC platforms.  The elements of the suite (the anti-virus, firewall, etc. that are common to most PC suites) are less newsworthy than the framework in which they’re provided.  What Juniper is doing is binding security in as an element of a device agent, then coordinating it through central management of that agent so that it’s effectively part of a collective network- or organization-wide security program.

The newest problem both enterprises and operators face these days arises from the fact that a single user is spread across multiple appliances, and increasingly uses those appliances as facets of a virtual personality.  That’s true of social-driven consumers but also, increasingly, of productivity-driven enterprises.  Point-solution security not only fails to secure the full range of devices, it forces those who want security to integrate disparate policies and processes to create a secure framework, and one miss not only destroys collective security but also risks cross-contamination of the other channels to the user.

I like the Juniper approach here not because of its capabilities or because of the need that Juniper-sponsored research was aimed at validating; we have security on some devices, we’ll have it on all of them eventually, and the problems of device security are hardly a surprise even without new research.  What I like is that Junos Pulse extends “the network” to the device itself and makes the device an agent of network policy and services.  That seems the only long-term solution both to security issues and to creating service value-add.  Plus, the multiple device faces of the user are going to pop up in a lot of future service missions, and they will be problematic for those without a device-integrated approach.

It’s hard to pull this story out of the Juniper talk, in part because it’s focused so much on security needs and the point-solution remedy.  The real story is the ecosystem.

Is Ozzie Right?

Microsoft tech guru Ray Ozzie is leaving the company, and in the wake of the announcement a memo from Ozzie was leaked to the media.  In the memo, Ozzie asks Microsoft to confront an age without PCs, an age in which Microsoft’s traditional PC incumbency would thus be meaningless.

What Ozzie is looking at is whether appliances like smartphones and tablets, combined with cloud-hosted services, could change the appetite of the public for personal computing.  I think that the answer is already known, but it’s ambiguous.

The question is whether cloud services can absorb all the functionality of local applications.  In theory?  Sure.  In practice, the problem is one of willingness to pay, and of profit.  If the total market for computing and applications among consumers is seen as being ad-sponsored, we’ve collapsed a multi-billion-dollar industry into something that’s likely a tenth its current size, simply because you can’t expect ads to sponsor all of content, all of software, and all of everything else when the world’s ad spend is only about $680 billion and isn’t even growing as fast as world GDP.  Thus, we’d have to expect the consumer to pay in some direct way for the incremental application services.  The question then becomes whether that direct payment would be less than the cost of centrally hosting the applications.

To answer it, we say that central IT resources are always cheaper: economy of scale, after all.  But the Erlang curve shows that economies of scale taper off at volume, meaning there’s a point beyond which no further economy can be gained.  And you still need a screen, a keyboard (even if it’s virtual and on-screen), a CPU chip, and memory to create a network appliance.  The cost of turning that into a computer isn’t incrementally enormous.  I can buy a netbook for three hundred bucks and get free or cheap software for writing, calculating, photo-editing, and more.  Sure, I have to sustain that software, update it, secure it, and so on.  But most of the threats to security come from the Internet, so don’t I have to secure my appliance anyway?
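
As a rough illustration of that tapering, here’s a minimal sketch using the classic Erlang-B formula (computed with the standard recurrence) to show how the utilization achievable at a fixed blocking target improves with scale, but with diminishing returns.  The offered loads and the 1% blocking target are hypothetical values chosen only to show the shape of the curve.

```python
# Minimal sketch: diminishing returns of scale via the Erlang-B formula.
# Offered loads and the 1% blocking target are illustrative assumptions.

def erlang_b(servers: int, offered_load: float) -> float:
    """Blocking probability (Erlang-B) via the standard numerically stable recurrence."""
    b = 1.0
    for m in range(1, servers + 1):
        b = (offered_load * b) / (m + offered_load * b)
    return b

def servers_for_blocking(offered_load: float, target: float = 0.01) -> int:
    """Smallest server count whose blocking probability is at or below the target."""
    b, m = 1.0, 0
    while b > target:
        m += 1
        b = (offered_load * b) / (m + offered_load * b)
    return m

for load in (1, 10, 100, 1000, 10000):
    m = servers_for_blocking(load)
    # Utilization: carried traffic divided by capacity; it climbs with scale
    # but flattens out, which is the "taper" in the economy-of-scale argument.
    utilization = load * (1 - erlang_b(m, load)) / m
    print(f"load={load:>6} erlangs  servers={m:>6}  utilization={utilization:.2f}")
```

The utilization column climbs quickly at small scale and then flattens; beyond a certain point, bigger pools of central resources stop getting meaningfully cheaper per unit of work.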

My point is that Microsoft is as much at risk of over-reacting to the future as of under-shooting it.  Its biggest problem is the same one it had before all the Internet appliance stuff hit the market: once everyone who needs a PC has one, what’s your strategy for growing revenue?  Microsoft needs to capture the incremental revenue from the appliance-and-cloud craze, not substitute that revenue for its current revenue stream.  If it does the latter, it dies, pure and simple.

Revolutionary stuff is interesting, and in this mindless media age the only thing that matters is “interesting”.  Truth won’t create click-throughs.  But truth is what creates markets.

The Week Ahead: October 25th

There’s a significant potential for some swings in stock prices this week (not that we haven’t seen them in the past!) because of the volume of economic news and the number of earnings reports due.  The number that’s likely to be watched most closely is 3Q GDP growth, which our model pegs at about 2.1% annualized.  While very few now expect a double-dip recession, this number will be seen by the stock market as an indicator of likely near-term economic health, and hedge funds will certainly short the market aggressively if the number disappoints.  I think it’s a bit of a tempest in a teapot; whatever it is, it’s almost certainly better than 1Q and worse than 4Q, so we’re slowly recovering.

The FCC is getting itself behind a wireless-based thrust at national broadband ills, but I don’t much like Genachowski’s style here.  He opened a recent talk with comments about the slide in the US’s economic standing worldwide, and then jumped to spectrum.  To me, that implies that we can lever our way into the top economic spot with wireless broadband, and if there’s any truth to that it has yet to be substantiated by a single piece of objective data.  Sure, we might start a wireless bubble, but we’re not going to transform our economy by facilitating Twitter updates or letting teens watch music videos.  If we have an economic problem (which clearly we do) it’s because we can’t produce substantive stuff any more; we’re trapped in social networks and deceptive advertisements for herbal supplements and consumer-product gimmicks.  We became an industrial and economic giant by building the fundamentals: steel, cars, ships, planes.  Heavy industry is the base of everyone’s economic stability, and figuring out how to provide incentives (government and technology) there should be our top priority.

Does Apple’s Lion Strategy Threaten More Disintermediation?

Apple’s move to converge its iOS and MacOS platforms over time and to create a unified developer environment across its disparate devices is a smart one that responds to the reality of the market and the competitive environment.  The question is how far Apple will go and what impact the effort will have on the appliance space, the developer community, and even the service provider market.

The iPhone launched the smartphone revolution, which in turn launched the applet/widget revolution, which in turn is opening the question of whether device-resident intelligence will play a commanding role in the development of what the buyer/user perceives as “services”.  The iPad has had a similarly transforming effect in the tablet space, and competitors have already established themselves in smartphones, primarily via Android-based phones in the broad market and via RIM’s building on its enterprise incumbency.  Competition is also increasing from both sources in the tablet space, with pretty much the same cast of competitive characters.

What creates Apple’s platform dilemma is that broader installed bases beget greater developer opportunity, and thus a larger application community.  As I’ve noted before, this was one of the factors behind Apple’s loss of its early lead in the PC market to the IBM-compatibles.  An open framework attracts support because it is open, but it also reduces the originator’s ability to control and monetize its own market, which is why Apple has traditionally rejected such an open approach.  But a marriage of its Mac operating system and the OS used for its appliances, and the harmonizing of a development environment across both, would have the effect of increasing Apple’s developer mass.

The challenge is that it will also almost certainly cause Google to prioritize Android as a tablet OS, thus exacerbating the competition between these two industry giants.  The further the Android OS goes in terms of supported hardware, the harder it will be for Apple to sustain itself as an appliance walled garden.  Some gestures of openness exist through the developer program, but Apple’s long-standing feud with Adobe over Flash illustrates where walled-garden thinking can take you and how it can create a lot of gratuitous enemies.

On the service provider side, the competition between Apple and Google (through its Android proxies) creates yet another path to disintermediation.  Ceding service-creation innovation to OTT players was a problem in wireline, and ceding it to smart device vendors and developers in the wireless space only makes things worse.  The so-far-ill-fated Microsoft phone strategy has been toying with hosted services, but probably more as a means of getting Microsoft into the OTT feature business than as a means of empowering operators.  Can operators respond with an approach of their own, and in time?  Their service-layer revenue future may depend on it.

Beware of Free

The Facebook scandal, in which popular application providers shared private data without user permission, is only the latest in a series of targeting-related breaches of privacy and violations of “policy”.  The FTC has been of two minds regarding the issue, with some there believing that regulation is necessary to protect consumers and others believing the industry can regulate itself.  The current direction appears to be toward self-regulation, despite mounting evidence that the industry is unwilling and unable to do that.  What’s going on here, systemically?

First, to understand targeting you have to understand motivation.  The goal here is NOT to get the right ad in front of the right person; it’s to get ads in front of fewer wrong people.  The ad industry knows that things like TV commercials blast ads so broadly that it’s unlikely there’s any possible consumer who doesn’t see them.  Thus, a well-targeted ad isn’t any more likely to be seen by the prospective buyer than one that’s simply broadcast.  What is likely is that a well-targeted ad will be seen by far fewer people who aren’t likely prospects.  Even if you pay more per targeted ad, you spend less overall because there’s less “overspray”.
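
A quick back-of-the-envelope sketch makes the point; all of the impression counts and CPM rates below are made-up illustrative numbers, not market data.

```python
# Illustrative-only numbers: a broadcast buy vs. a targeted buy reaching roughly
# the same pool of likely prospects.  CPM = cost per thousand impressions.

likely_prospects = 50_000          # people who might actually buy

# Broadcast: reach everyone, prospects included, at a low CPM.
broadcast_impressions = 1_000_000
broadcast_cpm = 20.0
broadcast_spend = broadcast_impressions / 1000 * broadcast_cpm   # $20,000

# Targeted: reach mostly the prospects, at a much higher CPM.
targeted_impressions = 100_000
targeted_cpm = 80.0
targeted_spend = targeted_impressions / 1000 * targeted_cpm      # $8,000

print(f"Broadcast spend: ${broadcast_spend:,.0f}")
print(f"Targeted spend:  ${targeted_spend:,.0f}")
# Both buys reach roughly the same prospects, but the targeted buy puts far
# less total money into the ad ecosystem -- the "less-than-zero-sum" effect.
```

Even at four times the price per impression, the targeted buy injects less than half the money into the ecosystem, which is the cost-conservation effect discussed below.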

This goal of advertisers sweeps into a consumer market with an appetite for free stuff.  Nobody wants to pay for anything if they don’t have to, and so nobody really wants to pay for content, or Internet services, or online applications, or whatever.  They’re therefore likely to surrender a certain amount of privacy to secure what they want, believing that the cost to them is less than the benefit.  Since one could argue that the goal of regulation is to protect the public interest, it would seem illogical to regulate consumers out of the benefits they’ve elected to trade for.

But that presupposes consumers will actually get what they want, which is the big fallacy of targeting.  As I noted, all you need to do is run the numbers for online ads versus TV commercials to see that what’s happening isn’t a flight to quality in terms of consumer targets but a flight away from the non-engaged.  That flight is motivated by cost savings, not additional revenue, and thus it’s necessarily a less-than-zero-sum game.  And that means that success in targeting funds not more experiences but fewer.  Consumers are giving away their secrets to lose, rather than gain, and that’s something regulators should be dealing with.

Regulators need to be thinking more about the future of the industries they regulate; the financial crisis proves that point.  Those in the industry need to think a bit too, because the real opportunities in the long run are created by the long-term money flows.  Cost conservation never leads to anything but commoditization, no matter what part of the network food chain you’re in.