Well, Neutrality is (sort of) Here!

The FCC’s neutrality vote went as expected, with commentary from various people involved in the process, including the Commissioners.  I found a lot that I agreed with, but I disagreed with at least some of what virtually everyone said.  It’s not a disappointing order, though I’m sure that most will characterize it that way.  The only disappointing thing, in my view, is that it doesn’t address the issue of the FCC’s authority to act.  The previous neutrality doctrine was lost when the Court of Appeals overturned it for lack of FCC authority.  I don’t think the current order establishes a strong position, and there will certainly be no lack of players to appeal it.

The FCC’s position is pretty much as expected based on prior comments by the Commissioners.  The FCC will require that wireline broadband services be subject to handling rules that are transparent and non-discriminatory with respect to sites, devices, and traffic types.  For mobile services, the transparency rules are in force, but non-discrimination is weakened a bit to reflect the special nature of wireless.  For mobile, blocking of traffic that competes with the ISP’s own services is prohibited, but other blocking for traffic management may be allowed if the need can be proved.  The “specialized services” that flow in parallel with the Internet will be reviewed, but nothing will bar either payment for priority or tiered pricing per se.

The jurisdiction issue here is going to seem trivial, but it’s really central.  The current move is based on Section 706 of the Telecom Act, which the FCC itself has never before said offered it any independent authority to make new broadband rules (the Court of Appeals pointed this out in the Comcast ruling).  Further, Section 706 applies explicitly to telecommunications services, and in 2005 the FCC said that Internet broadband was not such a service.  Commissioner Copps took the strong stance that a return to Title II regulation was the right approach.  I agree.  The FCC’s “third way” would have given the order a firm legal foundation without subjecting the Internet to regulation like a telephone network.

But Copps also said that we needed wholesaling for competition, which I’m not sure is true, and that we needed equal regulation in mobile services, which I’m pretty well convinced is not true.  The Republican Commissioners laid out objections that boil down to “no neutrality” or “let the kids play”; “nothing is broken in the Internet access market that needs fixing” was one comment.  I don’t agree with that either.  So what we had was a bunch of political comments about a decision that was about as strong as political realities allowed.  If the rules were enforced, they’d likely not hurt anything, would almost certainly prevent egregious behavior, and might even help.  I’m not sure they can be enforced, and that’s my problem.

It’s not clear they even need to be enforced.  One valid point raised by opponents of the order is that FTC and DoJ antitrust enforcement would already protect consumers against anti-competitive behavior by ISPs.  That’s likely true, and thus you could reasonably say that the FCC’s order is simply another round in a long-standing battle between the FCC and the FTC for control over the telco markets.

So the Democrats, with Copps speaking to the impassioned Internet supporters, say that much more regulation is needed to keep the evil ISPs from our door.  Baloney.  Two of the three Democratic Commissioners said they wanted even more neutrality control than the order provides, but went along with the deal because it was the best available.  The Republicans say that these rules will kill the Internet, kill investment, kill society (online at least) as we know it.  Baloney.  The FCC that gave us the four principles was led by Republican-appointed Commissioners.  Were they in favor of industry-killing then, and have now changed their minds?  A pox on all politicians, and sadly the FCC Commissioners are politicians despite the fact that they’re appointed and not elected.

Might the politicians in Congress now jump in?  Sure, and they might pass other legislation despite their record of not getting much done.  Both parties can block action of the other here, and the division of the Commissioners by party makes it pretty clear that both parties would block Congressional action they didn’t favor.  There are some who believe that the Congress will move to give the FCC specific authority to cover the order, mooting any appeals, but I don’t think that’s likely.  We’ll have to wait until a Court of Appeals rules here, if not the Supreme Court, before we’ll see any differences in broadband as a result of the order.

How different is the new broadband under the order, anyway?  Despite all the hype on both sides, it’s not very different at all.  Likely the biggest changes will be the drive toward more settlement and payment options, moving away both from the unlimited-usage pricing and the bill-and-keep models of the past.  But even these changes may be modest until some legal validation of the order is available.  Thus, don’t expect to see very much from this in the near term.

Oracle Clouds, Neutrality-Eve, and NSN’s Vision of Three

We’re starting off what will likely (but you never know these days!) be a quiet week in the markets.  Top of the news is the announcement by Oracle that it will be supporting at least some of its PeopleSoft and JD Edwards applications on Amazon’s EC2.  This seems a reversal for a company that had initially appeared to reject the cloud model, and I think it’s worth looking for some hidden truths here.

First, the revenue impact of the decision isn’t significant on its face, because Oracle will treat EC2 virtual machines just like customer virtual machines; the same license terms and rules apply.  So what we’re seeing here is a model that accepts infrastructure as a cloud service rather than promoting public cloud-based enterprise apps in SaaS form.  But why even do that?  I think that Oracle is realizing that the hybrid cloud is its path to enterprise prominence, and in particular a path leading past HP in a competitive sense.

Another factor is that Oracle’s database appliances are selling strongly.  These appliances provide DBMS-as-a-service, and thus could make it much more practical to have a cloud application access an on-premises database with reasonable performance.  Thus you could argue that the hybrid cloud model is perfect to socialize Oracle’s appliances in a market that already seems to be catching on to their value.

The FCC will be releasing its net neutrality order tomorrow, though it’s not fully baked at this point and might still be pulled from the agenda.  The order appears to be a curious mixture of logical application of neutrality and illogical legal foundation.  I’ve reviewed the Court of Appeals ruling in the Comcast case, and it’s hard for me to see how this dodges the legal issues the court has already raised.  The only avenue forward would be for the FCC to now assert (and justify) the view that Section 706 of the Telecom Act gave it “new” powers to encourage broadband, not just a specific justification to exercise powers it already had.  The FCC has consistently taken the opposite position.

Republicans in Congress are rattling their sabers, threatening to pass a bill that offers no funding for the FCC’s neutrality rules.  Apart from whether this is even legal, it’s pretty obvious that in the divisive political world of Washington it could never pass.  Similarly, it’s clear that neutrality legislation more aggressive than what the FCC proposes (mandating no traffic management, no premium handling except for free, and full wireless regulation) wouldn’t pass either.  So whether either extreme is the right answer doesn’t matter.  What does matter is having a set of rules that will pass legal muster, and that’s where I’m concerned.  The FCC’s “third way” was the right answer; it was clearly legal and it would have offered exactly what the situation needed.  Some of the Democratic Commissioners wanted it, and frankly I’d rather they had held out.  I disagree that this order is better than no order; if it’s not enforceable, then it is “no order”.

Economically, the EU sovereign debt problem is continuing to cloud things a bit, but even European stocks are up this morning and so are US futures.  I think that the only real question on the table has been whether Europe would let the EU sink rather than have the stronger countries guarantee the weaker ones.  That question appears to have been answered to the point where speculators aren’t quite as willing to play chicken.  The good news is that if the debt problem were to be put solidly at rest, Europe would likely start recovering faster.  That would be important because the bad news is that the austerity programs that would be demanded as a condition for loan guarantees to the weaker nations would certainly create social unrest, and possibly weaken the ties that bind them to the EU.  Would that hurt?  Truth be told, not much.  It’s doubtful that any of these nations could go it alone, and I think the voters there would draw back from the brink.  No question, though; better times would help a lot to ease tensions.

Alcatel-Lucent continues to showcase the developer side of its Application Enablement approach, including its Open API program that federates application services across multiple developers.  There is no question that the company has started to gain some traction in the market with this, but there is still a question in our minds regarding how quickly the program can adapt to market conditions.  The thing that made OTT players successful in the service layer is that they’ve dodged inertia.  They don’t worry about standards beyond blowing a casual kiss here and there, and thus they can expose features via APIs very quickly.  If you wait for industry consensus on APIs, you’re putting yourself at the tail end of a multi-year process and then claiming you’re running at market speed.  I’d like to see Alcatel-Lucent open up more about how it will create features in Application Enablement and how quickly it can expose them via RESTful APIs.

NSN’s CEO has recently suggested that the telecom sector will consolidate with only three major players remaining: Ericsson, Huawei, and NSN.  I think that vision of the future is a tad self-serving in terms of the players, but it is very clear that somewhere around three players is what we could expect if the industry can’t find better feature differentiation.  If Alcatel-Lucent wants to make the cut here, it definitely needs to make Application Enablement work, and it’s frustrating to me how close the company is to that, and yet how far.  But that’s not atypical of service-layer strategies in the big-vendor space.  Nobody has it right there.

One might ask where this consolidation would leave Cisco and Juniper, the other big players in the IP layer at least.  I think that’s another area where the NSN comments oversimplify.  We’re seeing, in operator trends toward “procurement zones” for buying, an attempt to create a market where a single giant with a full product line can’t dominate everything.  The operators would like innovation, particularly with respect to the service layer, and they can’t get it by having everything collapse into a single giant commoditized space.  But if the specialty guys like Cisco and Juniper can’t make a case in the service layer, then they can’t defend their narrower position in a commoditizing market.  Thus, we could see the NSN “vision of three” being right, even if the three turn out to be different from what NSN expects.

Economics and Profits

Even as economic conditions worldwide appear to be improving at the macro level, there are renewed pressures on the Eurozone sovereign debt issue, and concerns that managing a global shift from stimulus to the control of debt and inflation will be challenging.  Ireland’s bond ratings sank, and there may be further downgrades for Greece, and even for Spain and Portugal.  Some financial experts think that the “edge countries” in the EU will all require support as sluggish economic growth and relatively expensive social programs create a gap that only borrowing and austerity can fill.

One proposal now gaining strength is the issuing of a Eurozone bond that would be used to fund a large rescue fund, essentially transferring the faith and credit of all the major (and more successful) EU economies to the debt of the weaker members.  This won’t mean that austerity programs won’t kick in where the funds are transferred, though, and as a result the measure will trade the tension of disparate debt ratings for a new tension in disparate quality of life, something labor in the impacted countries is already protesting.  But a debt crisis would produce a lifestyle crisis too, so the real choice is between a lifestyle crisis alone or both crises at once.

In the US, we had a rare show of partisan cooperation with the passage of the tax bill, which includes a 2% Social Security payroll tax reduction for next year (a form of stimulus) as well as an extension of unemployment benefits.  The general view is that this will keep the US economy on track in 2011, and that’s what our model says.  It probably adds about 0.2% to GDP growth for next year, and may reduce unemployment by a half-percent according to our own numbers.
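For what it’s worth, the arithmetic behind an estimate like that is easy to sanity-check.  Here’s a rough sketch using round figures of my own choosing, not the model’s actual inputs:

```python
# Back-of-envelope check on the payroll-tax stimulus estimate.
# All figures are rough illustrative assumptions, not model inputs.
payroll_tax_cut = 110e9   # approximate cost of a 2% payroll tax holiday, dollars
us_gdp = 15e12            # approximate US GDP, dollars
spend_share = 0.3         # assumed share of the windfall consumers actually spend

extra_spending = payroll_tax_cut * spend_share
gdp_boost_pct = 100 * extra_spending / us_gdp
print(round(gdp_boost_pct, 2))  # ~0.22, consistent with "about 0.2%"
```

Vary the spend-share assumption and you move the answer, but it takes a fairly extreme input to get far from a couple tenths of a percent.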

Another significant event in Congress is the fact that the behemoth spending bill that was prepared to fund the government has been pulled in favor of interim funding because it cannot be passed over Republican opposition.  The problem here, at least on the surface, is that the bill contains billions for projects of questionable value, and likely millions in special earmarks that were a specific target of the Tea Party activists elected to the next Congress.  Some kind of reform of the bloated federal budget process may be forthcoming, which couldn’t hurt.  It may also be a sign that Congress is going to work harder to be bipartisan in 2011 and beyond.

Alcatel-Lucent may be looking to change video collaboration, announcing that Bell Labs and the Belgian research institute IBBT will collaborate on applications to “bring a new dimension to video communications”.  The scope of the work appears to range from things likely useful in the near term (like video content analysis and management of each user’s view of a conference) to things like immersive panoramic experiences, ultra-high-def, and even 3D, which we think may simply go too far to be relevant to people who don’t want to be on camera when they’re feeling ugly.  Our research has long shown that a better and more socially linked collaborative dynamic would be highly valuable, and in fact might kick off a wave of productivity-based IT investment that would restart an industry stalled in underperformance relative to its glorious past.  The question is whether the research process will deal with the real and current market issues; the future of 3D telepresence is still a bit off, I think.

Oracle is clearly not off at all.  Its revenues were up 47%, in large part on strong sales of Sun hardware, and pipeline deals for the Exadata servers totaled about $2 billion.  Clearly Oracle is a Big Player now, and clearly it’s a special threat to HP, at whom Ellison took a shot during the earnings call.  HP’s weakness is software, in our view, which is Oracle’s strength, and there is very good reason to believe that Oracle’s special strength in middleware is the secret sauce for the company’s diet of competitors in the data center.  IBM matches Oracle’s credentials here, but Oracle poses a threat to everyone else’s data center plans, including Cisco’s.  The Cisco comment raises the key point, one I’ve been raising with respect to Oracle for a year now: what will Oracle do in networking?  If it wants to be a full-scale data center player, it needs a network strategy.

A New NSN?

There are renewed stories that NSN is looking to sell about a third of itself to a private equity consortium.  The stories don’t indicate at this point how the stake would be divided among the buyers, nor whether it would come from the Nokia or the Siemens side.  It’s a classic good-news-versus-bad-news item no matter how it divides, though.

The good news side of this is that nobody buys something that’s worthless.  NSN does in fact have strong assets, and certainly those assets could be leveraged to produce a good return on any private equity investment.  The bad news is the obvious question: if you’ve got good assets that could produce a good ROI, why aren’t they already producing one for you, unless you’re messing up?  Clearly neither Nokia nor Siemens would be looking to sell off a stellar activity.

But there are reports that the “managed services” space that Alcatel-Lucent, Ericsson, and NSN all crave a share of is expanding; Ericsson won a 3 Italia deal to revamp that operator’s IT processes.  It’s not exactly a giant deal, and in any case it isn’t a broad endorsement of an outsource-based service-layer strategy.  Operators tell us that they’re happy to outsource stuff that’s a cost center, that has no direct competitive impact, and that depends on skills they don’t have and don’t want to develop.  They’re less sanguine about outsourcing what makes them profitable.  I think that the question here is whether the private equity guys are drinking the PR Kool-Aid on managed services, or whether they see that changes need to be made in NSN’s service-layer positioning and are confident they can make them.

We said in our 2009 analysis of vendors that NSN needed to sing prettier at the strategy level to create service-layer-strategic traction with buyers.  We also said that such traction would be increasingly critical to success and to sustaining margins at lower layers in the network.  The problem is that our surveys have shown that NSN lost credibility in the period since that analysis.  While their worst dip was from the fall of 2009 to the spring of 2010, they’ve gained little ground between spring and fall, and in some key areas (like the radio network in mobile infrastructure) they actually lost slightly.  There’s absolutely nothing wrong with their product line or their technical skills here—their problem is purely marketing/positioning.

That’s the centerpiece of the dilemma that confronts any organization that buys a piece of NSN.  You can believe that managed-services tides will lift all boats, including NSN’s, and that you see this great truth even though neither of the current partners does.  Or you can believe that NSN’s service-layer problem can be solved by singing its song more effectively.  Given that, I’d be looking at creating an NSN choir if I were senior management there!  Otherwise a deal could go sour simply by having the current NSN trends continue in the face of a newly aggressive position by one of the competitors.

Plucking the Differentiation Fruit

Enterprises are pushing through a set of complex political and project dynamics in 2011 according to our surveys.  The changes and their motivations offer us an interesting view on the cross-currents that really define what enterprises buy and how they buy it.  Thus, they offer a vision of what we could expect in terms of competitive dynamics for the balance of this decade, at least.

Over the past couple of decades, spending on IT and networking has oscillated between modernization-driven and benefit-driven.  In rough terms, the former is reflected in the “budgets” for IT spending that are assigned to the IT organizations themselves.  The latter represents special off-budget activity that carries an IT cost component but generally is justified by an operations benefit case.  Over the years, there’s been increased pressure on the budget side, pressure to deliver more applications at a lower overall cost.  It’s this pressure that has created things like server consolidation and its successor concepts.  Over the years, this pressure has been relieved to a degree by the growth in the mission of IT, its expansion to new operations areas.  That pushes up spending and increases the role of IT within the company.

The relationship between benefit-driven and budget spending has proved to be a fairly reliable indicator of whether IT is in an expanding mode or a consolidating mode in the market overall.  Expanding IT means that feature differentiation is easier, because the new missions are not yet committed to current vendors and not necessarily supported by current features.  Consolidating IT tends to empower incumbents and makes TCO the only strategy to argue.  In the last 50 years, the thing that drove the expanding/consolidating cycle was the advent of new productivity-augmenting IT paradigms.  We had cyclical budget behavior based on that for nearly all of IT history, until 2002.

Enterprises have been following a path toward a different paradigm of computing, but their route has been complicated by the fact that like most consolidation measures it has a short focus.  You can get from NY to LA by making a decision at every intersection based on local conditions, but it’s not likely to be a happy (or short) journey.  Server explosions created in the heyday of falling server costs were stemmed by server and data center consolidation.  That’s because support costs were now higher than capital equipment costs.  Now, we’re seeing consolidation in the form of static VM assignment to applications giving way to virtualized resource pools, and enlightened enterprises see these as giving way to private and hybrid clouds.

I think that most of us realize that if you follow a path long enough you get a sense of its destination even if you didn’t have that from the first.  I think most would agree that when that sense of destination is achieved, progress along the path is faster because it’s backed by greater confidence.  So it is here.  But the question for the “market” is how this greater confidence and speed of progress might impact the sales of IT components, and the progress of IT evolution.

The number of IT executives who realize that something more profound than “modernization” is occurring (and is required) has grown significantly in the last year.  We’re creating a new architecture for IT by trying to make IT less expensive and more easily supported.  While those aims are tactical, the same changes in IT paradigm could empower new mechanisms to improve the IT/operations link, and business productivity, in the future.

Any time a new paradigm is on the rise, differentiation opportunity can also be expected to be higher.  The consolidation-project differentiation apples may not be as easy to pick as those on the productivity-differentiation side, but they’re just as sweet.  For all vendors, they represent both the space that must be attacked to gain market share in 2011 and the space that must be defended to sustain it.  There has probably never been a year in recent IT history when strategic and tactical demands focused on the same issue set.  Next year will be one.

Everything Changed Will Change Again

In the last week we’ve seen web attacks, password and private data theft, and, in all, a lot of things that raise the fair question of whether the Internet is becoming the Wild West.  It has been for some time, of course; what’s happening online now is simply a continuation of a set of problems that the Internet community refuses to solve and that the governments of the world are unwilling to confront.

Ultimately we’ll have to deal with the security and privacy issues the Internet is presenting, even though those issues have become more divisive because we’ve delayed so long in addressing them.  The question is how much worse the problems will have to get before the public marshals support for change.  We are, I think, only a couple of years of neglect away from doing real harm to the basic principles of the Internet: its openness and its lack of a tie to a specific business model.  It would be tragically ironic if we lost those benefits largely because of the unenlightened way we’re pursuing them.

We may see some changes at least in the US in 2011, and there are also signs that Europe may be taking some steps.  I could offer as proof all manner of arcane regulatory comments and trends, but more convincing is the sudden decision by Google to be much more accommodating to the telcos, and even to ally itself with Verizon in proposals for neutrality.  Google is also obviously planning its own transition to a broader business model than advertising, recognizing that only paid services can expand its total addressable market fast enough to sustain its stock price.  Google knows that it’s one thing to offer free best-efforts delivery of content and another to offer paid delivery—in the latter case you’ll have to provide some assurance things will work, but more significantly you’ll have to share the revenue.

Speaking of changes, it’s interesting to see that the Comcast vision of the future of video seems to be emerging.  First, Comcast has forced Level 3 to pay more to enable delivery of Netflix to Comcast’s customers.  Second, Comcast has been running an experiment in socially-linked video as a means of further differentiating its TV Everywhere online offerings.  But just as the biggest proof point for regulatory changes was indirect via Google, the biggest proof point for a Comcast change may be Verizon’s Seidenberg and his comments on a future Verizon model.  Verizon seems to be saying that they’re prepared to be much more “granular” in their video offerings and in their broadband pricing.  On the surface that would seem to be undermining their own FiOS model, but what it’s really doing is exploiting the fact that OTT competition in video hurts the down-market competitors more than up-market Verizon.  If Comcast is one of those, then Comcast has to embrace a bit of the technology of cord-cutting to avoid losing to the business model the technology represents.

TV is changing, but I wonder if the changes are as radical as some say; certainly my own research doesn’t bear that out.  Does the average household watch 13 hours of TV per week as one study shows?  I don’t know of any typical household where that would be true, do you?  It is true that people are spending more time online.  It is true that online time is pulling some viewers away from TV, but so far as I can tell this is what I’ll call “settle-for” viewers.  There’s nothing on they like.  They used to settle for something they sort-of-liked, but now they check Facebook instead.  That’s not destructive to TV viewing; wait till they skip their favorite shows to do something online before you start to worry.  Thus, Comcast’s experiments with “social viewing” may be at least on one potentially valuable path.  We’ll probably see many more experiments like that in 2011.  Meanwhile, reports that things like Netflix are going to kill channelized video are, to quote Time Warner’s CEO, like thinking “the Albanian Army is going to take over the world”.  The establishment has time to work some magic for sure.

Microsoft is also trying to change, and according to the latest rumors from the WSJ it will be launching not only a new line of tablets at CES but also a preview of Windows 8.  The challenge for Microsoft in the tablet space is formidable, because tablets are seen today as a kind of fat smartphone without voice rather than as a laptop without a keyboard.  Any tablet win that promotes that simple model is a loss for Microsoft.  It’s not a big player in the smartphone space, it’s not a recognized consumer cloud powerhouse, and a tablet strategy would almost have to be synthesized from both of these fundamental elements.

But why Windows 8?  The problem that’s been reported is that Windows 7 is too gadget-intense for a tablet GUI where real estate is limited; the buttons become too small to manage.  Some have pointed to the fact that when netbooks with Win 7 appeared, they often ran into trouble with applications whose window-sizing strategy assumed a specific display form factor, and so cut off the bottoms of menus and other windows when displayed on a netbook.  But just having a new version of Windows doesn’t establish a new GUI; developers would still have to embrace the change, and Microsoft would be breaking the momentum of Windows 7 at a time when that momentum may be critical.  How long would it be before users realized Microsoft was going to churn OSs every couple of years, and jumped ship to the thin-client-and-cloud approach?  Which would take us back to the tablet as the ultimate thin client.
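The netbook clipping problem comes down to an assumption about available pixels.  Here’s a minimal sketch (a hypothetical layout helper of my own, not any actual Windows API) of sizing a window against the real display height rather than a designed-for one:

```python
def layout_height(display_h, desired_h, reserved=40):
    """Clamp a window's height to the usable display area.

    Apps that skipped a clamp like this, assuming desired_h would
    always fit, were the ones that lost the bottoms of their menus
    on netbook screens.  'reserved' approximates taskbar pixels.
    """
    usable = display_h - reserved
    return min(desired_h, usable)

# A dialog designed for a 768-pixel-tall display fits there...
print(layout_height(768, 700))  # 700
# ...but must shrink, scroll, or re-flow on a 600-pixel netbook panel:
print(layout_height(600, 700))  # 560
```

A new GUI framework can make that kind of adaptation the default, but the point stands: the framework doesn’t help until developers actually target it.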

Everything is circular, I guess.

Leading Up to a Critical Decision

The holiday season is always dominated by consumerism, but it should be pretty clear to everyone that networking itself is increasingly dominated by the consumer.  I think we’re headed very quickly for a time when the consumer essentially funds all public networking and sets its design paradigms and economic trade-offs.  Along the way, though, we’re facing some potentially significant hurdles and shifts in course.

The Internet has already made public IP infrastructure the basis for public networks, though of course that infrastructure tends to be less homogeneous than many realize.  Ethernet is a smarter edge strategy, for example, because most consumer services will haul traffic to either a metro off-ramp or a metro cache/server farm; you don’t need a lot of connectivity to get to one place.  Still, the Internet has won IP a victory at the service layer, where the IP address space is the only framework we could expect to see in the network of the future.

This month, we’re heading to a kind of financial watershed with public network services.  There’s been a surge of growth in online services funded by advertising, but advertising represents only a fraction of the money needed to fund a public network, and recent legal disputes (on Interclick’s history-tracking, for example) show that advertising-related sites are pushing the limits of public and judicial tolerance in a quest to tie up those limited dollars.  Ultimately people have to pay for stuff to fund a three-trillion-dollar-worldwide industry like networking.  The FCC is likely to set the boundaries of where pay works and where it doesn’t in its December 21st order on net neutrality.  But whatever they do, there’s no turning away from the fact that advertising isn’t ever going to fund the public network, so something else has to.
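A rough check on that “fraction” claim, using round, illustrative figures of my own rather than audited industry data:

```python
# How far does advertising go toward funding the network industry?
# Round, illustrative figures -- assumptions, not audited data.
online_ad_spend = 70e9     # rough global online ad spend, dollars/year
network_industry = 3e12    # worldwide networking industry, dollars/year

coverage_pct = 100 * online_ad_spend / network_industry
print(round(coverage_pct, 1))  # ~2.3 -- ads cover only a few percent
```

Even if you assume the ad number several times over, the gap is still enormous, which is why somebody ultimately has to pay for services directly.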

Consumers would love a free Internet, just like they’d like free automobiles, homes, or cheese.  That doesn’t make the concept practical, even in a political climate where give-aways are the rule and not the exception.  We’ve taken free-ness about as far as we can at this point; even Google I think understands that it has to move from being totally ad-driven to having some set of for-pay products and services.

What the FCC’s order will do is establish the legal framework for an Internet that’s cooperative in a broader way than at the pure connectivity level.  What’s needed is the same today as it was back in the mid-90s when I participated in an attempt to bring financial order to the Internet by creating a formalized mechanism for peering and settlement that included QoS.  We have the technical means to do what’s necessary, but we don’t have regulatory air cover.  The question now is whether we can get it.

Genachowski’s attitude on net neutrality appears to have undergone a transformation, and at the same time the Comcast/Level 3 settlement seems to open the door for settlement among content providers, CDNs, and access providers.  Any settlement at all here would be better than what we have, but Comcast/L3 doesn’t go far enough.  It comes down to a question of whether the relationship is “peering” or “transit”, and neither concept goes far enough because both are simply different ways of viewing the permitted traffic balance.  There’s still no QoS-based settlement, and without it the Internet can’t provide pan-provider quality of experience.  If we can’t settle QoS-based relationships across ISP boundaries, there will be a transformation of investment and of Internet architecture: such limitations favor investment in caching over interconnection, and favor larger and larger players creating fewer and fewer inter-provider boundaries.  We may start to hear some details of the forthcoming order leaked this week, in advance of the meeting.  Pay attention; it could be critical.
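To make the peering-versus-transit contrast concrete, here’s a hedged sketch with invented prices and traffic volumes; no real interconnect agreement works from numbers this simple.  The thing to notice is that neither settlement model contains any term for QoS:

```python
def settlement(gb_sent, gb_received, price_per_gb, ratio_limit=2.0):
    """Monthly settlement between two interconnected ISPs.

    Settlement-free peering: no money changes hands while the
    traffic ratio stays inside the agreed bound.  Outside it, the
    heavier sender pays transit-style rates on the excess traffic.
    Note that nothing here reflects the handling the traffic gets.
    """
    heavy, light = max(gb_sent, gb_received), min(gb_sent, gb_received)
    if heavy <= ratio_limit * light:
        return 0.0                      # balanced enough: free peering
    excess = heavy - ratio_limit * light
    return excess * price_per_gb        # out-of-ratio traffic pays

# Roughly balanced flows settle to zero:
print(settlement(100, 80, 0.25))    # 0.0
# A 5:1 imbalance (think a CDN pushing video) triggers payment:
print(settlement(500, 100, 0.25))   # (500 - 2*100) * 0.25 = 75.0
```

A QoS-based settlement would have to add a per-handling-class price term to that formula, and that term is exactly what both the peering and transit models lack today.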

Economic Recap: December 10th

The economic situation worldwide continues to become more clear and more stable, though it’s sometimes hard to glean that from the media coverage.  Yes, there are still issues aplenty, but under all the swoops and swings of financial news and even financial markets, there is a clear sense that we’re trying to manage a recovery now and not trying to prevent another slump.  Volatility is a price we’re going to pay here, but it’s not a bad price overall.

The recovery-management process has created what are essentially two polar extremes in terms of global economy.  One, epitomized by China, is trying to manage a growth explosion that threatens to destabilize not only the economy but even the political system.  The other, epitomized by the EU and in particular the UK, is trying to control the cost in public debt that stimulation policies have created.  The US is somewhat the man in the middle here, for a lot of reasons.  Because the US is the largest global economy and by far the largest consumer market, everyone sees it as a potential ladder for their own growth.  Because of the size of the US economy, government has been prepared to risk a higher debt level than the Eurozone, and recovery here creates a shadow of the inflation issues of China.  In the middle again.

But US trade numbers this week were very favorable.  That’s clearly not because imports have sagged; the consumer economy in the US is recovering.  Exports to China and other emerging markets (Mexico, for example) were stronger.  This demonstrates that the world isn’t acting as a brake on US growth, at least with current trade and currency balances.  The world is, however, clearly acting as a brake on employment growth; US companies still tend to offshore jobs to reduce labor costs.

The challenge of the here and now, in truth, is more political than economic.  Look under both our poles and the US middle ground and you see economic processes that are polarizing the population or threatening to.  China has to effectively bribe farmers to stay on the farm.  In the US, financial prosperity for business comes at the price of decades of essentially zero-growth incomes for most workers.  In Europe, populations threaten disorder in response to austerity measures that are seen as “populist” in societies that aren’t used to having no wealth to redistribute.

The current Bush Tax Cut fight is an example of this political tension, but of course it will ultimately pass, with perhaps the sacrifice of an estate tax break that we think was likely put in the deal to be sacrificed in the first place.  The good news is that the bill will net to a positive economic impact for 2011, likely raising GDP by at least a quarter-point and perhaps a half.  But the bad news is that it won’t really “create jobs” of any quality.  The fundamental problem of labor as a cost in a business with a profit goal has not been solved, nor has technology (for the first time in fifty years) offered the prospect of a solution.  We are restructuring the nature of employment, and that will of course exacerbate the political polarization.

The issues between China and much of the West, especially the US, also loom as an example of polarization.  Success at the economic level is stressing China, creating a risk of internal strife.  For political reasons, it’s convenient for the West to fan that a bit, and the combination of the North Korea crisis and the Nobel Peace Prize debates is creating more tension with China than usual.  That seems to be polarizing political opinion within China, hardening the hard-line elements.  While we don’t think this is much more than a lot of song and dance on both sides, it does raise the risk of a slip that would create more economic pressure and uncertainty, and so at the moment it probably represents the greatest threat to world economic recovery.

In Europe, Ireland is struggling with austerity measures, and other countries like Spain and Portugal realize the cost of succumbing to the bond raiders.  The EU has been signaling that it will intervene more effectively, making attempts to attack sovereign debt or national banking more risky.  That process may not yet be over, but financial industry moderates realize that pushing too far will rekindle a demand for regulations that would be truly effective, and thus curtail the industry’s free-market piracy.  That would be good, but another fallen sovereign financial pillar is a high price to pay.  Some regulatory sanity is in order, but money talks in politics and we probably have to see much worse to do much better.

The Annual Technology Forecast issue of our technology journal Netwatcher is coming out this month as usual, and we’ll be looking at the world economy and technology markets in depth there.  This issue will run over 25,000 words, making it the most thorough appraisal of the new technology year that’s likely to be provided anywhere, by anyone.  More economic details, including forecasts for growth, will be provided there.

Circling Chrome

Google’s let the industry have its first look at Chrome OS, which it sees as the framework for a “cloud client” device.  The platform combines a Google desktop position with one in the smartphone space (Android) and a service-side position (Google’s cloud services) to create a new and complete (yes, and completely Google) solution for future computing and communication.  I don’t think Google has grandiose visions of owning the computing/networking world, but I do think that they’re thinking through the process of ecosystemic computing more seriously and effectively than most.  The commercial launch of Chrome OS may be delayed, but some of the impacts may be visible even before the launch.

If you look at Chrome OS and Chrome (the browser on which the OS is based) what you see is a reflection of the fundamental truth that cloud computing is not a ceding of computing to the cloud, but a rebalancing of computing activity between the cloud and the client.  In effective cloud computing, the process-intensive tasks of information editing and display should be pushed outward to the client to reduce the impact of these tasks on central resources and to ensure that the network connection to the client doesn’t become congested with a lot of unnecessary display-oriented babble.

A good example comes from a Google demonstration of the WebGL 3D rendering framework.  If you want to show sharks swimming you can either send pixel-by-pixel information on the successive positions, send compressed information on the same thing, or send objects that can be locally rendered.  The last of these will be better than either of the first two from a cloud performance perspective.
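To make the bandwidth argument concrete, here’s a back-of-envelope sketch of the three delivery options.  Every figure in it (resolution, frame rate, codec ratio, model and update sizes) is an illustrative assumption of mine, not a number from Google’s demonstration:

```python
# Back-of-envelope comparison of the three delivery options described above.
# All figures are illustrative assumptions, not measurements.

WIDTH, HEIGHT = 1280, 720        # assumed client display resolution
BYTES_PER_PIXEL = 3              # 24-bit color
FPS = 30
SECONDS = 60

# Option 1: ship every rendered frame, pixel by pixel, from the cloud.
pixel_stream = WIDTH * HEIGHT * BYTES_PER_PIXEL * FPS * SECONDS

# Option 2: compress the frames; assume a generous 100:1 video codec ratio.
compressed_stream = pixel_stream // 100

# Option 3: send the scene objects once, then tiny per-frame position
# updates, and let the client (WebGL) do the rendering locally.
MODEL_SIZE = 5 * 1024 * 1024     # assumed one-time scene/model download
UPDATE_SIZE = 200                # assumed bytes of object positions per frame
object_stream = MODEL_SIZE + UPDATE_SIZE * FPS * SECONDS

print(f"raw pixels:  {pixel_stream / 1e9:.2f} GB per minute")
print(f"compressed:  {compressed_stream / 1e6:.1f} MB per minute")
print(f"objects:     {object_stream / 1e6:.1f} MB per minute")
```

Under these assumptions the object stream is roughly three orders of magnitude smaller than the raw pixel stream and still far smaller than the compressed stream, which is the whole point of pushing rendering to the client.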

I think it’s clear that Google is thinking, but what’s not so clear is whether they can actually achieve their goals of creating cloud dominance, and if they do, whether they can monetize their success there.  I pointed out yesterday that some of Google’s recent ventures, like Nexus S and Editions, seemed to be a bit less than half-baked in terms of maturity of the business plan.  Chrome OS has been around conceptually for a long time, and so have Google’s cloud aspirations, and that the two were related is no secret.  But it’s how they might relate in a business sense that’s hard to see.  Does Google think it can sell enterprises ads that display alongside their workday applications?  Seems doubtful.  Does Google then think that Chrome OS and its cloud approach are a consumer solution to computing?  If so, then why make such a fuss about things like replacing Microsoft’s Exchange or SharePoint?

Whether Google makes a success of Chrome OS or not, though, they’re going to show us some things about computing in the future.  The network isn’t going to be the computer, I think, but it’s going to be one of the computers, a new kind of partner in a much fuzzier relationship between users and computational tools.  In that new relationship, there will also be a lot more to worry about in terms of how each piece integrates with the other pieces, and likely more functional segregation of tasks than administrative segregation.  The GUI-versus-application thing is an example.  A cloud application, like all applications, will have a network subsystem, an application subsystem, and a database subsystem that serve the user appliance.  Some smarts will reside in all of these places, and those smarts will be marshaled in some coordinated way to serve the mission.  We’re creating a future that blends SOA principles with principles of GUI design, database design, device design, security, and connectivity.  It’s the creation and sustaining of that complex web of stuff that forms the opportunity for the future, and also its challenges.
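The subsystem split described above can be sketched in a few lines.  The class and method names here are purely illustrative (they come from no real framework); the point is only that each subsystem holds some of the smarts, and an application layer coordinates them to serve the user:

```python
# Illustrative sketch of the network/application/database subsystem split.
# All names are hypothetical; this mirrors the text, not any real product.

class NetworkSubsystem:
    """Stand-in for transport: fetching a resource across the network."""
    def fetch(self, resource: str) -> bytes:
        # A real subsystem would handle transport, retries, congestion, etc.
        return f"payload:{resource}".encode()

class DatabaseSubsystem:
    """Stand-in for local persistence, here a simple key/value cache."""
    def __init__(self):
        self._store = {}
    def put(self, key: str, value: bytes) -> None:
        self._store[key] = value
    def get(self, key: str):
        return self._store.get(key)

class ApplicationSubsystem:
    """Coordinates the other subsystems to serve the user appliance."""
    def __init__(self, net: NetworkSubsystem, db: DatabaseSubsystem):
        self.net, self.db = net, db
    def serve(self, resource: str) -> bytes:
        cached = self.db.get(resource)
        if cached is None:           # local smarts: check the cache first
            cached = self.net.fetch(resource)
            self.db.put(resource, cached)
        return cached

app = ApplicationSubsystem(NetworkSubsystem(), DatabaseSubsystem())
app.serve("shark-model")   # first request goes over the "network"
app.serve("shark-model")   # second request is served from the local cache
```

The design choice the sketch illustrates is the fuzzier relationship the text describes: the caching decision lives in the application subsystem, not in any one tier, so moving smarts between client and cloud is a matter of where these pieces are deployed rather than how they’re written.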

But there’s more!  Part of the Google Chrome OS preview was a comment that at least the prototype netbooks that will be deployed in the extensive pre-release test will be equipped with Verizon wireless services.  Google and Verizon, once seemingly irreconcilable enemies, are showing increased coziness.  Their net neutrality proposal, which was hated and criticized by everyone including FCC Chairman Genachowski, looks a heck of a lot like what’s likely to emerge from the FCC’s December 21st public hearing on the topic.  It’s all about ecosystems, again.

A pride of lions that eats all its prey species quickly dies off too.  Google knows that as the OTT giant du jour, it can’t afford to let the problems of disintermediation become critical enough for operators like Verizon to reduce network investment or impose usage pricing with tiers that result in what are effectively taxes on new applications.  When I survey users about pricing sensitivity, the results are probably unsurprising at one level.  They want unlimited-usage pricing the most.  They want low-threshold usage pricing the least.  In between, what they’d prefer as an alternative to the latter is application-specific pricing, meaning that they’d like to see any premium charge for usage bundled into the charge (which they or advertisers pay) associated with the application or experience.  That way they don’t have to worry about a secret price being added to the visible price and called due later on with their monthly bill.  So it may well be that Google recognizes that the Comcast/Level 3 deal, whatever the rights and wrongs of how it should be characterized might be, is still the right industry answer.  Charge for what the user wants, all at once.  That means having the content provider collect and settle with the access provider.

What about Genachowski’s kiss blown at usage pricing, then?  It may be that he’s simply waving a troll at the kids, creating a threat that makes a spindly carrot look more appetizing.