Policy on Advertising, Submissions, etc.

There’s a lot of complicated wording in these areas on many sites, all sorts of talk about click-throughs and what constitutes an actual serving of an ad, and how someone might suggest a topic.  Well, we have good news for you.  Our own situation is a lot simpler.

We absolutely do not accept ads, and we’re never going to.  Don’t ask about it.

We absolutely don’t accept suggestions for topics or contributed material.  I’m Tom Nolle, this is my blog, and every word is written by me and represents what I believe.  So don’t send suggestions.

We will never, never accept any form of payment or favor in return for saying something here.  What I say is what I believe, period.  Don’t offer anything.

If you’re a reader, a fan of this blog, then I thank you for your interest and loyalty, and I offer a promise in return.  This is uncontaminated information.  Nobody influences me in what I say; it’s my very best effort to tell anyone interested what’s going on in our very complicated industry.  There aren’t many things you can believe, or believe in, in the telecom, media, and technology markets today.  It’s my goal to make this blog one of those things, and that’s my promise to those who read it.

Please see the note on quoting this blog if you plan to share the information in any way!

Forget “Dim Sum”; Think “RIM Dumb!”

Anyone who thinks that being a giant, THE giant, in your market makes you safe needs to be looking at RIM this morning.  The fabled Blackberry vendor has sunk to a loss, replaced its CEO, and is clearly in a world of hurt, and the reason is that it forgot the simplest rule: you have to be leading in the OPPORTUNITY space rather than in SALES or in the BASE to be even slightly safe.  Now they’re giving up on the consumer market at the very time when BYOD makes it clear that there’s probably no other market left.

When a vendor ships 80% of the units sold, they don’t have the market locked up, because the majority of prospective buyers may not have purchased anything yet.  What Apple did in smartphones was to expand the total addressable market (TAM), and do so radically by including virtually every phone user in the new definition.  They then gained a big lead in that new and broader market, which was bad enough.  What was worse was that once consumers could get their own smartphones they wanted to use them for business rather than carry two phones.  Thus, BYOD.
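
The gap between share of units sold and share of the addressable market can be made concrete with a little arithmetic.  The figures below are purely illustrative, not RIM's or Apple's actual numbers:

```python
# Illustrative (hypothetical) figures, not actual RIM/Apple numbers.
units_sold = 10_000_000      # smartphones sold so far in the old market
incumbent_units = 8_000_000  # the incumbent ships 80% of those
old_tam = 20_000_000         # "business users" definition of the market
new_tam = 200_000_000        # every phone user, after the redefinition

share_of_sales = incumbent_units / units_sold
share_of_old_tam = incumbent_units / old_tam
share_of_new_tam = incumbent_units / new_tam

print(f"Share of units sold:   {share_of_sales:.0%}")    # 80%
print(f"Share of old TAM:      {share_of_old_tam:.0%}")  # 40%
print(f"Share of expanded TAM: {share_of_new_tam:.0%}")  # 4%
```

An 80% share of sales can mask a single-digit share of the opportunity once the addressable market is redefined, which is the trap described above.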

What makes RIM’s move to focus on its roots idiotic is that its roots have been eaten already, and the trend that’s eating them is the trend it’s fleeing.  There is no chance of success in any market that’s shared by consumers and business other than to be successful on the consumer side.  We proved that in PCs, and we’re proving it in smartphones, in tablets.  You think RIM has fallen on bad times?  Just wait.

Google is facing the Apple demon too, and a bit differently.  In the smartphone space the Android handset partnership strategy worked well, and Google has managed to take the top spot, even if it’s collectively rather than for a single device/model.  In tablets that’s not working out as well for Google, in no small part because the vendors aren’t keeping their devices up to speed in Android releases.  Google has decided that it’s going to sell tablets, and obviously it will have tablets to sell from MMI at the least, though other vendors are mentioned in the early report.  I think what’s happening here is simple: Google knows that Android can compete with the iPad only if there’s essentially only one Android, and that means getting all the tablet players to do two things: organize their value-add so it can be layered easily onto new releases, and commit to keeping their devices on the latest release, at least for the core Android.  That’s where the new Android builds are heading, I think.  But to get the tablet players in line, Google has to give them something to fear, like Google stepping in.

You might think that Google would worry about Windows 8 here, but apparently they’re not, and they’re probably right not to.  Microsoft was never a marketing company in a real sense.  MS-DOS was a serendipitous success, and from there Microsoft simply exploited early momentum.  They don’t have tablet or phone momentum to support.  Yes, players like AT&T will jump in to “support” Windows phone models, but that’s in large part because they don’t want to see any single standout handset.  If there is one, as the iPhone proved, then you have to cut confiscatory deals with the vendor to carry the device or lose market share to a carrier competitor who does.  The carriers would love to break the device duopoly, but that’s going to be a very hard thing to do given that Windows 8 is the only hope, and that it won’t be out until the fall, at the earliest.  Even then it’s not likely it will be ported to non-Intel devices.

Other incumbents might want to look at RIM’s decision, networking players in particular.  Any major market shift creates new buyers, and their activity will at least initially, and perhaps perpetually, outweigh the old.  The difference between resting on your laurels and lying under them is only a matter of inches.

 

Finding the Golden Link

This week’s activity seems to me to be pointing to the future course of network equipment.  On the one hand, we have Huawei reporting $32 billion in sales, good growth year over year, and demonstrating that networking from their perspective is a value market.  On the other hand, we have Cisco making moves (including its acquisition of ClearAccess, a home-network-gateway-management company) that suggest that they believe there’s still a place for features.  How that particular dynamic plays out will decide whether the US has any future as the kingpin in network equipment.

I think Cisco’s been making smart moves for the last year.  The first thing they did when their credibility numbers started to slip in our survey was to beef up their strategic sales engagement, and it’s been working exceptionally well.  They then started in on programs to add value to the higher layers of their offering—the NDS buy and the recent ClearAccess deal.  This would boost their ability to engage network operators to help them with profit growth, which is what every equipment seller needs to be doing.  But does Cisco have this figured out?  Not yet.

The networking budgets of the future are going to consist of layers of inexpensive stuff that pushes bits around, topped by a veneer of service intelligence that creates something profitable out of the bits.  Cisco knows, more than any player in the market except rival Huawei, that it is not possible to be even a decent-margin competitor in those low layers over time.  Thus, if you want to sell the bit-pushing stuff that creates the revenue but not the profit, you need to think VERTICAL INTEGRATION of your service veneer and your bit-pushing.  And that, friends, is what’s been hard—for Cisco and for everyone else.

Talk to a network guy about service-layer integration and it’s like pulling the string of a ventriloquist’s dummy—the words “QoS” come out automatically.  The best thing that could happen to network vendors today would be for the Good Market Fairy to wave her wand and make their tongues cleave to their palates when they tried to utter the words.  Vertical integration between network and service layers has to be via something that’s directly merchandisable, or it’s not going to do any vendor any good—because it won’t help the buyer make money.

Curiously, none of the vendors seem to get this.  Everyone else in the marketplace does, and so everyone is waiting for some big stumble and fall, looking for the Lilliputians who will fell Gulliver.  That’s what’s behind the current notion that OpenFlow switches are going to rise and smite the mighty.  Wrong.  OpenFlow is just a concession to commoditization, and there’s no smiting in commoditizing markets; it’s more like death from erosion.  Where the real risk of a sudden loss exists is in that critical binding of network to service.  A success there, by ANYBODY, could create a real market wave, because we’re entering a future that redefines the relationship between consumer and network service, and the magnitude of that relationship’s impact is simply incalculable.  There are trillions of dollars in service revenues at stake, and the total infrastructure impact could run into hundreds of billions of dollars, enough to make anyone a real contender.

Everything we talk about in networking, whether it’s online advertising or streaming video or even QoS, is aimed implicitly at a cost-driven evolution.  That’s not going to build the network of the future; it’s going to undercut the network of the present.  The solution to industry stagnation and infrastructure commoditization is the same no matter what the industry is: RAISE THE TOP LINE.  That’s what somebody is going to do; it’s just a question of “who?” and “when?”

Two Market Lessons from Comcast

Comcast is joining with the telcos in offering a hosted PBX and UC product, and the move is significant both for the overall competition in the UC space and for the evolution of voice and UC overall.  Cisco is expanding its Jabber UC, and also linking it better with telepresence.  Sprint is partnering with Cisco in cloud UC too.  But the big guns have yet to make their move in this space, and when and if they do it could be a major market-changer.

Cisco’s Jabber update and integration with telepresence is no slouch from the perspective of a change in the dynamic.  It’s a strategy for the cloud, for enterprises and service providers (as Sprint is showing).  It may be a goad for Google, because Sprint is the only operator to have actually integrated its service (wireless of course) with Google Voice, a product with great potential but which has been stalled in its current shape for a long time.  The telepresence integration is important to Cisco, too, because of the way collaboration works in the real world.

Our research on this particular topic is extensive, and what I’ve seen is that most intense collaboration (the kind you use telepresence for) evolves out of simpler attempts to get something done.  You try the phone, you engage by sharing a document view, and you then move to video.  This pattern means it’s really hard to get to the finish line if you keep trying to bypass the starting gate.  Chambers, who doesn’t understand why people just don’t buy routers till they bleed from the ears, kind of skipped that part of the collaborative lesson.

Cisco may be enough of a threat to lever the other guys, of course.  I already noted that the Sprint deal might give Google some pause, and some of the features of the Jabber UC platform seem to hit at where Microsoft is likely to take Skype.  Here the problem for Cisco is the free platform side of Skype.  Cisco doesn’t want to become a provider of free communications itself, nor do Cisco’s carrier customers.  The latter are not averse to having a low-cost UC platform, but as the AT&T, Verizon, and now Comcast moves show, they’re far more comfortable with a digital-voice PBX hosting mission than with a full-blown UC mission.

Which gets us to Skype and Microsoft.  Everyone expects that Microsoft will create a kind of super-Skype that will build on the basic UC capabilities of the free platform without driving away the current massive user base.  At the same time, Microsoft has a potential conflict with its own customers on tap, because Skype is a revenue-drainer for the telcos and Microsoft still wants telco partnerships for cloud services.  They need free Skype, they need Skype integrated with SharePoint, and they need super-Skype as a carrier platform.  That’s a lot to get done.  Google could move faster; Cisco could have.  Will it be left to Microsoft, hardly the market’s leading innovator these days, to make the move?  If so, there may be a lag here, because I think Microsoft wants super-Skype linked to Windows 8 and its phone and tablet strategies.  There will be a tendency to keep adding bows and buttons and laces, and pretty soon you don’t have a dress, you have a blob.

Comcast is in the news for a reason beyond UC, and it’s neutrality (again).  Comcast’s traffic engineering practices and appeal of an FCC ruling were what got us neutrality regulations in the first place.  Now, even as those regulations are being appealed, Comcast is pushing the envelope again.  Its Xbox streaming service uses the carrier’s “off-net” IP bandwidth, the same bandwidth that Comcast uses for voice and that AT&T uses for video.  The purpose for now is to keep the traffic from hitting the bandwidth caps, but it could also provide enhanced QoS.  This isn’t specifically prohibited by the neutrality rules, but it’s something the FCC said it would watch because it could become a threat.  Some are now saying that time of threat has arrived.  AT&T has also indicated that app developers would be able to pay for the traffic their apps use, and thus keep it from being charged against the customer’s usage.

Operators are pretty confident that the FCC can’t stop “bypass-Net” here for premium services, which means that things like cloud services are also likely to jump off the Internet bus even at the access level.  Given that they already do that inside the cloud (see my prior blogs) and within CDNs, we’re seeing a pretty quick trend toward creating a different model of services that won’t be based on an open and incrementally free connectivity fabric.  That threatens the Internet model, but the model is already threatened by the fact that profit on incremental capacity is impossible to earn in the current service model.  We may be facing a choice between two poles of solution that people aren’t going to like.

 

OpenFlow: To the Cloud?

If demand-side issues are driving changes in the industry, then it’s fair to ask where the industry is going.  We talked yesterday about the major drivers, and today I want to talk about the major technology shifts a bit more.  In particular, I want to make the connection between the changes in opportunity and the changes in technology.

The Internet is a cheap bandwidth fabric, and as such it’s not the sort of thing to attract a lot of interest from those with ROI targets.  However, the Internet is here to stay in no small part because there’d be an international revolution if you tried to do away with it.  Thus, we’re seeing the changes in demand that we’ve talked about create changes on top of the Internet.

We can visualize all of the demand-side forces acting to create a broader notion of the Internet as a cloud—broader because the new notion involves the information and processing resources needed to support decisions.  In a way, we’re drawing the hosts into the network because apps and Siri-like personal agents really act as intermediaries, hiding the sources and resources because those indirect agents do all the interacting with them.  Google does a lot of this today, and so do CDNs.  So as we cache and disguise things, what happens?

With relatively few points of cloud hosting to worry about, likely most cloud providers would create fat optical pipes between data centers and thus create what would be effectively a flat data center MAN/WAN.  The technology a lot of people favor for this is OpenFlow, and I think that it’s logical to assume that OpenFlow could play a major role.  In OpenFlow, the forwarding rules are created by a central software controller, hence the term “Software-Defined Network” or SDN.  The strength and weakness of OpenFlow is this explicit path control.  It doesn’t scale to Internet size, but in the cloud model it doesn’t need to because the internal connections aren’t between users, they’re between agents and resources, and among the resources themselves.  The structure isn’t “open” like the Internet any more than a CDN is in its internal paths, so it doesn’t need Internet flexibility and scale.  The Internet model is a liability inside the cloud, in fact.
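
The division of labor that makes OpenFlow an SDN can be sketched in a few lines.  This is a toy model of the idea, not a real OpenFlow API; all the class and port names are invented for illustration:

```python
# Toy sketch of the OpenFlow/SDN idea: switches hold simple
# match -> action rules, and a central software controller with a
# global view of the topology installs them on demand.

class Switch:
    def __init__(self, name):
        self.name = name
        self.flow_table = {}  # match (destination) -> action (output port)

    def forward(self, dst, controller):
        if dst not in self.flow_table:
            # Table miss: in OpenFlow the packet header goes to the
            # controller, which computes a rule and installs it here.
            self.flow_table[dst] = controller.compute_rule(self.name, dst)
        return self.flow_table[dst]

class Controller:
    """Central controller: the 'Software-Defined' part of SDN."""
    def __init__(self, topology):
        self.topology = topology  # global map: (switch, dst) -> port

    def compute_rule(self, switch, dst):
        return self.topology[(switch, dst)]

# Two data centers joined by a fat optical pipe; the controller knows
# the whole map, so explicit paths are trivial at this scale.
ctrl = Controller({("dc1", "dc2-storage"): "port-7"})
sw = Switch("dc1")
print(sw.forward("dc2-storage", ctrl))  # first packet consults the controller
print(sw.flow_table)                    # rule is now cached in the switch
```

The scaling limit is visible in the sketch: the controller must hold an entry for every (switch, destination) pair it manages, which is fine among a few data centers but hopeless at Internet scale.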

The key point here is that we should not look for something like OpenFlow to become the architecture of the Internet.  It’s the architecture of the cloud, and most specifically of the two-layer inside-outside model of the cloud.  I think that model will prevail, and so I think OpenFlow will prevail—eventually.

Remember my rule, though.  Only new revenue can drive revolution in infrastructure.  In order for the OpenFlow revolution to happen the cloud revolution has to happen, and today’s cloud computing isn’t focused on making it happen at all, it’s focused on the cost-based optimization of tech resource consumption by enterprise IT.  I’m covering this whole issue in greater depth in the next (April) Netwatcher, but for now let me say that the total revenue available to the cloud model I’ve been discussing is nearly an order of magnitude more than the revenue available from displacement of enterprise IT from data center to cloud.  That’s enough revenue to pay for a tech revolution, but we won’t get it all at once.

A year or two from now you’ll probably be consuming some OpenFlow-based services, but you won’t know it.  In fact, if my model is correct there will never be a time when the nature of “the Internet” appears technically different at the client level.  The IPv4 to IPv6 address transition would create far more visible change (and some will make their cloud transformation under the covers).  We’ll need the big Internet address space, the open model, for access to services and for a declining population of legacy websites and services.  We’ll have it, forever, even when a transition to the cloud model of the Internet is complete.  Which won’t be for five to ten years.

I’ve been getting some very optimistic predictions about how OpenFlow is going to push everyone out of the market; how it will kill major vendors.  Not likely.  Not only is the cloud-transition process going to take a long time, it’s going to be evolutionary and thus will tend to favor trusted incumbents.  That’s particularly true given the fact that recent studies have suggested that open source software libraries contain some serious malware vulnerabilities.  I think it’s likely that more and more open-source success will come from having commercial providers take responsibility for the sanitizing of the code and the integration of the elements.

 

What’s Driving the Market Bus?

We’re hearing a lot about the changes in networking, in IT, in pretty much everything technological.  Likely there ARE going to be tumultuous changes, but if we want to understand them, to get the planning-for-the-future process right, then we need to understand what’s driving them.  We tend, in the IT and networking world, to focus on tactical issues rather than the big picture, so let’s broaden out a bit and see what’s creating our future for us.

Broadband mobility is our primary driver right now.  The fact that we can link people to high-powered information and processing resources via a simple portable device is a profound change in the notion of “empowerment”.  What broadband mobility has done is to shift focus from the instruments of satisfying our needs to the instruments through which we receive our answers.  The limitations of the appliances, whether created by technology or simply by form factor, are shaping our future.

Smartphones are the next driver on our list.  A smartphone is the ultimate information appliance because it’s handy and easy to manipulate, but it’s also the most limiting.  We can’t do a lot of browsing and gathering on a smartphone; the form factor, and our desire to use the device while at least semi-engaged in other activities, make that impractical.  Smartphones gave us apps because we needed a way to conveniently get not RESOURCES but ANSWERS.  And because more and more of our regular interactions are via smartphones, more of our expectations are set to “answer-me” mode.  Smartphones, combined with ubiquitous mobile broadband, create the “information-coupled society”.

Tablets are revolutionary in no small part because they’re a slight shift in the trade-off point that smartphones created.  They’re big enough for an immersive experience but portable enough to be considered a personal accessory.  But the most significant thing about tablets is that they’re arguably the first legitimate child of the web.  They’re a browser in a box, impossible before the Internet age.  They do most of what the average person needs an information appliance for (as opposed to a superphone, which does “calling-plus”) and so they relegate personal computers to being productivity tools.  They’ll never replace the PC but they will clearly cut off the “Internet model” driver from PC buying, which means that they’ll likely displace nearly half the total PC demand over time.  Ultrabooks?  Ultra-silly.  If you want a PC buy a cheaper laptop.  If you want portability, buy a cheaper tablet.

The smartphones and tablets are both “related drivers” to mobile broadband.  Our next driver is a bit more orthogonal; it’s the ebb and flow of broadband pricing policy.  Broadband capacity per unit cost, in technology-market terms, is like lemmings.  When there’s a lemming boom, predators explode because the ecosystem is stacked with prey.  Now I’ve been privileged to see more than my share of lemmings, and they don’t march to the sea as claimed, but prey booms followed by predator booms inevitably lead to prey busts.  And you know where that leaves predators.  Our challenge in networking is to ensure that we have a kind of soft landing here, which is challenging because nobody really wants to admit to the problem of low return on capacity—except the operators.

We’re heading into a transformative period, and there’s little question that the explosion of innovation we’ve seen has been the result of the rich growth medium of the zero-marginal-cost-per-bit Internet.  All of our other drivers depend on not having a collapse of capacity or a radical increase in capacity cost, which is why I think it’s so important to consider what could be done about the problem.  The answer is either to make capacity a lot cheaper to produce or to subsidize its production with other revenues.

 

Cut Cords, Cost-Based Clouds? Not.

Friday’s usually a bit of a slow news day, and so it’s often a good day to recap some things that were pushed to the rear of the interest queue by other events.  There are a couple that fit this category that I’d like to explore a bit today, and the first two come from the Internet.

I’ve never been a supporter of the popular notion of cord-cutting, meaning abandoning cable/satellite or telco TV in favor of a pure streaming OTT TV model, and the latest data shows very clearly that cord-cutting is just not happening.  TV viewing is up, and subscribers for TV services rose even in 2011, as Netflix gained its greatest ground.  What I think is clear is that mobile/portable devices and streaming are combining to make it possible for some people to watch things at times they couldn’t or wouldn’t watch TV, or to jump off onto an alternate program when they didn’t like the consensus choice for the home TV.

What this says is that (again despite the hype to the contrary) the TV Everywhere model is probably well-grounded on the viewer-behavior side.  If people are supplementing their channelized viewing, then the best way to support their needs is to offer them alternate delivery of what’s essentially the same material.  Rights to network programming, then, are conveyed with subscription to the broadcast of that material.  Once you’ve got the rights, you can then stream both roughly current and past episodes of your programs.  This is the force that I think is transforming the video space, and also the equipment space.  In the latter, the change is coming about because TV Everywhere viewing is a proactive monetization strategy and it shifts the focus of some early CDN interest from pure traffic management.  I think that players like Akamai who have been public CDN providers and are now dabbling in licensing their stuff to the carriers are doing so because TV Everywhere monetization could make carrier CDNs a reality on a larger scale (but a longer timeline, says my data; the projects take longer to get approved because they have more integration requirements than simple CDN projects).

A related point to this is advertising of course.  According to the analysis of 2011 numbers, online ad growth slowed and in some areas reversed.  Paid search, for example, declined not only in Q4 but for the year overall.  Network TV ad spending rose by almost 8% in Q4, but it was also off a bit for the year.  This suggests that there isn’t a persistent flight from TV commercials to online advertising (and so it may explain why Google is honing its privacy policy to target better).  Interesting how objective data often flies in the face of “classical wisdom”.

Classical wisdom is a problem in the cloud too, if you look objectively at the numbers.  I just saw a survey that said that the main reason why people want to move to the cloud is not flexibility (something the cloud actually delivers) but savings, which the cloud cannot deliver for the average IT app.  So businesses’ expectations for the cloud migration are not realistic; does that mean the cloud will be a tragic failure?  Here’s the thing.  You cannot, now or ever, fund a technology revolution with anything but NEW BENEFITS.  The cheapest way to do something is almost always the way you’re doing it already.  You have sunk costs (unamortized capital cost), commitments to support, and most of all operating procedures and performance benchmarks that you’ve built a business around, and you’re then going to toss all this for something hosted on the Internet?  Believe in cost-based cloud adoption and you’ll see your shadow next time you go out; there will be no clouds to block the sun.
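
The sunk-cost point can be made with toy numbers.  Every figure below is invented for the example, not drawn from any survey or vendor:

```python
# Toy illustration of why "cheaper in the cloud" rarely survives the
# sunk-cost test.  All figures are hypothetical.
server_book_value = 300_000       # unamortized capital already spent
annual_run_cost_onprem = 120_000  # power, support, ops for current gear
annual_cloud_cost = 100_000       # hosted equivalent of the same workload
years = 3

stay = annual_run_cost_onprem * years
move = annual_cloud_cost * years + server_book_value  # write-off comes along

print(f"Stay put for {years} years: ${stay:,}")  # $360,000
print(f"Move to the cloud:        ${move:,}")    # $600,000
```

Even with the hosted service priced below the internal run rate, the stranded capital swamps the operating savings over any planning horizon short enough to matter, which is why new benefits, not cost displacement, have to fund the migration.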

I don’t believe in that model, of course, nor should you.  IBM had part of the truth in their recently published survey—they said that business process optimization would drive cloud adoption.  We analyzed this report in our March Netwatcher.  I believe in business-process-based cloud adoption.  IBM’s only problem was that they said that it was simple cost elasticity and scaling that would be the “revolution” and that’s nonsense again.  We are going to redevelop our model of IT, and our model of the Internet, based on the cloud and the changes will touch every aspect of every business and every consumer’s life.  But it’s not instant gratification.  This is a long process, which is why nobody bothers to hype it.  For those vendors who can stay the course, there’s an opportunity here the like of which we have never seen before.  There are hundreds of billions of annual dollars on the table, more new revenue than there has ever been, and new benefits fund revolutions, remember?

 

Seeking Direction: HP and MSOs Both Struggle

HP has been having its problems, obviously, and they’re interesting from a whole-industry perspective because HP is both a broad-based player and a potential major contender for leadership in the future network/IT fusion.  The problem right now is that they’re locked up in a restructuring mess and there’s a real risk they’re not going to emerge—ever—as a healthy player.

When HP said it would dump its PC business, everyone got all up in arms.  It’s a third of their revenue after all.  The problem is that PCs are at best a thin-margins-till-the-end-of-time game and at worst something everyone is going to have to cede to cheap offshore manufacturers in any case.  And that’s if you believe they will survive at all in a post-tablet world (which I do, by the way).  The trouble is that when Whitman reversed the decision, it left unaddressed the really big problem, which was that HP really didn’t have a firm strategy to lead them into the future.  Even Whitman couldn’t believe PCs were that strategy, so what was it, and where is it?

Like every vendor in the IT or networking space, HP has to either believe in a cloud vision or not.  If there’s not going to be much of a cloud revolution, then there will likely always be a fundamental division between information appliances that are designed for heads-down production and those that are more for looking and doing simple, low-input tasks.  PCs, especially laptops, are in the former group and tablets and smartphones in the latter.  That means the no-cloud future guarantees PC survival but makes it clear that the value proposition isn’t likely to expand.  How many producers are there out there compared with the number of dabblers?  HP doesn’t have much of a win here.  So how about if there IS a cloud vision?  That’s also a forked road.

If cloud success comes from displacing internal IT, then the cloud will commoditize and marginalize servers and PCs alike, because that’s what current IT is run on.  You can’t displace internal IT with hosted IT at higher cost, and if costs are to be lower, you can’t assume that hosted IT will consume more servers than internal IT did.  Thus, HP loses servers in a cloud future, UNLESS that cloud future somehow builds from more than just cost-based shifting of apps from data centers to hosting.  HP’s success would depend on a non-displacement success for the cloud.  OK, that seems simple to me, but not to HP, apparently, because they’re not doing anything that suggests they have a view of where the new, non-displacement cloud apps are coming from.  I’d sure like to know what HP’s cloud vision is, because I think that their reorg focus so far has been mostly on keeping the Street in the dark about how badly components like their PC business are doing.

We’re possibly on the verge of another revolutionary change too, this time in the cable space.  Comcast and other cable operators are looking seriously at how to transition to something that’s more IP-oriented in the delivery of their video programming.  Here, as in the cloud, I think we’re seeing an example of someone testing the waters without knowing whether they want to swim or fish.  How good is Comcast’s convergence platform RFI going to be when it’s far from clear how much IP video can be made profitable—if any?

Linear RF is the best way to deliver broadcast, if broadcast is all you’re going to do.  The problem is that for most operators, you also have to contend with things like VoD and broadband Internet.  These compete for cable spectrum, and the more you have to provide to these non-broadcast apps the fewer channels you can fit into the remaining space.  At some point you have to ask yourself whether you’d be better off pushing everything out in IPTV form from a single pipe, particularly if you need to be delivering TV Everywhere content over IP to other devices anyway.
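
The spectrum squeeze described above is simple arithmetic.  The numbers here are illustrative round figures, not any operator's actual channel plan:

```python
# Illustrative cable-spectrum arithmetic (round numbers, hypothetical):
# every channel slot given to broadband or VoD is one less available
# for linear broadcast.
total_channels = 130    # rough slot count for a ~750 MHz plant at 6 MHz each
docsis_broadband = 24   # slots bonded for broadband Internet
vod_voice_other = 16    # VoD, voice, set-top signalling

linear_left = total_channels - docsis_broadband - vod_voice_other
print(f"Channel slots left for linear broadcast: {linear_left}")  # 90
```

As the broadband and VoD allocations grow, the linear remainder shrinks, and at some crossover point a single all-IP pipe starts to look better than defending the shrinking broadcast lineup.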

The problem here is that we have an RFI going out for a market that has no convincing business, technical, or regulatory foundation.  Add this to the fact that over 80% of all carrier RFIs never deliver any equipment and you have a picture of the cable industry in the throes of the same confusion that HP is in.  They’re just confused about something different.

 

Did Oracle’s Quarter Have Clouds or Need Them?

Oracle is one of the more interesting tech companies, if you’re looking for an indicator of where markets might be heading overall.  They have a broad exposure across hardware and software and also a nice combination of “offensive” and “defensive” products, meaning those that do well when confidence is high and those that are more conserving.  This quarter, though, it’s a little hard to read the tea leaves.

At the company level, the numbers were good news; Oracle beat Street estimates and its stock gained in after-hours trading.  It was a contrast to a disappointing report this time last year, but at the same time there were still some common elements with that sad period—hardware.  Despite good order growth for their hardware appliances (database and analytics), the company’s hardware sales were again below expectations.  At this point, I’d say it’s clear that Oracle is not going to be able to sustain an independent server business; its hope lies only in the appliances.

To some of the media pundits, Oracle’s dependence on things like analytics seems to fly in the face of the “cloud revolution”, but that’s because these people don’t understand the difference between cloud commitment and cloud stories.  Truth be told, any analytics application data-intensive enough to demand fast in-memory processing is clearly not going to run in the cloud, where users would have to contend with data latency or storage costs.  What, then, is responsible for the hardware dilemma?  It is the cloud, in a way, but not in the way people think.

What vertical bought more Sun servers than any other?  Hosting and carrier companies.  In the hosting space, the x86 platform has been stronger for cost reasons, and the cloud drive, particularly IaaS, makes it all but impossible to rely on a non-x86 server because machine images built for non-x86 hardware won’t run on the x86 pools that dominate the cloud.  The telcos, who have traditionally used the Sun platform for all manner of good stuff, have been under-investing in OSS/BSS because it’s not a profit center, and in service-layer features because they lack an architecture.  Oracle needed to push a vision of the service layer if they wanted to rely on their traditional carrier vertical, and they still have not done that.  Oracle needed to push a vision of PaaS if they wanted a cloud model that didn’t put SPARC platforms at an inherent market disadvantage, and they didn’t do that either.

A few people have said that the problem is that Oracle isn’t in networking and rivals like Cisco and HP are.  I don’t buy that either.  Networking isn’t creating too many heroes on Wall Street right now.  However, it is true that with a combination of servers, middleware, and network hardware you could present a pretty darn compelling data center and cloud story.  The question is whether Oracle’s lack of networking is worse than, say, Cisco’s lack of software.  I don’t think so.  The cloud ultimately is a platform, and it’s actually pretty easy to build a good cloud-service story with middleware.  Easier than building it on networking (as Juniper is proving).

So it’s not the success of the cloud that puts Oracle’s vision at risk, it’s Oracle’s failure to control a darn good cloud opportunity.  There will be a transformation of IT to a cloud/services model.  It will look something like PaaS today, something like SOA (today and for the last five years), and a LOT like the stuff Oracle provides.  Why, then, do we hear so little insight from Oracle?  Could it be that they understand sales too well and strategy too little?  That sounds like the “Tech Company Disease”, and it’s all too prevalent in the space today.  Oracle needs a cure, or the current quarter could prove an aberration.  You can avoid waking a sleeping giant, but a whole population of them puts the odds against a purely defensive strategy.  There are too many players with the desire and the capability to answer the cloud-service challenge.

 

A Zero for NetZero, a Hope for Microsoft

The media is having a bit of a problem coming to terms with the NetZero proposal for “free” wireless broadband, and what’s sad is that their writers’ block is for the wrong reason.  They want to say “everything is free” because that would get a lot of reader attention, but they’re skeptical that NetZero is really free at all.  One year, 200 megs per month max, and then you’re shut off.  Some point out that this is based on Clearwire WiMAX, so it’s not going to work with most devices (you’ll need to buy a gadget/dongle).  All fluff.

First, this is clearly a marketing ploy, and it’s one based on the fact that WiMAX isn’t exactly setting the world on fire.  The “other 4G”, as it could fairly be called, WiMAX was a decent strategy for the period five years ago when it was being conceptualized.  Then, having a hundred megabits or so to divide up in a large mobile cell seemed reasonable; the average wireline broadband user had a couple of megabits of bandwidth to play with.  You could plot large cells with more people and offer a less expensive service that would be a good alternative to something like a WiFi hotspot.  But WiFi got faster and faster, people wanted to watch HD movies, and the rest is history.

So, of course, is NetZero with this plan.  The insurance industry has something it calls “adverse selection”.  The notion of insurance is one of pooling risk; a bunch of people buy a policy to insure against something that’s not very likely to happen within the group but is catastrophic for those to whom it does happen.  The collective premiums fund the losses of the unlucky few and a profit for the insurance company.  But suppose everyone could figure out whether they’d be among the unlucky?  Only those people would buy the insurance, the premiums wouldn’t begin to cover the losses, and you’d be history as an insurance company.  That’s the issue here.  Free wireless broadband really appeals to people with little money to spend on broadband.  So a bunch of people will jump on this, collectively consuming a lot of capacity, and few of them will ever convert to a paid plan.  If they were willing to pay, after all, they could get a competitive plan from a “real” wireless provider already.

So is this a way to get somebody to buy a WiMAX dongle, hoping that it will then rope that person into WiMAX forever?  Good luck with that one, NetZero!

In the cloud world, it’s interesting to see that Microsoft is gradually exposing some sense in its marketing of Azure, even as it still seems, from a pricing and positioning perspective, locked into the notion that Azure is an EC2 competitor.  This is a classic example of letting yourself get positioned by counterpunching.  PaaS, which is what Azure is, is a far more logical cloud architecture for the mainstream of cloud demand.  Because more software is included in the cloud, the user’s unit cost and support burden are lower.  Because the cloud platform can easily incorporate cloud-friendly development features, you can build cloud-specific apps more easily than you could with IaaS platforms that look like naked iron.  Where does this come across in Microsoft’s material?

Their latest is a rather nice note in MSDN Magazine on Node.js, a JavaScript effort sponsored by Joyent, a startup that recently got a nice round of financing.  Node.js is a server-side JavaScript runtime optimized for event-driven I/O handling, and it’s easily incorporated into modern web applications to extend a basic web server with back-end application power.  Yes, you can install Node.js on an IaaS machine, but if you can get it as part of a platform, why not cede the software support/maintenance to your PaaS provider?  The forum Microsoft picked here is the problem; yes, you need developers to develop applications, but you need senior management support to pay the developers while they’re doing it.  Microsoft needed to position Azure better for the decision-maker.  The good news for them is that it’s not too late, though the big pressure point for cloud change is not too far off.

What’s that pressure point?  A new EC2, a better IaaS price, or what?  None of the above.  The big pressure for cloud evolution is tablets and smartphones.  Your app is an icon that’s a gateway to a cloud service.  Siri is a voice-based search engine that, instead of giving you a list of possible results, gives you one result that’s probably contextually correct.  As you move to mobile devices, you change the use of the web from research assistant to personal valet.  That’s a cloud mission, not a website mission.  Can you build “contextual clouds” using IaaS?  Sure, the same as you could with bare metal.  But it would be much easier with a platform that had actual features to facilitate those kinds of apps, and that’s PaaS.  Azure might still be Microsoft’s secret weapon.