Reading the Facebook and Google Moves

Facebook’s double buy of mobile-related companies raises a couple of interesting questions, obviously related to motive and future direction for the social-net giant.  While the deal for mobile photo-app player Instagram caught everyone’s attention on price (a billion), it takes two points to define a line, and to guess where the line might lead.  The TagTile deal offers us that second point.  From the two we can draw a line from the issue framing the deals to the issue driving them.

The issue framing the deals is the simple fact that mobile broadband is the only thing that matters in social networking growth.  You can’t have an always-on social life with a social portal you leave on your desk.  Facebook has a good mobile following, obviously, but it was launched before smartphones had redefined the online social process.

So what, you ask?  If Facebook is used on smartphones (where, in most markets, it’s the most frequently accessed app) then why are any steps necessary to lock in mobile success?  This brings us to the driving issue, which is GOOGLE.  Google has Android, the smartphone OS that leads in market share.  Google has Plus, which it’s trying to make into a Facebook competitor.  Google may have Glass, which could bring social-cool to every set of eyebrows on the planet.  Facebook has to assume that Google is going to push forward toward a union of its Plus and Android lines, particularly since Google has announced it’s planning mechanisms to defend founder control for at least the next couple of years.

Lines go somewhere, of course, and we might now ask where this one is heading.  I think the answer to that is FACEBOOK’S IPO.  The company is going to be doing one this year, and they raised a ton of bucks at the VC well, which means they need a staggering offering just to buy out all that early capital.  They also need to be sure that they don’t suffer from a post-IPO slump in share price, and that means they need a big revenue ramp.  Mobile is the only way to get that because mobile devices go with you when you buy, and that means they’re a more direct conduit to the purchase decision.  If Google locks in a mobile/Plus combination that can somehow get Google into the buyer’s mind right before the buyer goes for the wallet, then Facebook has no future even if social networking does.  Ad dollars go to influence and not to popularity.

There’s the other end of the line, of course.  Nothing creates more excitement at first, and more angst later on, in a company’s developer community than the company starting to buy out developers to set up its own apps.  For a brief shining moment this creates lottery-like fever, but eventually everyone sees that their chances of winning are about the same as winning the lottery, and they look for a place where developer investment can bring a less polarized business model than “be-bought-or-die”.

Another interpretation of Google’s non-voting stock move is that the founders are afraid they’ll be kicked out because they can’t keep up with the OTT market any more, and Sergey Brin might actually have given that view some credibility through a recent interview, where he suggested that Google could never have gotten started in a Facebook-dominated world.  What he’s angry about is the closing of the information ecosystem; players like Facebook make their material non-searchable to keep people like Google from monetizing at the other’s expense.  Google also rides traffic-charge-free on carrier networks.  “Freedom” means the other guy pays and conforms to my business model?  That doesn’t sound like a confident person.  And might Google be far from realizing Glass, and be announcing it now only to generate buzz and take the pressure off?

Mobile markets are the most behaviorally linked of all online markets, and so they’re the fastest to adapt.  That’s why Facebook is buying into mobile.  That’s why Android is important to Google.  But many of Google’s other projects, including Voice and Wave, could have fleshed out a new communication model for incorporation into Plus.  Might that have made Plus the new mobile social communication framework?  Might Google be guilty of specific flaws of strategy more than Facebook or Apple is guilty of creating “walled gardens”?  Behavior changes are what’s creating Google’s risk, and its opportunity.

Behavior changes always create opportunities and risks, and not only in mobile.  OTT video is an example; yes, it’s true that mobile video is growing, but my data is saying that the most important trend in OTT video isn’t mobile at all.  It’s disgust with normal programming.  By 2020, my numbers say that a whopping 31% of viewing time will be escaping the normal channelized programming, triple what it is today.  This isn’t the soccer mom or dad at the game, it’s people sitting in their living rooms wanting to watch TV and finding nothing that doesn’t stir their gag reflex.  These people are the mainstay of long-term channelized viewing, and it’s their habits that will decide how much the linear-delivery franchise of the various cable and telco players really means.  Mobile will never break that model, but gagging might.

What Google and Apple are likely after in TV is the fruits of this shift, in no small part because both companies appeal to the younger elite who lead the “run-away-from-what’s-on” crowd.  But the fact that there is still a check-what’s-on that precedes the streaming decision is why channel guide support and integration is so important.  Every step needed to integrate streaming viewing with channel viewing is one step too many, for now.  At some point, that may turn around.

 

Google: Do New Evil or Do New Things?

Google is making news, again, for its left-of-center management practices.  The company turned in an exceptionally good quarter that beat estimates handily, and at the same time it announced it would be doing a 2:1 stock split, but one that created only non-voting shares.  The goal, obviously and as the company admitted, was to prevent the founders’ control from being diluted by their own future share sales and by option grants to others.  This has some on the Street up in arms over “governance,” they say.  Well, Google at least is being honest, which is more than you can say for the Street.

The issue here is simple.  Since the NASDAQ crash of 2000, regulators forced out some of the practices that made it easy for stock analysts to kite shares through hype, and so gains in share price had to be linked to earnings growth in most cases.  This forced companies to take very short-term perspectives on new opportunities.  Google has always dabbled in a lot of stuff that’s weird from an earnings perspective, and over the years they’ve drawn the ire of some shareholders for not focusing on the stuff that makes money (and boosts the stock price).  My view is that the founders don’t want to be hemmed in by quarterly pressure, and this is their way of making sure it won’t happen.

Google’s “Glass” project is an example.  Here’s something that could revolutionize mobility, make “personal computing” truly personal.  It has perhaps more and broader impact than anything Google has ever proposed, but it’s not going to be something they can push out in three months.  Google’s Wave project could have revolutionized collaboration, and yet it was dropped.  Google Voice, which might totally change voice communication forever, hasn’t itself changed in over a year.  So you can point to Google’s activities today and see plenty of places where the company seems to have picked the short term over the long.  Isn’t that enough reason to try to keep the Street at bay?

It would be, in my view at least, if we could be sure that all of the wonderful revolutionary things Google wanted to do would really be pursued by the founders.  The story with many of the projects that have lost momentum is not that they were forced out by profit pressure, but by lack of support from those very founders.  The problem with the Google plan is that the whole company stands or falls on the insight (and ego) of two men.

I think it’s clear that Google knows that to compete with Apple they need the kind of latitude that Steve Jobs had in promoting things truly new and revolutionary.  It’s also clear that having the right to pursue revolution doesn’t make you a revolutionary.  Can Page or Brin stand up to Jobs in sheer marketing genius?  They’ve not done it so far, and remember that the two have had voting control all along; this move only prevents their losing it.  Thus, it’s probably meant to let the pair gird Google’s loins for the Final Battle with Apple.  Glass is a nice lance for that engagement, but Google needs more.  It needs to make good on the other things it’s started, because if Google backs away from Voice and Plus and Docs without making them into credible plays, nobody will believe in Glass.  And shareholder suits work no matter who has voting control.

Google’s self-imposed test of its own insight is going to be critical, sooner than they may think.  The mobile revolution that rival Apple began and is still driving is creating radical changes in the market.  It’s a revolution that Apple began by focusing on what was essentially elitist positioning, and their margins and image have been protected by the subliminal message that the really important people had Apple products no matter whether Android was cheaper and in some ways better.  Apple helped launch the PC, remember, and ended up a small-time player in that market.  They commercialized the visual GUI and lost to an inferior product, Windows.  Early leads often lead to early mistakes, but in the new mobile appliance space Apple has yet to make one.  That likely means that Google will have to win instead of waiting for Apple to lose, and this in a market where Apple has so far set the agenda.  Glass is critical because if it can’t reset that agenda, then Apple wins and in the Last Days, Google becomes Yahoo.

Wave was a leader.  Voice was a leader.  Docs and Apps are leaders, and yet they’ve not revolutionized.  Plus is a follower strategy; how many times does Google think that successors in the social networking space can launch on the stupidity of the early leaders?  Link Plus with Glass and it’s a whole new story.  Same with Voice, Wave, Docs, Apps…need I go on?  This is Google’s moment.  Hopefully Page and Brin know that and are ensuring they won’t be diverted by the Street (who resent the pair’s actions only because they undermine the Street’s own greedy pleasures).  If they don’t know it, they’re going to learn it very quickly.

 

HP and IBM Duel for the “New Cloud”

Whatever you want to call it, we’re having a revolution in IT.  HP’s announcement of a cloud-friendly architecture to build data centers, one that includes prefabricated provisioning tools, was mirrored in many ways by the IBM PureSystems launch yesterday.  I think IBM is right that this new wave is the most important thing in IT in decades, but I also think IBM could have done a better job of explaining not only the “why?” but even just what the heck they actually did.

Underneath all of this stuff from HP and IBM is an interesting truth, which is that it’s getting more costly to support IT than it is to buy the gear.  This is particularly true for SMBs, who lack the large IT organizations that create career paths for IT professionals, and who often have only occasional need for key specialties like network engineering.  The solution that both HP and IBM propose is the creation of packaged systems that include operations tools tuned not only to the configuration but also to the application/vertical.

Under the covers there aren’t big differences in the technology between the two IT rivals; it’s largely a matter of positioning.  IBM is stressing a data-center-in-a-box, something purpose-configured in many cases for key applications.  HP is stressing a private-cloud-in-a-box.  However, a single data center running virtualization is by anyone’s definition a small private cloud, and both the PureFlex infrastructure and the PureApplication vertical platforms are cloud-ready as well.  Thus, we’re seeing vendors address the operations cost issues equally in cloud and no-cloud environments.  Which is as it should be; the best strategy for any business is the one that provides the lowest overall cost.

This will probably impact the cloud space, however.  First, any standardization in configuration at the data center level makes hybridizing potentially easier, at least for public cloud services that use the same or highly compatible facilities.  Second, the real benefit of the cloud has always been pooled operations efficiencies, and these systems will tend to reduce those benefits by lowering the internal IT cost of the in-house alternatives to the cloud.  And despite the similarity under the covers, I think it’s likely that HP plans to use the cloud more proactively as a differentiator than IBM does.

HP’s cloud positioning may be reflected in another HP announcement yesterday, a follow-on to its cloud-talk.  HP said it was introducing a series of SOA-based “templates” that would define the operations (deployment and lifecycle) for applications in target vertical markets.  If you’ve followed my cloud musings, you know that I’ve always believed that SOA clouds based on PaaS services were the most logical kind to deploy because they could make the distinction between public and private clouds largely one of simple hosting.  Further, any applications actually written for the cloud and designed to open new business benefits would likely be componentized, orchestrated, and thus SOA-based.  I’m a little surprised at IBM not nailing this particular point down in its own marketing of PureSystems.  IBM has slipped a bit in our surveys in strategic credibility, and perhaps they’re not done slipping yet, which could give HP an opportunity.  Or Cisco.
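To make the template idea concrete, here’s a minimal sketch of what a SOA-style vertical template might look like; the template name, components, and fields are invented for illustration and don’t reflect HP’s actual schema.  The point is that deployment and lifecycle operations become declarative data that a provisioning engine walks through:

```python
# Hypothetical sketch of a SOA-style "vertical template": a declarative
# description of how an application's components are deployed and
# managed.  All names and fields here are illustrative only.

TELCO_BILLING_TEMPLATE = {
    "vertical": "network-service-provider",
    "components": [
        {"name": "mediation", "host": "private-cloud", "replicas": 2},
        {"name": "rating",    "host": "private-cloud", "replicas": 3},
        {"name": "portal",    "host": "public-cloud",  "replicas": 2},
    ],
    # Lifecycle phases applied, in order, to every component.
    "lifecycle": ["provision", "configure", "start", "monitor"],
}

def deploy(template):
    """Walk the template and emit the ordered operations a
    provisioning engine would execute for each component."""
    plan = []
    for component in template["components"]:
        for phase in template["lifecycle"]:
            plan.append((component["name"], component["host"], phase))
    return plan

plan = deploy(TELCO_BILLING_TEMPLATE)
```

Under this model, moving the portal from the public cloud to the private cloud is a one-line change to the template rather than a change to the application, which is the sense in which a PaaS/SOA cloud makes the public/private distinction largely one of simple hosting.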

HP may see that same dynamic emerging with Cisco, and that could be why one of its first SOA-empowered vertical templates is for the network service provider.  Here, HP has gone as far as any player in the market in defining a SOA-cloud architecture as the basis for future services.  Since the cloud (public provider infrastructure or enterprise private clouds) has been the primary place where Cisco’s UCS has been successful, it makes sense for HP to try to push as many Cisco buttons as it can, and the service layer is a place where Cisco is potentially vulnerable.

Taking Tech Temperature, Pre-Earnings

Euphonic, huh?  Well, earnings season is about to take off, which means we’ll likely have more financial results to review than tech bombshells.  Some companies are working to get their stuff out before they go quiet in their pre-earnings period, though, and one of them is HP.  After seeming to ignore the cloud, they now appear to be playing catch-up in earnest.  Why might that be?

HP’s announcement of its cloud strategy, on the whole, positions the company better for the future.  It may also help to gel the issues of all forms of cloud computing, from public through hybrid to private.  The question is whether the company is really up to selling something this ambitious, and whether the story itself risks creating a perceived complexity barrier.  The “Converged Cloud” sounds like as close to Apple Pie and Mom as you get in tech these days; it’s like “How many buzzwords can you get into a slogan?”  But the fact is that there’s more here than cloud- and convergence-washing.  HP really is trying to create a cloud unity, more as a defense against the widespread cloudwashing than a contributor to it.  If it succeeds even a little, it could be a threat to arch-rivals Cisco and IBM…and perhaps even more a threat to others in the network space.

If you take HP’s announcement this week, Dell’s earlier one, and the Red Hat Storage 2.0 thing in combination, I think you may be looking at the start of something interesting.  Pushing cloud stack technology and cloud/big data isn’t a public cloud promotion, but one that pushes the private cloud and the value of the public cloud as a satellite.  That’s a big shift from the old model of “everything goes to the cloud”.  Yes, I know that I said that was crazy all along, but since when has the market done something rational?  Why start now?

Because, perhaps, the public cloud is already falling short.  Part of the problem there is that IaaS services are just not saving enough for buyers to adopt them broadly.  Yes, there are special apps and missions that work, but the broad economics never will, and never could.  I’ve always favored PaaS, and I think that all we’re seeing is going to promote the notion of PaaS by promoting the migration and cooperative behavior of apps across the cloud boundary.  Amazon may be in for trouble from HP, but not from the one-to-one competition the media thinks will be the problem; the real threat is HP’s driving of a new cloud paradigm.

Speaking of Dell, they’ve decided to get into the service provider space with a CDN offering, created out of a partnership with EdgeCast, whose licensed CDN has been popular with network operators.  It would seem on the surface that Dell is picking a rather odd space in which to go out and promote itself as a supplier to network operators.  Might Dell, like others, see content delivery as a function of the service layer, and the service layer as being hosted in the cloud?  The problem for Dell here is that jumping into the service layer is a big step, particularly now when operators are so concerned about integrating it across monetization projects and vertically into networks and OSS/BSS.

One of the things that I’ll be watching (along with most on the Street) this earnings season is how the network equipment vendors do.  The Street expectation is that “the Internet” will float all boats and that capex will rise to the gain of all.  Not likely.  Verizon just announced that it would be dropping its DSL-only product in favor of a bundle of voice and DSL.  If broadband Internet based on existing loop isn’t profitable enough for a former common carrier to support the offering without a voice-line profit kicker, then broadband is in real trouble.  That means wireline capex is in trouble immediately, and that in the longer term we’re likely to see more investment in mobile by far.  That means less bandwidth for switching and routing because mobile network access is cell-capacity constrained relative to wireline.  Then we have the news that everyone is into packet inspection, which is about limiting how much bandwidth people use.  Are you getting the picture here?  This isn’t the behavior of a buyer community who intends to push bits till they bleed from their ears.

Another interesting thing to watch this quarter is whether there’s any overhang of OpenFlow on enterprise network capex.  As I’ve said before, it’s crazy to think that adding OpenFlow to current switches and routers makes any business sense except as a transition strategy, which means that enterprise networks based on OpenFlow are the end game.  If that’s the case, then you’d expect to see enterprises stretch out their capital plans in switching and routing until they can buy native OpenFlow products at a lower cost.  THEN you’d see migration investment.  It may be too early to see much in Q1 numbers, but it’s worth watching for.

 

Can Two Halves Make a Whole, Service-Layer-Wise?

I’m seeing some progress (perhaps, at least) in the evolution of a logical structure for operator-hosted OTT services.  One startup, UXP Systems, has created a kind of shim layer that orchestrates and manages the OTT form of a number of operator services, including voice, messaging, and video.  The idea is to provide componentized, orchestrable services by adding a layer above traditional services and using APIs that expose those traditional features.  It COULD be a major step, but I have some reservations, largely arising out of positioning.

UXP talks about the goal of “multiscreen” delivery, which means the ability to deliver a service in an arbitrary appliance context.  The concept is a bit hard to get your head around, not because the term is hard to understand but because it’s hard to understand just how UXP proposes to build a service foundation to support it.  First, the paper on the topic is long and complicated (you’re at Page 9 before they introduce the product, called MINT).  Second, there’s relatively little time spent on the details of the implementation.  The website isn’t helpful either, and all of this may be why media coverage of their approach has been off the mark.

The hopeful part is that UXP does identify a reasonable set of platform components for next-gen services, and a broad architecture for making a shim layer of this platform so that it intermediates between new services and users on top and legacy infrastructure and OSS/BSS on the bottom.  If these guys have what they say, and if they could articulate it and apply it more broadly, they might be contenders for some operator projects in the service layer.  We’ll have to see how it develops.

Another item that may be worthy of more attention than it’s getting is an HP announcement about Virtual Appliance Networks.  The notion of a virtual appliance comes from the growing popularity of database and analytics appliances, which are hardware/software combinations deployed as functionality in a box.  Obviously the same functionality could be represented in the cloud as a kind of SaaS, providing that you could deploy it as a unit.  That’s a task for the project area known as “DevOps” that fuses development and operations to permit seamless cloud (and other) provisioning.  HP apparently intends to lay some claim to this space too, and that may be because Cisco is already there with its Donabe project, something I’ve blogged on before and that we cover in some detail in Netwatcher for April.

HP has a vision for OpenFlow, but as I indicated yesterday it’s not a very differentiable one.  If HP also has a vision for DevOps in a cloud context, the question is whether they might try to unify these visions to create a mechanism for a software-defined network based on OpenFlow to learn and apply the connectivity requirements.  A Virtual Appliance could “carry” policies as easily as a “Container” or “Charm” in DevOps.  The problem is that it’s not clear what HP intends to do here, or even if HP knows what it wants.  The same can be said for Cisco, who has yet to make Donabe any official product/policy element.  You can search for the term “Donabe” on Cisco’s own website and you don’t get any hits.
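As a thought experiment on what “carrying” policy might mean (every name and field below is hypothetical, not an actual HP, Cisco, or Juju API), a virtual-appliance descriptor could bundle its own connectivity requirements, which a DevOps tool would then translate into requests for an SDN controller at provisioning time:

```python
# Hypothetical illustration of a policy-carrying virtual appliance.
# The descriptor bundles network policy with the deployable image;
# a DevOps tool extracts connectivity requests from it.

APPLIANCE = {
    "name": "analytics-appliance",
    "image": "analytics-v3",
    "network_policy": {
        "min_bandwidth_mbps": 500,
        "allowed_peers": ["etl-cluster", "report-portal"],
        "isolation": "tenant-private",
    },
}

def extract_flow_requests(appliance):
    """What a DevOps tool might hand to the network controller:
    one connectivity request per allowed peer, tagged with the
    appliance's bandwidth and isolation needs."""
    policy = appliance["network_policy"]
    return [
        {
            "src": appliance["name"],
            "dst": peer,
            "min_bandwidth_mbps": policy["min_bandwidth_mbps"],
            "isolation": policy["isolation"],
        }
        for peer in policy["allowed_peers"]
    ]

requests = extract_flow_requests(APPLIANCE)
```

The design point is that the network learns its connectivity requirements from the deployment unit itself, which is exactly the unification of DevOps and software-defined networking the paragraph above speculates about.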

IBM?  With their Green Hat buy, IBM could field a development and deployment kit that would span the data center and extend all the way to mobile devices.  That would let them pull the devices themselves into the cloud model, which would extend the utility of any orchestration/logic or DevOps process.  Search on “DevOps” on IBM’s site and you get nearly a million hits; you get none on HP’s or Cisco’s sites.  I’m not hearing much from users or operators about IBM’s position here, though, and it’s certainly not being picked up much in the tech media.  Still, I think that with this kind of positioning groundwork being laid, it’s more than likely that IBM has something planned.  Which may be what’s behind HP’s moves.

 

Why Cisco’s Cloud Spin-In May Not Be Enough

The fate of the cloud is also the fate of the network vendors, as I’ve noted before.  There is nothing that will stop the use of OpenFlow and SDNs for data center networking, and while arguments that “merchant silicon” can produce the optimum OpenFlow switch are false, it’s true that commodity switches could produce results so close to optimum that other factors would stomp out the differences.  April’s Netwatcher will talk about how the two forces of today’s market—the cloud and OpenFlow—might combine.

Cisco, HP, and Juniper are in their own ways fighting for the future of “cloud” networking when they’re fighting for the future of the data center.  Juniper had a lead here when they announced QFabric over a year ago, then followed it up with the WAN-optical-core PTX.  The problem is that they muddled the positioning of the first, and with the second apparently didn’t see the real connection with the cloud and thus didn’t provide for it at the technical level.  Enterprises tell us that Juniper still positions QFabric as being a response to the cloud, but not a UNIQUE one, meaning that Juniper touts only cloud traffic, not specific cloud features.  Cisco is said to be pushing a new spin-in that might be around OpenFlow, or maybe cloud storage, or maybe nothing at all, given that Juniper’s QFabric launch hasn’t created the market pressure Cisco feared.  HP seems to believe that all it needs to do is embrace OpenFlow as an option on Ethernet switches, which would argue that it would have no impact on hardware design.

OpenFlow isn’t going to matter if it never moves beyond being an option on an Ethernet switch or an open-source controller.  Many of the benefits can never be realized in this limited application, and most users probably wouldn’t see the value in adopting it.  The real value of OpenFlow is to replace traditional switching/routing in applications where the number of flows is manageable.  That means inside the cloud and inside the data center…and of course the former implies the latter.

There are two issues with OpenFlow that all vendors need to consider.  First is the way that its use could impact the transit path between network points.  In theory, an OpenFlow switch could have as many trunks as ports, creating more of a mesh, because it’s not limited by Ethernet bridging principles in setup.  If OpenFlow switches had cut-through switching capability they could be the foundation of a mesh architecture that would likely make further evolution toward a fabric unnecessary for most applications.  Who says that?  Nobody.  In theory, OpenFlow switching works because application awareness can be pushed down to the flow level, which is the whole point of software-defined networking.  But how does that happen?  Nobody says.
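To illustrate the flow-level control described above (this is a toy model, not the OpenFlow protocol or any real controller API), consider a switch whose forwarding is driven entirely by match/action rules that an application installs; there is no MAC learning or bridging involved:

```python
# Toy model of OpenFlow-style flow forwarding.  An application
# installs match/action rules via a controller; the switch forwards
# by flow lookup rather than by bridged MAC learning.

class FlowSwitch:
    def __init__(self):
        # Rules are checked in installation (priority) order.
        self.flow_table = []  # list of (match_dict, out_port)

    def install_flow(self, match, out_port):
        """Called from the controller: push application awareness of
        a flow down into the switch as a match/action rule."""
        self.flow_table.append((match, out_port))

    def forward(self, packet):
        """Return the output port for the first matching rule, or
        None (a real switch would punt a table-miss to the
        controller)."""
        for match, out_port in self.flow_table:
            if all(packet.get(k) == v for k, v in match.items()):
                return out_port
        return None

switch = FlowSwitch()
# The application pins a video flow to a dedicated trunk (port 7)...
switch.install_flow({"dst_ip": "10.0.0.5", "tcp_dst": 554}, out_port=7)
# ...and sends everything else to the default trunk (port 1).
switch.install_flow({}, out_port=1)

video = {"dst_ip": "10.0.0.5", "tcp_dst": 554}
web = {"dst_ip": "10.0.0.9", "tcp_dst": 80}
```

Because forwarding is just a rule lookup, nothing restricts ports to a spanning-tree topology; any port can be a trunk to any neighbor, which is where the mesh potential comes from.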

There’s every chance that today’s vendors really want OpenFlow to fail, but most could benefit from its success if they positioned it properly.  Cisco could shut down Juniper’s data center hopes, Juniper could create the first true cloud network, and HP could rain all over every network vendor’s switching position.  But not the way it’s going today.  Vendors first need to articulate a real position on what an OpenFlow switching architecture would look like, and on how eliminating Ethernet’s bridge model of implicit or discovered connectivity would free up connectivity.  They also need to articulate how applications control flows.  Right now, this isn’t happening even for vendors in the OpenFlow space, and whoever gets this right wins.

 

Google versus Apple and a New Cloud Reality

Apple stock may be shooting upward, but Google has its own plans for new products and concepts, and two in particular have what I think is real potential.  I’m not saying that they’ll suddenly acquire the momentum of Apple, but I am saying that I think Apple and Google are destined to grow into more direct competitors, and that the transition may be easier for Google.

The jazziest development from Google is its new “augmented reality” notion.  Technically, this is a display device that looks like a pair of glasses (we don’t know what the real thing would look like yet, only a concept dummy), designed to show graphics/text overlaying life.  I’m sure that everyone realizes the impact that this could have.  It would give a whole new meaning to driving directions, to social situations, to business meetings or to field service.  Combine augmented reality with the cloud and you have a formula for as close to direct coupling of computer intelligence to human behavior as you’re likely to get.

What’s interesting is that Google is prepared to spin a tale about this technology without actually having a product ready, something that would seem to risk competitive interdiction.  It’s likely that Google thinks it has the patents needed to lock this deal down pretty thoroughly, but it’s also possible that they think Apple is already doing something, is ahead of them, and thus want to rain on Apple’s parade early.  I favor the former; I don’t think that something like augmented reality (“Project Glass”) could drive toward product on the Apple side without creating supply-chain leaks.  Might Google be closer than we think?  Could be.  It’s also true that Google may be antsy about making a big move in a special project area, something that the financial press says its investors don’t like.  Hey, if you want to do something truly stupid, investor-wise, why not try FTTH?  There’s a business that’s marginally profitable for telcos, with a rate of return about half of Google’s.  Stupid is in the eye of the beholder, or in the beholder’s perspective at least.  Maybe the media needs augmented reality.

Augmented reality may not be real yet, but Google’s deals with movie studios to stream rentals on YouTube are clearly coming together.  Paramount has been added to the stable, so there’s only one major studio not in a pact with Google.  I think that the value of this for Google lies in creating a strong partnership between YouTube and Google TV, because that would play to a powerful asset that Google has and Apple doesn’t—caching.  Google deploys cache technology even in operator networks to improve video delivery, which means that for things like rental video it’s less dependent on CDN deals.  Apple has far less in the way of network assets to distribute streaming video, and so if Google can make YouTube rental work and link it effectively to Google TV, Apple might be forced to take steps to deploy more streaming infrastructure, which would be costly and likely generate some angst among Apple investors.

You can see that augmented reality could promote this same theme of making Apple catch up.  iCloud is the least effective and insightful of all of Apple’s i-offerings, and arguably Google is a leader in cloud-deployed technology of every sort.  Creating something like Project Glass to push a hosted-augmented-reality experience to users would not only steal some glamour from Apple but again force it to come to Google.  Thus, both these moves could be indicators that Google is going to take Apple on more directly.  If Apple can’t rest on its laurels, that means it has less time to push Google in tablets and smartphones, too.

Speaking of clouds, I’m starting to get some signals from our survey base that both enterprises and network operators are polarizing a bit in terms of their cloud activity.  In the case of enterprises, I’m seeing four distinct cloud models emerging.  For operators, there are two visions.  The important thing is that before this month I didn’t see any real consensus on the cloud other than that you needed to cloudwash everything.

I think that the evolution of cloud goals and viewpoints is a part of an essential maturing process.  If the cloud is a revolutionary model then we can’t do it through traditional IT practices and planning.  We can’t really even talk about the cloud in the same frame of reference as the old IT.  If we can, then the cloud isn’t revolutionary at all.  The problem is that you can’t easily sell something that has a major paradigm shift associated with it.  That’s why SaaS to SMBs is so attractive as a target; it’s a service that faces forward in a traditional way because a cloud-hosted application looks like a data center app.  The deeper value of the cloud requires that we atomize business processes and reassemble them around the presumption that the cloud and mobile broadband and maybe even augmented-reality glasses will combine to create a whole new model of worker empowerment.


Is “Ad-Sponsored” Joining “Free” in the Myth Category?

Let’s face reality here.  For a decade now, we’ve had a vision of the future as one of a romp through a vast collection of entertainment, funded by advertising that we all try never to notice.  The fact is that all the ad metrics in the world don’t prove that anyone actually sees an ad, and insiders in the industry tell me that attempts to get better numbers by measuring “engagement”—in terms of clicking through or even eventually buying something—aren’t measuring up either.  This probably doesn’t surprise a lot of people, and the only reason I’m getting into it is that we’re seeing some interesting trends and shifts that may indicate that ad sponsorship is about to hit the wall.

The biggest issue is privacy, of course, and here we’re moving toward giving consumers more ability to see what data sites are keeping on them, and to opt out.  This isn’t surprising either, but if you think about it there’s a surprise inside.  Why would websites push the issue to the edge of regulatory intervention and risk a consumer backlash?  Sure, shortsighted greed is a part of it (what isn’t that a part of these days?), but why now?  My answer is that the industry knows it’s being eaten by fads.

What makes online advertising powerful is its tactical nature.  You can launch a campaign in hours instead of months, and you can get demographics to help target it.  Because the Internet is such a media darling, whatever new thing comes along is treated to a chorus of editorial praise, including free exposure on popular TV shows.  Even before MySpace or Facebook, the industry was tapping into social, viral marketing.  But social marketing is fad marketing, and the trouble with a fad, as we all know, is that eventually the newest thing is old.  And while you chase the future, you often forget the present.

Look at Google.  Look at their “Plus” social launch.  A little-known fact is that Google’s decision to adopt the “Plus” name resulted in changes to how other Google products work.  Some are little; if you look carefully at the Chrome tab bar you’ll see the “+” symbol is gone from the stub used to add a new browser tab.  More insidious is the fact that the search process has been changed so the “+” operator no longer indicates that the next word/phrase is mandatory in the results (you now put the word in quotes instead).  How many people might have missed the memo on that change, and suddenly found Google yielding less relevant results?  Did some go to Bing?  All this because Google wanted to chase the Facebook fad.
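To make the operator change concrete, here’s a sketch (the queries are illustrative examples, not drawn from Google’s documentation):

```python
# Illustrative only: the same "mandatory term" search, expressed before
# and after Google retired the "+" operator in favor of quoting.
old_query = 'jaguar +car'     # "+" once meant the next term must appear in results
new_query = 'jaguar "car"'    # quoting the term now serves that role

# A user typing the old form today gets no special treatment for "+car",
# which is one way results could quietly become less relevant to them.
print(old_query, "->", new_query)
```

The point isn’t the syntax itself, but that a silent change to a long-standing operator degrades results for exactly the users who relied on it most.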

Or because Google wanted to sell more ads?  AdWords and the user’s search goal are at odds, let’s face it.  On a given search, Google wants to serve as many ads as it can.  Give users too much precision in finding what they want and there may be no ads that qualify.  But users want to find something, not be subjected to advertising, so maximizing the latter at the expense of the former is hardly in the consumer’s interest.

The ad-sell mindset has gotten Google in trouble in a number of ways, not the least of which is allowing advertisers to take out AdWords placements on a competitor’s brand or name.  In Australia, regulators just ruled that Google is responsible for protecting the trademark rights of those competitors, because the consumer has a reasonable expectation that an ad shown as the result of a search represents the brand being searched for.  SEO, of course, is often used to hijack search results in much the same way.  Is that also going to come under scrutiny?  And all of this to try to boost search in the face of a social-network fad.

How much of Facebook really IS just a fad?  When you get your first bowl of ice cream you eat until you’re sick, and then you taper back to more rational levels of consumption.  Do we think that everyone is going to spend more time on social media every week, until our entire lives are spent watching faked videos of basketball shots?  And GDP growth hasn’t exploded as a result of all the new purchasing being made through social networks, has it?


Dell’s Un-Wyse, Citrix’s Short Stack

In yet another move that could be called “cloudy” in more ways than one, Dell is buying Wyse, a former terminal vendor that now specializes in “thin clients,” which I guess is what they call terminals these days.  It seems to me that there are a lot of questions about this kind of move, and that means more questions about Dell.

On the surface, this seems like a pretty straightforward virtual desktop play.  Companies are looking to reduce their costs by replacing personal computers with thin-client devices.  Hey, isn’t Google pushing Chromebooks?  Certainly a thin client would present lower costs in terms of both capex and support.  And if we presume that we’re really moving to the cloud, then a thin client is a cloud client, setting aside the incremental virtual-desktop cost and complexity.  The problem comes when you look deeper.

Yes, there are companies that are tossing PCs for virtual desktops, particularly where PCs are being used to support clerical workers with a relatively low unit value of labor.  The question is whether this sort of activity demands a specific thin client—a device that’s somehow not a PC or ultrabook or netbook or tablet or maybe even smartphone.  Frankly, I don’t see how that’s going to work out.

A broadband client device needs a screen and basic navigation.  It needs a keyboard (soft or hard), memory, and storage.  There are differences in how a given device will trade among these features, or trade them for cost, and those differences reflect primarily the expected use and type of worker.  But while you can assemble displays, keyboards, memory, and processors in different ways, they’re all going to end up looking something like a laptop or something like a tablet—and doesn’t Dell already make both?  So why buy a company to create thin clients?  Especially when that whole market is clearly going to be cost-driven, with razor-thin margins.

Citrix has decided to drop OpenStack in favor of its CloudStack concept, which offers more compatibility with Amazon’s EC2 APIs.  The move is unfortunate for the cloud but inevitable in some ways.  Amazon is the market leader, and it has resisted being dragged into standardizing cloud APIs and features for the obvious reason that standardization would make it easier to move workloads from its cloud to a competitor’s based on price.  However, it’s bad for the market to have cloud innovation stifled by having a single provider be the standard reference for features and APIs.  I like some of the work that’s being done with OpenStack, but given Rackspace’s role in OpenStack, it’s hard to see how it could ever have gotten along with Amazon.  And it’s harder to see how this move will advance the state of the cloud toward being a real candidate for a new distributed computing model, which it has to be in order to succeed.
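What “EC2-compatible” means in practice can be sketched at the wire level.  Amazon’s EC2 Query API is just HTTP requests carrying Action and Version parameters (plus a signature), so an EC2-compatible cloud only has to accept the same parameters at a different host.  In this sketch the endpoint hosts and the API version string are placeholders, not real services:

```python
from urllib.parse import urlencode

# EC2-style Query API parameters (version string is illustrative; real
# requests also carry a request signature, omitted here for brevity).
params = {
    "Action": "DescribeInstances",
    "Version": "2012-04-01",
    "AWSAccessKeyId": "EXAMPLE",
}

# Only the host differs between the two clouds; the query parameters --
# the wire protocol the client actually speaks -- are identical.
amazon_url = "https://ec2.amazonaws.com/?" + urlencode(params)
cloudstack_url = "https://cloud.example.com/awsapi/?" + urlencode(params)

print(cloudstack_url)
```

Identical client code targeting either endpoint is exactly how a market leader’s API becomes the de facto standard, which is the dynamic the paragraph above worries about.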