The Network Core: Opto-Electrical Wars

The optical networking conference this week is raising some interesting issues about the future of "the core," and probably even more interesting issues for networking overall.  While the focus of media coverage has been 100G Ethernet, the real question is how networks are valued, or made valuable.

We might call the current situation in the network core a contrast in speeds.  Optical fiber is used for virtually all important transport applications, and the state of the optical art sets a per-fiber capacity.  To get the most bang for your fiber buck, you’d like to use that capacity, obviously.  At the same time, the electrical interfaces in networking have their own capacity level.  You’d want to use that too, but the big question in networking is the ratio between these speeds.  Right now, for example, the realizable capacity of a fiber strand using DWDM is many times the electrical interface speed, such as that of 100GE.  That means that to use a fiber effectively you need to optically multiplex multiple electro-optical paths onto a fiber.  The greater that multiplier, the more optical work you do and the more dollars are transferred from routing to optics.
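To make that ratio concrete, here's a back-of-the-envelope sketch; the per-fiber capacity figure is an illustrative assumption, not a measured value:

```python
# Back-of-the-envelope: how many electro-optical paths one fiber absorbs.
# The capacity figure is an illustrative assumption, not a measured value.

fiber_capacity_gbps = 8_000      # assume ~80 DWDM wavelengths at 100G each
interface_speed_gbps = 100       # one 100GE electrical interface

# Each electrical interface fills one wavelength, so using the fiber
# effectively means optically multiplexing this many paths onto it:
multiplier = fiber_capacity_gbps / interface_speed_gbps
print(f"Paths to multiplex per fiber: {multiplier:.0f}")

# The larger this multiplier, the more of the work is optical,
# and the more dollars shift from routing to optics.
```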

Service providers have been pushing for a more opto-centric core network for the simple reason that it's less costly.  Optical multiplexing of wavelengths is, by some carrier measures, a fifth or less the cost of accomplishing the same thing through routing.  But it's also less flexible and less agile in the face of an outage.  Nevertheless, pressure from operators (like Verizon, which has had an optical-core RFI floating for almost a decade) has forced vendors like Cisco and Juniper to come up with tighter coupling between their routing layer and the optical core.  Yes, everyone agrees that this will reduce core cost and thus core vendor profit, but since somebody is sure to do what the carriers want, both vendors knew they'd have to go along eventually.  It's a demonstration that the core network is nothing but a bit pump, something that has always been difficult to differentiate and that will eventually become virtually impossible to differentiate.  Huawei, of course, is counting on that and looking to enter the deep-core electro-optical field with a suite of low-cost products at the right moment.

It used to be that the unit cost of a fiber bit was high enough that efficient utilization of fiber was the only issue, but now that cost is declining as fiber technology improves.  The value of aggregating traffic in electrical devices to efficiently fill transport pipes is declining with it.  Under-utilization of fiber might well be a cheaper option even today (as it is in the core) in some metro areas, and that’s likely to be true in about 75 of the US’s 250-odd standard metro statistical areas (the old LATAs) within two years.
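A minimal sketch of that trade-off, with hypothetical prices (the rough 5:1 routing-to-optics cost ratio echoes the carrier figures cited above):

```python
# When does wasting fiber beat aggregating traffic? All prices are
# hypothetical; the rough 5:1 routing-to-optics cost ratio echoes the
# carrier figures cited above.

router_port_cost = 50_000        # assumed aggregation router port
lambda_cost = 10_000             # assumed cost to light one wavelength
lambda_capacity_gbps = 100
flow_gbps = 40                   # a metro flow well under one wavelength

# Option A: aggregate flows through a router to fill the wavelength,
# charging this flow its share of the lambda plus a router port.
fill_fraction = flow_gbps / lambda_capacity_gbps
aggregated_cost = router_port_cost + lambda_cost * fill_fraction

# Option B: give the flow its own under-filled wavelength, no router.
dedicated_cost = lambda_cost

print(f"Aggregate and fill the pipe: ${aggregated_cost:,.0f}")
print(f"Run a dedicated, under-filled lambda: ${dedicated_cost:,.0f}")
# As lambda costs keep falling, Option B wins in more metro areas.
```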

If you can't justify electrical aggregation in a low transport-cost-per-bit world, then what else is there?  One answer is "features," but everybody has long realized that you can't be customer- or service-aware in the core, or in fact in any aggregated stream.  You need to touch traffic at the edge, before it's aggregated, where you don't have to sort flows apart to act on them.  Another answer is to assert that management processes are somehow more effective in an electrical network, but operators resoundingly reject that—it's more expensive to manage higher-layer devices.

The net-net here is that core routing can't be defended.  But where core routing loses, edge routing wins.  Services are what people buy, and so creating them is what creates the network's value.  Services touch networks where you can afford customer touch, which is at the edge.  While we always draw networks as layered structures, even though the old OSI model is long obsolete, a factual map of the network would show the "service layer" and "network layer" touching in the edge device, the edge switch or router.

Cloud services in any form, and in fact the service layer in any form, explicitly undermine the core because you can't make the core anything other than a bit conduit in a service network.  Nobody will be able to defend a contrary position even two years from now, I believe.  BT's discussion of vertical-market clouds is a proof point here; if you are going to focus not just on cloud computing but on cloud computing with an industry-segment slant, you are clearly moving to a level of service differentiation beyond what the network can realize.  But because a vertical is a company characteristic, a connection to the company is an implicit connection to that vertical, and you can differentiate by industry at the edge.  Moral:  the future is all about services and their coupling to edge routers.  That's what you need to watch to understand the fortunes of the big network vendors.

 

Intel, Dell, and Netflix: The Meaning

Intel has embarked on what might be the biggest battle of its corporate life, the battle to become relevant in the embedded system and appliance space.  While Intel has a license to produce ARM chips, it realizes that exercising it isn’t the answer to getting into the smartphone/tablet space.  Not only would it suffer in terms of profits after the license fees, it would be perpetuating someone else’s processor architecture in the hottest space in the market.  But wanting relevance isn’t getting it.

The big barrier for Intel to cross is getting big-name appliance OSs, which I’ve been calling “Embedded Control OSs” or ECOSs, ported to their architecture.  One reason why Intel got so into the MeeGo Linux model was that they could easily support the porting of that OS to their architecture.  They can do the same with Android (and in fact are doing just that) but it’s harder to get iOS moved over; Apple is in sole control there.  However, even getting the OS ported isn’t going to solve the problem because there are hundreds of smartphone and tablet models out there already, and more arriving every day.  Given that Intel won’t be ready with even a minimal offering until 2012 and won’t be competitive in performance until likely 2013 or even 2014, things could get tough for them.

The reason Intel cares is shown by another thread of discussion at its recent conference.  The company was very defensive about the future of the PC, saying it wasn't going to become an irrelevant dinosaur in a world of tablet mammals.  Intel made the PC market, and still commands it (AMD's efforts notwithstanding).  If that market takes a hit because consumers start buying tablets (which HP's results suggest is already happening, though arguably too few tablets have shipped to account for the whole effect), then the loss to Intel in PC chips has to be made up.  That means more than just matching the volume of CPUs lost, because appliance CPUs carry much lower prices and profits.  Intel has to command the appliance space.

The only thing Intel has going for it there is that both of the key appliances—smartphones and tablets—are going to enter a kind of "window of susceptibility" in late 2012.  In the smartphone space, the combination of 4G rollout and normal product cycles will put a large number of users in the market for a new phone.  In the tablet space, the Apple iPad onrush will have generated an effective Android response, which means that mass-market rollout of tablets will be starting.  If Intel can be ready for that two-barreled market shift, they can be a player.  The question is how to do that.

What they need to avoid at all costs is linking up with Microsoft and Phone 7 on this point, something that we hear is being promoted by Microsoft/Nokia to Intel even now.  As tempting as re-launching the “Wintel” alliance might seem, Phone 7 isn’t the star Intel wants to hitch their wagon to.  Similarly, they need to abandon MeeGo in favor of Android simply because they can’t promote another OS at this stage; there are already too many out there and developers won’t latch on.

In a related matter, Dell reported its numbers and showed a sharp gain in profit, contrasting with HP's dismal numbers and outlook.  The difference, of course, is that Dell has much lower consumer exposure than HP and a narrower product line, with fewer management cycles spent trying to organize all the profit pieces.  There are a number of interesting lessons to be learned here.  While the cloud is shifting compute focus back to a central data center model, it's not driving the PC out of businesses.  Also, professional services aren't the cure-all that many hoped they would be; a product company has to be a product company, or get out of that business and become Accenture.

Moving on to another topic, Netflix has been named the number one source of downstream traffic in the US, accounting for just under a third of all bandwidth consumed.  Since Netflix is far from the only video source, that means video is the overwhelming majority of downstream traffic.  This only further highlights the problem operators face.  Not only are they being asked to capitalize increased traffic that their current all-you-can-eat pricing model doesn't monetize, they're subsidizing the cannibalization of their own TV revenue opportunity.  That's particularly true for the cable MSOs, whose primary revenue stream has always been TV.  This market is getting close to a breaking point.

 

A Tale of Two Companies

HP has lowered its forecast for the year, and word of that move, which broke yesterday, caused tech stocks to shudder.  It raises a serious question about just what's going on in tech, a question that doesn't have any easy answers.  That means the future of tech as we know it may be…well…uneasy.  To understand the issue, we need to tell a Tale of Two Companies.

When companies like HP and IBM were founded, they were profitable based on the sale of business technology.  There was no personal computer, no tablet or smartphone.  In 1980, IBM and HP were duking it out for a growing business computer market, and Digital Equipment Corporation, the number two computer company, sat between these two giants.  Then, in 1981, IBM launched the PC and the market (and world) changed.  Within two decades, PC competitor Compaq had bought DEC, whose market position was compromised because its CEO couldn't read the handwriting on the wall.  HP then bought Compaq.  IBM shed its PC business—the business that had launched the revolution—to Lenovo and became a pure business computing play.  HP tried PDAs, then bought Palm to take a run at smartphones.  IBM shed its networking group, and HP bought networking giant 3Com.  It sure seems like IBM and HP have gone in opposite directions, and certainly their current financial positions seem very different.

Should HP never have gotten into personal computers?  IBM bailed, and won on that bet.  Same with networking.  But remember that IBM launched the PC and rode the PC wave convincingly for a time, and IBM networking (SNA) was the bastion of enterprise networking during the formative years of distributed computing.  IBM didn't avoid new things, but it avoided things that had seen their best days.

For everything, there is a season.  The cost of consumer technology has fallen steadily, and that’s the most critical trend in the market.  With the price of gadgets falling from a time when they cost a worker an average of six months’ income to the point where they cost half-a-week’s income, people jump in and out of trends with alarming speed.  There’s no financial inertia to overcome.  IBM saw that, I think, and decided that market wasn’t going to sustain margins and was going to require making an increasingly large number of risky strategy bets (buying Palm comes to mind).  So they pulled back, betting on the more stable business market that they’d never walked away from.  HP, during that same period, had re-focused on the consumer, and it paid off for a while.  Not now, nor likely any time soon.

Consumers are now, so the classical wisdom goes, "abandoning PCs."  Not so; they're just doing what they've always done, which is spend to self-gratify.  Every year PCs get more powerful, and yet every year the range of things you do on one diminishes rather than expands.  PCs used to be the gaming system of choice, but they've been displaced by low-cost game consoles and portable devices.  They used to be powerful productivity tools, and we're now dumbing them down to thin clients because we can't afford to support their complexity.  If power doesn't matter in PCs any more, what does?  Cheapness.  IBM didn't want to be in that kind of market, and HP doubled down on its bet there.

Would IBM be where it is today without the PC?  No way.  Would it be where it is today had it focused on the PC as HP did?  No.  IBM would also have wasted a zillion dollars and management cycles trying to defend a position in networking.  So what IBM did right wasn't avoiding consumerism, or avoiding new things, but jumping on the bus when it was going IBM's way, and off when it took a strategic detour.  HP has, in contrast and all too often, gotten on too late and gotten off way past its stop.

What could HP do at this point?  There's no easy answer if "easy" means easy to execute.  There is an easy one in terms of ease of discovery, though.  HP needs a strategy, a vision, that unites its purchases.  IBM, whether in mainframes or PCs, in networking or out, always had that vision and still does.  HP never had a unifying mission for its elements, only a unifying theme of making money from them.  They couldn't be symbiotic because there was no ecosystem to cooperate within.  That made the whole less financially valuable, and more risky, than the sum of its parts.  And it's going to be darn hard to fix this problem quickly.

 

Cloud Musings

A recent research report on cloud computing says that SMB buyers prefer to get their cloud applications from a single provider rather than to mix and match.  That’s not surprising given that for over a decade, SMBs have cited difficulties in sustaining strong technology talent as being among their top three tech problems.  But it also shows that “the cloud” means different things to different people.

In my own research, SMBs have consistently said that a “cloud service” is any SaaS offering, meaning that they equate the term to “hosted”.  Interestingly, while 100% of enterprises know what “IaaS” and “PaaS” mean, only 31% of SMBs do, and almost a quarter of those who say they’re interested in or consuming SaaS ask to have the term defined before answering.

If you strip out the “hosted” applications of the cloud, SMBs currently spend less than 3% of their IT dollars on cloud services, bordering on statistical insignificance.  If all hosted services are considered, the number is about 6% for mid-sized businesses and about 11% for small businesses, the latter being higher primarily because of hosted web presence, email, and backup.

There's a moral here.  All the surveys about cloud success or the cloud explosion are dodging a hard reality: it's not the number of companies using cloud services that matters but the percentage of IT budget moving to the cloud.  "Cloud penetration is doubling" just means that twice as many people are trying the cloud, not that the cloud's role in IT is increasing that rapidly.  Further, since most of these surveys target SMBs more than enterprises, the results are biased first by the differences in the SMB space and second by SMBs' lack of understanding of what "cloud computing" really is.
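A toy calculation shows why the two measures diverge; every figure here is hypothetical, chosen only to illustrate the arithmetic:

```python
# Why "cloud penetration doubled" doesn't mean cloud's role in IT doubled.
# All figures are hypothetical, for illustration only.

companies = 1000
it_budget_each = 1_000_000            # dollars per company per year

adopters_last_year = 100              # 10% "penetration"
adopters_this_year = 200              # penetration has "doubled"

cloud_share_of_adopter_budget = 0.03  # each adopter moves ~3% of IT spend

def cloud_budget_share(adopters):
    cloud_spend = adopters * it_budget_each * cloud_share_of_adopter_budget
    total_spend = companies * it_budget_each
    return cloud_spend / total_spend

print(f"Last year: {cloud_budget_share(adopters_last_year):.2%} of IT budget")
print(f"This year: {cloud_budget_share(adopters_this_year):.2%} of IT budget")
# Penetration doubled, but cloud is still well under 1% of total IT spend.
```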

Among cloud players, IBM and Microsoft get the highest marks on practical cloud strategies, and the second-highest go to the common-carrier cloud services, even though AT&T’s cloud offerings are still developing and Verizon’s (via Terremark) are only recently branded by the carrier.  The reason is that enterprises see any outsourcing as a risk, IT outsourcing as a conspicuous risk, and outsourcing of any critical applications or data as a risk bordering on unacceptable.  They demand both a trusted partner and a credible strategy for risk management.  Right now, both the big vendors we’ve named offer both, and the carriers are trusted in terms of financial stability, professionalism, and quality of infrastructure.  Where vendors have the edge is in the planning of a cloud-ready IT commitment.  I think that the latter is more important than most people realize; simple IaaS cloudsourcing doesn’t address enterprise needs except in development and pilot testing.  Anything other than IaaS requires significant SOA-like integration, something IBM and Microsoft realize and others either don’t realize or don’t address.

The assertion that the Sony PlayStation network hack was hosted on Amazon’s EC2 isn’t raising all that many hackles among the cloud promoters, but it has demonstrated to enterprises yet again the concept of “collective risk”.  A single company, particularly one with a low public profile and little customer credit data on file, has relatively little risk of being targeted by hackers.  A cloud hosting a thousand or ten thousand or a million companies is a much more attractive target.  Sony gets attacked because they’re big, but would Mom’s Pizza be at risk?  Not as a stand-alone, but it might well be part of a larger risk pool if their cloud host is attacked.  Thus, moving to the cloud could raise risks of hacking.  Not only that, if the cloud is hosting the hackers, might they not be able to hack others on the cloud more easily, exploiting interprocess issues or opportunities for denial of service?  Hacking is an ROI- or publicity-driven process, after all.

Some of the earliest cloud successes (in total-revenue terms) are likely to be services like the one AT&T is offering for Windows support.  These services don't ask customers to outsource data or critical applications, only support, and cloud resources are applied not to run customer apps but to improve support economies of scale and thus improve both pricing and profits.  This illustrates why I believe that service provider cloud computing and service provider service-layer intelligence are likely to converge on the same architecture.  It's only logical to assume that a provider who successfully sells a support service would be successful in selling cloud services, if the tie between the two were clear.  It's also better for economies of scale if IT functionality (whether the providers' own apps used in customer support and services or the customers' apps hosted in a provider cloud) uses common servers/software and common support tools.

Speaking of Amazon, early responses from my spring enterprise survey suggest that the EC2 problem they had didn’t impact enterprise cloud planning much.  That’s because enterprises were not, in the main, considering EC2 as the host for their critical applications.  What the survey shows among SMBs is that those with clouds in their eyes, the early adopters, took the process in stride while those who were on the fence were more likely to be hardened against cloud usage.  In short, it increased the skepticism among those still considering the cloud, which may (if the feeling persists) impact the sales cycle for cloud services.

That's a good issue to close on.  We tend to forget that a market is normally classified as either push or pull, meaning sales-driven or demand-driven.  Some buyers will go out into the marketplace (meaning the Web, in most cases) and look for cloud providers.  That's not likely to be how mission-critical apps are handled, though.  For those, you need a sales effort to create a sense of personal accountability—my salesperson will look out for me.  The larger players like AT&T, IBM, Microsoft, and Verizon have a sales force that can hug and cuddle wary buyers, and that is more likely than anything to propel them to the top of the cloud heap.

 

Google Boots Its Chrome OS Launch

Google's developer conference has generated a flood of news, and that's a bit of news itself.  There was a time when big announcements were linked to industry events like trade shows; the new trend of linking them to developer meetings shows a new dynamic in the industry.  Actually, it shows a revalidation of an old one.  The PC, in 1981, wasn't objectively superior to Apple's line, but it was an open platform that encouraged developers and even hardware add-ons at a time when Apple was at best reluctant in that space (CIMI was an integrator at the time, and we could not join Apple's developer program).

The second major thrust in the Google stream, after Android, was Chrome OS and the "realization" of its thin-client promise.  I put that term in quotes because the release of Chromebooks did instantiate the Chrome OS promise but may not have fulfilled it.  The Chromebook has the features that were expected: it's a thin client that integrates tightly with Google Docs, it provides a client for desktop virtualization (Citrix demoed this) for access to Windows apps, and you can even get it on a license program, almost like a set-top box, bundled with Google's business services.  So what's not to like?

The price.  There was speculation that Google might offer Chromebooks for a small annual increment over Docs, which would have made them a compelling deal.  It didn't; a Chromebook/software package will cost a business about $350 per year.  If you bought a Chromebook outright, the price would be over $400, which is more than a low-end Windows laptop and almost twice the price of some netbooks.

The value proposition here is vastly complicated by the price, of course.  If a Windows laptop costs less than a Chromebook, how long will it take for the Chromebook to pay back?  In capital-cost terms, obviously, it never will.  Yes, you can argue that you save more with the Google replacement for Office, but the problem is that you can run that replacement on a Windows laptop too.  The same goes for thin-client apps.  And if you need Windows virtualization you need Windows, and if you have a Windows laptop instead of a Chromebook you have it already.
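Run the numbers and the problem is obvious.  The bundle and outright prices below come from the launch coverage above; the laptop price is an assumption for illustration:

```python
# Chromebook-vs-laptop payback sketch. The $350/year bundle and the
# "over $400" outright price come from the launch coverage above; the
# laptop price is an assumption for illustration.

chromebook_bundle_per_year = 350     # Chromebook plus Google services
chromebook_outright = 430            # one-time purchase ("over $400")
windows_laptop = 400                 # assumed low-end laptop price

# In pure capital terms the payback is negative from day one:
print(f"Capital penalty vs. laptop: ${chromebook_outright - windows_laptop}")

# Software savings don't rescue it, because the Google Office
# replacement (and thin-client apps) run on the Windows laptop too,
# so those savings apply equally to both sides of the comparison.
savings_unique_to_chromebook = 0
print(f"Savings unique to the Chromebook: ${savings_unique_to_chromebook}")

# The subscription route is no better: three years of the bundle
# tops the laptop's entire purchase price.
print(f"3-year bundle cost: ${3 * chromebook_bundle_per_year} "
      f"vs. laptop at ${windows_laptop}")
```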

The Chromebook launch, in my humble view, is a fiasco for Google.  They’ve taken a promising notion, a cloud client, and created a market entry strategy that most companies won’t be willing to adopt, which means they’ll have to fight their way back into consideration at some later date—or fail.

Larry Page needs to take some inspiration from Steve Jobs.  Not only would Apple never have done a launch that lacked a compelling early value proposition, it would probably never have done this sort of deal to start with.  Why would Apple, which aspired to enterprise success for literally decades, have failed to grab the Thin Client Brass Ring, other than that it wasn't a brass ring?  The problem is that hardware and software costs are declining.  If you can build a netbook, hard drive and Windows license included, for less than a Chromebook, then you're charging too much for the Chromebook; you're trying to pad your profit margins.  Apple knows that you never want to get into a business that's commoditized from the first.  Sure, there are always price wars, but not at market entry!  At least not for companies that need higher margins.  Google's Android model, where somebody else has to make and sell the box, is also the Chrome OS model, and the failing here is clear.  PC manufacturers have nothing to gain by pricing Chromebooks aggressively and reducing their own profits.

But whether Chrome OS succeeds or fails in the long run, it's clear that what it will do is crystallize the thin-cloud-client picture.  There are benefits to pulling complexity off the desktop, just as there are benefits to substituting tablets for laptops in the consumer space.  If what you want from the computer you use is simply an online on-ramp, then strip off the junk that's not essential to that mission and create a cheap, cloud-dependent device.  If that proposition is valid, and tablet sales show it is, then cloud services are literally the way of the future.  All of networking will collapse into hosting services in some way.  All network operators will become cloud providers, or they'll sink into financial ruin.  All equipment vendors will become cloud equipment providers, or they'll follow that same path.

Another dimension of Google's developer conference also impacts the computing space, though.  Android is a Linux variant.  One of the major problems Linux has had is a lack of good desktop software; another is a lack of reasonable support.  Android, especially given Google's broadening vision of the devices it runs on, could become a kind of next-gen Linux distro, competing with SUSE and Ubuntu and the rest.  With Google behind it, and a broad population of devices supporting it, Android could capture more interest as a desktop and laptop OS.  My surveys and modeling show that if you could make LibreOffice a true equivalent of Microsoft Office, and if you had good video and photo editing software on Linux, you'd end up with something that could make users abandon Apple and Microsoft.

That's what I think is the craziest thing about the Chrome OS picture.  Why would Google push replacing the computer with a thin client even as it expands the value of its premier OS as a computing platform?  If Google wants to hurt Apple and Microsoft, pair Google Docs with LibreOffice, launch Android as a PC operating system, port Picasa and some video tools, and then offer the package to hardware vendors.  You can't win in PCs and eliminate them at the same time, Larry!

Is There a Future in Your Cisco?

Cisco reported its results, which the Street has described as those of a "company in transition."  I disagree; they're the results of a market in transition and a company not yet transitioning.  The signs of the conditions that have dumped Cisco's stock and fortunes have been clearly visible for over four years, and alarmingly visible for two.  You can't hope your way out of market change; you have to do something proactive.

Financial analyst comments on the Cisco situation had some common themes.  It’s clear, they say, that profit margins are collapsing under competitive pressure.  It’s clear, they say, that Cisco can’t be a “growth company” any more.  It’s clear that major cost-cutting is in order.  Some say it’s clear that to obtain shareholder value, Cisco needs to split up.  None of this clarity is clear to me, frankly.

"Competitive pressure" is an effect, not a cause.  Price differentiation comes out of the absence of feature differentiation.  For years now, router and switching vendors have pushed ever more arcane bit-pushing tricks as "differentiators" that buyers neither believed nor even understood.  All this time, buyers have cried for substantive strategies to lead them through the transitions of networking—and didn't get them.  Cisco, more than its competitors, relied on sales pressure and incumbency.  That won't work when all the buyer wants is the lowest price for a hamburger.  If you want the buyer to get off the hamburger kick, you need a strategy driver to push.

A "growth company" is one that can draw on new benefits to justify higher spending on the buyers' part.  That's what fuels growth.  Creating more traffic isn't creating more benefit.  Enterprises aren't measured by Wall Street on how many bits they push, but on what their bottom line looks like.  Cisco probably has a glimmering of this particular truth now, but it has been worse than most at finding the real value in networking.  Networking isn't a divine right or a mandate; it's a business tool, or a business, for Cisco's buyers—for the market.

Cost-cutting?  Cisco may as well lie down, put a rose on its collective chest, and await the inevitable.  Just as some economic problems demand you spend your way out of them, so do some market problems.  Cisco is absolutely right in its determination to broaden its TAM.  What it's wrong about is how to go about that broadening.  You can't just say "I'm a networking icon, so I'm an icon through and through."  Fix the growth-company business-case problem for your buyers.  It's as simple as that, and transport/connection networking is only a tool for improving productivity for enterprises or selling services for operators.  There's an application or service layer involved.  Cisco took a huge stride toward service/application relevance with UCS, and it needs to staff the hell out of it and spend like a sailor to make it work.

Which is why breaking up is a bad strategy.  Cisco dismembered is a bunch of mediocre business units that are as long on aspirations and as short on execution as Cisco as a unit has been, but that lack the ability to cross-fund and cross-sell.  You might get a bubble of stock growth fueled by speculation, but no value growth, and then the whole thing will collapse.

The final comment here is that the Street is dissing Cisco by comparing its year/year growth with competitors like Alcatel-Lucent and Juniper.  That would be fair if either of those guys were doing anything better at the market strategy level, but they aren’t.  Like Cisco, both companies have shown they see the Elephant, the “Life Fabric” we’re creating now with ubiquitous broadband.  Both have launched initiatives that address little teeny network pieces of that, but neither has taken ownership of the movement of the market that’s creating those pieces.  Both have the same chance Cisco had, both are making the same mistakes, and unless they change both will suffer the same fate—only likely much faster.  Did you see how long it took for Cisco to change from market darling to goat?  You ain’t seen nothing yet, as the song goes.

 

The Data Center, and the Life Fabric, of the Future

The big takeaway from Interop so far has been the battle for the data center, which is no surprise given that particular item has been on top of my surveys for both enterprises and service providers for eighteen months or more.  Interestingly, the financial analysts aren’t seeing anyone decisively winning that battle—at least not right away.  I agree, but the reason why that’s the case is more important than the factoid itself.  You can’t fix an outcome; what is, is.  In theory at least, you could mitigate a cause.

Data center networks are migrating because data centers are, and the drivers on the IT side of the migration are most likely to be giant server and/or software companies.  If you want a buyer to approve an eye-popping cost, and if you want to keep most of the money that's changing hands, you'll tend to exalt the benefits of your own gear or software and underplay the other component requirements—the ones you can't fill on your own.  Why is IBM doing OEM deals for network gear rather than making the stuff itself?  Because it wants to lance any objection boils at the network level while keeping the focus on the IT side.

In the service provider space—both network operators and OTT giants—we're seeing a drive to create a cloud without much vendor support from either the IT or the network giants.  This is in sharp contrast to past practice in the industry, where vendors presented a Fuller-Brush-man-like inventory of stuff that represented the operators' only choices, and thus drove network evolution.  The buyer is out of vendor control: today in the operator space, and eventually everywhere data centers are consumed.  That's going to create a whole new tech industry.

Google may be the best example of all of this.  At the developer conference, it introduced what might be (note the qualifier "might") the most revolutionary thing in Android since the first notion of an Android smartphone.  Android@Home is an attempt to make Android king of a little-known but highly important market space, that of "embedded control."  A device that has computer technology supporting its own functions but doesn't provide a GUI through which the general power of computing can be exercised has historically been called an "embedded control device."  There are arcane EC-OSs, and have been for years, and there have even been attempts to make a general-purpose OS like Windows or Linux into an EC-OS.  None have had much success, because nobody has really promoted the notion.  Google aims to change that, creating an Android-inhabited universe in the home where a grid of intelligence manages the environment, fills our needs, and so forth.  If you believe in a smart home, you believe in a pervasive EC-OS.  Google wants you to believe in Android as one.

There is no question whatsoever that Apple sees something similar, and is moving perhaps with less public fanfare toward that same goal.  Google’s move may flush out Apple’s own intentions, which I believe include the linking of an iOS home network with an Apple-provided cloud-hosted set of services.  That’s surely Google’s direction.  The two companies are envisioning “the cloud” in a more dramatic way, as a kind of “Life Fabric” that surrounds us through wireless connectivity and that hosts smart agent devices that each cooperate to play a role in how we work, play, and live.

 

Microsoft/Skype Threats, and a Juniper Exec Defects to the Enemy

Skype and Microsoft?  Well, apparently it's more than just a possibility.  There have been rumors swirling around a buyer for Skype for a week or more, but they've been just rumors.  The possible deal with Microsoft is a lot more than that—Microsoft confirmed it at about 8 AM today.  So now the question is "why?"  From what's been said, the big reason appears to be the creation of a communications ecosystem built to envelop Microsoft's gaming and mobile products, and I think it's clear that it would be extended to Microsoft desktop products as well, and could even offer hardware vendors an attractive reason to build a Microsoft-based tablet.

Skype is two things: a community that already includes tens of millions of active users worldwide, and a technology that can create a "behavior-centric" communications framework around any activity that's persistently interesting to users and has a social dimension.  Gaming is surely such an activity, and so is unified communication and collaboration for the enterprise.  I think it's clear that Microsoft is aiming at this, but I also think it's clear that Phone 7 and Microsoft's smartphone fate are tied up with this deal as well…and that's complicated.

Technologically, this might be an interesting time to make a Skype-based play.  Mobile operators are transitioning rapidly to LTE, which is pure IP.  While there are ways to tunnel TDM voice over LTE networks, a quick migration of mobile users to LTE would mean that an all-IP calling community would develop quickly.  That would call into question the whole IMS voice evolution because without much interconnect between TDM and IP voice, a lot of IMS is redundant.  If you don’t believe that, reflect that Skype already inter-calls without IMS.  So might Microsoft put Skype voice on its handsets instead of conventional voice?

It would depend on the operators.  Voice services are clearly not going to be profitable in mobile any more than in wireline, but they do sustain some revenue from non-broadband customers and justify at least part of the investment in wireline copper loop.  They’re also still a big source of mobile revenue, if one that’s clearly in decline.

P2P voice is the cheapest way to offer voice services, which is why Skype can be offered free.  Given that universal broadband will create a universal framework for something Skype-like, it's hard to justify spending bigger bucks to create another voice model.  Yes, the carriers have low IRR and can win a race to the bottom, but their horse in a future-voice race is more likely to be P2P-based than centrally mediated and server-based.  Remember that signaling load was supposedly what brought down Verizon's LTE network.  Why create more of it?
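A purely illustrative toy model of that scaling argument; real IMS and P2P signaling flows are far more complex, and every number here is an assumption:

```python
# Toy scaling comparison: centrally mediated vs. P2P voice. Purely
# illustrative; all figures are assumptions, and real IMS/P2P signaling
# is far more complex.

subscribers = 10_000_000
busy_hour_attempts_per_sub = 1.5
attempts = subscribers * busy_hour_attempts_per_sub

# Central mediation: every call attempt transits the core signaling
# complex, assumed here at ~10 messages per call setup.
central_core_messages = attempts * 10

# P2P model: after a directory lookup, peers signal each other directly;
# the core sees only the lookup (request + response).
p2p_core_messages = attempts * 2

print(f"Central model, busy-hour core messages: {central_core_messages:,.0f}")
print(f"P2P model, busy-hour core messages:     {p2p_core_messages:,.0f}")
# The central complex carries ~5x the signaling load in this toy model,
# which is the economic appeal of a Skype-like P2P voice architecture.
```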

Another interesting news item is that David Yen, Juniper's QFabric engineering ace, is leaving Juniper for Cisco, where he'll take over a new business unit built around the data center, including the UCS server line, virtualization, and the cloud.  This might, at one level, be an indication that Cisco is taking the data center and its strategic shifts more seriously, but I still have a problem believing that one person can be the catalyst that makes a sales-driven company appreciate marketing and market strategy.  There's still a lot of Cisco cotton wrapping Juniper-bred Yen, and even Juniper seriously underplayed QFabric at the strategy level.  Does David see that, and will he do something better for Cisco?  It's a hard role to play now, because little time remains to position Cisco's assets effectively.

The Microsoft/Skype thing also puts pressure on Google, who after all was one of the companies said to be in discussion with Skype.  Google Voice and Chat have been modestly successful, and Google now has to either concede the IP/UC space to Microsoft (fat chance!) or step up its own efforts here.  The problem is that monetizing that kind of effort will be difficult unless Google can really involve outside partners.  The timing of the Google developer event is thus problematic; it’s too soon for Google to have devised a strategy for Voice/Chat that would integrate with Android, and yet it’s now clear they’ll need one.

 

Cisco Angst and Open Switching/Routing

Today begins with more negative comments on Cisco, and it's hard not to wonder at the way a company that was once a model of tech success ("I want to be the next Cisco") is now seen as a model of transition failure.  The idea of breaking Cisco up into pieces, so that valuable parts can grow faster than "legacy" parts, has been raised again on the Street (and in the media).  There are a lot of questions about whether Cisco can recover its past glory.  That's the wrong question, and if competitors keep asking about the past, they're going to end up looking back on it from their own virtual retirements.

The challenge Cisco faces is embodied in (but, I hasten to point out, not caused by) the notion of “OpenFlow”.  This is a concept of switch and router control that definitively separates the forwarding and control planes of network devices, centralizing the latter.   The separation concept is far from new, and truth be told the notion of central network control arguably goes back to IBM’s SNA.  Thus, there’s nothing conceptually revolutionary about OpenFlow.  There may well be something commercially new, though, and even revolutionary.
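As a concept sketch only (not the real OpenFlow protocol or API), the separation looks something like this: a dumb forwarding element matches packets against a flow table, and a central controller decides what goes into that table:

```python
# Concept sketch of control/forwarding separation, OpenFlow-style.
# This is a toy model, not the actual OpenFlow protocol or API.

class ForwardingElement:
    """Dumb data plane: match packets against a flow table, apply actions."""
    def __init__(self):
        self.flow_table = {}   # (src, dst) -> output port

    def handle_packet(self, src, dst):
        if (src, dst) in self.flow_table:
            return f"forward via port {self.flow_table[(src, dst)]}"
        # No matching flow: punt the decision to the central controller.
        return controller.packet_in(self, src, dst)

class Controller:
    """Central control plane: computes routes, installs flow entries."""
    def packet_in(self, switch, src, dst):
        port = self.compute_route(src, dst)
        switch.flow_table[(src, dst)] = port   # program the data plane
        return f"flow installed, forward via port {port}"

    def compute_route(self, src, dst):
        return hash((src, dst)) % 48           # stand-in for real routing logic

controller = Controller()
switch = ForwardingElement()
print(switch.handle_packet("10.0.0.1", "10.0.0.2"))  # miss -> ask controller
print(switch.handle_packet("10.0.0.1", "10.0.0.2"))  # hit  -> local forward
```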

Focus on the “Open” part here for a moment.  We’ve long had open-source software in networking, and the Berkeley Software Distribution (BSD) component of UNIX was the source of much of the networking software during the 80s and even early 90s.  That includes software running in “routers” or network nodes.  What OpenFlow is doing is making open-source fashionable again in networking, at a time when pushing bits as a differentiator is highly unfashionable.  I’ve cataloged the issues that have created the bit-pushing problem before, but those issues have created a largely unfocused angst up to now.  Two factors have changed that, and Cisco’s fall is the first.  Make no mistake, when the market incumbent is in trouble, the market is in trouble.

The second factor is the Open Networking Foundation.  This is a non-profit formed in March and sponsored by Deutsche Telekom, Facebook, Google, Microsoft, Verizon, and Yahoo.  Others have joined (including Cisco) since.  The goal of ONF is to promote what it calls “Software-Defined Networks”, and OpenFlow is the explicit example of such a network.  There’s going to be a fair blizzard of OpenFlow stuff at Interop, and that shows that there are commercial legs behind the idea.  That’s critical because to convert unfocused angst into commercial drive, you have to be able to buy something.

This is especially relevant now because Alcatel-Lucent, in contrast to Cisco, presented good numbers for the quarter.  It's also gaining in strategic influence in my surveys; it looks like the company will turn in its best performance of the last five years in the Spring 2011 sweep now underway.  The question is whether Alcatel-Lucent will try (somewhat belatedly) to leverage its own service-layer strategy and thus preempt some or all of the business value and technical momentum of ONF and OpenFlow.  Notably, Alcatel-Lucent is not an ONF member.  With the combination of a strong lightRadio-created RAN position and one of the only two even semi-articulated service-layer strategies in the market (Application Enablement), it could still be a player.

So is this also the path for Cisco?  I doubt it, simply because Cisco has never been a strong supporter of standards, given their erosive impact on margins.  But it's a dilemma for Cisco because OpenFlow suggests (though doesn't yet prove) that there will be uncontrollable commoditization at the router/switching level.  Not supporting the instrument of change doesn't undermine change; it undermines you.  On the other hand, you don't want to push a market trend against your own interests.  Cisco is a member of ONF, but whether it will try to adopt the idea or manipulate it isn't clear at this juncture.  Might OpenFlow be the path to Cisco's future?  No.  But it might be something that shows Cisco there is a different path to its resurgence than it might think.

 

Nielsen's Data, Red Hat's Cloud

Nielsen is giving us some new data on video viewing habits, and there are a lot of interesting interpretations you can draw from it.  It's particularly fascinating if you factor in some of the data I've collected on content use, and the results of my content consumption model.

At the high level, the Nielsen study says that viewing is increasing slightly, fueled in part by expanded use of online material, and in particular mobile video.  But there is a direct correlation between the amount of traditional TV viewed and the age of the viewer, something I've also noted in my own models.  The high-school-age crowd watches half the TV hours that seniors do.  It's how this statistic is interpreted that's significant.

The classic view is that we're a population being weaned away from TV by online viewing.  The reasoning is that as the population ages, the teen behavior moves up the age ladder until everyone is watching half the TV.  It's plausible on the surface, but if you look deeper at the numbers you see some issues.  In particular, if you look at the hours spent homebound for each population segment, you see that TV viewing correlates nearly 100% with homebound time.  In short, people watch TV when they're home, and as they age they're home more.  Teens, who escape home with fervid determination to evade supervision, aren't home to watch TV.  It's as simple as that.

Another interesting fact is that time-shift viewing increases quickly with age up to middle age, then declines.  That shows that as people age, they reconnect with TV and develop a “show dependence” on favored material that they then record for later viewing.  This demonstrates that the habit of not watching is being broken; why record something you didn’t even know was on and never watched?

Young viewers use mobile video more because they're more mobile.  The goal of "not-at-home" is a goal of avoidance; you have to tune your entertainment behavior away from place-dependence and thus toward mobility.  It's also more likely you'll do this as a viewer not yet committed to network TV, without many "favorite" shows.

Moving more to the enterprise side, Red Hat is releasing a beta of its OpenShift cloud platform.  What's interesting here is that OpenShift is a PaaS framework designed to support development of cloud-enabled apps, not a virtual-machine framework like an IaaS service would be.  This could be a way of dodging big incumbents like VMware, but it might also be a recognition that cloud computing based on cloud-enabled apps is far more efficient and performs better than cloud computing based on non-enabled apps, no matter what the framework of the cloud.

Microsoft and IBM preach a more cloud-enabled app story than most vendors, and they also preach more PaaS, hybrid cloud, and private cloud.  This month in Netwatcher we’ll take a more detailed look at the architecture issues here, and how enterprises are seeing their cloud plans developing.

In the economy, futures have responded positively to the jobs report, which beat the estimates in number of jobs added.  Still, it does appear to me that job growth this spring has been slower than expected, no doubt pushed down by higher fuel prices.  The good news there is that commodities in general are retreating, which I think is due to speculators clearing their positions as they realize little more growth can be expected.  Gold and silver have also fallen, which suggests that even here we had a more speculative bubble than an arbitrage on the value of the dollar.  I think we’ll see some improvement over the summer.