Tech Revolutions?

Nobody can say that the tech space isn’t moving at frightening speed, and that’s as true on the business side as with the technology itself.  We’ve got two stories of stunning shifts in the tech business landscape, and while neither is confirmed at this moment, there’s certainly a level of plausibility that can’t be ignored.

The first of the stories, one that’s been percolating below the surface for a bit, is that the “wholesale 4G” hopeful LightSquared is looking to dump its plans to build out a network and instead piggyback on Sprint’s LTE modernization plans.  This particular tune goes back at least six months; I’ve been skeptical from day one about a pure MVNO-host model for LightSquared given the plummeting margins in mobile broadband.  In fact, I told a vendor then that I’d not put much stake in the LightSquared build-out plans.

The obvious question now, presuming the rumor is true, is what this does to NSN.  Given that in my view they never had a shot at this because it was never going to be financially feasible, the answer is “nothing” in an objective sense.  Subjectively it obviously would hurt NSN because it looks like another slip from a company that seems to be slipping all too often.  NSN’s Motorola-pieces acquisition has had its closing delayed again, for example, and the company seems unable to shake a marketing and positioning paralysis that’s almost a legend in the space.

The less obvious question, and the most important one, is whether this is a clear sign that the mobile broadband space is already circling the drain before any significant LTE deployment and before the full wave of tablets hits the market and drives down revenue per bit even further.  If there’s any sanity in the current neutrality flap here in the US (since it’s politics, such sanity would be a happy accident), the core truth is that we’re attempting to use regulations to prop up a business model that can’t be sustained at the financial level.  That would suggest the right approach is minimalist: guarantee non-discrimination and stay out of issues like compensation and settlement, premium handling, etc.  Give the industry no exit path but quitting, and they might pull a LightSquared en masse.

At the root of the problem here is the lack of industry foresight into the impact of mobile broadband.  A full year ago I started pushing the concept that mobile broadband, flexible appliances, and consumer behavior form a big interdependent feedback loop (premium blog clients have access to the presentation, “Mobility, Behavior, and the Transformation of Telecom”).  Recent research by comScore shows that smartphones and tablets are already transforming behavior simply because you can use them everywhere you go, not just at home or in the office.  If checking Facebook is your thing, it’s likely that mobile checking will come to dominate simply because you can do it all the time.  Habitual behaviors that can be universally fed by one technology and only episodically by another will migrate to the place where they’re satisfied most of the time.  That means that wireline quickly becomes linked primarily to things that are necessarily anchored, like viewing stuff on an HDTV.  Tablets that size would really jam up the subways, after all.

In other tech news, the ITU has decided firmly to go its own way with an MPLS-Ethernet management standard, an issue it’s been fighting with the IETF over for a year or more.  This is another of those issues that I think is being covered badly and analyzed perhaps even worse.  The core of the dissent here, as operators tell me, is that the IETF wants transport MPLS to be managed as an IP service alone, and the operators want it to be managed at least optionally as an extension of the carrier Ethernet management standards.  Why?  Because you may recall that the whole T-MPLS/MPLS-TP thing was really spawned by a desire to cut off a set of Ethernet enhancements known as either PBB-TE or PBT.  Where MPLS-TP is intended to supplement Ethernet metro infrastructure, it makes sense to manage it using Ethernet standards (hence the ITU’s Y.1731 standard).  That would provide end-to-end management capability, and it’s not clear to me why anyone thinks that providing E2E management as an option in carrier Ethernet deployments of MPLS is a bad thing.  I guess I don’t understand standards-religious wars.
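To make the end-to-end management point concrete, here’s a minimal Python sketch of the continuity-check idea at the heart of carrier Ethernet OAM.  The class and field names are my own illustration, not the actual Y.1731 encodings; the one real detail is the standard’s rule that continuity is declared lost after roughly 3.5 missed check intervals.  The point of the sketch is that the maintenance endpoints sit at the ends of the service, so the same heartbeat exchange monitors the whole path whether the spans in the middle are native Ethernet or MPLS-TP.

    import time
    from dataclasses import dataclass, field

    @dataclass
    class MaintenanceEndPoint:
        # Illustrative stand-in for a Y.1731 MEP; names are my own.
        mep_id: int
        last_ccm: dict = field(default_factory=dict)  # peer MEP id -> arrival time

        def receive_ccm(self, peer_id: int) -> None:
            # Record a heartbeat (Continuity Check Message) from a peer.
            self.last_ccm[peer_id] = time.monotonic()

        def peer_status(self, peers, interval: float = 1.0) -> dict:
            # 802.1ag/Y.1731 declare loss of continuity after ~3.5 missed
            # intervals; the same rule is applied here.
            now = time.monotonic()
            return {p: (now - self.last_ccm.get(p, float("-inf"))) < 3.5 * interval
                    for p in peers}

    # The MEPs bound the *service*, not the transport technology, so the
    # CCM exchange is end-to-end regardless of what carries the middle spans.
    edge_a = MaintenanceEndPoint(mep_id=1)
    edge_a.receive_ccm(peer_id=2)
    print(edge_a.peer_status([2]))   # {2: True} while heartbeats keep arriving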

But then, tech is just a subset of society.

 

“Free” May Not Be!

Politicking over the net neutrality rules continues, with the House holding a hearing on the matter.  It’s pretty hard for the House to overturn an FCC order without enacting legislation, and legislation isn’t going to get through the Senate or survive a Presidential veto, so the whole thing is clearly an exercise.  The real test for the order will come in the courts, and it’s virtually impossible to say how long that might take to work through.  But the debate shows the depth of idiocy associated with the whole process.

The FCC’s position can be reduced to “consumers lack enough choice in broadband providers to allow them to vote against site blocking with their feet”.  True, but that’s not really the part of the order that most people object to.  You can simply say “no blocking of sites or discrimination in traffic handling based either on the site or the traffic type” and be done with it.  The FCC didn’t do that; instead they took excursions into things like whether offering premium handling was tantamount to a kind of blocking-one-by-exalting-another relativism.  Even the question of whether a type of traffic could be blocked is in my view largely moot; as long as operators can’t apply different rules in different geographies, any provider that imposed unusual handling restrictions would face immediate competitive disaster.  But the real problem is whether the FCC has any authority at all in this matter, and that’s what the courts will decide.

Meanwhile new developments in the market continue to raise the stakes for operators.  Facebook’s deal with Warner Brothers to stream movies is an example of how many different kinds of players are emerging to treat “the Internet” as a kind of inexhaustible free spectrum to be used to broadcast anything at near-zero cost.  But the “near-zero cost” really means “near-zero price” because the operators are forced to carry the traffic and often with no compensation at all.  Which opens my “technical” (as opposed to jurisdictional) objection to the order.  We need settlement for the Internet, period.  The bill-and-keep-and-laissez-faire-peering model is just too prone to arbitrage and that can kill investment incentives in key parts of the ecosystem.  The Republicans are right in fearing that, but they’re not raising the settlement issue either because it’s not politically popular.

What’s interesting here is that everybody’s pandering to voters’ silly lack of understanding of how the Internet works (and how it has to be sustained as a business) is heading toward a point where the worst-case solution is the only one left to apply.  You can’t stop usage pricing with regulations, even from Congress.  You can’t order a company to invest unprofitably.  Of all of the options available to deal with the explosion in incrementally free traffic, the worst is charging the user.  We need broadband fat pipes to deliver premium services, and we can’t get them by essentially declaring that there can’t be any premium services (no special handling) and that everyone can send for nothing.  Usage pricing, here we come, unless Congress or the FCC gets smart, and if that were possible they’d have done it already.

The media’s not helping here, nor the industry.  OFC is the usual bandwidth-is-the-main-course love-fest, even though at least one optical component vendor is signaling the industry that forward demand is looking really problematic.  The lack of a way to monetize incremental broadband traffic is an almost-certain fatal disincentive to develop it, and yet we’re prattling on about 100G Ethernet as if that were the solution to the problem.  It’s not capacity that’s a problem, it’s making capacity valuable.

In the IT world, an interesting development is that two major computer vendors (HP and Asus) plan to offer something other than Windows on laptops or netbooks.  HP will be making every one of its PCs dual-boot between Windows and WebOS, something that could be a very big move toward popularizing its Palm-acquired mobile OS.  Asus, according to a rumor, will be offering Android and/or MeeGo netbooks.  The decision by some major players to offer a non-Windows OS even as an option could be huge.  On the one hand, it could help small-form-factor PCs stave off tablet competition, but on the other it could be a major problem for Microsoft and could also create market backlash from users who don’t understand that the new “laptops” may not run the apps they’re used to having.  Securing two OSs in a dual-boot situation is its own support problem, of course, for the enterprise.

All of this testifies to the enormous problem that tablets combined with ubiquitous broadband and cloud computing could create for the market.  If we had, in theory, a completely ubiquitous “cloud” to tap into, then the only thing a user would ever need is an HTML5 browser glued onto a minimalist OS (does this sound like Google’s Chrome OS?)  The problem is that what we may be doing is creating another set of industry convulsions whose impetus depends on that inexhaustible broadband virtual spectrum.  If we don’t solve the problem of creating a survivable industry ecosystem here, we risk stranding billions in investment in a paradigm of computing that won’t work once the current bandwidth pricing model breaks down.

See?  Free isn’t any more free than freedom.

 

For the Week: March 7th, 2011

It’s obvious that the big question this week, politically and economically, will be what happens in Libya.  Politically, the situation poses a kind of double threat.  First, it’s a continuation of a kind of Middle-East-domino problem that might or might not result in a democratic sweep of the region.  Second, the turmoil puts western governments in a quandary, balancing the hope of reducing the loss of life against the risk of effectively entering the conflict on the side of the rebels.  Economically, the problem is the rising cost of oil and its effect on consumer prices (gas and goods) and the recovery.

There really isn’t a supply problem with oil; the Saudis have pumped enough to make up for the Libyan loss.  The issue is speculative purchase of futures contracts, which is what has driven up oil most of the times it’s jumped in the recent past.  Some curbs on the financial industry (which certainly needs curbing overall) could help the situation more than actions like releasing oil from the strategic reserves, but the administration knows that a credible threat to release reserves could curb speculation and help pricing.  It hasn’t helped so far this morning.

In tech, we’re counting down to the RIM tablet and wondering how competitors will manage the new iPad 2 in their plans for the fall.  The challenge for them all at this point is the sense that there still has to be another generation of Android tablets to catch up, which means that the current generation may be obsolete even before it’s released.  Not only does that hurt sales, it could even discredit a complete product line by stomping on its launch and limiting early interest and market share.  It’s the first announcement that gets the most ink.

Enterprises are also starting to work through the issues of tablet-based collaboration, and interestingly that’s one of the things that RIM is expected to try to exploit.  A tablet is most valuable as a collaborative tool for “corridor warriors”, in what my research identified as “supervisory intervention” applications rather than team activities.  In supervisory collaboration, a worker seeks approval or answers on a particular issue, an issue normally represented as a document or an application screen.  The process demands that the supervisory/support person share the document/application context and simultaneously discuss the problem.  Thus, you need voice and data together.  Some tablet vendors and media types have suggested that video collaboration is the answer—tablets have the cameras, after all.  The problem is that video takes a lot of capacity, people don’t like random video calls that intrude on their current context, and there’s no evidence that video helps pairwise relationships be more productive.  Voice is the answer, but how exactly do we use collaborative voice with tablets?  RIM’s answer is likely to be to create a tight link between the tablet and a BlackBerry, and that may be a good approach.  We’ve noted this issue in some enterprise comments on the difference between iPhones on AT&T and the same phone on Verizon; the collaborative multi-tasking support is better on the former, whose GSM-based network can run voice and data simultaneously, than on the latter, whose CDMA network cannot.

In the service provider space, I’m seeing renewed activity on service-layer projects, but so far no conclusive sign of forward progress.  We’ve been working with five operator projects in monetization, and two of the five are now looking like they’ll actually start doing something in the next three or four months.  The barrier is still the question of how to ensure that assets created to monetize a specific opportunity like content delivery are compatible with the monetization of other opportunities that may or may not be targets of projects at the moment.  The need to repurpose assets across services is clear to the operators, and while it’s becoming clear to at least some vendors (IBM has been pushing this with increasing effectiveness) it’s not universally recognized.

The thing that seems to be catalyzing the service layer is the cloud.  Network operators see cloud computing as a revenue opportunity, and they also realize that cloud-compatible infrastructure is a good platform for OSS/BSS and feature-generating software—even for content delivery.  IBM’s renewed push into the service layer is coming through its Cloud Service Provider Platform, which it explicitly touts as reusable as a service-layer framework in addition to hosting retail or wholesale cloud computing services.  How far this sort of thing gets is hard to predict, though.  It might be that by this fall real projects will be committed, and real money spent.

 

Cloud Futures

The most interesting set of tech developments today relates to cloud computing positioning and services.  At the Enterprise Connect conference, Salesforce and Global Crossing both made cloud announcements, and both had what I see as a common thread: create a SaaS cloud service and build a platform-as-a-service offering around it.  Salesforce did this based on a social-network-integrated customer service platform (Service Cloud 3) and GC did it based on an integrated UC-friendly model that bundles the cloud with SIP trunking.

We don’t need more cloud acronyms (or hype), but there’s some substance here, in the trend at least and possibly in both offerings as well.  Enterprises and SMBs have sharply different appetites for cloud services, with the former preferring IaaS/PaaS services targeted at backup and overflow management and the latter preferring SaaS.  The primary reason for the difference is that most enterprises already run most applications they need internally, and so there’s an issue of tossing one model and embracing another, something that gets assigned a high risk premium in cost-benefit terms.  But UC and social customer service aren’t typically implemented internally at this point, so there’s less pushback.  That could converge the sales models for all business sizes, and not only create a better total market picture but also create broader reference accounts to accelerate deployment.

There were also a number of service provider cloud announcements this week, beyond the GC one, and it’s becoming clear that the providers plan to be major contenders in the cloud space.  IBM and Microsoft are both positioning actively to support provider cloud plans, with the former stepping up their game in what we’ve called the “Symbiotic Cloud”, the model of provider cloud that combines internal IT (OSS/BSS), feature hosting and monetization, and cloud service offerings into one infrastructure.  Obviously this trend, and the fact that GC is already calling its cloud “network-centric”, means that network vendor cloud plans will have to mature in a hurry if they want to be doing something other than holding the coats of the IT giants.

The SaaS model is interesting to operators because it displaces more cost and thus justifies a higher price (that’s also true of PaaS versus IaaS).  Early indications are that operators are most interested in getting into SaaS via partnerships or through services like UC/UCC, where they believe they have a natural market.  Our research consistently shows that network operator cloud services are more credible when presented through a sales force than when presented through a retail portal.  It appears that some of the portal disadvantage could be overcome through effective marketing, but of course “network operator” and “effective marketing” are hardly synonymous even in areas where the operator has some incumbency.  Partnerships thus seem likely to rule here.

Most infrastructure players are not looking to partner in the cloud, largely because it reduces the profit margin.  Where operators have a potential advantage is that their internal rates of return are low, their ROI expectations are more easily met, and thus they can be profitable on a relationship with tighter margins.  Operators can also normally create what’s perhaps the best economy of scale in capital infrastructure and operations of anyone in the market, particularly if they involve their own applications to build up their cloud base.

Economic recovery is going to help the cloud market, I think.  We’re going to see demand grow faster than confidence, and that means that there will be at least an early tendency to take a service-based solution to incremental computing demand rather than to commit to a capital project.  In total revenue, this absolutely will not be the “Year of the Cloud” but it may be the “Year of the Cloud Paradigm” for those who want to sell the services.  Positioning an offering is likely to get a lot harder in 2012.

 

We Try to Position Juniper’s PTX

Juniper made a second major announcement in two weeks, this time its PTX MPLS-optical supercore switch.  The product’s roots probably lie in early interest (“early” meaning the middle of the last decade) by Verizon in a new core architecture for IP networks that would eliminate the transit routing that was common in hierarchical IP cores.  Since then, everyone from startups (remember Corvis?) to modern players like Alcatel-Lucent, Ciena, and Cisco has been announcing some form of optical-ized core.  What makes Juniper different?

Good question, and it’s not easy to answer it from the announcement, but I’d say that the differentiator is the chipset.  Junos Express appears to be the same basic chip used in the recently announced QFabric data center switch.  Thus, you could say that the PTX is based on a low-latency MPLS switching architecture that’s more distributed than QFabric.  Given what we perceive as a chipset link between the products, I’m creating a term to describe this: the Express Domain.  An “Express Domain” is a network domain that’s built using devices based on the Express chipset.  A PTX network is an Express Domain in the WAN and QFabric is an Express Domain within a data center.

If you look at the PTX that way, then what Juniper is doing is creating an Express Domain linked by DWDM and likely running (at least initially) in parallel with other lambdas that still carry legacy TDM traffic.  It becomes less about having an optical strategy than about creating a WAN-scale fabric with many of the deterministic features of QFabric.  Over time, operators would see their TDM traffic ebb away and would gradually migrate the residual to TDM-over-packet form, which would then make the core entirely an Express Domain.  The migration would be facilitated by the fact that the latency within an Express Domain is lower (because packet handling can be deterministic, as it is with QFabric) and because the lower level of jitter makes TDM-over-packet technology easier to get right.  Overall performance of the core would also improve.  In short, we’d have something really good for none of the reasons that have been covered so far in the media.
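A quick back-of-envelope shows why the jitter point matters for TDM-over-packet; the jitter figures below are my own illustrative assumptions, not Juniper numbers.  A circuit-emulation endpoint has to buffer roughly the peak delay variation’s worth of traffic to play out a smooth TDM stream, and that buffer adds delay of its own, so a deterministic fabric shrinks both:

    def jitter_buffer_bytes(rate_bps: float, peak_jitter_s: float) -> float:
        # A circuit-emulation playout buffer must absorb the peak
        # packet-delay variation, so its depth scales with rate * jitter.
        return rate_bps * peak_jitter_s / 8

    e1 = 2.048e6  # bit rate of an E1 circuit
    print(jitter_buffer_bytes(e1, 10e-3))  # ~2560 bytes at 10 ms of jitter (a busy IP core)
    print(jitter_buffer_bytes(e1, 50e-6))  # ~13 bytes at an assumed 50 us on a deterministic fabric

Smaller buffers mean less added delay and easier clock recovery, which is exactly what makes migrating the residual TDM plausible.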

This (if my interpretation is right) is a smart play for Juniper: create an MPLS-based virtual domain that can be mapped to anything from a global core to a data center.  Recall that I noted in the QFabric announcement that Juniper had indicated that QFabrics could be interconnected via IP/MPLS.  Clearly they could be connected with PTXs, and that would create a supercloud and not just a supercore.  What would make it truly revolutionary, of course, would be detailed articulation of cloud-hosting capability.  I think that capability exists, but it’s not showing up at the right level of detail in the positioning so far.  In any event, if you add PTX to QFabric in just the right way, you have a cloud—probably the best cloud you can build in today’s market.

If Juniper exploits the Express Domain concept, then the PTX and QFabric combine to create something that’s top-line valuable to the service providers.  Yes, there are benefits to convergence on packet optical core networks, but those benefits are based on cost alone, and cost management isn’t the major focus of operators right now—monetization is.  You can’t drive down transport cost per bit enough for it to be a compelling benefit in overall service pricing, nor enough to make low-level services like broadband Internet adequately profitable.  Furthermore, achieving significant capex savings for the operator means achieving less total sales for the vendor.  That’s the old “cost-management-vanishes-to-a-point” story.  But you can do stuff at the service layer that was never possible before, drive up the top line, and sell more gear overall rather than less.  Or so I think.  We’ll be asking for clarification on these points, and in our March Netwatcher we’ll report on what we find.

iPad 2 and Beyond

The big news today is Apple’s new iPad announcement, an event whose usual Apple drama was upstaged by a surprise visit by Steve Jobs.  The essence of the announcement was familiar; iPads are making us smarter, healthier, richer, better looking, and so forth, and that’s from the first version.  Now look what’s going to happen!

What is going to happen?  Well, 2011 is the “Year of the Copycats” according to Jobs, but Apple isn’t resting on its laurels.  The iPad 2 is based on a new dual-core chip that’s twice as fast, adds nine-times-faster graphics, front- and rear-facing video cameras, and a built-in gyro, and is 33% thinner (thinner than an iPhone 4)—you get the picture.  The new model will source HDMI at up to 1080p, which makes it a logical companion to HDTVs and probably presages more Apple work there.  Network-wise, it’s not breaking any ground yet—still 3G and WiFi and no LTE or WiMAX.  Pricing is the same, starting at about five hundred bucks.  Overall, it’s a major upgrade in performance and a modest improvement in features—the improvement being the dual cameras.

The new iPad 2 will certainly make things harder for the Android guys, particularly those who, like Motorola, have just announced their own tablets.  The current Android lot are at best just about equal to the original iPad, and most are significantly heavier and thicker, so the new iPad 2 trumps that form factor.  There’s a lot of clever engineering in the gadget, even down to magnetic catches on the cover that are sensed by the device and used to trigger a power-up when the cover is removed.  But you really don’t expect to see a cover demonstration on video at a launch event.  Apple is rising to the challenge of competition, but it’s also showing that even its own dramatically innovative culture can’t create a revolution every year.  The biggest bison can still be dragged down by a large pack of even little wolves.

But in the meantime, we do have a clear trend to follow.  Appliances are going to get lighter and more convenient but also more powerful, with better and better video.  That’s going to make enterprises look even harder at using tablets for worker empowerment, and it’s going to make tablets a more and more attractive way to consume video, making multi-screen strategies all the more important.  And most of all, we’re seeing yet again that the market is in the hands of the consumer device vendors.  Nobody else is making any real progress being exciting.  Without excitement there’s no engagement with the media.  Without media engagement there’s no ability to market.

In the mobile space, Verizon has decided to eliminate its unlimited-usage iPhone plan in favor of usage pricing, and if anyone thinks that usage pricing isn’t going to be universal for mobile broadband now and wireline broadband soon, they’re delusional.  Already the media is lamenting the death of the “bandwidth fairy” and beating its breast about the impact this will have on consumers and on the Internet.  Hey, I want a free iPad, and a nice Audi A8 for twenty bucks, and I could really use a Nikon D3 with a 70-200 VR lens (just ship it; no need to send a note to see if somebody already sent one, because I can use as many as you provide!)  The market’s not made up of wants but of exchanges of goods or services for dollars.  There has to be a willingness to exchange.

AT&T, which has been into usage pricing for mobile broadband for some time, is also becoming a major carrier proponent of cloud services, and announcements are expected from other providers through the spring.  Cloud computing is a perfect space for network operators because they’re accustomed to operating profitably at a low ROI, and that means better pricing and faster market uptake.  In fact, it’s a testament to the problems of revenue per bit on broadband access and Internet services that cloud computing is considered a profit opportunity.  Cloudsourcing applications have to be significantly (22-35%) cheaper to be credible.  What makes network operators so interested is that their own cloud infrastructure (for OSS/BSS and feature/content hosting) will create formidable economies of scale if it’s done right.  That makes the operator a cost leader in a cost-driven market.
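To put that 22-35% figure to work on an indexed example (the index value is mine; the percentages are from the research above):

    internal_cost = 100.0                  # index: in-house cost of running an app
    print(internal_cost * (1 - 0.22))      # 78.0: the most a forgiving buyer will pay
    print(internal_cost * (1 - 0.35))      # 65.0: what a skeptical buyer demands
    # An operator whose scale economies push delivered cost below ~65 can
    # win the deal and still keep margin -- the cost-leader logic above.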

You have to wonder whether everything technical is going to become either a consumer plaything or a service of a giant telco, simply because we’re losing the ability to communicate with the market.  Jobs, even on medical leave, has more star power than anyone else in tech, maybe more than everyone else combined.

 

Take a Lesson From Cable/Retail

The Internet has proved disruptive to a lot of traditional business models, and possibly none more than the retail model.  Recent numbers from Forrester say that online retail sales will hit nearly $280 billion by 2015, and I think they could easily top $350 billion.  While this is small potatoes in absolute terms, the online model has also changed the pricing and margins of retailers.  Anything that’s expensive-ish and has a model number is going to be priced online even if the consumer comes into the store to see it first.  That changes the whole way that buying behavior has to be manipulated, or it turns retail storefronts into involuntary live catalogs for people who browse in person and then buy on Amazon.

The role of the Internet in buying stuff combines with social media to generate about 80% of all online time spent by consumers, with video making up nearly all that’s left.  People do little or nothing, comparatively speaking, to further their education, manage their finances, improve their health, or any of the other things that broadband proponents say are the reasons why everyone needs better/faster/some broadband.  With the exception of video (which, remember, is only about 20% of online time) none of these applications are bandwidth-intensive.  Mobile video is a bandwidth hog in mobile terms, but a mobile video stream is small potatoes in the spectrum of wireline broadband, where nearly everyone who has broadband at all can get at least 6 Mbps.
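To put numbers on the “small potatoes” claim (the mobile stream rate is my own assumption for a 3G-era stream, not a measured figure):

    mobile_stream_mbps = 0.5   # assumed rate of a typical mobile video stream
    wireline_mbps = 6.0        # the entry-level wireline rate cited above
    print(mobile_stream_mbps / wireline_mbps)  # ~0.08: under a tenth of even a baseline pipe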

The question of how much broadband you need has implications beyond public policy.  Vendors would love to visualize the future as one where video-greedy consumers demand more and more capacity while network operators draw on somehow-previously-concealed sources of funding to pay for the stuff.  The fact is that’s not going to happen, of course.  Recently the cable industry offered us some proof of that.  If you cull through the earnings calls and financial reports of cable providers, you find that they, like the telcos, are focused on content monetization and not on carrying video traffic.  The difference is significant; monetization means figuring out how to make money on content, where traffic-carrying is simply providing fatter pipes.  For cable, the difference is whether they utilize DOCSIS 3.0 to provide some new video services or to expand broadband capacity, and they’re voting to do the former.
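Here’s a rough sketch of the DOCSIS 3.0 trade-off the cable operators are making.  The per-channel figure is the commonly cited usable rate for a 256-QAM downstream channel in North America; the bond size and stream rate are my own illustrative assumptions:

    channel_mbps = 38.0              # usable payload of one 256-QAM downstream channel
    bonded = 4 * channel_mbps        # an assumed 4-channel bond: ~152 Mbps
    hd_stream_mbps = 8.0             # assumed MPEG-4 HD stream rate
    print(bonded / hd_stream_mbps)   # ~19 HD streams for new video services...
    print(bonded)                    # ...or ~152 Mbps of shared broadband capacity
    # Same spectrum either way; the vote is over which use monetizes better.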

The fact that all kinds of network operators are looking for monetization beyond bit-moving may explain why the big IT vendors like IBM are working to be seen more as a cloud partner to these players than as a cloud service competitor.  Microsoft alone of the big vendors seems focused on going its own way with its Azure cloud offering, and that’s likely because Microsoft is focused on competition from Google.  I’ve been hearing rumors that Oracle has decided against a hosted cloud offering and decided instead to focus on service provider cloud opportunities.

The complexity of the cloud market is shown in the latest IDC numbers, which give IBM the leading spot again.  What’s interesting is that IBM outgrew the x86 commodity server space, in large part because of its strength in mainframe and midrange non-x86 products.  In fact, growth in that area doubled the server industry average.  What this shows is that enterprises were telling me the truth when they said that there were really two models of IT evolution: virtualization-centric, based on x86 and Linux, and service-centric, largely based on other OS platforms and using SOA for work distribution.  IBM’s strength could be its ability to harmonize these two worlds, though so far that’s not how they’re positioning themselves.  But then the media doesn’t recognize that the two groupings exist, so what can we expect?

In economic news, Fed chairman Bernanke said that he expected there would be a small but not worrisome rise in inflation, and it does seem as though the basic strategies for economic recovery are working.  Wall Street is also showing it’s less concerned about a major problem with the oil supply, though obviously oil prices are up on the risk so far.  It’s important to note that oil, like nearly every valuable commodity, is traded.  That means that speculative buying of oil contracts drives up prices even though none of those speculators actually intends to take delivery on oil, and thus there’s no actual impact on supply or demand.  They’re betting on future real price increases at the well-head or on more demand, and we pay for the profits on their bets.  It’s an example of how financial markets influence the real world, and sadly there’s more of that kind of influence today than there is of cases where the real world influences financial markets.

Monday, Monday

The weekend brought more disorder to the Middle East, particularly Libya, but while the initial turmoil there had knocked stock prices down a bit, the decline has not been alarming and it was reversed on Friday.  Today futures and the European exchanges both suggest an up market again.  Even cooler-than-expected growth in US consumer spending isn’t hurting, and some suggest that Buffett’s bullish letter to investors may be the cause.

In the tech world, Cisco’s stock-price woes continue; the company has been largely flat since its earnings call while competitors Alcatel-Lucent and Juniper have been on a bit of a tear.  Fundamentals aren’t much of a motivation for stock movement these days, but it is clear that investors in the main believe that the latter two stocks have a potential for an upside and Cisco doesn’t have that same potential.  Objectively, I think that’s all true.  Cisco needs to work through some very real product issues as well as redefine its internal sales-driven (as opposed to “value-driven”) culture.  Alcatel-Lucent and Juniper both need to learn how to sing better, but both have made what could be very significant product announcements in the last couple weeks.

OK, Cisco is in the dog house for now, but I still have to point out in fairness that the company could largely eliminate its problems in a stroke with some lightweight M&A and some heavyweight re-positioning and strategizing.  The service layer, which means the cloud-to-network binding for both enterprises and service providers, is the sweet spot of the future market.  Own it and you can hope to pull through your solutions en masse.  It’s still open territory.

There may be cloud architecture competition emerging from new quarters.  F5 today announced it had worked with IBM to develop a reference architecture for the cloud.  The architecture clearly covers the creation of private clouds based on virtualization, and F5 promises that it will be extended to envelop public cloud components to hybridize them with private clouds.  We see no reason why the architecture (which looks much like Eucalyptus, and that’s no accident according to F5) can’t be used for public cloud applications, including service provider clouds.  IBM has specific aspirations in the service provider space, and the reference architecture may be a step in helping prospective SP clients build cloud services that can then easily hybridize with enterprises.  It seems to us that the approach would also support SOA applications, but that’s not a specific part of the release.

Staying in the cloud, Verizon is planning to offer UCaaS, hoping to capture a share of business buyers who want unified communications and collaboration that includes users on mobile devices.  Generally, businesses embrace the notion of service-based pricing as opposed to building their own solutions because they like the cash flow better and because they may fear making a capital investment in a space that’s undergoing major change.  However, carriers have for years lost market share with hosted communications options relating to voice services, and it seems to me that this offering would be all too easy for OTT giants like Google to counter if they feel like getting into the space.

Moving to consumer social networks, JP Morgan says it’s going to take a stake in Twitter, and speculation is that it will happen through buying out some existing investors.  The deal is said to value Twitter at over $4 billion, and it’s the sort of thing that already has the SEC concerned that private equity is circumventing the protections created by public corporation status while keeping the companies private in name.  I’ve got major reservations about any strategies that have the effect of empowering the “professional” investors and not the general public, which this would surely seem to do.  Further, I wonder whether we’re not creating another opportunity for bubbles by creating a whole new set of exit strategies: companies don’t sell out, they don’t go public, but they sell pieces off privately to pay off early investors.  How do we avoid collapse when eventually the public has to bail out the last of the “private” investors like JP Morgan?

The murky regulatory area isn’t getting less murky.  Republicans have recently signaled that they’re not prepared to compromise on their rejection of any sort of net neutrality principles. While that doesn’t mean there won’t be any (Democrats can block any attempts to un-fund or weaken the FCC’s position here), it does mean that if the courts throw out the FCC’s latest order (which I think is likely) then there’s no option to create comparable rules through legislation.  That would mean market forces would decide what happens, always a risk but perhaps not as great a risk as bad explicit policy.  The current FCC order isn’t bad in my view, but I think there’s less than a 30% chance it will stand.

Another semi-regulatory issue is raised by Comcast’s announcement it would not be offering paid streaming video service to non-subscribers, something at least one satellite TV rival says it’s preparing to do.  That may raise an issue with regulators who think that Comcast must make at least NBCU content available to competitors on the same basis as they offer it internally.  Does not offering separate streaming video satisfy that condition?  Comcast may have another reason to appeal the FCC’s order—which is already a target of appeal by other players.  The Comcast/Level 3 dispute may even join the parade here!

Tech, overall, is in a bit of a state of flux, which may be why it’s off today when the Dow is up.  Good economic conditions overall don’t guarantee tech company success these days, and since bad economic conditions guarantee failure in most tech sectors, the industry may be headed for some whipsawing as investors try to price out the current muddy trends.

Huawei’s Open Letter versus US Innovation

Image counts, in every way and at every level of purchase decision-making, and Huawei knows that better than most.  From the first, it’s been tarred with its association with China at multiple levels: first as a poster child for the “cheap Asian economics” story, but also often behind the scenes as a sinister agent of communism.  The company’s failure to complete the intellectual property acquisition of 3Leaf was apparently the last straw, and Huawei issued an unprecedented open letter to US officials and in parallel to the US market.  “We’re not your enemy” was the sense of the letter, and while there’s no question the message is self-serving and at the economic level inaccurate, it’s true at the political level in my view.

With China, there seems to be a combination of cultural and economic xenophobia that taints our perception of the country.  Huawei knows that, and they’re asking us to re-examine our motives.  Personally, I think everyone needs to go through that exercise, but whether you do is your own affair.  Here, I want to focus on the industry import of the move.

Huawei needs to succeed in the US market, for sure.  US (and other major national) vendors would like them to fail, because as a price leader Huawei is destructive to their margins in the near term and their market share in the longer term.  The open letter is a signal that Huawei is going to address the points of resistance to its success, and that it intends to make a more aggressive move in the US.  That has major consequences for the market, because the US is a proving ground for so many networking innovations.

The first is that Huawei feels that it can compete here, even in a market that’s more driven by trends and coolness and where innovation counts most.  Why?  Either because Huawei thinks it’s innovative enough to play with the big guys, or because it thinks we’re slipping into commoditization—if you demand polar extremes.  I think the truth lies between those poles, and Huawei knows it.  Networking as a dynamic industry has lost its way for sure; we’re not driving the bus now in services and infrastructure as much as we are driving it in self-indulgence at the consumer level.  But we’re still the proving ground.  Huawei, I think, understands our fundamental shift of focus toward validating the demand side without any consideration of the supply side.  They see an opportunity to offer a combination of a little more “transport innovation” and a lot better pricing.  They intend to exploit it.

That makes Huawei’s open letter a kind of counterpoint to the recent lightRadio announcement by Alcatel-Lucent and the QFabric announcement by Juniper.  Huawei could have offered something in both these areas; they’re engaged in the markets.  Other vendors took special steps to create special values.  Huawei is betting those vendors will fail.  Historically, they’re probably right, because network equipment vendors have failed so tragically at articulating their value propositions that they might almost as well not have bothered to innovate.  Our blatant consumerism hasn’t helped; neither the lightRadio nor the QFabric announcements received any truly insightful coverage.  Yes, the vendors needed to position better, but you can’t reduce all of human history to a 350-word, hastily composed, “get-me-online-first” blog entry.  You can, though, reduce a market to one, or diminish one through it.  The critical media intermediary between seller and buyer is pretty much gone.  Huawei thinks feature validation will fail when the features themselves aren’t understood, and they’re right.

The data center transformation is an IT transformation and not a networking one.  Cisco does gain credibility as a driver of the transformation because it fields server products, and we saw that in our strategic credibility survey results.  But if we’re going to focus on the data center network, then we have to focus on Cisco’s ability to transform “credibility” into signed orders.  Part of that is an ability to pull through networking with its UCS successes, and there I am not seeing the kind of traction Cisco needs to have.  The good news for Cisco is that HP is fumbling its own opportunity, and that our survey agrees with UBS’ in saying that Oracle is still an also-ran here.  The bad news for Cisco is that QFabric could really transform not only Juniper’s position but also the issues driving the market, and that Oracle is certainly not going to finish 2011 in the same strategic data center doldrums it started the year in.

If Huawei’s right, then even a success in the data center is going to be less than a full success for Cisco because it will come at the expense of margins.  If they’re wrong, it’s looking like somebody other than market leader Cisco will have to prove it.

The Good, the Bad

It’s not uncommon to find a combination of good and bad news in the tech space, and we’ve got that today.  For example, on the bad side, HP’s numbers.  On the good, Juniper’s new QFabric.

HP announced disappointing results, a contrast not only to Street expectations but to competitor Dell’s recent numbers.  The problem, says the company, is softness in the consumer PC sector and the fact that HP doesn’t sell much to businesses relative to its total PC sales.  The real issue, I think, is the company’s management agony and a nearly total loss of focus on business.

I was particularly unmoved by the CEO’s promise to get something going with cloud computing.  Where has he been, anyway?  This is no time to start laying out your cloud strategy; competitors have been doing that for over a year.  HP’s decision to buy Palm is another source of future challenges; the company isn’t sustaining momentum in its core markets, so how does it expect to take on Android and Apple in the tablet and smartphone space?  This is a company that’s been on a roll for years, and it’s now at serious risk of losing credibility and market share.  They have perhaps two quarters to turn things around, after which they risk permanent damage.

The cloud also figures in the Juniper announcement.  The company has been talking about its “Stratus” project for several years, and it has finally started delivering: the new data center fabric is officially called QFabric, and its nodal element is the QFX3500.  The details and the roadmap are impressive, and it’s very possible that Juniper has something here that will change the game, change some minds, and produce significant competitor angst.  We’ll cover this in detail in our March Netwatcher, but let me summarize here.

The QFabric architecture consists of three elements, the primary of which are the nodes.  These are essentially distributed line cards: in a basic entry configuration they link directly to each other, and in a full QFabric configuration they link back to the interconnect box.  The links are made with multi-homed fiber, and the result is a semi-mesh of the nodes with a large cross-sectional bandwidth.  The nodes learn the configuration and connectivity and use this to propagate a forwarding table at both Layer 2 and Layer 3, and that table drives the full forwarding-path decision at ingress, so no matter what route is taken from source to destination within the mesh, no further forwarding decisions are needed.  The configuration has a current maximum capacity of 40 Tbps, and it’s fully non-blocking and lossless, with microsecond-level delay and negligible jitter.
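Here’s a minimal Python sketch of that single-forwarding-decision idea, using a toy topology of my own invention (it illustrates the concept, not Juniper’s actual algorithm): each ingress node precomputes the complete path to every destination, so transit elements simply execute the path rather than making hop-by-hop lookups.

    from collections import deque

    def shortest_paths(adj, src):
        # BFS over the fabric topology: returns dst -> complete hop list.
        paths, q = {src: [src]}, deque([src])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in paths:
                    paths[v] = paths[u] + [v]
                    q.append(v)
        return paths

    # A toy "semi-mesh": four edge nodes, each multi-homed to two interconnects.
    adj = {
        "node1": ["ic1", "ic2"], "node2": ["ic1", "ic2"],
        "node3": ["ic1", "ic2"], "node4": ["ic1", "ic2"],
        "ic1": ["node1", "node2", "node3", "node4"],
        "ic2": ["node1", "node2", "node3", "node4"],
    }

    # Every node precomputes full paths; the forwarding decision happens once.
    ingress_table = {n: shortest_paths(adj, n) for n in adj}
    print(ingress_table["node1"]["node4"])  # ['node1', 'ic1', 'node4']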

The QFabric can be partitioned into virtual networks, and it can host services that are created by attaching engines that perform the service processing.  Security is an obvious example of a service.  Services are created by routing data paths through the appropriate engine(s) on the way to the destination.  A director device presents a black-box virtual device abstraction to the management plane and to the outside world, so the structure is opaque and both opex and configuration complexity are reduced.
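Extending the same toy model from the sketch above (again, my own illustration, not Juniper’s implementation), attaching a service is just a matter of stitching the precomputed path through the engine on the way to the destination:

    # A hypothetical security engine hung off one interconnect.
    adj["engine1"] = ["ic1"]
    adj["ic1"].append("engine1")

    def chained_path(src, dst, engines):
        # Detour through each service engine, in order, before the destination.
        hops, here = [src], src
        for waypoint in list(engines) + [dst]:
            hops += shortest_paths(adj, here)[waypoint][1:]
            here = waypoint
        return hops

    print(chained_path("node1", "node4", ["engine1"]))
    # ['node1', 'ic1', 'engine1', 'ic1', 'node4']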

While it’s not possible to sustain microsecond-scale latency over WAN distances, you can connect QFabric paths with a decent-performing IP/MPLS connection and thus extend the fabric beyond a single data center.  This means that a cloud computing offering (either to support a service, a private cloud, or an operator IT application/feature hosting platform) could in theory be created and maintained as a single QFabric.  The whole process is operationally linked vertically to the Junos Space cloud feature and management platform, and you can also use Space to create applications and service features that become services of a QFabric cloud.

What’s interesting about this, beyond the obvious in-data-center benefits of cost and footprint, is the notion that QFabric might become the architecture for private, public, and hybrid clouds.  So far, nobody has really articulated how you’d build a service provider cloud, for example, and with the WAN extensions QFabric could be just that.  The capability could generate some really valuable cloud, content, and mobile engagement for Juniper, and thus could pre-empt plans by competitors like Cisco to get a lead in defining how a provider cloud would look.  Since QFabric is also likely to be a compelling migration option for companies with two years or less of undepreciated data center switch assets, and at least a consideration for companies with three or even four years remaining, it could boost Juniper’s market share and credibility in the critical data center networking space.  Which, obviously, neither Cisco nor HP would like.  Both these arch-rivals have their own quarterly performance issues to work through, and Cisco named a COO (Gary Moore) to help it streamline its operations processes.  There may be a window for Juniper to put the hurt on both companies while they’re distracted.