We Try to Position Juniper’s PTX

Juniper made a second major announcement in two weeks, this time its PTX MPLS-optical supercore switch.  The product’s roots probably lie in early interest (“early” meaning the middle of the last decade) by Verizon in a new core architecture for IP networks that would eliminate the transit routing that was common in hierarchical IP cores.  Since then, everyone from startups (remember Corvis?) to modern players like Alcatel-Lucent, Ciena, and Cisco has been announcing some form of optical-ized core.  What makes Juniper different?

Good question, and it’s not easy to answer from the announcement, but I’d say that the differentiator is the chipset.  Junos Express appears to be the same basic chip used in the recently announced QFabric data center switch.  Thus, you could say that the PTX is based on a low-latency MPLS switching architecture that’s more distributed than QFabric.  Given what we perceive as a chipset link between the products, I’m creating a term to describe this: the Express Domain.  An “Express Domain” is a network domain that’s built using devices based on the Express chipset.  A PTX network is an Express Domain in the WAN and QFabric is an Express Domain within a data center.

If you look at the PTX that way, then what Juniper is doing is creating an Express Domain linked by DWDM, likely running (at least initially) in parallel with other lambdas that still carry legacy TDM traffic.  It’s less about having an optical strategy than about creating a WAN-scale fabric with many of the deterministic features of QFabric.  Over time, operators would find their TDM traffic ebbing away and would gradually migrate the residual to TDM-over-packet form, which would then make the core entirely an Express Domain.  The migration would be facilitated by the lower latency within an Express Domain (because packet handling can be deterministic, as it is with QFabric) and by the lower jitter, which makes TDM-over-packet technology easier to make work.  Overall performance of the core would also improve.  In short, we’d have something really good for none of the reasons that have been covered so far in the media.
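To put the jitter point in concrete terms, here’s a minimal sketch (my own illustration, not anything Juniper has published) of how packet delay variation drives the play-out buffer a TDM-over-packet gateway has to add.  The E1 rate, payload size, and PDV values are assumptions chosen for the example; the point is simply that a tighter core means a smaller buffer and less added delay.

    # Illustrative sketch: how packet delay variation (PDV) drives the jitter-buffer
    # delay a TDM-over-packet (circuit emulation) gateway must add.  Lower jitter in
    # the core means a smaller buffer and less added latency.  All values are assumed.

    E1_RATE_BPS = 2_048_000          # E1 circuit rate
    PAYLOAD_BYTES = 256              # assumed TDM payload per packet (1 ms of E1)

    def packetization_delay_ms(payload_bytes, rate_bps):
        # Time needed to fill one packet with TDM data.
        return payload_bytes * 8 / rate_bps * 1000

    def jitter_buffer_delay_ms(peak_to_peak_pdv_ms, margin=2.0):
        # Play-out buffer delay, sized as a multiple of observed PDV.
        return peak_to_peak_pdv_ms * margin

    for pdv_ms in (0.5, 2.0, 8.0):   # hypothetical PDV for tight vs. loose cores
        total = packetization_delay_ms(PAYLOAD_BYTES, E1_RATE_BPS) + \
                jitter_buffer_delay_ms(pdv_ms)
        print(f"PDV {pdv_ms:4.1f} ms -> added one-way delay of about {total:5.1f} ms")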

This (if my interpretation is right) is a smart play for Juniper: create an MPLS-based virtual domain that can be mapped to anything from a global core to a data center.  Recall that I noted in the QFabric announcement that Juniper had indicated that QFabrics could be interconnected via IP/MPLS.  Clearly they could be connected with PTXs, and that would create a supercloud and not just a supercore.  What would make it truly revolutionary, of course, would be a detailed articulation of cloud-hosting capability.  I think that capability exists, but it’s not showing up at the right level of detail in the positioning so far.  In any event, if you add PTX to QFabric in just the right way, you have a cloud—probably the best cloud you can build in today’s market.

If Juniper exploits the Express Domain concept, then the PTX and QFabric combine to create something that’s top-line valuable to the service providers.  Yes, there are benefits to convergence on packet optical core networks, but those benefits are based on cost alone, and cost management isn’t the major focus of operators right now—monetization is.  You can’t drive down transport cost per bit enough for it to be a compelling benefit in overall service pricing, nor enough to make low-level services like broadband Internet sufficiently profitable.  Furthermore, achieving significant capex savings for the operator means lower total sales for the vendor.  That’s the old “cost-management-vanishes-to-a-point” story.  But you can do things at the service layer that were never possible before, drive up the top line, and sell more gear overall rather than less.  Or so I think.  We’ll be asking for clarification on these points, and in our March Netwatcher we’ll report on what we find.

iPad 2 and Beyond

The big news today is Apple’s new iPad announcement, an event whose usual Apple drama was upstaged by a surprise appearance from Steve Jobs.  The essence of the announcement was familiar: iPads are making us smarter, healthier, richer, better looking, and so forth, and that’s just from the first version.  Now look what’s going to happen!

What is going to happen?  Well, 2011 is the “Year of the Copycats” according to Jobs, but Apple isn’t resting on its laurels.  The iPad 2 is based on a new dual-core chip that’s twice as fast, with nine-times-faster graphics, front- and rear-facing video cameras, a built-in gyro, and a body that’s 33% thinner (thinner than an iPhone 4)—you get the picture.  The new model will output HDMI video at up to 1080p, which makes it a logical companion to HDTVs and probably presages more Apple work there.  Network-wise, it’s not breaking any ground yet—still 3G and WiFi, no LTE or WiMAX.  Pricing is the same, starting at about five hundred bucks.  Overall, it’s a major upgrade in performance and a modest improvement in features—the improvement being the dual cameras.

The new iPad 2 will certainly make things harder for the Android guys, particularly those who, like Motorola, have just announced their own tablets.  The current Android lot are at best just about equal to the iPad, though most are significantly heavier and thicker, and the iPad 2 trumps that form factor.  There’s a lot of clever engineering in the gadget, down to magnetic catches on the cover that are sensed by the device and used to trigger a power-up when the cover is removed.  But you really don’t expect to see a cover demonstrated on video at a launch event.  Apple is rising to the challenge of competition, but it’s also showing that even its own dramatically innovative culture can’t create a revolution every year.  The biggest bison can still be dragged down by a large pack of even little wolves.

But in the meantime, we do have a clear trend to follow.  Appliances are going to get lighter and more convenient but also more powerful, with better and better video.  That’s going to make enterprises look even harder at using tablets for worker empowerment, and it’s going to make tablets a more and more attractive way to consume video, making multi-screen strategies all the more important.  And most of all, we’re seeing yet again that the market is in the hands of the consumer device vendors.  Nobody else is making any real progress at being exciting.  Without excitement there’s no engagement with the media.  Without media engagement there’s no ability to market.

In the mobile space, Verizon has decided to eliminate its unlimited-usage iPhone plan in favor of usage pricing, and if anyone thinks that usage pricing isn’t going to be universal for mobile broadband now and wireline broadband soon, they’re delusional.  Already the media is lamenting the death of the “bandwidth fairy” and beating its breast about the impact this will have on consumers and on the Internet.  Hey, I want a free iPad, and a nice Audi A8 for twenty bucks, and I could really use a Nikon D3 with a 70-200 VR lens (just ship it; no need to send a note to see if somebody already sent one, because I can use as many as you provide!).  The market’s not made up of wants but of exchanges of goods or services for dollars.  There has to be a willingness to exchange.

AT&T, which has been into usage pricing for mobile broadband for some time, is also becoming a major carrier proponent of cloud services, and announcements are expected from other providers through the spring.  Cloud computing is a perfect space for network operators because they’re accustomed to running services at low ROI, and that means better pricing and faster market uptake.  In fact, it’s a testament to the problems of revenue per bit on broadband access and Internet services that cloud computing is considered a profit opportunity.  Cloudsourcing applications have to be significantly (22-35%) cheaper than in-house IT to be credible.  What makes network operators so interested is that their own cloud infrastructure (for OSS/BSS and feature/content hosting) will create formidable economies of scale if it’s done right.  That makes the operator a cost leader in a cost-driven market.
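As a back-of-the-envelope illustration (my numbers, not AT&T’s or anyone else’s), here’s what that cost-leadership argument looks like: given the 22-35% discount a cloud offer needs to be credible, the sketch below works out how low a provider’s own unit cost has to be at a couple of assumed gross-margin targets.  The indexed in-house cost and the margin figures are assumptions for the example.

    # Back-of-the-envelope sketch of the cost-leadership argument.  The in-house
    # cost is indexed to 100 and the margin targets are assumptions; the point is
    # how much of a scale-driven cost advantage the credibility discount implies.

    IN_HOUSE_COST = 100.0   # enterprise's indexed cost per unit of IT

    def max_provider_cost(discount, gross_margin):
        # Highest unit cost the provider can carry and still offer the target
        # discount while hitting the target gross margin.
        price = IN_HOUSE_COST * (1 - discount)
        return price * (1 - gross_margin)

    for discount in (0.22, 0.35):          # credibility range cited above
        for margin in (0.15, 0.30):        # assumed provider margin targets
            cost = max_provider_cost(discount, margin)
            print(f"discount {discount:.0%}, margin {margin:.0%}: "
                  f"provider unit cost must be at or below {cost:.0f}")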

You have to wonder whether everything technical is going to become either a consumer plaything or a service of a giant telco, simply because we’re losing the ability to communicate with the market.  Jobs, even on medical leave, has more star power than anyone else in tech, maybe more than everyone else combined.

 

Take a Lesson From Cable/Retail

The Internet has proved disruptive to a lot of traditional business models, and possibly none more than the retail model.  Recent numbers from Forrester say that online retail sales will hit nearly $280 billion by 2015, and I think they could easily top $350 billion.  While this is still small potatoes relative to total retail spending, the online model has also changed the pricing and margins of retailers.  Anything that’s expensive-ish and that has a model number is going to be priced online even if the consumer comes into the store to see it first.  That changes the whole way that buying behavior has to be manipulated, or it turns retail storefronts into involuntary live catalogs for shoppers who will ultimately buy on Amazon.

The role of the Internet in buying stuff combines with social media to generate about 80% of all online time spent by consumers, with video making up nearly all of what’s left.  People do little or nothing, comparatively speaking, to further their education, manage their finances, improve their health, or any of the other things that broadband proponents say are the reasons why everyone needs better/faster/some broadband.  With the exception of video (which, remember, is only about 20% of online time) none of these applications is bandwidth-intensive.  Mobile video is a bandwidth hog in mobile terms, but a mobile video stream is small potatoes in the spectrum of wireline broadband, where nearly everyone who has broadband at all can get at least 6 Mbps.
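To put rough numbers on the “small potatoes” point, the sketch below compares a couple of assumed mobile video encode rates against the 6 Mbps wireline baseline.  The stream rates are my estimates for illustration, not measured figures.

    # Rough scale check: assumed mobile video stream rates versus the 6 Mbps
    # wireline baseline mentioned above.  The encode rates are illustrative
    # estimates, not measurements.

    WIRELINE_BASELINE_MBPS = 6.0

    mobile_streams_mbps = {
        "low-resolution mobile video": 0.4,
        "higher-quality mobile video": 1.0,
    }

    for name, rate in mobile_streams_mbps.items():
        share = rate / WIRELINE_BASELINE_MBPS
        print(f"{name}: {rate:.1f} Mbps, about {share:.0%} of a 6 Mbps connection")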

The question of how much broadband you need has implications beyond public policy.  Vendors would love to visualize the future as one where video-greedy consumers demand more and more capacity while network operators draw on somehow-previously-concealed sources of funding to pay for the stuff.  The fact is that’s not going to happen, of course.  Recently the cable industry offered us some proof of that.  If you comb through the earnings calls and financial reports of cable providers, you find that they, like the telcos, are focused on content monetization and not on carrying video traffic.  The difference is significant; monetization means figuring out how to make money on content, whereas traffic-carrying is simply providing fatter pipes.  For cable, the difference is whether they use DOCSIS 3.0 to provide some new video services or to expand broadband capacity, and they’re voting to do the former.
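To make that trade-off a bit more concrete, here’s a rough sketch using typical North American plant figures as assumptions (roughly 38 Mbps of usable payload per 6 MHz QAM-256 downstream channel, and an assumed four HD streams per channel with MPEG-4 encoding): every channel bonded into DOCSIS 3.0 broadband is a channel that isn’t carrying video services, and vice versa.

    # Sketch of the DOCSIS 3.0 capacity trade-off.  The per-channel payload and
    # HD-streams-per-channel figures are typical assumptions, not data from any
    # specific operator.

    USABLE_MBPS_PER_CHANNEL = 38   # approx. usable payload of a 6 MHz QAM-256 downstream
    HD_STREAMS_PER_CHANNEL = 4     # assumed MPEG-4 HD streams per 6 MHz channel

    def broadband_capacity_mbps(bonded_channels):
        # Shared downstream capacity if the channels are bonded for broadband.
        return bonded_channels * USABLE_MBPS_PER_CHANNEL

    def hd_stream_capacity(channels_for_video):
        # Number of HD streams the same channels could carry as video services.
        return channels_for_video * HD_STREAMS_PER_CHANNEL

    for channels in (4, 8):
        print(f"{channels} downstream channels: about {broadband_capacity_mbps(channels)} Mbps "
              f"of shared broadband, or about {hd_stream_capacity(channels)} HD streams as video")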

The fact that all kinds of network operators are looking for monetization beyond bit moving may explain why the big IT vendors like IBM are working to be seen more as cloud partners to these players than as cloud service competitors.  Microsoft alone of the big vendors seems focused on going its own way with its Azure cloud offering, and that’s likely because Microsoft is focused on competition from Google.  I’ve been hearing rumors that Oracle has decided against a hosted cloud offering and decided instead to focus on service provider cloud opportunities.

The complexity of the cloud market is shown in the latest IDC numbers, which give IBM the leading spot again.  What’s interesting is that IBM outgrew the commodity x86 server space, in large part because of its strength in mainframe and midrange non-x86 products.  In fact, growth in that area was double the server industry average.  What this shows is that enterprises were telling me the truth when they said that there were really two models of IT evolution: virtualization-centric, based on x86 and Linux, and service-centric, largely based on other OS platforms and using SOA for work distribution.  IBM’s strength could be its ability to harmonize these two worlds, though so far that’s not how it’s positioning itself.  But then the media doesn’t grasp that the two groupings even exist, so what can we expect?

In economic news, Fed Chairman Bernanke said that he expected there would be a small but not worrisome rise in inflation, and it does seem as though the basic strategies for economic recovery are working.  Wall Street is also showing it’s less concerned about a major problem with the oil supply, though obviously oil prices are up on the risk so far.  It’s important to note that oil, like nearly every valuable commodity, is traded.  That means that speculative buying of oil contracts drives up prices even though none of those speculators actually intends to take delivery on oil, and thus there’s no actual impact on supply or demand.  They’re betting on future real price increases at the wellhead or on more demand, and we pay for the profits on their bets.  It’s an example of how financial markets influence the real world, and sadly there’s more of that kind of influence today than there are cases where the real world influences the financial markets.