Juniper, Clouds, and Broadband Pricing

Juniper has announced a series of new MX router models and a more integrated positioning across its branch routing, service management, and data center strategies.  Called the “Universal WAN”, the approach is designed both to create a more deterministic and cloud-ready network that supports enterprises’ evolving IT plans and to leverage Juniper’s QFabric and Junos Space assets more broadly.  I think this is a smart move for Juniper, which like most network vendors has a tendency to get bogged down in isolated product details and lose the big picture.

The Amazon outage proved some important points about public cloud and SaaS migration, too.  The cloud is not inherently more reliable than a data center.  In fact, because it involves distributed and thus signaling-coordinated features like the Elastic Block Store that “broke” in the recent outage, it may be somewhat less reliable.  Certainly it’s less proven, and while users who elected to deploy across multiple Availability Zones to increase their resiliency were not impacted, those additional reliability features increase cloud costs.  That’s a double-barreled hit at the value proposition.  I’m not saying that clouds are bad; in fact, they’ll become universal.  What I am saying is that they’re not mystical, not some whole new architecture that somehow delivers everything we expect at a price that’s universally compelling.
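To make the cost side of that trade-off concrete, here is a minimal sketch of the arithmetic.  The hourly rate and hours-per-month figure are purely hypothetical assumptions, not Amazon pricing or numbers from this post; the point is only that a warm standby in a second Availability Zone roughly doubles the compute bill before any resiliency benefit is counted.

```python
# Illustrative arithmetic only: the hourly rate and hours-per-month figure
# are hypothetical assumptions, not actual cloud-provider pricing.
HOURLY_RATE = 0.10      # assumed cost per instance-hour, in dollars
HOURS_PER_MONTH = 730   # average hours in a month

def monthly_compute_cost(instances: int, rate: float = HOURLY_RATE) -> float:
    """Monthly cost of running a fixed number of always-on instances."""
    return instances * rate * HOURS_PER_MONTH

single_az = monthly_compute_cost(instances=1)   # one instance, one zone
multi_az = monthly_compute_cost(instances=2)    # add a warm standby in a second zone

print(f"Single-AZ: ${single_az:.2f}/month")
print(f"Multi-AZ standby: ${multi_az:.2f}/month ({multi_az / single_az:.1f}x)")
```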

Speaking of myths, the myth of unlimited bandwidth took a hit when AT&T announced usage caps for both DSL and U-verse customers (150GB and 250GB per month, respectively).  The notion of usage caps isn’t new even in wireline (cable MSOs have imposed caps before), and the cap levels aren’t onerous, at least initially.  It is clear, though, that usage pricing is on the rise and that the runaway growth in bandwidth consumption, propelled by the presumption of zero marginal cost, is coming to an end.
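For a rough sense of what those caps mean in practice, here is a back-of-the-envelope sketch.  The GB-per-hour figures are my own assumptions about typical circa-2011 streaming bitrates, not numbers from AT&T.

```python
# Rough translation of monthly usage caps into hours of streaming video.
# The GB-per-hour figures are assumed 2011-era bitrates, not AT&T data.
CAPS_GB = {"DSL": 150, "U-verse": 250}
GB_PER_HOUR = {"standard definition": 1.5, "high definition": 3.0}

for service, cap in CAPS_GB.items():
    for quality, per_hour in GB_PER_HOUR.items():
        hours = cap / per_hour
        print(f"{service} ({cap} GB cap): ~{hours:.0f} hours/month of {quality} streaming")
```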

Comcast, meanwhile, is jumping out to create its own app/developer ecosystem, showing yet again that network equipment players have been too sleepy at pushing their own strategies for service-layer development.  Comcast’s approach is somewhat like Google’s or Yahoo’s in that it’s based on RESTful interfaces and exchanges of XML payloads for requests and responses.  It demonstrates, I think, that operators of all types are more interested in vendor help developing the underlying elements of the service layer (the assets they’ll expose for resale or use) than in getting vendors to define the wholesale/retail exposure APIs.  For one thing, the latter depend heavily on each operator’s business priorities; for another, the REST/XML model isn’t complicated enough to require much outside help.  As I’ve noted many times, it takes about a man-week to prove out a new API of that type starting from scratch, and perhaps a day to write to one.
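To illustrate why writing to that kind of interface is roughly a day’s work, here is a minimal sketch of a REST-style request/response exchange with XML payloads.  The endpoint URL, element names, and listing schema are all hypothetical examples, not Comcast’s actual API.

```python
# Minimal sketch of calling a REST-style API that exchanges XML payloads.
# The endpoint URL and XML element names are hypothetical examples,
# not taken from Comcast's actual developer program.
import urllib.request
import xml.etree.ElementTree as ET

ENDPOINT = "https://api.example.com/v1/listings"  # hypothetical endpoint

def build_request(zip_code: str, channel: str) -> bytes:
    """Serialize a simple request document as XML."""
    root = ET.Element("listingRequest")
    ET.SubElement(root, "zipCode").text = zip_code
    ET.SubElement(root, "channel").text = channel
    return ET.tostring(root, encoding="utf-8", xml_declaration=True)

def fetch_listings(zip_code: str, channel: str) -> list[dict]:
    """POST the XML request and parse the XML response into plain dicts."""
    req = urllib.request.Request(
        ENDPOINT,
        data=build_request(zip_code, channel),
        headers={"Content-Type": "application/xml", "Accept": "application/xml"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        tree = ET.fromstring(resp.read())
    return [
        {"title": item.findtext("title"), "start": item.findtext("startTime")}
        for item in tree.findall(".//listing")
    ]

if __name__ == "__main__":
    for listing in fetch_listings("19103", "local-news"):
        print(listing["start"], listing["title"])
```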

Comcast also noted that they wanted the distribution network for video to be “as dumb as possible”, which suggests that they are creating a service layer plan rather than a network plan.  This is the sort of value migration that equipment vendors have been faced with for years and have done little to prevent.  It doesn’t mean that no network assets can be leveraged in services, of course, but it does show that it’s going to be harder to convince operators that the network vendors have the best approach to services.  Every operator that throws in the towel and admits to dumbing down the network is voting for bit commoditization as an enduring reality.
