TV: Everywhere, Network-Where, or Nowhere?

Amazon has cut a deal with Discovery to stream its programming, and the announcement has spawned a serious question about the future of TV in general, and of TV Everywhere in particular.  Like just about everything else in video, this is complicated.

Let’s start off with some data.  The largest segment of viewing that flees standard TV (including standard, integrated VoD) is the group that’s disgusted with what’s on and jumps onto the Internet in quiet (well, sometimes not so quiet) desperation.  This group is, in terms of population, half again as large as the youth segment.  It streams more than three times as much material as the youth segment does.  Its material is almost totally monetizable (you can sell it or sell commercials into it), where the youth segment consumes a lot of YouTube clips of basketball tricks.  It’s also the fastest-growing segment of the market.

The big question is where the stuff will come from, which is where the Amazon deal fits in.  Adults in the 30-to-45 age range said that they liked eleven TV series that ran more than ten years ago better than anything (other than news or sports) currently being broadcast.  TV series are the most popular fodder for those disgusted with “what’s on”, in part because people tend to have “slots” an hour in length filled with unsatisfactory material but bounded by stuff they’d still like (or be willing) to watch.  Movies are therefore less valuable, which is why Amazon’s deal with Discovery could be significant.

The other dimension of this is production, of course.  TV Everywhere isn’t a viewing strategy, it’s a paying strategy.  The problem with online TV is that the total value of commercials in the material is about 3 to 5% of the value of what could be sold into standard channelized programming delivery.  That’s not enough to fund the production of the show, so the immediate problem with a pure streaming strategy is that even the die-hard Lucy fans will eventually get tired of watching her eat chocolate.  TV Everywhere says, in effect, that you have to pay for the shows in channelized form to get them in streaming form.
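
If you want to see why that 3-to-5% figure is fatal for a streaming-only model, the arithmetic is short.  The dollar figures in the sketch below are purely illustrative assumptions of mine; only the yield ratio comes from the point above.

    # Purely illustrative arithmetic; only the streaming yield ratio reflects the
    # 3-to-5% figure cited above.
    channelized_ad_value_per_hour = 300_000   # assumed ad value of a broadcast hour, $
    streaming_yield_ratio = 0.04              # midpoint of the 3-5% range
    production_cost_per_hour = 1_500_000      # assumed cost of an hour of scripted TV, $

    streaming_ad_value = channelized_ad_value_per_hour * streaming_yield_ratio
    covered = streaming_ad_value / production_cost_per_hour
    print(f"Streaming ad value per hour: ${streaming_ad_value:,.0f}")
    print(f"Share of production cost covered: {covered:.1%}")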

Ah, and here’s the real reason why the Amazon/Discovery thing is important.  It’s not that getting TV Everywhere knocked off means that somehow the Internet is going to replace all of TV; if it did, there wouldn’t be anything to watch.  The thing that could be not only important but critical is that the networks that produce the content are the ones we need to keep in the game.  If they can go directly to the consumer, or to a portal/distributor player like Amazon, and cut a better deal than they can get by working with cable companies, the total amount of content in the mill might actually INCREASE.  So the thing to watch now is whether the networks, starting of course with the cable networks, start to work “deeper” content distribution deals.  Sure, Amazon is a step, but the real news would be Discovery deciding to stream its own stuff for pay.  If that happens, the whole industry changes.

Why Cisco’s NDS Deal Could be Huge

Cisco, no slouch in the world of streaming video to start with, may have made an over-the-top (no pun intended—or maybe a little one!) move by announcing it’s acquiring OTT video software company NDS.  The move, I think, may be one of the biggest Cisco has made in the last decade, and it poses a major threat to Cisco’s competitors.

NDS is interesting because they’ve focused on supporting a practical video ecosystem and not just a romp on the streaming bandwagon.  They have highly modularized, highly orchestrable video components, but they also have one of the broadest video-service portfolios in the market, which means that they can address nearly anything in the way of opportunity.  Their stuff is API-based, easily customized and exposed to developers, integrates across service boundaries…in short, it’s a pretty complete service layer.  Maybe even the most complete now available.

For Cisco, NDS could be a killer idea—literally.  There are three monetization pillars that control strategic engagement with operators: cloud, mobile/behavioral, and content.  Cisco has a leg up in terms of cloud strategy among the network equipment vendors because of UCS.  This week, recall, Cisco upped the UCS ante with new server platforms and a new fabric-connect architecture that leverages its combined position in servers and networks.  Where Cisco has been less successful is in mobile and content.  One win and two losses doesn’t add up to Chambers’ kind of sales math, and content is potentially golden for a company like Cisco because it not only gives them a new monetization engagement path, it gives them another path to mobile victory as well.  Might Cisco be wrapping up all the valuable pieces?

Yeah, they might, but Cisco has a way of snatching strategic defeat out of acquisition victory.  You can corner the opportunity for houses by getting prefab walls, roof, furniture, and so forth, but if you forget to tell anyone you’re building a house you risk generating more yawns than dollars.  Worse, the absence of a strong position for the pieces of Cisco’s winning design will give competitors an opportunity to step in with their own moves.  Every day that Cisco doesn’t sing risks a competitor stacking the market choir’s sheet music to their own advantage.  And competitors are out there.

Alcatel-Lucent has arguably the most complete and mature content strategy, but it’s been a bit hung up on IPTV and hasn’t advanced its positioning to cover more recent trends as quickly as it should have.  They also have the best technology for a “service layer” but they’ve never been able to get the message out.  Ericsson has made radical advances in their cloud networking positioning, taking them from a non-player status to being a contender, and they obviously have strong mobile credentials.  They have, in fact, pretty much all of the key technology for mobile and the cloud.  NSN has a CDN relationship with Verivue and a well-positioned multi-screen mobile video strategy, not to mention their strong mobile RAN position.  Any of these three could make life more difficult for Cisco as it tries to capitalize on its new technology.

Juniper may be the player with the most to lose here, simply because the other players in the market space keep starting new races and Juniper has the smallest number of runners.  The IP and Ethernet switch layer is a tough place to jump off into a service story from.  As a mobile player, Juniper lacks the same things Cisco lacks.  In the cloud space, Juniper has switches but no servers, so Cisco’s strategy of making exciting linkages between the two hurts it, and Juniper hasn’t played “cloud networking” nearly as well as Ericsson has.  Now it’s pressured in content, where its own positioning has been anemic.

But will this Cisco move generate counter-moves at all?  That’s the question I can’t answer, because I don’t know whether Cisco’s acquisition is aimed at building a strategy or just managing sales-generated objections on product coverage.  How many times have we seen Chambers run out and buy a company because a sales person said they needed a feature to close a deal?  How many of those companies have turned out to be albatrosses?

DT and Pew: The What and Why of the “Cloud Network”

DT has become what’s possibly the first major carrier to show us what the network of the future is going to look like, though the description is still a bit cryptic and not always picked up correctly in the stories covering it.  All of the factors I’ve been blogging about for the last week, both the demand-side changes and the technology issues, are focusing on a new vision of the network.  It’s not the PSTN, but it’s really not the Internet either.

For the last four or five years, operators have been aware that the major revenue opportunities were likely to come from the delivery of services/experiences that were created by network-distributed intelligence.  Content relies on CDNs and caching, and the cloud obviously relies on distributed computing.  They recognized that in such networks, the “edge” was simply an agent or service connection point, and the “core” really didn’t exist in a traditional sense.  Instead you had a web of trunk connections whose purpose was to create pooled facilities and not to pass end-to-end traffic.  This is what a “cloud network” would look like.

Despite early carrier recognition of this point (I blogged publicly about this four years ago!) network technology in general and network equipment vendors in particular have been slow to accept what’s happening.  In part, I think this was because they were uncomfortable with any change—sellers want to sell what they have—but in fairness the operators themselves haven’t really articulated the picture.  That’s what DT has finally done.  Just be aware that DT isn’t speaking just for itself; these plans are in the works in every single Tier One on the planet, and they’re going to revolutionize networking.

The Street hasn’t really picked up on this yet either, except to note that capex growth now seems slower than before (yes, because nobody wants to invest in something earning less ROI every month).  I thought there might be a bit of an overhang effect here too; operators could slow their spending awaiting a paradigm.  It will be interesting to see whether the DT comments stimulate more disclosure, more positioning, and more spending growth.  But remember, the spending won’t be on networking-as-usual.  That’s gone now; 2012 is the high-water mark of the old Internet.

The new network is driven in large part by the need to marshal resources ad hoc to fulfill users’ needs in a mobile broadband (“hyperconnected” is the current buzzword, which I hate because I hate all current buzzwords) world.  There have been a slew of studies over literally generations showing that people think more quickly and more incisively when standing, and range more philosophically and broadly when lying down.  Well, mobile people are clearly more in decision mode than couch potatoes, and that’s creating a whole new demand for services, shifting from research to decision support.  It’s that shift that moves us to cloud fulfillment.

It’s also a shift that may be rewiring our brains.  A just-announced Pew study suggests that people weaned on social networking and always-on mobility will, by 2020, be a completely different kind of consumer and a completely different kind of worker.  This change is the thing that I think is going to revolutionize cloud computing at the enterprise level.  We draw workers from the pool of humanity, after all, and as that pool rewires itself to become “hyperconnected” brain-wise, it is also rewiring the optimum productivity flows and the tools that will support its members as workers.  It could bring about profound changes in every application we run in business, not to mention how business computing is done.  There is zero chance that every application running today will migrate to the cloud based on cost; there is a slight chance that every application running today will be replaced by a cloud-hosted component set whose structure, interconnection, and behavior are attuned to the brains of the workers of the future.

New Models for Streaming and the Cloud?

It’s a surprise to some (the media, at least, is pretending surprise) but Intel is looking to get into the streaming video space.  My cynicism here regarding “surprise” is of course related to the fact that Intel has been working on this for some time, and probably has little choice.  It really gets back to portable devices and the fact that they’re built almost entirely without Intel processors.

Entertainment is a big market, as Apple has shown.  The PC space Intel has dominated almost from the first is big too, but it’s clearly plateaued.  Other vendors’ chips own portable devices, and while Intel may hope for an against-all-odds-Gingrich-like victory of Windows 8 over iOS and Android, it’s probably not smart to mortgage the house for that bet.  While TV may not be the ideal space (Intel actually seemed to be exiting it just last year), it’s about the only game in town.

What Intel is apparently working on is something along the “Apple TV” and “Google TV” line, a pseudo-STB that would mediate Internet-served content to a TV and presumably try to integrate the viewing experience with that of traditional channelized television.  I’m hearing that all three of these companies (and a couple more besides) have been working on the features that might augment basic streaming viewing to increase utility.  There are obvious ones; a smart TV could “tell” you that nothing you’re typically willing to watch is coming on in the next couple of hours, but that there are a couple of streaming options available that your current viewing pattern suggests you may be “in the mood” for.  Everyone will do this sort of thing, or sue everyone else on alleged patent infringements.  It’s the new hot stuff that has to be considered, and of course we don’t know what that is yet.  It’s not that Intel or the rest think they’ll make it all up themselves, of course; the TV model will be app-based just like phones and tablets are.  The thing is, anything that requires major processing or connectivity concessions may need hardware support, and you don’t want the platform to fall short of early needs.
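
To make that “nothing you’d watch is on” feature a bit more concrete, here’s a minimal sketch of the logic.  Every data shape, genre tag, and function name in it is invented for illustration; none of these vendors has published an interface for this.

    # Hypothetical sketch: suggest streaming titles when nothing in the near-term
    # schedule matches the viewer's historical pattern.
    schedule = [
        {"title": "Reality Show X", "genre": "reality", "starts_in_min": 30},
        {"title": "Talent Contest Y", "genre": "reality", "starts_in_min": 90},
    ]
    viewing_history = {"drama": 40, "documentary": 25}   # hours watched, by genre
    streaming_catalog = [
        {"title": "Classic Drama Series", "genre": "drama"},
        {"title": "Nature Documentary", "genre": "documentary"},
        {"title": "Basketball Trick Clips", "genre": "sports-clips"},
    ]

    def streaming_suggestions(schedule, history, catalog, horizon_min=120):
        """If nothing acceptable airs within the horizon, rank streaming titles
        by how much of their genre the viewer has historically watched."""
        upcoming = [p for p in schedule if p["starts_in_min"] <= horizon_min]
        if any(p["genre"] in history for p in upcoming):
            return []   # something the viewer would watch is coming on anyway
        matches = [c for c in catalog if c["genre"] in history]
        return sorted(matches, key=lambda c: history[c["genre"]], reverse=True)

    for pick in streaming_suggestions(schedule, viewing_history, streaming_catalog):
        print("You might be in the mood for:", pick["title"])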

Streaming video is becoming interesting, and believe it or not the cause isn’t the portable device as much as crummy programming.  The problem is the advertising battle, in two dimensions.  First, network TV now has to compete with everything online for ad budget.  Advertising is largely a zero-sum game, so whatever is gained by Google or anyone else online is lost by ABC or NBC.  Less money, less programming budget, more “reality TV” that appeals to a very limited demographic, and more viewing segments cut adrift.  In 2012, flight from boredom will be a larger driver of new streaming users than mobile video will be.  And of course streaming gives people a way to avoid TV and commercials, which makes the problem worse.

Video isn’t the only space where changes in demand may bring changes in technology.  In cloud computing, I’m seeing some order emerging from the hype-driven chaos.  For a couple of years now we’ve been hearing that the cloud was going to absorb enterprise IT, something that responsible surveys have continually shown is untrue.  Not to mention that half an hour with a calculator will show you that it can’t be true; the cost of the cloud is actually higher than the cost of internal IT for mainstream computing apps.  Most of what’s driving the cloud today is really a set of hosting-like applications, expansions of the basic web model, and most of it comes from web startups and not enterprises.  Despite what you read, enterprises are still just dipping their toes into the cloud model, for the good reason that the model isn’t quite ready for them yet.  But it’s getting there.
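
Here’s the shape of that half-hour calculator exercise, in case you want to try it.  Every number is an assumption I’ve picked for illustration, not survey data; the point is only that a steady 24x7 mainstream workload tends to favor internal gear.

    # Back-of-the-envelope only: all figures are illustrative assumptions.
    iaas_hourly_rate = 0.50            # assumed price of a mid-size IaaS instance, $/hour
    hours_per_month = 730

    server_capex = 6_000               # assumed cost of an equivalent internal server, $
    amortization_months = 36
    internal_monthly_overhead = 150    # assumed power, space, and admin share, $/month

    cloud_monthly = iaas_hourly_rate * hours_per_month
    internal_monthly = server_capex / amortization_months + internal_monthly_overhead

    print(f"Cloud, 24x7 instance: ${cloud_monthly:,.0f}/month")
    print(f"Internal server:      ${internal_monthly:,.0f}/month")
    # The cloud number wins only when utilization is low or bursty, which is
    # exactly why hosting-like web apps dominate cloud use today.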

What’s emerging in the cloud world is what we could call a vision of the cloud as a CONTEXT for applications and services, not as a place to run stuff cheaper.  IBM alludes to this in its just-released survey on cloud business models, but the document is surprisingly inept given IBM’s normal strategic leadership.  The real innovation in the cloud today is coming largely from open-source projects that are aiming at what some call “DevOps”, the marriage of operational and developer activities.  This is important because it addresses first the reality that important cloud apps will more likely be written for the cloud than migrated to it, and second the reality that managing real relationships among virtual resources and making it all work for a user who is also part of a virtual relationship can get a bit hairy.  There’s interesting stuff happening here, and it will likely impact even networking, eventually.

New Network Tech Issues Emerge

The pressure to create new profit sources for network operators is starting to generate some momentum in the network technology and architecture space, but it’s too early to call a trend in part because there are still a lot of profit options being pursued.  Not only that, even those who might see the same profit goal could well see a different path to achieve it.

One thing that seems likely to happen is that network operators in general and cable operators in particular will push the boundaries of the FCC’s tolerance for IP-non-Internet or “specialized services”.  In the Neutrality Order (itself under appeal, if you recall) the FCC declined to say that specialized services were a violation of neutrality principles, instead looking at the question of whether the services were being separated so as to put OTT services at a competitive disadvantage and undermine the open Internet.  The problem of course is that anything that can be delivered on IP can be delivered on the Internet in a technical sense, but the business model for offering services that require performance stability, high availability, or security may be difficult to achieve using the Internet.  That in part is why IP video services from telcos have generally been exempt.  But how much can the operators push the FCC on what’s excluded?

Another profit point is managed services over IP.  Some services, like home monitoring, are seen by users as more critical and thus more credible when sourced by a trusted provider.  Nearly all buyers trust their common carrier the most, followed by the cable company, followed by OTT players.  The profit margins on these services are also thin, and that makes them more interesting to the carriers, whose internal rates of return are historically low.

My surveys still show that operators are more interested in the “Big Three” monetization targets: content, mobile, and cloud.  I’ve seen the last of the three leaping into the forefront at most operators, not because it’s the most financially interesting (operators see the total revenue and total ROI for the cloud space as lower than for the other two) but because the path to market is much clearer.  In fact, most of the “managed services” drive is likely to be implemented on cloud infrastructure.  Some operators also think that Amazon’s continued reduction in cloud service pricing is an indication that basic IaaS is going to be a commodity market that will, as it builds, kill off its supporters with low profit and high cost.  That would give operators an opportunity to use their low IRR to step in and capture the pent-up opportunity.

Clouds, Services, and Network Equipment

I’ve been watching to see how long it would take for network vendors to begin to recognize the need for something more than pushing boxes, and we have indications that for some at least the time may be here.  Don’t expect sudden logic from these guys; there’s an internal culture to fight whose inertia has to be seen to be believed.  But maybe the head-in-the-sand period is ending, and competitive pressure may spread the effect to the whole market.

Cisco, which has recently added to its line of UCS servers, is reported by UBS to be working seriously on OpenFlow and SDN, with the intention of being a leader in this space.  OpenFlow is a standard protocol for supporting networks where connections must be explicit, a major shift from the discovered-route model of Ethernet and IP.  While the governmental, educational, and resource support for OpenFlow made it hard for any vendor to ignore, I’d not really expected any to get behind it in a meaningful way, and now the story is that Cisco will do just that.

Inside a cloud, not only in the data center but between data centers, OpenFlow could have significant benefits because it’s inherently more secure and easier to traffic-engineer for specific service levels.  It would fit well with a data center switching strategy, which is apparently where Cisco intends to include it.  From there, expansion to the cloud isn’t rocket science, and Cisco might therefore be the first to provide what’s coming to be called “Network as a Service”, a cloud-connectivity model.  Interestingly, the cloud and any cloud application (specifically CDNs) are exempt from net neutrality even in the narrow FCC conception.
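
For those who haven’t dug into OpenFlow, here’s a toy model of the “explicit connection” idea: a controller installs match/action entries in switch flow tables rather than letting the switches discover routes on their own.  The class names and fields below are my own invention for illustration, not any real controller’s API.

    # Toy illustration of controller-installed flow entries; not a real OpenFlow API.
    from dataclasses import dataclass, field

    @dataclass
    class FlowEntry:
        match: dict          # e.g. {"dst_ip": "10.0.2.5"}
        actions: list        # e.g. ["output:port3"]
        priority: int = 100

    @dataclass
    class Switch:
        name: str
        flow_table: list = field(default_factory=list)

        def handle(self, packet: dict) -> list:
            # Highest-priority matching entry wins; a miss goes to the controller.
            for entry in sorted(self.flow_table, key=lambda e: -e.priority):
                if all(packet.get(k) == v for k, v in entry.match.items()):
                    return entry.actions
            return ["send_to_controller"]

    # "Controller" side: explicitly provision the path for one service.
    edge = Switch("edge-1")
    edge.flow_table.append(FlowEntry({"dst_ip": "10.0.2.5"}, ["output:port3"]))

    print(edge.handle({"dst_ip": "10.0.2.5"}))   # ['output:port3']
    print(edge.handle({"dst_ip": "192.0.2.9"}))  # ['send_to_controller']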

The “inside-the-cloud” point here is critical.  Cisco, with UCS servers, has the best logical jumping-off point for a cloud story, but it’s not yet been able to come up with anything that’s transformational.  In fact, network vendors have been struggling to come up with any compelling reason to think the cloud will change the network and that there’s any special contribution that the network can make.  If Cisco gets these two developments together and then sings a pretty song (John Chambers, after all, is the industry’s master crooner) they could really grab control of this space.

Another development is Ericsson’s Smart Services Router (SSR), hardly the first “smart” edge device announced (Cisco and Juniper also have them) but possibly the first that really makes a case for edge intelligence by linking edge routing with FMC and also with experience management.  Operators who have looked at the SSR tell me they think it’s a significant advance over competitive products, and I think what they’re really reading is a new, more hardware-directed way in which Ericsson is positioning its assets.

Ericsson is one of the most savvy of the network vendors, but it’s tended to bias its strategic vision toward professional services, OSS/BSS, and other stuff that’s not only “not-news” and hard to position with the media, but also disconnected from the major monetization initiatives.  Even now, some operators who are early adopters of the SSR are not really looking at Ericsson in monetization projects that are actively underway during the SSR deployment (again, this sort of sales disconnect from strategic positioning isn’t uncommon for network equipment vendors).

Ericsson could in theory use the SSR to make the service-to-equipment connection that everyone has been struggling to make.  If they do, then their initiative not only impacts players like Cisco and Juniper, who have also launched new smart-edge products, it could impact players like Alcatel-Lucent, which is still struggling to link its service layer to network equipment, and NSN, which is divesting itself of non-mobile lines of business and may not have a network to link to.  And remember, Ericsson also has a strong optical portfolio, and no big-iron router core position to defend.  Could they be a contender for leading the next-gen optical core advances?  Maybe, but Huawei clearly wants to take a run at that space too, and Huawei’s optical strategy seems more metro-focused, which is where I think the real need will be.  Video traffic is intra-CDN-and-metro, after all.  And as I’ve said before, I think OpenFlow could play a role in the optical core too, if it’s melded with other IP-discovery schemes.

Reading the “New iPad” Tea Leaves

Well, Apple has finally quashed (most of) the rumors and announced its “new iPad”.  It has the A5X processor with quad-core graphics, a Retina display with photorealistic resolution, and much faster cellular wireless—HSPA+ at 21 Mbps, DC-HSDPA at 42 Mbps, and LTE at 73 Mbps.  However, the notion of a fully software-defined radio capable of supporting anyone’s service isn’t in the cards; while Apple says the new iPad has the “most bands ever” and is world-capable, there are separate versions for at least AT&T and Verizon.  The price is pretty aggressive too: $499 for the 16GB version in WiFi and $629 for the same capacity with LTE.  That’s the same pricing as the current iPad.

Apple fans will say the new gadget is revolutionary, and it is certainly noticeably better than the earlier model in both display quality and processor performance.  To me, though, it’s the LTE dimension that will be the revolution, and it’s hard to say at this instant just how far that revolution will go.

Up to now, the majority of tablets have been WiFi-only, and the current market trend has been to increase the WiFi lead rather than to reduce it.  Tablets on 3G, even the HSPA versions, are somewhat limited after all.  But on LTE there’s a decent chance that the new iPad would outperform a WiFi device.  And the quality of the device, particularly in gaming, may pull through additional units and thus generate a larger base of tablet buyers who have at least the potential to migrate to LTE.

So imagine these great graphics-and-video gadgets exploding on the marketplace, pulling in new LTE customers.  It’s easy to see how you could start to generate a lot more traffic, how some of those MWC predictions from vendors could seem sensible.  The problem of course is that all this traffic only increases the pressures on operators that have already driven them away from all-you-can-eat mobile pricing and into tiers and caps and throttling and excess charges.

It’s also likely that the new iPad is both driving and being driven by the video-on-the-run interest.  Video, in TV and movie form, is entertainment.  People who are not engaged in some activity typically want entertainment.  When they get home, finish dinner, and relax, they “watch TV”.  Arguably, video entertainment is the diversion of choice, and so it’s not surprising that if you equip users with the technical means to enjoy it while they’re on a bus or train or sitting in a waiting area, they will.  My own research, and most of the other work that I think is thorough and objective, has shown that we’re not really “cutting the cord” in TV viewing as much as in living; we don’t have a TV everywhere we want to watch something.  Lifestyle shifts do shift entertainment needs, but that doesn’t mean people don’t watch TV any more.

This whole scenario is what I think is behind Netflix’s rumored dance with the cable MSOs.  Cable may have some lingering paranoia about being cord-cut, but most of the savvy planners in the industry that I’ve talked to realize that this isn’t about protecting their core—it’s about extending their influence in an incrementally new eyeball market.  Those extra viewing hours are a chance to make money.  One way is TV Everywhere, a kind of ad-supplement strategy that links viewing rights to channel subscription.  Another is to sell VoD, and Netflix is a way of doing that.

But that still gets us back to network impact.  Ciena has been stumbling along, like many players in carrier networking who weren’t router vendors, but now it’s coming into its own just as it seems that routers may be in jeopardy.  The same factors are driving both movements, in fact.  Networks need capacity, and we’ve long-since passed the point where efficient capacity management is cheaper than raw bits for the kind of capacity growth the network needs.  Operators are therefore looking to flatten the OSI stack, to create a network that is more about capacity and less about aggregation and bandwidth management.  You can’t get capacity without the optical layer, so this new trend is favoring optical vendors.  That’s why Cisco and Juniper are singing optical songs even though optical convergence “undermines routing”.  In fact, it’s explosive demand that undermines routing; optical convergence is just the right way of coping with it.  So either you have a good optical strategy or you lose, because you’re going to lose in routing one way or the other.

Fiber, “Fixed” LTE, and Privacy: Our Complex Future

Networking is the business of traffic and capacity, supply and demand, and we have some news in both of these spaces.  I’d love to say that we had news suggesting that the balancing of these two factors—critical for any market—was being achieved.  I can’t.  Like politics, business often bogs down in posturing and fails to address the key issues.

There are some interesting developments in the area of serving rural customers with broadband, developments that might even end up impacting how anybody without FTTH potential might be served.  Verizon has announced a HomeFusion service that uses a home hub linked to LTE via a specially installed external antenna, which would presumably offer better range than in-device antennas and thus permit the service to be extended beyond traditional cell boundaries.

Another supply-side development is the news from BT and Alcatel-Lucent that they’ve pushed fiber to the 400G level.  While the media tends to focus on fiber in the FTTH context and yearn for the gigabit-to-the-home future, the data shows that there’s pretty much no market for that kind of high-end broadband even today, when 100 meg counts as fast.  You also hear that “the core” is the problem, but most video traffic will never see the core of the Internet.  The real issue is metro, and what we’re trying to work out today is how fiber would play into the delivery of metro-cached content and mobile backhaul, both applications with significant traffic and cost implications.  Video delivery is many-to-one in architecture, and the deployment of a smart CDN strategy (we’ll cover vendor CDN strategies in Netwatcher in April) could significantly impact cost and performance.  But mobile services’ impact on metro networking is harder to predict because the space is evolving.  Forget fiber in the core or to the home, friends.  It’s metro fiber we need to be working on now, and that’s not just a numbers game, as I’ve said before; it’s a complete rethinking of IP network principles around the realities of traffic, profit, and topology.

Apple is launching its new iPad today, and one thing that everyone (particularly Wall Street) is expecting is that it will be targeted more to enterprises.  Some think that Apple is going to reverse its long-standing yuppie-individualist slant in its vision of company IT, but I think that’s doubtful.  They’re finally winning with that vision.  The BYOD wave, which is a bit of the worker-rebellion-against-intransigent-IT sort of wave that Apple has always seen coming, is combining with thin clients and the cloud to create a more individualistic edge.  That opens new security, stability, and performance issues at the traditional level, but most of all it opens the opportunity to integrate cloud-hosted worker-enablement tools without worrying too much about IT backing.

Between supply and demand is regulation, and EU regulators have told the Internet Advertising Bureau that its do-not-track button in browsers will satisfy EU rules only if users are given a chance to make an informed choice about tracking, and if companies then suspend information collection except where it’s needed to support the actual web service the user has elected to invoke.  This suggests we may have another collision on the roadmap, one between the ever-exploding number of eager social startups who want to leverage advertising revenue and the likelihood that the more companies try to collect and personalize, the more users will get scared or disgusted and opt out.  Given that ad revenues are a very small part of total service revenues for network services, ad sponsorship can’t afford to lose much without starting a downward spiral that threatens it with losing everything.  Some, I hear, are already saying that usage pricing in broadband may usher in usage pricing for social networking and even search.

Juniper: Settling for Marketwashing?

One of the kingpins in Juniper’s financial future, according to the Street at least, is the success of its PTX strategy.  PTX is an optical-core evolution that responds to network operator pressure for some way to build IP cores other than with humungous gigarouters.  We noted at the time that we believed the PTX could make a natural partner for QFabric as a cloud strategy, but Juniper apparently never saw it that way, or at least never positioned it that way.

The latest in the saga is that Juniper and NSN are launching a new model for IP core networking called the Integrated Packet Transport Network.  This is a combination of Juniper’s PTX and the Nokia Siemens Networks hiT 7300 high-capacity optical packet transport platform, a product that seems to fall outside NSN’s new focus.  Obviously that raises the question of whether Juniper might plan to buy NSN’s optical business (there are plenty of speculative buyers for various NSN business elements, though).

Juniper and NSN have had a partnership for years, and in some ways a tighter relationship between the two might be logical if NSN is indeed to shed the non-mobile elements of its product line.  Juniper doesn’t have any compelling mobile strategy, so it would gain from that, perhaps.  The reason we say “perhaps” is that such a partnership would inevitably surrender service-layer dominance to NSN, which has something there even if it’s primarily focused on the mobile space.  If the service layer is the only place where real differentiation can occur, then does Juniper cede all differentiation to NSN?

The question may be all the more relevant for another pair of announcements from Juniper.  Last week, recall, Dell announced its own data center architecture, built around its servers and Force10 acquisition.  Given that our research has always shown that data center networks transition totally in sync with server/software architectures and their demands, this puts four companies (Cisco, Dell, HP, and IBM) with servers against Juniper in the data center, and Juniper doesn’t have servers.  So Juniper announced that QFabric had won an independent network test of data center networking scale and performance, and that Juniper “delivers the new network” for Microsoft Lync (UC/UCC) servers.

It’s hard not to see this as a counter-move, and if that’s true then it’s kind of a weak one.  We have as many independent tests in this industry in every space as we have vendors (does that suggest anything?), and each has a different outcome.  Switches switch, so it doesn’t matter what the application is.  As with NSN, this is more about a kind of marketing partnership.  I don’t think Juniper is asserting it works only (or even better) with NSN optics or Microsoft UC, but rather that it’s making its support of both “partners” explicit to respond to market conditions.  I’ve been critical of Juniper’s lack of high-level strategy and engagement in the past, and at the same time I’ve been impressed by their hardware technology.  I hoped they’d get the former up to speed to complement the latter, but these moves suggest to me that they’re not going to do that, but rather position themselves inside the story of another.  And if the story is about somebody huffing and puffing and blowing houses down, you really want the hero role for yourself.

Network Planning for a Usage-Priced World

The flap over usage pricing, renewed last week by announcements by AT&T and TW, has raised again the question of how network infrastructure might respond to a new broadband world, one where unlimited usage no longer stimulates new apps.  In such a future, operators would be accepting a role of bit-pusher, and the growth of broadband would no longer be unbridled.  Many, myself included, think that the social-network and video bubbles would burst.  Operators are somewhat cautious about the topic, but there are a few comments/directions we can relate and this is clearly a good time to be thinking about them.

So what’s a worthy approach?  Operators’ number one strategy for the moment is to reconsider their content strategies.  Video traffic is by far the greatest source of profit problems, and inside some of the current moves you can see the video issue weaving its threads.  For example, we’re told that operators have concluded that there are no “hogs” or heavy users who are not heavy consumers of video.  By setting usage cap points at the top end of what non-video users are likely to use, operators believe that they can focus their price-pressure remedy on the specific culprit of streaming.
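
A quick, purely illustrative calculation shows why a cap set above typical non-video consumption does the targeting by itself; the hours, bitrates, and cap point below are my assumptions, not operator data.

    # Illustrative usage math; all figures are assumptions, not operator data.
    MEGABITS_PER_GB = 8_000

    def monthly_gb(hours_per_day, mbps, days=30):
        """Gigabytes per month for a given daily viewing time and stream bitrate."""
        return hours_per_day * 3600 * mbps * days / MEGABITS_PER_GB

    non_video = monthly_gb(3.0, 0.5)                 # web, email, music
    streamer = non_video + monthly_gb(3.0, 5.0)      # plus 3 hours/day of HD streaming
    cap_gb = 150                                     # assumed cap point

    print(f"Non-video household: {non_video:.0f} GB/month")
    print(f"Streaming household: {streamer:.0f} GB/month (cap: {cap_gb} GB)")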

CDNs are also a part of the video optimization picture, of course.  In the past, CDNs have been largely promoted by content owners as a means of dealing with peering-limited delivery performance.  But by staging content artfully closer to the access edge, you can unload the deeper metro infrastructure where operators say content is creating the largest pressure in wireline.  In wireless, CDNs are helpful but can be made more compelling if you offload video from the 3G/4G cells, which could be done through expanded use of WiFi.  That’s because cell congestion in wireless can happen before metro aggregation congestion.
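
The aggregation-offload argument is really one line of arithmetic; the demand and hit-ratio numbers below are assumed for illustration only.

    # Illustrative only: the demand and cache hit-ratio values are assumptions.
    metro_video_demand_gbps = 100.0    # assumed peak video demand in one metro area
    edge_cache_hit_ratio = 0.6         # assumed share of requests served at the access edge

    deep_metro_load_gbps = metro_video_demand_gbps * (1 - edge_cache_hit_ratio)
    print(f"Deep-metro load without edge caching: {metro_video_demand_gbps:.0f} Gbps")
    print(f"Deep-metro load with edge caching:    {deep_metro_load_gbps:.0f} Gbps")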

Another strategy gaining credibility is the “usage-free” notion, something that is a perilous course from a regulatory perspective.  The idea is to exempt applications or content sources from usage pricing, either because they’re your own or because the app/content source has elected to pay on your behalf.  This seems to be at least a potential violation of the US neutrality principles, but because CDNs are explicitly exempt from neutrality it may be possible to concoct an architecture that would pass muster, and of course the whole order is in the courts on appeal anyway.

The next approach operators are looking at is pushing traffic down the stack.  A three-layer architecture with IP at the top is the most expensive to deploy and support, and reducing the number of active network elements has a major value in cutting costs.  Operators have been pushing for more fiber and less IP, and while vendors have blown kisses in this direction, it’s very possible that things like OpenFlow might be used to augment basic optical switching/steering and create a network core, and even a metro core, with more optics than electronics, so to speak.  This is what operators hope for in the long pull, but they believe they may have to wait until either optical players or other cost disruptors (like Huawei) figure out exactly how to do this; in their view, the big equipment vendors are dragging their feet on optical consolidation.

If the current trends in revenue per bit are sustained, our model says that global network infrastructure spending would have to fall to less-than-replacement levels in the next couple of years, meaning that the total new assets being placed in service would cost less than those being retired.  In some geographies this is already true in wireline, but the interesting thing is that we’re also approaching a point where the retirement of older TDM gear will largely cease to be a factor.  Up to now, we’ve had a net gain for IP, Ethernet, and optical even in the face of declining capex, but that’s coming to an end.
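
For what it’s worth, here’s a toy version of what “less than replacement” means; every input is an assumption picked to show the mechanism, not a forecast from the model itself.

    # Toy illustration of less-than-replacement spending; all inputs are assumptions.
    revenue = 100.0                  # revenue index in year zero
    traffic_growth = 0.25            # assumed annual traffic growth
    revenue_per_bit_decline = 0.22   # assumed annual decline in revenue per bit
    capex_share_of_revenue = 0.15    # assumed capex budget as a share of revenue
    retiring_asset_cost = 18.0       # assumed annual cost of assets reaching end of life

    for year in (1, 2, 3):
        revenue *= (1 + traffic_growth) * (1 - revenue_per_bit_decline)
        capex_budget = revenue * capex_share_of_revenue
        shortfall = retiring_asset_cost - capex_budget
        print(f"Year {year}: capex budget {capex_budget:.1f} "
              f"vs retirements {retiring_asset_cost:.1f} (shortfall {shortfall:.1f})")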