An Example of an App-to-Cloud-to-Flow Ecosystem

I mentioned in a blog last week that there was some important progress being made in the fusion of cloud development and deployment—what the industry calls “DevOps”.  There are also important developments in the area of cloud networking, another topic I’ve blogged about recently.  One indication of a unified approach to these critical problems was announced today by Big Switch at 11 AM EST, too late for my normal blog.  We’re going to talk about the Big Switch Open SDN announcement here, but first I need to summarize why I think it’s important.

The cloud has preoccupied nearly everyone, but not much attention has been focused on how the cloud changes the model of network services.  In the past, we obtained services by linking OVER the Internet to a URL that represented the capability or information we wanted.  On the surface, the cloud model doesn’t seem too different.  We have stuff hosted “in the cloud” but the stuff is still accessed via a URL.  Sure, there are issues associated with the way a dynamic resource is mapped to that URL, but hey, it’s not rocket science.  Look deeper, and you see more difference, perhaps enough to create a revolution.

In a cloud future, users’ needs are more dynamic too.  Imagine a Siri-like process front-ending a dynamic resource pool and you get a glimpse of what’s coming.  The user makes a request of a friendly agent in the cloud and the agent marshals all sorts of processing power and information to fulfill it.  That information isn’t delivered directly to the user, but through the agent, and the information paths are internal to the cloud and not external to the user.  That’s cloud networking; the separation of cloud-flow from user-flow.  Content delivery has already taken to a similar model; a CDN is a set of caches (pushed increasingly forward toward the user) and an interior network that delivers data to those caches.  Users connect not with distant content hosts but to local cache points.  It’s a service-network and service-access dichotomy; like the cloud.  Inside the CDN are a limited number of (you got it!) flows.

And enter another flow, OpenFlow.  OpenFlow is an explicit-connection model of networking where flows are authorized, not automatic.  For the whole universe of the Internet it doesn’t scale, but for the flows inside a cloud it’s perfect.  Even VPNs likely fit well in the OpenFlow model, and data center networks darn sure do.  The cloud validates OpenFlow, provided that you can get an OpenFlow cloud model built in the real world.

Architecturally it’s not hard to see how to do that, and to create a utopian model of linking applications to explicit network flows.  A switch controller simply creates forwarding rules; that’s the OpenFlow model.  In practice, though, you obviously need to worry about things like how you manage persistent flows, how you create VPNs or VPLSs, how your applications actually drive policies—you get the picture.  The point is that there is a lot of stuff that has to be added to basic standards to create a flow-based future network, and the process has to start with a conceptualization of the problem at an ecosystemic level, from apps to flows.
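To make the “controller simply creates forwarding rules” point concrete, here’s a minimal sketch of the idea in Python.  This is illustrative only, not the real OpenFlow wire protocol; the class and field names are invented for the example.  The key behavior it shows is connection by permission: traffic that matches no installed rule is dropped, not flooded.

```python
# Minimal sketch of the OpenFlow idea: a controller installs explicit
# match/action rules in a switch's flow table; packets matching no
# rule are not forwarded by default (connection by permission).

class FlowTable:
    def __init__(self):
        self.rules = []  # ordered list of (match_dict, action) pairs

    def install(self, match, action):
        """Controller-side call: authorize a flow by adding a rule."""
        self.rules.append((match, action))

    def forward(self, packet):
        """Switch-side lookup: first matching rule wins."""
        for match, action in self.rules:
            if all(packet.get(k) == v for k, v in match.items()):
                return action
        return "drop"  # unmatched traffic is dropped, not flooded


switch = FlowTable()
# The controller explicitly authorizes one flow between two hosts.
switch.install({"src": "10.0.0.1", "dst": "10.0.0.2"}, "out:port2")

allowed = switch.forward({"src": "10.0.0.1", "dst": "10.0.0.2"})
blocked = switch.forward({"src": "10.0.0.9", "dst": "10.0.0.2"})
print(allowed, blocked)  # out:port2 drop
```

Everything the blog lists as missing—persistent flows, VPN/VPLS construction, application-driven policy—would sit above this trivial match/action core, which is exactly why an ecosystem view matters.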

Some of that could in theory be provided by a model of cloud networking, because the cloud problem is that resources have addresses but applications can’t have them, at least until they’re assigned to resources.  There’s a virtualization layer needed here that players in the OpenStack area, for example, have recognized and are attempting to address through work like Melange and Donabe.  Here we have policies linked to applications and provisioning, but we need to link that to network flows.
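The virtualization layer described here can be sketched simply: applications are known by logical names, and network addresses are bound to them only at the moment they’re placed on a resource.  This toy Python class is loosely in the spirit of the Melange/Donabe work mentioned above, but the API and names are invented for illustration.

```python
# Sketch of the address-virtualization layer: resources have addresses,
# applications don't—until deployment binds them together.

class AddressBinder:
    def __init__(self, pool):
        self.free = list(pool)   # unassigned resource addresses
        self.bound = {}          # logical app name -> address

    def place(self, app_name):
        """Assign an address when the app is deployed to a resource."""
        addr = self.free.pop(0)
        self.bound[app_name] = addr
        return addr

    def resolve(self, app_name):
        """What flow setup must be driven against: the current binding."""
        return self.bound.get(app_name)


binder = AddressBinder(["192.168.1.10", "192.168.1.11"])
binder.place("billing-app")
print(binder.resolve("billing-app"))   # 192.168.1.10
print(binder.resolve("idle-app"))      # None: no address until placed
```

The missing link the blog points at is the step after `resolve()`: feeding the binding into flow setup so the network follows the application around.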

Sound like two faces of a common problem, separated by a logical inch or so?  Sound like something we need to get solved?  If you’ve followed my blogs, you know that’s what I think, which is why today’s announcement is important.

Big Switch is a startup player in OpenFlow, one of the early ones in fact.  We wrote about them in our Netwatcher OpenFlow piece in October 2011.  Then, they were a controller play.  What they’re now doing is defining a broader ecosystem, an open structure (called, not surprisingly, “Open SDN”) that is based on open standards, open APIs, and open source.  Their own business model, like that of other open-source players, is to provide professional services and a hardened version of some software for commercial application.

The Open SDN model is a flow from application to switch, focusing on how you build a practical flow network and sustain its operation.  It handles things like multi-tenancy, essential for the cloud, on-demand or policy-based flows, and best of all it handles integration with things like OpenStack.  While Big Switch isn’t asserting direct compatibility with all of the various OpenStack network-related projects, it does have Quantum project involvement and a submission there.  Quantum is an open network-service offshoot of OpenStack’s inherent vision that the network is also a resource in the cloud, and it could be linked to Melange and Donabe for a more cohesive DevOps strategy.  The point is that this makes Big Switch arguably the first player to link all the way from the cloud-resource vision down to an OpenFlow switch.

My view is that all of this is really just the tip of a cloud-network-and-NaaS iceberg.  If you can do cloud networking, then you can do everything that’s part of the cloud, and since the cloud is the abstraction of computing and network-delivered services for the future, you can do what the future needs.  It would be easy to get all excited over the pieces of the network of the future, but we can’t build it by thinking at the pieces level, which is why there’s a real need for a top-down model that links apps to clouds to flows.  At least we now have one such model.

One model doesn’t make a market, usually.  We’re likely to have a lot more action in this space.  As I said in an earlier blog today, Cisco is now rumored to have an OpenFlow spin-in on the drawing board, but I think that’s likely to be a hardware play.  The stuff that acts as the bridge between the application, the cloud, the resource control, addressing, and information flows could be really critical as a competitive point for vendors in the networking space, and even for OSS/BSS developments.  Thus, Big Switch may be like a tiny magician who has pulled a 900-pound gorilla out of a hat instead of a fuzzy bunny.  Can they control their own fate here?  We’ll see.

Cisco Spin-In and Ciena Praise Equals Cloud Network

There are more indications this week of a sea change in networking that goes beyond the simple question of whether you switch or route or whose boxes you use.  One data point comes from a Credit Suisse story and the other from a Cisco rumor.

Credit Suisse is saying good things about Ciena, a company that’s been stuck in low gear for as long as I can remember.  They’re “bucking the trend”, says the analyst, but that’s probably the one statement that’s out of whack in the whole report.  It’s not that Ciena is bucking the trend, which in this case is the trend toward lower capex, it’s that it’s on the cusp of the new trend.  Hold that thought till I get through the next data point, though.

Point number two is that Cisco is rumored to be launching another incubated startup, this one in the SDN space, and with some of the same players that were involved in a prior one, according to some insiders.  SDN, of course, means OpenFlow.  Cisco has been fairly articulate in support of the new explicit-switching notion, going further than the reluctant-acquiescence response of many other switch/router players.  SDN is a new notion of data movement by permission, a contrast to the open Internet connection model.

What’s common here?  It’s not technology as much as drivers for the changes.  What we’re seeing is a remaking of the Internet model, created by the sum of the modern forces of iPhones and iPads and Facebook and Twitter and Google and Netflix and LTE and more.  The old Internet relied on creating a community that was reachable only with universal connectivity.  The new Internet recognizes that however much freedom you offer in connective choice, the user is going to spend most of their time on a few sites doing a few things, and that the thing that will have the most traffic impact is content.

The shift to CDN and cloud is inevitable given the direction of services online, and CDNs and clouds mean a service network with a sharp boundary, an “interior” where you have a few high-powered valuable connections, and an “exterior” where you have conventional Internet addressing.  Like it or not, vendors, this is how the network of the future will be built.  The architecture means you don’t need big routers, just big on-ramps (Ericsson’s SSR comes to mind) and fat pipes inside to coordinate service traffic among a small number of supersites.  It’s a perfect model for an optical network and SDN.  You can see why Ciena (optical pipe player) and a Cisco SDN incubator (holds a place for Cisco without making it look like routing is throwing in the towel) are important.

Ciena has a potential advantage here, though raw optics isn’t the solution either.  They’d be moving from the space with the lowest margins into a higher-margin space no matter where they move.  It’s Ciena who should be doing OpenFlow, and not just in partnership with universities and science projects!  Listen up there, guys: what’s a low-margin space for the router players is the high ground for you, so don’t give it up.


TV: Everywhere, Network-Where, or Nowhere?

Amazon has cut a deal with Discovery to stream its programming, and the announcement has spawned a serious question about the future of TV in general, and of TV Everywhere in particular.  Like just about everything else in video, this is complicated.

Let’s start off with some data.  The largest segment of viewing that flees standard TV (including standard, integrated VoD) is the group that’s disgusted with what’s on and jumps onto the Internet in quiet (well, sometimes not so quiet) desperation.  This group is, in terms of population, half again as large as the youth segment, and it streams more than three times as much material.  Its material is almost totally monetizable (you can sell it or sell commercials into it), where the youth segment consumes a lot of YouTube clips of basketball tricks.  In terms of market segments, this one is also the fastest growing.

The big question is where the stuff will come from, which is where the Amazon deal fits in.  Adults in the 30-45 year age range said that they liked eleven TV series, all of which had ended their runs more than ten years ago, better than anything (other than news or sports) currently broadcast.  TV series are the most popular fodder for those disgusted with “what’s on”, in part because people tend to have “slots” an hour in length filled with unsatisfactory material but bounded by stuff they’d still like (or be willing) to watch.  Thus, movies are less valuable, and thus Amazon’s deal with Discovery could be significant.

The other dimension of this is production, of course.  TV Everywhere isn’t a viewing strategy, it’s a paying strategy.  The problem with online TV is that the total value of commercials in the material is about 3 to 5% of the value of what could be sold into standard channelized programming delivery.  That’s not enough to fund the production of the show, so the immediate problem with a pure streaming strategy is that even the die-hard Lucy fans will eventually get tired of watching her eat chocolate.  TV Everywhere says, in effect, that you have to pay for the shows in channelized form to get them in streaming form.
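The arithmetic behind the funding problem is worth making explicit.  The per-hour channelized ad figure below is an invented placeholder purely for illustration; only the 3 to 5% ratio comes from the text.

```python
# Rough arithmetic on why streaming ads can't fund production.
channelized_ad_value = 100_000   # hypothetical ad revenue per broadcast hour
streaming_ratio = 0.05           # streaming ads worth 3-5% of channelized

streaming_ad_value = channelized_ad_value * streaming_ratio
shortfall = channelized_ad_value - streaming_ad_value
print(streaming_ad_value, shortfall)  # 5000.0 95000.0
```

Whatever the absolute numbers, a 95% revenue haircut per viewing hour is why a pure streaming strategy can’t pay for the shows it streams—hence the TV Everywhere pay-for-channelized gate.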

Ah, and here’s the real reason why the Amazon/Discovery thing is important.  It’s not that getting TV Everywhere knocked off means that somehow the Internet is going to replace all of TV; if it does there won’t be anything to watch.  The thing that could be not only important but critical is that the networks that produce the content are the ones that we need to keep in the game.  If they can go directly to the consumer or to a portal/distributor player like Amazon and cut a better deal than they can get by working with cable companies, the total amount of content in the mill might actually INCREASE.  So the thing to watch now is whether the networks, starting of course with the cable networks, start to work “deeper” content distribution deals.  Sure, Amazon is a step, but the real news would be that Discovery decided to stream its own stuff for pay.  If that happens, the whole industry changes.

Why Cisco’s NDS Deal Could be Huge

Cisco, no slouch in the world of streaming video to start with, may have made an over-the-top (no pun intended—or maybe a little one!) move by announcing it’s acquiring OTT video software company NDS.  The move, I think, may be one of the biggest Cisco has made in the last decade, and it poses a major threat for Cisco’s competitors.

NDS is interesting because they’ve focused on supporting a practical video ecosystem and not just a romp on the streaming bandwagon.  They have highly modularized, highly orchestrable, video components but they also have one of the broadest video-service portfolios in the market, which means that they can address nearly anything in the way of opportunity.  Their stuff is API-based, easily customized and exposed to developers, integrates across service boundaries…in short, it’s a pretty complete service layer.  Maybe even the most complete now available.

For Cisco, NDS could be a killer idea—literally.  There are three monetization pillars that control strategic engagement with operators; cloud, mobile/behavioral, and content.  Cisco has a leg up in terms of cloud strategy among the network equipment vendors, because of UCS.  This week, recall, Cisco upped the UCS ante with new server platforms and a new fabric-connect architecture that leverages its combined position in servers and networks.  Where Cisco has been less successful is in mobile and content.  One win and two losses doesn’t add up to Chambers’ kind of sales math, and content is potentially golden for a company like Cisco because it not only gives them a new monetization engagement path, it gives them another path to mobile victory as well.  Might Cisco be wrapping up all the valuable pieces?

Yeah, they might, but Cisco has a way of snatching strategic defeat out of acquisition victory.  You can corner the opportunity for houses by getting prefab walls, roof, furniture, and so forth, but if you forget to tell anyone you’re building a house you risk generating more yawns than dollars.  Worse, the absence of a strong position for the pieces of Cisco’s winning design will give competitors an opportunity to step in with their own moves.  Every day that Cisco doesn’t sing risks a competitor stacking the market choir’s sheet music to their own advantage.  And competitors are out there.

Alcatel-Lucent has arguably the most complete and mature content strategy, but it’s been a bit hung up on IPTV and hasn’t advanced its positioning to cover more recent trends as quickly as it should have.  They also have the best technology for a “service layer” but they’ve never been able to get the message out.  Ericsson has made radical advances in their cloud networking positioning, taking them from a non-player status to being a contender, and they obviously have strong mobile credentials.  They have, in fact, pretty much all of the key technology for mobile and the cloud.  NSN has a CDN relationship with Verivue and a well-positioned multi-screen mobile video strategy, not to mention their strong mobile RAN position.  Any of these three could make life more difficult for Cisco as it tries to capitalize on its new technology.

Juniper may be the player with the most to lose here, simply because the other players in the market space keep starting new races and Juniper has the smallest number of runners.  The IP and Ethernet switch layer is a tough place from which to jump off into a service story.  As a mobile player, Juniper lacks the same things Cisco lacks.  In the cloud space, Juniper has switches but no servers, so Cisco’s strategy for making exciting linkages between the two is hurting, and Juniper hasn’t played “cloud networking” nearly as well as Ericsson has.  Now it’s pressured in content, where its own positioning has been anemic.

But will this Cisco move generate counter-moves at all?  That’s the question I can’t answer, because I don’t know whether Cisco’s acquisition is aimed at building a strategy or just managing sales-generated objections on product coverage.  How many times have we seen Chambers run out and buy a company because a sales person said they needed a feature to close a deal?  How many of those companies have turned out to be albatrosses?

DT and Pew: The What and Why of the “Cloud Network”

DT has become what’s possibly the first major carrier to show us what the network of the future is going to look like, though the description is still a bit cryptic and not always being picked up correctly in stories.  All of the factors I’ve been blogging about for the last week, both the demand-side changes and technology issues, are focusing on a new vision of the network.  It’s not the PSTN, but it’s really not the Internet either.

For the last four or five years, operators have been aware that the major revenue opportunities were likely to come from the delivery of services/experiences that were created by network-distributed intelligence.  Content relies on CDNs and caching, and the cloud obviously relies on distributed computing.  They recognized that in such networks, the “edge” was simply an agent or service connection point, and the “core” really didn’t exist in a traditional sense.  Instead you had a web of trunk connections whose purpose was to create pooled facilities and not to pass end-to-end traffic.  This is what a “cloud network” would look like.

Despite early carrier recognition of this point (I blogged publicly about this four years ago!) network technology in general and network equipment vendors in particular have been slow to accept what’s happening.  In part, I think this was because they were uncomfortable with any change—sellers want to sell what they have—but in fairness the operators themselves haven’t really articulated the picture.  That’s what DT has finally done.  Just be aware that DT isn’t speaking just for itself; these plans are in the works in every single Tier One on the planet, and they’re going to revolutionize networking.

The Street hasn’t really picked up on this yet either, except to note that capex growth now seems slower than before (yes, because nobody wants to invest in something earning less ROI every month).  I thought there might be a bit of an overhang effect here too; operators could slow their spending awaiting a paradigm.  It will be interesting to see whether the DT comments stimulate more disclosure, more positioning, and more spending growth.  But remember, the spending won’t be on networking-as-usual.  That’s gone now; 2012 is the high-water mark of the old Internet.

The new network is driven in large part by the need to marshal resources ad hoc to fulfill users’ needs in a mobile broadband (“hyperconnected” is the current buzzword, which I hate because I hate all current buzzwords) world.  There have been a slew of studies over literally generations showing that people think more quickly and more incisively when standing, and range more philosophically and broadly when lying down.  Well, mobile people are clearly more in decision mode than couch potatoes, and that’s creating a whole new demand for services, shifting from research to decision support.  It’s that shift that moves us to cloud fulfillment.

It’s also a shift that may be rewiring our brains.  A Pew study just announced suggests that people weaned on social networking and always-on mobility will by 2020 be a completely different kind of consumer, a completely different kind of worker.  This change is the thing that I think is going to revolutionize cloud computing at the enterprise level.  We draw workers from the pool of humanity, after all, and as that pool rewires itself to become “hyperconnected” brain-wise, it is also rewiring the optimum productivity flows and the tools that will support those workers.  It could bring about profound changes in every application we run in business, not to mention how business computing is done.  There is zero chance that every application running today will migrate to the cloud based on cost; there is a slight chance that every application running today will be replaced by a cloud-hosted component set whose structure, interconnection, and behavior is attuned to the brains of the workers of the future.

New Models for Streaming and the Cloud?

It’s a surprise to some (the media, at least, is pretending surprise) but Intel is looking to get into the streaming video space.  My cynicism here regarding “surprise” is of course related to the fact that they’ve been working on that for some time, and probably Intel has little choice.  It really gets back to portable devices and the fact that they’re built almost entirely without Intel processors.

Entertainment is a big market, as Apple has shown.  The PC space Intel has dominated almost from the first is big too, but it’s clearly plateaued.  Other chips own portable devices and while Intel may hope for an against-all-odds-Gingrich-like victory of Windows 8 against iOS and Android, it’s probably not smart to mortgage the house for the bet.  While TV may not be the ideal space (Intel actually seemed to be exiting the space just last year) it’s about the only game in town.

What Intel is apparently working on is something along the “Apple TV” and “Google TV” line, a pseudo-STB that would mediate Internet-served content to a TV and presumably try to integrate the viewing experience with that of traditional channelized television.  I’m hearing that all three of these companies (and a couple more besides) have been working on the features that might augment basic streaming viewing to increase utility.  There are obvious ones; a smart TV could “tell” you that nothing you’re typically willing to watch is coming on in the next couple of hours, but that there are a couple of streaming options available that your current viewing pattern suggests you may be “in the mood” for.  Everyone will do this sort of thing, or sue everyone else on alleged patent infringements.  It’s the new hot stuff that has to be considered, and of course we don’t know what that is yet.  It’s not that Intel or the rest think they’ll make it all up, of course; the TV model will be app-based just like phones and tablets are.  The thing is, anything that requires major hardware or connectivity concessions may require hardware support, and you don’t want the platform to fall short of early needs.

Streaming video is becoming interesting, and believe it or not the cause isn’t the portable device as much as crummy programming.  The problem is the advertising battle, in two dimensions.  First, network TV now has to compete with everything online in terms of ad budget.  Advertising is largely a zero-sum game, so whatever is gained by Google or whoever is lost by ABC or NBC.  Less money, less programming budget, more “reality TV” that appeals to a very limited demographic, and more viewing segments cut adrift.  In 2012 flight from boredom will be a larger driver for new streaming users than mobile video will be.  And of course streaming gives people a way to avoid TV and commercials, which makes the problem worse.

Video isn’t the only space where changes in demand may bring changes in technology.  In cloud computing, I’m seeing some order emerging from the hype-driven chaos.  For a couple of years now we’ve been hearing that the cloud was going to absorb enterprise IT, something that responsible surveys have continually shown is untrue.  Not to mention that half an hour with a calculator will show you that it can’t be true; the cost of the cloud is actually higher than the cost of internal IT for mainstream computing apps.  Most of what’s driving the cloud today is really a set of hosting-like applications, expansions of the basic web model, and most comes from web startups and not enterprises.  Despite what you read, enterprises are still just dipping their toes in the cloud model for the good reason that the model isn’t quite ready for them yet.  But it’s getting there.

What’s emerging in the cloud world is what we could call a vision of the cloud as a CONTEXT for applications and services, not as a place to run stuff cheaper.  IBM alludes to this in its just-released survey on cloud business models, but the document is surprisingly inept given IBM’s normal strategic leadership.  The real innovation in the cloud today is coming largely from open-source projects that are aiming at what some call “DevOps”, the marriage of operational and developer activities.  This is important because it addresses first the reality that important cloud apps will more likely be written for the cloud than migrated to it, and second the reality that managing real relationships among virtual resources and making it all work for a user who is also part of a virtual relationship can get a bit hairy.  There’s interesting stuff happening here, and it will likely impact even networking, eventually.


New Network Tech Issues Emerge

The pressure to create new profit sources for network operators is starting to generate some momentum in the network technology and architecture space, but it’s too early to call a trend in part because there are still a lot of profit options being pursued.  Not only that, even those who might see the same profit goal could well see a different path to achieve it.

One thing that seems likely to happen is that network operators in general and cable operators in particular will push the boundaries of the FCC’s tolerance for IP-non-Internet or “specialized services”.  In the Neutrality Order (itself under appeal, if you recall) the FCC declined to say that specialized services were a violation of neutrality principles, instead looking at the question of whether the services were being separated so as to put OTT services at a competitive disadvantage and undermine the open Internet.  The problem of course is that anything that can be delivered on IP can be delivered on the Internet in a technical sense, but the business model for offering services that require performance stability, high availability, or security may be difficult to achieve using the Internet.  That in part is why IP video services from telcos have generally been exempt.  But how much can the operators push the FCC on what’s excluded?

Another profit point is managed services over IP.  Some services, like home monitoring, are seen by users as more critical and thus more credible when sourced from a trusted provider.  Nearly all buyers trust their common carrier the most, followed by the cable company, followed by OTT players.  The profit margins on these services are also thin, and that makes them more interesting to the carriers, whose internal rates of return are historically low.

My surveys still show that operators are more interested in the “Big Three” monetization targets; content, mobile and cloud.  I’ve seen the last of the three leaping into the forefront at most operators, not because it’s the most financially interesting (operators see the total revenue and total ROI for the cloud space to be lower than the other two) but because the path to the market is much more clear.  In fact, most of the “managed services” drive is likely to be implemented on cloud infrastructure.  Some operators also think that Amazon’s continued reduction in cloud service pricing is an indication that basic IaaS is going to be a commodity market that will, as it builds, kill off its supporters with low profit and high cost.  That would give operators an opportunity to again use their low IRR to step in and take advantage of the pent-up opportunity.


Clouds, Services, and Network Equipment

I’ve been watching to see how long it would take for network vendors to begin to recognize the need for something more than pushing boxes, and we have indications that for some at least the time may be here.  Don’t expect sudden logic from these guys; there’s an internal culture to fight whose inertia has to be seen to be believed.  But maybe the head-in-the-sand period is ending, and competitive pressure may spread the effect to the whole market.

Cisco, who has recently added to its line of UCS servers, is reported by UBS to be working seriously on OpenFlow and SDN, with the intention of being a leader in this space.  OpenFlow is a standard protocol for supporting networks where connection must be explicit, a major shift from the discovered-route model of Ethernet and IP.  While the governmental, educational, and resource support for OpenFlow made it hard for any vendor to ignore, I’d not really expected any to get behind it in a meaningful way, and now the story is that Cisco will do just that.

Inside a cloud, not only in the data center but between data centers, OpenFlow could have significant benefits because it’s inherently more secure and easier to traffic-engineer for specific service levels.  It would fit well with a data center switching strategy, which is apparently where Cisco intends to include it.  From there, expansion to the cloud isn’t rocket science, and Cisco might therefore be the first to provide what’s coming to be called “Network as a Service”, a cloud-connectivity model.  Interestingly, the cloud and any cloud application (specifically CDNs) are exempt from net neutrality even in the narrow FCC conception.
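The traffic-engineering advantage inside the cloud can be made concrete: because flows are explicit, a controller can place each one on a path against known link capacities and refuse what won’t fit, instead of relying on discovered shortest paths.  The topology, capacities, and greedy placement policy below are all invented for illustration.

```python
# Sketch of explicit-flow traffic engineering between data centers:
# admit each flow on a path with spare capacity, or refuse it outright.

links = {  # (node_a, node_b) -> spare capacity in Gbps
    ("dc1", "core"): 40, ("core", "dc2"): 40, ("dc1", "dc2"): 10,
}

paths = {  # candidate paths between the two data centers
    "direct": [("dc1", "dc2")],
    "via_core": [("dc1", "core"), ("core", "dc2")],
}

def admit(demand_gbps):
    """Place a flow on the first path with enough spare capacity,
    reserving the bandwidth; refuse the flow otherwise."""
    for name, hops in paths.items():
        if all(links[h] >= demand_gbps for h in hops):
            for h in hops:
                links[h] -= demand_gbps
            return name
    return None  # refused, so admitted flows keep their service level

first = admit(8)    # fits on the direct link
second = admit(8)   # direct link is nearly full, so detour via core
third = admit(50)   # exceeds every path, refused
print(first, second, third)  # direct via_core None
```

Refusing the third flow rather than degrading everyone is exactly the behavior that makes the model “easier to traffic-engineer for specific service levels”; the open Internet has no equivalent lever.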

The “inside-the-cloud” point here is critical.  Cisco, with UCS servers, has the best logical jumping-off point for a cloud story, but it’s not yet been able to come up with anything that’s transformational.  In fact, network vendors have been struggling to come up with any compelling reason to think the cloud will change the network and that there’s any special contribution that the network can make.  If Cisco gets these two developments together and then sings a pretty song (John Chambers, after all, is the industry’s master crooner) they could really grab control of this space.

Another development is Ericsson’s Smart Services Router (SSR), hardly the first “smart” edge device announced (Cisco and Juniper also have them) but possibly the first that really makes any case for edge intelligence by linking edge routing with FMC and also with experience management.  Operators who have looked at the SSR tell me they think it’s a significant advance over competitive products, and I think what they’re really doing is reading a new and more hardware-directed way Ericsson is positioning its assets.

Ericsson is one of the most savvy of the network vendors, but it’s tended to bias its strategic vision toward professional services, OSS/BSS, and other stuff that’s not only “not-news” and hard to position to the media, but disconnected from the major monetization initiatives.  Even now, some operators who are early adopters of SSR are not really looking at Ericsson in monetization projects that are actively underway during the SSR deployment (again, this sort of sales disconnect from strategic positioning isn’t uncommon for network equipment vendors).

Ericsson could in theory use SSR to make the service-to-equipment connection that everyone has been struggling to make.  If they do, then their initiative not only impacts players like Cisco and Juniper who have also launched new smart-edge products, it could impact players like Alcatel-Lucent who is still struggling to link its service layer to network equipment, and NSN who is divesting itself of non-mobile lines of business and may not have a network to link to.  And remember, Ericsson also has a strong optical portfolio, and no big-iron router core position to defend.  Could they be a contender for leading the next-gen optical core advances?  Maybe, but Huawei clearly wants to take a run at that space, and Huawei’s optical strategy seems more metro-focused, where I think the real need will be.  Video traffic is intra-CDN-and-metro after all.  And as I’ve said before, I think OpenFlow could play a role in the optical core too, if it’s melded with other IP-discovery schemes.


Reading the “New iPad” Tea Leaves

Well, Apple has finally quashed (most of) the rumors and announced its “new iPad”.  It has the quad-core A5X processor, a Retina display with photorealistic resolution, and much faster cellular wireless—21Mbps HSPA+, 42Mbps DC-HSDPA, and 73Mbps LTE.  However, the notion of a fully software-defined radio capable of supporting anyone’s service isn’t in the cards; while Apple says the new iPad has the “most bands ever” and is world-capable, there are separate versions for at least AT&T and Verizon.  The price is pretty aggressive too: $499 for the 16GB WiFi version and $629 for the same capacity with LTE.  That’s the same price as the current iPad.

Apple fans will say the new gadget is revolutionary, and it is certainly noticeably better than the earlier model in both display quality and processor performance.  To me, though, it’s the LTE dimension that will be the revolution, and it’s hard to say at this instant just how far that revolution will go.

Up to now, the majority of tablets have been WiFi-only, and the current market trend has been to widen WiFi’s lead rather than narrow it.  Tablets on 3G, even the HSPA versions, are somewhat limited, after all.  But on LTE there’s a decent chance the new iPad would outperform a WiFi device.  And the quality of the device, particularly in gaming, may pull through additional units and thus generate a larger base of tablet buyers who have at least the potential to migrate to LTE.

So imagine these great graphics-and-video gadgets exploding on the marketplace, pulling in new LTE customers.  It’s easy to see how you could start to generate a lot more traffic, how some of those MWC predictions from vendors could seem sensible.  The problem of course is that all this traffic only increases the pressures on operators that have already driven them away from all-you-can-eat mobile pricing and into tiers and caps and throttling and excess charges.

It’s also likely that the new iPad is both driving and driven by the video-on-the-run interest.  Video, in TV and movie form, is entertainment.  People who are not engaged in some activity typically want entertainment.  When they get home, finish dinner, and relax, they “watch TV”.  Arguably, video entertainment is the diversion of choice, so it’s not surprising that if you equip users with the technical means to enjoy it while they’re on a bus or train or sitting in a waiting area, they will.  My own research, and most of the other work I think is thorough and objective, has shown that we’re not really “cutting the cord” on TV viewing as much as on the living room; we don’t have a TV everywhere we want to watch something.  Lifestyle shifts do shift entertainment needs, but that doesn’t mean people don’t watch TV any more.

This whole scenario is what I think is behind Netflix’s rumored dance with the cable MSOs.  Cable may have some lingering paranoia about being cord-cut, but most of the savvy planners in the industry that I’ve talked to realize that this isn’t about protecting their core—it’s about extending their influence in an incrementally new eyeball market.  Those extra viewing hours are a chance to make money.  One way is TV Everywhere, a kind of ad-supplement strategy that links viewing rights to channel subscription.  Another is to sell VoD, and Netflix is a way of doing that.

But that still gets us back to network impact.  Ciena has been stumbling along, like many players in carrier networking who weren’t router vendors, but now it’s coming into its own just as routers seem to be in jeopardy.  The same factors are driving both movements, in fact.  Networks need capacity, and we’ve long since passed the point where efficient capacity management is cheaper than raw bits for the kind of capacity growth the network needs.  Operators are therefore looking to flatten the OSI stack, to create a network that is more about capacity and less about aggregation and bandwidth management.  You can’t get capacity without the optical layer, so this new trend favors optical vendors.  That’s why Cisco and Juniper are singing optical songs even though optical convergence “undermines routing”.  In fact, it’s explosive demand that undermines routing; optical convergence is just the right way of coping with it.  So either you have a good optical strategy or you lose, because you’re going to lose in routing one way or the other.


Fiber, “Fixed” LTE, and Privacy: Our Complex Future

Networking is the business of traffic and capacity, supply and demand, and we have some news in both of these spaces.  I’d love to say that we had news suggesting that the balancing of these two factors—critical for any market—was being achieved.  I can’t.  Like politics, business often bogs down in posturing and fails to address the key issues.

There are some interesting developments in the area of serving rural customers with broadband, developments that might even end up shaping how anyone without FTTH potential is served.  Verizon has announced a HomeFusion service that uses a home hub linked to LTE via a specially installed external antenna, which would presumably offer better range than in-appliance antennas and thus permit the service to be extended beyond traditional cell boundaries.

Another supply-side development is the news from BT and Alcatel-Lucent that they’ve pushed fiber to the 400G level.  While the media tends to focus on fiber in the FTTH context and yearn for the gigabit-to-the-home future, the data shows there’s pretty much no market for high-end broadband even today, when 100 meg counts as fast.  You also hear that “the core” is the problem, but most video traffic will never see the core of the Internet.  The real issue is metro, and what we’re trying to work out today is how fiber would play into the delivery of metro-cached content and mobile backhaul, both applications with significant traffic and cost implications.  Video delivery is many-to-one in architecture, and the deployment of a smart CDN strategy (we’ll cover vendor CDN strategies in Netwatcher in April) could significantly impact cost and performance.  But mobile services’ impact on metro networking is harder to predict because the space is still evolving.  Forget fiber in the core or to the home, friends.  It’s metro fiber we need to be working on now, and that’s not just a numbers game, as I’ve said before; it’s a complete rethinking of IP network principles around the reality of traffic, profit, and topology.

Apple is launching its new iPad today, and one thing everyone is expecting (particularly Wall Street) is that it will be targeted more to enterprises.  Some think that Apple is going to reverse a long-standing yuppie-individualist slant in its vision of company IT, but I think that’s doubtful.  They’re finally winning with that vision.  The BYOD wave, a bit of the worker-rebellion-against-intransigent-IT sort of wave that Apple has always seen coming, is combining with thin clients and the cloud to create a more individualistic edge.  That opens new security, stability, and performance issues at the traditional level, but most of all it opens the opportunity to integrate cloud-hosted worker-enablement tools without worrying too much about IT backing.

Between supply and demand is regulation, and EU regulators have told the Internet Advertising Bureau that its do-not-track button in browsers will satisfy EU rules only if users are given a chance to make an informed choice about tracking, and if companies then suspend information collection except where it’s needed to support the actual web service the user has elected to invoke.  This suggests we may have another collision on the roadmap, one between the ever-exploding number of eager social startups who want to leverage advertising revenue and the likelihood that the more companies try to collect and personalize, the more users will get scared or disgusted and opt out.  Given that ad revenues are a very small part of total service revenues for network services, ad sponsorship can’t afford to lose much without starting a downward spiral that threatens it with losing everything.  Some, I hear, are already saying that usage pricing in broadband may usher in usage pricing for social networking and even search.