We filed comments with the operators who generated the white paper on Network Functions Virtualization (NFV) and I’ve referenced those comments in posts this week. I’m not sure how to find them on the NFV site or whether everyone has access to that site, so I’m posting the material below. This material can be freely distributed but it may not be used for a commercial purpose or altered in any way, particularly to deface our copyright. If you quote this document, attribute the quotes to Tom Nolle, President, CIMI Corp.
Month: January 2013
It’s Cisco’s Turn at SDN Bat
It’s probably not going to surprise anyone to hear that Cisco is going to be providing more detail on its SDN and NFV framework at its own partner conference in late January, given that Juniper took its shot this week. The two companies have always tried to pee in each other’s yards, stepping on announcements and muddying up waters. Who was first and who was stirring the pot here is impossible to say, and it probably doesn’t matter much anyway.
Cisco’s approach to SDN and NFV has always been rumored to be API-and-orchestration, aiming at defining the properties of the network “black box” by defining its external interfaces. I’ve noted in the past that there are three models of SDN: the virtual-network Nicira model, the purist OpenFlow model, and the “black box” model that focuses not on HOW SDN works but on what it does. It’s been clear for some time that Cisco intended to target the latter model, and that’s a smart play for two reasons.
Reason one is that right now the advantage in the SDN world lies with the black box model, simply because that model can accommodate current equipment and protocols and because it doesn’t depend for functionality on “northbound APIs” from OpenFlow controllers that lead (at present) to nothing in particular. The higher layers of SDN (the “Topologizer”, “SDN Central” and “Cloudifier” layers of our model, explained on our YouTube channel and our OhNayItShay website) don’t exist yet. Thus, purist SDN necessarily lags, letting Cisco establish an SDN model it favors.
Reason number two is that competitors have been ineffective in countering Cisco’s early moves, which suggests that Cisco can gain significant competitive advantage by moving SDN forward more aggressively than it otherwise might. Juniper’s “SDN” announcement this week appeared to be a quick cobbling of principles (the “Seven Myths” in Bob Muglia’s blog) and some product concepts drawn from QFabric and its chips, and from the Universal Edge announcement last year. None of the other competitors have really told an SDN story, so Cisco is like a racehorse that broke fast from the gate and found everyone else trotting. You don’t stand around in that situation, you RUN!
Where to? The story I hear is that Cisco is going to frame network services in the network and in the cloud (the service layer, including NFV) in a single set of APIs, which Light Reading and Network World are already reporting as OnePK in a kind of expanded mode. Currently the API set harmonizes development of various Cisco network OSs, and it includes things like route services that are essential to building that higher SDN layer, as well as for NFV support. They supposedly plan to incorporate hooks into OnePK that would provide a more versatile control link to network devices, something needed if you’re going to cloud-host functionality based on NFV or to build a tightly-coupled service layer in the cloud. They’ll also be providing componentization and orchestration of service-layer elements, which is likely what they mean by “slicing”.
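To make the “black box” notion concrete, here’s a minimal sketch (in Python, with entirely hypothetical names; nothing here is drawn from OnePK itself) of what an interface-defines-service northbound API might look like. The caller specifies WHAT it wants in external terms, and the network decides HOW to deliver it:

```python
from dataclasses import dataclass

@dataclass
class ServiceRequest:
    """What the black box exposes: endpoints and properties, not mechanisms."""
    endpoints: tuple        # e.g. ("siteA", "siteB")
    bandwidth_mbps: int     # requested capacity
    max_latency_ms: float   # SLA bound

class BlackBoxNetwork:
    """Hypothetical northbound API: callers define WHAT, the network decides HOW."""
    def __init__(self):
        self._services = {}
        self._next_id = 0

    def request_service(self, req: ServiceRequest) -> int:
        # Internally this could map to OpenFlow, MPLS, or legacy adaptive
        # routing; the caller neither knows nor cares, which is the point.
        self._next_id += 1
        self._services[self._next_id] = req
        return self._next_id

    def service_status(self, service_id: int) -> str:
        # Network-to-cloud status and telemetry would surface through
        # calls like this one.
        return "active" if service_id in self._services else "unknown"

net = BlackBoxNetwork()
sid = net.request_service(ServiceRequest(("siteA", "siteB"), 100, 20.0))
print(sid, net.service_status(sid))   # -> 1 active
```

The appeal of this model is the one noted above: what sits behind request_service can be current gear and protocols today and something more purist tomorrow, without the consumer of the API ever changing.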
How well Cisco fills their black box is important from a competitive perspective. Juniper took its shot already and failed to establish a compelling SDN position, so now Alcatel-Lucent, Ericsson, HP, Huawei, and NSN all get an at-bat once Cisco has laid out its positioning. That means that Cisco has to do enough to fend off rivals who will be able to dissect its weaknesses (or read about them here!) and counterpunch. But no matter what, those SDN layers I’ve been talking about all along are table stakes. You have them, you explain them, you link them to the cloud and to the network, or you don’t have SDN. It’s a simple litmus test, but it’s fair and effective IMHO.
I also think that the real issue for networking is NFV, even more than SDN. That’s one reason I think Juniper should have labeled its NFV story for what it really was and made that point positively. You can argue that SDN is a proper subset of NFV, in fact, and in our January Netwatcher tech journal we’re presenting an architecture for NFV hardware aimed at the space where NFV and SDN are most likely to be focused early on (the metro network).
The importance of NFV is that it pulls network functionality out of devices, something that’s reflected in both Cisco’s and Juniper’s focus on software licensing. Obviously adaptive discovery and route control could be examples of that functionality, which is why I say that SDN is a subset of NFV. For Cisco (and Juniper) the question now is whether they should support NFV to allow operators to focus on non-transport/switching functions or support SDN to allow operators to obtain quick network-level benefits. In either case, network vendors lose in the long term because everything that transfers functionality out of the network transfers differentiation and margins along with it. That may be why Juniper says their “SDN” won’t be in place till 2014/2015 and why Cisco may still drag its feet.
Two truths intrude here, though. First, you can stick your head in the sand only as long as everybody does. Otherwise you risk having your body eaten. Alcatel-Lucent, Ericsson, and HP in particular could gain significant competitive advantage by playing SDN and NFV correctly. Second, every vendor had a chance to build “network functions” that were incremental to current services rather than displacing them. Operators wanted revenue-driven transformation five years ago, and consistently told me that they were frustrated by a lack of vendor support. Well, vendors, you should have supported them. If you had, you’d be fighting over who gets the lion’s share of a growing pie and not the hyena’s share of the carcass.
Is the QFabric Chip the Root of Juniper’s “Service Chaining”?
I had noted in my blog on Juniper’s SDN announcement yesterday that we had covered the Juniper QFabric launch in early 2011 and had written an article on QFabric, the PTX, and their potential value in the cloud. The article included comments on the potential for “service chains” similar to that offered in Juniper’s SDN presentation. Readers of our premium blog were given a copy of that issue, and I was asked by some readers of this public blog if they could have at least a copy of that article. I attach it below. This can be distributed freely but may not be altered in any way or used in any commercial activity without our express written consent.
This is not intended to imply that I believe I had any role in “inventing” the service chain concept. The comments I made came out of questions I asked Pradeep Sindhu, Juniper’s CTO, at the analyst briefing before the public announcement. It was my view that service chaining was intended as a QFabric feature because it was a feature of the basic ASICs. I believe that the semiconductor presentation at the Juniper Partner Event yesterday suggested that was true as well. My goal is simply to say that this concept is an asset that could have been developed years ago, and had it been it might have led the market’s conception of both NFV (which is where service chaining really fits now) and SDN. Thus, this piece speaks to the value of positioning, something I’ve been preaching to all network companies.
First Glance at Facebook, Second at Juniper
Facebook’s new search concept is going to be seen as a threat to Google, and it is. It’s also a threat to Facebook and it may even be a threat to the whole of online advertising. Revolutions have a way of turning from change to destruction, and while that’s certainly not intended here, it is a very possible outcome.
The essence of Facebook’s Graph Search is that it looks through social network information and not web pages. The idea is that recommendations from friends, negative or positive, would manifest themselves as product or product class mentions and could then be translated by natural language query processing into results. That Facebook will kick the problem to Bing under conditions where it can’t yield a reliable result is less a problem for Google than the shift from web-forward to social-forward data as the core of product/service recommendations.
The only thing is that in the product review area, 1) Facebook users haven’t been posting reviews in most cases and 2) retail sites like Amazon already have reviews associated with nearly every popular product. So arguably Google has little new risk here.
The flip side is that Facebook may have little upside. Unless they can extract a lot of utility from their social results, Facebook is just creating a front for Bing and clearly they’ll have to revenue-share with Microsoft. How many advertisers might look at the Bing relationship as a cheaper and easier “in” for social advertising? Facebook could lose as much as it gains.
It also appears that the early Graph Search isn’t fully mobile-integrated, and that’s troubling because you have to wonder why Facebook would not have recognized from day one that mobile users behave differently online, and need different forms of advertising support to make decisions. If you’ve been supplying mobile apps, how could you not have realized that you need to identify these posts and treat them differently in analysis? And despite what analysts on the Street think, LBS recommendations first and foremost rely on accurate location data and a complete base of “local” options. Will Facebook have that?
By making a major launch out of what, from a stock perspective in particular, is a bit of a yawn, Facebook may be signaling that it really has no revolutionary ideas on how to monetize itself. Given that reports yesterday indicated it had lost 1.4 million active users, the bloom may be off the Facebook rose in more ways than one.
I also want to offer a follow-up on my blog on Juniper’s SDN announcement yesterday. My main problem was that what Juniper announced was Network Functions Virtualization (NFV) and not SDN, and that’s particularly interesting given that they picked a partner event where strategic initiatives like SDN or NFV are unlikely to be resonant. Reports from the show indicated that many of the attendees couldn’t follow the story at all. There is in fact an NFV kick-off this week in France; why not aim the pitch at that event given it’s an NFV pitch no matter how you title it? Could it be that Juniper believes some competitor is about to jump out and grab the SDN spotlight, somebody like Cisco? It could be, and it would explain why the focus and the forum of the partner event were selected.
I also listened to a later session of the partner conference, on Juniper’s semiconductor evolution. When the new chip family was first presented at the QFabric launch in early 2011, I saw the potential for the chip to be used to link SERVICES into a delivery chain, and asked a question of the Juniper CTO on that very topic. He agreed that the potential was there, and I wrote all that up in March. Juniper’s CTO also gave us a kind of slogan for the future of networking: “Centralize what you can, distribute what you must!” So you can sense my frustration here: we have had for two years the ingredients of yesterday’s story. We’ve had what might easily be considered the launching slogan of both SDN and NFV. Why after TWO YEARS are we not getting an effective presentation of a product? In fact, the chip story and the “SDN” story were in different presentations. Worse, the “SDN” story evolved from the Universal Edge preso of last October, which at the time had no clear SDN link at all. But then neither did yesterday’s pitch.
Earth to Juniper. Software is software, not boxes. If you want to have a software story, to be a software company, then you have to start by creating a software architecture and tracing its support roots down to the box level. The stuff can’t be presented as ships in the night. This is a simple articulation problem, Juniper. You have the pieces and you’ve had them longer than anybody. You’ve even had the critical insights, as CTO Sindhu proved with the cloud, and with SDN/NFV principles. Why can’t you learn to sing?
Juniper’s SDN: Really it’s NFV!
Juniper just completed their SDN event at their partner conference, and as is often the case they haven’t made things easy for those who, like me, are charged with analyzing the result. But we soldier on, and so let me jump right in.
At the high level, Juniper’s SDN strategy is really a Network Functions Virtualization strategy. For those who have followed my commentary on SDN and NFV through this blog, you may recall that I’ve suggested that SDN was effectively a subset of NFV, focusing on virtualizing and hosting the adaptive behavior of forwarding devices. You could do SDN without doing NFV and vice versa, something the NFV white paper suggested as well.
In the Juniper approach, SDN is essentially redefined as targeting the centralization of service-related functions rather than the adaptive control-plane functions. There are two ways that you could understand this.
First, in March of 2011 I published an issue of our technology journal Netwatcher wherein I described the QFabric launch and explained some of the things that I was told QFabric would do. One of those things was to link services along a data path between source and destination within the fabric. Juniper’s current SDN concept of service component chaining is identical to that, except that it’s not specific to QFabric. If you’re a subscriber to Netwatcher and have that issue, read it as a starting point.
Second, if you go back to Juniper’s launch last fall, where they defined a new card that would host service features, you may recall that I commented at the time that the initiative seemed to be at odds with the already-defined carrier notion of NFV. What Juniper has done is to allow those service features to be hosted outside the special card on a standard server.
Both of these things end up in the same place, which is that for Juniper “Software Defined Networking” is about virtualizing “layer 7” service components and making them orchestrable as part of a service flow from packet source to packet destination. That’s what makes this hard to analyze, because on the one hand it’s a useful thing and on the other it’s not SDN.
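To illustrate what “orchestrable as part of a service flow” means, here’s a minimal sketch of my own (it assumes nothing about Juniper’s actual implementation): a service chain is just an ordered list of hosted functions that every packet traverses between source and destination:

```python
# A toy service chain: each "virtual function" is a callable that
# inspects or transforms a packet on its way through the chain.
def firewall(packet):
    if packet.get("port") == 23:          # toy rule: drop telnet
        return None
    return packet

def nat(packet):
    packet["src"] = "203.0.113.1"         # rewrite the source address
    return packet

def monitor(packet):
    print("observed:", packet)            # passive inspection stage
    return packet

SERVICE_CHAIN = [firewall, nat, monitor]  # the orchestrated order matters

def forward(packet, chain=SERVICE_CHAIN):
    """Thread a packet through the chain; any stage may drop it."""
    for func in chain:
        packet = func(packet)
        if packet is None:
            return None                   # dropped mid-chain
    return packet

print(forward({"src": "10.0.0.5", "dst": "192.0.2.9", "port": 80}))
print(forward({"src": "10.0.0.5", "dst": "192.0.2.9", "port": 23}))  # dropped
```

The orchestration question is then where each function is hosted and how traffic is steered between those hosts, which is exactly the part a fabric (or its ASICs) could accelerate.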
NFV is something operators are serious about, something that creates the framework for the service layer of the future. Juniper is articulating an NFV strategy and that’s a first. Their collateral material on JunosV is explicit about the relationship between the concept and NFV goals, and even though Juniper didn’t get mentioned by operators in our survey (perhaps because the NFV external hosting connection wasn’t in the fall announcement; it wasn’t in my deck) they have now offered a useful NFV model.
The thing that separates it from being an SDN model is that “layer 7” reference made by Bob Muglia in his presentation. The real goal of SDN, what I’ve called “purist” SDN, is to centralize the control function. That would explicitly include the replacement of adaptive discovery with central control. Google’s SDN application is a poster child for that. Juniper isn’t talking about that at all, and in fact they are implying that the control processes remain largely distributed.
Years ago, Juniper’s CTO articulated what might have been the anticipatory form of the SDN principle: Centralize what you can, distribute what you must. I have to wonder why Juniper didn’t take their own advice here. With a few words, they could have created a complete NFV and SDN position, the first public example of that combination in the market. You have to wonder if they didn’t because they were defending their router proposition.
Cisco isn’t happy about purist-model SDN either, but they appear to be devising an alternative that focuses on northbound API controls, letting cloud applications communicate their requirements to the network and letting the network report status and telemetry back to the cloud. This distributed model of SDN isn’t “purist SDN” either, but you can argue that it’s SDN because it meets the black-box test; it looks like an SDN at the external interface. If that’s a part of Juniper’s SDN story they didn’t express it. They also didn’t elaborate on the value that their Contrail buy brings. That will come in the 2014/2015 timeframe when Juniper indicated their SDN approach would be mature.
It may be that in 2015 Juniper will have an SDN approach that would pass the only objective tests I can use to define what an SDN is—either conformance with purist SDN models or support for the cloud-black-box interface-defines-service model. Right now they aren’t providing any proof points for either of those things.
The thing is, their NFV concept is potentially very good. We’re going to present an architecture for the network device of the future in our January Netwatcher, and it wouldn’t be hard for me to make the Juniper story supportive of that model. In fact, they are arguably on the way to the right answer, which we’ve defined as their optimum path. If they’d said this was their NFV approach I could have given them a solid thumbs-up for the story. But they didn’t, and they didn’t establish its credibility as an SDN story.
The Times They Are a-Changin’
Some of you may know I’m a fan of the folk music of the ‘60s, and one of my favorites is “The Times They Are a-Changin’” (Dylan and also Peter, Paul, and Mary). We’ve got some proof of that with news today.
Most people in tech today don’t remember a time when the personal computer wasn’t king of consumer technology, but it’s not that any longer and it won’t be that again. It’s not just a matter of having been slighted at CES; PC sales are consistently trending off (for everyone but Apple) and Windows 8 has done nothing to improve the situation. Now there’s a rumor that Dell may be looking to go private, meaning to buy back its shares. That’s not something a company that thinks its stock is appreciated by the masses is likely to be doing.
The core problem here isn’t that everyone is abandoning the PC for the tablet or smartphone but that many who are now looking at tablets and smartphones never needed PCs to begin with. The “personal computer” is a COMPUTER, meaning that it’s designed to support activities that need local processing. For the last decade, most of the usage of PCs hasn’t been computing at all, but web-related activity. Once we had appliances better-suited to that mission, people moved to them and that starved the growth engine for PCs.
That’s not the end of the problems, though. If you add always-on broadband (WiFi or cellular) to tablets, you have a framework to substitute the cloud for local processing and storage. This isn’t going to eliminate the PC completely (I know it won’t eliminate my own use because I have too much processing that I’m unwilling or unable to cede to the cloud) but it’s going to further reduce the drivers for PC growth. There was a time five years ago when everything you did in tech demanded a PC. Today, about a third of users don’t have to use a PC at all, and within three years that number will likely grow to half of all users. Even with the user base growing in developing countries, we’re not going to see a return to the glory days of PCs.
Microsoft may or may not know that. The story is that Windows RT is a bust in the pure tablet space. Windows 8 isn’t making anyone very happy (I use it on a laptop and it’s more trouble than it’s worth). Windows Phone isn’t setting records, though Lumia may be doing OK for Nokia. The thing that’s interesting to me is that none of this really matters because all of it would represent classic “shoot-behind-the-duck” behavior. Even if Microsoft “won” in tablets, the market would be so commoditized by the time it could claim victory that Microsoft’s win would be a loss.
We have a similar situation in networking. Later today, Juniper is going to be talking about its SDN strategy, something I’ll be blogging on when the event has finished. For now, it’s interesting that CRN has taken Juniper to task on a number of things, ranging from the rate of executive departures to issues with its data center and security strategies. The summation is a question: “Growing pains or something more?” So what is it?
“Shrinking pains” is what it is, and the problem isn’t unique to Juniper. I’ve said this before but it bears repeating. The network market is shrinking just like the PC market. There are no good spaces left in network equipment any more, and there never will be. There are some spaces better than others, some where vendors can continue to drive revenue and modest profits. None of them will restore companies to the glory days because those days are (get this, now!) G..O..N..E!
Cisco and Juniper are the archetypical rival vendors, and they both face the same elephant. I’d argue that both companies started off in the “denial” phase; they thought of things that would force operators and enterprises to spend more even though the ROI wasn’t there. For Cisco, it was the famous “traffic-is-growing-so-suck-it-up-and-buy-routers” approach. For Juniper it’s been their wearying TCO diatribe. But Cisco found its path to truth and says it’s going to be an IT company. What is Juniper going to be? When the pond is drying up, fish become amphibians. Or food.
That’s why the Juniper SDN talk today is so important. Juniper’s recent announcements have been failures in an objective sense; they didn’t increase the company’s influence and didn’t position them for future success. Juniper can’t be an IT company, so it has to be a cloud network company. That means it has to have the Mother of all SDN Positions, something that’s way ahead of the proverbial duck. That means no mindless defense of routers, no stultifying TCO discussions. It means true insightful SDN positioning, and Juniper has the collateral to make that happen, even to lead in the space. Whether Juniper has the will is something we’ll know by the afternoon, and I’ll do a rare second blog today to offer my view. This is important to the market and critical to Juniper. It may even be their last chance.
On SDN, it’s RADspeak Today, Juniper Tomorrow
RAD isn’t exactly a household word in SDN, but their CTO did a blog on SDN that raises some interesting points, particularly given that another vendor (Juniper) is scheduled to do an SDN announcement tomorrow.
The first point in the RAD blog is that “SDN returns to centralized control as compared to a distributed control plane”. This is important because it captures the distinction between the “strict-construction” model of SDN and the “loose construction” favored by the router vendors. Forget the idea of “separating” control and data plane; that’s shorthand for saying that the network continues to forward based on adaptive discovery and not central control. I’m not opposed to a loose-construction vision of SDN, but I am opposed to dodging the differences.
Which the blog does a nice job of identifying. Strict-construction, centrally controlled SDN has a traffic and topology management process that collects information about the network and makes global decisions on routing, which are then communicated (in some way, OpenFlow for example) to the devices that actually handle the traffic. This makes the process of forwarding much more cohesive (“consistent” in the blog’s terms). It also creates the potential for a single point of failure, which is the tip of the iceberg of the second point.
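Here’s a minimal sketch of that strict-construction pattern (my own illustration, not RAD’s): one central process holds the whole topology, computes global paths, and emits the per-device forwarding entries that an OpenFlow-like protocol would push down:

```python
from collections import deque

# The controller's global view: node -> neighbors
TOPOLOGY = {
    "A": ["B", "C"],
    "B": ["A", "D"],
    "C": ["A", "D"],
    "D": ["B", "C"],
}

def shortest_path(src, dst):
    """Global path computation; the controller sees everything at once."""
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in TOPOLOGY[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

def forwarding_entries(src, dst):
    """Translate the global decision into per-device entries, the sort
    of thing OpenFlow would install on each switch."""
    path = shortest_path(src, dst)
    return [(node, f"match dst={dst}", f"forward to {nxt}")
            for node, nxt in zip(path, path[1:])]

for entry in forwarding_entries("A", "D"):
    print(entry)
# ('A', 'match dst=D', 'forward to B')
# ('B', 'match dst=D', 'forward to D')
```

Everything hinges on the one process that holds TOPOLOGY and runs the computation: that’s the single point of failure just mentioned.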
Why only the tip? I believe that SDN is justified by the cloud, and one of the cloud’s primary requirements is that new resources (servers) be able to “boot from bare metal”, meaning to commission themselves from a hardware install through OS and middleware to becoming a full member of the resource pool. My view is that SDN has to do the same, meaning that we have to be able to install a device and have it “boot from bare metal” to become part of the network. We also have to be able to recommission the network after a failure, but the fact is that in SDN the two are related. OpenFlow is the control protocol to set forwarding, right? OK, then, how do we deliver OpenFlow control packets without the forwarding entries that OpenFlow sets up? How complicated will it be to restructure connectivity around a failure when you need to be connected to every device that’s to receive a forwarding-table change through the entire process? How does a node that’s isolated become “un-isolated?”
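Those questions reduce to a chicken-and-egg that’s easy to show in miniature (a toy model of mine, not anything from the OpenFlow spec): the controller can only install flows through switches that already forward, and forwarding state is itself what the controller installs:

```python
# Toy model of the SDN bootstrap paradox. The controller sits at "A";
# a switch can receive control packets only via neighbors that already
# hold forwarding entries (or on a directly attached link).
LINKS = {"A": ["B"], "B": ["A", "C"], "C": ["B"]}
has_flows = {"A": True, "B": False, "C": False}

def reachable_from_controller(target, node="A", seen=None):
    """Control packets can only transit switches that already forward."""
    seen = seen or {node}
    if node == target:
        return True
    if not has_flows[node]:
        return False
    return any(reachable_from_controller(target, n, seen | {n})
               for n in LINKS[node] if n not in seen)

def install_flows(switch):
    if not reachable_from_controller(switch):
        raise RuntimeError(f"{switch} is isolated: no flows lead to it yet")
    has_flows[switch] = True

install_flows("B")   # works: B is directly adjacent to the controller
install_flows("C")   # works only BECAUSE B was commissioned first
```

Run the two install calls in the opposite order and C raises an error; in a real network, a failure that wipes B’s state strands C all over again.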
The router guys have/will say that they have the answer, which is to “separate” the control plane but keep it in the device. That does in fact fix the problem of booting from bare metal since every device discovers connectivity as before, but it doesn’t move the ball much in the way of network complexity or cost since it’s really not changing anything. So these loose-construction players have to be more articulate about just what value they do bring to the table. You can’t claim central-control value without doing something centrally, and you have to explain not only how that works but also how it’s different from today’s routing or switching.
Then there’s the issue of network functions virtualization, the operator effort to host more network functionality on servers. Loose-construction SDN would seem to be taking a step away from complete NFV because it doesn’t dispense with local adaptive discovery processes. SDN’s central control of routing seems conformant to NFV principles. But what would NFV mean to SDN? If you presume that an SDN switch is a forwarding engine under central control, what is an NFV appliance? I’m looking at that issue in detail in our January issue of Netwatcher, but for now let me say that we have to think beyond OpenFlow and beyond forwarding to get to NFV. And I contend that we have to get to NFV because 1) the buyer wants to and 2) SOMEBODY will listen and give them what they want, forcing others to conform.
This sets the stage for Juniper’s SDN session at its partner event tomorrow, framed by Bob Muglia’s “Seven Myths of SDN”. You can read a lot of defense-of-routing-business into these points, of course, but the fact is that there is an element of myth in each of these areas. We do have to think of SDN in a broader way than a cloud data center, we do need hardware to move packets, we do need operations and performance benefits in addition to capex benefits, centralization isn’t the only answer nor is OpenFlow, and getting to SDN is complicated but not insurmountable. The challenge for Juniper will be to make real, useful points in each of these areas and not just blow marketing smoke. Cisco is clearly going to take a loose-construction view of SDN. Juniper is likely to do the same, and being the second guy to paint “SDN” on an existing box isn’t going to impress anyone (or at least it won’t impress me). So say something useful tomorrow, Juniper. RAD, after all, said something useful today.
Two Nascent Recoveries?
Read the tech-financials these days and you see two maybe-recovery stories. Nokia has been a company in decline for several years, but perhaps no longer. With sales of Lumia handsets much stronger than expected and NSN turning in a profit, Nokia may have the bad times behind them…perhaps. Alcatel-Lucent has seen some bad times too, and a recent move to raise more working capital has given its stock a boost. Now it’s bought a small optical player at a time when the industry seems to be running away from optics. Maybe it knows something…or not.
I think Nokia’s Lumia success is still more to be proved than to be banked. The operators would like to see handset options beyond Android and iOS, and in particular they’d like to see some low-end devices. Any time a new gadget comes out there’s a period of infatuation, and Lumia and Windows Phone are in that period. Can they get out of the transition/fad phase and into a real market?
The biggest risk may have nothing to do with phones. Windows 8 is an attempt by Microsoft to create a tablet-laptop-phone ecosystem that’s built primarily around a “tiles” interface. The problem it has is that the laptop side of the picture is artificial; Windows 8 looks like Windows 7 with a new boot-up GUI that nobody likes. You could make it workable with a touchscreen laptop, but that would raise the price and validate the tablet model. And a laptop with a touchscreen also eats into the Windows tablet’s differentiator of a neat keyboard. So the ecosystem Microsoft wants may be tough to promote, and if it fails it might well drag Phone along with it, and Lumia along with Phone.
On the NSN front we have a similar tension between near- and long-term. NSN has cut costs and dropped out of a number of marginal markets. But you can’t build a company by cutting costs; it vanishes to a point along with your business if you try to keep it up. NSN is focusing on mobile, but mobile is RAN plus IMS and IMS is going to the cloud (via NFV and other forces). Will all mobile software follow, and become a simple cloud app? If so, how does NSN differentiate itself? RAN alone doesn’t make you a network vendor; it makes you a radio vendor.
With Alcatel-Lucent, the devil here is going to be in the details. Their Capella deal could signal they’re interested in metro optics on a large scale (very smart), or agile core optics (semi-smart in the near term and deadly in the long term). Metro optics, I think, is the Golden Ring of the optical space, the area where there’s significant deployment likely and significant new architectural needs to help differentiation. In theory, Alcatel-Lucent might have some pretty hot ideas in the space. For example, could agile metro optics help walled-garden IPTV? Could it be a differentiator in mobile backhaul? Yes and Yes! In the core, though, optics are buried in an IP envelope that consigns the core optical space to being the cheapest possible core connect technology. That’s not a business I’d buy into, even on the cheap. So where is Alcatel-Lucent going?
They COULD be thinking of a new metro. They are surely thinking of cloud-hosting service logic; they are the only vendor with name recognition in NFV and they’re also a long-standing player in leveraging infrastructure with services. Yes, their articulation is bad in that area, but they have assets. The challenge is to utilize them now, when they’ve been fallow for years. To do that they have to reinvent themselves, and they actually have a play to do that.
So we have two recovery stories that are still more stories than recovery. I think both Nokia and Alcatel-Lucent could make a major comeback, continuing their recent momentum, but they’ll need more than transient phone success or a small-play optical buy to do that. Most of all they may need time, a commodity neither has an unlimited quantity of.
All for One…Network-Wise
According to an FT article, EU telcos are mulling over the idea of sharing infrastructure rather than deploying multiple, interconnected, often competitively overbuilt national networks. The move is partially a response to regulatory issues in Europe, where operators believe they are being asked to improve services and lower costs while their own costs are rising. Some see this move as a potential win-win, for operators who’d likely see lower cost and for consumers who might see the same, or at least see competition focusing on things that really matter.
Then there’s Asia-Pac. Australia, a country where regulations have long favored consumers over operators, launched a new broadband network company that took over wireline infrastructure and offered its transport/connection services to all. Whether that’s really doing anyone in Oz any good is something you’d have to ask them; those who talk to me aren’t at all sure.
Does this sound like an industry under pressure? Maybe a lot of pressures at once? Sure does to me. The question is whether any of this consolidating-to-manage-cost would work.
The underlying concept is logical. If five carriers cover the same geography, each of them has to build out network technology at a high cost. For wireless they compete for spectrum, too. But the five operators would be at each other’s throats in competition, driving costs down and features up. Maybe, anyway. The problem with the theory is that all those parallel networks are more expensive, which limits how much cost can be wrung out by competition. You don’t fight each other to sell at a bigger loss, after all.
Regulators like those in the EU and Australia want to make consumers happy (they vote) and also make carriers happy because they contribute to political campaigns (and junkets). The key is to walk a fine line to please consumers but recognize that a dead carrier can’t offer service. Those I survey in Europe and Australia believe that it’s national policy to keep the operators on the ragged edge of dissolution, picking consumerism over potential political contributions from operators. I’m not sure that would work in the US, but in any event the knee-jerk reaction in Europe and here is to say that “competition” is good. In Australia, as I’ve noted, they’ve decided that competition wasn’t working. Maybe the dominant operator was too dominant, maybe the cost of supporting a largely unpopulated country is too high…it’s hard to say now because we can’t run a controlled experiment. But the point is that we have two very different perspectives in a regulatory sense, and now we have operators suggesting maybe a third option. Competition “higher” in services and cooperation below.
The FCC here believes that the solution to the problem is to drive down roaming costs, allowing operators to decide whether it’s too expensive to build out in a given area and sharing via roaming where it is. The problem with that is that it doesn’t stop over-building, and lower roaming fees might create an “SVNO”, or “semi-virtual network operator”, who builds out just enough to share in roaming agreements and no more. This kind of operator would then drive down prices and undermine ROI for the others.
There’s another dimension to this, though. Operators don’t suggest this sort of thing unless they feel their back is to the wall, financially. Bond raters don’t like telcos in Europe any more, or at least not as much. We have clear signals of pressure, and that means trouble for the equipment vendors. Why anyone thought otherwise is a mystery to me; even Cisco’s drive to connect every watch and toilet to the Internet to drive up traffic could work only if unit bandwidth cost fell even further. Operators won’t pay more to connect more things unless customers pay them, and we all know how far getting people to pay for online watches and toilets will go!
This operator push, combined with network functions virtualization, demonstrates that the operators are going to drive cost out of the network some way or another. If you’re a network vendor you need to be thinking about what that means. For Cisco, the notion that the company will be forced to become an IT player is smart because they can’t hope to gain market share from where they are. For all the rest, there’s a chance that grabbing share by supporting a new vision of networking would offset the lower prices. That tells me that these other-vendor guys need to be looking very carefully at how the network of the future, driven by cost now as much as by value, might be different. And how they could capitalize on it.
But let’s not forget an important point. Every single network vendor out there has themselves to blame for the cost pressure today. For FIVE YEARS operators tried to get vendors to cooperate in a service-layer transformation of their business model that would have made the operators less vulnerable to unit-bandwidth-cost trends. For five years, vendors blew them off. You reap what you sow.
Can the Content Cloud Learn from Videoscape?
Cisco’s Videoscape has been literally everything, both in a positioning sense and in its mission and component count. Operators griped about the fuzziness of the concept at the first announcement, but there are some signs that Cisco is starting to pull things together, and that could be significant for Cisco, as well as for a cloud-content story for the market.
The big problem with content projects has been the same problem that’s dogged Videoscape, in my view. Operators have just stuck too many things inside the “Content” project bin, and as a result they’ve totally failed to establish any holistic vision for it. The number of functional components in an operator content project has hovered in the 20-ish range, which is way too many components given that the most comprehensive commercial strategy couldn’t hit more than 70% of those boxes.
Cisco, who’s more often than not chasing sales rainbows with its strategies, opened up the nozzle on the Videoscape sprayer and blasted away at everything in sight, which resulted in Videoscape having no real architecture behind it. The latest Cisco video Unity strategy is attempting to fix that, but what’s hard to see is whether it’s a band-aid or some real suturing at the fundamentals level.
Unity appears to be a front-end that harmonizes video services from the buyers’ side, collecting disparate Cisco stuff into a common presentation framework. That’s a useful step, but it’s not a video architecture any more than a paint job is a new car. There is still an issue of an underlying architecture for content delivery, something that creates a true content cloud and not just the appearance of one.
One reason for the market confusion is the fact that we really have multiple video problems lumped into the content category. Users want channelized network TV in the usual way, delivered to their sets. Most of them, according to research I’ve seen and believe is accurate, want to supplement that with PC and increasingly tablet viewing, either in-home in parallel with TV (watching another program) or out-of-sync because they’re on the move. This is what’s taking things in a kind of network-DVR-ish direction. If TV Everywhere is flexible network DVR, then the user sets “record” and plays back on good old multi-screen.
But this is still a mission or a requirement and not an architecture. If we’re moving to cloud video, and of course to cloud-everything under operator initiatives like Network Functions Virtualization, then we need to be thinking about what kind of cloud we’re talking about. Is a content cloud or an NFV cloud a general IaaS cloud? Why would it be, given that it’s really not multi-tenant and it likely needs a lot of horizontal integration and composition? Cisco’s Unity proves that you need to draw on multiple cloud sources to create a cloud-content GUI. That’s not ships-in-the-night multi-tenant applications in action; it’s more like SOA.
Which is I think the key point. Is the service providers’ “service cloud of the future” really more like SOA, or at least like a more lightweight kind of hypervisor virtualization than we’re used to thinking about? Are “virtual networks” for carrier services going to have to include the ability to cross application boundaries to integrate experiences? If they can’t do that, then we’d need to create a complete cloud architecture for content and set its boundaries in a virtual network sense, confident that because we know it’s complete we know we don’t need to expand those boundaries. Does that sound like where we are? No way.
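To picture the difference between ships-in-the-night tenancy and SOA-style composition, here’s a toy sketch of my own (not a model of Unity’s internals): one user experience is assembled by crossing the boundaries of several separately hosted components, which pure application isolation would forbid:

```python
# Each function stands in for a separately hosted cloud component.
def epg_service(user):
    return ["Channel 4 News at 6", "Movie of the Week"]

def dvr_service(user):
    return ["Recorded: Episode 12"]

def recommendation_service(user):
    return ["Because you watched Episode 11..."]

def compose_home_screen(user):
    """SOA-style composition: one experience drawn from several
    components. Ships-in-the-night multi-tenancy would keep these
    behind separate walls and forbid exactly this integration."""
    return {
        "guide": epg_service(user),
        "library": dvr_service(user),
        "suggested": recommendation_service(user),
    }

print(compose_home_screen("alice"))
```

If carrier “virtual networks” can’t support this kind of boundary-crossing, every such experience has to live inside one pre-drawn boundary, which is the completeness problem just described.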
I think Cisco is showing us something, though. Videoscape is a work in progress, but it’s progressing. Tactical sales stimulation isn’t an efficient way to build an architectural model, but it may be a suitable way of dealing with a market that can’t make up its mind. Cisco could be building a pretty façade on Videoscape, but that doesn’t mean it WON’T do something behind the painted sets. Maybe even something useful.