Transformation: Failing at the Service Level, Starting in Network Equipment

If you assemble the “little stories” of the last week, looking past the big product/technology announcements, you see the indicators of an industry in transition—or one trying to transition, at least.  To understand what’s happening you have to go back to the US Modified Final Judgment of the ‘80s and look at the carrier data.

When MFJ came about it created a “competitive” telecom business, in that it relaxed regulatory barriers to competition and broke up the basic US telecom monopoly.  That was followed (in 1996) by the Telecom Act, and it was mirrored in other countries.  By the end of the ‘90s we had largely “privatized” or “deregulated” telecom.  But getting back to the US and the MFJ, we saw immediate impact in the long-distance space, where barriers to market entry were being lowered by technology improvements.  Competitors focused on long-distance because there was more profit there, and less capital investment.  Local exchange never really got that competitive (most CLECs died) because there’s simply not enough ROI.

But even long-distance suffered.  As the ‘90s progressed you could chart the cost/price curves for telecom services and see that by the end of the decade they’d cross.  And they did, and the result was the acquisition of the long-distance players by the local exchange players.  We’re seeing those same cost/price convergences today in telecom overall.  The unlimited-usage pricing model has created bandwidth-exploiters because it’s lowered the bar for market entry, but there’s no interest in bandwidth-creators because the margins are already too low, and getting lower.  Mobile is in a better place than wireline, but my own surveys say that mobile ARPU will peak in late 2013 to early 2014, and then decline.

It’s this world that SDN and NFV and the cloud have to address, or at least are expected to address.  Operators have flocked to the cloud, one of the primary transformational opportunities they see, because it’s a new market—and “total addressable market” or TAM is everyone’s touchstone.  The challenge with the cloud, though, is to break into new positive-ROI territory, and IaaS cloud services are low-margin because they’re a cost play for the buyer.  There are a lot of things you can do to elevate cloud ROI, most notably moving up to SaaS services or hosting your own service elements, and NFV is a big part of making that happen.

But underneath there’s always the network.  Say we have an OTT guy and a telco offering the same service set.  The telco has to deliver both services to the user, and the delivery process is at best marginally profitable.  The OTT guy, free of that burden of low ROI, can offer his service at a lower price because there’s no delivery millstone around the OTT neck.  So in order for telcos to compete, they not only have to be as agile at the service level as an OTT, they have to be profitable enough at the network level that their network won’t force them to price themselves out of the service market.

The early focus of NFV is cost management, and that focus comes about because operators perceive that vendors have been pushing the notion that somehow the operator has a divine mandate to transport bits under any terms necessary to boost network equipment consumption.  We’ve all seen Cisco releases about the explosion in traffic, the clear implication being that it’s high time for operators to step up and start investing even if ROI is negative.  Take one for the team.  You can see how the operators might view this as opportunistic and manipulative, not to mention financially unreasonable.  So they counter with a strategy to unload network cost by commoditizing network equipment.

Which will happen, one way or the other.  Open source is increasingly on the operator radar, and one of the things that NFV may bring about is a renaissance of operator interest in the space.  We already have some open-source tools being designed to be cloud-hosted, cloud-operationalized, and NFV-deployed.  Some vendors are also doing that, though carefully avoiding their own market sweet spots.  Huawei and ZTE, who enjoy a price advantage in most deals, are winning and growing even as traditional competitors like Alcatel-Lucent and Ericsson and NSN struggle to find the right story and product mix.  Operators are also rethinking their whole architecture to focus more on transport and less on electrical-layer networking, because it makes sense to flatten layers to reduce both capital and operations costs.

We could have changed this picture perhaps five years ago with enlightened regulatory policy.  Providing for settlement, even if it were limited to QoS-specific services, would have provided some run room for current network design as vendors developed their own strategies to face the future.  It’s now too late for that; we are going to have a network transformation because we’ve refused to support or sanction a transformation at the network operators’ level.  So now it’s time to address that.

SDN is a tool, but it can’t be the solution because it’s limited to the network, or to supporting something above the network.  Offering SDN services is either a surrender to a cost-based model of the future (which will ultimately fail, like it did for the long-distance carriers in the US) or an actual acceleration of commoditization by reinforcing operator dependence on cheap bits as their revenue bastion.  NFV is the better tool, and it’s what both operators and vendors should be watching.  If there is a solution to the problem of the network, a strategy that leaves the largest number of both equipment buyers and sellers alive, then it’s going to come out of NFV.  I think there’s no question of that, only a question of whether NFV as a process can produce the outcome everyone needs before the casualty rate gets too high.

Is “Evrevolution” a Word?

True revolutions are pretty rare, especially these days when hype seems to get a decade or more out in front of reality, tapping off all the interest before there’s even anything to buy or deploy.  Evolution is something that you generally don’t have to worry (or write) about; timelines are long enough to allow for orderly accommodation by normal business means.  Sometimes it’s the stuff in between (an “evrevolution?”) that can be really interesting, and we have a couple of in-betweens today in the SDN and NFV space.

In the SDN space, I’ve been saying that there are two models or layers emerging—the “shallow” SDN that supports tunnel-overlay software-based virtual networks and the “deep” SDN that actually involves network devices and (perhaps) new protocols like OpenFlow.  One of the things that I think is a key differentiating point for SDN is end-to-end support, meaning the ability to extend an SDN from the classical data center starting-point to an arbitrary user access point.  For shallow SDN, this presents a challenge because the access point would have to be equipped with a software agent to terminate the tunnels and support the protocols used.  For deep SDN, the challenge is that the wider the scope of an SDN domain the less experience we have with creating and managing one.

What now seems to be emerging, according to both network operators and some larger enterprises, is a notion of “long-tailed” SDN.  With long-tailed SDN, you adopt deep-SDN principles in the data center or perhaps across multiple data centers, and you may even overlay this deep SDN with shallow-SDN-based virtual partitioning of applications/users.  But instead of pushing either the deep or shallow SDN models out to branch offices, for example, you rely on policy management based on information about applications you’ve gathered at the data center.  You apply policies in the “tail” connections of SDN to extend your knowledge of traffic and user needs without extending SDNs physically and facing some of the challenges I’ve already noted.  In the enterprise space, long-tailed SDN seems to have some real potential, but the challenge is creating some real utility.  Absent some means of managing QoS in the WAN services used, enterprises can’t do much more with a long-tailed SDN concept than they could do with application performance management or acceleration.  Which, I think, is where long-tailed SDN policy management marries with VPN services.
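
To make the idea concrete, here is a minimal sketch (in Python, and purely illustrative: the application names and policy fields are my own inventions, not anyone's product) of how application knowledge gathered in the data center could be exported as simple policies for ordinary "tail" devices to enforce:

```python
# A minimal sketch of the "long-tailed SDN" idea: application knowledge is
# gathered where deep SDN lives (the data center) and exported as simple
# per-application policies that ordinary edge/"tail" devices can enforce.
# All application names and policy fields here are hypothetical.

from dataclasses import dataclass

@dataclass
class AppProfile:
    name: str             # application identified in the data center
    dscp: int             # QoS marking the tail should apply
    max_rate_mbps: float  # ceiling enforced at the branch access link

# Knowledge collected by the deep-SDN controller in the data center
observed_apps = [
    AppProfile("order-entry", dscp=46, max_rate_mbps=10.0),
    AppProfile("bulk-backup", dscp=8,  max_rate_mbps=100.0),
]

def tail_policies(apps, branch_link_mbps):
    """Translate data-center application knowledge into tail-connection
    policies, scaling rate ceilings to the branch access capacity."""
    policies = []
    for app in apps:
        policies.append({
            "match_app": app.name,
            "set_dscp": app.dscp,
            "police_mbps": min(app.max_rate_mbps, branch_link_mbps),
        })
    return policies

if __name__ == "__main__":
    for p in tail_policies(observed_apps, branch_link_mbps=50.0):
        print(p)
```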

The other potentially interesting development is in the NFV space, where we have news that Akamai will be pushing the technology it acquired with Verivue last year to go after cable companies.  Now one might wonder what this has to do with NFV, and the answer is that Verivue was the most cloud-ready of all of the CDN technologies, which makes it the most NFV-ready.  And guess what?  CDN was one of the target areas identified by the NFV white paper last year.

What I’m hearing is that there is a linkage, at least, between cable-CDN needs and federation, and between federation and WiFi roaming as a mobile broadband competitive strategy.  MSOs generally don’t want to deploy their own cellular services but they do want to exploit their content to multiple devices.  That leaves them with a hole: how do you exploit mobile content when you don’t have 3G/4G services?  WiFi is the solution.  But for most cable companies, WiFi deployment in a federated relationship with other cable companies is a better approach than spreading hot spots like Johnny Appleseed reputedly spread trees.  However, to do this, you have to not only federate access, you have to federate CDNs.  The reason is that you don’t want to deliver content traffic from your home area to a customer on another cable company’s hot spot when that other cable company also has the content cached—perhaps a mile or less from your customer.
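
Here is a tiny illustrative sketch of the federation decision described above: serve the roaming user from the partner's nearby cache when it holds the content, and haul from the home CDN only when it doesn't. The operator and content names are hypothetical:

```python
# A minimal sketch of a federated-CDN decision: serve content from the
# partner cable company's cache near the roaming user rather than hauling
# it from the home operator's CDN. Operator and cache contents are invented
# purely for illustration.

caches = {
    "home-mso":    {"location": "home-metro",    "contents": {"movie-123", "show-456"}},
    "partner-mso": {"location": "visited-metro", "contents": {"movie-123"}},
}

def pick_cache(content_id, visited_operator, home_operator):
    """Prefer the federated partner's cache when the roaming user's
    content is already cached there; fall back to the home CDN."""
    if content_id in caches[visited_operator]["contents"]:
        return visited_operator   # a mile away, not a metro away
    return home_operator          # haul it from home as a last resort

print(pick_cache("movie-123", "partner-mso", "home-mso"))  # -> partner-mso
print(pick_cache("show-456",  "partner-mso", "home-mso"))  # -> home-mso
```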

How stuff gets federated has been a major question, but it’s especially an NFV question because when you move to virtual function hosting as the basis for services, you have options to “virtually install” a “virtual function” in somebody else’s real infrastructure, and to otherwise share functionality in ways that would be impossible if your services were based on hard-wired devices.  So the question here, given the explicit link between federation and Akamai’s offering to MSOs, is whether Akamai has some ideas.  Might these guys be drivers in the federation area?

The unifying theme here?  Well, we are likely seeing attempts to harmonize standards processes that weren’t launched top-down, as they likely should have been, with market realities that always start at the top, where the money is.  Will SDN or NFV respond?  We’ll see.

SIP and SDN: Perfect Together, Say Sonus and Juniper

We tend to think of SDN as being some modern revolution, but in fact there have been some pretty significant SDN antecedents in play for decades.  One, believe it or not, is in the VoIP space, and now this proto-SDN may be joining with the mainstream SDN wave through a vendor partnership.

One of the issues with session-based voice or video communication is the difficulty in providing QoS.  With pair-wise communication taking millions of possible paths, it’s essentially impossible to pre-position high-quality paths to carry the traffic.  While there are protocols available to reserve resources (RSVP), these aren’t typically supported for user-service missions today because there’s no mechanism for charging and settlement.  When the Session Initiation Protocol (SIP) and related services were launched, the architecture included the notion of intermediate session handling devices—Session Border Controllers or SBCs—to act as authentication elements for session services.  These gadgets created a kind of virtual-network overlay on top of IP, and if you provided quality paths between SBCs you could get traffic to go where it was assured better QoS because the SBCs were anchors that defined specific route points.  You can’t QoS-route a million voice sessions end to end, but you can QoS-route a couple dozen sessions between SBCs.
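
A back-of-the-envelope sketch shows why SBC anchoring makes QoS tractable: an arbitrary pair of users collapses onto one of a handful of pre-engineered SBC-to-SBC paths, so only those paths need QoS treatment. The regions, SBC names, and mappings below are invented for illustration:

```python
# A sketch of why SBC anchoring makes QoS tractable: any caller/callee
# pair maps onto one of a handful of pre-engineered SBC-to-SBC paths, so
# only those paths need QoS treatment. SBC placement and the user-to-SBC
# mapping are purely illustrative.

sbc_for_region = {"east": "sbc-east", "west": "sbc-west", "central": "sbc-central"}

# The only paths that need QoS engineering: one per SBC pair.
engineered_paths = {
    tuple(sorted(pair)): f"qos-tunnel-{'-'.join(sorted(pair))}"
    for pair in [("sbc-east", "sbc-west"),
                 ("sbc-east", "sbc-central"),
                 ("sbc-west", "sbc-central")]
}

def route_session(caller_region, callee_region):
    """Map a caller/callee pair to the engineered inter-SBC path."""
    a, b = sbc_for_region[caller_region], sbc_for_region[callee_region]
    if a == b:
        return f"local switching at {a}"
    return engineered_paths[tuple(sorted((a, b)))]

print(route_session("east", "west"))  # qos-tunnel-sbc-east-sbc-west
print(len(engineered_paths))          # 3 engineered paths instead of millions of pairs
```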

What’s now happening is that Sonus (one of the old-timers in the VoIP space) and Juniper have formed an alliance to marry SIP services and Juniper SDN.  The technical details of the marriage aren’t fully articulated, but what appears to be happening is that SBC-to-Juniper control traffic will allow policy-based management of the traffic flows, particularly for video but also voice and generalized UC/UCC.  Some Sonus SBC-related application elements are also expected to be made available on the JunosV App Engine at some point.

Sonus’ SBC-based services are already compatible with SDN even without the alliance, for the reason I stated above.  If I can create a virtual overlay with a few fixed “router” points I can easily prioritize paths among those points.  What the Juniper deal would do is to make it possible to accommodate the kind of dynamic traffic flows that are likely to emerge when video is added to the UC/UCC picture.  Reserving the capacity to support full video collaboration would be impractical for cost reasons; better to expand and contract traffic support as needed, which is what the deal can do.

Underneath all of this there’s a series of potentially significant developments at the systemic/strategy level.  First, we may be seeing an example of Juniper’s intentions to provide the same kind of policy-based vertical integration between an overlay-virtual-network technology (including SIP/SBC) and the data plane as it offers in its JunosV Contrail example.  That might mean that Juniper could add some value even to non-Contrail overlays.  Second, we may be seeing a way for Juniper to gain some end-to-end benefits for its JunosV Contrail Controller approach: linking higher-layer service policies to the traffic plane from any source from which those policies can be made available.  Third, we may be seeing some specific progress toward defining how “SDN applications” could develop.

UC/UCC is one of those areas that everyone has said was on the verge of taking off, all the way back (so it feels like, anyway) to the Nixon presidency.  It hasn’t.  However, there is no question that enterprises are very interested in a fusion of mobile/behavioral services, cloud computing, and UC/UCC.  The “UCaaS” (pronounced “you-cass” for those who don’t like to spell out acronyms) model is for many network operators a natural leap into the software-as-a-service promise of the cloud.  SaaS, you will recall, will always support the highest profit margins because it displaces the largest costs.  Operators know that, and they also know it’s easier for their sales force to sell UCaaS than to sell CRM or some other business service.  The question has been how to create and differentiate the service.  Linking it to SDN would help in the latter mission, but not in the former.

The devil is in the details, of course, and you can see from all the bold qualifiers in the statements above that we don’t have much of the details yet.  This move could be highly strategic for both companies or simple naked opportunism.  It could even be a precursor to some M&A; Sonus rival Acme Packet was after all just bought by Oracle.  We’ll need to watch this development to see how it evolves, but in particular watch the way that the session infrastructure, the SDN framework, the underlying transport resources, and the UCaaS application layer all work together inside this alliance.  Done well, this could be a very strong play for both parties, given operator interest in UCaaS as a kind of camel’s-nose SaaS strategy.  It could be strong enough, in fact, to make a significant difference in both Sonus’ and Juniper’s business models in 2014.  Whether it will depends on just how much technical thought the two companies are prepared to put into the deal.

What’s Holding Back Carrier Transformation?

European operators face the prospect of a single EU market for telecom services (according to the Commissioner for Digital Affairs, Neelie Kroes), and most have already been having profit issues.  It’s no wonder these operators are vigorously pursuing cost reductions and new services.  It’s also not surprising, given their historical reluctance to be market-makers, that telcos would push cost reduction harder.  But many are looking at new services, notably Telefonica, whose API programs are among the most successful globally.

Why haven’t more followed?  Is it just “fearing competition more than seeking opportunity,” as I’ve suggested telcos do?  Or is it something deeper?  After all, they’re seeing plenty of OTT competition now.

The biggest problem I see for operators is a lack of a service-layer vision that embraces IT and network equipment equally.  Operators have IT people, but they tend to be on the OSS/BSS side rather than on the service side, and so where Google might have a bunch of web architects doing forward-facing projects, the operators’ vision of services is more introverted.

Operators have this problem, by their own reckoning, because of the second issue—lack of support for operator transformation among the big network equipment vendors.  The operators have been critical of vendor support for half-a-decade now, and it’s reached far past the point of disenchantment to the point of active suspicion that vendors are pushing boxes instead of solving problems.

Obstruction is an issue, I think, but I’m not sure that the vendors should get all of the blame.  Problem number three on the hit parade is that operators are relying on traditional service processes to create non-traditional services.  Pose a new-service challenge to operators and they think “Standards!”  There may be standards involved in new services, but you can see a very significant difference between an OTT vision of standards and an operator vision.  OTT/Internet standards tend to be viewed in a kind of “iterative” form, with prototype implementations and real-world testing creating refinements in the process until something is finally agreed on.  In the operator world, you get a long series of discussions that almost never create anything that can be seen and run until the very end—if then.

Standards also create the final issue for operators, which is excessive focus on consensus-building rather than on innovation.  Google or Apple would shy away from stuff that required consensus simply because you don’t get to lead a market while waiting for others to agree with your approach.  Operators are used to trying to develop systemic services rather than innovative, distinctive, branded services.  That means they don’t jump on opportunity, they try to develop it in a shared fashion.

The obvious question here, which is “How can operators fix these problems?” isn’t all that easy to answer.  Clearly simply reversing all these problem areas would be a good answer, but that requires a level of systemic change in operator organizations that seems unlikely to come about.  My view is that there are two critical things that could be done, and that would evolve into the right answers for the future.

The first thing is to focus immediately on the creation of a true service-layer architecture, one that embraces the cloud, SDN, and NFV in one grand vision.  What we have today among operators is a tale of three cities: cloud, SDN, NFV.  The second thing is to start doing software projects and not standards.  A software approach to cloud, SDN, or NFV would be top-down, would focus on functionality rather than interfaces, would presume the jumping-off points of cloud technology and web-service development…it would be a form of test-and-refine that would expose the right issues and offer the right answers by testing them in the real world.

Carrier cloud has advanced very little in two years, even though carrier deployment of the cloud has been impressive.  Why?  Because we don’t know exactly what it is.  Carrier SDN has advanced little, despite the fact that there’s tremendous interest, because we’ve not taken the simple and logical step of presuming that it will target cloud deployments and focus largely in the metro.  NFV gave itself two years to produce results when some of the founding operators had already noted their revenue/cost lines would cross in that timeframe.  This isn’t the time to be looking back, gang.  It’s the time to look ahead, then leap forward.

What HP Missed

Sorry I couldn’t blog the last couple of days; at some points I have to travel/work on a schedule that makes blogging impossible!

This week we had HP’s earnings, and generally the Street liked what they got, which was better than they expected.  I’m sorry, Wall Street, but I wasn’t satisfied.  In tech, especially these days, it’s not enough to say “I’m feeling better!” like the guy on Monty Python, as you’re dragged off with the dead.  You’ve got to get out of the cart, and HP isn’t getting out of the cart yet.

Some of HP’s problems can be traced to the shift from PCs to smart mobile devices.  Yes, it’s true that tablets are on the upswing and PCs (obviously) are on the downturn.  It’s also true that this was a completely predictable consequence of the whole mobile broadband revolution.  If you give somebody a gadget they can use to get online from anywhere, and if they did little with their PCs besides get online, nature will take its course.  It doesn’t matter that you can do things with PCs that you can’t do with tablets; you can do things with supercomputers you can’t do with PCs and we don’t all have supercomputers.

But we do, sort of, in the cloud.  HP managed to ignore the obvious in the PC-to-tablet shift, and they then proceeded to ignore the inevitable when they didn’t ask how a newly equipped community of mobile-broadband users might change how information was used.  We all shape our tasks around our tools.  Having a hoe doesn’t necessarily create an itch to garden, but it does formulate how you’ll address some garden tasks.  We gave users new hoes, and now they are (surprise!) hoeing.  As they do, they are changing the way they work and live, and those changes are in turn shaping the design of our metaphorical hoes.  Mobile/behavioral symbiosis is what I’ve been calling this.

HP did jump into the cloud, but IMHO they jumped into the “cloud revolution” as a camp follower.  Once it was clear you could sell IaaS to somebody, they decided to be IaaS players.  In point of fact, HP as a major provider of OS/middleware tools should have been thinking immediately about creating an extended version of OpenStack that added platform services and that created one or more application/service frameworks on which new mobile/behavioral things could have been built.  Why?  Because they had an inevitable loss of PC market share to face, no chance of getting on top in tablets unless they had a special kicker, and every reason to want to create that special kicker by creating a cloud upside to compensate for their PC downside.

We have learned, in IaaS, that we can do server consolidation into the cloud instead of into virtualization.  That’s not earth-shaking.  We have learned, in IaaS, that we can sell cloud services to social media startups to help them conserve the VC funding that they’ve obtained.  That may line a few more VC pockets but it’s not going to transform the industry—nor will it transform HP.  The mobile/behavioral stuff would likely do that.

This same kind of “platform service” is also how HP could become relevant in the two other spaces (besides cloud) that are dominating serious IT planning—SDN and NFV.  If there is going to be application/central control of networks, it’s going to have to run as a cloud application because the meaningful applications of SDN are associated with more dynamic application-to-resource linkage than traditional IT creates.  Because virtual network functions are cloud applications and have to be profitable and useful at the same time, they’ll be running on a “platform” set of tools that facilitate deployment and management to create efficiency and preserve utility.  With one simple step, a step that HP could have taken easily a year ago, they could have become the most relevant of all cloud/SDN/NFV players.  Why?  Because they, virtually alone among the vendors, have all the pieces.

I think, in fact I know, that there are people in HP who see all of this.  I think that even a fair portion of HP management may see it.  The problem for HP is that they’ve let themselves get to a point where exploiting the future will to a degree lose them traction in the present.  They can’t jump to a mobile/behavioral vision of the cloud after all this time without having people say it’s another example of poor management decision.  After all, it’s not like smartphones and tablets just came on the market!  And HP has two other problems too.

Problem one is that they are inherently a channel player.  Companies that have direct sales programs (like IBM) can push things through sales conduits even if they don’t market worth crap and they’ll still at least mine their base.  Channel-dependent companies have to rely on their distribution partners, and few of these are strategic giants.  Distribution is what companies like HP fall into during a period of industry commoditization; it’s logical to counter loss of differentiation by multiplying the chances prospects will stumble across your products.  But distribution makes it hard to take control of your destiny when change is mandatory.  To do that, you have to position and market.

Which is HP’s second problem.  Perhaps they have not learned the art of strategic articulation.  Perhaps they forgot it during their shift from a minicomputer company to a commodity PC company.  Perhaps they did a little of both.  The point is that you have to be able to make a compelling case for a revolutionary change, including the change in how your buyers perceive your skill set and your ability to transform their own IT applications.  The Internet has given us the best direct channel to the hearts and minds of buyers that we’ve ever had, but it can expose a vacuum as much as a high level of insight.  HP now has to choose, in my view.  Will they risk a little now to gain a lot later, and sing a song of change that will threaten a PC business model that’s never going to be enough for them again, or will they hang on as the barrel goes over the falls?

Here and There in Networking

We have a number of interesting items today, ranging from OTT to handsets, so let’s get started!

Yahoo’s latest step in reinventing itself is the acquisition of Tumblr, a “blogging site” that’s in some ways a mixture of Facebook-ish, MySpace-ish, WordPress-ish, and other concepts.  At a high level, Tumblr is an adspace wrapper around a bunch of content, a portal to collect it so that it can be monetized.  Right now, of course, there’s not much monetization going on and the big fear of Tumblr users is that monetization will get in the way of how the site works.  It will, of course; the trick for Yahoo is “not screwing it up” too much.  There are three risks to overcome.

Risk One for Yahoo is that this consumer reluctance to surrender more will become a determination to surrender less.  Facebook’s fervor to capitalize on user data to make money has turned off many users.  Freebies offered in sweepstakes to induce the sharing of private information are, according to my sources, being accepted less often.  Is a one-in-a-million shot at a trip worth perhaps ten grand worth a lifetime of junk emails for you and perhaps for friends as well?

Then there’s Risk Two.  Remember that online advertising, like all advertising, is a zero-sum game.  The fads of today are bleeding the leaders of yesteryear.  Facebook, as I’ve noted, is already pushing the boundaries of ad sponsorship to drive up its revenue and redeem its stock price.  That’s a very big gorilla to be striving against in the marketplace.  Tumblr also has some of the same problems with potential advertising YouTube had, only perhaps more so.  The site has never been strongly policed for content, which means that there are many things there that advertisers would find problematic.  Does Yahoo clean it up to make it more ad-friendly?  If so, there are many who will “take their blogs elsewhere” as some have threatened to do.

Risk Three, the “Geocities problem” of a barrage of ads that inundate and disgust current users, seems to be the least of the risks because Yahoo has some control over it and because they understand from their past history what can happen if they move too fast.  The key for Yahoo is to do something seen as positive that’s linked to new ad opportunity, not paste ads all over what’s already there.  Can that be done?  Possibly, but how quickly it could be done and whether the other two risks will cut into opportunity in the meantime is the big question.  Yahoo like all companies is under pressure to show positive financial momentum.  The Tumblr deal will only make the pressure more intense; a billion dollars is a lot to bet.

There’s a rumor afoot that Google plans to incorporate OpenFlow into Android.  Before you rush out to protect your network from handsets acting like OpenFlow controllers, let me try to reassure you.  First, this rumor is just that, and second the likely application of SDN to handsets would involve them accepting commands not generating them.

End-to-end SDN is critical for the utility of the concept; there are just not enough value propositions to drive widespread use if SDN is locked in the data center.  One of the value propositions for end-to-end SDN is the control of mobile networks, Evolved Packet Core in the metro.  Another is application security and performance management via segmentation of user access.  Both of these could converge in the handset.  If Android were to provide for software virtual network termination, then software VPNs could be extended all the way to the user to provide for security and application segmentation.  It would also potentially help companies wanting to secure corporate data and access.
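
If you want to picture what handset termination might mean in practice, here is a minimal and purely hypothetical sketch: the device steers each app's traffic into the overlay VPN assigned to it, so corporate and personal traffic are segmented at the source. The app and VPN names are mine, not anything Google has announced:

```python
# A purely hypothetical sketch of handset software-VPN termination: each
# app's traffic is steered into the overlay network assigned to it, so
# corporate and personal traffic are segmented at the device. App and VPN
# names are invented for illustration.

app_to_vpn = {"mail": "corp-vpn", "crm": "corp-vpn", "games": "public-internet"}

def classify(app_name):
    """Pick the overlay a given app's flows should be tunneled into;
    anything unknown stays on the public side."""
    return app_to_vpn.get(app_name, "public-internet")

for app in ("mail", "games", "weather"):
    print(app, "->", classify(app))
```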

But if we were to reconceptualize mobile networks based on SDN principles, we might see these same software overlays managing the tunnels that the PGW and SGW and so forth cooperate to manage today.  Could we create an end-to-end mobility architecture based on SDN that might employ OpenFlow to link users to it?  Sure thing.  I think this demonstrates how important it is to think of SDN beyond the data center.

Speaking of SDN, SDNCentral ran an interview with Juniper’s SDN leader, formerly the CEO of Contrail, Ankur Singla.  The most interesting point in the piece was the comment that there were two use cases for SDN—enterprise multi-tenant clouds and NFV.  This is the first time I’ve seen a key Juniper figure talk about NFV as a specific target for its SDN strategy.  The question is whether he means it, and if he does whether the rest of the Juniper team will go along.  Juniper’s SDN story (and its NFV story) have been diluted by the fact that everyone seems to want to tell the stories in their own way.  With the market trying to find a useful SDN model, it’s only harder if you can’t even agree internally on where to look.

Juniper has been doing some very interesting and very good NFV work behind the scenes in the working groups.  They recently submitted a contribution that’s IMHO as good as any that have been contributed so far, one that demonstrated an understanding of NFV principles.  And it’s true that there is a significant SDN component in NFV.  But since NFV isn’t all SDN (in fact, most isn’t) the understanding that Juniper has reflected in its latest contribution may be critical because Juniper will have to advance its NFV strategy based on something like that.  Clearly they know how from the NFV side, but do they know where it fits into Juniper’s product line?  If all the strength of their contribution is shoehorned into a narrow SDN footprint they’ve sold themselves short.

A Contrail Story that Makes Sense

Last week at Interop, Juniper offered a bit more detail on its Contrail stuff.  I didn’t get any press release on this, perhaps because Juniper has done a number of SDN announcements already and considered this a follow-up.  At any rate, the additional detail offers some color on what might distinguish Juniper’s approach from others.  It’s a story that makes sense, though I still have concerns about the targeting.

The launch point of the Interop preso on what Juniper is calling the “JunosV Contrail Controller” is a discussion of the traditional SDN model, which uses the notion of mother-may-I forwarding of packets.  A source emits a packet and the first switch kicks it to the controller for instructions.  That might set up a complete route or only a hop, in which case the process would repeat itself.  There’s a strong implication in the traditional view of OpenFlow that you’re setting up fairly granular flows, perhaps per relationship.  As I’ve pointed out many times, the mother-may-I idea is simply unworkable; the only rational approach to SDN is to presume that the controller would preconfigure paths based on connection policy.  If the controller knows where a packet should go when it’s first presented, it knew all along.  I think most vendors with serious SDN positions have accepted that premise; the question is how to make it work.
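
For readers who like to see the contrast in code, here is a minimal sketch (my own illustration, not Juniper's implementation) of the difference between mother-may-I forwarding and policy-driven preconfiguration; in the latter, the controller pushes rules before any packet shows up:

```python
# A minimal sketch of reactive ("mother-may-I") versus proactive SDN
# control: the reactive model punts the first packet of every flow to the
# controller, while a policy-driven controller preinstalls the same
# forwarding entries up front. Topology and policies are invented.

policy = {("10.0.1.0/24", "10.0.2.0/24"): ["sw1", "sw2", "sw3"]}  # allowed paths

flow_tables = {}  # switch -> list of installed match/forward rules

def install(path, src, dst):
    """Push a match/forward entry onto every switch along the path."""
    for hop, nxt in zip(path, path[1:] + ["egress"]):
        flow_tables.setdefault(hop, []).append({"match": (src, dst), "out": nxt})

def proactive_setup():
    """If the controller knows where a packet should go when it first
    shows up, it knew all along: push the rules before traffic arrives."""
    for (src, dst), path in policy.items():
        install(path, src, dst)

def reactive_packet_in(src, dst):
    """Reactive model: the first packet of a flow triggers a controller
    round trip before any forwarding state exists."""
    path = policy.get((src, dst))
    if path:
        install(path, src, dst)
    return path

proactive_setup()
print(flow_tables["sw1"])
```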

What Juniper proposes instead is the two-layer SDN model I’ve mentioned in connection with some other announcements, but with a twist.  As always, overlay virtual networking in the software/Hypervisor layer is the top of the structure and network equipment is at the bottom.  If you let SDN control only the software virtual network, you create agility where there’s no inertia and little in the way of scalability issues.  The underlying network (IP in Juniper’s case) would be a network of physical things, the overlay a network of virtual things.  Obviously I agree with this model; I’ve promoted it in blogs before.

Juniper makes another point I agree with, and perhaps even more strongly.  They point out that OpenFlow was designed to be a packet-forwarding-control protocol that optimized its structure to the nature of forwarding tables in real devices.  For other missions, this specialization is pretty seriously sub-optimal, and “other missions” includes the control of virtual switches.  An XML payload carried in XMPP, or OVSDB, is a better approach for the vSwitch control mission (XMPP perhaps being the better general approach for control exchange).
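
To illustrate the point rather than any actual schema, here is a small sketch contrasting an OpenFlow-style forwarding entry with the kind of structured configuration document (XML, as it might ride in an XMPP payload) that suits vSwitch/vRouter control. The element names are invented; they are not Contrail's schema or OVSDB's:

```python
# A sketch of the contrast: OpenFlow speaks in flat match/action forwarding
# entries, while vSwitch/vRouter configuration is more naturally a
# structured document (here XML, as it might be carried in an XMPP
# payload). The element names are hypothetical, not any standard schema.

import xml.etree.ElementTree as ET

# OpenFlow-style view: a flat forwarding entry
flow_entry = {"match": {"dst": "10.0.2.5"}, "actions": ["set-queue:3", "output:7"]}

# Config-style view: a richer document describing a virtual interface
vif = ET.Element("virtual-interface", name="tap-vm42")
ET.SubElement(vif, "virtual-network").text = "tenant-red"
ET.SubElement(vif, "ip-address").text = "10.0.2.5"
ET.SubElement(vif, "floating-ip").text = "203.0.113.10"

print(flow_entry)
print(ET.tostring(vif, encoding="unicode"))
```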

The Contrail approach is to integrate the forwarding management of the software virtual switching overlay across devices and downward to the physical network by essentially extending the MPLS VPN (RFC 4364) into the virtual switching overlay, creating a software-hosted BGP IP-VPN PE.  This replaces traditional vSwitching with something more integrated and also likely more scalable.  Contrail also promises cross-controller federation to support services/applications that cross provider boundaries.
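
Here is a minimal sketch of the RFC 4364-style mechanics as I read them, applied to the overlay: each hypervisor vRouter behaves like a software PE, importing only the route targets of the tenants it hosts, so overlapping tenant prefixes stay separated. All route values are illustrative, not drawn from Contrail documentation:

```python
# A minimal sketch of treating each hypervisor vRouter as a software
# BGP/MPLS IP-VPN PE (RFC 4364 style): tenant separation in the overlay
# reuses route-distinguisher/route-target mechanics instead of ad hoc
# vSwitch plumbing. Route values are illustrative only.

from collections import defaultdict

# VPN routes as (route-distinguisher, prefix, next-hop, route-targets)
advertised_routes = [
    ("64512:101", "10.1.1.0/24", "host-A", {"target:64512:101"}),  # tenant red
    ("64512:202", "10.1.1.0/24", "host-B", {"target:64512:202"}),  # tenant blue, overlapping prefix
]

# Each software PE (vRouter) imports only the targets of tenants it hosts
vrouter_imports = {"host-A": {"target:64512:101"}, "host-B": {"target:64512:202"}}

def build_vrfs(routes, imports):
    """Build per-vRouter VRF tables by route-target import, exactly as a
    hardware PE would, but for the virtual overlay."""
    vrfs = defaultdict(list)
    for pe, wanted in imports.items():
        for rd, prefix, nh, targets in routes:
            if targets & wanted:
                vrfs[pe].append((prefix, nh, rd))
    return dict(vrfs)

print(build_vrfs(advertised_routes, vrouter_imports))
```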

I actually like this approach and I wish Juniper had been more specific on how it would work earlier on.  I don’t see why it couldn’t be made an end-to-end approach too, meaning that it could be extended out to branch offices, although Juniper doesn’t show that on their slides.  It would seem to address, given that extension, both the needs of cloud computing (enterprise and providers) and the emerging requirements for NFV (which Juniper does mention in passing in their material).  More details on end-to-end and also on the federation support would be helpful, of course, and without them I can’t give Juniper’s approach the same endorsement for covering the enterprise waterfront that I gave to the Alcatel-Lucent/Nuage announcement.  In the federation space, BGP federation covers only connectivity and it’s not clear whether further integration via XMPP might not be required or at least valuable in inter-cloud or NFV missions.

The biggest positioning issue we see for Juniper is its continued reluctance to shift its emphasis from SDN (where a report just cited Juniper saying no cost savings would be generated) to NFV. I’m not trying to promote a new hype wave here, just acknowledge that you have to shoot new concepts ahead of the market to get a clean shot and avoid the clutter.  To showcase the vision Juniper has, SDN isn’t far enough ahead.

NFV demands coherent integration of cloud/network technology for service creation, federation of providers at both levels, and significant management integration.  All of this should be music to Juniper’s ears.  Their IPsphere initiative was the first carrier-supported standards activity to address cloud technology for network services, the first to formally federate, and the first to develop a management approach to hybrid network/IT services.  There must still be some people there who remember the period; if so, Juniper should trot them out.  They’d make a heck of a lot of sense at this time, enhance Juniper’s NFV stance, and perhaps find some real traction points for what’s a decent implementation.


SDN From Rembrandt to Rhode Island

We had a couple more earnings reports that seem to underscore the general theme that network spending, if not all of tech spending, is under pressure.  Brocade did OK versus expectations but issued what the Street believed was tepid guidance.  Aruba undershot by a very significant margin, and specifically called out Cisco’s “bundling” strategies as a factor in both the length of the sales cycle and the success and margins of the deal when done.

What we’re seeing here is pretty obvious; a benefit-constrained industry going through the inevitable throes of commoditization.  A bigger player can do bigger deals.  Bigger deals, by intersecting more issues, can potentially drive better benefits.  For truly strategic or revolutionary shifts, it’s unlikely that small deals will do.  The American Revolution wouldn’t have gotten far by wresting control of Rhode Island from the British.  But even where there’s no revolution and no broader benefit case to exploit, the fact is that as equipment prices come under pressure, it’s inevitable that operations and integration costs will too.  And everyone knows that a multi-vendor network is, for the buyer, like accepting the Secretary Generalship of the UN.

Even Brocade and Aruba taken alone may prove the thesis of breadth, and also the thesis of revolution.  Brocade is a storage and network switching/routing player, Aruba is a WiFi player.  Brocade bought an overlay virtual network player (Vyatta) and so they can deliver the two-layer SDN strategy that is likely the optimum SDN story to tell in the current market.  It’s hard to see how a WiFi player can be an SDN thought leader.

Interestingly, there are SDN ties to WiFi, in both the enterprise and carrier sense.  It’s hard for vendors to push them (even if they know about them, which I’m pretty sure isn’t the case much of the time) because the buyers are so weak in SDN literacy.  We have about half the level of market literacy on SDN that would be needed to drive a natural, normal market.  We’re at the point where SDN can be sold only as part of (you guessed it!) a big deal!  Which is one of the reasons why Cisco has a good SDN strategy; they have a big deal strategy.  Even a mediocre SDN story told as a sweeping Cecil B. DeMille epic sounds good to a buyer whose SDN knowledge consists of “it’s hot!”

I’ve been noting over the last half-dozen blogs that SDN is getting mired in hype and hyperfocus.  We’re arguing about how to make a packet forwarding concept (OpenFlow) forward opaque optical flows, for gosh’s sakes, when we can’t make a packet forwarding value proposition that’s compelling.  This, my friends, is not the way a rational market behaves, and that’s a big problem for the smaller vendors who might have a perfectly good SDN story but can’t make it resonate in a world that’s leaping from one easy-to-discuss hype ledge to another.

Brocade may end up being a kind of poster child for SDN rationality.  First off, they actually have a rational SDN product set and they’re pretty close to having a complete SDN story.  The truth is certainly not mandatory in the current market, but it’s sure a convenient place from which to launch a dazzling marketing tale.  Brocade could tell SDN as it is, or as it must be, and that could be something that’s enormously valuable to it as a company, and even to the market.

If we had a true notion of SDN, a valid and deployable and justifiable model, we could fit things like WiFi, security, application acceleration, and a bunch of other things into the SDN model.  We could fill in the hills and trees and barns and livestock of the “big picture” even if the canvas is very big indeed.  Absent that model, though, we’re forced to get the big picture from somebody who doesn’t want others to be filling in anything, and that means guys like Cisco.  They don’t have to be best to win, they only have to be big…and they are.

You can’t stand indefinitely on your current conception of bigness, something Cisco proved with UCS in a positive sense and which Dell is now proving in the negative.  Dell is perhaps the most PC-dependent player in a marketplace where the PC is never going to be big again.  The company has a good cloud strategy, a decent SDN notion, but it has never learned to paint that big picture.  No matter how good you are, in the current network age you have to be inspirational because you’re fighting against a buyer who has built up a half-decade of calluses against the grinding hype, who has no way of telling which story or claim is true.  They have to believe in you because you will not educate them in time.

SDN alone probably won’t float Dell’s boat.  Dell’s big chance is NFV, because Dell is a server player and because Dell has to fight another server player, Cisco, who is also a big network player.  Networking change is a risk to Cisco to the same degree that NFV is a positive opportunity, because the two are part of one trend.  Networking change is no risk to Dell, but a big server-and-software payday arising out of NFV would be a heck of a benefit.  If you want to win against a bigger guy, climb out on a limb that won’t hold his weight.  That’s Dell’s only answer, and that’s Brocade’s too.

Cisco: Good Now, Could Be Better Later

Cisco reported their numbers, which were much awaited and which the Street viewed as highly favorable.  The stock is up nearly 10% pre-market on the results, with both revenue and EPS beating estimates and guidance seen as generally good.  Given that most tech companies were weak this quarter, the results are impressive and you always have to look at breakout players (positive or negative) to try to see what’s happening.

To start with, cloud and servers are happening.  Cisco’s UCS move is paying huge dividends for it even now.  Not only does getting into the server space give Cisco a major new market area to go after, it also positions Cisco exceptionally well for the coming era of the cloud.  Servers and networks are cohabiting in the future, not just blowing kisses and exchanging longing glances, and I think Cisco knew this was happening.  So did other competitors, but they weren’t bold enough (or perhaps rich enough) to jump on the transition.

Switching/routing revenues wouldn’t have saved Cisco’s quarter no matter how good their execution was, but they did hold their losses down relative to others and that suggests to me that Cisco is gaining market share on virtually all its competitors.  The reason, I think, is that Cisco is better able to sell solutions to both enterprises and service providers given their server (and thus cloud) incumbency.  It’s not that enterprises are building private clouds (hardly any are doing more than dabbling) but if you see a technology transformation in the wings, you look to players who have you covered during such a period of change.  Who better than the only big server/network player in the market?

Interestingly, Cisco is hardly a revolutionary player, a driver of transformation.  Look at Chambers’ comment on SDN: “Helping our customers move beyond the hype of software-defined networks or SDN to a much more complete solution….”  Cisco has played a strong defensive game with SDN.  They focused on APIs (ONEpk) and they focused on delivering the results of SDN without delivering anything massive in technology change.  That strategy has allowed them to front-sell benefits while at the same time diminishing the pressure to migrate away from current (undepreciated) network assets.  Others played into Cisco’s hands by soft-pedaling the central intelligence of SDN that was the only possible driver for its rapid deployment.  We still don’t have a suitable end-to-end SDN model that doesn’t evolve out of current protocols and products, and that favors the Cisco position.

So are they unbeatable now?  No, because they still have two issues to deal with, both in the provider space, but both with potential to rock the enterprise.

The first issue is mobile.  Chambers on mobile:  “We do see budgets shifting from wireline to wireless….”  In point of fact, the shift is more significant, it’s a shift decisively to metro.  Cisco has a strong position with carrier WiFi but it’s weak in 4G.  That means that its major network rivals (Alcatel-Lucent, Ericsson, Huawei, NSN) can expect to have more account control as operator budgets shift to their traditional comfort zones.  Cisco cannot make themselves into a RAN-and-IMS player at this point, so they need to be a killer play in metro infrastructure.  They have the pieces, but operators tell us that Cisco isn’t confident here yet.  As long as that’s the case, it’s harder for Cisco to make a big play.

The second issue is network functions virtualization.  NFV should be a poster-child for Cisco’s cloud-network strategy.  Yes, it’s true that some appliance deals might be lost to cheap commodity hosting of virtual functions, but Cisco knows that if NFV takes off enough to really impact feature spending in the network, it will create an even bigger network-coupled hosting opportunity in the near term.  Even if we’re right and Cisco fears that long-term trend, for darn sure their share of the hosting infrastructure that would accompany NFV deployment would far outweigh their losses.  Nobody else in the industry has the assets to make NFV work like Cisco could, and yet the company is relatively silent about NFV.  Operators tell us that Cisco is working out a solution as a Cisco product, which would be logical.  The question is whether they’re being too coy about it, and both wasting the chance to build credibility and giving competitors a free shot at defining the market.

NFV, as I’ve said all along, is a cloud-based architecture for creating software-component-based network services.  Even if the NFV body hasn’t fully converged on that view, that’s where this is going—or it’s not going anywhere.  Cisco has their own distro of OpenStack, the logical basis for an NFV cloud.  It has the internal message protocols, it has software functionality ready to be translated into virtual functions.  All the good stuff needed.  The thing is, there’s a vast repository of open-source stuff there too, and in fact OpenStack is open source.  If Cisco takes charge here, they could make carrier-grade virtual functions right now and largely eliminate the risk that NFV would validate open-source software for service logic because few open-source products are certified carrier grade.  If Cisco lags, then a horde of startups will do carrier-grade virtual functions (Metaswitch’s IMS is an example) and Cisco loses a lot of opportunity.  Moving now might also let Cisco partner with people like Metaswitch to create an NFV-based IMS/EPS implementation, which would destroy Cisco’s major competitors’ mobile advantage.

Cisco has done well.  Cisco will still likely outdo its competitors, but both these Cisco weaknesses could be exploited if those competitors get off their own duffs.  It won’t hurt Cisco next quarter, but it could dim prospects for 2014.

Lessons from the Optical SDN Debate

We’re hearing again about the goal of applying OpenFlow to manage optical networks, and the interest surely reflects the value that “converging” layers of the network might have to network operators.  I’ve commented before on the fact that a packet-match-and-forward architecture like OpenFlow is hardly suited to applications where packet examination isn’t possible because data is contained inside an opaque envelope.  Could you make it visible?  Sure, but it would radically increase the cost of the equipment, and the question is whether there’s any value there to justify that.

If you look at the network of the future as a circus act, then the application network layer is the clowns, running here and there in magnificent apparent disorder.  It’s highly agile which means you have to have very low-inertia processes to manage it.  But as you go down, you move through the jugglers and people on stilts, and you eventually end up with the elephants.  They’re powerful, magnificent, but not particularly fast-moving.  So it is with networks.  Move from the top toward the physical layer and you lose the need for a lot of agility.  Yes, “agile optics” has to be agile in optical terms, but hardly as agile as application networks that have to respond to every connectivity change.

Another factor is that lower-layer facilities almost always carry aggregations of those at the higher layer.  A virtual switch network is aggregated onto Ethernet and then downward onto optics.  At each layer the stuff joins similar traffic from other sources to create efficient trunking.  By the time you get to the bottom where optics lives, you have a collection of many different network missions on the same trunk.  So which of these missions controls the trunk itself?  The only answer can be “none of the above”.

Lower-layer flows and paths have to be managed for the collective good, which means that what happens there becomes less a matter of application connectivity and more one of traffic engineering.  Logically you’d want to establish grades of service at lower layers and traffic-manage to meet their SLAs.  The higher layers would consume those grades of service, and changes in allocation at the top would impact the policies at the bottom only to the extent that changes in load might impact QoS.  If that happens, it happens collectively and not by application.
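
A simple sketch of that layering, with invented grade names and capacities: the transport layer sees only the aggregate load per grade of service, and the only thing that ripples down from higher-layer changes is the headroom against each grade's SLA:

```python
# A minimal sketch of the layering argued for above: higher-layer
# connections consume a small set of lower-layer grades of service, and
# the transport layer is traffic-engineered per grade, not per flow.
# Grade names, capacities, and demands are invented for illustration.

grades = {"premium": {"capacity_gbps": 10.0}, "best-effort": {"capacity_gbps": 40.0}}

higher_layer_demands = [
    ("trading-app",  "premium",     2.0),
    ("video-backup", "best-effort", 18.0),
    ("web-tier",     "best-effort", 9.0),
]

def aggregate_by_grade(demands):
    """The transport layer never sees individual applications, only the
    aggregate load offered against each grade of service."""
    load = {g: 0.0 for g in grades}
    for _app, grade, gbps in demands:
        load[grade] += gbps
    return load

def sla_headroom(load):
    """Traffic-engineer per grade: report remaining headroom, the only
    signal that should ripple down when higher-layer allocations shift."""
    return {g: grades[g]["capacity_gbps"] - load[g] for g in load}

print(sla_headroom(aggregate_by_grade(higher_layer_demands)))
```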

The reason this is important is that SDN principles of central control, if applied to lower network layers, would necessarily have a different mission than when applied to higher layers.  Do we want to manage traffic in an OpenFlow network by changing all the forwarding rules around?  I doubt it.  It’s looking more and more like there’s going to be a fairly agile top-of-network connectivity service and a more efficiency-driven bottom.  That suggests that far from collapsing everything into a single layer (which would force us to address all the issues there) we might actually find multiple layers valuable because the lower layers could be managed differently, managed based on aggregate traffic policies.

Others have pointed out that the application of SDN principles to networks might be easier at the virtual layer, in the form of an overlay virtual network.  Since this layer wouldn’t be visible to or managed by the real network equipment you couldn’t do traffic engineering there anyway.  The question, then, is whether we have created a two-layer virtual-network model where the top layer is a software virtual network of vSwitches and the bottom layer is a big traffic management pool whose internal structure is both disconnected (by reason of OSI necessity) from connectivity at a layer above it, and disconnected from connectivity because it’s not providing connections but policy-managed routes.

This raises a major question on SDN design, IMHO.  First, it reinforces my view that we really don’t need to control lower layers with SDN technology.  Second, it casts the role of OpenFlow into a smaller opportunity space.  If all the agility is at the software virtual network layer, we could still manage forwarding and connectivity there, but would we?  If we can’t control network routing, only vSwitch routing, do we need to control anything at all?  I think that there’s still a very strong argument for OpenFlow SDN utility in the data center, but I think this whole argument calls its end-to-end value further into question.  I’m not ready to throw out OpenFlow and SDN principles, but I am more and more convinced that we’re making OpenFlow and SDN a goal and not a route.  We don’t have to do this; it will be done to the extent that it can offer tangible network benefits.  Just making it possible to do more with OpenFlow or SDN doesn’t make it justifiable.  We need to look at those justifications harder before we start writing standards that assume we’ve made the business case and we’re arguing over the details.

This is the problem with a bottom-up approach in a hype-driven market.  If you say “OpenFlow” or “SDN” to the press you get a nice positioning on the article they write about you.  Thus, you say it, whether the association is valuable or not, or even whether you mean it or not.  That sort of thing can distort a market, but the cardinal sin here isn’t that distortion, but the fact that we’re still assuming our objective is to consume technology.  Maybe that works for yuppies with iPhones, but not in data center or carrier networks.  Being cool isn’t enough there, you have to be valuable.