Might a Deal for Dell be a Cloud Play?

A special note of concern for my friends in the Boston area.  I’ve spent a lot of time up there, and while all my personal friends seem safe, a surprising number know others who were at least in the area of the blasts.  I’m thinking of you all, praying for your safety, and hoping that we can react to this event without losing the wonderful openness of Boston, and of America.

It’s generally bad financial practice to compete to buy into a declining industry.  We know that PC sales have been down, and the most recent data suggests they’re down sharply in the current quarter.  Nobody doubts that the reason is the smartphone and tablet, which are tapping off Internet use from PCs.  For those who use computers or appliances primarily to be online, that means there’s no need for PCs at all.  The Dish/Sprint deal, as I suggested earlier, is likely aimed at creating a mobile broadband ecosystem to couple with satellite broadcast, and this sort of thing could only facilitate a shift from PC to mobile.

And yet we have people wanting to buy Dell.  Why?  I think there are three possible reasons.  First, maybe they believe that the fear of flight from the PC is overdone.  Second, they might believe that Dell could establish itself in the tablet/smartphone space.  Third, they might believe that Dell’s server assets alone are worth the investment in an age of cloud transition.  Let’s look at the implications and probabilities of all three.

I doubt that many Dell suitors believe the PC is coming back, and I think that most likely believe that even the residual PC market (large though it might be) will be under relentless profit pressure.  To pick up Dell for its PC opportunity flies in the face of trends in PC usage and sales, and also of price and profit trends.  Furthermore, the biggest barrier to those who’d like to discard PCs in favor of appliances—even Chromebooks—is lack of always-on broadband.  We’re clearly heading for just that, and very quickly.  The only thing that separates a PC from a tablet is a hard drive for offline use and a keyboard.  We can add keyboards easily to tablets, and “offline use” is heading toward the same level of anachronism as text terminals and modems.

So might the Dell advocates be seeing a great smartphone/tablet opportunity?  Dell can’t possibly drive a new mobile OS; it’s doubtful that Microsoft is going to be very successful at that and questionable whether Blackberry can stay alive even as a former market leader.  New player equals new casualty.  So they’d have to build Android devices, given that Apple is hardly likely to license iOS to them, and Android tablets and smartphones are at least as commoditized as PCs.

But here we do have a possible angle.  Suppose Dell were to go after the featurephone space using a model like Mozilla’s Firefox OS?  The network operators would love that because they’re already spending too much subsidizing smartphones and they don’t get to showcase their own differentiation through those devices.  Same with tablets.  Might Dell be looking at providing those operators with products that are much more browser/cloud platforms than even the current devices?

That would bring us to the third possibility, which is that it’s Dell’s cloud potential that matters to potential buyers.  In my view, no server vendor is really in a position to drive the cloud to create a unique advantage if they push down low at the hardware level.  Similarly, it’s going to be difficult to drive a unique cloud position through cloud-stack software like OpenStack because everyone is jumping on the same bandwagon.  You have to get above the fray, move not to the cloud platform but to the cloud’s valuable services.  You have to move up to SaaS, to SOA-like implementations of service features.

Dell has some history up here in the cloud-value zone.  They have been a primary driver of cloud DevOps, for example, and DevOps is the key to creating operationalized cloud services of any sort—cloud computing or cloud-hosted service features.  Their M&A all seems to be focused on extending the cloud, adding stuff above the basic software stack.  Might they be looking at creating a cloud not for the simple (and unprofitable) mission of IaaS but rather at creating a cloud for profitable high-level service hosting?  Even one to support carrier activity like NFV?

If Dell were to do that, they could then link their cloud differentiation downward.  A Dell framework for featurephone service support, complete with developer tools, a cloud architecture that you could buy as a service from Dell or buy as a cloud-in-a-box for your own installation, would be a powerful element in a featurephone strategy.  You could address corporate mobility needs with such a platform too.  In other words, you’d have something that would leverage the presumption that the cloud was going to get bigger by going higher, by offering directly valuable features and services.  Nobody is really doing that now, and Dell could be the first.

At least now, they could.  The problem with this sort of opportunity is that it’s far from invisible.  Cisco and Oracle have very similar assets, and HP has identical assets.  While it’s not likely that Cisco and Oracle have specific interest in featurephones or tablets (Cisco had a tablet and killed it), HP surely does—and the HP brand in the tablet space is stronger than Dell’s.  Still, it’s hard for me to see a play on buying Dell that doesn’t follow a variation on this cooperative cloud theme.  There just doesn’t seem to be anything else on the table that could produce enough value.

Maybe-Holistic SDN Model?

One of my biggest frustrations about SDN has been the lack of a complete top-to-bottom architecture.  All of the focus seems to be on the SDN Controller, and that’s a little functional nubbin that lies between two largely undefined minefields—the lower-layer stuff that provides network status and behavior and the upper-layer element that translates service requests into routes based in part on that status/behavior.  Now we may have at least a step toward a vertically integrated model.

Pica8 has announced an SDN architecture for the data center that’s vertically integrated to the point that it looks a lot like a cloud-provisioning model (Quantum) in terms of the functional boxes.  There’s an open switch abstraction (OVS) linked with a network OS and a hardware layer that adapts the central logic to work with various devices, including “white box” generic switches.  The current Pica8 announcement is focusing on the application of this architecture to the problem of data center networking, not so much for segmentation (though obviously you can do that) as for traffic engineering and creating efficient low-latency paths by meshing switches rather than connecting them into trees (the current practice with Ethernet) or turning them into fabrics.

This model of SDN application could be one of the sweet spots for SDN because it’s addressing a very specific issue—that cloud or even SOA data centers tend to generate more horizontal traffic without becoming fully connective in a horizontal sense.  In SOA, for example, you have a lot more intercomponent traffic because you have deployed separate components, but that traffic is still likely less than the “vertical” flows between components and users or components and storage systems.  In traditional tree-hierarchy switched networks, horizontal traffic might have to transit four or five layers of switches, which greatly increases the delay and the overall resource load.  Fabrics, which provide any-to-any non-blocking switching, waste some of that switch capacity by automatically supporting paths that have no utility, or are not even contemplated.
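
To put rough numbers on that horizontal-traffic point, here’s a minimal sketch (my own illustration, not anything from Pica8) that counts switch-to-switch hops for traffic between two top-of-rack switches in a small three-tier tree versus the same switches meshed directly:

```python
from collections import deque

def hops(graph, src, dst):
    """Breadth-first search: count switch-to-switch links on the shortest path."""
    seen, queue = {src}, deque([(src, 0)])
    while queue:
        node, dist = queue.popleft()
        if node == dst:
            return dist
        for neighbor in graph[node]:
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append((neighbor, dist + 1))
    return None

# A small three-tier tree: ToR switches uplink to aggregation, aggregation to core.
tree = {
    "tor1": ["agg1"], "tor2": ["agg1"], "tor3": ["agg2"], "tor4": ["agg2"],
    "agg1": ["tor1", "tor2", "core"], "agg2": ["tor3", "tor4", "core"],
    "core": ["agg1", "agg2"],
}

# The same ToR switches meshed directly to one another.
tors = ["tor1", "tor2", "tor3", "tor4"]
mesh = {t: [u for u in tors if u != t] for t in tors}

print(hops(tree, "tor1", "tor4"))  # 4 hops: tor1 -> agg1 -> core -> agg2 -> tor4
print(hops(mesh, "tor1", "tor4"))  # 1 hop in the mesh
```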

The Pica8 architecture is also interesting in that it at least offers the potential to combine real telemetry from the network and real service requests from data center/cloud software to create paths.  As I noted earlier, there are few models of SDN that provide the vertical stack even in limited form, so it’s heartening to see something come out.  The problem is that the data center model, while it may offer sweet-spot early positioning, doesn’t expose the full set of value propositions or issues.

Not every data center needs a fabric or mesh.  While we might want to believe that VM growth (private cloud or virtualization) or other architectural factors would change this, the fact is that data center networking needs are set more by total application traffic than anything else, and moving around where applications are hosted doesn’t impact traffic very much.  A major increase in application traffic would imply a much larger investment in IT resources, and it’s clear from the earnings reports that kind of growth isn’t happening.  It may, if our model of point-of-activity empowerment matures, but not yet.  Thus, data centers are not necessarily under a lot of growth pressure.

The dynamism of future applications will generate network agility requirements before it will generate traffic, but the question that Pica8 and everyone else will have to answer is how those requirements move out of the data center.  A rope staked to the ground on one end can move only in a circle.  If the edge of the network, the client devices and users, is still doing the same old thing, then the changes in the data center will dissipate as they move toward the edge and the total network won’t change much.  Not much dynamism.  Even a zillion mobile clients hooking up to enterprise apps really doesn’t do anything that SSL connections to a web server for worker or even customer access don’t do.  You need a new application model that drives a new connection model, one that takes SDN out of the data center and rolls it all the way to the edge.

We need to be watching Pica8 at this point to see how it plans to support this sort of migration.  We also need to see how well it will address the metro-cloud opportunity that is the service provider equivalent of the enterprise network driver I’ve called point-of-activity empowerment.  It’s a promising start, but we need more progress to call it a win.

How to Judge the News at ONS

With the Open Networking Summit about to kick off, it’s obvious that there are going to be a lot of things going on with respect to SDN and cloud networking.  The problem we have, in my view, is that all of this is a race to an unknown destination.  We’ve picked apart the notion of SDN and we’re busy claiming supremacy in itty bitty pieces of it, but we’re unable to tie the pieces into a functional whole that would then justify (or fail to justify) SDN use.

Right now there are two big public issues in the SDN world: the controller process (largely because of the OpenDaylight project and some counterpunching by vendors like Adara) and “virtual networking” via overlay technology from players like Nicira/VMware and Nuage/Alcatel-Lucent.  People email me to say they can’t understand how these fit, either with each other or in the broader context of SDN value.

Let’s start with that.  Networks connect things, so it follows that the goal of a network architecture is to create a route between points that need to be connected.  There are two pieces of that process—knowing where to go and knowing how to make traffic get there.  The second piece is the individual device forwarding process; a packet with this header goes in this direction.  The first piece is a combination of a topology map of the network (one that locates not only the connecting points but also the intermediary nodes) and policies to decide which of what will certainly be a multiplicity of route choices should be taken.  In classical networking the topology map is created by adaptive discovery and the policies are implemented in a “least-cost” routing protocol that optimizes something like capacity or hops.
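
As a minimal illustration of those two pieces, here’s a sketch (mine, not any particular vendor’s or protocol’s) in which a topology map plus a simple least-cost policy picks one of several possible routes:

```python
import heapq

# Topology map: node -> {neighbor: link cost}.  The "policy" is the cost metric
# itself; weighting links by hops, capacity, or delay yields different routes.
topology = {
    "A": {"B": 1, "C": 4},
    "B": {"A": 1, "C": 1, "D": 5},
    "C": {"A": 4, "B": 1, "D": 1},
    "D": {"B": 5, "C": 1},
}

def least_cost_route(topo, src, dst):
    """Dijkstra: pick the lowest-cost path under the chosen policy metric."""
    heap, settled = [(0, src, [src])], {}
    while heap:
        cost, node, path = heapq.heappop(heap)
        if node in settled:
            continue
        settled[node] = (cost, path)
        for neighbor, link_cost in topo[node].items():
            if neighbor not in settled:
                heapq.heappush(heap, (cost + link_cost, neighbor, path + [neighbor]))
    return settled[dst]

print(least_cost_route(topology, "A", "D"))  # (3, ['A', 'B', 'C', 'D'])
```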

Classical SDN, the original research effort, works to replace the topology/policy stuff that’s currently distributed in devices with a centralized function.  That function then needs to control the forwarding in devices, which is what OpenFlow does.  The OpenFlow controller is a function that manages the exchange (via OpenFlow) with the devices.  It doesn’t decide on routes or policies, it only controls devices.  All that deciding and policyfying goes on north of those often-referenced “northbound APIs”.
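
Continuing that sketch, the part that stays south of the northbound APIs is mechanical: take a route that was decided centrally and turn it into per-device match/action forwarding entries.  The structures below are purely illustrative stand-ins for what a controller would encode as OpenFlow flow-table entries, not actual protocol messages.

```python
def path_to_flow_entries(path, dst_prefix):
    """Turn a centrally computed path into per-switch forwarding entries.
    Each entry says: packets matching this destination go out toward the next hop.
    (Illustrative dictionaries; a real controller would encode OpenFlow flow_mods.)"""
    entries = {}
    for here, next_hop in zip(path, path[1:]):
        entries[here] = {"match": {"ipv4_dst": dst_prefix},
                         "action": {"forward_to": next_hop}}
    return entries

# The "northbound" decision: the A->B->C->D route carries traffic for 10.1.4.0/24.
for switch, rule in path_to_flow_entries(["A", "B", "C", "D"], "10.1.4.0/24").items():
    print(switch, rule)
```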

What does this have to do with virtual networking, Nicira, and so forth?  Nothing, frankly.  Virtual networking is a way of creating application-specific network partitions to isolate cloud applications and to make it possible to spin up a network that connects a community of components that are related to each other but have to be isolated from everyone else.  You don’t need OpenFlow or any sort of SDN principle to drive this because it’s a software-based tunnel overlay.  There are differences among the implementations, though.  Nicira has been presented dominantly as a VLAN strategy, limited to the data center.  Nuage and Alcatel-Lucent have presented a broader model that can emulate IP and port-level connectivity, which means it’s pretty easy to make their virtual networks run end to end, data center to branch or cloud to user.
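
To see why this doesn’t need OpenFlow at all, here’s a toy sketch of the overlay idea (my own, loosely modeled on VXLAN-style encapsulation rather than on Nicira’s or Nuage’s actual code): tenant traffic is wrapped with a virtual-network ID and carried over ordinary IP, so isolation is purely a software mapping.

```python
# Toy overlay: each tenant network gets a virtual network ID (VNI), and the
# hypervisor's virtual switch wraps tenant frames in an outer IP header toward
# the destination hypervisor.  Isolation comes entirely from this mapping; the
# physical switches just carry ordinary IP.  (Names and values are illustrative.)
VNI_MAP = {"tenant-red": 5001, "tenant-blue": 5002}

def encapsulate(tenant, inner_frame, local_vtep, remote_vtep):
    """Wrap a tenant frame for transport across the ordinary IP underlay."""
    return {"outer_src": local_vtep, "outer_dst": remote_vtep,
            "vni": VNI_MAP[tenant], "payload": inner_frame}

def decapsulate(packet):
    """Hand the inner frame back only to the tenant that owns the VNI."""
    tenant = next(t for t, vni in VNI_MAP.items() if vni == packet["vni"])
    return tenant, packet["payload"]

pkt = encapsulate("tenant-red", b"inner ethernet frame", "10.0.0.1", "10.0.0.2")
print(decapsulate(pkt))  # ('tenant-red', b'inner ethernet frame')
```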

The challenge that nobody is really talking much about is creating that high-level value, that central intelligence, that application-and-user specificity that would make all of this highly useful.  We need a connection mission to justify connectivity management, and all of the stuff that purports to be SDN is still picking at implementation details amid the fog of possible applications where nothing stands out.  Add some tools to OpenFlow and you can create core routing like Google did.  Add different tools and you can manage data center flows to better organize switches and eliminate the traditional multi-tier networks.  But you can do both these things without SDN, and if you want to do them with SDN you need more than an SDN controller.

We are starting to see some promising developments in the SDN world.  Alcatel-Lucent’s recent Nuage announcement is an advance because it makes it possible to visualize a clear delineation between virtual-network connectivity management (Nuage) and route management at the network device level (Alcatel-Lucent), but a delineation that provides for feedback from the former to the latter to create manageable networks.  The problem is that because we’re not owning up to the real issue, the fact that SDN needs a connection/application mission to be valuable, we don’t hear about these developments in the right context.

When you go to the ONS next week, look past the fights over how many SDN angels can dance on the head of a controller, to the top-to-bottom, end-to-end vision that a given SDN vendor actually supports—supports with functionality and not with vague interfaces.  That will separate reality from SDN-washing.

Butterflies and Markets

Well, the report on PC sales has pretty well demonstrated that we do have a new dynamic in terms of “personal computing”, a dynamic in which the device that took its name from the concept is falling out of favor.  This is coming about because a seemingly small force–mobility–is driving a systemic change in human behavior.  A mobile butterfly is changing all our weather.

I don’t have to point out that PC sales are down; we all know that.  I don’t even have to point out that the primary reason they’re down is that we’re reframing our information use around portable devices that can empower us at our point of need; I’ve said that enough already.  What I do have to point out is the way this revolution is unfolding and what that means.

A new Temkin Group report, called Media Use Benchmark, says that people on the average are online twice as long for personal reasons as for work.  Mobility is a big factor in that, because traditional pass-the-time pursuits like watching TV still account for four hours per day.  To get all this extra online time in, people have to be portable/mobile in their activity.  But the key point here is that we’re changing people’s habits by changing their personal lives.  We have not tracked the mobility and point-of-activity empowerment stuff into the workforce yet.

One reason that point is key is that the workplace is an area where PCs continue to be strong.  If we’re just waiting for the right formula for worker empowerment to hit, then we’re not seeing a natural core market for PCs in the workplace, just a market where mobility hasn’t hit yet.  PC sales can fall further, in other words.  Not all the way to zero, perhaps, but certainly we are increasingly likely to see the future PC look more like those convertible tablets than like a PC of today.  Microsoft’s Windows 8 failure lies in the fact that it’s not targeting that kind of device, so it’s not delivering on real value.

For the cloud, this mobile shift has major implications.  Point-of-activity empowerment of workers raises new opportunities for productivity enhancement, which could be enormous in terms of market impact.  It also totally alters the nature of applications, just as smartphones did years ago.  A small appliance used while mobile forces the worker to focus on just what they need, because “what-if” navigation is hardly practical on a little screen or when you’re doing something else.  The mobile/behavioral push into enterprise applications will drive more componentized software, composed worker empowerment, and other high-agility measures.  These, given the existence of the cloud today, will be applications that can be run in a cloud-specific way because we have cloud technology to reference as they’re being designed and deployed.  In short, we have the real driver of a cloud revolution looming, and our PC shift is a symptom that the point of inflection isn’t too far off.  Consumers who are adept at mobile empowerment in their personal lives will want to be mobile-empowered workers in their professional lives.

High levels of application composability and agility create a demand for a different way of thinking about networks too.  The goal is to be able to deliver something that can be depended upon without risking total collapse if something trends huge in the market at a given moment, or risking runaway operations costs to keep that collapse at bay.  SDN is an example of what could be done and how it could happen, and now we’re finding out that the ONF may actually be bowing out of SDN in a realistic sense.

The OpenDaylight stuff we’ve heard about is probably a fusion of a hundred cynical and manipulative motivations, but underneath that it’s also an open source project to do SDN.  It’s not going far enough right now, but it’s going as far as the ONF has gotten, and the ONF I think realizes that it’s going to be outrun by real, software-coded, progress from some source or another.  Yes, it will talk about what it’s going to do next, to try (as all organizations try) to perpetuate itself, but code trumps standards because you can’t deploy standards and you can deploy software.

That’s a key point, I think, the technical inflection under the seismic mobility change.  We cannot address a dynamic future with processes that are so static they appear motionless.  No international network standard these days has any hope of relevance simply because it’s unable to progress at the pace of the market.  There are only two ways to move into the mobile/behavioral future as far as networking is concerned—proceed through a series of open-source experiments that coalesce into an accepted set of practices, or blunder along and hope for the best.  The idea that we’ll define and adopt standards is futile; we’d never have one in time.

Networking’s Biggest Question

Can it be done?  That’s a question that I’m sure gets asked a lot in our industry.  We see or hear a news item or claim and we ask “the question”.  Part of the prevalence of this favorite question is the cynicism bred of years of hype, of course, but part is also a reflection of the fact that the industry is getting a lot more complicated and it’s actually hard sometimes to decide whether a particular notion is practical or simply nonsense.  Let’s look at some recent examples to see what I mean.

We’re hearing constantly that OTT video is going to destroy channelized TV viewing, and the Aereo court victory (for now, at least) seems to fuel this issue.  Can it be done?  The problem in my view lies not with distribution but with production.  To the extent that this sort of thing is a gnat buzzing around the head of network TV it’s tolerable, but if it gets annoying enough it gets swatted.  How?  If too much third-party exploiting of over-the-air signals is happening, broadcasters will simply pull their content off the air.  News Corp has already threatened to go pure cable delivery, which would make their channels immune.

The problem with the OTT model isn’t obtuse legal rights questions, it’s simple economics.  If you can’t produce content you can’t distribute it, no matter what technology you have.  The online ad market’s pricing model for video doesn’t return nearly enough to fund content development, even if you assumed that all the video went online.  We have a lot of economic hurdles to jump here before we get anywhere close to killing off TV.  Can it be done?  Not under currently foreseeable circumstances.

We also hear that Huawei is going to take over the industry, becoming not only the number one in network equipment but perhaps becoming the only truly safe and profitable player.  Yet Huawei is widely regarded as a threat because of their links to the Chinese government, and they’re barred from contracts in at least some countries and situations.  They want to refurbish their image and own the network market.  Can it be done?  Here we have a conclusive answer, I think, which is “Darn straight it can!”

Huawei was at one time a price-leader player, but that time has passed.  In some of the most critical elements of network evolution, Huawei is not only embracing technology advance and differentiation, they’re leading it.  Some of my recent exposure to their work in the standards area has shown me that they’re taking advanced network technology a lot more seriously than other network vendors considered leading-edge.  Where they are particularly strong is in their conception of how IT will integrate with the network—the “metro cloud” or “carrier cloud”.  I can safely say that there’s nobody out there doing better work, nobody being more aggressive in exploring all the nuances of how IT and network equipment cooperate in creating services.

In terms of overcoming a fear of China spying or malice delivered through Huawei equipment, I think their current consumer-level thrust is the smart way to go.  Much of our technology, including cherished Apple gadgets, is built in China today.  Are they being built with spy cameras included?  Huawei can make consumer technology at a good price, spread it out under their own name, and earn themselves a positive image.  If your company fears Huawei, then fix your own problems and you don’t need to worry.  If you don’t want to do that, then be prepared to be run over.

The last of my items is the idea that mobile broadband will displace fixed broadband.  People, even years ago, were saying that it was just a matter of time before we tossed the whole wireline thing in favor of wireless everywhere, even to TVs.  No more local loops, tethered devices.  Freedom!  Can it be done?  Here, there’s not a pat answer because there are two ways of looking at things.

Mobile broadband is point-of-activity empowerment, as I’ve noted before.  It changes behaviors by making it possible to interact with information and entertainment on our own terms.  We don’t sit at a computer to see something or look something up, we just whip out the old smartphone and go at it.  That changes how we consume information, so it changes behavior.  We’re creating a consumer class for a new kind of consumption.  And if we have enough people doing that, we necessarily enrich the wireless broadband distribution model to the point where it would indeed be possible to access everything wirelessly and forget tethering.

The thing is, getting to that enhanced broadband distribution model means getting to smaller cells at much higher densities.  So what we end up with isn’t an elimination of wireline but a proliferation of micro-, pico-, and femtocells.  All these things need backhaul.  Logically we’d have such a backhaul-and-ittybittycell point in every home.  Logically it would look like the current wireline broadband and WiFi picture.  So yes, we’re moving to an untethered world but no, that’s not going to pull the cable companies or telcos out of the wireline broadband business.  Our wireless connections will be built on our own wireline connections, just as they increasingly are today.

There’s another point here, of course.  There are a lot of exciting things happening in tech and networking today.  All of them have a grain of reality, nearly all are being hyped mercilessly, and so while we can’t say that a given thing is “false” or will “never happen” we can assign many of the things to the same level of probability as flying over the Empire State Building by standing on 34th Street and flapping our arms really hard.  And if we can’t pick out the exaggeration from the reality, we can’t support real planning, real spending, and real opportunity.  We need to exercise some sanity here.  Can it be done?

How Much Light in Open Daylight?

We’ve had a couple of potentially significant developments in the networking space, developments that seem on the surface to be contradictory.  On the one hand, big firms like IBM, Cisco, Microsoft, and Ericsson have created an open-source project for SDN development, called “Open Daylight”.  This suggests wide industry support for an open SDN framework, and that’s something carriers tell me they want.  On the other hand, Ericsson (one of the Open Daylight founders) has just bought Microsoft’s MediaRoom assets, which seems to be a move to establish a position in IPTV based on proprietary technology.  Is there a contradiction here, and what does this say about proprietary versus open-source service software?

Ericsson has to see the Alcatel-Lucent cloud moves as a threat because it doesn’t have strong assets from which to build a cloud story, and because Alcatel-Lucent already has a content position.  Remember, operators see three primary monetization targets—content, mobile/behavioral, and cloud.  A strong cloud story would give Alcatel-Lucent complete coverage, and if the company can shed its long-standing silo mentality and position its stuff well, it could take market share on that combination.  So Ericsson grabs up Microsoft’s content assets, which covers at least two of the three monetization bases.

So what about the Open Daylight stuff?  If a competitive market demands vendors own some assets that can drive strategic engagement, why cooperate with SDN?  Is SDN not strategic?  Is Open Daylight not addressing a strategic piece of SDN?  Is it all a sham?  Maybe a bit of all of these things.

It’s not fully clear where Open Daylight will be showing the light, to start with.  It’s hard to see how it would move the ball much if we presumed it’s going to be nothing more than an OpenFlow controller—we already have those.  The early rumors suggest that the project will be more like a kind of official underlayment to Quantum, with functionality designed to support all of the “models” that make up Quantum in its latest (Grizzly) manifestation.  All of this is helpful in an integration sense, but it’s not going to add a bunch of new features or functions.  Which is why it’s worthwhile to ask what vendors gain by backing it.
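
For readers who haven’t looked at Quantum, the “models” in question are roughly the abstractions sketched below: tenant networks, the subnets that give them addressing, and the ports that attach VMs.  This is a simplified illustration of the kind of objects involved, not the actual OpenStack classes; an “underlayment” project would be the thing that maps these objects onto real network resources.

```python
from dataclasses import dataclass, field
from typing import List

# Simplified stand-ins for the kinds of objects Quantum models (illustrative
# only, not the real OpenStack classes): tenant networks, the subnets that
# give them addressing, and the ports that attach VMs to them.

@dataclass
class Port:
    name: str
    mac: str
    fixed_ip: str

@dataclass
class Subnet:
    cidr: str
    gateway_ip: str

@dataclass
class Network:
    name: str
    tenant_id: str
    subnets: List[Subnet] = field(default_factory=list)
    ports: List[Port] = field(default_factory=list)

net = Network("web-tier", tenant_id="tenant-red",
              subnets=[Subnet("10.1.4.0/24", "10.1.4.1")])
net.ports.append(Port("vm1-eth0", "fa:16:3e:00:00:01", "10.1.4.10"))
print(net)
```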

The obvious benefit is that the project lets all the members put their hands on their heart and pledge allegiance to SDN, open-source software, the cloud, and probably a few other things without actually having to do very much on their own.  It’s keeping a hand in the SDN process without making any specific commitments when 1) it’s probably too early to make much money on SDN at this point and 2) it’s possible that even now SDN could overhang current products and positioning.

I think there’s another factor here, though, and it’s suggested by the fact that the platinum members of Open Daylight include Citrix, IBM, Microsoft, and Red Hat—all software and IT players.  I’ve said for months that the value of SDN isn’t realized at the OpenFlow level at all, it’s realized above that, in the way that network services and applications and infrastructure merge to create a new union of resources.  The northbound APIs of Open Daylight will be open, open to exploit with the truly useful higher-layer stuff.  The IT vendors may be seeing this as a way of putting networking back in the cage, of creating a specific boundary point where the commoditizing iron of the present meets the exploding IT value of the future.  A boundary point that, of course, favors the IT types.

Don’t take this as an indication that I’m opposed to Open Daylight; I’m in favor of all carrier open-source projects for a bunch of reasons, not the least that they tend to take the otherwise-interminable standards processes out of the abstract and into the (at least more) concrete.  I hope that NFV does the same thing, in fact.  It’s just that all this maneuvering and defense is obscuring the fact that we still don’t have a solid notion of exactly what goes into SDN from top to bottom, what makes software-definition of network behavior uniquely valuable.  We also don’t have a picture of how Open Daylight will balance OpenFlow-centralized SDN with more distributed-IP SDN.  If those two converge anywhere, it’s not inside the Open Daylight corral but above it, north of those northbound APIs.

The good news here is that while it’s possible to use both standards processes and open-source projects to stall a market by obstructing progress on collective visions, it’s much harder to do it with open-source because real code is contributed and anyone can in theory pull together pieces to make something useful.  But the critical piece, as I hope this blog has shown, is those northbound APIs.  Framing the drivers of the SDN movement without specifying interfaces between those drivers and baseline SDN technology is futile, so it’s here at the northern border that we need to be watching for progress.  Open Daylight is in the dark till it has a model for those APIs.

Prepare to Read Earnings Tea Leaves!

We’re coming into that wonderful quarterly ballet that the financial industry calls “earnings season”, a bit of a misnomer given that we actually have four of them every year.  Nomenclature notwithstanding, this is a good time to look at the health and ecosystemic pressures of tech overall.  We’ll be getting at least a snapshot of every public company’s position, but we need to fit them into a coherent industry view to make any sense of the trends.  One way to do that is to play on some data points.

In China, as previously in Europe, we’re seeing pressure from network operators on regulators to provide some relief from unbridled traffic growth.  A mobile chat application is the culprit in China, and regulators there appear willing to consider some sort of compensation.  That’s already under discussion in Europe, and of course it’s been the focus of the neutrality debates here in the US.  Since the US position on OTT-pays has been set by the former VC FCC chairman who is now stepping down, we may see a shift in US policy and an acceleration of the global trend.

Leaving the politics aside, the problem that the current approach of bill-and-keep has created is a disastrous drop in revenue per bit, a slide averaging 50% per year.  This has put enormous pressure on operators to reduce cost per bit, not so much to cut capex as to “cap” capex at a constant percentage of top-line revenue.  Two decades ago, operators were spending about 20% less on infrastructure as a percentage of their sales, and they want to get back to something closer to those historical levels.  In China, where there’s still a lot of untapped opportunity, and in the US, where there’s still market share up for grabs, the mobile market is still sustaining higher investment levels and likely will for some time.  In Europe, with economic pressure compounded by high levels of competition and little chance for major market-share gains, even mobile is unlikely to show much growth.

Operator interest in things like SDN and NFV stems from this basic issue.  “Flattening the network”, meaning eliminating multiple loosely coupled protocol layers, promises to reduce both capex (part of which is investing in layers that serve not the customer but other layers) and opex (“OSI ships in the night”, as one operator termed the situation).  Hosting higher-level (out-of-data-plane) functionality in servers is another way to cut costs, and the same architecture that offloads firewall services could be the point of implementation for new service features that aren’t in the network of today at all.

In the enterprise, the situation is similar but with the obvious twist that the network isn’t a profit center for the enterprise, it’s a cost center.  ROI for an enterprise means productivity benefit divided by the augmentation cost.  Trends like BYOD aren’t drivers to spending, because they aren’t convincingly linked to incremental worker productivity.  What is?  Point-of-activity empowerment, which is that fusion of network and IT that I’ve blogged about before.  The question is where it’s going to come from.  Absent some specific approach, an architecture, that can guide investment in IT and network changes, enterprises will sit on their hands…as they clearly are.

If there is a technical lowest-common-denominator here, it’s the cloud.  As workers are less and less likely to go to their IT support point for answers and more likely to expect those answers at their point of need, we need to rethink not only how we build applications to support those workers but also how we resource them and connect them.  The cloud is a completely new application ecosystem, one that delivers services that people will pay for and one that delivers information and productivity support that raises the IT benefit case for enterprises.  So SDN or NFV aimed at the current situation will never be successful, because they will never address the real problem.  The evolution from where we are isn’t justified by the next step on the path, but by the goal.  Yes, we have to survive the transition to get to the future, but without a promise of a truly transformational future, you’re in Groundhog Day.

So here’s the point.  Vendors are going to be talking about quarterly sales mechanics in their calls, “feet on the street” or “cutting TCO” or “next-gen optics” or whatever.  All of this stuff is just managing the current decline of tech.  A static benefit case in a competitive market creates consolidation and marginalization and nothing more than that, ever.  If you want to transform the industry you have to transform the justification for spending on what you sell.  When you hear vendors talk about their quarter, their plans, their prospects for the future, make sure it’s the future that they’re prospecting for.  Otherwise they’re not digging for gold, they’re digging graves.

Is Juniper Retreating from QFabric or Advancing to SDN?

We continue to see major vendor announcements on SDN, but obviously some are more responsive/revolutionary than others.  Juniper has announced its EX9200 product, which it’s billing as a programmable core switch and also a stepping-stone toward SDN.  There may be some significant news here, but it’s not on the SDN front—at least not for now.

The new switch is firmly aimed at the enterprise, at data center and campus applications, which is a bit of a change for Juniper in itself.  It’s very likely that this positioning comes as Juniper tries to rehab its enterprise business, something it had hoped to sell off.  They’re also playing more to promote their alliance with IBM, who is really their draw for the enterprise according to my surveys.  Juniper isn’t seen as a strategic partner by enterprise customers to the extent that IBM is, or Cisco.  The big question here is whether there’s anything to this beyond simple opportunism in the enterprise space, either by design or by accident.

Juniper has a data center switch—QFabric.  It would be logical to say that a campus/data center strategy would build on that product and extend it across campus and metro scale, but that’s not what’s happening.  Instead, it really looks like what Juniper is doing is admitting that the fabric opportunity isn’t nearly as large as they’d hoped, and that QFabric isn’t as revolutionary as they said it was when it was announced.  Thus, the EX9200 may be Juniper’s attempt to wrestle some data center market share from rivals like Cisco, market share that QFabric simply isn’t going to claim.

An interesting wrinkle in this is that QFabric was the flagship product for Juniper’s then-new Juniper One chip, which has the interesting capability of being micro-programmable to enhance hardware support for new protocols.  I think the chip is a good idea, but it never went into QFabric (which uses merchant silicon).  Now it’s going into the EX9200, and it’s hard not to believe that this spells the strategic doom of the whole QFabric concept.

Juniper never had a shot at doing what it wanted to do with QFabric, but it does have a chance with the EX9200.  I emphasize the “chance” part because the opportunity for the product is based not on glitzy (if retread) technology but on changes in the market.  If there is a driver to data center change, it’s the concept of the cloud and the trend toward point-of-activity empowerment.  Juniper does provide a software-based WLAN controller hosted in the EX9200, but they don’t really spend any time in their positioning explaining why that matters, other than to cite the already-boring BYOD trend.  Where they focus most of their launch effort is on cost management.  There is absolutely no top-line benefit claim made for the EX9200, only the same old TCO-and-cost-defense stuff that Juniper has been stuck on for years.

The enterprise is changing.  Service providers are changing.  Both are changing because of the new fusion of IT and networking, the evolution of both to become not separate disciplines but facets of one technology.  This trend dates back over a decade and it’s emerged as the driver of everything.  What we call “the cloud” is simply a manifestation of that broader trend.  Juniper’s rival Cisco has a leg up on everyone in the network space because they have servers and so can help shape the trend from both the IT and network side.  Those who, like Juniper, don’t have that asset set have to do something truly phenomenal at the boundary.  Junos Space and the Network Director management console aren’t that “something”.  Does this new thing offer APIs for software control?  If so, why isn’t that a focus of the announcement?  If not, then where is the link to justify a claim of SDN support?

Juniper was getting trashed pre-market this morning, probably in part because rival F5 issued poor guidance and in part because that triggered a downgrade of similar players.  That’s fair because I think that if you look at these two companies you see a common thread—a smaller player in a giant’s market that is unwilling to make the big and daring moves that would erase what is otherwise a slow but inevitable decline.  Juniper is still defending the network past, as rivals like Cisco and Alcatel-Lucent try to shape networking’s future.  That is a position that a major incumbent might survive, but it’s not one for a relatively small player to take.  Even Alcatel-Lucent knows this is the time to step up and swing for the seats, as they did with their SDN story this week.  Juniper also has an SDN spin-in, but where is its contribution?  Maybe if you added Contrail to the EX9200 you’d have something.

Juniper could make something of the EX9200 if they had a coherent SDN strategy that started up at the software layer and moved in an organized way through the network.  They actually have most of the pieces, but they don’t seem to have the direction.  Is this a management problem, as Motley Fool has suggested?  Is it just too many router types who don’t want to give up on the glory days?  It doesn’t matter.  What matters to Juniper is fixing their strategy, and the Alcatel-Lucent SDN announcement shows that there’s precious little time to do that.

Alcatel-Lucent Sets a High SDN Bar (But a Low Singing Standard!)

Alcatel-Lucent finally did their SDN announcement, and it was in most ways a major step forward for SDN, perhaps the biggest step taken by any of the vendors so far.  However, as is often the case, the articulation may not do justice to the technology.  In fact, in many ways the material was downright murky, and since I was traveling and unable to schedule a briefing, I had to struggle to get the measure of what was actually going on.

At a high level, there are two SDN visions in play here.  One is the Nuage vision of a data center network that’s essentially a functional superset (“super” in many ways, as you’ll see) of the Nicira overlay SDN-as-connectivity-control model, and the other a much broader cloud-SDN model.  I make this point because if you look at this as a Nuage story, I think you sell it short, and the same is true if you look only at the WAN side.

Let’s start with Nuage.  All software-based network overlays have a common property of subsetting the “real” connectivity of the network below.  With Nicira the primary goal of that was to create separation of tenants in public cloud infrastructure, and you may recall that I’ve been unhappy with that mission from the first.  It does this by creating a virtual Level 2 network through tunnel overlays, which we’ll call “software-defined connectivity” or SDC here just to keep the pieces separate. This basic SDC model is OK, but it just doesn’t solve enough problems to secure a firm path to relevancy.

Nuage’s positioning (and to be fair, Alcatel-Lucent’s positioning of Nuage) focuses a bit too much on the notion that what Nuage is offering is a higher-layer SDC vision.  That’s true in that Nuage recognizes virtual networks at Level 3 and (at least as I read the stuff) also recognizes port-level subnetworking, meaning that you could create and maintain virtual networks using TCP/UDP ports and not just IP addresses.  What makes this important isn’t higher layers per se, but the fact that it gets the SDC out of the data center and makes it a logical partner to both endpoints and real network technology.  Layer 4 (ports) is as high as OSI layer references reasonably go because these are the application-level connections actually used in software APIs.  Nuage has thus covered all of the functional-value waterfront here, the first to announce that capability.
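
To make the Level 3/Level 4 point concrete, here’s a toy classifier (my reading of the capability, not Nuage’s implementation) in which membership in a virtual network is decided by subnet plus TCP/UDP port rather than by a Layer 2 segment, so two applications on the same server can sit in different virtual networks:

```python
import ipaddress

# Toy classifier: membership in a virtual network is decided by L3/L4 fields
# (illustrative rules, not Nuage's data model), so two applications on the
# same server can belong to different virtual networks.
VIRTUAL_NETS = [
    {"name": "payments-vnet", "subnet": "10.1.4.0/24", "dst_port": 8443},
    {"name": "web-vnet",      "subnet": "10.1.4.0/24", "dst_port": 443},
]

def classify(dst_ip, dst_port):
    """Return the virtual network a flow belongs to, or a default."""
    for vnet in VIRTUAL_NETS:
        if (ipaddress.ip_address(dst_ip) in ipaddress.ip_network(vnet["subnet"])
                and dst_port == vnet["dst_port"]):
            return vnet["name"]
    return "default-vnet"

print(classify("10.1.4.10", 8443))  # payments-vnet
print(classify("10.1.4.10", 443))   # web-vnet
```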

A full-layer SDC is inherently able to span the boundary between data centers, so the Nuage offering can be used to build a cloud resource pool that offers elasticity of VM or application image positioning that crosses even national boundaries.  It also supports natural hybridization of public and private clouds for cloudbursting or failover operation, and it could in my view provide a framework for federation of multiple public clouds.  When you look at the feature set here, what you see is a virtual-network overlay that’s designed to manage resource addressing and connectivity management within any arbitrary cloud and also with its endpoints.

Nuage is very much a DevOps-based process, in that the benefits of Nuage are presented through a set of operations APIs that fit the model of cloud DevOps I’ve talked about here.  Virtual networks are abstractions, and the abstractions are based on a set of policies that link them to the second piece of this, which is the Alcatel-Lucent part (I’ll get to the details of that in a minute).  Connections are defined via logical groups rather than as physical elements, and policies can be assigned at a group level (and normally would be, in fact), which spares the application users of network services from the details of the network itself, even at the virtual level.  Each virtual network also presents a management view so you can see what’s happening at the operations level.
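
As a sketch of what “logical groups plus policies” might look like to a DevOps tool (my illustration of the idea, not the actual Nuage API), the definitions below attach connection policy to named groups; resolving group members to addresses and devices is the job of the layer underneath, which is exactly what keeps the applications network-agnostic:

```python
# Illustrative group-and-policy model (my sketch, not the Nuage API): DevOps
# tooling names logical groups and the policies between them, and the layer
# underneath resolves members to addresses and devices.
groups = {
    "web":  ["vm-web-1", "vm-web-2"],
    "db":   ["vm-db-1"],
    "mgmt": ["vm-ops-1"],
}

policies = [
    {"from": "web",  "to": "db",  "allow_ports": [5432], "qos": "gold"},
    {"from": "mgmt", "to": "web", "allow_ports": [22],   "qos": "best-effort"},
]

def permitted(src_group, dst_group, port):
    """Check whether group-level policy allows a connection on this port."""
    return any(p["from"] == src_group and p["to"] == dst_group
               and port in p["allow_ports"] for p in policies)

print(permitted("web", "db", 5432))   # True
print(permitted("web", "mgmt", 22))   # False: no policy in that direction
```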

All of this is fine—even perhaps great—but it’s still an overlay and you still have to deal with the real network and real connectivity.  Alcatel-Lucent does that with “policy-pull provisioning”, which has a nice euphonic ring and also happens to be a good idea.  The Nuage policies can link down through an SDN controller (not the simple OpenFlow SDN controller but a more sophisticated and functionally complete model) that can take advantage of any and all of the protocols/processes used to control resource behavior in the real world.  This makes the model applicable to mobile networks through the traditional policy-based processes and via DIAMETER, for example, and also with OpenFlow-based devices and IP/MPLS devices.  This aspect of Alcatel-Lucent’s story is somewhat congruent with Ericsson’s SDN vision, but Alcatel-Lucent has provided more detail on the implementation and has embraced a broader control protocol model in terms of public articulation.  Alcatel-Lucent also includes a resource discovery function that drives connectivity control changes when physical network changes occur.
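
A rough sketch of how I read “policy-pull provisioning” (again my own illustration, not Alcatel-Lucent’s implementation): when an endpoint attaches, the controller pulls the relevant policy and hands it to whichever southbound driver manages that part of the real network, whether that’s OpenFlow, IP/MPLS, or a mobile policy interface such as DIAMETER.

```python
# Toy "policy-pull" flow (illustrative driver and policy names): an endpoint
# attachment triggers a policy lookup, and the policy is applied through
# whichever southbound driver controls that domain of the real network.
class OpenFlowDriver:
    def apply(self, policy, device):
        print(f"flow entries to {device}: {policy}")

class MplsDriver:
    def apply(self, policy, device):
        print(f"route/LSP update on {device}: {policy}")

class DiameterDriver:
    def apply(self, policy, device):
        print(f"policy rule toward {device}: {policy}")

DRIVERS = {"openflow": OpenFlowDriver(), "mpls": MplsDriver(), "diameter": DiameterDriver()}
POLICY_STORE = {"tenant-red": {"bandwidth": "100M", "isolation": "strict"}}

def endpoint_attached(tenant, device, domain):
    """Pull the tenant's policy and apply it through the right southbound driver."""
    DRIVERS[domain].apply(POLICY_STORE[tenant], device)

endpoint_attached("tenant-red", "tor-switch-7", "openflow")
endpoint_attached("tenant-red", "pe-router-2", "mpls")
```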

The binding innovation here is the “Software Defined VPN” or SDVPN, which is a Nuage SDC overlay linked with the Alcatel-Lucent physical network piece.  SDVPNs can extend endpoint services and so they are a natural source of end-to-end SDN functionality and also a natural platform for network services.  SDVPN agents will be open-sourced so they could be installed anywhere as a virtual endpoint, which is what makes this concept extensible.  In fact, in my view at least, an operator could build services based on these capabilities and sell them to users as ARPU-generating extensions to basic VPN services.  You could also, again in my view, use this as a foundation for building a cloud-NFV application framework.  Alcatel-Lucent has been active in NFV all along.  The same “hosted SDVPN” concept would appear to make it possible to integrate SDVPN functionality with the real network, which could then bridge the current gap between overlays that see only connectivity and real networks that see real traffic but don’t have any view of connectivity controls applied at the overlay level.

All of this is very smart, presuming that I’m correct in my interpretation of the material, but as I noted there are some issues digging this detail out of the released documentation and there are still points where I think clarification and expansion are in order.  For sure, the concept could have used a clearer articulation, but I do applaud its technical completeness and the fact that it’s going to customer trial this year—it’s not some 2014 vision that maybe will happen and maybe won’t.

For competitors, this story could present challenges.  Because Alcatel-Lucent has taken a truly cloud-centric and service-centric view of SDN for the first time, they’ve made it harder for rivals like Cisco and Juniper, who have been (deliberately or by accident of articulation) glossing over their details and making SDN into more of an evolution of current networks than a partner in cloud networks and cloud services.  They have established a functional litmus test for an SDN implementation by any vendor, and even startups will now have to think outside the data center and think more about the context of the applications/services than about the simple issue of connectivity.  That could refine and focus the SDN dialog, but of course whether it will depends on whether anyone understands what Alcatel-Lucent has done.  Eventually, I think, they will.