Butterflies and Markets

Well, the report on PC sales has pretty well demonstrated that we do have a new dynamic in terms of “personal computing”, a dynamic in which the device that took its name from the concept is falling out of favor.  This is coming about because a seemingly small force, mobility, is driving a systemic change in human behavior.  A mobile butterfly is changing all our weather.

I don’t have to point out that PC sales are down; we all know that.  I don’t even have to point out that the primary reason they’re down is that we’re reframing our information use around portable devices that can empower us at our point of need; I’ve said that enough already.  What I do have to point out is the way this revolution is unfolding and what that means.

A new Temkin Group report, called Media Use Benchmark, says that people on average are online twice as long for personal reasons as for work.  Mobility is a big factor in that, because traditional pass-the-time pursuits like watching TV still account for four hours per day.  To get all this extra online time in, people have to be portable/mobile in their activity.  But the key point here is that we’re changing people’s habits by changing their personal lives.  We have not yet carried mobility and point-of-activity empowerment into the workforce.

One reason that point is key is that the workplace is an area where PCs continue to be strong.  If we’re just waiting for the right formula for worker empowerment to hit, then we’re not seeing a natural core market for PCs in the workplace, just a market where mobility hasn’t hit yet.  PC sales can fall further, in other words.  Not all the way to zero, perhaps, but certainly we are increasingly likely to see the future PC look more like those convertible tablets than like a PC of today.  Microsoft’s Windows 8 failure lies in the fact that it’s not targeting that kind of device, so it’s not delivering on real value.

For the cloud, this mobile shift has major implications.  Point-of-activity empowerment of workers raises new opportunities for productivity enhancement, which could be enormous in terms of market impact.  It also totally alters the nature of applications, just as smartphones did years ago.  A small appliance used while mobile forces the worker to focus on just what they need, because “what-if” navigation is hardly practical on a little screen or when you’re doing something else.  The mobile/behavioral push into enterprise applications will drive more componentized software, composed worker empowerment, and other high-agility measures.  These, given the existence of the cloud today, will be applications that can be run in a cloud-specific way because we have cloud technology to reference as they’re being designed and deployed.  In short, we have the real driver of a cloud revolution looming, and our PC shift is a symptom that the point of inflection isn’t too far off.  Consumers who are adept at mobile empowerment in their personal lives will want to be mobile-empowered workers in their professional lives.

High levels of application composability and agility create a demand for a different way of thinking about networks too.  The goal is to be able to deliver something that can be depended upon without risking total collapse if something trends huge in the market at a given moment, or risking runaway operations costs to keep that collapse at bay.  SDN is an example of what could be done and how it could happen, and now we’re finding out that the ONF may, in any practical sense, be bowing out of SDN.

The OpenDaylight stuff we’ve heard about is probably a fusion of a hundred cynical and manipulative motivations, but underneath that it’s also an open source project to do SDN.  It’s not going far enough right now, but it’s going as far as ONF has gotten, and ONF I think realizes that it’s going to be outrun by real, software-coded progress from some source or another.  Yes it will talk about what it’s going to do next, to try (as all organizations try) to perpetuate itself, but code trumps standards because you can’t deploy standards, and you can deploy software.

That’s a key point, I think, the technical inflection under the seismic mobility change.  We cannot address a dynamic future with processes that are so static they appear motionless.  No international network standard these days has any hope of relevance simply because it’s unable to progress at the pace of the market.  There are only two ways to move into the mobile/behavioral future as far as networking is concerned—proceed through a series of open-source experiments that coalesce into an accepted set of practices, or blunder along and hope for the best.  The idea that we’ll define and adopt standards is futile; we’d never have one in time.

Networking’s Biggest Question

Can it be done?  That’s a question that I’m sure gets asked a lot in our industry.  We see or hear a news item or claim and we ask “the question”.  Part of the prevalence of this favorite question is the cynicism bred of years of hype, of course, but part is also a reflection of the fact that the industry is getting a lot more complicated and it’s actually hard sometimes to decide whether a particular notion is practical or simply nonsense.  Let’s look at some recent examples to see what I mean.

We’re hearing constantly that OTT video is going to destroy channelized TV viewing, and the Aereo court victory (for now, at least) seems to fuel this issue.  Can it be done?  The problem in my view lies not with distribution but with production.  To the extent that this sort of thing is a gnat buzzing around the head of network TV it’s tolerable, but if it gets annoying enough it gets swatted.  How?  If too much third-party exploiting of over-the-air content is happening, broadcasters will simply pull their programming off the air.  News Corp has already threatened to go pure cable delivery, which would make their channels immune.

The problem with the OTT model isn’t arcane legal rights questions, it’s simple economics.  If you can’t produce content you can’t distribute it, no matter what technology you have.  The online ad market’s pricing model for video doesn’t return nearly enough to fund content development, even if you assumed that all the video went online.  We have a lot of economic hurdles to jump here before we get anywhere close to killing off TV.  Can it be done?  Not under currently foreseeable circumstances.

We also hear that Huawei is going to take over the industry, becoming not only the number one in network equipment but perhaps becoming the only truly safe and profitable player.  Yet Huawei is widely regarded as a threat because of their links to the Chinese government, and they’re barred from contracts in at least some countries and situations.  They want to refurbish their image and own the network market.  Can it be done?  Here we have a conclusive answer, I think, which is “Darn straight it can!”

Huawei was at one time a price-leader player, but that time has passed.  In some of the most critical elements of network evolution, Huawei is not only embracing technology advance and differentiation, they’re leading it.  Some of my recent exposure to their work in the standards area has shown me that they’re taking advanced network technology a lot more seriously than the network vendors usually considered leading-edge.  Where they are particularly strong is in their conception of how IT will integrate with the network—the “metro cloud” or “carrier cloud”.  I can safely say that there’s nobody out there doing better work, nobody being more aggressive in exploring all the nuances of how IT and network equipment cooperate in creating services.

In terms of overcoming a fear of China spying or malice delivered through Huawei equipment, I think their current consumer-level thrust is the smart way to go.  Much of our technology, including cherished Apple gadgets, is built in China today.  Are they being built with spy cameras included?  Huawei can make consumer technology at a good price, spread it out under their own name, and make themselves a positive image.  If your company fears Huawei, then fix your own problems and you don’t need to worry.  If you don’t want to do that, then be prepared to be run over.

The last of my items is the idea that mobile broadband will displace fixed broadband.  People, even years ago, were saying that it was just a matter of time before we tossed the whole wireline thing in favor of wireless everywhere, even to TVs.  No more local loops, tethered devices.  Freedom!  Can it be done?  Here, there’s not a pat answer because there are two ways of looking at things.

Mobile broadband is point-of-activity empowerment, as I’ve noted before.  It changes behaviors by making it possible to interact with information and entertainment on our own terms.  We don’t sit at a computer to see something or look something up, we just whip out the old smartphone and go at it.  That changes how we consume information, so it changes behavior.  We’re creating a consumer class for a new kind of consumption.  And if we have enough people doing that, we necessarily enrich the wireless broadband distribution model to the point where it would indeed be possible to access everything wirelessly and forget tethering.

The thing is, getting to that enhanced broadband distribution model means getting to smaller cells at much higher densities.  So what we end up with isn’t an elimination of wireline but a proliferation of micro-, pico-, and femtocells.  All these things need backhaul.  Logically we’d have such a backhaul-and-ittybittycell point in every home.  Logically it would look like the current wireline broadband and WiFi picture.  So yes, we’re moving to an untethered world but no, that’s not going to pull the cable companies or telcos out of the wireline broadband business.  Our wireless connections will be built on our own wireline connections, just as they increasingly are today.

There’s another point here, of course.  There are a lot of exciting things happening in tech and networking today.  All of them have a grain of reality, nearly all are being hyped mercilessly, and so while we can’t say that a given thing is “false” or will “never happen” we can assign many of the things to the same level of probability as flying over the Empire State Building by standing on 34th Street and flapping our arms really hard.  And if we can’t pick out the exaggeration from the reality, we can’t support real planning, real spending, and real opportunity.  We need to exercise some sanity here.  Can it be done?

How Much Light in Open Daylight?

We’ve had a couple of potentially significant developments in the networking space, developments that seem on the surface to be contradictory.  On the one hand, big firms like IBM, Cisco, Microsoft, and Ericsson have created an open-source project for SDN development, called “Open Daylight”.  This suggests wide industry support for an open SDN framework, and that’s something carriers tell me they want.  On the other hand, Ericsson (one of the Open Daylight founders) has just bought Microsoft’s MediaRoom assets, which seems to be a move to establish a position in IPTV based on proprietary technology.  Is there a contradiction here, and what does this say about proprietary versus open-source service software?

Ericsson has to see the Alcatel-Lucent cloud moves as a threat because it doesn’t have strong assets from which to build a cloud story, and because Alcatel-Lucent already has a content position.  Remember, operators see three primary monetization targets—content, mobile/behavioral, and cloud.  A strong cloud story would give Alcatel-Lucent complete coverage, and if the company can shed its long-standing silo mentality and position its stuff well, it could take market share on that combination.  So Ericsson grabs up Microsoft’s content assets, which covers at least two of the three monetization bases.

So what about the Open Daylight stuff?  If a competitive market demands vendors own some assets that can drive strategic engagement, why cooperate with SDN?  Is SDN not strategic?  Is Open Daylight not addressing a strategic piece of SDN?  Is it all a sham?  Maybe a bit of all of these things.

It’s not fully clear where Open Daylight will be showing the light, to start with.  It’s hard to see how it would move the ball much if we presumed it’s going to be nothing more than an OpenFlow controller—we already have those.  The early rumors suggest that the project will be more like a kind of official underlayment to Quantum, with functionality designed to support all of the “models” that make up Quantum in its latest (Grizzly) manifestation.  All of this is helpful in an integration sense, but it’s not going to add a bunch of new features or functions.  Which is why it’s worthwhile to ask what vendors gain by backing it.
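To make the “we already have those” point concrete, here is a minimal sketch of what an OpenFlow-style controller actually does: it maintains per-switch tables of match/action flow rules, and switches punt table misses back to it.  The class and method names here are illustrative only, not Open Daylight’s or any real controller’s API.

```python
# A toy OpenFlow-style controller: per-switch flow tables of
# match/action rules, consulted in priority order.  All names are
# hypothetical; this is the shape of the idea, not a real API.

class FlowRule:
    def __init__(self, match, actions, priority=100):
        self.match = match        # header fields the rule matches on
        self.actions = actions    # what the switch does on a match
        self.priority = priority  # higher priority wins

class Controller:
    def __init__(self):
        self.tables = {}          # switch id -> list of FlowRules

    def install(self, switch_id, rule):
        self.tables.setdefault(switch_id, []).append(rule)
        # keep each table sorted so lookup sees high priority first
        self.tables[switch_id].sort(key=lambda r: -r.priority)

    def lookup(self, switch_id, packet):
        for rule in self.tables.get(switch_id, []):
            if all(packet.get(k) == v for k, v in rule.match.items()):
                return rule.actions
        return ["send_to_controller"]   # table miss: punt to controller

ctl = Controller()
ctl.install("s1", FlowRule({"ip_dst": "10.0.0.2"}, ["output:2"]))
print(ctl.lookup("s1", {"ip_dst": "10.0.0.2"}))  # ['output:2']
print(ctl.lookup("s1", {"ip_dst": "10.0.0.9"}))  # ['send_to_controller']
```

The point of the sketch is how little is here: installing and matching rules is table plumbing, which is why a project that stops at the controller level doesn’t move the ball much.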

The obvious benefit is that the project lets all the members put their hands over their hearts and pledge allegiance to SDN, open-source software, the cloud, and probably a few other things without actually having to do very much on their own.  It’s keeping a hand in the SDN process without making any specific commitments when 1) it’s probably too early to make much money on SDN at this point and 2) it’s possible that even now SDN could overhang current products and positioning.

I think there’s another factor here, though, and it’s suggested by the fact that the platinum members of Open Daylight include Citrix, IBM, Microsoft, and Red Hat—all software and IT players.  I’ve said for months that the value of SDN isn’t realized at the OpenFlow level at all, it’s realized above that, in the way that network services and applications and infrastructure merge to create a new union of resources.  The northbound APIs of Open Daylight will be open, open to exploit with the truly useful higher-layer stuff.  The IT vendors may be seeing this as a way of putting networking back in the cage, of creating a specific boundary point where the commoditizing iron of the present meets the exploding IT value of the future.  A boundary point that, of course, favors the IT types.
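What “value above OpenFlow” means is easier to see with a sketch of a northbound request: an application states an outcome, and the SDN layer compiles it into device-level rules.  Nothing below is Open Daylight’s real interface; the function and field names are assumptions made for illustration.

```python
# Hypothetical northbound "intent" interface: an application asks for
# an outcome, and the controller layer expands it into the low-level
# match/action rules the network devices actually consume.

def northbound_request(intent):
    """Compile an application-level intent into per-pair device rules."""
    rules = []
    for src in intent["from"]:
        for dst in intent["to"]:
            rules.append({
                "match": {"src": src, "dst": dst},
                "action": "forward",
                "qos": intent.get("qos", "best-effort"),
            })
    return rules

intent = {"from": ["app-1"], "to": ["db-1", "db-2"], "qos": "low-latency"}
for rule in northbound_request(intent):
    print(rule)
```

The interesting work, and the commercial leverage, lives on the intent side of that boundary; the rules below it are commodity, which is exactly the cage the IT players would like networking to stay in.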

Don’t take this as an indication that I’m opposed to Open Daylight; I’m in favor of all carrier open-source projects for a bunch of reasons, not the least that they tend to take the otherwise-interminable standards processes out of the abstract and into the (at least more) concrete.  I hope that NFV does the same thing, in fact.  It’s just that all this maneuvering and defense is obscuring the fact that we still don’t have a solid notion of exactly what goes into SDN from top to bottom, what makes software-definition of network behavior uniquely valuable.  We also don’t have a picture of how Open Daylight will balance OpenFlow-centralized SDN with more distributed-IP SDN.  If those two converge anywhere, it’s not inside the Open Daylight corral but above it, north of those northbound APIs.

The good news here is that while it’s possible to use both standards processes and open-source projects to stall a market by obstructing progress on collective visions, it’s much harder to do it with open-source because real code is contributed and anyone can in theory pull together pieces to make something useful.  But the critical piece, as I hope this blog has shown, is those northbound APIs.  Framing the drivers of the SDN movement without specifying interfaces between those drivers and baseline SDN technology is futile, so it’s here at the northern border that we need to be watching for progress.  Open Daylight is in the dark till it has a model for those APIs.

Prepare to Read Earnings Tea Leaves!

We’re coming into that wonderful quarterly ballet that the financial industry calls “earnings season”, a bit of a misnomer given that we actually have four of them every year.  Nomenclature notwithstanding, this is a good time to look at the health and ecosystemic pressures of tech overall.  We’ll be getting at least a snapshot of every public company’s position, but we need to fit them into a coherent industry view to make any sense of the trends.  One way to do that is to start from a few data points.

In China, as previously in Europe, we’re seeing pressure from network operators on regulators to provide some relief from unbridled traffic growth.  A mobile chat application is the culprit in China, and regulators there appear willing to consider some sort of compensation.  That’s already under discussion in Europe, and of course it’s been the focus of the neutrality debates here in the US.  Since the US position on OTT-pays was set by an FCC chairman, himself a former VC, who is now stepping down, we may see a shift in US policy and an acceleration of the global trend.

Leaving the politics aside, the problem that the current approach of bill-and-keep has created is a disastrous drop in revenue per bit, a slide averaging 50% per year.  This has put enormous pressure on operators to reduce cost per bit, not so much to cut capex but to “cap” capex at a constant percent of top-line revenue.  Two decades ago, operators were spending about 20% less on infrastructure as a percentage of their sales, and they want to get back to something closer to those historical levels.  In China, where there’s still a lot of untapped opportunity, and in the US, where there’s still market share up for grabs, the mobile market is still sustaining higher investment levels and likely will for some time.  In Europe, with economic pressure compounded by high levels of competition and little chance for major market-share gains, even mobile is unlikely to show much growth.
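The squeeze is easy to see with toy numbers.  Under the assumptions below (revenue per bit halving yearly, traffic doubling yearly, capex capped at an assumed 20% of revenue), total revenue goes flat while traffic explodes, so cost per bit has to fall as fast as revenue per bit does.  The figures are hypothetical illustrations, not operator data.

```python
# Toy illustration of the bill-and-keep squeeze: revenue per bit falls
# ~50% a year while traffic grows, so revenue flattens and the capex
# budget (capped as a share of revenue) flattens with it, even though
# the network must carry far more bits.  All numbers are hypothetical.

rev_per_bit = 1.0     # index, year 0
traffic = 1.0         # index, year 0
capex_share = 0.20    # assumed capex cap: 20% of top-line revenue

for year in range(4):
    revenue = rev_per_bit * traffic
    print(f"year {year}: traffic x{traffic:.1f}, "
          f"revenue index {revenue:.2f}, "
          f"capex budget {capex_share * revenue:.2f}")
    rev_per_bit *= 0.5    # 50% annual decline in revenue per bit
    traffic *= 2.0        # assumed traffic doubling each year
```

With these assumptions the capex budget never grows at all, which is exactly why cost per bit, not just capex, is the number operators are chasing.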

Operator interest in things like SDN and NFV stems from this basic issue.  “Flattening the network”, meaning eliminating multiple loosely coupled protocol layers, promises to reduce both the capex (part of which is investing in layers that serve not the customer but other layers) and opex (“OSI ships in the night” as one operator termed the situation).  Hosting higher-level (out-of-data-plane) functionality in servers is another way to cut costs, and the same architecture that offloads firewall services could be the point of implementation for new service features that aren’t in the network of today at all.

In the enterprise, the situation is similar but with the obvious twist that the network isn’t a profit center for the enterprise, it’s a cost center.  ROI for an enterprise means productivity benefit divided by the augmentation cost.  Trends like BYOD aren’t drivers to spending, because they aren’t convincingly linked to incremental worker productivity.  What is?  Point-of-activity empowerment, which is that fusion of network and IT that I’ve blogged about before.  The question is where it’s going to come from.  Absent some specific approach, an architecture, that can guide investment in IT and network changes, enterprises will sit on their hands…as they clearly are.

If there is a technical lowest-common-denominator here it’s the cloud.  As workers are less and less likely to go to their IT support point for answers and more likely to expect those answers at their point of need, we need to rethink not only how we build applications to support those workers but also how we resource them and connect them.  The cloud is a completely new application ecosystem, one that delivers services that people will pay for and one that delivers information and productivity support that raises the IT benefit case for enterprises.  So SDN or NFV aimed at the current situation will never be successful, because they will never address the real problem.  The evolution from where we are isn’t justified by the next step on the path, but by the goal.  Yes we have to survive the transition to get to the future, but without a promise of a truly transformational future, you’re in Groundhog Day.

So here’s the point.  Vendors are going to be talking about quarterly sales mechanics in their calls, “feet on the street” or “cutting TCO” or “next-gen optics” or whatever.  All of this stuff is just managing the current decline of tech.  A static benefit case in a competitive market creates consolidation and marginalization and nothing more than that, ever.  If you want to transform the industry you have to transform the justification for spending on what you sell.  When you hear vendors talk about their quarter, their plans, their prospects for the future, make sure it’s the future that they’re prospecting for.  Otherwise they’re not digging for gold, they’re digging graves.

Is Juniper Retreating from QFabric or Advancing to SDN?

We continue to see major vendor announcements on SDN, but obviously some are more responsive/revolutionary than others.  Juniper has announced its EX9200 product, which it’s billing as a programmable core switch and also a stepping-stone toward SDN.  There may be some significant news here, but it’s not on the SDN front—at least not for now.

The new switch is firmly aimed at the enterprise, at data center and campus applications, which is a bit of a change for Juniper in itself.  It’s very likely that this positioning comes as Juniper tries to rehab its enterprise business, something it had hoped to sell off.  They’re also playing more to promote their alliance with IBM, which is really their draw for the enterprise according to my surveys.  Juniper isn’t seen as a strategic partner by enterprise customers to the extent that IBM is, or Cisco.  The big question here is whether there’s anything to this beyond simple opportunism in the enterprise space, either by design or by accident.

Juniper has a data center switch—QFabric.  It would be logical to say that a campus/data center strategy would build on that product and extend it across campus and metro scale, but that’s not what’s happening.  Instead, it really looks like what Juniper is doing is admitting that the fabric opportunity isn’t nearly as large as they’d hoped, and that QFabric isn’t as revolutionary as they said it was when it was announced.  Thus, the EX9200 may be Juniper’s attempt to wrestle some data center market share from rivals like Cisco, market share that QFabric simply isn’t going to claim.

An interesting wrinkle in this is that QFabric was the flagship product for Juniper’s then-new Juniper One chip, which has the interesting capability of being micro-programmable to enhance hardware support for new protocols.  I think the chip is a good idea, but it never went into QFabric (which uses merchant silicon).  Now it’s going into the EX9200, and it’s hard not to believe that this spells the strategic doom of the whole QFabric concept.

Juniper never had a shot of doing what it wanted to do with QFabric, but it does have a chance with the EX9200.  I emphasize the “chance” part because the opportunity for the product is based not on glitzy (if retread) technology but on changes in the market.  If there is a driver to data center change, it’s the concept of the cloud and the trend toward point-of-activity empowerment.  Juniper does provide a software-based WLAN controller hosted in the EX9200, but they don’t really spend any time in their positioning explaining why that matters, other than to cite the already-boring BYOD trend.  Where they focus most of their launch effort is on cost management.  There is absolutely no top-line benefit claim made for the EX9200, only the same old TCO-and-cost-defense stuff that Juniper has been stuck on for years.

The enterprise is changing.  Service providers are changing.  Both are changing because of the new fusion of IT and networking, the evolution of both to become not separate disciplines but facets of one technology.  This trend dates back over a decade and it’s emerged as the driver of everything.  What we call “the cloud” is simply a manifestation of that broader trend.  Juniper’s rival Cisco has a leg up on everyone in the network space because they have servers and so can help shape the trend from both the IT and network side.  Those who, like Juniper, don’t have that asset set have to do something truly phenomenal at the boundary.  Junos Space and the Network Director management console aren’t that “something”.  Does this new thing offer APIs for software control?  If so, why isn’t that a focus of the announcement?  If not, then where is the link to justify a claim of SDN support?

Juniper was getting trashed pre-market this morning, probably in part because rival F5 issued poor guidance and in part because that triggered a downgrade of similar players.  That’s fair because I think that if you look at these two companies you see a common thread—a smaller player in a giant’s market who is unwilling to make the big and daring moves that will erase what is otherwise a slow but inevitable decline.  Juniper is still defending the network past, as rivals like Cisco and Alcatel-Lucent try to shape networking’s future.  That is a position that a major incumbent might survive, but it’s not one for a relatively small player to take.  Even Alcatel-Lucent knows this is the time to step up and swing for the seats, as they did with their SDN story this week.  Juniper also has an SDN spin-in, but where is its contribution?  Maybe if you added Contrail to the EX9200 you’d have something.

Juniper could make something of the EX9200 if they had a coherent SDN strategy that started up at the software layer and moved in an organized way through the network.  They actually have most of the pieces, but they don’t seem to have the direction.  Is this a management problem, as Motley Fool has suggested?  Is it just too many router types who don’t want to give up on the glory days?  It doesn’t matter.  What matters to Juniper is fixing their strategy, and the Alcatel-Lucent SDN announcement shows that there’s precious little time to do that.

Alcatel-Lucent Sets a High SDN Bar (But a Low Singing Standard!)

Alcatel-Lucent finally did their SDN announcement, and it was in most ways a major step forward for SDN, perhaps the biggest step taken by any of the vendors so far.  However, as is often the case, the articulation may not do justice to the technology.  In fact, in many ways the material was downright murky, and since I was traveling and unable to schedule a briefing, I had to struggle to get the measure of what was actually going on.

At a high level, there are two SDN visions in play here.  One is the Nuage vision of a data center network that’s essentially a functional superset (“super” in many ways, as you’ll see) of the Nicira overlay SDN-as-connectivity-control model, and the other a much broader cloud-SDN model.  I make this point because if you look at this as a Nuage story, I think you sell it short, and the same is true if you look only at the WAN side.

Let’s start with Nuage.  All software-based network overlays have a common property of subsetting the “real” connectivity of the network below.  With Nicira the primary goal of that was to create separation of tenants in public cloud infrastructure, and you may recall that I’ve been unhappy with that mission from the first.  It does this by creating a virtual Level 2 network through tunnel overlays, which we’ll call “software-defined connectivity” or SDC here just to keep the pieces separate. This basic SDC model is OK, but it just doesn’t solve enough problems to secure a firm path to relevancy.

Nuage’s positioning (and to be fair, Alcatel-Lucent’s positioning of Nuage) focuses a bit too much on the notion that what Nuage is offering is a higher-layer SDC vision.  That’s true in that Nuage recognizes virtual networks at Level 3 and (at least as I read the stuff) also recognizes port-level subnetworking, meaning that you could create and maintain virtual networks using TCP/UDP ports and not just IP addresses.  What makes this important isn’t higher layers per se, but the fact that it gets the SDC out of the data center and makes it a logical partner to both endpoints and real network technology.  Layer 4 (ports) is as high as OSI layer references reasonably go because these are the application-level connections actually used in software APIs.  Nuage has thus covered all of the functional-value waterfront here, the first to announce that capability.
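To see what port-level subnetworking buys you, here’s a hedged sketch, with assumed rule formats and addresses, of membership classification that can slice by TCP/UDP port and not just by IP address: two applications on the same host can land in different virtual networks.  This is the shape of the capability as I read it, not Nuage’s actual implementation.

```python
# Hypothetical Layer 4 virtual-network membership: rules can match on
# IP alone (Level 3) or IP plus TCP/UDP port (Level 4), so virtual
# networks can separate applications sharing one host.  Rule format
# and addresses are illustrative assumptions.

def virtual_network(flow, rules):
    """Return the virtual network a flow belongs to; rules are ordered
    most-specific first, and a rule with no port matches any port."""
    for vnet, rule in rules:
        if flow["ip"] == rule["ip"] and rule.get("port") in (None, flow["port"]):
            return vnet
    return "default"

rules = [
    ("db-vnet",  {"ip": "10.1.1.5", "port": 5432}),  # only the database app
    ("app-vnet", {"ip": "10.1.1.5"}),                # everything else on host
]

print(virtual_network({"ip": "10.1.1.5", "port": 5432}, rules))  # db-vnet
print(virtual_network({"ip": "10.1.1.5", "port": 8080}, rules))  # app-vnet
```

That per-port granularity is what lets the overlay follow application connections rather than hosts, which is the step that carries SDC out of the data center.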

A full-layer SDC is inherently able to span the boundary between data centers, so the Nuage offering can be used to build a cloud resource pool that offers elasticity of VM or application image positioning that crosses even national boundaries.  It also supports natural hybridization of public and private clouds for cloudbursting or failover operation, and it could in my view provide a framework for federation of multiple public clouds.  When you look at the feature set here, what you see is a virtual-network overlay that’s designed to manage resource addressing and connectivity management within any arbitrary cloud and also with its endpoints.

Nuage is very much a DevOps-based process, in that its benefits are presented through a set of operations APIs that fit the model of cloud DevOps I’ve talked about here.  Virtual networks are abstractions, and the abstractions are based on a set of policies that link them to the second piece of this, which is the Alcatel-Lucent part (I’ll get to the details of that in a minute).  Connections are defined via logical groups rather than as physical elements, and policies can be assigned at the group level (and normally would be), which spares the application users of network services from the details of the network itself, even at the virtual level.  Each virtual network also presents a management view so you can see what’s happening at the operations level.
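What group-level policy means in practice can be sketched in a few lines.  The names and structures below are my own assumptions, not Nuage’s API: connectivity is expressed between logical groups, endpoints inherit it by membership, and nothing at the application level ever names a network element.

```python
# Hypothetical group-level policy model: connectivity is declared
# between logical groups, not between endpoints, so adding an endpoint
# to a group grants it the group's connectivity with no per-endpoint
# change.  Group names and VM identifiers are illustrative.

groups = {
    "web": {"vm-1", "vm-2"},
    "db":  {"vm-3"},
}

policies = [("web", "db", "allow")]   # group-to-group, not per-endpoint

def allowed(src_vm, dst_vm):
    """Default-deny: connectivity exists only where a policy grants it."""
    for src_grp, dst_grp, action in policies:
        if src_vm in groups[src_grp] and dst_vm in groups[dst_grp]:
            return action == "allow"
    return False

print(allowed("vm-1", "vm-3"))   # True: web may reach db
print(allowed("vm-3", "vm-1"))   # False: no db-to-web policy
```

Add a new VM to the “web” group and it can reach the database with no policy edit at all, which is what spares application users from network detail.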

All of this is fine—even perhaps great—but it’s still an overlay and you still have to deal with the real network and real connectivity.  Alcatel-Lucent does that with “policy-pull provisioning”, which has a nice euphonic ring and also happens to be a good idea.  The Nuage policies can link down through an SDN controller (not the simple OpenFlow SDN controller but a more sophisticated and functionally complete model) that can take advantage of any and all of the protocols/processes used to control resource behavior in the real world.  This makes the model applicable to mobile networks through the traditional policy-based processes and via DIAMETER, for example, and also with OpenFlow-based devices and IP/MPLS devices.  This aspect of Alcatel-Lucent’s story is somewhat congruent with Ericsson’s SDN vision, but Alcatel-Lucent has provided more detail on the implementation and has embraced a broader control protocol model in terms of public articulation.  Alcatel-Lucent also includes a resource discovery function that drives connectivity control changes when physical network changes occur.

The binding innovation here is the “Software Defined VPN” or SDVPN, which is a Nuage SDC overlay linked with the Alcatel-Lucent physical network piece.  SDVPNs can extend endpoint services and so they are a natural source of end-to-end SDN functionality and also a natural platform for network services.  SDVPN agents will be open-sourced so they could be installed anywhere as a virtual endpoint, which is what makes this concept extensible.  In fact, in my view at least, an operator could build services based on these capabilities and sell them to users as ARPU-generating extensions to basic VPN services.  You could also, again in my view, use this as a foundation for building a cloud-NFV application framework.  Alcatel-Lucent has been active in NFV all along.  The same “hosted SDVPN” concept would appear to make it possible to integrate SDVPN functionality with the real network, which could then bridge the current gap between overlays that see only connectivity and real networks that see real traffic but don’t have any view of connectivity controls applied at the overlay level.

All of this is very smart, presuming that I’m correct in my interpretation of the material, but as I noted there are some issues digging this detail out of the released documentation and there are still points where I think clarification and expansion are in order.  For sure, the concept could have used a clearer articulation, but I do applaud its technical completeness and the fact that it’s going to customer trial this year—it’s not some 2014 vision that maybe will happen and maybe won’t.

For competitors, this story could present challenges.  Because Alcatel-Lucent has taken a truly cloud-centric and service-centric view of SDN for the first time, they’ve made it harder for rivals like Cisco and Juniper, who have been (deliberately or by accident of articulation) glossing over their details and making SDN into more of an evolution of current networks than a partner in cloud networks and cloud services.  They have established a functional litmus test for an SDN implementation by any vendor, and even startups will now have to think outside the data center and think more about the context of the applications/services than about the simple issue of connectivity.  That could refine and focus the SDN dialog, but of course whether it will depends on whether anyone understands what Alcatel-Lucent has done.  Eventually, I think, they will.


Oracle Dips Some More SIP…Maybe

Oracle’s decision to purchase Tekelec, coming on the heels of the Acme Packet deal, suggests that the software giant has something in mind.  The question is what that something might be, and it’s a pressing question given Oracle’s size and potential in the network market.

The obvious possibility is that Oracle is looking to expand its portfolio in support of UC/UCC.  The fact that it’s picked up two companies in the VoIP space actually reinforces this possibility.  You could argue that Acme had some history in deep packet inspection, which has a broader mission for the future—including support for NFV as I noted when I blogged about the Acme deal.  While Tekelec has created hosted versions of some of its voice functions, that trend isn’t really linked to NFV at this point, simply because we don’t have any NFV architecture against which this stuff could deploy.  Content delivery networks were on the target list for NFV in the white paper, you may recall, and we’d been seeing a shift of CDN implementation from appliance to cloud before NFV was even dreamed of.

If UC/UCC is the driver here, then Oracle has to be betting on some major systemic trend that the classic vendors in the UC/UCC space aren’t seeing.  One good possibility is that Oracle is seeing a transition from TDM to IP voice on a more aggressive level, an active migration, and that would open a host of UC/UCC-like services for both business and residential users.  Another possibility is that Oracle is seeing the NFV trend creating a kind of “open IMS” framework where RAN players won’t be able to dominate IMS deployment like they’ve done so far.

A third possibility is that Oracle is seeing the elimination of the PSTN completely, a transition not to a carrier-hosted VoIP model but to an OTT model.  This transition will obviously be resisted by the major carriers and the major network equipment vendors but embraced by the Googles and Microsofts of the world, which would mean that if Oracle wanted to be a player in the future and compete with those two, it might need its own voice strategy.  Everything Oracle has purchased would facilitate a deployment of something Skype-like but with a stronger SIP flavor, at least in the sense of being able to interwork with SIP and PSTN calls.  That would give Oracle three avenues to pursue: sell to operators, sell to OTTs, or deploy its own service set.

If Google and Apple and perhaps even Microsoft are looking at being MVNOs, as I’ve suggested they might be, then Oracle may have to take action now.  Mobility is the major driver of application changes (point-of-activity empowerment, as I’ve been calling it) for both enterprise and consumer, which might make it a major driver of software overall.  Oracle might even be looking at handset deals in some way; Oracle buying Blackberry would be a lot more logical than its buying a network equipment vendor, for example.

How about “all of the above?”  I think there’s clearly a lot of competitive counterpunching in Oracle’s M&A, not least because Oracle is first and foremost a sales-tactical player and not a strategic player at heart.  NFV may not be a specific target for Oracle, but it may be a symptom of a functionality shift from network to IT that Oracle either has to play for its own benefit or see as a pathway through which it could bleed market share.  They just turned in a frankly bad quarter, and their only real strategy (so their earnings call said) was more feet on the street.  That’s not going to cut it, even for a sales-tactical player, if the market is really changing at a strategic level.

The next question is “Who next?” both for Oracle and for competitors.  There are still some SIP players out there (Sonus, for example) and there are certainly Oracle competitors.  IBM has stayed out of this fray, but it’s not too late for it to move.  Players like NSN, who have narrowed their product focus to a pure mobile broadband play, may need to think about buttressing their voice and UC/UCC potential.  In short, this could be interesting.

Is Alcatel-Lucent Going to Announce the Right SDN?

Alcatel-Lucent is scheduled to announce its “data center SDN” vision next Tuesday (April 2nd), and the announcement may well be one of the most important in the company’s history from the perspective of addressing a compelling need.  Whether it will be important in the sense of moving the SDN ball forward, even in the specific confines of Alcatel-Lucent, is what I’ll be watching for.

The lowest hurdle Alcatel-Lucent has to clear is the validation of their Nuage acquisition.  The announcement is a Nuage story according to their advance material, but all that Alcatel-Lucent really has to do to make its decision look strong is to demonstrate a useful mission for SDN.  That’s not all that hard to do, right?

In the data center it is.  If you look at the data-center SDN stuff, it’s tended to focus on creating segmented or overlay networks.  I’m not disputing for a moment that you need some mechanism to segment public cloud data centers to support a large number of users who have to be completely isolated from each other—read VLANs of some sort.  That’s what Nicira has been focusing on, after all, and there are other developments (the IETF’s NVO3, for example) as well.  But the network of the future isn’t made up entirely of public cloud data centers that need multi-tenant isolation, and even if it were, I’d argue that the existing Nicira overlay approach might well suffice for that very specific mission.  As I’ve said all along, you can’t co-opt software defined networking for software-defined connectivity, which is all this stuff does.  Alcatel-Lucent has to be wary of falling into that trap.
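
For perspective on how mechanically thin that segmentation layer really is, here’s a hedged sketch of VXLAN-style encapsulation following the RFC 7348 header layout (the helper functions themselves are my own illustration).  The whole trick is a 24-bit VNI in an overlay header; nothing in the physical network knows about the tenants at all:

```python
# Illustrative VXLAN-style encapsulation per the RFC 7348 header layout:
# 8 bits of flags (the "I" bit set), 24 reserved bits, a 24-bit VXLAN
# Network Identifier (VNI), and 8 more reserved bits.  The functions are
# my own sketch, not taken from any vSwitch implementation.

import struct


def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header carrying the tenant's VNI."""
    if not 0 <= vni < 2**24:
        raise ValueError("VNI must fit in 24 bits")
    # First word: flags byte 0x08 (valid-VNI bit) then 24 reserved bits.
    # Second word: 24-bit VNI then 8 reserved bits.
    return struct.pack("!II", 0x08 << 24, vni << 8)


def encapsulate(tenant_frame: bytes, vni: int) -> bytes:
    """Wrap a tenant's Ethernet frame in a VXLAN header.

    In a real deployment this would be wrapped again in UDP/IP and carried
    between hypervisor vSwitches; tenants sharing the same physical fabric
    are isolated purely by their VNI values.
    """
    return vxlan_header(vni) + tenant_frame


pkt = encapsulate(b"\xff" * 14, vni=5001)
```

That simplicity is exactly the point: overlay segmentation solves multi-tenant isolation neatly, but it sees only connectivity, which is why it can’t be the whole SDN story.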

The second point is that once you cast the multi-tenancy mission aside and look at general data-center needs, you’re struck by the question of just what SDN does.  Yes, it’s a great approach to building better multi-switch large-data-center LANs.  The question is how many of them you are going to need, and also how SDN-linked switches would differ from fabric solutions.  My view is that if you’re looking at cloud data centers in a general sense, meaning public cloud and private/enterprise alike, then you need to rethink the whole architecture and ask what an optimum connectivity solution would be, given an evolving mission that includes private and hybrid cloud but is really dominated by more dynamic application-to-worker and component-to-component relationships.  Remember point-of-activity empowerment?

The next point is the network operator.  I remain convinced that the most fertile field for any new data center architecture is the place where the most green field data centers are going to spring up.  That’s the network operator in general, and the metro in particular.  In the next five years operators could deploy more new data centers than enterprises have in total.  What are their requirements?  At a per-data-center level we can muse about that one for ages, but there’s one reality about these new data centers that is not only unequivocal, it’s fundamental even to the “public cloud” stuff we started with here.  These data centers are highly interconnected.  CDN is an operator-cloud application, for example, and you can see immediately how interconnected it has to be.

That’s what I’m worried about with regard to Alcatel-Lucent’s announcement.  It’s fine to make a data center announcement for SDN, but I’m not of the view that data center by itself will present a compelling story for network operators, and I’m darn sure convinced that if you can’t win with network operators and SDN in the next three years, you may as well pack it in.  Enterprise value for “real” SDN strategies can’t build up that fast.  A market driven by SDN centralization needs, by NFV, and by the top monetization goals of operators, is going to outrun the enterprise in the early years, and this market is going to be looking not at data center but at metro cloud.  It’s very possible to argue at this point that the interconnect of the data centers is as important for the metro cloud mission as the data center networks are, and if that’s true then you can argue that metro SDN is as important as data center SDN.  And I think there’s a heck of a lot more money in metro SDN.

We are already seeing the router guys positioning themselves for the validation of IP in the metro.  Cisco and Juniper have both been taking steps in a product sense and in the promotion of changes in the IETF, steps designed to validate the idea that the changes in the cloud mission, the metro mission, will demand the displacement of basic Ethernet with IP routing.  Why?  Because they have to, because these guys know that what I’ve been saying here all along is true.  All of the money in the network of the future will be made in the metro, so most of the infrastructure that deploys will go there.  If you want to sell routers, sell metro routers.  That’s what the router guys are saying.  So if you want to sell SDN, you’d better be selling metro SDN too.


SDN Nonsense Instead of SDN Cents?

I’m all for having discussions on the impact of SDN.  I’d prefer they have some substance, though, and we have a couple of examples this morning of SDN-impact stories that don’t (in my mind at least) hold much water.

FRB downgraded both Cisco and Juniper yesterday, citing among other factors the view that SDN was going to reduce the switch/router ports deployed by operators by 40% in 18-36 months.  To me this is a particularly astonishing vision, given that I just finished (and published in our Netwatcher journal) our detailed analysis of SDN and NFV market impacts, and found nothing whatsoever to support any claim of port reductions due to SDN deployment in that period.

Look around your network plant, buyers, and tell me how many SDN ports you see.  The current number is statistically insignificant.  Look at the inventories of vendor products supporting SDN today, and tell me how many are not switches and routers.  The number is statistically insignificant.  Right now, if you want to do SDN in either its central (OpenFlow) or distributed form, you do it by running SDN software on the same switches and routers that Cisco and Juniper and others have been selling all along.  There are no current contenders to “replace” switch/router ports with SDN ports because you can’t do SDN without the same darn ports!
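
To see why, it helps to remember what an OpenFlow rule actually is: a prioritized match/action entry in a table that the controller populates on the switch.  A toy model of that (illustrative Python, not a real OpenFlow library) looks like this; notice that nothing in it eliminates ports, it just changes who writes the forwarding table:

```python
# Toy model of the OpenFlow match/action abstraction.  The "SDN" part is
# the rule table pushed down by a controller; packets still enter and
# leave through the same physical ports the switch always had.  Field
# names are illustrative, not a real OpenFlow library's API.

from dataclasses import dataclass


@dataclass
class FlowRule:
    priority: int
    match: dict    # e.g. {"in_port": 1, "eth_dst": "aa:bb:cc:dd:ee:ff"}
    actions: list  # e.g. ["output:2"]


class FlowTable:
    def __init__(self):
        self.rules = []

    def install(self, rule):
        # Roughly what a controller's flow-mod message accomplishes.
        self.rules.append(rule)
        self.rules.sort(key=lambda r: -r.priority)

    def lookup(self, packet):
        # Highest-priority matching rule wins.
        for rule in self.rules:
            if all(packet.get(k) == v for k, v in rule.match.items()):
                return rule.actions
        return ["controller"]  # table-miss: punt the packet to the controller


table = FlowTable()
table.install(FlowRule(100, {"in_port": 1}, ["output:2"]))
actions = table.lookup({"in_port": 1, "eth_dst": "aa:bb:cc:dd:ee:ff"})
```

The controller maintains the rules; the traffic still arrives on, and leaves by, physical ports, which is exactly why “SDN ports” displacing “switch/router ports” in a year or two makes no sense.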

This isn’t to say that there aren’t reasons to downgrade Cisco or Juniper.  I talked about a previous Juniper downgrade in yesterday’s blog.  There is in fact a change taking place in network infrastructure spending, but that change isn’t technology-driven, it’s profit-driven.  Operators are going to spend on the parts of their network that generate the most ROI, and there is very little profit to be made by global, uniform, open connectivity.  Most revenue, as I’ve said, is derived from services that move traffic for 40 miles or less, and nearly all the incremental revenue fits that pattern.  Clearly building traditional edge-core router networks isn’t fitting the profit bill, so less will be built.  But offering SDN technology, even if it had some impact on network costs, isn’t likely to change the profit reality.  Will we stop caching content because of SDN?  Hardly.  Will we build one big cloud data center in Missouri to serve the whole US just because we could use SDN to get to it?  In the end, SDN is a somewhat cheaper and more controlled way of moving traffic, but it’s not moving it faster than light and at a negative cost.

That gets to a second story, one in Light Reading asking whether SDN isn’t going to hurt operators because it creates all this bandwidth on demand, encouraging enterprises to replace their persistent-capacity services with rent-a-bit services.  First, nobody says that SDN is going to offer bandwidth on demand any better than the technology options we already have; we can deliver elastic bandwidth with Ethernet and IP.  Second, nobody says that having elastic bandwidth is going to reduce consumption of bits or produce lower costs.  Generally, enterprises say they’d use elastic bandwidth in applications like failover of data center assets into a public cloud.  That sounds like a new application to me, one that generates new revenue rather than killing off old revenue.

In the real network, the network that ROI builds, the network Cisco and Juniper have to really face, it’s not SDN that’s the driver, it’s NFV.  NFV addresses something that is very real, the displacement of service functionality from specialized devices to cheap commodity servers.  If you were to look at how centralized, software-controlled SDN could be implemented, the answer is likely to come out of NFV.  Your future security offerings are NFV offerings, and so are the CDNs that deliver your content and the components that support your mobile roaming and your mobile payments.  NFV defines the partnership between IT and the network.  The network of today isn’t about communication but about service delivery, and NFV defines the hosting of service components.

Well, maybe it does.  NFV is just getting started, and it’s treading a path that other bodies have tried to blaze in the past without success.  I’ve personally worked on standards aimed at creating a partnership of hosted elements and connectivity for five years now.  We still don’t have them.  So now you understand why I think it’s so silly to be thinking about 40% penetration of SDN ports in three years or the erosion of business service revenues by bandwidth on demand.  We’re still tangled up in the starting gates of most of the value initiatives in networking.  The guys who ought to be downgraded are the companies who are doing nothing to break out, and we’ll know who they are by this fall.  We’ll also likely know who will win.  Why?  Because 2014 and 2015 are the two years my model says that carrier capex will increase and opportunity factors will start to drive a new network model.  That means the planning cycle this fall, the cycle every vendor had better be ready for, may be doing vendor triage.

Is Cisco Beating Oracle Where it Counts?

Software giant Oracle surprised the Street with a pretty major miss on their top line, sending their stock tumbling in the after-market yesterday and the pre-market today.  The truth is that the Street may be underestimating the questions here, because all of the indicators for the tech space say that Oracle should be doing better.  Software has a more direct connection to productivity gains and thus can drive benefits and projects better than hardware.  Hardware is nearly impossible to differentiate, and software is all about differentiation.  So what the heck happened to these guys?

Market shifts, for one.  Business confidence suffered in the last quarter for sure, and that may have had an impact on projects.  The problem is that it didn’t stall Oracle’s rivals as much as Oracle, and our indicators tell me that the real problem has to be something beyond simple macro conditions.

One issue is clearly the hardware.  Oracle’s hardware numbers slipped again, and it’s pretty obvious at this point that they will never make a go of general-purpose servers.  Yes, there’s a product transition, but why has it been so dramatic?  Companies have these all the time.  What’s disappointing is that they can’t make up for that server shortfall by creating explosive database appliance sales (30% growth here simply isn’t enough; they should have been able to double that), which means that if they don’t fix their problems their whole hardware strategy and the Sun acquisition is toast.

The second, and bigger, issue is Cisco.  Our surveys tell us that Oracle is getting hurt by Cisco more than by IBM or HP, because Oracle’s hardware business has traditionally focused on the communications applications that Cisco has gone after.  UCS has been a big success, perhaps the most significant success in all of networking even though it’s not a network product.  Furthermore, with operators talking about SDN and NFV, servers could be the mainstay of carrier deployments and infrastructure in the future.  Oracle needed to hit the ground running in all of these areas, and it’s just not moving the ball at all.  I’ve never seen them out-marketed as thoroughly as they have been in the comm-server area.

But all of this pales into insignificance compared to Oracle’s problem with the cloud.  The company’s anti-cloud stance in the early days, stated with Ellison’s usual forceful style, has made it very hard for Oracle to create an effective cloud position.  They’ve failed most particularly in generating a strong architecture for a “Java cloud” and for PaaS, even though it’s obvious that these two areas could be highly differentiating for what is, after all, a software company.  A good cloud strategy with a strong articulation of database appliance relationships to cloud services could have helped the appliances but also showcased the servers.  Solaris is the best commercial real-time OS on the market, and it’s amazing that Oracle hasn’t been able to leverage that truth.

The most alarming thing is that cloud strategy is darn sure where Cisco is taking the game in the network-server world, and nobody has to ask why.  Cisco is naturally good at IT/network fusion, and in any case what the heck is a cloud if it’s not network and IT?  Oracle seems to have missed this obvious point, not perhaps completely in its marketing but at least in an effective sense.  They need leadership in the cloud.

Ironically, this quarter might drive Oracle to do something truly, fatally, stupid while trying to do something smart.  There are already insiders there who tell me that one camp of executives believes that Oracle’s problem is that it needs network equipment like Cisco has, that they should buy Juniper or Brocade.  I think that’s the wrong approach.  You don’t win a puzzle-solving race by buying more pieces to make the puzzle bigger.  The fusion of IT and networking that forms “the cloud”, that’s driving SDN and NFV, is a boundary function.  You don’t have to be in the network to own the stuff that network technology initiatives are working to move out of the network.  If you’re in the IT space with the right boundary position, you just open your arms and gather them in.

At this point, it would seem that it’s too late for Oracle, but I don’t believe that either.  Cisco could have shut them down this quarter, eradicated Oracle’s own hopes of becoming a full-spectrum IT giant and thus killed off a formidable risk for Cisco, but Cisco is still being too conservative at that network/IT boundary.  Falling prey, as vendors nearly always do, to the obsession with defending current markets and products (which, in the long term, will decline in TAM) has kept Cisco from taking the steps that would have set up a conclusive Cisco victory by about 2016.  And that means that Oracle could still, if they were smart and aggressive enough at just the right place in the network/IT picture, pull off a comeback.  But make no mistake: this quarter was a serious stumble, and it’s going to take something big to really fix it.  Otherwise Cisco has stepped over its first IT victim on its road to being the self-described IT leader.