Juniper Creates Two Degrees of SDN Speculation

One of the fascinating things about SDN is that it’s become even fuzzier than the cloud.  We can’t agree on anything about it these days, including the very basic question of how we’d recognize an SDN if we saw one.  This level of uncertainty is magnified when we have SDN-related announcements, because more often than not they don’t really announce the details.  We’re left to speculate on what SDN is, and then on whether this new thing is SDN or not.

Juniper’s decision to buy Contrail is one of these two-degrees-of-speculation things.  Juniper had previously talked about an SDN vision that had the basic layered structure that I’ve been calling for in assessing SDN functionality.  You need cloud exposure at the top, network telemetry at the bottom, and SDN in between.  One of Juniper’s key technical guys, Kireeti Kompella (a self-described “MPLS Fiend” who even had that as his title on his business cards at one point) went to Contrail and that’s led to speculation that Contrail is one of those spin-in plays we’ve come to know and love.  Cisco and Alcatel-Lucent both have spin-in strategies for SDN already.

But what are they spinning in with the Contrail deal?  Kompella spoke at a technical conference at one point and talked about the Contrail approach as an “OpenFlow compiler”, a software product that took network functional requirements and converted them to OpenFlow commands that created a suitable network.  If you took this literally, then Contrail is a kind of SDN supercontroller, enveloping likely all three of the layers of SDN I’ve been talking about.  The question is whether we should take this literally.

Contrail is in stealth mode, but “stealth” today means more “coy” than “silent”.  The company’s blog talks about the dualistic nature of virtual and physical networking for the cloud, suggesting that the two will evolve in parallel.  They also go through a laundry list of what’s wrong with various network technology solutions to the problems of SDN—problems that include scaling to large numbers of hosts.  A particularly interesting point one blog makes is that you really should limit VLANs to about 500 hosts maximum to avoid multicast issues.

Virtual networks are either overlays or they’re embedded and implemented on network devices.  Nicira is an overlay strategy, and things like VxLAN are embedded virtual approaches.  If Contrail goes overlay, they would have to address how the “real” network gets “software-defined” since overlay networks are just traffic to the underlying infrastructure.  If Contrail goes embedded, they have to address how you can define a new network protocol as a startup without simply contributing it as a standard and losing control.  If that happens, what did Juniper pay over $170 million for?

It would seem that, at a minimum, Contrail could look like a cloud DevOps tool, something that takes virtual directions and then tunes physical infrastructure as required, perhaps as an implementation of a cloud API like Quantum.  That would be helpful, I think, but it doesn’t raise the SDN bar relative to Juniper’s competitors.  So, following my usual practice of looking for the BEST possible play, Contrail might be mixing DevOps and virtual overlay principles.

Imagine that the “upper-Contrail” looks a lot like Nicira and a lot like the “Cloudifier” layer in my SDN model.  It presents a virtual set of network services to the cloud just like Nicira does and just like Quantum expects.  One part of those services is the flexible segmentation that we’ve come to know as part of cloud virtual networks; you can define as many virtual networks as you want independent of lower-layer protocols.  Another part of the services COULD be a way of centralizing and supporting things like topology discovery without spanning tree, and multicasting without using the low-level Ethernet tool.  Agent-based multicast, for example, might mean sending a multicast message to a virtual agent that then distributes it to the members of your virtual network.
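
To make the idea concrete, here is a minimal Python sketch of what agent-based multicast over a virtual network might look like.  The class and method names are mine and purely illustrative; they are not drawn from Contrail or any other vendor.

```python
# Hypothetical sketch of agent-based multicast in an overlay virtual network.
# Names (VirtualNetworkAgent, deliver) are illustrative, not any vendor's API.
from collections import defaultdict

class VirtualNetworkAgent:
    """Central agent that replicates 'multicast' messages as unicasts
    to the members of each virtual network, avoiding low-level Ethernet multicast."""

    def __init__(self):
        self.members = defaultdict(set)   # virtual network ID -> set of host endpoints

    def join(self, vnet_id, host):
        self.members[vnet_id].add(host)

    def leave(self, vnet_id, host):
        self.members[vnet_id].discard(host)

    def multicast(self, vnet_id, sender, payload):
        # Fan the message out to every member of the virtual network except the sender.
        for host in self.members[vnet_id] - {sender}:
            self.deliver(host, payload)

    def deliver(self, host, payload):
        # Stand-in for the real unicast send over the overlay tunnel.
        print(f"unicast to {host}: {payload}")

# Usage: two hosts join virtual network "tenant-a"; a message fans out to the rest.
agent = VirtualNetworkAgent()
agent.join("tenant-a", "10.0.0.1")
agent.join("tenant-a", "10.0.0.2")
agent.multicast("tenant-a", sender="10.0.0.1", payload="ARP who-has 10.0.0.2")
```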

The lower-Contrail could look more like DevOps, manipulating the northbound interfaces of current standard protocols and creating and controlling the real infrastructure behaviors that the sum of the virtual networks demands.  This would be a pretty decent thing for an avowed MPLS fiend to work on, I’d say.  The lower-Contrail could also be made to work with current network devices or with OpenFlow devices, making the “OpenFlow compiler” characterization true, and it would harmonize the physical and virtual evolution that Contrail’s blogs talk about.

So is that what Contrail is doing?  If it is, it could be significant enough to not only justify the $170 million price of the deal but also Juniper’s SDN strategy.  It could be a contender, and thus Juniper could become one in a single stroke.  But of course I have no idea whether this is what Contrail is doing (I really don’t, for those who might wonder if I have inside information).  And Juniper hasn’t been a company to swing for the seats with that sort of aggressive positioning in the past.  Contrail could still be little more than a way of provisioning Juniper boxes for the cloud, something that may have some tactical value but would have no strategic value.  In fact, it would call into question whether Juniper really has any SDN plans at all.

We have, then, not only two degrees of speculation about Contrail, but three degrees of speculation about spin-ins.  Alcatel-Lucent, Cisco, and now Juniper are pinning part or all of their SDN hopes on stealth projects.  It’s nice to read a good mystery, but it’s kind of expected that in the end you’ll find out who did what.  Hopefully we’ll find that out with all this SDN stealth positioning too.


Apple TV: Pipe Dream?

The big flap today is yet another resurfacing of the notion of an aggressive move by Apple into the TV space.  On one hand, the move seems not only logical but inevitable.  Apple’s future obviously depends on its ability to keep launching new spaces to counteract the fact that its high-end model will always lose market share over time.  Look at smartphones and, increasingly, tablets.  On the other hand, TV isn’t the ideal place for Apple to go.

Television is an appliance married to a broadcast model.  As a viewing culture, we expect things to “be on” at a specific time and we expect that there will be a regular refreshing of content.  Sure there’s time-shift viewing through DVR and VoD, and sure there’s long-tailed content to fill in the annoying periods when nothing seems worth watching, but survey after survey shows that the primary value of television is still the link with broadcast network material.  Device-based viewing may be adding to total viewing time but it doesn’t seem to be subtracting from normal channelized content consumption.

If you look at this from the perspective of a “TV” appliance player, or prospective player, the obvious question is how you differentiate when most of what the viewer wants is table stakes.  All of the recent experiments with alternative TV have taken the tack of chasing the “nothing’s on” market, the viewers who don’t find anything on the schedule for a given timeslot that they’re prepared to watch.  Most also take a swipe at the time-shift viewer, trying to figure out how to best address the people who aren’t around when the stuff they want to watch is scheduled.

My modeling has always suggested that the only fruitful path toward an Apple-TV-like product would be built around what we could call “virtual channels”.  Everyone gets a plethora of stuff with their TV contract, most of which they will literally see only snippets of as they tune past it with their remotes.  And that’s good because they’d hate the material; it’s just part of the bundle.  But suppose you have a channel guide that starts with classification of your viewing “mood”.  Suppose it learns what you watch under specific conditions, and suppose it then lays out these virtual mood channels by grabbing shows in real time, shows in deferred viewing, and downloaded long-tailed content.  It lets you shuffle things around, jump to a different mood.  You might actually like it.  But would Apple find it a success?
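
Before answering that, here is a toy sketch of how such a “virtual mood channel” might be assembled from live, time-shifted, and long-tail sources.  Everything in it, from the mood tags to the ranking rule, is hypothetical and mine, not Apple’s.

```python
# Illustrative sketch only: assembling a "virtual mood channel" lineup from
# live, time-shifted (DVR), and long-tail content. All names are hypothetical.
from dataclasses import dataclass

@dataclass
class Show:
    title: str
    source: str      # "live", "dvr", or "long_tail"
    moods: set       # moods this show has historically matched for the viewer

def build_virtual_channel(mood, catalog, slots=5):
    """Pick shows that fit the viewer's current mood, preferring live broadcast,
    then time-shifted material, then downloaded long-tail content."""
    source_rank = {"live": 0, "dvr": 1, "long_tail": 2}
    candidates = [s for s in catalog if mood in s.moods]
    candidates.sort(key=lambda s: source_rank[s.source])
    return candidates[:slots]

catalog = [
    Show("Evening News", "live", {"informed"}),
    Show("Sitcom Rerun", "dvr", {"relaxed"}),
    Show("Nature Documentary", "long_tail", {"relaxed", "informed"}),
]
for show in build_virtual_channel("relaxed", catalog):
    print(show.source, "-", show.title)
```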

One reason it probably wouldn’t is that it still has that nagging problem of commercials/ads.  If you personalize TV you break the “Lucy is on at 9” viewing model that guarantees a rapt audience for a given show, an audience advertisers will pay for.  Experience with online ads shows that while you can still target ads to people, perhaps even better, with online viewing, advertisers use the better targeting to reduce the number of people they target, and thus reduce cost.  The overspray of commercials helps fund shows; take it away and you have less content.  That shortage works against Apple’s desire to have pricing power and high margins in its ecosystems.

The other problem is that all of this is more a “service” than a product.  An Apple TV is at risk of being nothing more than a large-format thin client.  Apple doesn’t want to build that sort of thing, not in TV and especially not in phones and tablets.  How do they adopt a TV model that demands that role, whose features and capabilities are necessarily developed in “the cloud”, and not have that model creep into spaces Apple needs to protect?

Could they move to a pure fee-for-play TV business?  iTunes for TV?  The problem with that is that the cost would have to compete with the broadcast models of cable and satellite companies, and it’s hard to see how Apple could pay content providers and still make a nice profit if they discounted the programming enough to hold the iTunes approach to even half-again the cost of broadcast subscriptions.

I think Apple is earnestly trying to figure this out, as the WSJ article today suggests, but I think they’re looking deeper into the business model than into the TV design.  They need to figure out a way of making an appliance-centric service out of what’s always been a network-centric service.  That’s going to be VERY hard, and if they fail then they risk a lot of their aura of market invincibility.  So maybe the fact this is taking so long means that it’s got no good outcomes possible.  Not for Apple, for Google, or for anyone.

An SDN Taxonomy, and Reality Check

The continued buzz over software-defined networking (SDN) is creating the usual “specification creep”, where so many things are claimed to be part of an SDN strategy that we lose sight of what such a strategy would be able to accomplish and what it might look like.  This seems like a good time to summarize those points, with links to current market issues.

First, SDN has three “flavors”: loose construction, strict construction, and no construction.  Loose SDN means that software control over network behavior, including segmentation of address space and traffic control, can be exercised by manipulating current protocols and devices.  Strict SDN means that the adaptive behavior of current devices is eliminated in favor of central control, creating forwarding paths by explicitly setting forwarding-table entries.  No-construction SDN means that the SDN is simply an overlay that segments address space but doesn’t involve network devices at all.
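
For illustration, here is a minimal sketch of the strict-construction model: a central controller that computes a path over its own view of the topology and then explicitly pushes a forwarding entry at each hop.  The install_entry call is a stand-in for whatever southbound protocol carries the command, OpenFlow or otherwise; it is not a real API.

```python
# Minimal sketch of "strict construction" SDN: a central controller computes a
# path over its topology view and explicitly installs a forwarding-table entry
# at each hop. install_entry is a stand-in, not a real southbound API.
from collections import deque

TOPOLOGY = {            # controller's view: switch -> neighbors
    "s1": ["s2", "s3"],
    "s2": ["s1", "s4"],
    "s3": ["s1", "s4"],
    "s4": ["s2", "s3"],
}

def shortest_path(src, dst):
    """Breadth-first search over the controller's topology graph."""
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in TOPOLOGY[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

def install_entry(switch, match, next_hop):
    # Stand-in for pushing a forwarding-table entry via the southbound protocol.
    print(f"{switch}: match {match} -> forward to {next_hop}")

def provision_flow(src_switch, dst_switch, match):
    path = shortest_path(src_switch, dst_switch)
    if not path:
        return
    for hop, next_hop in zip(path, path[1:]):
        install_entry(hop, match, next_hop)

provision_flow("s1", "s4", match="dst=10.1.1.0/24")
```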

Second, central control of network services (“strict construction”) demands that the topology and state of the network be reflected back to the control point, or proper decisions on forwarding and traffic management cannot be made.  There are no mechanisms for this in the current standards, and it’s reasonable to assume that “probes” or “monitoring” could provide much of this information.  However, the control paths themselves have to be established in some way, which creates a “boot-from-bare-iron” problem for strict-construction SDN that’s still to be resolved.  How can you command a device to set up paths without paths to carry the commands?

Third, it is HIGHLY UNLIKELY that strict-construction SDN could scale to a large global network.  At the least, we’d have to conceptualize the network as a series of SDN domains where SDN principles were obeyed within the domain but where something else (perhaps even BGP) controlled the boundary interactions.  SDN is an “intranet”, so it works best inside something where there’s high traffic to generate a lot of traffic engineering benefits, but not a lot of ROUTES that would stress central control.  Inside a cloud is great, and so is inside a CDN or even a metro area.  Metro, in fact, is likely the secret sweet spot for early SDN deployment.

Fourth, adherence to OpenFlow is not a useful test for SDN compliance at this point in time.  OpenFlow is far from sufficient to create/define an SDN with adequate functionality.  You have to define an SDN by its northbound interfaces—what does the SDN let the applications do to define the network?  Segmentation, security, traffic management, quick response to failures…all of these capabilities have to be available to software or we’re cutting the benefit case and eroding the value proposition.

The real structure of the SDN has three layers—the “Cloudifier” that presents an SDN virtual network interface to things like a cloud computing management API (OpenStack’s Quantum, for example), the “Topologizer” that sits at the bottom and collects information about the network itself, and “SDN Central” that maps services to the top based on conditions below.  OpenFlow simply provides a command conduit between SDN Central and the lower devices.  It’s all this stuff that a vendor SDN strategy has to provide, not just OpenFlow support.  Put into this perspective, we can see vendor strategies largely dividing into three camps.
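
Before looking at those camps, it may help to see the three layers rendered as code.  This is a bare-bones sketch that simply mirrors the vocabulary above; the classes and methods are illustrative only and do not correspond to any vendor’s product.

```python
# Hedged sketch of the three-layer SDN model described above. The names mirror
# the blog's terminology (Cloudifier, SDN Central, Topologizer) and nothing more.
class Topologizer:
    """Bottom layer: collects network topology and state."""
    def current_state(self):
        return {"links": [("s1", "s2"), ("s2", "s3")], "utilization": {"s1-s2": 0.4}}

class SDNCentral:
    """Middle layer: maps requested services onto the network based on conditions below."""
    def __init__(self, topologizer):
        self.topologizer = topologizer
    def realize(self, service_request):
        state = self.topologizer.current_state()
        # A real system would compute paths here and push them down via OpenFlow
        # or existing device interfaces; this sketch just records the mapping.
        return {"service": service_request, "mapped_over": state["links"]}

class Cloudifier:
    """Top layer: presents a virtual-network interface to a cloud API (e.g. Quantum)."""
    def __init__(self, central):
        self.central = central
    def create_virtual_network(self, tenant, segment_count):
        return self.central.realize({"tenant": tenant, "segments": segment_count})

stack = Cloudifier(SDNCentral(Topologizer()))
print(stack.create_virtual_network("tenant-a", segment_count=2))
```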

The first camp says “SDN is real and I want a piece of the action”, and this means quickly creating the functionality of those three critical SDN layers and exercising that functionality through existing network interfaces and protocols.  Why?  First, because we can’t fork-lift our way into SDN.  Nobody will go.  Second, because vendors don’t obsolete their own equipment, they migrate it.  A three-layer SDN cake can control OpenFlow devices in the future, but it can also control current devices in the present.  Even where OpenFlow is supported on those devices, it’s likely the network would work better if SDN were applied in loose-construction form while those devices dominate.

The second camp consists of those who are actually looking at OpenFlow as the heart of SDN.  One sub-group of this camp is focusing on controlling devices without specific regard for how the software decides what it wants done.  The three layers of the mandatory SDN cake have to be provided by some unspecified partner.  The other sub-group is simply SDN-washing.  They do OpenFlow and say they have an SDN strategy.

The final camp comprises those who bound the scope of SDN application to contain the issues of our three layers.  They may be data center players, or metro players, or optical/SDN players, but the common point is that they’re limiting the impact of the general lack of development in our three-layer critical SDN structure by limiting the scope of deployment.

What makes a “good” SDN strategy, then?  The answer in my book is a solution that starts with our third camp, defining a market entry point for their SDN that is, first, one where incremental deployment is likely and, second, one that can showcase benefits quickly.  This must then be extended with an ARCHITECTURE for SDN that creates our three-layer cake and lets it be extended using strict-construction principles to “intranet” SDNs and by loose-construction principles to global networks.

Who has this?  Nobody, yet, or at least nobody that’s articulating it in public.  I think Cisco is aiming for this market, and from the recent announcement of a stealth SDN startup partner, Alcatel-Lucent is likely doing the same thing.  For the other network vendors, we simply don’t have enough information.  Ericsson has arresting metro-SDN capability they inexplicably won’t talk about.  Huawei and Juniper have an architecture they don’t seem to be applying to any products, and NSN has been largely quiet.  Startups like Big Switch and Plexxi have components of the solution that need to be framed as part of a larger story; they’re an SDN intranet looking for something to be “inter” with.  If Cisco or Alcatel-Lucent (or both) manage to put the intranet-plus-architecture story into play, they will put tremendous pressure on their less-articulate competition.  That may be what Cisco’s Chambers meant when he said that Juniper might face unexpected SDN headwinds.

We’ll likely see some real SDN progress in 2013, but for sure we’re going to see more SDN obfuscation.  Let’s try hard not to lose sight of the SDN mission and value proposition in the marketing melee.


“The Days of Boxes are Over”: Does Cisco Mean It?

Cisco is expected to announce a major strategic shift into software and services today.  Why?  “The days of boxes are over.”  That’s a quote from Cisco’s Chambers that’s welcome from my perspective because it’s true.  Of course, it’s been true for a while and Cisco has only now embraced that reality, but at least they are finally seeing the light.  That’s more than can be said for their competitors.  The bad news is that while Chambers is right about the twilight of the boxes, he’s wrong about motivation.  That leaves those competitors with a way out.

According to Chambers, the thing that’s driving the future is what most would call the “Internet of Things”, the expansion of the Internet from serving human clients to autonomous devices.  Call it M2M if you like.  Respectfully I disagree.  It’s not that M2M isn’t real, but that it’s like video—it happens to the extent that it doesn’t cost much.  We need to look to a revenue future based on things that can provably generate revenue.  Thus, the agent of change is mobile broadband and the way that always-on empowerment impacts the behavior of consumers and workers.  I think Cisco wants to see the other side of the picture because some of its competitors have strong credentials in the mobile space, credentials Cisco lacks.  Cisco not only needs to devalue these credentials, it needs to be able to promote its own story unfettered by the rest of the network vendor masses.  “We wouldn’t be getting into wiring oil rigs if we didn’t think we could get 40% share” says it all.  It would be impossible for Cisco to get that kind of market share in mobile broadband.

That doesn’t mean that Cisco is walking away from the mainstream network apps, though.  They’re just looking first at the unoccupied spaces.  You don’t have to invade your own territory, after all, and Cisco has a dominant market share in switching and routing.  The trick is to get a new market for new TAM, but at the same time start edging over toward the valuable parts of the mobile broadband space.  And remember that I’ve said the driver was mobile/behavioral symbiosis and not mobile services per se.  While there are plenty of incumbents in the latter, Cisco’s competitors have been weak in developing the former.  That huge space is still on the table, and Cisco I think is planning to go after it but not in a frontal assault.  It’s going to do an end run.

I think Cisco wants to make IP networks application-responsive, not by eroding the value of routing and switching with stuff like OpenFlow but by adding features and capabilities within the context of existing (read, “existing CISCO”) devices.  I think Cisco wants to create an architecture in which devices can spawn the applications that support them and process their data, which is valuable not only with M2M but also with appliances.  I think they have been doing M&A to support those goals, and will do more.  And if they succeed they do have a shot at challenging the giants of IT, of being the “next IBM”.  If they succeed at THAT, they leave all their current competitors in the dust.

So how do competitors respond?  Not by playing Cisco’s game, but by hitting the weak part of the story.  M2M is still not a big-ticket item today, and the stuff that supports it best could just as easily—or more easily even—evolve out of mobile/behavioral applications as the other way around.  If Cisco’s competitors lock down the architecture for the mobile cloud, then they cement themselves in the current richest space and they can still easily evolve to cover Cisco’s M2M target.  Furthermore, a real and open SDN story makes it harder to tell an opportunistic and proprietary one.  A competitor who has both a mobile/behavioral architecture and real SDN can make Cisco’s transition to an IT company a lot harder.  Particularly if Cisco is really still clinging to traffic, not software.

That’s what generates all those “ifs” about Cisco’s own course.  The big risk for Cisco is continued self-delusion.  Chambers may have said that the days of boxes were over, but you can read Cisco’s comments so far as a bit of “fingers-crossed promising”.  Is Cisco looking at M2M simply as a traffic-generator, as it has always seen telepresence as being?  Down deep, does Cisco want to kiss software babies while continuing to bet under the covers on more bits to push?  If it does, then this initiative will be a monumental failure.  If competitors avoid going over the same cliff as Cisco, if they make boxes the story of the past, then Cisco could be one of the “IT companies” that Chambers says could be casualties in the future.

What’s Behind the M&A in Network Monitoring?

I got an email on the Emulex/Endace deal, making among other things the obvious comparison between the move and deals like NetScout/OnPath and Riverbed/Opnet.  I think some of Cisco’s recent deals, as well as moves by VMware/EMC, could also be called “related”.  The problem I have at this point is deciding just what the common factor is.

Strategy is like mob psychology.  A charismatic leader raises a banner and attracts followers to the message—for about the first 100 people.  After that, the new converts are just joining the crowd and not the cause, because eventually the mob itself obscures the original stimulus.  So we have to look through the mob to see if there really is any cause at the core.

To my mind, the only possible strategic driver for all the sudden interest in network telemetry is the broad shift that was started by the cloud and is now being advanced with specific network initiatives like SDN and NFV.  Whether you agree with the idea of having network intelligence centralized, or you just want networks to be more explicitly controlled by software, there’s no substitute for knowing what you’re doing if you’re controlling something.  If network devices today are wired to adapt to conditions and you propose to override that adaptation in some way, you have to get a handle on conditions before, during, and after your changes.  That would seem to validate the notion that network telemetry could get a LOT more strategic.

Think a moment about this.  If we have a “cloud network” as a virtual-network overlay a la Nicira, could we deal with the fact that such a network can’t impact network performance (since it’s built transparently over the network and not by or in it) through telemetry?  No.  In fact, since a network is a black box to an overlay protocol, we can’t even associate telemetry we might receive with any element of that virtual network.  Much less do anything if we could in fact figure out what we were measuring.  So what this means is that the craze to find out what’s happening in the network has to feed something below “the cloud”, meaning it has to feed an SDN/NFV initiative simply because those two things are the strategic initiatives that are recognized by buyers and vendors.  Absent a link to the special context of these things, there’s nothing useful we can do with telemetry that we couldn’t do all along.  If that’s the case, then the current M&A craze is either crowd-following or consolidation.

If all this network monitoring stuff is going to be useful, then it has to either be subsumed into a movement to create a centralized network control process such as that SDN proposes and which at the moment isn’t being developed by any standards process, or it has to be owned by a network equipment vendor—nay, a GIANT with enough market share to be able to actually add value to a lot of networks.  That’s why Cisco’s moves here are important, and why everyone who isn’t Cisco has to appeal to the higher architecture in their positioning, or they’re not justifying strategic value.

Emulex/Endace, and NetScout/OnPath, have so far failed to establish that connection to the higher layer.  They could do that, of course, but I always wonder when somebody does something big and doesn’t give a big reason, whether there’s a justification for holding back.  Maybe they’re just not thinking big?

The fact is that in the cloud, in SDN, in NFV, we’re doing the classical “groping the elephant” thing.  Vendors are grabbing on to something that has a vague connection to a little part of a big story and then claiming the big story as their own.  In my fall survey, both enterprises and service providers were almost unanimous in saying that while they welcomed the concept of SDN and the cloud, they were being presented with pieces of a story that they could still not grasp (or even see) in its glorious whole.  Absent that glorious whole, there’s no big story and there’s no grand cause, and no revolution.  There’s just a mob.

Why Cisco and Apple Should Worry…or Not

Today, we’re digesting some interesting data points on two giant market players, Apple and Cisco.  Both companies have been the subject of Street research focus and both have demonstrated a mixture of weakness and strength.  Because these two players have such a dramatic influence on their markets—and through them on our lives—we need to understand what’s really going on, so let’s look at Apple and Cisco and see.

Apple is arguably the most significant market driver in the whole of technology.  Their whole appliance drive, culminating in the iPhone and iPad, literally launched whole industries and created a truly seismic shift in the mobile broadband space.  The company has become an icon, but underneath that iconic status is a concern that Apple may simply lose its capacity to innovate.  That’s likely what sent the shares tumbling yesterday.

Concerns are warranted, but not as much about innovation as about TAM.  Apple’s challenge is that you can’t create a broad market for high-end products, so your strategy has to be either to surrender prices and margins and go after the broad market, or to limit yourself to the segment that can pay your price.  The latter move will obviously mean you’ll have to jump into new spaces regularly to refresh your TAM, and this is what Apple supporters need to be worrying about.  How many gadgets can we expect yuppies to hang from body parts so Apple can continue to leap from opportunistic stone to stone?

The manifestation of Apple’s challenge can be found in its cloud position.  Of all of the major players in the tech space, Apple has the most anemic cloud story.  Why?  Because a rich cloud begets a poor device; ceding service intelligence to the network makes the device simply a front for the service and virtually guarantees a price-based market in appliances.  Apple wants a high-margin market.  But the classic problem with “market denial” is that you can deny progress yourself, but it’s hard to force competitors to do the same.  Especially when the network operators (AT&T, Verizon, etc) want MORE customers and are the primary conduits for moving new devices into customer hands.  Android’s success is less due to Google than to operators who can’t sell smart data services to people with older dumb phones.

Cisco’s incumbency is in network infrastructure, and there they enjoy the same kind of high margins that Apple does with appliances.  While they’ve had some market-share ups and downs, Cisco has always been a sales machine and has exploited its leadership well.  In addition, competitors to Cisco have been fairly non-insightful in developing the kind of “new-problems-demand-new-solutions” story that’s needed to unseat a market leader.  But Cisco under the covers has that same TAM problem; routing and switching isn’t growing fast enough to make Cisco a growth company, and gaining market share normally requires the same kind of new-problem story for an incumbent as for a competitor.  The problem is that incumbents don’t want to say that new problems demand new solutions for fear they won’t be part of the shift.

The Street, after a Cisco conference, is saying that Cisco’s position is much better than most people believe.  I got two research reports on that thesis just this morning, and the reports are true and insightful at one level and dangerously short of the mark in another.  Cisco’s position ISN’T better, but its competitors’ position is worse.

Every single strategic initiative in networking today is aimed at making switching and routing into the very kind of market that Cisco doesn’t want to have.  Software-defined networking (SDN) is aimed at substituting central control for distributed, adaptive, device behavior in building networks.  That would likely bring about lower device prices, and also radically change the value of incumbent vendors by making incumbent networks just candidates for replacement with the new paradigm.  A strong SDN story could mess up Cisco’s whole positioning in a single quarter, and that’s the truth.  What’s also true is that Cisco’s competitors are so namby-pamby about SDN on their own that they avoid road signs with any of the letters of the acronym.  Who among them has an SDN position that’s not reactive, and how can you unseat an incumbent by reacting to that incumbent and not driving them relentlessly?

Here’s the net-net for both companies.  TAM expansion is the only path to growth for a market leader, and TAM expansion demands that you take a new issue in the marketplace and use it to build a bridge from the old to the new, from the existing base on which you depend to the base through which you’ll grow.  Cisco’s genius with SDN has been that they’re capturing the functional benefits of SDN without embracing the technology changes.  That doesn’t deliver “standard” SDN, but since the whole standards process is about as broken as politics, that’s not the issue.  The issue is that Cisco can do enough with “SDN-in-a-brown-paper-bag” to make the step from it to “real” SDN difficult to justify.  You don’t have to offer the best, just offer something good enough that the price of the best is too high to bear.  And if SDN is networking for the cloud, Cisco can cross SDN’s bridge to become a cloud player, an IT giant, just like it wants to, and browse the fields of fertile TAM there.

Apple’s problem is that they have no strategy to build that same bridge, not so much because the bridge doesn’t exist but because first they fear its foundations will sink the bank of the river they’re already on, and because they can’t see the other bank in the fog.  Where could Apple’s fertile field of TAM be?  Glasses, rings, wallets?  Making all these things “smart” would certainly create opportunity, but the smartness would necessarily require surrender of functionality and functional integration to the cloud.  That makes an iPhone an expensive form of window-glass.

But what Cisco and Apple have in common is their dependence on continued lackluster competitive reaction.  Google and Microsoft have given Apple a lot more running room than they needed to; in point of fact the move to the cloud at a breathtakingly aggressive pace would benefit both those players rather than hurt them, yet they’ve been unimaginative in their positioning and product offerings.  Alcatel-Lucent, Ericsson, and Juniper have almost embraced Cisco’s defense of the status quo even though they’re losing in it, and every day the golden fields of TAM that these guys could move into get more brown and distant.  So what’s the net?  Both Cisco and Apple are at risk, dire risk.  But both are being buoyed up by competition that fears risk more than Cisco and Apple do.


Plexxi Shows Its SDN Smarts

One of the problems with the whole SDN thing is that it’s so abstract.  We have a combination of a few dedicated researchers who are doing the usual egghead stuff, and a lot of media types picking at the topic.  The good news is that this is coming to an end; we’re getting actual SDN products and that will tend (I hope) to focus the dialog better.

SDN products mean SDN switches in my view; we need SDN-specific devices to prove out an SDN endgame.  The current focus on SDN controllers running software in existing devices is a transition strategy, but you have to establish what you’re transitioning to and prove its benefit case.  Logically we’d expect SDN to start in the data center, where the cloud starts and the controlling software is likely to run, and Plexxi has now given us a look at an SDN data center strategy.  Happily it includes both a controller and switches.

At the hardware level, Plexxi offers a modified version of a fabric switch (or a fabricized version of a rack switch, depending on your perspective), and it’s here that you find the first important element of their story.  I’ve commented before about the fact that virtual networking overlay strategies like Nicira can’t manage capacity because they run over top of devices and not on them.  As a result they recommend having low levels of utilization to prevent performance issues on some virtual paths due to (invisible-to-them) congestion.  This suggests a pure fabric data center connection network; any-to-any seamless connectivity.  The problem with THAT is that most of those “anys” don’t need to talk to most of the others; few possible paths are actually useful and some are security risks.  And, of course, all this extra capacity costs.

Plexxi’s approach is to create a high-capacity optical “bus” that interconnects the chassis of their switch.  The capacity isn’t infinite, but you can allocate it in an infinite number of ways.  The idea is to give capacity to the paths that represent useful connectivity and deny the rest, thus saving yourself from having to pay to provide connectivity your security system will then have to limit.  The cost advantage of this over a true fabric will vary, but it could be sizeable.

The trick here is how you give capacity to those useful paths.  To save users from a traffic engineering marathon, Plexxi introduced something truly unique and interesting, the concept of AFFINITY.  Affinity is a computational/algorithmic mechanism for analyzing traffic and connection topology and deriving the optimum topology to support it.  Since the paths through the optical “bus” are flexible, you can build the structure without regard for the physical connections or device locations.  Affinities create something above the physical hardware and fiber and below the protocols.  The Plexxi Control software, which is the “software” that “defines” the SDN, is what manages the affinities, and the Affinity network is both a virtual network without traditional Ethernet VLAN limitations and a physical network that can manage traffic and connectivity.
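
Here is one way to picture the affinity idea as I read it: finite capacity on the shared optical path is handed out in proportion to observed demand between endpoint pairs, and pairs with no meaningful traffic get nothing.  This is my own sketch of the concept, not Plexxi’s algorithm, and every name in it is hypothetical.

```python
# Illustrative sketch of affinity-driven capacity allocation: give the optical
# bus's finite capacity to the endpoint pairs that actually exchange traffic
# and deny the rest. This is a reading of the concept, not Plexxi code.
def allocate_affinity_capacity(traffic_matrix, total_capacity_gbps, min_share=1.0):
    """traffic_matrix: {(a, b): observed_gbps}. Returns per-pair allocations
    proportional to observed demand, dropping pairs below min_share Gbps."""
    useful = {pair: gbps for pair, gbps in traffic_matrix.items() if gbps >= min_share}
    total_demand = sum(useful.values())
    if total_demand == 0:
        return {}
    return {pair: total_capacity_gbps * gbps / total_demand
            for pair, gbps in useful.items()}

observed = {("rack1", "rack2"): 40.0, ("rack1", "rack3"): 0.2, ("rack2", "rack4"): 10.0}
for pair, alloc in allocate_affinity_capacity(observed, total_capacity_gbps=100).items():
    print(pair, f"{alloc:.1f} Gbps")
```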

Each Plexxi Switch has a Control element, all bound to a central controller that’s conceptually similar to the central concept of OpenFlow control, but different in that it contains the necessary logic to drive the path-creation process, whereas OpenFlow controllers manage individual forwarding tables and the context needed to create “networks” has to come from some unspecified higher layer.  There are what might be called rule sets to define Affinities of various types, and by growing the complement of these, Plexxi plans to create a dynamic evolution of off-the-shelf service models.  Users can still define their own approaches, of course, and all this stuff is accessible via open APIs to be integrated with other software as desired.

What Plexxi lacks at the moment is specific support for OpenFlow, either as an interface to its switch or as a southbound option from its controller (it’s likely to come along later, but nothing official yet).  Some people are probably going to be fluffed up over this, but the challenge Plexxi poses is how you value functionality versus standards at a time when standards are significantly behind the state of market needs.  The work on linking optics to OpenFlow, for example, is only beginning, and there’s nothing happening at that higher layer where individual forwarding tables fuse into a network, or rather ARE FUSED into one, under software control.  One handoff doesn’t make a bucket brigade.  My own position on this issue is clear; we need the value of SDN before we worry about the standards, since standards bodies have demonstrated they can NEVER keep up with the market.

Plexxi’s stuff is available; there are early (unnamed) customers for it, one of which is a cloud provider.  The Affinity concept is, I think, a critical step forward in that it defines a way of making a virtual network physical enough to manage traffic and availability fully.  There may be other ways, but until somebody offers them we can’t draw a comparison.  I’m happy just to have one solid picture to look at.  As I said, maybe this sort of thing will help advance the SDN dialog.


How Wall Street Reads the Future of Networking

Wall Street is a lot more intertwined with the networking industry than just in the logical financial sense, or as a big network technology user.  These people are the ones who win big or lose big by betting on company successes and failures, so it’s a good idea to take their pulse from time to time to see just what they’re betting on.  Right now, it seems to be mixed.

Analysts are saying that basic switching and routing are losing steam, but more are attributing this to the current Euromess than to anything fundamental in the industry.  I think the truth is that both factors are involved.  Operators in my fall survey indicated a major shift of spending toward the metro aggregation network, which tends to focus the bucks on optics for wireless backhaul and cloud/content networking and on Ethernet.  Aggregation is not an IP mission; you only need to concentrate traffic from a lot of access points back to a small number of service points.  Add to that the fact that operators are not earning a good return on bandwidth and you see the problems.

We know they’re not earning a good return because more and more operators are moving to usage pricing.  TW said it was expanding its trial footprint for usage pricing at the low end, where it helps “light” Internet users.  Yes, that doesn’t add cost for everyone but I doubt anyone is naïve enough to think that once usage pricing goes in anywhere in the spectrum of usage, it won’t expand to become the norm.  But this isn’t going to make the network “profitable” as much as it will stop it from becoming excessively unprofitable.

What IS profitable, then?  The cloud.  We think of network operators and the cloud as being a Tier One adventure, but I was in Latin America earlier this year and met with a Tier Three operator there who had major-league cloud aspirations.  In fact, the largest change in cloud computing interest my survey showed this fall was among Tier Threes.  Is it any surprise that Chambers wants Cisco to be an IT company and not a network company, and that Gartner says that Cisco is one of the bright spots in the server market?

Cisco’s success in servers shows something else, though.  They’re the newest kid on the server block, their sales force hasn’t historically called on the IT guys but on the network guys, and they don’t talk the talk and walk the walk yet.  They’re not a software company, relying instead on other software players to round out their data center story.  And yet they’re a bright spot?  I would argue that’s because the notion of the cloud is bringing about a planning/buying fusion between IT and networking.  And remember, even though servers form the resource pool, the network forms the cloud.  Cisco is demonstrating that playing on cloud positioning can sell networks and servers as well.  By inference, absent cloud positioning, you may not be able to sell either.

One impact of this is suggested by a Street prediction that Huawei will strike a reseller deal with IBM.  If that does come about, it would be seismic for the enterprise network space.  Might it be true?  Well, it would darn sure be good business for both parties.  Remember that Lenovo is a Chinese company.  Remember that in any reseller agreement, you need to start with a box provider who can tolerate low margins so the additional layer of overhead doesn’t push the retail price too high.  Remember that if you need to have your image sanitized for the US market, it would be hard to find a better antimicrobial spray than the IBM logo.

The SDN and NFV stuff fit into this too.  When you have services that are margin-pressured you have to lower the base cost of infrastructure for those services.  Both SDN and NFV are examples of buyer revolt; we need to make the network into what it really is, a business pipeline and a platform for the cloud.  Vendors have resisted that change, Cisco perhaps more than most.  Cisco must now embrace it—sort of—in order to make its transition.  As they do, they will make the penalty for not following suit rather dire.  Market leaders set market trends, after all.

We have three network challenges today.  Number one is how to fuse a vision of the cloud as something that’s the SUCCESSOR to the concept of the Internet, because from a business perspective that’s exactly what it’s going to be.  Number two is how to create a unified architecture for metro aggregation that combines optical agility and limited, software-directed, electrical forwarding control.  Otherwise we can’t make metro networks effective as mobile broadband, content, and cloud drive them.  Number three is how to create a framework for adding services to networks to take advantage of the fused network/server vision that the cloud represents.  Services are cloud applications, but what is the cloud platform on which they run?

We’re heading into a new year, one that’s going to be challenging for everyone because it will force transitions on everyone.  There’s no hanging back in a race for the future, or at least there are no survivors among those who do hang back.  Alcatel-Lucent and Cisco and Ericsson and Huawei and Juniper and NSN are all vulnerable in this coming year.  One mistake may be all it takes for any of them, and doing nothing will be the largest mistake of all.

NSN: How Exiting Optics Demands Cloud Aggression

NSN is selling its optical transport assets to the same private-equity firm that bought Sycamore’s technology assets, and the move doubles down on a bet that NSN is making—namely that broad product lines aren’t valuable in networking any longer.  I think that at one level that’s true; my research shows that buyers are much less likely to value a full-house portfolio these days.  At another level it’s problematic, because it exposes NSN to a critical decision they may not be prepared to make.

Optical isn’t exactly languishing as a market sector.  My preliminary data suggests that it will outperform both routing and switching in growth next year, largely due to the fact that metro connectivity is disproportionately optical.  Down the line, OTN demands by operators will further boost optical transport by displacing some core routing costs in new network configurations.  The problem is that optical profit margins are limited, in no small part because Huawei has exploded in the space and taken a lot of market share.

It’s likely that NSN realizes that at one level it can afford to be out of the optics space; the primary driver of new infrastructure is mobile services and NSN is well-engaged there.  It’s also likely that they realize that cutting lower-margin products out of the mix will help their parents.  The question is whether it realizes that once you leave the transport layer you have nowhere to go except the service layer.

The margin transformation of networking is being driven by the fact that connectivity and transport have capped revenues as a result of fixed-price connectivity.  You can’t have capped revenues and escalating costs, so you have to drive down infrastructure spending.  But at the same time even operators need profits, so they have to look to what’s popularly called “over-the-top”.  This combination of events creates two service-layer pushes; one by encouraging the migration of functionality out of the network’s expensive real estate to commodity servers in the cloud, and one to hosting of incrementally valuable services in the cloud.  Are you seeing the common denominator here?  That’s what poses NSN’s challenge.

NSN has a cloud strategy, but it doesn’t have cloud PRODUCTS, and you can’t be a success in the service layer without anything to build the service layer with.  Professional services alone are not going to create success because operators don’t want to put themselves in perpetual thrall to vendors (the NFV initiative proves that).  Thus, the challenge for NSN isn’t so much what they sell off but what they ACQUIRE.  They need to buy some service-layer startups of their own, to follow along the trail that Cisco is clearly trying to blaze.

The problem of the service layer is going to become acute shortly because the three giants of the industry are all aimed at it.  Amazon, Apple, and Google are all alike in an important respect—they have a revenue base that’s inherently limited and they need to expand their own TAM to address that limitation.  Retail profit margins aren’t ever going to go up, Amazon.  We can’t festoon people with hip-looking personal appliances, Apple.  Even in total, ad spending won’t put you in the black forever, Google.  In all three cases, the only track that’s open to the online giants is “the cloud”, and anyone who believes that means shifting IT spending from in-house to cloud is smoking dust.

All of our giant friends have been puffing, but that’s going to change.  All three Internet giants need to go where NSN (and other vendors in the network space) need to go.  Yes, none of them have been as smart as they could have been, but are they ALL going to stay stupid forever?  Doubtful, and if they don’t, then any one of them who moves will radically narrow the range of opportunities for vendors like NSN, because none of these players have any reason NOT to commoditize the network.  Only the network vendors have that motivation, and I think they need to get with the program…while they can.