Are Cisco and Alcatel-Lucent Finding the Cloud/Network Formula?

Cisco continues its M&A tear, this time picking up Cariden, a specialist in aligning network behavior with business goals.  The focus of the deal has been on what Cariden could do for Cisco’s SDN position, but Cariden is more complicated than that, I think.  It also enhances Cisco’s position in Network Functions Virtualization and improves Cisco’s overall potential for operational integration and orchestration…in theory.  Again, we’re left to speculate on the real motives, so let’s get to it.

What makes Cariden special is the fact that it supports gathering of network intelligence from multiple devices and layers, and integrating that intelligence into an analytic framework that lets operators decide how to best handle traffic and optimize resources.  Resource optimization based on telemetry is most valuable when you cross layers of technology—IP and optical or networking and computing.  It can also be used to apply non-technical metrics to network decisions, and to factor in things like historical behavior.  Absent analytics and telemetry of the sort Cariden can provide, networks live in the moment.  With the Cariden capabilities they can use the totality of information available to make the “best” decision, no matter how you define the superlative.

I think it’s clear that the primary objective of this move is to enhance network behavior, improving QoS by using more predictive analytics to figure out what the best paths and decisions are, and to improve utilization and economics.  Cisco knows a basic truth, which is that while you can claim TCO benefits all you want, nobody except Huawei has done anything to prove them to buyers.  Cisco also knows that if you want to reduce network TCO by some means other than reducing equipment prices, you have to push bits more efficiently.  Hence, Cariden.

The secondary goal for the deal in my view is to enhance Cisco’s cloud position.  Remember that Verizon has said that the future network services will look more like cloud services than like VPNs.  Add to that the reality that picking the best spot for a machine instance is a combined IT/network problem; where you put something has network cost and performance implications, and clearly what you run something on has IT performance implications too.  Cariden can help balance this.

This secondary goal is reinforced by the fact that Alcatel-Lucent’s cloud strategy, CloudBand, as I noted yesterday, builds an operationalizing layer on top of the traditional cloud.  CloudBand, in short, does what I’ve just said the cloud needs—it picks the right place to host an application instance and even when to spin one up, based on business criteria and the totality of resource conditions.  And remember that it was Alcatel-Lucent who won that deal with the California university—the one where Cisco’s bid was reportedly far higher.  Cisco just has to see Alcatel-Lucent as a major threat, perhaps as THE major threat, since Cisco rival Juniper seems to have stalled in its software initiatives.  You can’t be a cloud player without software, period, and Cisco of all the network guys has to be thinking about cloud because they’re the only player who has servers.

But CloudBand has something else, something that you have to dig a bit for.  I spent an hour yesterday talking details of the technology, and there’s more to it than the PR would suggest.  Not only does Alcatel-Lucent have a more aggressive plan, they have actually delivered on at least a couple of applications based on that plan, one of which can be cited.

Think of CloudBand as a layer above the cloud stacks, connected to resources through the High Leverage Networking APIs.  It creates a framework for applications to run, a framework that can deploy them, expand them in number of instances, contract them, move things around to improve performance or respond to problems…you get the picture.  But CloudBand can also effectively host higher-level features, and one that Alcatel-Lucent is prepared to talk about is load balancing.  There’s probably nothing more important to a true enterprise cloud service than load balancing because in the real world hybrid clouds will have to respond in an instant and elastic way to changes in demand or changes in resource state (read “failure”).  What Alcatel-Lucent does is to add a set of APIs to CloudBand to expose load-balancing-as-a-service to applications.  The same model could be used to create other services (and it has, Alcatel-Lucent is just keeping the trials quiet at the moment).
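The load-balancing-as-a-service idea above can be sketched in a few lines.  Everything here is illustrative: the class, the method names, and the least-loaded policy are my own assumptions, not Alcatel-Lucent’s actual API.

```python
# Hypothetical sketch of "load-balancing-as-a-service" exposed above a
# cloud stack: applications ask the platform layer for a balanced target
# instead of wiring in their own balancer. All names are illustrative.

class LoadBalancerService:
    """Platform-hosted feature: tracks instances and their current load."""
    def __init__(self):
        self._instances = {}          # instance_id -> current load

    def register(self, instance_id):
        self._instances[instance_id] = 0

    def deregister(self, instance_id):
        self._instances.pop(instance_id, None)

    def pick_target(self):
        """Return the least-loaded instance -- the 'service' API an
        application would call instead of owning balancing logic."""
        if not self._instances:
            raise RuntimeError("no instances available")
        target = min(self._instances, key=self._instances.get)
        self._instances[target] += 1  # account for the dispatched request
        return target

lb = LoadBalancerService()
for inst in ("app-1", "app-2", "app-3"):
    lb.register(inst)

# Dispatch six requests; load spreads evenly across the three instances.
assignments = [lb.pick_target() for _ in range(6)]
```

The point of the design is the one in the paragraph: the balancing smarts live in the platform layer, so any application deployed under it gets elasticity without carrying its own balancer.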

CloudBand is the threat to Cisco’s plans here, in short.  If Alcatel-Lucent can drive this second level of capability—the ability to host as-a-service elements in CloudBand and expose them to applications—then they can implement Network Functions Virtualization.  Remember too that Alcatel-Lucent has already said they will cloud-host IMS, which is a step operators want in their NFV evolution.  If Alcatel-Lucent virtualizes not only “network internal” functions but also other functions (even things like open-source databases or retail service elements) they could expand CloudBand into a platform that developers could write to, and that operators could customize to differentiate themselves.  In short, Alcatel-Lucent could really mess up Cisco’s whole strategy for the cloud.


Where Will the Hybrid Cloud Take Us?

Verizon’s Enterprise Solutions group thinks that hybrid clouds will be one of the top trends, and a telling comment is that they see the cloud supplanting the VPN as an enterprise service.  While this may seem outlandish on the surface, I think there’s a lot of logic/truth behind the statement, and it also explains some of the recent M&A.

According to users I’ve surveyed, the trend is away from “network services” in a pure connectivity sense and toward “application services” in a broader sense.  It’s not that the enterprise isn’t seeing connectivity requirements going forward, but that they see these requirements as increasingly supported via the public Internet, increasingly linked to ancillary activities like worker mobility, and increasingly a part of a broader plan for worker empowerment.  In these new missions, there’s a compute element involved too.  So users, looking as always for one-stop shops for IT/network services, are certainly amenable to the shift from VPNs to the cloud.

From the seller side, this has to be a dream come true.  With enterprises continually demanding lower costs for network connectivity, the prospect of adding something with a bit more margin credibility to the mix is a welcome one.  Network operators also realize that the most common hybrid missions involve mission-critical apps and thus tend to favor major partners rather than little bitty cloud firms.  The operators also have lower ROI targets, which means that they can be price leaders while still securing tolerable margins.

If all this shifting is real (and I think it is) then it also spells trouble for pure-play network vendors and should provide encouragement to vendors like Cisco who have more direct cloud stories.  It explains why Alcatel-Lucent in their SDN and cloud story seems to be emphasizing private clouds, multi-vendor clouds, and end-to-end software-defined network behavior.  If you don’t sell servers then you need to stress the value of network performance and application-specific behavior because applications are all it’s about in this brave new world.  And of course cloud deployment and hybridization are management issues, not to mention being possibly the only differentiator that software players have.

Some management refinement is critical for hybrid clouds even if you DO have control over both sides of the story.  Nobody I talk with at the enterprise level believes that they’re going to cloudsource their major apps, but nearly everyone believes they will offload peak-load capacity to a public cloud partner and fail over to that partner if something goes awry in the data center.  This means not only the “basic” steps of spinning up new instances and load-sharing (or load-substituting) but also more complicated tasks of keeping databases in sync and ensuring that all the software versions available everywhere match the approved versions.
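The burst-and-failover decision described above can be reduced to a toy placement function.  The threshold, names, and logic are my own illustration, not any vendor’s implementation.

```python
# A minimal sketch of the hybrid-cloud decision the text describes:
# offload peak load to a public partner and fail over when the private
# data center has trouble. Thresholds and names are illustrative.

def place_workload(private_healthy, private_util, burst_threshold=0.8):
    """Decide where the next instance of an app should run."""
    if not private_healthy:
        return "public"            # failover: the private side is down
    if private_util >= burst_threshold:
        return "public"            # burst: private capacity is exhausted
    return "private"               # normal case: keep it in-house

# Normal load stays in the data center...
assert place_workload(True, 0.5) == "private"
# ...peak load bursts to the public partner...
assert place_workload(True, 0.9) == "public"
# ...and an outage fails everything over.
assert place_workload(False, 0.1) == "public"
```

Real hybrid management also has to keep databases and software versions in sync across both sides, which is exactly why the paragraph argues this is a management problem, not just a placement problem.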

Something like this could be a boon to the IT players who have cloud offerings too, including HP, IBM, and Microsoft.  Arguably these firms have recognized the need for hybridization from the first, and their seeming slow start in the media-hype cloud race is attributable not to their being behind but to their targeting a market segment that’s slow to develop.  Slow, but representing the majority of the real dollars downstream.

Even some incumbent-giant cloud providers like Amazon are going to find this hybrid trend increasingly troubling because Amazon still lets third-party players carry the water for EC2 compatibility in private cloud implementations.  That’s risky because not all the features of AWS are available in the basic platforms offered by third parties, and that forces Amazon to accept a functionality gap between public and private cloud that will dumb down their own hybridization capabilities, perhaps even make them less valuable.  Ideally, hybrid clouds are PaaS clouds in that they should offer a set of platform services to facilitate hybridization, and they should also sustain the same OS and middleware in the public and private parts of the cloud.  While you can do that in IaaS through administration, PaaS makes hybridization easier and it also offers potentially greater benefits because PaaS displaces not only hardware costs but also software-platform costs.


Is Alcatel-Lucent Stuck in the Shallow End of Cloud/SDN?

I’ve been harping on Alcatel-Lucent’s need to offer something tangible in cloud and SDN, and they’ve announced something in both spaces.  The question is whether the concepts will go far enough, especially given the relatively late positioning.  It’s not that all their competitors have jumped out though, so Alcatel-Lucent still has a chance to set some agendas.  Did they?

In the SDN space, Alcatel-Lucent is taking up what’s becoming a common refrain: supporting the functional goals of SDN but without the emphasis on standards.  That mirrors Cisco’s vision, which COULD be an excursion into the concept without the specific details most would associate with SDN.  In the case of Alcatel-Lucent, SDN is part of their “Application Fluent Networks” concept, and while the SDN goals are programmability, application awareness, and a global control view, it’s application awareness that forms the centerpiece of the Alcatel-Lucent SDN vision.

Alcatel-Lucent’s positioning seems to focus on making “SDN” mean more than virtual networks a la Nicira, a worthy goal in my view given that traditional virtual networking is simply a tunnel overlay with little ability to manage experiences.  The Alcatel-Lucent concept is to employ the tools in Application Fluent Networks to create application-specific network behavior end to end.  Given that there’s no real capability within the standards (OpenFlow) to do more than make a single switch bend to your will, the functional scope of Alcatel-Lucent’s approach is positive and helpful.

The specific virtual networking linkage comes from enhancements to OmniSwitch to make it VM-aware (and VMware-certified), which allows Alcatel-Lucent to manage experiences down to the linkage of applications to VM instances.  Thus, properly, Alcatel-Lucent is creating an SDN strategy by linking virtual networking to application awareness.  Some sources say OpenFlow won’t be added until 2014, but I hear it will be available next year.

On the cloud side, Alcatel-Lucent is refining its operator cloud positioning (CloudBand), making it clear that the differentiators will be a stack-agnostic/network-equipment-agnostic model combined with a focus on the management of cloud infrastructure, especially deployment of new cloud nodes.

I’ve noted before that cloud software is largely a set of management APIs that control resource pools and assignment of resources to applications.  Thus, the Alcatel-Lucent focus is logical, and it appears that they intend to integrate the network management processes and the cloud management processes more tightly.  Whether this means they’ll contribute Quantum stubs and perhaps define virtual network models isn’t clear.  So it’s also not clear just how much direct symbiosis there is between the cloud announcement and the SDN announcement discussed above.
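To make the “management APIs over resource pools” claim concrete, here’s a minimal sketch; the class and its calls are hypothetical stand-ins for the kind of surface stacks like OpenStack expose, not any real API.

```python
# Toy version of what cloud management software does: track a pool of
# hosts and assign application instances to them. Illustrative only.

class ResourcePool:
    def __init__(self, hosts):
        self._free = set(hosts)
        self._assigned = {}        # app name -> host it landed on

    def assign(self, app):
        """Pull a free host from the pool and bind the app to it."""
        if not self._free:
            raise RuntimeError("pool exhausted")
        host = self._free.pop()
        self._assigned[app] = host
        return host

    def release(self, app):
        """Return the app's host to the free pool."""
        self._free.add(self._assigned.pop(app))

pool = ResourcePool(["host-a", "host-b"])
h = pool.assign("billing")     # billing lands on one of the two hosts
pool.release("billing")        # ...and its host goes back in the pool
```

Everything else in a cloud stack (scheduling policy, quotas, network plumbing) is elaboration on this assign/release core, which is why integrating network management into the same loop is a logical move.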

Since both these announcements came out on the same day, it would seem logical to assume there’s a linkage beyond semantics.  The multi-vendor stuff may be the linchpin here, because Alcatel-Lucent has long been perhaps the leading player in multi-vendor network management tools.  Certainly a concept of cloud/SDN built around a universal network management tool and accommodating all the cloud models (or at least OpenStack, CloudStack, and EC2) would be very powerful for operators, most of whom are looking at the cloud as an activity based on a combination of large-scale resource pools they provide and software they obtain from partnerships.  For this sort of situation, flexibility in supporting anything that comes along could be critical.

I think the thing missing from both these announcements is a clear vision of a shared future for them.  I doubt anyone believes that the “software” that defines the SDN could be anything but cloud software, in which case a cloud strategy should explicitly engage SDN principles.  Clouds include data centers, and there is an SDN linkage to Alcatel-Lucent’s data center approach, but you have to dig through a lot of links to tie it into CloudBand.  That means a lot of potential buyers won’t find any linkage at all, and perhaps even more will wonder if such a link is even intended.  That would weaken the authority of both announcements for those who think the cloud and SDN are linked.

I still think that Alcatel-Lucent could do more for the cloud, more for SDN, and in doing so, more for Alcatel-Lucent.  They have a complete service-layer API strategy, a clearinghouse for APIs in general, a cloud plan that promises to host the most critical service-layer element Alcatel-Lucent has (IMS) in the cloud.  Why not embrace this sort of thing in CloudBand, and why not link it to SDN?  I think they could have told a truly compelling story in both these areas, but they seem to have left quite a bit on the table.  Given Cisco’s aggression in the cloud and in software, that’s a pretty significant risk.


Google Joins the IaaS Race to the Bottom

Google has decided to drop its prices for cloud service and expand its IaaS offerings, apparently to compete better with Amazon or to respond to a cloud market that seems to be racing for the bottom.  Another possibility is simply that there’s a lot of “undifferentiated interest” in the cloud, meaning that buyers don’t know exactly what they want.  In that kind of market, classical wisdom says that a broader product line gives you more avenues for engagement and a potential pull-through for higher-margin stuff.  Google and Amazon may simply be trying to spread the shot to hit the duck.  But can this kind of approach work?

Look at the cloud market overall for a moment, forgetting hype.  We have realized less than one percent of the opportunity and we’re already having price wars?  What does that say?  I think it says a number of things, all of which are important.

First, it says that sellers don’t have pricing power for cloud services, which in turn says that the buyers are having some difficulty making the cloud transition.  Generally that sort of problem occurs when the business case is weak, when buyer literacy on how to proceed is limited, or both.  My surveys say that we’re at about half the level of market literacy in cloud computing needed to sustain a natural, healthy market.  Thus, sellers are forced to start something that could all too easily turn into a race to the bottom.

Second, it says that if cloud services stay in the IaaS basement, continuing price reductions will quickly mean that nobody in the market can even dream of competing with network operators.  Former public utilities have very low internal rates of return, meaning that they can sustain projects with low ROI without hurting their financials overall.  So Amazon or Google are doomed if the cloud is the IaaS pond the market seems to think it will be.

Which brings us to the third and key point.  You have to climb the cloud chain.  The higher-layer cloud services displace more cost and are more easily consumed by buyers with limited technical literacy, so they are more likely to succeed.  That means that the seemingly smart thing would be for Google and Amazon to be CLIMBING the cloud chain and not diving down to the IaaS depths.  So why do they do it?  First, because they think their lower-cost IaaS will generate software partnerships that will provide the higher-layer services.  They don’t want to be full-service cloud providers themselves.  Second, because with the exception of Microsoft and maybe Oracle, nobody has enough application software on their own to offer a higher-layer service that’s broad enough to be compelling.

The IaaS position that Google, Microsoft, or Amazon takes in the near term as a bridge will become an isolated outpost in an unfriendly financial world unless these companies promote a transition strategy for the apps themselves.  If IaaS is needed because we don’t have a formal development dogma for cloud apps, then we need that to be developed or we’re stuck in lower (and lowering) margins forever.

I don’t think that Google, Amazon, or anyone else in the cloud space has done enough to promote the Great Reality of the cloud, which is simply that to make it optimally useful you’ll need to write cloud-specific apps.  A war among vendors to promote their own platform for this mission could be the best thing that could happen in the cloud.  It would focus buyers on the real question, which is not how to host stuff we already host in-house somewhere else instead, but how to do stuff we couldn’t do in-house at all.  More benefits are needed to drive more spending; without them the cloud becomes another way to cut IT budgets at best, and a failure at worst.

Is Cisco Going to the M&A Cloud…Again?

Cisco is certainly giving all of us a lot to think about (probably more than a lot for its competitors).  The prevailing view here is that in buying Meraki, Cisco bought a mid-market WiFi company.  If that’s true then they need their collective heads examined.  The price is too high for what Meraki brings to Cisco’s WiFi, incrementally.  I think there’s more to it—a lot more.  The Meraki purchase has even more potential dimensions than the previous Cloupia deal, in fact.  The combination definitely makes me wonder whether Cisco is getting ready to be a major cloud player.

One of my points on the Cloupia deal was that Cisco would likely be faced with a decision on becoming a cloud provider if it wants to be a serious cloud player.  Arch-rival HP is both a provider of cloud tools and a provider of cloud services, and Cisco either needs to accept that it’s not competing with HP in the cloud or it has to offer a cloud service.  With Meraki it potentially has just that capability.

What’s the beef that justifies the price here?  I think the Meraki site explains it; it’s cloud-based control of the edge, including security and DPI.  The notion of having the cloud control the edge via telemetry is congruent with the operators’ Network Functions Virtualization (NFV) initiatives.  NFV explicitly calls for having DPI-based services like firewalls implemented using a simple device at the edge and cloud-hosted smarts.  Sounds a lot like what Meraki offers, right?
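The NFV split described above (a simple edge box, cloud-hosted smarts) can be sketched like this; the rule table and function names are invented for illustration, with a dict standing in for the API call to the cloud.

```python
# Sketch of the NFV pattern the text describes: an edge device that
# defers firewall decisions to centrally hosted policy rather than
# implementing DPI itself. The rule format is made up for illustration.

CLOUD_FIREWALL_RULES = {                 # would live in the cloud, not the box
    ("10.0.0.5", 22): "drop",            # block SSH to this host
    ("10.0.0.5", 443): "allow",          # permit HTTPS
}

def edge_device_forward(packet):
    """The edge box does no inspection of its own; it just looks up the
    cloud-hosted verdict (default-allow when no rule matches)."""
    return CLOUD_FIREWALL_RULES.get((packet["dst"], packet["port"]), "allow")

assert edge_device_forward({"dst": "10.0.0.5", "port": 22}) == "drop"
assert edge_device_forward({"dst": "10.0.0.5", "port": 443}) == "allow"
```

The design point is the one the operators are after: the edge device stays cheap and dumb, and the feature (and its upgrades) lives in hosted infrastructure.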

Even for SDN (something Cisco really doesn’t want to do in the “normal” way but may have to do in SOME way), Meraki could be a real value.  SDN needs a mechanism for software to control the network, and the best way to do that is at the edge and from the cloud.  Which Meraki can do.  If Cisco adds Meraki telemetry to other devices in its product line, it could be taking a major step toward SDN either by following the standards like OpenFlow (over the telemetry tunnel) or by ignoring them and implementing a better and richer form of software control.

You could also start sticking UCS servers in the Meraki data centers, expanding the capacity, adding management features from Cloupia, and the next thing you know you have a real cloud service infrastructure.  Cisco could resell management services to operators, field its own services, host everything from IaaS through SaaS clouds, you name it.  All this, and NFV and SDN too.

I think Cisco realizes that differentiation at the device level can come only by coupling the devices to a strong higher-layer story.  Right now, market needs for cloud computing, service-layer intelligence, and lower-cost networking overall are compelling enough that operators will overlook lack of standards.  In any case, if the past is an indication of the future, we’re looking at three or more years of development to get a useful standard in SDN, NFV, and the service layer out there.  Nobody will wait that long.

But…as I said with the Cloupia deal, there’s always the chance that Cisco’s motives were less lofty than the potential would suggest.  I do have to say that chance is growing smaller with each acquisition that seems to play more directly to the cloud scene.

So what do competitors do?  I think we can presume there will be more cloud-driven M&A involving boutique startups and major firms.  That’s because while most of the Cisco competitors have some software position, none of them has any compelling cloud position.  Alcatel-Lucent is the closest to having the internal collateral it needs, as I’ve noted before.  HP has a good cloud story but is weaker at tying it to its network products, and the company’s recent faux pas in the Autonomy buy may make it gun-shy about gathering further software assets.  Ericsson and NSN have software elements but nothing I’d call a cohesive software STRATEGY.  Juniper has promoted its Junos-universality position and had at one time an aggressive higher-layer position with Space, but it’s recently backed away from that in favor of simple operations missions.  For the latter three players, I think it’s M&A or the highway, so to speak.


Will Cisco Make Cloupia the Center of the Cloud?

I’ve said for months that Cisco needs to be a definitive cloud player and to do that, a software player.  Cisco has now demonstrated it’s serious about being a cloud player with its acquisition of Cloupia, a fairly impressive player in the growing space of provisioning/DevOps, particularly for the cloud.  I think this is a very early move for Cisco and so it’s a bit of a risk to read a lot into it, but let’s give it a go!

The form of the cloud everyone intuitively thinks about is little more than server consolidation in a hosted model.  You take apps that ran on dedicated servers, you host them on something like EC2, and they run there without the cost of remote support and low utilization you’ve experienced in the past.  These apps are a kind of “fire-and-forget” model of cloud deployment, because they were intended to run independently and persistently so they really present a minimal management burden.  But they’re also the transitional infancy of the cloud, opportunity-wise.  If you want to do REAL cloud, you have to think about deployment of apps that are highly integrated with each other, that have to expand and contract by having multiple instances, and that have to fail over between internal IT infrastructure and public cloud.  All that stuff generates increased complexity in assigning resources and integrating components, so you need a DevOps-type of tool.
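The expand-and-contract burden described above is, at its core, a reconciliation calculation that some tool has to run continuously.  A minimal sketch, with made-up capacity numbers and no specific DevOps tool implied:

```python
# Toy reconciliation step for an elastic app: compare demand against
# deployed capacity and decide how many instances to add or remove.

def reconcile(current_instances, demand, per_instance_capacity=100):
    """Return instances to add (positive) or remove (negative)."""
    needed = -(-demand // per_instance_capacity)   # ceiling division
    needed = max(needed, 1)                        # always keep one alive
    return needed - current_instances

assert reconcile(2, 450) == 3    # demand spike: scale out to 5 instances
assert reconcile(5, 120) == -3   # demand falls: scale in to 2
assert reconcile(1, 0) == 0      # idle, but keep the minimum running
```

The hard part in practice isn’t this arithmetic; it’s executing the result (deploying, wiring, and tearing down instances across private and public resources), which is exactly the job a DevOps-type tool exists to do.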

So Cloupia is in a good space, but Cisco hasn’t tried to build its own cloud stack; it’s announced its own OpenStack distro instead.  Why not rely on the open-source DevOps stuff that’s emerging around OpenStack?  Cisco has in fact contributed a project there (Donabe) and a link between Donabe and the OpenStack Quantum interface for network provisioning.  Is jumping into the cloud DevOps space commercially a smart idea?  Darn straight it is; it may in fact be a necessary condition for cloud success, and I don’t mean “success of the cloud”, I mean “Success of CISCO in the cloud.”

Cisco is a high-margin player, and that’s something Wall Street judges them on.  If you’re looking for high margins you don’t find them in open-source software.  Yes, cloud software could pull Cisco servers through, but the problem is that it’s the same stuff everyone else has.  Furthermore, while it may sound heretical to say this, cloud software stacks are often no big deal.  A traditional cloud is a virtualized data center front-ended by a management interface.  Hardly something that’s going to drive a big differentiator into the game for Cisco.  So they need something else.

If any significant cloud needs DevOps tools, then Cisco could be THE player for the “significant cloud” by having the best DevOps in the market.  While I don’t think you could argue that Cloupia is the absolute best, Cisco certainly has the resources to make it so.  A Cisco-specific Cloupia-based provisioning strategy could not only manage IT resources but network resources, creating a single framework for setting up cloud applications that could be used by enterprises and still scale to support cloud providers.  Add this to Cisco’s OpenStack distro and you really have something!

Ah, but there could be more.  Remember that a cloud stack is a resource collection (like virtualized servers) front-ended by a management interface.  How far a stretch would it be to take Cloupia from the role of “Quantum plugin” to manage the network or “Nova plugin” to manage servers, to BEING THE CLOUD STACK?  If Cisco were to simply expose the same APIs as OpenStack does, from Cloupia-based DevOps, what you would have would look exactly like a cloud stack but it would be Cisco’s alone, potentially a major differentiator in a market whose price reductions already cry “Commoditization!”  And if Cisco integrated its workflow tools into the mix, it could have a complete “service cloud” for network operators and a complete cloud/PaaS framework on which enterprises and developers could build the commercial apps of the future.
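The “expose the same APIs as OpenStack” idea is essentially a facade.  A toy sketch, with every class and method name hypothetical (nothing here reflects actual Cloupia or OpenStack interfaces):

```python
# Facade sketch: a DevOps engine exposes a cloud-stack-style surface,
# so callers can't tell they aren't talking to a conventional stack.

class DevOpsEngine:
    """Stand-in for a Cloupia-like provisioning engine."""
    def provision(self, kind, name):
        return {"kind": kind, "name": name, "state": "running"}

class CloudStackFacade:
    """Exposes cloud-stack-style calls but delegates everything to the
    DevOps engine underneath."""
    def __init__(self, engine):
        self._engine = engine

    def boot_server(self, name):          # shaped like a Nova-style call
        return self._engine.provision("server", name)

    def create_network(self, name):       # shaped like a Quantum-style call
        return self._engine.provision("network", name)

cloud = CloudStackFacade(DevOpsEngine())
vm = cloud.boot_server("web-1")
```

That’s the whole differentiation argument in miniature: the external surface stays standard, but everything behind it is proprietary and can carry whatever workflow and orchestration value Cisco builds in.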

I’ve been preaching DevOps and integration of service logic (workflow/orchestration) for a long time, and maybe I’m seeing the opportunity behind every bush at this point, but I don’t think so.  We need a distributed model of IT to make the cloud whole, and we need both distributable provisioning and workflow/orchestration to get a distributed model of IT.  Does Cisco have it?  Yes they do, but in pieces.  Will they integrate it?  We’ll see.


Lessons of the Fallen?

One of the most fascinating and frankly frustrating things about our networking industry is the fact that companies often get a truly great insight, lead the market in introducing it and even productizing it, and then let it languish while the rest of the industry moves the concept forward.  There’s nothing harder to take than a failure that didn’t have to happen, and there are just so darn many of them…with more to come.

One example of this is Verivue, a CDN provider that was perhaps one of the pioneers in developing a CDN solution that was orchestrable and cloud-ready.  For a good solid year and a half, they had what I judged to be the best CDN technology out there (we ran a feature on this in our tech journal Netwatcher last year).  Now, we’ve just heard that network operators want “Network Functions Virtualization”, which is the componentization and hosting of network features in the cloud.  And guess what?  CDNs are one of their primary targets!  So what happens?  Verivue, almost coincident with the NFV announcement, gets sold to Akamai for a sum the latter described as “inconsequential to our bottom line”.

Another example is Sycamore, a company that emerged at the height of the dot-com bubble and was a high flyer in networking.  Sycamore raised a ton of cash in the optical/DWDM space.  They had good people, good technology, and they were at the very edge of the space where the merger of optics and modern technology innovations like SDN were preparing to revolutionize the market and potentially drive their opportunity through the roof.  So what do they do?  Liquidate.

What, you might ask, are the common themes of these needless failures?  In my view, the primary problem for both was myopia.  A focus on the sale of the moment blinds you to the opportunities of the future.  Not that you can’t focus on sales; the key is to make sure that your sales people are doing sales and everyone else is doing product and strategic planning.  I think the proof of this thesis is that in both the companies I’ve cited, the signals of market change were very clear (I was writing about them for a full year, for example) by the time the die was cast, and yet they didn’t even make an attempt to position for these trends.  They never saw them, not really, because they had their eyes on the pavement, at the feet of their salespeople.

Every one of the current network vendors has their own demons to fight in this space.  Alcatel-Lucent has yet to articulate an SDN strategy when the company’s lack of servers makes it critical they build a bridge with software between network and cloud.  Cisco has said the vows with SDN while crossing its fingers at the altar, and yet if it wants to transform itself from being a network device vendor to something bigger, the transformation has to pass through the network/IT boundary that SDN defines.  Ericsson has the best SDN story of the major vendors, but despite repeated attempts to get details on it, they still won’t provide a slide deck to describe it.  HP is the most logical cloud giant in all the market, with network devices, servers, and software, but instead of leading in SDN they wait till October of this year to announce something, and then do a total me-too.  Juniper was the first vendor to talk about the cloud officially, the first vendor to do an NFV offload of network functionality, and has done nothing to develop either asset…I could go on.

I probably will, but not in this blog.  We are going to see some business transformations in networking driven by the same changes that could have empowered Sycamore and Verivue and didn’t.  Those changes are going to drive M&A, even business failures…again.  It will be unnecessary…again.  So to the media that is asking whether SDN is a threat to Cisco or some other vendor, I ask the counter-question.  Are they a threat to themselves?  Market conditions are as they are; they create problems or opportunities for firms according to the firms’ own measure of themselves.

Cisco’s Strength and Market Opportunities

Cisco’s earnings calls are almost always an education, and this one may be of particular significance given that the industry (and the global economy) is teetering on the edge of maybe-good, maybe-bad.  Cisco showed some good, and some of Cisco’s lessons are even applicable to other companies in the space.  Some aren’t and shouldn’t offer much hope to competitors.

To start with the basics, Cisco beat its quarterly revenue and profit goals (modestly).  North America and Asia were its strong markets and (obviously) Europe was its worst.  Margins were up slightly and service revenues outgained product revenues.  Guidance was in-line with estimates and so Cisco’s quarter and call were enough of a success to send the company’s stock up 6% or so in the aftermarket, and nearly 8% this morning.  Most any network CEO would have loved to have reported the quarter.

Looking deeper, there are other good signs, the best of which is that Cisco sold more servers than before, ramping up its business enough that it put pressure on overall margins because UCS has a typically lower margin than network gear.  Going forward, though, Cisco seemed to indicate it expected further margin compression not only from this source but from discounting to fend off competitors, obviously including Huawei.

Another telling sign is that Cisco did pretty well in most of the spaces where its competitors did badly.  That bears out what my fall surveys, just coming to an end now, are showing: Cisco is gaining strategic influence with buyers and leveraging its incumbency better than any other player in the space.  It doesn’t mean Cisco isn’t vulnerable; Huawei can still force it to trade margins for sales, and firms like Palo Alto can attack niche pieces of its business.  But to knock off Cisco broadly, a competitor is going to have to deal with a formidable sales presence and influence.  Only a dynamite strategy can do that, and nobody really seems to have one.

The bad signs?  First, order growth is lagging in EMEA, and obviously a large-scale slowdown there would be hard to make up elsewhere.  The public sector is also weak pretty much everywhere, and then there’s the issue of growth overall.  Cisco can take some market share from others, and likely has, but it’s not going to be easy for the company to reach double-digit growth except through a major market recovery or major changes in positioning.  UCS, while stronger than before, isn’t so much a general-purpose play as a play on the growing value of servers in network applications.  That value will only increase, but its gains will come at the expense of higher-priced, higher-margin network gear.

The overall indicator here shouts “Cisco needs to be more cloudy!” to me, not cloudy in the sense of “murky” but in the sense of riding the cloud wave more effectively.  The fact that UCS is “for the first time” pulling through network gear shows that Cisco is getting traction with servers in applications that mix them with traditional devices, which is really good, but Cisco needs to lead that space since it’s clearly counting on the trend to continue.  Cisco’s big cloud problem is SOFTWARE, and not just the spat with VMware.  Cisco demonstrated with that little spitting contest that it really doesn’t WANT a cloud software strategy; it would rather ride someone else’s wave.  That’s already difficult, as VMware again shows, and it’s only going to get harder.

The network is where Cisco is; the cloud is where Cisco needs to be in the lead, and that’s where Cisco is most vulnerable.  It comes back to creating a vision of a network-centric cloud, not unlike what’s implied by both SDN and NFV.  Right now, it would be fair to say that Ericsson leads the network vendors in SDN and that Alcatel-Lucent leads in NFV.  Neither has effectively articulated its leadership position, and neither has fully exploited and developed it.  Where Cisco demonstrably leads (besides influence and sales account control) is in servers.  The cloud is more servers than network equipment even now, so having servers is an enormous market advantage.  But having something that linked servers and networks in a techno-philosophical way would be the most powerful thing of all, and that space is still open.  There are startups nibbling at various key elements (see my 6WIND post yesterday as an example, and also my orchestration favorite M2MI), and somebody big could pick one or more of these up and suddenly have critical mass.  If I were John Chambers, I’d want to be doing my picking now, because those who don’t pick may get plucked down the road.


Uniting SDN and NFV

Big Switch today announced the release of its SDN products, and while there’s no question the company has some good stuff, I’m still not convinced it has ENOUGH stuff.  The fundamental question of SDN isn’t how control of devices is exercised, it’s how a system of devices can be organized into the coordinated behavior we call a “service”.  That question has gotten a lot more complicated recently as other initiatives aimed at offloading functionality from the network have developed.

I’ve blogged a lot over the past year about SDN, and recently I’ve blogged a lot about Network Function Virtualization (NFV).  One of the points I’ve been making on both is that while everyone seems to profess love for the notions, so far all we have is furtive blowing of kisses and no concrete proposals of marriage.  We need not only concrete proposals to implement each, we need something to implement BOTH.  There can’t be two independent network revolutions.

Amid all the network romanticism, I’ve finally found an example of substantive progress with a French company, 6WIND.  These guys are the first to present an explicit NFV position and obviously the first to link a solution to both SDN and NFV.  In fact, I think they may have a more interesting contribution to SDN than players who are normally associated with the concept.

Both SDN and NFV propose to host more network features in the cloud, and while there’s a lot of intuitive benefit in doing that, there’s also a potential problem.  Servers are not real-time devices, particularly when you add in operating systems that were designed to run business applications, not real-time processes.  How, then, can we expect them to perform when they have to take on real-time burdens?  Probably not well.

Interestingly, if we look at a modern server, we see an increasingly sophisticated multi-core processor with growing capabilities for handling I/O of all types, including communications.  As we start to give servers a real role in communication, either by implementing virtual network segmentation on them or (in the case of NFV) actually hosting network features on them, we run into real-time issues created not by the hardware but by the real-time inefficiencies of the software.  6WIND’s goal is to solve the real-time network problem for modern multi-core servers and appliances.

The idea behind 6WINDGate, as the product is known, is to place a set of “fast-path” packet tools between the chip-level communications support and the hypervisor or operating system.  These tools integrate with the OS to accelerate the handling of communications protocols by the higher layers, significantly improving packet-handling performance and (perhaps most important) making packet handling deterministic, meaning independent of the load placed on the hardware by the higher-layer elements.  You need determinism in the cloud, or the use of resources by one virtual machine will contaminate the performance of another in an unpredictable way, destroying the fundamental requirement of multi-tenancy: independence of application elements from each other.
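To make the fast-path idea concrete, here’s a minimal sketch of the dispatch pattern such a layer implements: packets on known flows are handled entirely below the OS stack, and only exceptions get punted upward.  Everything here (the flow table, the handler, the packet shape) is invented for illustration; it is not 6WIND’s actual API.

```python
# Toy model of fast-path/slow-path packet dispatch.  Real fast-path
# engines run on dedicated cores below the OS stack; the flow table
# and handlers below are illustrative assumptions, not 6WIND's API.

def slow_path(packet):
    """Exception path: hand the packet to the (simulated) OS stack."""
    return ("slow", packet["flow"])

def make_fast_path(flow_table):
    """Build a dispatcher that keeps known flows out of the OS stack."""
    def dispatch(packet):
        handler = flow_table.get(packet["flow"])
        if handler is not None:
            return ("fast", handler(packet))  # handled below the OS
        return slow_path(packet)              # punt unknown traffic up
    return dispatch

# One known flow ("a") with a trivial per-flow transformation.
dispatch = make_fast_path({"a": lambda p: p["flow"].upper()})
```

The point of the pattern, and of the determinism argument above, is that handling a known flow never touches anything the OS or its applications contend for; only the exception traffic competes with higher-layer load.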

This capability is useful in three ways: first, you can use it to improve how servers manage things like virtual networks; second, you can use it to host network-related applications like firewalls, NAT, DHCP, and even IMS components; and third, you can use it to create a network appliance, as some appliance vendors already do using 6WIND software.  The best thing is that when you visualize network functionality as a software overlay usable either by servers or by embedded-control appliances, you can decide where to host things, on a network device or a server, based on all the relevant technical and business considerations.  If operators really want to drive NFV to the ultimate hosting of all non-packet-forwarding functionality on servers, that’s fine.  If they want some tightly coupled things hosted on appliances optimized and located for maximum performance, that’s fine too.  In theory, a “service cloud” could be created from the software with network devices and servers both playing their optimized roles, and with a common orchestration process putting it all together.  That’s compatible with the vision the NFV operators present in their white paper, but at minimum it’s also a better way of migrating from current discrete-device networks to a more server-hosted model.
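As a purely hypothetical sketch of that host-or-appliance decision (the metrics, weights, and candidate names are all invented here; no NFV document specifies this), an orchestrator might score candidates on mixed technical and business axes:

```python
# Hypothetical placement scorer: pick where to host a network function
# given each candidate's latency, unit cost, and capacity headroom.
# Weights and candidate data are illustrative assumptions only.

def place(candidates, w_latency=0.5, w_cost=0.3, w_headroom=0.2):
    """Return the name of the candidate with the best weighted score."""
    def score(c):
        return (w_latency * (1.0 / (1.0 + c["latency_ms"]))  # lower latency scores higher
                + w_cost * (1.0 / (1.0 + c["cost"]))         # lower cost scores higher
                + w_headroom * c["headroom"])                # spare capacity scores higher
    return max(candidates, key=score)["name"]

candidates = [
    {"name": "edge-appliance", "latency_ms": 0.1, "cost": 8.0, "headroom": 0.2},
    {"name": "metro-server",   "latency_ms": 2.0, "cost": 1.0, "headroom": 0.8},
]
```

With the default latency-heavy weights this picks the appliance; shift the weights toward cost and the same function lands on the metro server, which is exactly the technical-versus-business trade described above.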

I think this is the start of something big: a trend to optimize servers and devices for the cloud in general and the “service cloud” of network operators in particular.  6WIND is showing that we can host network features, even those just above forwarding, right in the cloud, and that satisfies basic NFV goals.  But the company also shows that we can create servers and devices that provide exceptionally efficient under-the-OS-and-hypervisor services (dare we call these “underware”?).  These could serve as a model for how a basic cloud would evolve into a service cloud, because in theory any kind of service that needs a fast-path implementation could be hosted this way.

You can even consider whether generalized cloud trends like Nicira-style virtual networking might create a reason to think of a cloud server as very different from a VM server, with some fast-path cross-OS functionality like 6WIND’s being the difference.  Intuitively, I think we all believe that the cloud, because it’s different, will drive differences in server technology.  This may be how that happens.

For VMware, which bought Nicira, and for Brocade, which bought Vyatta, 6WIND poses some compelling questions.  Can we expect servers, which have for decades been shedding communications functionality because of performance issues, to suddenly embrace that functionality with no changes?  It seems unlikely, and if so, those companies need to be thinking about these same fast-path issues, and quickly.

The Three Legs of The Future of Networking

Most everyone who uses telecom these days knows that we’re not in our mothers’ network anymore, but the changes have come somewhat gradually, and it’s hard to realize that we’re only 60 years from the first coast-to-coast non-operator-assisted call.  Some recent events may help us come to terms with reality, though.

Verizon and AT&T are both asking regulators to revamp the rules on phone service.  The Telecom Act and earlier legislation oblige the RBOCs to offer PSTN services as the provider of last resort, and both operators want that rule changed so that wireless or IP services could be offered instead.  AT&T is also asking to eliminate many of the old rules on wholesaling of copper, and it seems pretty clear at this point that the old copper plant’s days are numbered.  That doesn’t mean the end of copper and DSL, however.  Operators, including those here, tell me they still expect fiber/copper hybrid deployments, particularly in AT&T’s region, where the focus will be on using vectored DSL to deliver U-verse.  Home-run copper is probably dead, and so is a lot of the older non-fiber digital loop carrier plant, simply because the loops are too long or otherwise unsuitable for high-speed digital services.

There’s an intersection between fiber and wireless brewing, too.  Verizon is going to add carrier Ethernet capability to FiOS, and I’m told a major mission for this will be the backhaul needed for smaller cells.  Operators are more and more interested in small-cell service, largely because it improves spectrum efficiency: it supports more users by dividing them among more cells and hauling an independent feed to each.  The trick, of course, is getting those feeds to the cells without exploding costs, and FiOS is deployed in a lot of the areas where small-cell deployments are targeted.  Yes, there are business sites also passed by FiOS, but I think most of them would be happy with the business version of FiOS data.  Carrier Ethernet seems more targeted at backhaul.

I think the cloud is going to create the third leg of this colossal industry stool of network change.  Network architecture is governed by traffic distribution, and more and more traffic is being short-stopped in the metro area.  Content alone is a major factor; CDNs cache nearly all valuable content on a per-metro basis.  The cloud will concentrate user access to information on resources that, if not actually in the metro area, are peered with it.  Increasingly, traffic to support complex requests will take place within the cloud rather than between user and cloud, for the simple reason that if cloud computing is logical at all, it’s logical to assume it would displace computational activities at the end of a mobile tether.  After all, such a move would also reduce edge traffic and user mobile data charges.

The effect of all of this is to accentuate the role of Ethernet in carrier networking, not specifically as a subscriber offering but as an infrastructure element.  You don’t need IP routing in the access network; you just need to carry IP over it.  Similarly, inside the cloud you’re more likely to consume Ethernet services because you have resource-to-resource connections.  Recall that even in the IP core we expect to see more agile optics than core routers, and all of this is why we believe there’s a pronounced shift toward Ethernet switches and that future IP elements will tend to look more and more like BRASs.

If you add in the SDN and Network Functions Virtualization (NFV) initiatives, you see why commoditization of network-layer and Ethernet equipment is likely.  More and more, Ethernet is an aggregation technology and a resource-network technology.  In these missions, which have limited connectivity needs, you can afford to forgo adaptive discovery in favor of centralized traffic management.  It may also explain why NFV proponents put a premium on higher-level IP elements (firewalls, etc.); the functionality of the lower layers is already dumbing down, and the prices will likely follow.
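As a toy illustration of that trade (the topology, link costs, and function names here are invented, not any vendor’s controller API): with a global view of the network, paths can be computed centrally with ordinary shortest-path logic instead of being discovered adaptively by each device.

```python
import heapq

def shortest_path(topology, src, dst):
    """Dijkstra over a centrally held topology view: (cost, node list) for src->dst."""
    queue = [(0, src, [src])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for neighbor, link_cost in topology.get(node, {}).items():
            if neighbor not in seen:
                heapq.heappush(queue, (cost + link_cost, neighbor, path + [neighbor]))
    return float("inf"), []  # unreachable

# Invented aggregation topology: an access node feeds two metro Ethernet
# nodes that reach the core at different link costs.
topology = {
    "access": {"metro-1": 1, "metro-2": 4},
    "metro-1": {"core": 2},
    "metro-2": {"core": 1},
    "core": {},
}
```

A controller holding this view computes the access-to-core path once and pushes forwarding state down to the switches, which is why the switches themselves can shed adaptive-discovery intelligence.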

But only “likely”; there are still opportunities to build devices for the future age.  Every new model of networking has new requirements and values new features.  The real race today should be to identify these new things and seize them while there’s no competition.  Everyone will get that eventually, but of course only one will get it first.

Cisco, which reports earnings tomorrow, certainly has a lot at stake in “getting it”.  Credit Suisse has noted that Cisco is embracing the transition from a growth company to a value company in a stock sense, but you can’t be a value company if you don’t do what’s valuable.  The CS report names SDN, the Brocade/Vyatta NFV-like drive, and Huawei as threats, and in one sense it’s right.  In another it’s wrong; the issue is that bit-pushing isn’t going to be valuable or differentiable, period.  There’s no shortage of initiatives and competitors striving to make that truth real, so it hardly matters which of them prevails.

Look at the news of the California university bid: Cisco overbid on a deal by a hundred million dollars.  You don’t overbid by that much unless you have no idea what reality is or you had a super-differentiating angle you somehow forgot to mention.  The university said it all when it said its requirements were nothing special.  Nobody’s requirements are really that special these days, and that’s getting worse, not better.  Cisco is a cloud company, or needs to be.  Likewise all of Cisco’s competitors.