A Not-SDN Company to Complete Cisco’s Not-SDN Strategy

And it just gets better, SDN-wise, I say with considerable irony.  Cisco did a virtual-data-center network deal yesterday (vCider) and of course everyone is out there saying that it’s going to help Cisco with its SDN approach, perhaps to be integrated in Cisco switches!  Even for a market that’s been more nonsense than substance, this is a new low.

First, vCider, like Nicira, isn’t an SDN strategy.  It’s an overlay virtual network designed to partition LANs, first to eliminate the limitation on VLAN/VXLAN size (which most people won’t hit), and second to unite the management of network virtualization more easily with the cloud, given that many data centers are multi-vendor.
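For scale, the “size” limitation in question is the 12-bit VLAN ID versus VXLAN’s 24-bit segment ID.  A quick sketch of the arithmetic shows why most people won’t hit the VLAN wall, and why the few who do (big multi-tenant clouds) care so much:

```python
# VLAN IDs are 12 bits; VXLAN network identifiers (VNIs) are 24 bits.
VLAN_ID_BITS = 12
VXLAN_VNI_BITS = 24

vlan_segments = 2 ** VLAN_ID_BITS    # 4096 possible VLANs (a few are reserved)
vxlan_segments = 2 ** VXLAN_VNI_BITS # roughly 16.7 million virtual networks

print(vlan_segments)                 # 4096
print(vxlan_segments)                # 16777216
```

An enterprise data center rarely needs thousands of segments; a cloud provider giving every tenant several virtual networks runs out of 4096 quickly.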

Second, Cisco doesn’t want to do an SDN; it wants to achieve software union with legacy networks via APIs.  Thus, while it’s wrong to say that vCider is an SDN play, that’s good, because Cisco wants an anti-SDN play.

Third, where the heck does fitting this into Nexus switches come from?  First, it’s overlay virtual networking so it runs over and not on the network.  Second, it’s written to be an adjunct to the cloud stack software that runs (you guessed it) in the cloud.  And third, Cisco says that it’s going to be integrated into Cisco’s implementation of the OpenStack Quantum interface, which is part of OpenStack and the cloud.  Is that enough proof?  Cisco is going to use vCider to link a cloud model of data center virtualization with legacy LAN technology to avoid doing more.

When is this industry (particularly the media) going to accept the fact that everything that’s colored yellow isn’t a banana?  SDN is more than a claim.  And what’s truly sad is that I just spent three days listening to real SDN stuff, things that were insightful, useful, sensible, and even revolutionary.  Over the next couple of weeks I hope to be able to share all of it, but for now let me say that there is in fact a mission for SDN, there’s a real model for cloud-based services that really works, and there’s a real unity of cloud and SDN that magnifies both.  Given that we’re reduced to washing announcements with both cloud and SDN, wouldn’t it be nice to actually have something to cover that’s substantive?

The test for our friends in the media starts now.  Substance, after all, isn’t easy.

OpenStack: Criticized for the Wrong Reason?

Gartner has published a report that’s very critical of OpenStack, and there’s been some sharp push-back from the vendors involved.  It would be tempting to say that the truth is between the poles of the discussion, but I think the truth may be orthogonal to both positions, somewhere out there in the aether to which it’s been consigned by hype. The big problem I have with this whole discussion is that it’s about what the conditions are in Atlanta when the problem is getting from New York to LA.

OpenStack was cynical in its origins.  It was founded by Rackspace in an attempt to derail Amazon’s runaway success with its IaaS EC2 service.  Wrap yourself in the Righteous Cloak of Open Source, create a consortium, and you make everyone who doesn’t join—including Amazon, who clearly wouldn’t—look bad.  It’s fair to say that the early process was all about “Anyone but Amazon”, and I’ve noted that in blogs.

I’ve also noted that OpenStack now appears to have genuine momentum and is in my view the main point of thought leadership in the cloud software stack area.  The open-source framework is necessarily augmented by technology-specific (and therefore vendor-specific) implementation stubs, but that doesn’t mean that the process is proprietary, only that it accommodates the fragmented nature of the equipment market.  The critical positive for OpenStack is that it appears to have the beginnings of a useful and general model of virtualization.  Quantum, the network interface, is particularly critical because it’s the only thing in cloud stack software so far that attempts to relate the network explicitly to the cloud (some DevOps tools also try this).
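As a concrete sketch of what “relating the network to the cloud” looks like at the API level, these are roughly the request bodies the Quantum v2.0 REST interface takes to create a tenant network and attach a subnet to it.  The endpoint, authentication, and the UUID are placeholders; this is an illustration of the model, not a full client:

```python
import json

def network_request(name, admin_state_up=True):
    """Body for POST /v2.0/networks: a named L2 segment for a tenant."""
    return {"network": {"name": name, "admin_state_up": admin_state_up}}

def subnet_request(network_id, cidr, ip_version=4):
    """Body for POST /v2.0/subnets: an IP address block bound to that network."""
    return {"subnet": {"network_id": network_id,
                       "cidr": cidr,
                       "ip_version": ip_version}}

# The cloud stack composes these calls at provisioning time; the UUID
# below is a placeholder for the id returned by the network call.
print(json.dumps(network_request("tenant-net-1")))
print(json.dumps(subnet_request("NETWORK-UUID", "10.0.0.0/24")))
```

The point is that the network segment is a first-class cloud object, created and destroyed alongside the VMs that use it, rather than something pre-provisioned by a separate network team.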

But don’t take this to mean I’m a wide-eyed enthusiast with respect to OpenStack.  My view of the cloud is simple; if it’s a true revolution that justifies accommodations like IaaS to “convert” applications to it, then it must have a native architecture that could be supported by applications directly.  Such an architecture would look like a hybrid of SOA and PaaS, and OpenStack has no specific provisions for either of these things.  I like the “container” model that’s offered in some Linux, Solaris, and even BSD derivatives.  It doesn’t have the same solid buttressing against interaction that you find in a virtual machine, but it does have an easier evolutionary path toward the unified SOA/PaaS vision that I believe will eventually characterize the cloud, and lead to its success.  The challenge is making one of these “container” cloud models into a widely accepted cloud contender.  Joyent has done the best job with this so far, but it doesn’t yet articulate the unified model even though it has probably the most tools to support it of any of the current cloud software crop.

The most basic test of any concept in technology is where it takes the market, not how it takes the market to a destination it doesn’t define or justify.  The responsible surveys on the cloud have shown that the big issue inhibiting adoption isn’t security or compliance or technology, it’s that the buyer can’t make a business case.  Virtualization is server consolidation.  IaaS is hosted virtualization, a way to run applications that expect bare-iron execution in a cloud framework.  So to consume the cloud, we continue to write non-cloud apps and then run hosted virtualization to get them on the cloud.  Color me naïve, perhaps, but that seems to lack elegance at best and sanity at worst.

The cloud is a virtual-resource model.  The best cloud strategy is the one that virtualizes the resources the best, and then matches the model of virtualization with the application in a similarly optimal way.  No waste from production to consumption.  We can get there from any model, including OpenStack, but we’ll get there fastest if we stop carping about non-issues and face the reality that a cloud application isn’t one that just runs on the cloud, it’s one that’s written for the cloud.

A decent gain in stocks may have been throttled by HP’s announcement that it was hoping, by 2016, to be able to match revenue growth to GDP growth.  Hey, wasn’t tech supposed to be a growth industry?  Actually, the forecast isn’t terribly out of line with the longer-term trendline, which is that technology spending has been growing at less than the typical premium over GDP growth for a decade now.  What tech needs is a new benefit case, a paradigm that would improve productivity more and thus accelerate tech spending by raising the “R” in ROI.  Is the cloud that something?  We think it could be, but we’ll have to see if the cloud’s execution can rise above the hype.

 

The SDN that Was, and the One that Wasn’t

Well, it seems like the SDN action never ends.  We have two stories today; the SDN strategy that sort-of-was, and the SDN strategy that probably isn’t.

Let’s start with the “isn’t”.  Cisco announced some optical enhancements to its CRS line, and what’s interesting about them is that they are aimed at an area where SDN technology is already active, and yet Cisco isn’t talking SDN when they chat up their latest optical-integration vision.  They’ve jumped off the GMPLS starting point and done pretty much what they suggested they’d do, which is to achieve SDN goals using legacy technology where possible.

The question this raises is whether Cisco really has any desire to support the totality of SDN principles, which would include the elimination of distributed, adaptive, discovery-based forwarding in favor of central control.  You can’t fairly say that you HAVE to do this to support SDN because it’s not at all clear what supporting SDN means (any more than it’s clear what the cloud is).  It may be that Cisco sees what I’ve described above, which is a major evolution of the cloud, and also sees what I noted yesterday in my Juniper blog—that SDNs are the union of the cloud and the network.  If progress toward achieving the cloud’s goals is difficult, then uniting it with the network is at least as hard.  It may be that a proprietary strategy that bypasses all the disorder of the consensus process that hampers standards of any sort is the fastest path to market.  It may be that Cisco is telling us it intends to take that path.

Now we come to our “sort-of” with HP.  The company is an enterprise play in networking, but it’s also a public cloud provider, a supporter of OpenStack, and a supplier of servers.  It’s logical it would take some SDN position.  HP had OpenFlow switches before, and what it now appears to be adding is an OpenFlow controller and some related software tools that can gather information from HP devices via a RESTful interface, then send OpenFlow commands that presumably will automate route-threading.  The key element, the HP Virtual Application Networks SDN Controller (VANSDNC from now on!), is available either as an appliance or as software, and it fits into my SDN model in the same general way that the combination of the Topologizer, SDN Central, and the Cloudifier does.
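The pattern being described (pull a topology map, pick a path, push per-switch forwarding rules) can be sketched generically.  Everything below is a hypothetical stand-in: the adjacency-dict topology substitutes for HP’s RESTful feed, whose format isn’t public, and the flow-entry fields are illustrative rather than any vendor’s actual schema:

```python
from collections import deque

def shortest_path(topology, src, dst):
    """Breadth-first search over the topology map; returns a hop list or None."""
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nbr in topology.get(path[-1], []):
            if nbr not in seen:
                seen.add(nbr)
                queue.append(path + [nbr])
    return None

def flow_entries(path, match):
    """One forwarding rule per switch along the path (route-threading)."""
    return [{"switch": hop, "match": match, "out": nxt}
            for hop, nxt in zip(path, path[1:])]

# A toy four-switch topology standing in for the collected map.
topo = {"s1": ["s2", "s3"], "s2": ["s4"], "s3": ["s4"], "s4": []}
path = shortest_path(topo, "s1", "s4")
rules = flow_entries(path, {"dst_ip": "10.0.0.5"})
```

Every open question below about the real product (what feeds the map, what drives path selection) lives in one of these two functions, which is why the missing detail matters.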

The good news here is that HP appears to have taken the need for extra functionality above OpenFlow seriously, and they also say their approach extends beyond the data center to the network at large (to “the desktop” in their material).  In their material they show an example of the VANSDNC linking to a cloud provisioning or DevOps framework as well as supporting “SDN applications” which are designed to specifically exploit SDN capabilities.  Two initial ones include a cloud data center virtual network setup app and a security app.

The bad news is in all the qualifications like “apparently” and “say”.  There is simply not enough material provided to get a full understanding of the potential of the products being announced.  So while it seems that they can get topology and status information from the network using a pre-standard RESTful interface, it’s not clear whether “the network” means Ethernet, IP or a hybrid of both and whether it’s for HP only or for other vendors.

What’s missing, detail-wise?  We have no specific picture of the functionality of the element of the VANSDNC that’s responsible for collecting topology data and creating a map of the network, or that responsible for converting that map to route information.  Can it do path selection?  If so based on what?  What products, protocols, vendors, can contribute information to the map?  If it’s HP only, then how can other network elements be introduced to create a complete topology?  And when HP says it can extend SDNs to the desktop, what limits are there over the WAN services that might be involved?

There’s a lot of good here.  I do like the notion of SDN applications.  The HP briefing material for their application virtual networking does suggest that HP is taking virtual networking from a pure multi-tenant play to an application segregation play, which I think is smart.  But it’s hard to escape the notion that HP has built an SDN to suit its own cloud offerings, public and private, and not worried too much about SDNs in general.  And, of course, the strategy can’t really address the WAN unless there’s some secret tool set involved.  Thus, this is a good move, and one that will put pressure on competitors.

Like Cisco.  HP takes some swipes at Cisco’s “looks like an SDN and quacks like an SDN but isn’t ever going to be one” positioning, and this may make things harder for Cisco as they try to fend off “real” SDN competitors in both the enterprise and carrier spaces.

 

Juniper’s New Edge: Edgy Enough?

Juniper’s MX has been one of its most successful products, and it’s not surprising that the company wants to build on that success.  On Tuesday they announced a series of enhancements to the MX “Universal Edge”, all designed to host service intelligence close to the network edge.  That’s a worthy goal, and in my view it’s where the market is heading.  The question is whether Juniper’s approach to that goal is the optimum one, and that’s harder to tell than it should be.

There seem to be two pieces to Juniper’s announcement: a set of MX applications that enhance the Universal Edge concept by extending network capabilities, and a set of applications that actually host “non-network” functionality on the MX.  This is supported via a new x86 line card that runs Linux.  The keynote application for this card is video caching or CDN functionality, though it also appears that you could use it for other types of content like advertising.

Some of the stuff that’s proposed for the Universal Edge is network-related.  One set of capabilities allows operators to host what would ordinarily be “subnet services” for smaller branch offices, including things like DHCP, firewall, NAT, and monitoring services.  This seems logical and could also facilitate virtual networking since those kinds of services are what augment basic Level 2 LAN services in the Quantum virtual network model of OpenStack.  Juniper does say the new products facilitate cloud services, but that seems aimed at VPN linking to cloud services.

I don’t have any issue with making the network edge more network-functional, but I do question whether it’s a good idea to host CDN caching on network devices.  Operators have characterized network boxes as “the most expensive real estate in our infrastructure” and indicated that they are generally more interested in dumbing these boxes down than in adding server applications to them (that’s what’s behind the SDN interest, after all).  They tell me that they would prefer a software-CDN approach, one that can be fit onto any suitable set of servers, to even an appliance-based model.  That seems to speak against a device-hosted cache.  Then there’s the lesson of Cisco, whose AON board was aimed at non-network service features—and never got any market traction.  You could argue that AON was too early, but that’s a hard argument to prove given overall trends toward cloud-hosting features.

The future is the cloud and the cloud is software and servers, which means the future of the network edge is cloud facilitation.  The new announcement is promoted on Juniper’s homepage with the obligatory cloud billboard, and there does seem to be some movement here by Juniper toward creating a cloud-ready network.  What’s missing is the specific notion of what such a network would look like and how it would tie to the SDN stuff that most everyone sees as the way the network adapts to the cloud.  Juniper has made two general public introductions to its vision of SDN but neither of them has included a specific product architecture or roadmap.  The most recent was a virtual event that promoted the notion of a single aggregate view of multiple devices, which sounds a lot like the Node Unifier for the MX that’s included in this announcement.  Thus, it’s looking like SDN-by-synthesis, and that makes it harder to understand the positioning.

It’s also less effective for Juniper, so it’s hard for me to understand why they’d take the risk.  It’s hard not to see two almost-competing internal-company threads here.  On the one hand Juniper seems determined to talk about boxes as the core of the future, including all the usual hype about how this or that feature accelerates time to market or reduces TCO.  This is a very tactical approach that doesn’t lend itself to presenting grand ideas.  On the other, the company does seem to realize that it needs some architectural meat on the box bones, especially in the areas of cloud and SDN.  But you can’t create that message by leaking little pieces of it in box stories.  It would sure be nice if these threads converged somehow to create tactics that clearly built into a longer-term infrastructure strategy that made sense.

Light Reading reports a Juniper layoff, and Tech Target says that it’s directed at least somewhat toward the QFabric team.  If true, that would be a real surprise given Juniper’s cloud positioning.  Their primary asset in the cloud is the data center network, and QFabric could be combined with PTX to create an optical cloud virtual data center that could span the globe and offer a real differentiator to the company.  Fabrics are particularly strong in hosting virtual networks and have a strong link with SDN (which technology would be the key to link QFabric and PTX in the first place).  On the other hand, QFabric is regarded by most financial analysts as problematic and I agree the positioning of the product has never lived up to the technology.  Which only shows how important it is to understand those pesky architectures and markets.  Listening, Juniper?

 

Is Europe’s Cloud Paper Worth More than Paper?

The EC has released a policy paper on cloud computing, and the thrust is that the cloud can be a major benefit to Europe, a view reflected in the paper’s “Unleashing the Potential of Cloud Computing in Europe” title.  I’d guess you wouldn’t be surprised to know that there are aspects of the document that I find troubling, but the good news is that it nets out to a recommendation for a coordinated industry/regulatory policy on the cloud evolution.  That would be a significant benefit, and possibly one that would extend beyond Europe.

The troubling aspects of the document, for me, center on the way that the impact of the cloud is weighed.  On one hand the document acknowledges that cost savings are the major driver for the cloud, and at the same time proposes that the cloud would add to Eurozone GDP.  This seems to mirror the industry’s tendency to say that the cloud’s savings will explode and with it the quantity of equipment consumed and the salaries generated.  Where then do the savings come from?  This contradiction is endemic to our view of the cloud so we can hardly blame Europe.  Better yet, the contradiction is resolved by the fact that one of the major precepts of the cloud is false.  Savings aren’t going to be the major driver.

ROI, roughly, is the relationship between benefit and cost.  In a pure savings-driven market, ROI can be high but only at the cost of reducing investment since the benefit case is static.  The question is whether “pure savings-driven market” correctly characterizes the cloud market, and I don’t think it does.
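A toy arithmetic sketch of that distinction, with purely illustrative numbers: when the only benefit is a fixed pool of cost you can take out, ROI is attractive only while spending stays small; when the benefit grows with deployment, ROI holds up as investment expands.

```python
def roi(benefit, cost):
    """ROI as the simple ratio of benefit to cost."""
    return benefit / cost

# Savings-driven case: the benefit is a static pool of removable cost.
SAVINGS_POOL = 100.0

print(roi(SAVINGS_POOL, 50.0))    # 2.0  -- looks great at low spend
print(roi(SAVINGS_POOL, 200.0))   # 0.5  -- collapses as spend grows

# Benefit-driven case (illustrative rate): productivity gains scale
# with the deployment that enables them.
def productivity_benefit(spend, gain_rate=1.5):
    return spend * gain_rate

print(roi(productivity_benefit(200.0), 200.0))  # 1.5 at any scale
```

That’s the whole argument in three divisions: a static benefit case caps investment, a scaling one doesn’t.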

“The cloud” is a shift in IT architecture that facilitates the adapting of computing tools to an increasingly as-needed model.  I remember well the days of batch processing and punched cards; we had IT but it captured business transactions post-facto.  When we shifted from batch to interactive, to what’s called online transaction processing, we shifted to capturing business transactions in real time.  There, for a while, things stopped.

We all knew what the future focus would have to be; once you’re capturing and supporting business transactions in real time, you can improve only by moving your support beyond business transactions to business activity.  We move to supporting work processes, in short.  And the same kind of shift is happening in the consumer market, where mobile broadband is moving online services from something you go to your computer desk to consume to something you consume every minute of your life as an integrated element of your daily routine.  THAT is what cloud computing is going to do; move IT into a future that we’re already committed to both for workers and for consumers.

For the EU, this shift of the fundamental value of the cloud from savings to enhanced work and lifestyle means that it doesn’t have to suck the cost out of the current model and create a cheaper one to succeed.  That means the cloud COULD grow GDP, and that’s good for the EU in goal-realization terms…sort of.

The thing I think is most interesting about the document is the findings of interviews with SMEs, which are quoted on page 16.  According to that group the thing they need most from the cloud is “objective and understandable information”, and this correlates well with what I get in surveys.  Cloud prospects are inundated with cloud promotion rather than cloud information, and it’s difficult to understand the source of benefits or the totality of costs without actually doing a trial.  The smaller the business the greater the risk that the trial will be too costly to undertake or that the resources needed for it would have to be hired or “rented”.  We’re still trapped in the hype of the cloud even today, and it’s probably not going to get better any time soon.

In summary, the EU paper doesn’t seem to move the ball much.  The data they cite is from traditional and largely US sources.  The applications and visions they propose are straight out of the cloud hype, and the regulatory insight they offer is minimal.  Yes, the paper is the tip of a process that could advance things, but I have to wonder whether it’s sensible to launch any sort of standards or bureaucratic process in the cloud space.  It could never keep up, and so can never be relevant.

We’re going to do a cloud computing and related topics issue in Netwatcher in November, and it will include our latest modeling of the cloud market, with sizing and pace of adoption, and even suggest where we might see the best overall performance by vertical.