We’ve Met the Cloud and It is All of Us!

The end of a week is a good time to reflect, and there’s nothing better to reflect on than that huge, complex, disorderly transition of global IT and network services that we call “the cloud”.  It’s wonderful to be fuzzy, I guess, because when you are, your boundaries can be smeared around at will to envelop anything and everything.  That’s true of the cloud in the short term; it’s what gives rise to the notion of “cloudwashing”.  In the long term, it’s clear to anyone who looks objectively that the cloud IS anything and everything, so no pretense is needed.  What’s at issue is whether a given “thing” (be it vendor or product) has a place in the cloud or becomes a “nothing” by definition.

The big winner in the cloud of the future is the concept of “orchestration”.  This is the highly evocative term for the coordination of both cloud resources and cloud application components, combined to create a harmonious experience for a worker or consumer.  Clearly you have to apply orchestration to resources, or virtual things stay virtual and nobody gets anything but promises.  What’s not been clear but needs to be is that when you take component elements of functionality and host them anywhere, whether they’re assigned static or dynamic resources, you also have to orchestrate those components to create something cohesive.  So what characterizes the cloud is “multi-dimensional orchestration”.
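To make the two dimensions concrete, here’s a minimal TypeScript sketch of the idea (all names and structures are invented for illustration, not any product’s actual API): one pass binds components to resources, and a second pass wires the placed components to each other.

```typescript
// Toy "multi-dimensional orchestration": dimension one places components on
// resources; dimension two connects the placed components into a service.
// Everything here is illustrative only.
interface Component { name: string; connectsTo: string[]; }
interface Host { id: string; freeSlots: number; }

function orchestrate(components: Component[], hosts: Host[]): Map<string, string> {
  const placement = new Map<string, string>();

  // Dimension 1: resource orchestration -- assign each component a host.
  for (const c of components) {
    const host = hosts.find(h => h.freeSlots > 0);
    if (!host) throw new Error(`no capacity for ${c.name}`);
    host.freeSlots--;
    placement.set(c.name, host.id);
  }

  // Dimension 2: component orchestration -- wire the placed components.
  for (const c of components) {
    for (const peer of c.connectsTo) {
      console.log(`${c.name}@${placement.get(c.name)} -> ${peer}@${placement.get(peer) ?? "unplaced"}`);
    }
  }
  return placement;
}

// Usage: a two-component "service" spread across two hosts.
orchestrate(
  [{ name: "web", connectsTo: ["db"] }, { name: "db", connectsTo: [] }],
  [{ id: "host-a", freeSlots: 1 }, { id: "host-b", freeSlots: 1 }]
);
```

Leave either pass out and you get either idle virtual resources or disconnected components, which is the point of calling the cloud’s orchestration multi-dimensional.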

The big at-least-potential loser in the cloud of the future is, surprisingly, the cloud of the present.  We have in the cloud a vision of a new IT world that’s confronting the leavings of the old one.  There’s little or nothing out there today that can fairly be called a “cloud application”.  What we have are non-cloud applications that somebody has shoehorned into running in the cloud.  If the cloud were mature and real in its final form, you’d not need to argue about things like operating systems or virtualization.  All that stuff is under the covers.  All future cloud applications and experiences will be assembled by orchestrating service elements, exposed as APIs, by processes that live in something we don’t care about.  IaaS, the kingpin of the cloud today, is a way of getting VM hosting to facilitate server-consolidation-like applications that, once we realize what the cloud means, will never be written again.  If this low-level stuff is important then the cloud is too complicated to ever justify itself.  The cloud’s own evolution will make it transparent, as it has to be.  That means that cloud providers have to treat the current market as what it is, a transition strategy to reach the future market.  That, in turn, means that their offerings have to address that future now, and gain ownership of its issues.

I think there are some basic truths we can rely on in doing this.  First, there will be a number of future cloud platforms, just as there are a number of OSs today.  What made current OS leaders into leaders was getting out there quickly and gaining developer and user support.  Same for the cloud.  Second, the cloud of the future won’t need virtualization to run on, and so virtualization has to be a feature and not a focus.  Third, you can’t differentiate cloud offerings on management interfaces.  The purpose of the cloud is to serve, and only services can differentiate you there.

 

A New Cloud Visionary?

Joyent, a cloud provider and cloud software vendor I’ve talked about a little in the past, has released a new version of its SmartOS stack (Joyent7) that is making the distinction between the Joyent approach and that of traditional clouds like Amazon or OpenStack a bit clearer.  Not clear enough, though, I think.  There’s still a bit of an articulation issue here.

Most clouds today are built on hypervisor technology, and the cloud software is essentially a management front-end that allocates machine images to VM instances.  This approach is usually characterized by a lot of “hypervisor-agnostic” comments in the marketing material.  The benefit is that users of virtualization, or those doing server consolidation to dig out of the hole left by past IT planning disorder, can bring things home more easily.  The problem is that it’s not really the cloud at all.

I think “the cloud” is a new IT paradigm built on a set of services that collectively form something like a cloud-distributed OS.  The problem with the traditional hypervisor clouds is that they don’t really have any platform for services.  Sure, you can build a service, stick it in a VM, and let it live in a virtualized instance, but you’re replicating all the middleware and OS overhead for every one you deploy.  That’s bad enough for IaaS, but it’s fatal if you’re trying to build a cohesive cloud that supports cloud-native apps, which is where we’re heading if there’s any substance to the cloud at all.

Joyent’s approach has been to take what’s arguably the best real-time OS that ever was, Solaris, and use one of its open-source forms (Illumos) to write a true cloud OS that runs on the bare metal.  The KVM hypervisor lets the system run machine images, IaaS-style, in “zones” that are fairly well isolated (not quite as well as hypervisor-on-hardware systems would isolate them) but that are elastic in resource allocation, so they’re not wasteful.  Native cloud apps can be written on the platform directly, but Joyent is also a key backer of node.js, a server platform programmed in Javascript.  Node.js can be used to build a bunch of cloud tools and services, and so could be said to be one of the first, if not THE first, cloud languages.  SmartOS also inherits the ZFS file system, the most scalable of all the modern file frameworks for OSs.
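To show why the node.js model fits this picture, here’s a minimal sketch of the kind of service it encourages, written in TypeScript on Node’s standard http module (the service name and port are invented for illustration): a single event-driven process that exposes an API and leans on the platform beneath it for everything else.

```typescript
// A minimal, illustrative node.js-style service: one event-driven process,
// with no per-instance guest OS or middleware stack to replicate.  The
// service name and port number are placeholders, not anyone's real service.
import * as http from "http";

const service = http.createServer((req, res) => {
  // The platform (the cloud OS underneath), not a dedicated VM image,
  // supplies everything below this request handler.
  res.writeHead(200, { "Content-Type": "application/json" });
  res.end(JSON.stringify({ service: "hello-cloud", path: req.url }));
});

service.listen(8080, () => console.log("service listening on port 8080"));
```

Run dozens of these on a shared cloud OS and the platform overhead stays roughly constant; run each one in its own VM and that overhead multiplies with every instance.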

In IaaS applications, hosting KVM and IaaS machine images in zones is much less resource-intensive because it doesn’t waste resources on fixed allocations.  Zones can also contain native applications, and the management of the two is essentially the same.  Best of all, the native or node.js apps are running right on the cloud OS, so no matter how many of them there are, you don’t have a bunch of platform software replicated with each image to clog things up.

You can get this all in three forms: a public cloud service based on the software, a server platform (SmartOS), and a data center complex with a bunch of added integration tools (SmartDataCenter).  With the new Joyent7 version it should be possible to create hybrid clouds nearly seamlessly and also to create federations of providers (Joyent’s Global Cloud Network is such a federation).

All this good stuff would make you wonder why Joyent isn’t a household word, and even more so when you reflect that they are the leading cloud platform mentioned by operators in our surveys (and have been for a year).  The reason is that the company, like all tech startups, is hopelessly mired in technobabble and can’t get the message out in digestible form.  This year they got a major funding infusion, and the new investors are gradually sorting out the organizational issues.  They’ve just added a new CEO, Henry Wasik, who came from Force10, successfully sold to Dell.  He had a good reputation as a tech leader, and obviously a better one as a guy who can flip a company for a profit.  Likely the new investors hope for similar heady results, but to get them it’s clear that Joyent is going to have to do a heck of a lot more to merchandise itself.  It’s probably the most future-directed of all the cloud platforms, which means that in my view it’s shooting less at Amazon’s narrow EC2 concept than at Amazon’s stealthy, AWS-API-centric evolution toward a service-based cloud.  Saying that would be a good place to start your image-buffing, Joyent.  IaaS is a transition strategy only, and winning there just hangs you up short of the goal line.

 

What Do Carrier Ethernet and SDN/NFV Have in Common?

The Carrier Ethernet Forum isn’t where you’d expect to find things like SDN and Network Functions Virtualization, but there was plenty of both there, and that’s likely an important step for both the group and the two technology initiatives.  SDN and NFV are getting the buzz, enough that groups like CEF think it’s important to jump on the bandwagon.  That could be the start of the classic bandwagon effect for SDN and NFV.

The challenge, as an AT&T spokesperson said in her talk, is getting SDN defined in a way that actually meets its goals.  As I’ve pointed out repeatedly, OpenFlow is a tentative first kiss in the SDN marriage, and what’s getting married (whether we realize it or not) is the cloud and the network.  I think a lot of people at CEF know that, which is why the body is looking harder at cloud-ready APIs.  But CEF also illustrates the risk of Balkanization of SDN standards as a host of standards groups gets into the act.

Despite the risk, the CEF interest in SDN might be critical, because the reality is that SDN is likely to be very much an “Ethernet” technology.  First, virtualization principles overall are more likely to start in the data center because of the needs of cloud multi-tenancy.  Second, if you’re going to dumb down a network device that forwards packets and slave forwarding to a central process, you’re left with something that’s little more than Ethernet interfaces connected with a simple forwarding fabric.  Third, the applications of SDN in the metro will clearly be Ethernet-based.  And finally, Ethernet and optics will create the evolved core network model.  I’m not saying that IP is dead; rather, it will be the “service protocol” but not so much the transport protocol.  You can see these trends already.
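That second point is easier to see with a sketch.  Here’s a hedged TypeScript illustration of what’s left in the device once forwarding decisions are slaved to a central process; the rule format and names are invented for clarity, in the spirit of OpenFlow rather than the actual protocol.

```typescript
// Centralized forwarding control, sketched: the controller holds the smarts
// and pushes match/output rules; the device just matches and forwards.
// This mimics the OpenFlow idea but is not the OpenFlow protocol.
interface FlowRule { matchDstMac: string; outPort: number; }

class DumbSwitch {
  private table: FlowRule[] = [];
  install(rule: FlowRule) { this.table.push(rule); }
  forward(dstMac: string): number | "punt-to-controller" {
    const rule = this.table.find(r => r.matchDstMac === dstMac);
    return rule ? rule.outPort : "punt-to-controller";
  }
}

// The "central process": all topology and path logic lives here.
function pushForwarding(sw: DumbSwitch, macToPort: Record<string, number>) {
  for (const [mac, port] of Object.entries(macToPort)) {
    sw.install({ matchDstMac: mac, outPort: port });
  }
}

const edge = new DumbSwitch();
pushForwarding(edge, { "00:aa:bb:cc:dd:01": 3, "00:aa:bb:cc:dd:02": 7 });
console.log(edge.forward("00:aa:bb:cc:dd:01")); // 3
console.log(edge.forward("00:aa:bb:cc:dd:99")); // "punt-to-controller"
```

Strip the device down to that and what remains is, in effect, Ethernet ports in front of a forwarding fabric, which is why SDN looks like an “Ethernet” technology.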

The overall economic questions may create additional pressure here.  There’s no doubt that both network operators and enterprises are parceling out their capital dollars with great care, due in no small part to the fact that the US fiscal cliff and the Euromess remain open questions.  For tech vendors, the fact is that the best thing that could happen to them would be a sudden attack of bipartisanship in Congress.  If US risk factors were off the table, I think we’d see a major surge in the US economy that would likely restore the US share of the tech market for both enterprises and network operators.  That would have the effect of pushing sales and opportunity ahead of any possible massive changes in network technology, and it would therefore delay the adoption of SDN/NFV principles.  On the other hand, a protracted problem with US and European capital budgets (likely, sad to say) would create enormous pressure to develop cheaper network infrastructure, which could advance both NFV and SDN quickly.  Once they advance, they’ll never recede.

You can see that vendors are under pressure for sure.  Cisco is pushing the notion of WiFi as the real mobile future because it has no 4G position to rival Alcatel-Lucent, Ericsson, and NSN, and nobody without mobile has much of a shot these days.  Alcatel-Lucent and Ericsson are both cutting staff to cut costs.  Juniper has dumped one of its big suppliers, Plexus, and the story is that they’re also trying to cut costs.  All of this reflects a market that has lost pricing power, and that’s a market that simply has to consolidate.  But SDN/NFV may show the way for any vendor brave enough to step up RIGHT NOW.  You can follow the other lemmings over the cliff, dear vendors, or you can hunker down a safe distance back from the edge and build a veritable market castle in a field that will be free from competitors.  To me, the choice seems obvious, but any corporation is filled with people expert at telling management what they want to hear.

 

Brocade: Big Break, or Big Chance?

In yet another sign that there’s a LOT of SDN maneuvering going on, Brocade has announced the acquisition of open-source routing/switching software platform provider Vyatta.  Sometimes seen by the media as a Cisco rival, Vyatta has in fact not been a serious threat to Cisco or any other equipment vendor—until lately.  With the advent of the notion of SDN, and the additional driver of operator interest in offloading network functionality from devices into servers (Network Functions Virtualization, or NFV), there are suddenly a lot of good reasons to like what Vyatta could provide, and maybe to like Brocade for buying them.

The big question now is whether Brocade has any of those good reasons for having done the deal.  Brocade does have a cloud and SDN goal, like pretty much everyone in networking these days, and it also has the Foundry product lines, which have been moderately successful and have managed to gain some traction in data center networking.  It’s not at all unreasonable to assume that Brocade sees Vyatta as its Nicira, a virtual networking play to shore up its cloud and data center strategy.  And it might well do that, too.  Vyatta has a broad set of features, going beyond simple virtual networking and extending well into the capabilities that were recently added (in the Folsom release) to Quantum.  In fact, Vyatta makes a darn good framework for a complete Quantum play.  It would be a help, but not a revolution, and not enough to change Brocade’s fortunes in my view.

The NFV part is Brocade’s chance for revolution.  Recall that the goal of NFV is to host on servers what would otherwise have been embedded in custom appliances.  Address assignment, firewalls and security, packet inspection—you name it.  That’s almost exactly what Vyatta does.  And SDN requires path computation, topology, addressing, and all that good stuff to be added to OpenFlow.  That’s what Vyatta does too.  So if Brocade adds in its own OpenFlow (which it has already committed to) and frames the Vyatta assets in the higher SDN layers, then it would have not only what could well be the industry’s first full-on, desktop-to-data-center SDN, but also an operative version of the NFV vision.  It could force bigger vendors like Alcatel-Lucent, Cisco, and Juniper to choose between actively developing an architecture that they’d see as undermining their incumbent products or letting Brocade run off with the customers.  Either one could help Brocade a lot.
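For a sense of what “hosting on servers what used to live in appliances” means in practice, here’s a purely illustrative TypeScript sketch of one such function, a firewall policy; the rule format and addresses are invented, and this is not Vyatta’s code or any standard’s definition.

```typescript
// One "virtualized network function", sketched: a firewall policy that would
// traditionally be burned into an appliance, expressed as ordinary
// server-hosted code.  Rules, fields, and addresses are illustrative only.
interface Packet { srcIp: string; dstPort: number; }
interface FwRule { dstPort?: number; srcPrefix?: string; action: "allow" | "deny"; }

function firewall(rules: FwRule[], pkt: Packet): "allow" | "deny" {
  for (const r of rules) {
    const portMatch = r.dstPort === undefined || r.dstPort === pkt.dstPort;
    const srcMatch = r.srcPrefix === undefined || pkt.srcIp.startsWith(r.srcPrefix);
    if (portMatch && srcMatch) return r.action;  // first matching rule wins
  }
  return "deny";  // nothing matched: refuse by default
}

// Usage: allow SSH only from the management prefix, allow everything else.
const policy: FwRule[] = [
  { dstPort: 22, srcPrefix: "10.0.", action: "allow" },
  { dstPort: 22, action: "deny" },
  { action: "allow" },
];
console.log(firewall(policy, { srcIp: "10.0.1.5", dstPort: 22 }));    // "allow"
console.log(firewall(policy, { srcIp: "203.0.113.9", dstPort: 22 })); // "deny"
```

Hosted this way, the same function can be scaled, moved, or chained with others like any other cloud workload, which is exactly the point of NFV.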

But this is a BIG stretch for Brocade in terms of bold initiatives.  The company has been challenged in positioning its Foundry assets even in their conventional terms, and while it’s told an SDN story, the story to date hasn’t been anything more than others have articulated.  Now they’re confronting not only full-on SDN, but hosted network functionality that makes the cloud INTO the network and not just a customer for it.  Can they rise to that grand a vision, or will it scare everyone in management into immobility?

For Brocade, that’s the question; for the industry it may not matter.  Look at Vyatta’s Network OS and you see what almost seems a point-by-point list of the functionality that the NFV operators want migrated from appliances into hosts, which is exactly where Vyatta could put it.  Who could miss that connection?  Which means that there’s a good chance that even if Brocade doesn’t take the Big Step and perhaps remake themselves totally (and for the better) they’ll drive someone else to do that.  Not any of the Big Three in routing that I’ve named, but perhaps either another major player (Ericsson or NSN) or a smaller one (Extreme, Aruba, Riverbed, even Palo Alto).  There’s a ton of open-source network code out there, derived in some cases from the early UNIX stuff.  There’s also the IP of some failed equipment vendors.  Anyone who likes cloud and likes SDN and likes money has got to see this as an opportunity.

This model fits into the vision of edge equipment change I blogged about yesterday, of course.  You can visualize all these network features as being cloud-hosted and pushed out by my “omnipus” to basic edge devices that use OpenFlow-like forwarding stubs to link users to the features, just as they could link them to services or applications.  That, of course, may be the biggest barrier to vendors taking the bold path with something like Vyatta.  How many, even the Brocades, would be willing to toss out the model of past network devices, even as an option?  But for players like Brocade, it’s revolutionize or perish.  If the market for network equipment is commoditizing, as it surely is, the smallest players get eaten first and under the least favorable terms.

The Vyatta deal could be really big.  It could be the move that ignites SDN and that unites SDN and NFV.  Or it could be another missed opportunity.   We’ll have to watch Brocade to see how they play this, and from that we’ll know their company future, and know more about the future of our industry.

 

“Edge SDN” and Its Opportunity

I’ve blogged a lot over the last couple weeks on the transformation of service provider infrastructure that’s being driven by the economic imperative of monetization and the technical mechanisms of the cloud, SDN, and NFV.  I’ve also noted that this transformation will surely impact network equipment, and that one such impact will be commoditization of transport/connection functionality.  The obvious question is whether there’s anything left for the network guys.  There is.

The best way to think of both cloud applications and cloud-based services is as an “omnipus”, an any-legged octopus-like organism that lives in the cloud data centers but extends its arms, in the form of services, outward to the network edge.  Each arm represents a service relationship, hosted on a virtual network in the cloud.  Since the users of the future (like those of the present) are looking for applications and services, they will transform their vision of “being online” into “being connected to the omnipus”, to all the arms that represent what they do and what they want.

This creates a kind of bicameral vision of service delivery; everything is “delivered” in a connectivity sense to the major points of presence where access (mobile or wireline) is concentrated.  The user then simply taps into the necessary omnipus legs.  Connectivity is commoditized partly by the natural forces of declining revenue per bit and partly because the connectivity needed at the service level is directed at linking fairly static resource populations (cloud data centers) with fairly static metroPOPs.  This isn’t a non-connective network but it’s a network without much need for a lot of adaptation and dynamism.  This is the trend that makes SDN and cloud-directed networking feasible; the network of the future covers most of its route miles in a very structured and easily provisioned way.

Every omnipus arm has a presence in the edge, from the network side.  Think of it as a doorway with a turnstile on it.  The goal of the new-age edge is to manipulate the turnstiles.  That means providing users access to services and applications simply by cross-connecting their traffic to one or more omnipus arms.  This looks very much like a hybrid of a BRAS, a firewall, and DPI, because sophisticated traffic inspection determines which omnipus arm something gets connected with, if it’s connected to anything at all.
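Here’s a hedged TypeScript sketch of that turnstile logic (the classification keys, arm names, and entitlement check are all invented for illustration): entitlement decides whether you get in at all, and inspection decides which arm you’re cross-connected to.

```typescript
// The "turnstile", sketched: classify an incoming flow and cross-connect it
// to an omnipus arm (a service-specific virtual network), or refuse it.
// Field names, arm names, and rules are placeholders, not a real BRAS,
// firewall, or DPI product.
interface Flow { userId: string; dstPort: number; sni?: string; }
type Arm = "video-arm" | "collab-arm" | "default-internet" | null;

function turnstile(flow: Flow, entitled: Set<string>): Arm {
  if (!entitled.has(flow.userId)) return null;                   // firewall-like: no entry
  if (flow.sni?.endsWith("video.example")) return "video-arm";   // DPI-like inspection
  if (flow.dstPort === 5060) return "collab-arm";                // BRAS-like policy by service
  return "default-internet";
}

// Usage: one entitled user, one not.
const subscribers = new Set(["alice"]);
console.log(turnstile({ userId: "alice", dstPort: 443, sni: "cdn.video.example" }, subscribers)); // "video-arm"
console.log(turnstile({ userId: "mallory", dstPort: 80 }, subscribers));                          // null
```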

My view is that the shift to an omnipus service model is inevitable, and that things like SDN and NFV are symptoms of technical change being driven by an increased understanding that the future network WON’T be like the present one.  This is why I’m so hard on vendors for not looking into both concepts now; they need to support not the early groping of buyers toward the transition, but the future that buyers are actually seeking.  To do that they need to be thinking ahead, and that starts with thinking beyond the here and now, not just pushing boxes based on a paradigm that is clearly going away.

Alcatel-Lucent should see this paradigm shift as their real opportunity.  They have plenty of valuable assets to leverage this new vision, but they’ve had those assets for years now and have somehow been unable to leverage them fully.  This, more than too many underperforming product sectors, has been their problem.  With a strong monetization position that’s linked to both cloud and SDN, Alcatel-Lucent could be singing the song the buyers want to hear.  That harmony has just not worked out for them, and if they could fix that problem decisively no other changes would be required.

SDN is one of the things Alcatel-Lucent needs to embrace, and the fact that embracing SDN isn’t rocket science is demonstrated by Cyan’s announcement of its Blue Planet platform, software that rides on top of the management tools of major vendors like Alcatel-Lucent, Cisco, and Juniper as well as about a dozen smaller vendors (not all devices are supported).  The platform supports a series of Blue Planet applications, those from Cyan in direct mode and those from third parties through a northbound API.  Details on functionality and interfaces weren’t provided in their release, and there was no specific mention of cloud-compatible interfaces.  There was also no specific mention of OpenFlow.

The point is that management-system orchestration of service behavior isn’t new; anyone who can provision multi-vendor networks could assert SDN support, in no small part because we don’t have a strict definition of what that is.  I’m of the view that Cyan is heading for what I would classify as an SDN strategy but that the state of the product wouldn’t meet my criteria yet.  But even this would be a step forward for Alcatel-Lucent or even for Ericsson, Juniper, and NSN.  And there are other companies who do as much as Cyan but who also do OpenFlow (M2MI is the example that comes to mind).  I think we can expect to see a number of companies announcing SDN or network virtualization shortly, and while most of them will really be announcing a movement of an existing management orchestration platform toward SDN support, it’s momentum and it’s a way into the press.  Those who wait a month or so are going to be in the “me-too” stage.  You can also expect that we’ll see some hardware evolution, first in the data center and then in the edge network, but the latter probably won’t happen this year.

In economic news, the US elections and the risk of a deadlock pushing the US over the fiscal cliff are suppressing markets, or at least they have up to now.  Friday wasn’t a good day but the volume wasn’t decisive, and today futures are showing a modest upswing.  In Europe, Greece faces a number of critical parliamentary votes that will determine whether it can draw on aid, and answer the “in-or-out” question relative to EU membership.  It’s not likely that Greece will end up leaving if it passes the necessary measures, as some in Europe have already commented.

Reshaping for the Future?

Alcatel-Lucent reported its quarter, and the results were disappointing to say the least.  The company suffered from margin pressure, caused no doubt by the competition from arch-rival Huawei.  The devil here is in the details, which in some ways are much better for Alcatel-Lucent, and in some ways worse.

When you’re a very broad-based supplier in an industry in transition, you tend to lose in one place while gaining in another.  If you look at Alcatel-Lucent’s router numbers, you see healthy performance and margins comparable to competitors.  It’s just that one product sector can’t overcome the fact that the gains in that sector are the result of secular shifts that create losses elsewhere.  So the good news for Alcatel-Lucent is that in the IP area, they’re still in the game.

The bad news is that they’re not capitalizing on it very well.  The key division for Alcatel-Lucent today is wireless, where again the problem was the Peter-versus-Paul dilemma.  Being big in 2G and 3G means that unless they’re REALLY big in 4G they face a net loss.  And today wireless is the only place that matters in the network.

So what should they have done, or what should they do?  More good news here, potentially.  They need to embrace the IP model of networking full-on, and the first step in that is now and has always been the “IP-fying of IMS”.  Nobody has taken a better first step toward that than Alcatel-Lucent, but the step they’ve taken still (obviously) isn’t enough.  They need to be totally aggressive here, embracing the IP-only model of the future and working to be the leader in all the critical concepts.  Their biggest issue is SDN, which could be an enormous asset to them if it’s supported correctly—unifying their service, wireless, and IP strategies.  It could be an enormous liability if they don’t get it done, and that’s where they are now.  The fate of Alcatel-Lucent may literally hang on its SDN activity, not because SDN is such a great financial foundation but because it’s the link between cloud and network, and if you’re a vendor you NEED that link under your control.

Ordinarily, network monitoring isn’t exactly the foundation of our practice.  In fact, I’ve been told by the media that I’m one of the few analysts who follow the space at all.  Ordinarily, it generates nothing much in the way of news.  This time it may have outdone the ordinary, because even monitoring has to contend with the radical changes in our industry.

NetScout is one of the few “real” monitoring firms left, a company who offers actual probes to read traffic and send it off for analysis.  Over the years, NetScout has been evolving beyond the geeky peek-at-bits image, or trying to.  They’ve made a number of acquisitions, the most recent of which is the OnPATH stuff that’s opening the higher-speed interface market to NetScout.  It’s a hard slog, but they may be getting some help from the outside.

Network operators are really not interested in having a DPI strategy for every season, but that’s what the market would like.  Their challenge is to separate the mechanics of packet inspection from the application of what you find there.  More and more, operators see the latter as an application for the cloud.  Verizon’s Stu Elby gave a speech last week about new models for service infrastructure, and centralizing DPI management was one of them.  It’s also a model of interest for Network Functions Virtualization (the two are likely related, since Verizon is a founding member of that group).

In some ways, monitoring seems a step toward the happy goal of cloud DPI, because most monitoring is already centralized, and over time it’s been focused on how to avoid creating more traffic from the probes than the network is carrying for the user.  That’s Elby’s beef, and the beef of many operators.  But not all DPI is about looking and analyzing; some of it needs to influence forwarding behavior or other network policies, and it’s not yet clear how easily the folks at NetScout can tie into that.  I asked them if they were aware of NFV, for example, but I didn’t receive a response.  That’s not to say they’re not working feverishly to support it; they may simply not want a public comment at this point.  If they are, they could be one of the few companies facing the cloud-network future.  If they aren’t, they could be another potential victim of it.  Like Alcatel-Lucent.
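The “don’t create more traffic than you’re measuring” point is easy to sketch.  Here’s an illustrative TypeScript fragment of the centralized-DPI pattern operators seem to want: the probe boils observed flows down to compact records and only those cross the network to a cloud-hosted analyzer.  The field names and aggregation rule are my own invention, not NetScout’s or anyone else’s design.

```typescript
// Summarize at the probe, analyze in the cloud: reduce each observed flow
// to a small record and ship aggregates upstream instead of mirroring raw
// packets.  All field names and the aggregation key are illustrative.
interface FlowSample { app: string; bytes: number; }
interface FlowSummary { app: string; flows: number; bytes: number; }

function summarize(samples: FlowSample[]): FlowSummary[] {
  const byApp = new Map<string, FlowSummary>();
  for (const s of samples) {
    const entry = byApp.get(s.app) ?? { app: s.app, flows: 0, bytes: 0 };
    entry.flows++;
    entry.bytes += s.bytes;
    byApp.set(s.app, entry);
  }
  return Array.from(byApp.values());  // only this compact list leaves the probe
}

// Usage: thousands of samples in, a handful of summary records out.
console.log(summarize([
  { app: "video", bytes: 1_400_000 },
  { app: "video", bytes: 900_000 },
  { app: "voip", bytes: 12_000 },
]));
```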

 

Supply and Demand Threaten Models and Markets

Supply and demand shape our industry, like they shape pretty much everything in every market, and we see some signs of them both operating today.  We also see signs that they often create long-term change after short-term disorder.

On the demand side, corporate raider Carl Icahn has taken a 10% share in Netflix, the embattled OTT video player whose promise to change the game in video hasn’t quite worked out.  There are a lot of people who will see this as an indication that the Netflix model is going to be rejuvenated, but Icahn’s investments are more likely to end up dismembered than to emerge as industry giants.  The real question is where he thinks a buyer will come from, because a buyer is what he wants.

The obvious answer is the Big Three: Amazon, Apple, and Google.  Handsets and wireless appliances started off as a kind of cellular-service camel’s nose, a way of attracting customers and reducing churn.  Now the service is the tail of the appliance dog, and this is changing only in that handsets or tablets are at risk of becoming table stakes in a bigger war.  If you’ve read this blog regularly, you know my view that all three of the Big Three are likely to become MVNOs over time.  To build an MVNO value ecosystem they need something special, and it’s getting very hard to differentiate on appliances alone.  What better way than to have a captive content play, something you can offer at a discount to your own customers?

On the supply side, we have news that Big Switch has raised another round in a victory for SDN—sort of.  The real news here may be that Big Switch absolutely HAS TO BE looking beyond its original OpenFlow model.  No, I don’t mean just a “commercial” version of the OpenFlow controller.  Frankly, that isn’t enough to create value in the real world (it might still be enough for the backers to flip the company; how much rationality is there in the world of VCs these days?).  You have to think of SDN as a cupcake to understand why.

Cupcakes are soft on top and hard on the bottom, and so is any realistic SDN model.  If you want to be low down in the stack, at the OpenFlow level, you had better be a hardware player because anyone with a dozen programmers can field an overlay virtual network strategy these days—something Nicira-like.  I doubt that Big Switch has been out there building fabrics on the sly, so I doubt that is where they’d go to find some value.  For that role, keep an eye on Plexxi, a semi-stealth SDN player who has talked about their product enough to tell us it’s a switching concept.

So is Big Switch moving up, SDN-stack-wise?  It has to, and the easiest nut for it to aim the cracker at would be the marriage of OpenFlow and cloud software, particularly Quantum and the DevOps stuff.  You can’t do software-defined networking without some way of getting connected to the software, and at the least Big Switch has to defend this higher-level position against the risk that a player with a hardware solution will grab the icing and leave the dull part of the cupcake as the only game in town.  No crispy edge, no sweet soft top, just…well…filler.  But how many times have I heard startups talk about “laser focus”?  In VC lingo that means “I’m going to give you starvation funding and so to accomplish anything with it you need to pick a micro-mission and run with it.”  Big Switch has been laser focused on a little mission—OpenFlow control.  Will the new funding give them the ability to expand, or just to primp themselves for sale?

These two news items encapsulate the complicated state of our networking market.  We have profound business changes on tap because the guy with the retail brand is the guy who owns the retail side of the food chain.  There’s little chance that carriers can be that guy, which means that wireless will inevitably shift to an MVNO model.  That focuses operators first on managing infrastructure cost relentlessly (bad for network vendors) and second on building service-layer features on top (also bad for them, given that the vendors have refused for five years to cooperate with that effort).  The notion that the network is a slave to the features, which is what emerges from this, is the breeding ground for SDN.  So, hopefully at least, our supply and demand trends may actually be converging.

That creates problems and opportunities at the vendor level.  An MVNO craze would clearly shift everyone’s priorities, and quickly.  That might favor the least formalistic of all the SDN players—Cisco.  The Cisco SDN strategy has been a kind of double envelopment—support the cloud interface to own the icing and then provide a linkage to the crispy core that doesn’t really involve open protocols and standards (you blow those a kiss on your way past).  The advantage of that is that it offers a holistic vision of SDN even if it doesn’t meet many of the formal definitions.  The obvious solution, for Cisco competitors, would be to match Cisco in scope and embrace the standards processes.

The wild card in that is the Network Functions Virtualization stuff that I think marries well to the MVNO world…for carriers, handset/tablet players, and OTTs.  A linkage between NFV and SDN (which the NFV people seem to be taking pains to say are only complementary) would make things harder for Cisco.  Ericsson, with its strong SDN assets and its OpenSAF initiative, and Juniper (if it could retune its services vision for the Universal Edge and offer an SDN story instead of an SDN placeholder) could be in the best position to do that.  We’ll likely know by early 2013 at the latest.

As a final point, I want to thank those who have read this blog in the years since it launched.  Because of your continued interest, and frankly your willingness to tell others, our October activity levels beat our record of 40 thousand hits per month!  I’m gratified by your response, and I hope you’ll continue to let me know what you think!