More Interesting Stuff in Overlay SDN

The SDN game never ends, possibly because nobody wants to diss a good market hype wave while it still has momentum, and possibly because there’s still room to do something useful given the rather vague utility of some current strategies.  In any case, PLUMgrid has joined the fray with an offering they call “Virtual Network Infrastructure”, a model that among other things drives home some changes in the SDN market model.

At a high level, we’ve always had “overlay SDN” and “infrastructure SDN”, meaning a division of SDN models between a connectivity-management mission (overlay) and a traffic management mission (infrastructure).  A third model, sort of, can be created using virtual switches (OVS), and you can combine virtual switching and tunnel-overlay to create a more flexible and agile data center.  All these models have to be mapped in some way to the cloud, so I think the most useful way of looking at PLUMgrid is to look at it from an OpenStack Quantum (now called “Neutron”) perspective.

Neutron builds networks by converting “models” into connectivity and functionality.  For example, the classic model of a Neutron network that hosts application elements is a VLAN combined with functions like DHCP and a default gateway.  What PLUMgrid has done is to translate Neutron fairly directly into connectivity and functionality, using a combination of network tunnels and hosted virtual functions.  Virtual switching and routing are provided where the model has explicit bridging/routing functions; otherwise there is only connectivity plus higher-layer functions.  In some ways, the connectivity model is similar to Nicira’s, and OpenFlow is not required because forwarding is implicit in the implementation of the model; connections are tunnel-supported rather than switching-derived.  PLUMgrid also provides a rich developer kit, along with management interfaces for most popular cloud platforms.
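
To make that concrete, here’s a minimal sketch (my own illustration, not PLUMgrid code) of the kind of Neutron model a backend like this has to realize, using the python-neutronclient library; the endpoint and credentials are placeholders:

```python
# A sketch of the abstract model a Neutron backend plugin must realize: a
# network, a DHCP-enabled subnet, and a router supplying the default gateway.
# Endpoint and credentials are placeholders.
from neutronclient.v2_0 import client

neutron = client.Client(username='demo', password='secret',
                        tenant_name='demo',
                        auth_url='http://controller:5000/v2.0')

# The abstract model: a tenant network...
net = neutron.create_network({'network': {'name': 'app-tier'}})['network']

# ...a subnet on it, with DHCP enabled (a hosted function in an overlay model)...
subnet = neutron.create_subnet({'subnet': {
    'network_id': net['id'],
    'ip_version': 4,
    'cidr': '10.0.10.0/24',
    'enable_dhcp': True}})['subnet']

# ...and a router that acts as the default gateway.
router = neutron.create_router({'router': {'name': 'app-gw'}})['router']
neutron.add_interface_router(router['id'], {'subnet_id': subnet['id']})

# How these abstractions become packets -- VLANs, tunnels, or hosted virtual
# functions -- is entirely up to the plugin behind the API.
```

The API says nothing about the realization, which is exactly the opening PLUMgrid exploits.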

So how does this relate to other SDN models?  Arguably, PLUMgrid is a more mature and perhaps better-thought-out version of Nicira.  It’s distinctly cloud-data-center in its targeting, it’s (at least so far) an in-center model rather than an end-to-end model, and it’s a true overlay SDN rather than a virtual-switch hybrid.  It doesn’t attempt to align the overlay SDN vision with physical infrastructure so much as simply use whatever infrastructure is available.  That means it can run over anything, even a combination of things like Ethernet and IP, which makes it easy to build hybrid networks that extend from the data center into a public cloud.  Where some virtual SDN models are more “physical” (Alcatel-Lucent/Nuage, Juniper/Contrail, Brocade/Vyatta), PLUMgrid is solidly virtual.

What PLUMgrid makes very clear, I think, is that there is a lot of potential value to visualizing SDN as a two-layer process.  At the top there’s virtual/overlay networking that has to be very agile and flexible to conform to software needs.  Below that, there’s physical or infrastructure SDN, where software control is likely exercised more at the policy level than by managing specific connections.  Separating these functions is good for startups because it keeps them out of the hardware business, and it lets them focus on the cloud.

The two specific questions PLUMgrid raises are “Can you really make the cloud network a ship in the night relative to other traffic?” and “Can you build a useful network that doesn’t really extend end to end?”  I think the answer to both is “Under some conditions!” but I also think the questions are related.  If you contain the mission of overlay SDN to the data center, then the cost of presuming ample connectivity and capacity is limited and might even be offset by the management simplicity of a fabric model.  Thus, overlay-data-center SDN can be traffic-insensitive.  As soon as you start transiting the WAN, though, you have to consider SLAs and QoS and all that stuff.

This is a good step for SDN because it makes clear that we really have a number of SDN models, each with its own optimum missions.  We may also have an unknown number of situations where any given model will do as well as any other, and situations where some models will do better, so buyers are going to need to align model and mission very carefully.  Nobody in the SDN vendor space is likely to work very hard to make that easy, but some of this mission-utility stuff may emerge from competition among vendors, particularly between overlay and infrastructure SDN providers.  Infrastructure SDN can easily be made end-to-end and is typically justified in large part by traffic management capability, so it stands to reason that the relationship between overlay SDN and traffic will emerge out of competition with the infrastructure side of the family.  In any case, I think a debate here would be good for SDN, and it might even create a future where we’re less concerned about “OpenFlow” and centralism on the infrastructure side.  That would address the problem the SDN industry has wanted to dodge from the first: those northern-living application elements that create the service model on top of OpenFlow.

Is Cisco the New King of the Supercloud?

Cisco has used its Cisco Live event to buttress its Open Network Environment (ONE) and the onePK API set that’s designed to address the SDN and NFV space.  With this move, Cisco is showing us why they’re able to grow sales when others are under pressure, and also why the notion of a network revolution isn’t as easy to promote as one might think.

From the first, ONE has been what many would call a cynical play, blowing kisses at things like OpenFlow and NFV to gain their benefits while downplaying or even obstructing the radical change both technology initiatives could represent.  That may well be true, but in the marketplace you don’t expect sellers to lie down and die so the next generation can trample easily over the remains.  What Cisco did that was very smart was to grasp the essential notion of SDN (and, to a lesser degree, NFV) and capture the value of that notion without promoting technology changes that would have hurt Cisco.  Software control of networking is a top-down requirement: you have to start with the software’s mechanism for exercising control and work down from there, which is what onePK did.

Cisco has now announced it’s rolling ONE out across more products, but what’s really interesting is that they’re doing this under the label of “Enterprise Network Architecture”, the broad strategy Cisco is targeting at (obviously) enterprises.  Enterprise interest in SDN and NFV is significantly different from network operator interest; SDN literacy among enterprises is about half that of operators, and NFV literacy is down in the statistical noise.  However, there is interest among enterprise buyers in SDN and NFV as a means of bringing networks to heel, of supplying application services the way applications want them.  That’s why the onePK concept is valuable.

I think that what Cisco is doing here is building from the API boundary point in both the upward (cloud) and downward (network) directions.  If you look at its acquisition of Composite Software, a data virtualization player, you can see the beginning of what might be “cloud platform services” emerging from Cisco.  They don’t want or need to push their own version of cloud stack software—IaaS is just a platform to build valuable stuff on.  The valuable stuff is likely to come in the form of cloud platform services that can enhance applications by leveraging what makes the cloud different.  It’s no surprise that Cisco is linking enterprise mobility to this, because it’s mobility-based applications and what I’ve been calling “point-of-activity empowerment” that provide the benefit kicker that will justify any enterprise excursions into SDN, NFV, or even ultimately a large-scale commitment to the cloud.

Data virtualization, network virtualization, resource virtualization.  It’s not hard to realize that we’re moving to a virtualization-based platform, not in the old sense of VMs but in the new sense of “fully abstracted”.  We’re building a cloud operating system, just like we need to be, and Cisco may well be the player who has the clearest idea of what that might look like and how to get there without sacrificing early sales.  Not only that, by grabbing the API side of this, Cisco is going after a space where newcomers in the SDN space have been weak.  “Northbound APIs” have been that vague pie-in-the-sky thing that you pushed everything you didn’t want to implement into.  If Cisco makes APIs concrete, they expose the fact that others are really pretty simplistic in how they view them.

Cisco’s weakness in this is NFV.  They’ve lumped it into the same package as SDN, which suggests that at least for the moment Cisco is looking at NFV as a kind of SDN stepchild, which it most definitely isn’t—NFV is the value-proposition senior partner because NFV as an architecture defines the deployment and operationalization of agile, componentized, cloud-specific services and applications.  The question is whether Cisco’s NFV myopia is due to the fact that they can’t easily fit it into their current SDN/ONE approach and are waiting to see if something gels, or because Cisco actually has a grand NFV design and they don’t want to let it out yet.

I’m inclined to favor the latter conclusion here.  I think it’s obvious to many at Cisco that NFV is really creating the framework for the future of both the cloud and the network, but it’s doing so at a layer above the network itself.  That favors Cisco’s API strategy, because NFV components could reasonably exercise APIs just as any other cloud/software element could.  It’s not logical to assume that something that fits so easily with Cisco’s strategy at the API level would be held back just because those APIs aren’t the same ones SDN would use.  Again, the data virtualization buy suggests that Cisco is looking at virtualization in the fullest sense.

Alcatel-Lucent and NSN are increasing their SDN, NFV, and cloud articulation but it’s too early to say whether they have anything substantive to say.  Cisco has a lot of real technology pieces here, and so it’s going to be very hard for competitors to catch up if they let Cisco take a convincing lead in articulation too.  I think that anyone who doesn’t have a good SDN/NFV/Cloud story by mid-November is going to be in for a cold winter.  So…look for more M&A.

The Age of Do-It-Yourself Networking?

Who’d have thought that Google and Facebook and Netflix might be the face of competition for network vendors?  Well, it’s happening, and while the risk isn’t acute at this point for reasons I’ll get into, the actions of these three OTT giants are a symptom of the issues that face networking and network equipment.

Google and Facebook are getting more into the network switch business.  Google’s use of OpenFlow and custom hardware to enhance its network core is pretty well-known, and they’re probably a poster child for SDN in an IP core.  The technology Google has used, at the software level at least, is pretty generally available and so in theory anyone could run out and adopt it.  Their custom hardware isn’t (obviously) off the shelf, but it’s a harbinger of what’s coming down the line in generic switch technology.  Add some open software to cheap hardware and you get a network.
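
Google’s actual controller software isn’t public, but the openly available side of this picture is easy to illustrate.  Here’s a minimal sketch using the open-source Ryu OpenFlow controller (my choice of framework, purely for illustration) that installs a table-miss rule on any OpenFlow 1.3 switch that connects, which is the seed of the “open software plus cheap hardware” model:

```python
# Minimal Ryu app: on switch connect, install a lowest-priority rule that
# punts unmatched packets to the controller. The forwarding policy lives
# entirely in software the operator controls.
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3


class TableMissController(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def on_switch_connect(self, ev):
        dp = ev.msg.datapath
        parser = dp.ofproto_parser
        ofp = dp.ofproto

        # Match everything at priority 0; send misses to the controller.
        match = parser.OFPMatch()
        actions = [parser.OFPActionOutput(ofp.OFPP_CONTROLLER,
                                          ofp.OFPCML_NO_BUFFER)]
        inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS, actions)]
        dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=0,
                                      match=match, instructions=inst))
```

Run it with ryu-manager against any OpenFlow 1.3 switch, hardware or virtual, and the forwarding behavior is something you can change in software rather than buy.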

Facebook’s plan is to build custom data center switching, also based on open-source technology.  The concept is an offshoot of Facebook’s Open Compute Project, and its goal is to create a fabric switch that would flatten large data centers.  Behind Facebook’s efforts is a shift in traffic patterns; social networks are an example of modern applications that generate more inter-process communication than user-to-process communication.  If everything is moving in that direction (which I think is likely), could Facebook be building the model of future data center networks?

Netflix’s idea is to build its own cache points, a mechanism to make it cheaper to deliver quality video to its customers.  Content delivery networks are typically built from fairly expensive purpose-built gear, or by tweaking general-purpose hardware, and both approaches are too expensive from Netflix’s perspective.  So they’re rolling their own, creating content caching in the midst of a market that has many CDN providers and many CDN products.

So OTTs, like network operators, are looking to do networking on the cheap.  Behind all of this is a simple truth: networking is under crushing cost pressure, and nothing is likely to change that picture any time soon; in fact, it may never change.  Improved chip design, a more software-centric product framework, and better tools are allowing at least some network users to become their own vendors and squeeze out a bit more margin.  You might wonder whether do-it-yourself networking is going to take over the world and sweep the tech giants out of the game.

Not likely, particularly in the near term.  The fact is that even though concepts of open design for servers have been around for several years, we’re not seeing open servers being cobbled together by your average (or even above-average) enterprise.  Being a vendor isn’t all that easy, and the giant OTTs have an opportunity to play in that game largely because they are giants.  For the masses, do-it-yourself isn’t an option.  Even the giants aren’t necessarily going to have an easy time of it.

I’ve watched three standards groups struggle to define a new network model that’s more software-driven, and thus less expensive.  None of them have found it an easy job, and arguably the first two have already failed at it.  Here’s a mass activity, well supported, and yet unable to take the steps needed to break free of what its members believe is a restrictive model (and market).  So what’s the chance that individual users, or even individual carriers, could carry off a self-help network transformation?  Then there’s the question of “who supports it?”  Look at Linux.  We have what is absolutely the key operating system of our time, and it’s open source—free.  Yet when we ask businesses how they want to use Linux, over four out of five say they want it from a source that will offer support.  Red Hat is “Linux” to more companies than Linus Torvalds is; in fact, nearly all Linux users know who Red Hat is and fewer than half know who created it.  And when you ask businesses whether they’d be comfortable with white-box, open-source networks, guess what they say?  No, not yet.

OK, you say, we’ll get everything from inventive startups who will reshape the network landscape by creating something based on those white boxes and open tools, right?  Ask VCs first whether they really want to fund a networking startup, second whether they’d do one with a broad general product mission, and third whether they’d accept something that was based on off-the-shelf commodity stuff that anyone else could assemble as easily as their startup could.  You know where that one will go!

So just what’s going to come out of these Google, Facebook, and Netflix efforts?  Nothing, you might think after reading my points here.  But like the network operators’ NFV initiative, the OTTs’ do-it-yourself craze is a message to vendors.  Big buyers are not seeing feature value commensurate with price levels.  They either want a lot more useful stuff, or they want regular stuff a lot cheaper.  That alone will have a major impact on markets, particularly since arch-price-leader Huawei is determined to provide both features and value at the same time.

I think the do-it-yourself network trend is just another factor that will hit vendors with lower margins unless they can do something truly innovative.  The challenge for them in doing that is that we’ve spent about a decade and a half at this point winnowing innovation out of the industry in favor of aggressive cost control.  Clearly software, computing, and the architecture that binds them into a unified fabric of services is the right answer for networking.  We’re also at the point where tools are indeed facilitating self-help, maybe not for everyone but certainly for the big buyers.  There’s no question that change is on the way.  Whether it will come through major vendors accommodating commoditization, through Huawei driving everyone out, or through big players rolling their own networks and leaving “consumer networks” as the only mass market is too early to say.

What Oracle and Cisco Say About the Cloud and NFV

Oracle announced their numbers yesterday, and the company took a hit for having a soft quarter from a revenue perspective.  As has been the case for quite a few quarters, hardware sales were soft (though Oracle said they saw signs that the slip would reverse shortly) except in the appliance area.  Software had some issues too, and Oracle cited global economic factors as the cause.  Since one of the areas of softness was middleware, I’m not sure that’s the case and I don’t think Oracle really believes it either.

In the Q&A, Ellison alluded to a major cloud software announcement coming shortly, and from the comments he made it sounds like it may be something like a “soft hypervisor”, a container-based multi-tenant management architecture rather than a fixed hardware partitioning.  To improve security, Oracle will also provide its first true multi-tenant database in its 12c release.  So it may be that part of the middleware issue was Oracle’s prepping for a fairly radical shift in its cloud platform strategy.

Clouds are a balance between efficiency and security, because the more rigorous the partitioning among tenants, the more tightly resources are bound to specific VMs, which means the hardware isn’t efficiently used.  Solaris, Sun’s (IMHO best-in-the-market) OS, has always been able to support “containers” that provide more isolation than simple multi-tasking but less than hardware hypervisors.  It may be that Oracle is now going to enhance the container model and offer improved cloud efficiency while maintaining security at least at past levels.  With the new DBMS, perhaps even better than past levels.

Oracle’s SaaS cloud stuff (Fusion) is doing quite well, and one truism with SaaS is that since the application layer is more under your control, you have to worry less about tenant security.  The question is whether Oracle is tweaking cloud software to suit the mission of its own cloud, or reflecting a reality about the cloud—SaaS is where the money is.  Certainly Oracle has every reason to push SaaS and push containers versus hypervisors in the broad market—it hits players like VMware, whose ascendancy in virtualization gives it an edge in the cloud that even Cisco seems to fear.

Speaking of Cisco, they’re still pushing their “traffic sucks” vision, meaning that traffic growth just sucks dollars out of operator pockets and into network (read, Cisco) equipment regardless of ROI.  Their push is that the “Internet of things” will enhance business operations and so create more spending.  Our model says that while Cisco may be right about the fact that spending on networking is driven by business benefits, they’re wrong to say that those benefits arise from connecting stuff.  Applications enhance productivity, not information sources.

Cisco also made some comments on NFV, which confused a number of the financial analysts I talked with.  They suggested that NFV would actually open a bigger market for Cisco servers (true) and finessed the questions on whether that bigger market came at the expense of the market for higher-priced network devices.

I can’t take Cisco too much to task for this, because the truth is that there will be little impact of NFV in a negative sense in the near term, and there could in fact be significant positive impact.  Down the line as NFV concepts mature, there is likely a growing shift of budgets for operators from bit-pushing to service-creating.  Cisco could be a beneficiary of that, but only if it stakes out a rational NFV position.  It’s hard to say whether they have one at this stage; certainly they aren’t talking about it.  But then neither are its primary competitors Alcatel-Lucent and Juniper.

Oracle’s potential shift toward a soft virtualization model could have implications for NFV, as it happens, and even for Cisco NFV.  Like SaaS, NFV is a framework where tenant software is under pretty tight control and thus would likely not require as rigid a partitioning.  The NFV white paper started the body down the path of “virtualization”, which most have taken to mean “hypervisors” rather than containers.  Might Oracle, with its (at this point hypothetical) supercontainer architecture, jump into a premier position as a host for NFV?  The company, recall, made two acquisitions (Acme Packet and Tekelec) that could be aligned with NFV aspirations.  It has all the platform components, and while on one hand Oracle NFV might step on Cisco’s plans, it might also help Cisco unseat VMware.  Which raises the question of whether Cisco might want to deploy its own container-based cloud stack down the line.  There’s open-source Solaris-modeled software out there, and Joyent (a Dell cloud partner even before Dell got out of its public cloud business) has a container-based cloud stack and public cloud service.  Things could get interesting.

The point with both Cisco and Oracle here is that the drivers of change in our industry are very real, but very different from the simplistic vision we tend to get fed by vendor PR mills.  Something big is very likely to happen in the cloud and in NFV in the next year, but it probably won’t fit our naïve preconceptions of cloud evolution.

A Tale of Two Clouds

With IBM announcing a bunch of “C-suite” analytics tools designed for the cloud and GE getting into big-data analytics, it’s hard not to think that we’re deep in the “build-the-buzz” (meaning hype) phase of big data.  Well, did we expect the market to be rational?  After all, we’ve pretty much washed every erg of attention we could out of cloud and SDN already.  As always, the hype cycle is helping reporters and editors and hurting buyers.

It’s not that analytics isn’t a good cloud application.  In theory, it’s among the best, according to what users have told me in my surveys.  Two years ago, analytics was the only horizontal application that gained significant attention as a target for the cloud.  But today, analytics is back in the pack—having lost that early edge.  The challenge is the big-data dimension, and that challenge is also the major problem with cloud adoption overall.

If you look at cloud service pricing, even the recent price reductions for data storage and access don’t change the fact that data storage in the cloud is very expensive.  So given that, what’s all this cloud-big-data hype about?  Hype, for one thing, but leaving that aside there is a factual reality here that’s being swept (or, better yet, washed) under the rug.  We really have two different clouds in play in today’s market.  One is the resource cloud, the one we have long been touting; resource clouds are about hosting applications.  The second is the information cloud, which has nothing to do with resources and everything to do with processing architectures for managing distributed data.  I would submit that Hadoop, the archetypal “cloud” architecture, is really an information cloud architecture.  Further, I’d submit that because we don’t acknowledge the difference, we’re failing to support and encourage the developments that would really help big data and analytics advance.
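
A minimal Hadoop Streaming sketch (file names and the job invocation are illustrative, not from any product documentation) shows what the “information cloud” really is: the script is shipped to the nodes where the data already lives, and no VM-hosting decision enters into it.

```python
# wordcount.py -- a sketch of a Hadoop Streaming job. Hadoop ships this script
# to the nodes holding the data and pipes records through it; the processing
# moves to the data, not the other way around. Invoked as both mapper
# ("python wordcount.py map") and reducer ("python wordcount.py reduce").
import sys
from itertools import groupby


def mapper():
    # Emit (word, 1) for every word on stdin; Hadoop shuffles and sorts by key.
    for line in sys.stdin:
        for word in line.split():
            print('%s\t1' % word.lower())


def reducer():
    # Input arrives sorted by key; sum the counts for each key.
    pairs = (line.rstrip('\n').split('\t', 1) for line in sys.stdin)
    for word, group in groupby(pairs, key=lambda kv: kv[0]):
        print('%s\t%d' % (word, sum(int(count) for _, count in group)))


if __name__ == '__main__':
    mapper() if sys.argv[1:] == ['map'] else reducer()
```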

In many ways, what the market is looking for is the long-sought “semantic web”, the superInternet that somehow understands critical data relationships (the “semantics”) and can exploit them in a flexible way.  We’ve not managed to get very far in the semantic web even though it’s been nearly 50 years since the idea emerged, but if we really want to make big data and analytics work, we need to be thinking about semantic networks, semantic knowledge storage, and semantics-based analytics that can be distributed to the data’s main storage points.  It’s superHadoop to make the superInternet work.

We have a lot of business problems that this sort of semantic model could help solve.  Optimization of networking or production decisions, the target of some of the recent big-data announcements, is an example of the need.  Simple Dijkstra algorithms for route optimization are fine if the only issue is network “cost”.  Add in optimal server locations based on usage of a resource pool, policies that limit where things are put for reliability or performance reasons, and information availability, and you quickly get a problem that scales beyond current tools.  We could solve that problem with superHadoop.  We might even be able to evolve Hadoop into something that could solve this problem, and more, if we focused on Hadoop as an information cloud architecture.
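
Here’s the easy piece of that problem: plain Dijkstra over scalar link costs in a few lines of Python.  Everything the paragraph adds (placement, policy, information availability) falls outside what a single edge weight can express, which is the point.

```python
# Plain Dijkstra over a link-"cost" graph -- the easy piece of the optimization
# problem. Server placement, policy constraints, and data availability don't
# fit in one scalar edge weight, which is where this model runs out.
import heapq


def dijkstra(graph, source):
    """graph: {node: {neighbor: cost}}; returns {node: cheapest cost from source}."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float('inf')):
            continue  # stale heap entry
        for neighbor, cost in graph.get(node, {}).items():
            nd = d + cost
            if nd < dist.get(neighbor, float('inf')):
                dist[neighbor] = nd
                heapq.heappush(heap, (nd, neighbor))
    return dist


if __name__ == '__main__':
    net = {'a': {'b': 1, 'c': 4}, 'b': {'c': 1, 'd': 5}, 'c': {'d': 1}, 'd': {}}
    print(dijkstra(net, 'a'))   # {'a': 0, 'b': 1, 'c': 2, 'd': 3}
```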

Do you believe in the future of the cloud?  If you do, then there may be no single thing that would advance it further than simply separating the information and resource clouds.  We need an IT architecture, a semantic information architecture, for the new age.  Yes, it will likely enhance the resource cloud and be enhanced by it in return, but it’s an interdependent overlay on the resource cloud, the traditional cloud, not an example of how we implement it.  Big data and analytics problems are solved by the information cloud, not the resource cloud, and what we need to be doing now is recognizing that and looking for information-cloud solutions that go beyond the Hadoop obvious.

Can Alcatel-Lucent Steer the “Shift” Course?

Alcatel-Lucent announced its new strategy, and frankly I was disappointed in the articulation—or at least the amount of stuff that got articulated.  The “Shift” plan to me so far states the obvious, and that makes it less obvious whether Alcatel-Lucent really has a long-term strategy that can save it.

If you look at the high-level comments made, the sense of the moves Alcatel-Lucent intends is clear.  They’re going to cut R&D and sales emphasis on everything except ultra-broadband and IP, and in particular they are going to focus on “cloud infrastructure”.  This is the only possible strategy for addressing the declining differentiation in the networking space and the accompanying price competition and margin erosion.  Get out of the commodity space and into something that’s not a commodity.  Seems sensible.

It’s not, because every sector in networking is circling the same drain; some are just deeper in the bowl than others.  Yes, Alcatel-Lucent can jump out of the spaces where they’re inevitably going to lose to Huawei and/or ZTE, but all that will do is buy them a little time.  The real question has never been whether the company would stubbornly try to defend its broad collection of product lines in the face of relentless commoditization; it’s been whether the company knew how to avoid commoditizing everything.  So far, they’re not giving us enough to judge whether they have that vision.

“The cloud” is probably the thing Alcatel-Lucent executives would point to as their touchstone for future profit, and if we define the cloud as the union of information technology and networking, they’re right.  What else is there, after all?  However, neither IT nor networking is all that financially healthy.  With CIOs reporting to CFOs in corporate organization charts, we’ve acknowledged networking as a premier cost center, not an innovation center.  In any event, it’s not likely that Alcatel-Lucent will try to get into computing.  Given that, just what is it that they could do in “the cloud”?

I think the only answer is the thing I’ve been calling the “supercloud”, the cloud created not by attempting to do hosted server consolidation but by the evolving needs of mobile broadband users.  “Point-of-activity empowerment” is the goal of everyone these days, and while we’re demonstrably meeting the early demand in that space we’re clearly not thinking much about evolution.  Absent a credible evolutionary strategy, mobile broadband is just another source of pressure for cheap bits, which means cheap networks.

Alcatel-Lucent knows something about this space.  Their concept of the “high-leverage network” is spot-on in that it expresses the fact that for network vendors to prosper, network investment has to prosper.  Leverage is a good way of describing that; you “leverage” your investment in many ways.  Pushing bits to support five different applications isn’t five leverage strategies, either.  It’s one strategy—pushing bits.  That means that service intelligence is mandatory for Alcatel-Lucent, and for the industry.  How do we get it?

Traditional cloud means hosting on the network, which is back to pushing bits, network-wise.  We have to think about creating things in the network that can differentiate and revalue it.  Even here, Alcatel-Lucent has credible past positioning in its APIs.  But they’ve not been able to make something happen with APIs, and they’ve recently begun to pull back on some of their previous strategies—selling the API warehouse ProgrammableWeb that they’d bought with great PR fanfare not that long ago.

Alcatel-Lucent still has the right stuff, tech-wise.  What they don’t have is an inspirational articulation of the vision of the network of the future and a clear map as to how their pieces get a network operator to that promised land.  It’s almost like they want to educate buyers and not inspire them, and sadly that doesn’t work for revolutions.  So effective marketing/positioning is a must for Alcatel-Lucent’s survival.

So, IMHO, is NFV.  This concept is, I think, bigger than even its own body acknowledges.  It’s a way to create a model for deployment and operationalization of the supercloud of the future.  That’s not the only ingredient for that cloud (you need a functional architecture to define what it can do and how you build it), but deploying something is obviously a prerequisite for using it.  Here is where Alcatel-Lucent needs to focus in the near term; here is where they will likely stand or fall in a technical sense.  It’s a bully pulpit for a concept that Alcatel-Lucent needs badly.

The big challenges that a convergent supercloud-NFV has to face are less about deployment (which is an application of optimization and DevOps principles, complicated but understood) than about management and function modeling.  We need to be thinking about how network/service/cloud features are best conditioned for deployment in a supercloud.  Operationalization is also key.  Flexibility often comes at the price of complexity, and that can quickly escalate to eat any possible profits from service deployment.  Operators understand operations, but they understand it in the old five-nines-regulated-monopoly framework.  Everything can’t be ad-sponsored, and so the secret weapon of operators is that they know how to sell services people pay for.  They just have to figure out how to build new ones to sell, and how to operationalize that new service model in a suitable way.  And that’s what Alcatel-Lucent has to do, or its “Shift” won’t be a shift but the beginning of a slow retreat.

Will IBM and Amazon Show Red Hat the Way to the Cloud?

Today we have two pieces of cloud-market change, and as we’ll see, it’s important to consider the two as parallel developments.  One involves Red Hat, and the other IBM.

A while back, I commented that Red Hat’s absence from the cloud space was puzzling from a strategy perspective and perhaps harmful to the company’s prospects.  Well, they’re fixing that by creating packages based on Red Hat Enterprise Linux (RHEL) and OpenStack.  What distinguishes Red Hat’s approach is that they’re integrating the cloud into their commercial Linux platform, one that’s been hardened to support high-value applications and even service provider missions.  They’re likely going to harden OpenStack as well, and provide the combination with solid professional support.  The question is whether they can create a differentiable model at this point in the market.

That’s also a question for IBM, who recently appealed a government decision to base the CIA cloud buy on Amazon rather than IBM despite the fact that IBM was cheaper.  The reason was that Amazon offered more “platform services” that added value and facilitated integration of the cloud and web-based applications and services.  In effect, the GAO review said that Amazon was more of a PaaS than IBM, which implies that PaaS is better than IaaS even if it’s a bit more expensive.

The PaaS/IaaS wars don’t directly impact Red Hat, because it’s generally accepted that IaaS platforms are the most logical foundation for PaaS.  The problem is indirect; PaaS is “better” than IaaS because it can displace more cost in a public cloud, offers potentially better application development options in a private cloud, and above all has features—those platform services—that can differentiate a PaaS offering.  If PaaS has more features and more differentiators than IaaS on the service side, so it does at the platform level.

Which is what Red Hat now has to consider.  It has two cloud offerings on the table now.  One is aimed at large enterprises and public providers and focused on building big, efficient, clouds.  The other is for your typical hybrid-cloud enterprise.  Both missions could use a dose of platform services, as Amazon has proved.  What services will Red Hat settle on?  The initial announcement of Red Hat’s cloud was accompanied by a storage-integration announcement.  Red Hat’s storage will support OpenStack’s Block Storage (Cinder), Image Service (Glance) and Object Storage (Swift), which is good but not enough.  Red Hat will have to settle pretty quickly on a DevOps standard as well, and I expect something will come out in that area.  Similarly, I think Red Hat will be developing its own Quantum approach.  While all of this brings the major interfaces of OpenStack under one company roof, it doesn’t necessarily raise the bar for PaaS.
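
For a sense of what a “platform service” looks like from the application side, here’s a minimal object-storage sketch using python-swiftclient; the endpoint and credentials are placeholders, and the point is simply that the application consumes a service API rather than provisioning infrastructure.

```python
# A platform service from the application's point of view: Swift object
# storage via python-swiftclient. Endpoint and credentials are placeholders.
import swiftclient

conn = swiftclient.Connection(authurl='http://controller:5000/v2.0',
                              user='demo', key='secret',
                              tenant_name='demo', auth_version='2')

# Create a container and store an object; no volumes, hosts, or file systems
# are visible to the application at all.
conn.put_container('reports')
conn.put_object('reports', 'q2.csv', contents='region,revenue\nus,100\n')

headers, body = conn.get_object('reports', 'q2.csv')
print(body)  # raw object bytes
```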

At this point in cloud evolution, I think Red Hat needs a bully pulpit, a specific mission that will drive its cloud.  In the cloud provider/carrier space that mission could be Network Functions Virtualization, and in the hybrid cloud area it could be supporting multi-dimensional elasticity in business applications.  Best of all, these missions could be converging on a single architecture.

NFV is a framework for hosting network features in the cloud (though the body is reluctant to accept that the cloud is the only platform that’s suitable).  This activity demands a combination of composition agility and operational integration that’s totally absent in the cloud today, and totally necessary if the cloud is to really support mission-critical apps.  Thus, both of Red Hat’s cloud packages could benefit from a dose of NFV-like orchestration and management.  NFV will also demand the integration of shared-function components (IMS, DNS) and per-user components (firewall, NAT) into a single service, as well as support for services that are all one or the other.  That’s a value to enterprise application architectures too, and the shared-function components might be a good way of describing platform services in general.
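
Here’s a purely hypothetical sketch of that composition problem (these names and fields come from no NFV specification): one service descriptor mixing shared-function components deployed once with per-user components instantiated per subscriber.

```python
# Hypothetical service descriptor: shared-function components are deployed once
# and multi-tenant; per-user components are instantiated per subscriber.
SERVICE = {
    'name': 'managed-voice-and-security',
    'components': [
        {'function': 'ims-core', 'scope': 'shared',   'min_instances': 2},
        {'function': 'dns',      'scope': 'shared',   'min_instances': 2},
        {'function': 'firewall', 'scope': 'per-user', 'min_instances': 1},
        {'function': 'nat',      'scope': 'per-user', 'min_instances': 1},
    ],
}


def plan_deployment(service, subscribers, deployed=None):
    """Return instance counts: shared functions once, per-user functions per subscriber."""
    deployed = deployed if deployed is not None else {}
    for comp in service['components']:
        if comp['scope'] == 'shared':
            # Reuse an already-running shared instance set if one exists.
            deployed.setdefault(comp['function'], comp['min_instances'])
        else:
            deployed[comp['function']] = comp['min_instances'] * len(subscribers)
    return deployed


print(plan_deployment(SERVICE, subscribers=['alice', 'bob', 'carol']))
# {'ims-core': 2, 'dns': 2, 'firewall': 3, 'nat': 3}
```

The orchestration and management problem the paragraph describes is exactly the logic hidden in that little planning function, applied at scale and kept running.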

Which, I think, is the key to Red Hat’s success, because the Amazon/IBM decision just proved it.  PaaS differs from IaaS in that the former includes platform services.  Quantum and Glance and Cinder and Swift are precursors of what must become a wide range of platform services, some generally useful and some more mission-specific.  Obviously Red Hat can’t support them all, but they could build a framework for supporting them by supporting NFV.  Right now there’s no real articulated computer-vendor, cloud-stack support for NFV out there—nor, in fact, much more than a few blown kisses from other types of vendors.  This is a perfect opportunity to recognize NFV for what it is—the on-ramp to a future supercloud architecture.

I’m not seeing an NFV behind every bush here, I’m simply saying that NFV is one game that has almost universal potential, and you need that if you’re a late-comer.  You’ve got to jump ahead of the pack if you enter the market late.  Red Hat has definitely done the late-entry part and now has to get to the jumping part.  The evolution of application software demands a PaaS architecture that makes cloud services explicit platform services.  NFV provides a mechanism to build platform services.  QED.

What Does Telefonica Have that AT&T (Might) Want?

One of the more interesting M&A rumors is the story that AT&T had made a bid for Spanish telecom giant Telefonica, a move that was blocked (says the rumor) by the Spanish government.  Telefonica has since denied any overtures were made, and it seems likely that one or the other of these negatives would be enough to make a deal doubtful.  Still, one must ask “Why?”  Is such a deal logical under any circumstances?

In fundamental terms, Telefonica doesn’t seem to be exactly the poster child for outside M&A interest.  Spain is in the midst of a truly bad economic slump, and while telecommunications isn’t typically hit as hard during these slumps as other industries, there’s no question that the telecom giant is burdened by debt (as is Spain at large).

It’s also true that the EU telecom landscape has been replete with examples of Eurotelecom giants running to emerging markets to invest because margins and ROI are too low in their home territory.  That’s been much less a problem for US telecom companies, so it’s hard to see what AT&T would see in a Telefonica buy.  They’d likely do better in ROI terms by investing at home.

OK, then, what’s the benefit that might be driving AT&T’s interest?  I think it’s Telefonica Digital.  Somebody asked me recently who was, in my judgment, the most innovative of the carriers in facing the future and my response was “Telefonica”.  It’s not likely that AT&T would want Telefonica’s business, but they might want the innovation.  And they might yet get it, in one way or another.

Telefonica Digital has done a couple of fairly impressive things.  First, just by being there it’s an example of the first critical step a telco has to take to be a player in a modern notion of services.  You can’t make Bell-heads into Net-heads, so you have to keep the two partitioned organizationally so you can create a culture for the latter and offer a career path that makes sense.  Second, Telefonica Digital has come closer to framing a credible service-layer strategy than anyone else, even the equipment vendors.  Third, Telefonica Digital has been a leader in targeted OTT-modeled services in critical areas like health care and digital payments.  They’ve also made strides in digital content, and they’re big in Brazil and Latin America in general, a space that some US operators covet.

US operators are trying to become players in next-gen services, but the culture barriers to success have been formidable.  Arguably, the history of being a public utility has been the greatest political barrier and the barrier posed by integration of new services with OSS/BSS has been the most difficult technical barrier.  Telefonica has alleviated the first with its spin-out of Digital, and the story is that they’re working on the second as well.  If that’s the case, then they could be a highly valuable commodity.

AT&T is perhaps under the most pressure of the US operators.  They have a lower economic density than rival Verizon.  They also have a smaller percentage of enterprise headquarters sites than Verizon, and it’s the HQ that makes business network buying decisions.  Their business sites, overall, are less likely to be cloud compute candidates, which hurts that side of AT&T’s business service plans.  In short, they could use some insight into OTT service creation and the cloud, and that’s something that Telefonica could bring.

Or something that Telefonica Digital could bring.  If AT&T can’t buy Telefonica, could it do a deal for Telefonica Digital?  Could, in fact, that be the deal that AT&T’s been interested in all along?  Nobody would likely think that Spain would approve the sale of its national carrier.  But would Spain go along with a deal to sell an OTT subsidiary of that national carrier when the sale might well keep Telefonica solvent?  I think they might.  They would obviously have no problem with a deal that would license Telefonica Digital work to AT&T on a non-exclusive basis or create some other digital-layer partnership.

We may be seeing the first step in what will become a land rush as operators attempt to use M&A to gain more than just a tech nibble here and there.  The clock is running down on network profitability.  We have little time to frame a new business model, particularly considering that capex through 2016 will likely be up as operators try to use their last gasp of financial latitude to prep for the future.  M&A is a darn good way to get in the game again.

Subtle SDN/NFV Data Points

We’re seeing more evidence of major changes in the networking industry, this time from the vendor side of the picture.  Obviously one impact of a sudden shift in network operator business models would be a collateral shift in spending that would impact vendors depending on their product portfolios.  I think some of those impacts can be seen.

NSN, which has been a kind of on-again-off-again stepchild of Nokia and Siemens (and is now “on”), is rumored to be up for consideration for a complete Siemens takeover.  NSN has also articulated “pillars” on which it will frame its future business model.  Sadly, most of these are rather vapid (increase capacity, improve QoS) and those that have potential (cloud-enable operators) are vague.

The challenge for NSN is that their recent decision to focus on mobile may be putting them in the position of having shoes too small for their feet.  To be sure, mobile is a big deal for operators because margins are higher there, but ARPU for mobile users has already flattened and in some cases is heading into decline.  WiFi offload is actually revaluing metro, and so is CDN.  That means a pure-RAN-and-IMS push may not be enough skin in the game.  And IMS, remember, is perhaps the number one target of operators for NFV.  You can’t have an open-source radio, but you could take everything else in IMS, including the evolved packet core (EPC), and turn it into cheap hosted functions.  That would have a decidedly chilling impact on three of the big mobile suppliers (Alcatel-Lucent, Ericsson, and NSN), but most of all on NSN, simply because they don’t have anything else to play with.

The SDN and NFV future also presents risks to the vendors, obviously, though they’re much harder to assess.  Operators have expressed the conviction that future networks will be rich in both, but the fact is that operators really don’t have any solid strategies for making that happen.  In our spring survey results, now largely in for the network operators, we found that they did not believe they would have made “significant strides” in deploying either SDN (in the network) or NFV by the end of 2014.

Alcatel-Lucent may be the centerpoint of the whole evolution-of-the-network thing.  They have a broad network asset base, a strong incumbency, good professional services, and probably more assets in the SDN space than their competitors.  They’re working hard to build a cloud position, which could give them an SDN position.  Their stock has been rising on the Street’s expectation that they’re less disordered than some of their competition.  Philosophically, their biggest rival is Cisco, who has sacrificed broad carrier-product engagement for value-driven carrier-cloud IT engagement.  The question is whether Cisco can drive a cloud strategy for operators in an industry that is still unable to articulate one as a consensus framework or standard.  Many say that Cisco is one reason that’s happened; they’ve been dragging their feet on issues like SDN and NFV.  If that’s the case, then the Cisco/Alcatel-Lucent dynamic is the focus of a major Cisco gamble.  Our models say that carrier spending on network infrastructure will climb nicely through 2015 and then tip downward, never to rise again within the 2018 limit of our forecast.  If that’s true, Alcatel-Lucent could benefit enormously in the near term, and if Cisco can’t drive an IT shift to offset its lack of many of the basic elements of infrastructure (RAN comes to mind), then it risks watching a major rival reinvent itself with the cash earned from this spending surge.

Another vendor betting on change is Cyan.  The company has launched what’s essentially an SDN co-op with a bunch of other, largely smaller, players.  The titular focus of all of this is an SDN-metro deployment that would combine optical and electrical and link better to the cloud and to central network operations-based apps.  I like the idea a lot, but I’m not convinced there’s enough substance here.  The trouble with SDN these days is that all you need to do is spell out the acronym and the media thinks you have a story.  No details on how things actually work are required.  The press likely wouldn’t carry them anyway, but I’ve noticed that they don’t even go on the website.  If you have an ambition to SDN-ize metro, you have to open the kimono to the degree needed to explain exactly how you expect to do it, and I can’t get that from what Cyan has said.

We see a lot of SDN and NFV coming down the line, based on briefings we’ve received and what we’ve gotten from operators who have also been briefed.  Most of it seems to rely more on professional services and customization than on standard products and open architectures.  Since only major vendors can offer the professional-services path, it may prove that the major drivers of our big network revolutions are the least revolutionary of the players.  That could compromise the goals of both technologies.  Operators have fallen into this trap before, trusting innovation to those with the most to lose by innovating.  I hope they don’t do it again.

Two Good-News Items

There’s been some potential progress on a couple of fronts in the cloud, SDN, and NFV space (a space I’m arguing will converge to become the framework of a “supercloud”).  One is the introduction of Red Hat’s OpenShift PaaS framework as a commercial offering, and the other a proposal to converge two different OpenFlow controller frameworks into one that would provide for a “service abstraction layer”.

Since I think the cloud is the driver of pretty much everything else, we’ll start there, and with its most popular service.  IaaS is a bare-bones model of the cloud, one that provides little more than VM hosting and thus really doesn’t change the economics of IT all that much.  I’ve been arguing that cloud services have to evolve to a higher level, either to accommodate developers who want to write cloud-specific apps or to broaden the segment of current costs that the cloud would displace.  In either case, we need something that’s more PaaS-like.  Developing cloud-specific apps means developing to a cloud-aware virtual OS, which would look like PaaS.  Displacing more cost means moving up the ladder of infrastructure from bare (if virtual) metal to a platform that at least envelopes some operations tools.

Red Hat’s OpenShift is the latter, a platform offering that’s designed to address what’s already the major cloud application—web-related application processing.  You can develop for OpenShift in all the common web languages, including Java, and the development and application lifecycle processes are supported throughout, right to the point of running.  OpenShift currently runs as a platform overlay on EC2, which puts IaaS services in what I think is their proper perspective, the bottom of the food chain.
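
To ground that, here’s a minimal WSGI application of the kind such a PaaS hosts; the platform supplies the server, scaling, and lifecycle around it, and the specific entry-point conventions of OpenShift are a deployment detail I’m assuming rather than quoting.

```python
# A minimal WSGI app of the sort a web-focused PaaS hosts. The WSGI
# "application" callable is the standard Python entry point; where a given
# PaaS expects to find it is a deployment detail assumed here, not quoted
# from any vendor documentation.
def application(environ, start_response):
    body = b'Hello from the platform layer\n'
    start_response('200 OK', [('Content-Type', 'text/plain'),
                              ('Content-Length', str(len(body)))])
    return [body]


if __name__ == '__main__':
    # Run locally with the standard library's reference server for testing.
    from wsgiref.simple_server import make_server
    make_server('', 8080, application).serve_forever()
```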

OpenShift isn’t the end of the PaaS evolution, IMHO.  I think Red Hat will be a player in deploying further PaaS tools to create a more operationalizable cloud that’s aimed at the web-related application set.  The question is whether they’ll take any steps to advance to the other PaaS mission, the creation of a virtual cloud OS.  In some ways, Java and perhaps some of the other popular web-linked languages are suitable candidates for distributed package development.  What’s needed is a definition of platform services which, added to either IaaS or a development-enhanced PaaS, would create a virtual cloud OS.  We actually have a lot of that today, particularly if you add in the OSGi stuff.

OSGi stands for Open Services Gateway initiative, and it’s an activity that implements what many would call a Service Abstraction Layer (SAL), which is the other piece of my potential good news.  SAL is an architecture that offers a Java-and-web-friendly way of creating and using Java components in a highly dynamic way.  It’s been around for perhaps a decade, and with concepts like SDN and NFV it could come into vogue big time.  In the OpenDaylight context, the concept seems to have this original meaning and another besides: a notion that the SAL would provide a common semantic for talking to control elements or drivers (“plugins”), which would then let them relate to the rest of OpenDaylight in a consistent way.  You could argue that the models in Quantum are an abstraction of this kind, for example.

One of the potential advantages of SAL in OpenFlow Controller missions is creating an organized way to modularize and enhance the controller by providing an easy exposure of low-level features and the binding of those features upward to applications.  Without some common set of abstractions, it’s doubtful whether a controller could use all of the possible southbound protocols in the same way, and variability in how they had to be used would then percolate upward and make it harder to build applications without making them specific to the control mechanism at the bottom.  Thus, in theory, this SAL thing could bring some structure to how the famous northbound APIs would work, and it could also allow for the easy support of multiple lower-level control protocols rather than just OpenFlow.
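
Here’s a hypothetical Python sketch of that idea (the real OpenDaylight SAL is Java/OSGi, so this is illustration only): applications request a forwarding behavior in one vocabulary, and interchangeable southbound drivers realize it over OpenFlow, OVSDB, or anything else.

```python
# Illustration of the service-abstraction-layer idea: a common driver
# interface hides which southbound protocol realizes a request, so the
# application layer never becomes specific to the control mechanism below.
from abc import ABC, abstractmethod


class SouthboundDriver(ABC):
    """One driver per southbound protocol; all expose the same abstraction."""

    @abstractmethod
    def apply_path(self, device, match, out_port):
        """Realize 'traffic matching <match> leaves <device> via <out_port>'."""


class OpenFlowDriver(SouthboundDriver):
    def apply_path(self, device, match, out_port):
        # Stand-in for building and sending an OpenFlow flow-mod.
        print('OpenFlow flow-mod to %s: match=%s output:%s' % (device, match, out_port))


class OvsdbDriver(SouthboundDriver):
    def apply_path(self, device, match, out_port):
        # Stand-in for an OVSDB transaction that produces the same behavior.
        print('OVSDB transaction on %s: %s -> port %s' % (device, match, out_port))


class ServiceAbstractionLayer(object):
    """Applications call this; they never see which protocol does the work."""

    def __init__(self):
        self.drivers = {}          # device name -> driver controlling it

    def register(self, device, driver):
        self.drivers[device] = driver

    def apply_path(self, device, match, out_port):
        self.drivers[device].apply_path(device, match, out_port)


sal = ServiceAbstractionLayer()
sal.register('edge-1', OpenFlowDriver())
sal.register('vswitch-7', OvsdbDriver())
sal.apply_path('edge-1', {'nw_dst': '10.0.0.0/24'}, out_port=3)
sal.apply_path('vswitch-7', {'nw_dst': '10.0.0.0/24'}, out_port=1)
```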

The proposal is to make OpenDaylight a three-layer controller architecture, with the low layer being the “drivers” for the control protocols, the middle layer being “network services” (the platform services of SDN, in a sense), and the higher layer being the applications.  SAL would fit between drivers and network services and would initially present a dual OpenFlow/OVSDB interface.  Presumably it would be easy to add other control frameworks/languages.  This sort of structure seems to play to the strengths of the Cisco-contributed code, which (obviously) can be enhanced to support the onePK API.

The biggest impact of this change, if it happens, would be to make OpenDaylight more of an SDN controller and less of an OpenFlow controller.  It doesn’t destroy OpenFlow, but it would make it easier to develop control alternatives that might make more sense.  Recall that OpenFlow was really designed to be an easy way of communicating ASIC-level forwarding rules to hardware, something that is probably not really necessary as SDN evolves.  Why?  Because there would likely be control logic in the devices that could compile forwarding rules from any reasonably expressed structure.  That would make it easier to apply SDN principles to protocols/layers that are opaque from a normal packet-header perspective.  Like optics.

All this makes one wonder just what Cisco has in mind here with OpenDaylight.  As I commented earlier, the controller code seems quite good, and these new concepts seem to advance the controller toward the point where it might actually provide the basic level of utility needed to drive an OpenFlow “purist” network model.  But I want to point out that routing and switching are still “applications” to the controller, and most of what we take for granted in networking is thus still up in the Frozen Northbound…APIs, that is!