Three Network Sign-Posts

It’s been an interesting week for the markets, in all of the dimensions that drive us forward.  There are glimmers of technology change in SDN, there are signs of vendor shifts, and there are macro indicators that might tell us a bit about demand.  So, given that it’s Friday and a good day to take a broader look at things, I’ll talk about all three.

NTT’s strategy linking SDN technology to cloud service delivery for hybrid cloud and even cloud-shifting applications is potentially interesting.  If you look at the infrastructure dimension of SDN, you see it focused primarily on narrow data center switching applications or (in Google’s case) a fairly specialized IP core.  The big opportunity for infrastructure SDN is metro, and we’ve been light on examples of things that you could do with it in the metro space.  NTT proposes to have data center connections made via an SDN-controlled gateway, which could not only create a framework for metro cloud services to enterprises but also serve as an element in NFV implementation.

The data center is the foundation of NFV, but the biggest (and perhaps hardest) step in NFV infrastructure evolution is creating a web of resource connections throughout the metro area.  Applications like IMS/EPC and CDN, both NFV use cases, demand interconnection of metro resources.  Most of the advanced mobile/behavioral opportunities that operators have identified—both in the enterprise space and for consumers—demand a highly elastic resource pool with good connectivity that lives underneath the orchestration and operationalization layers.  Thus NTT may be driving some activity in a very critical area.

On the vendor side, we can see that the commentary spinning out of Cisco’s Live event on Insieme does seem to be focusing on something more “operationalizing” in terms of benefits, which I think is both good for the market and smart for Cisco.  The goal of SDN must be the support of the evolution of that grand fuzzy artifact we call “the cloud”.  If SDN doesn’t do that, it’s just tweaking current processes and won’t matter much in the long run.  The challenge in supporting the cloud isn’t just a matter of speed-matching agile application-layer evolution with seven-year-depreciation infrastructure, it includes the question of how you can make all that dynamism work without creating an explosion in operating costs.  That problem is so critical to carriers that it swamps the question of how you optimize deployment or whether you even use SDN technology.

We need a new operations model for the cloud, and in fact bodies like the TMF have recognized that for at least four or five years.  The problem is that we’ve been unable to create one using the traditional standards processes, largely because those processes get so tied up in evolution they forget the notion of destination.  If Insieme can be used to take a bold leap into the operational future and jump over all the dinosaurs of past processes, then it gives us the destination of cloud operations.  Getting to that goal can then be addressed, and I know for sure that there are models of operations evolution that can manage both the goal and the route.  I’ve presented them to carriers, in fact.  What has to happen now is that they get productized.

The first of our broad-market points is the Accenture quarterly report, which was a revenue miss due to shortfalls not so much in outsourcing as in consulting activities.  Professional services have been a bright spot for a lot of companies, not the least being IBM.  In networking, Alcatel-Lucent, Ericsson, and NSN are increasingly dependent on them.  So the question at this point is “Are tech buyers being asked to spend more on professional services to offset competitively pressured equipment price reductions?”  It appears to me based on the market data and my spring survey results (now almost all in from the responders) that buyers think so.  Enterprises are starting to adopt that suspicion-of-my-vendor mindset that evolved over the last five years in the carrier space and has substantially poisoned the credibility of the vendors.

The challenge this poses for vendors is multi-faceted.  On the surface, having your buyers distrust you has to be a bad thing, and that’s especially true when they’re trying to make a major technology transition.  In fact, the poisoning of the credibility well is so serious a problem for vendors that the only reason it’s not hurting them is that nearly all of them have it.  Cisco seems to be the vendor that has escaped the credibility slump the best, and not surprisingly it’s the network vendor doing the best in the marketplace.  But there’s a deeper point too, and that’s the fact that buyers really do need professional services to help them along, and if they don’t get them because the services are overpriced or the sources aren’t trusted, then major new applications of networking and IT can’t be deployed.  That would disconnect all our wonderful new technology options—cloud, SDN, and NFV—from the improve-the-benefit-case side of the process, focusing them entirely on cost reduction.  That, as I’ve said for years now, will take networking and tech to a place few of us in the industry will want to be.

The good news is that I think we’re starting to see, in the “SDN wars,” the beginning of a synthesis of a value proposition.  Yes, it would have been better had someone simply stood up and announced the right answer, meaning positioned their assets to address the real market value of SDN.  However, we’re finding the right spot by filling in a million blank spaces, some substantially useless, and then looking at the impact of each new solution.  Eventually, we’ll see the elephant behind the forest of boulders, snakes, and trees.

“Insieme” Spells “Operationalization” for Cisco

Cisco’s comments on its new “Application-Centric Infrastructure” vision are yet another proof point for my argument that Cisco has successfully played the weaknesses of SDN players against them.  In military tactics, seizing the high ground is a maxim.  So it is in marketing, and Cisco has done that with finesse.

You can harp on about the value of SDN in terms of efficiency or better hardware/software innovation or whatever, but to actually defend SDN’s value you have to get back to the place where software starts defining networking.  That means you have to face those northbound APIs and the applications that control connectivity and manage traffic.  In the SDN space, startups have tended to either jump into the overlay virtual networking space or the SDN controller space.  The former disconnects the application from the network through an intermediary virtual abstraction and the latter is too low on the totem pole to understand services and support software control.  Cisco, I suspect, knew all along that the OpenFlow/SDN community would take too small a bite to be a threat, and they went for the APIs instead.

What Cisco now seems to be doing is preparing a spot for its own SDN-spin-in story, Insieme Networks.  That’s actually going to be the tricky part for Cisco, because up to now their SDN approach was the “quacks-like-an-SDN” model; if something exhibits expected SDN properties at the API level, then it’s an SDN.  That works as long as you don’t look inside the SDN black box Cisco has built, but Insieme forces Cisco to open that box a bit, to give their picture some internal structure.  That will then let competitors take shots not at the philosophy of the Cisco approach but at the technology.  So Cisco has to defend.

If you look at Cisco commentary on Application-Centric Infrastructure, you see that there’s a lot of integration and operationalization inside it.  It’s tempting to see Insieme as something that would address that, particularly since Cisco is aiming it at the data center.  That would make Insieme very Contrail-like or Nuage-like, perhaps, a means of linking virtual and real networks.  But virtualization and abstraction work against integration and operationalization, and Cisco will have to address how those two are resolved or face the risk of a competitor who has tangibly better answers.

Network operators have already told me that they’re more worried about how cloud applications (including NFV) are operationalized than about how they’re optimized and deployed.  The challenges of operations in a virtual, integrated world are formidable because there are no real devices that present MIBs, and there are elements of application/service performance that don’t belong to any component that you’ve deployed, but to the connections between components.  In the world of the supercloud, the future, you have to be able to derive operations rather than apply them, because in the virtual world there’s nothing real to manage.  As we evolve more to virtual elements, we’ll have to face the transition from real device management.

To what?  One thing that’s clear is that you’ll need to rely more on automated processes and less on human practices in the cloud of the future.  You’ll also have to take a more service-centric or user-centric view of resources and behavior rather than a device-centric view, because you don’t have real devices any more but you’ll always have real users (or you starve, and your problems become irrelevant in a market sense).  As an industry, though, we have never really come to terms with a service-user-centric vision of network or IT management; everything ends up coming down to operations centers drilling down through layers of devices.
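To make the “derive rather than apply” point concrete, here’s a minimal Python sketch of the idea; the component names, the telemetry fields, and the latency budget are all hypothetical, and the point is only that service-level state is computed from virtual elements and the connections between them rather than read from any real device MIB.

from dataclasses import dataclass

@dataclass
class Element:
    name: str
    healthy: bool
    latency_ms: float

@dataclass
class Link:
    a: str
    b: str
    healthy: bool
    latency_ms: float

def derive_service_state(elements, links, latency_budget_ms):
    # Roll component and connection telemetry up into a service-level view;
    # nothing here queries a device, because there is no real device to query.
    total_latency = sum(e.latency_ms for e in elements) + sum(l.latency_ms for l in links)
    if not (all(e.healthy for e in elements) and all(l.healthy for l in links)):
        return "degraded: component or connection fault"
    if total_latency > latency_budget_ms:
        return "degraded: latency budget exceeded"
    return "healthy"

# Two chained virtual functions and the tunnel between them.
elements = [Element("vFirewall", True, 2.0), Element("vRouter", True, 1.5)]
links = [Link("vFirewall", "vRouter", True, 4.0)]
print(derive_service_state(elements, links, latency_budget_ms=10.0))

The interesting part isn’t the arithmetic; it’s that the “managed object” here is synthesized, which is exactly the transition away from real device management described above.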

Whether Insieme contributes anything directly to this process isn’t the relevant point; the challenge is that when you solve problems in a new way you have to operationalize for that new solution.  Cisco is dragged into the operations side of the cloud whether they like it or not, and of course so are their competitors.  Every SDN strategy should be judged in part based on its operations context, but we’ve been unable to compare SDN operationalization because the competitive focus (and thus the product design and articulation) has been on network features.  And Cisco, by abstracting the network, has been able to stay out of the fray completely.  Now, with Insieme coming along, it has to dive in, and that makes the full picture from APIs to technology fair game for competitive byplay.  Including operations.  Especially operations, in fact, because if you can’t carefully operationalize our new and agile virtual world you’ve only invented a new way of getting into deep cost trouble down the line.  The complexity of a virtual system is inherently higher because of its additional flexibility and the multiplication in the number of elements.

So watch Cisco’s Insieme stuff for operational clues, and start examining SDN stories for their operationalization angle.  What you can’t build and deploy and sustain, you can’t bill for and profit from.

More Interesting Stuff in Overlay SDN

The SDN game never ends, possibly because nobody wants to diss a good market hype wave while it still has momentum, and possibly because there’s still room to do something useful given the rather vague utility of some current strategies.  In any case, PLUMgrid has joined the fray with an offering they call “Virtual Network Infrastructure”, a model that among other things drives home some changes in the SDN market model.

At a high level, we’ve always had “overlay SDN” and “infrastructure SDN”, meaning a division of SDN models between a connectivity-management mission (overlay) and a traffic management mission (infrastructure).  A third model, sort of, can be created using virtual switches (OVS), and you can combine virtual switching and tunnel-overlay to create a more flexible and agile data center.  All these models have to be mapped in some way to the cloud, so I think the most useful way of looking at PLUMgrid is to look at it from an OpenStack Quantum (now called “Neutron”) perspective.

Neutron builds networks by converting “models” to connectivity and functionality.  For example, the classic model of a Neutron network that hosts application elements is a VLAN that’s combined with functions like DHCP and a default gateway.  What PLUMgrid has done is to translate Neutron fairly directly into connectivity and functionality, using a combination of network tunnels and hosted virtual functions.  Virtual switching and routing are provided where the model has explicit bridging/routing functions; otherwise there are only connectivity and higher-layer functions.  In some ways, the connectivity model is similar to that of Nicira, and OpenFlow is not required because forwarding is implicit in the implementation of the model; connections are tunnel-supported rather than switching-derived.  There’s a rich developer kit provided with PLUMgrid as well as management interfaces available for most popular cloud platforms.
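For readers who haven’t driven Neutron directly, here’s a minimal sketch (using the python-neutronclient library; the credentials and the external-network ID are placeholders) of that classic model expressed in code: a tenant network, a DHCP-enabled subnet, and a default gateway.  How the plugin underneath realizes all this, with tunnels and hosted functions in PLUMgrid’s case, is invisible at this level.

from neutronclient.v2_0 import client

# Placeholder credentials; in a real deployment these come from Keystone.
neutron = client.Client(username="demo", password="secret",
                        tenant_name="demo",
                        auth_url="http://controller:5000/v2.0")

# The logical network and a DHCP-enabled subnet.
net = neutron.create_network({"network": {"name": "app-net", "admin_state_up": True}})
net_id = net["network"]["id"]
subnet = neutron.create_subnet({"subnet": {"network_id": net_id, "ip_version": 4,
                                           "cidr": "10.10.0.0/24",
                                           "enable_dhcp": True}})

# The default gateway: a router attached to the subnet and to an external
# network (the external network ID is a placeholder).
router = neutron.create_router({"router": {"name": "app-gw"}})
router_id = router["router"]["id"]
neutron.add_gateway_router(router_id, {"network_id": "EXTERNAL-NET-ID"})
neutron.add_interface_router(router_id, {"subnet_id": subnet["subnet"]["id"]})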

So how does this relate to other SDN models?  Arguably, PLUMgrid is a mature and perhaps better-thought-out version of Nicira.  It’s distinctively cloud-data center in targeting, it’s at least so far an in-center model and not an end-to-end model, and it’s a true overlay SDN and not a virtual switch hybrid.  It doesn’t attempt to align overlay SDN vision with physical infrastructure as much as to simply use what infrastructure is available.  That means it can run over anything, even a combination of things like Ethernet and IP.  That makes it easy to build hybrid networks that extend from the data center into a public cloud.  Where some virtual SDN models are more “physical” (Alcatel-Lucent/Nuage, Juniper/Contrail, Brocade/Vyatta) PLUMgrid is solidly virtual.

What PLUMgrid makes very clear, I think, is that there is a lot of potential value to visualizing SDN as a two-layer process.  At the top there’s virtual/overlay networking that has to be very agile and flexible to conform to software needs.  Below that, there’s physical or infrastructure SDN, where software control is likely exercised more at the policy level than by managing specific connections.  Separating these functions is good for startups because it keeps them out of the hardware business, and it lets them focus on the cloud.

The two specific questions that PLUMgrid raises are “Can you really make the cloud network a ship in the night relative to other traffic?” and “Can you do a useful network that doesn’t really extend end to end?”  I think that the answer to both questions is “Under some conditions!” but I also think that the questions are related.  If you contain the mission of overlay SDN to the data center then the cost of presuming ample connectivity and capacity is limited and might even be offset by the management simplicity of a fabric model.  Thus, overlay-data-center SDN can be traffic-insensitive.  As soon as you start transiting the WAN, though, you have to consider SLAs and QoS and all that stuff.

This is a good step for SDN because it’s making it clear that we really have a number of models of SDN that have their own optimum missions.  We may also have an unknown number of situations where any given model will do as well as any other.  We also have situations where some models will do better, and for buyers there’s going to be a need to align model and mission very carefully.  Nobody in the SDN vendor space is likely to work very hard to make that easy, but some of this mission-utility stuff may emerge from competition among vendors, particularly between overlay and infrastructure SDN providers.  Infrastructure SDN can be easily made end-to-end and is typically justified in large part by traffic management capability, so it stands to reason that the relationship between overlay SDN and traffic will emerge out of competition with the infrastructure side of the family.  In any case, I think a debate here would be good for SDN and it might even create a future where we’re less concerned about “OpenFlow” and centralism on the infrastructure side.  That would dodge the problem the SDN industry has wanted to dodge from the first: those northern-living application elements that create the service model on top of OpenFlow.

Is Cisco the New King of the Supercloud?

Cisco has used its Cisco Live event to buttress its Open Network Environment and the onePK API set that’s designed to address the SDN and NFV space.  In the move, Cisco is showing us why they’re able to grow sales when others are under pressure, and also why the notion of a network revolution isn’t as easy to promote as one might think.

From the first, ONE has been what many would call a cynical play, blowing kisses at things like OpenFlow and NFV to gain benefits while at the same time downplaying or even obstructing the radical change that both technology initiatives could represent.  That may well be true, but in the marketplace you don’t expect sellers to lie down and die so the next generation can trample easily over the remains.  What Cisco did that was very smart was to grasp the essential notion of SDN (and, to a lesser degree, NFV) and capture the value of that notion without promoting technology changes that would have been hurtful to Cisco.  Software control of networking is a top-down requirement.  You have to start with the software’s mechanism to exercise control and go from there, which is what onePK did.

Cisco has now announced it’s rolling ONE out across more products, but what’s really interesting is that they’re doing this under the label of “Enterprise Network Architecture”, the broad strategy Cisco is targeting at (obviously) enterprises.  Enterprise interest in SDN and NFV is significantly different from network operator interest; literacy among enterprises is about half that of operators for SDN and it’s in the statistical noise level for NFV.  However, there is interest among enterprise buyers in SDN and NFV as a means of bringing networks to heel in supplying application services the way applications want.  That’s why the onePK concept is valuable.

I think that what Cisco is doing here is building from the API boundary point in both the upward (cloud) and downward (network) directions.  If you look at its acquisition of Composite Software, a data virtualization player, you can see the beginning of what might be “cloud platform services” emerging from Cisco.  They don’t want or need to push their own version of cloud stack software—IaaS is just a platform to build valuable stuff on.  The valuable stuff is likely to come in the form of cloud platform services that can enhance applications by leveraging what makes the cloud different.  It’s no surprise that Cisco is linking enterprise mobility to this, because it’s mobility-based applications and what I’ve been calling “point-of-activity empowerment” that provide the benefit kicker that will justify any enterprise excursions into SDN, NFV, or even ultimately a large-scale commitment to the cloud.

Data virtualization, network virtualization, resource virtualization.  It’s not hard to realize that we’re moving to a virtualization-based platform, not in the old sense of VMs but in the new sense of “fully abstracted”.  We’re building a cloud operating system, just like we need to be, and Cisco may well be the player who has the clearest idea of what that might look like and how to get there without sacrificing early sales.  Not only that, by grabbing the API side of this, Cisco is going after a space where newcomers in the SDN space have been weak.  “Northbound APIs” have been that vague pie-in-the-sky thing that you pushed everything you didn’t want to implement into.  If Cisco makes APIs concrete, they expose the fact that others are really pretty simplistic in how they view them.

Cisco’s weakness in this is NFV.  They’ve lumped it into the same package as SDN, which suggests that at least for the moment Cisco is looking at NFV as a kind of SDN stepchild, which it most definitely isn’t—NFV is the value-proposition senior partner because NFV as an architecture defines the deployment and operationalization of agile, componentized, cloud-specific services and applications.  The question is whether Cisco’s NFV myopia is due to the fact that they can’t easily fit it into their current SDN/ONE approach and are waiting to see if something gels, or because Cisco actually has a grand NFV design and they don’t want to let it out yet.

I’m inclined to favor the latter conclusion here.  I think it’s obvious to many at Cisco that NFV is really creating the framework for the future of both cloud and the network, but it’s doing that creation at a layer above the network itself.  That favors Cisco’s API strategy because NFV components could reasonably exercise APIs just as any other cloud/software element could.  It’s not logical to assume that something that so easily fits with Cisco’s strategy at the API level could be held back because those APIs aren’t the same as SDN would use.  Again, the data virtualization buy suggests that Cisco is looking at virtualization in the fullest sense.

Alcatel-Lucent and NSN are increasing their SDN, NFV, and cloud articulation, but it’s too early to tell whether they have anything substantive to say.  Cisco has a lot of real technology pieces here, and so it’s going to be very hard for competitors to catch up if they let Cisco take a convincing lead in articulation too.  I think that anyone who doesn’t have a good SDN/NFV/Cloud story by mid-November is going to be in for a cold winter.  So…look for more M&A.

The Age of Do-It-Yourself Networking?

Who’d have thought that Google and Facebook and Netflix might be the face of competition for network vendors?  Well, it’s happening, and while the risk isn’t acute at this point for reasons I’ll get into, the actions of these three OTT giants are a symptom of the issues that face networking and network equipment.

Google and Facebook are getting more into the network switch business.  Google’s use of OpenFlow and custom hardware to enhance its network core is pretty well-known, and they’re probably a poster child for SDN in an IP core.  The technology Google has used, at the software level at least, is pretty generally available and so in theory anyone could run out and adopt it.  Their custom hardware isn’t (obviously) off the shelf, but it’s a harbinger of what’s coming down the line in generic switch technology.  Add some open software to cheap hardware and you get a network.

Facebook’s plan is to build custom data center switching, also based on open-source technology.  The concept is an offshoot of Facebook’s Open Compute Project, and its goal is to create a fabric switch that would flatten large data centers.  Behind Facebook’s efforts is a shift in traffic patterns; social networks are an example of modern applications that have more inter-process communication than user-to-process communication.  If everything is moving in that direction (which I think is likely) then could Facebook be building the model of future data center networks?

Netflix’s idea is to build their own cache point, a mechanism to make it cheaper to deliver quality video to their customers.  Content delivery networks are typically built from fairly expensive purpose-built gear, or by tweaking general-purpose hardware, both of which are too expensive from Netflix’s perspective.  So they’re rolling their own stuff, creating content caching in the midst of a market that has many CDN providers and many CDN products.

So OTTs, like network operators, are looking to do networking on the cheap.  Behind all of this is a simple truth: networking is under crushing cost pressure, and nothing is likely to change this picture any time soon, or in fact ever.  Improved chip design, a more software-centric product framework, and better tools are allowing at least some network users to become their own vendors, to squeeze out a bit more margin.  You might wonder whether do-it-yourself networking is going to take over the world and sweep the tech giants out of the game.

Not likely, particularly in the near term.  The fact is that even though concepts of open design for servers have been around for several years, we’re not seeing open servers being cobbled together by your average (or even above-average) enterprise.  Being a vendor isn’t all that easy, and the giant OTTs have an opportunity to play in that game largely because they are giants.  For the masses, do-it-yourself isn’t an option.  Even the giants aren’t necessarily going to have an easy time of it.

I’ve watched three standards groups struggle to define a new network model that’s more software-driven, and thus less expensive.  None of them have found it an easy job, and arguably the first two have already failed at it.  Here’s a mass activity, well supported, and yet unable to take the steps needed to break free of what they believe is a restrictive model (and market).  So what’s the chance that individual users, even individual carriers, could carry off a self-help network transformation?  Then there’s “who supports it?”  Look at Linux.  We have what is absolutely the key operating system of our time, and it’s open source—free.  Yet when we ask businesses how they want to use Linux, over four out of five say that they want it from a source that will offer support.  Red Hat is “Linux” to more companies than Linus Torvalds is; in fact nearly all Linux users know who Red Hat is and fewer than half know its inventor.  And when you ask businesses whether they’d be comfortable with white-box, open-source networks, guess what they say?  No, not yet.

OK, you say, we’ll get everything from inventive startups who will reshape the network landscape by creating something based on those white boxes and open tools, right?  Ask VCs first whether they really want to fund a networking startup, second whether they’d do one with a broad general product mission, and third whether they’d accept something that was based on off-the-shelf commodity stuff that anyone else could assemble as easily as their startup could.  You know where that one will go!

So just what’s going to come out of these Google, Facebook, and Netflix efforts?  Nothing, you might think after reading my points here.  But like the network operators’ NFV initiative, the OTTs’ do-it-yourself craze is a message to vendors.  Big buyers are not seeing feature value commensurate with price levels.  They either want a lot more useful stuff or they want regular stuff a lot cheaper.  That alone will have a major impact on markets, particularly since arch-price-leader Huawei is determined to provide both features and value at the same time.

I think that the do-it-yourself network trend is just another factor to hit vendors with lower margins unless they can do something truly innovative.  The challenge for them in doing that is that we’ve spent about a decade and a half at this point winnowing innovation out of the industry in favor of aggressive cost control.  Clearly software, computing, and the architecture that binds them into a unified fabric of services is the right answer for networking.  We’re also at the point where tools are indeed facilitating self-help, maybe not for everyone but for the big buyers.  There’s no question that change is on the way.  Whether it will come by having major vendors accommodate commoditization, by Huawei driving everyone out, or by big players rolling their own networks and leaving “consumer networks” as the only mass market is too early to say.

What Oracle and Cisco Say About the Cloud and NFV

Oracle announced their numbers yesterday, and the company took a hit for having a soft quarter from a revenue perspective.  As has been the case for quite a few quarters, hardware sales were soft (though Oracle said they saw signs that the slip would reverse shortly) except in the appliance area.  Software had some issues too, and Oracle cited global economic factors as the cause.  Since one of the areas of softness was middleware, I’m not sure that’s the case and I don’t think Oracle really believes it either.

In the Q&A, Ellison alluded to a major cloud software announcement coming shortly, and from the comments he made it sounds like it may be something like a “soft hypervisor”, a container-based multi-tenant management architecture rather than a fixed hardware partitioning.  To improve security, Oracle will also provide its first true multi-tenant database in its 12c release.  So it may be that part of the middleware issue was Oracle’s prepping for a fairly radical shift in its cloud platform strategy.

Clouds are a balance between efficiency and security, because the more rigorous the partitioning among tenants, the more tightly resources are bound to specific VMs, which means the hardware isn’t efficiently used.  Solaris, Sun’s (IMHO best-in-the-market) OS, has always had the ability to support “containers” that provided more isolation than simple multi-tasking but less than hardware hypervisors.  It may be now that Oracle is going to enhance the container model and offer improved cloud efficiency while maintaining security at least at past levels.  With the new DBMS, perhaps even better than past levels.

Oracle’s SaaS cloud stuff (Fusion) is doing quite well, and one truism with SaaS is that since the application layer is more under your control, you have to worry less about tenant security.  The question is whether Oracle is tweaking cloud software to suit the mission of its own cloud, or reflecting a reality about the cloud—SaaS is where the money is.  Certainly Oracle has every reason to push SaaS and push containers versus hypervisors in the broad market—it hits players like VMware whose ascendency in virtualization gives it an edge in the cloud that even Cisco seems to fear.

Speaking of Cisco, they’re still pushing their “traffic sucks” vision, meaning that traffic growth just sucks dollars out of operator pockets and into network (read, Cisco) equipment regardless of ROI.  Their push is that the “Internet of things” will enhance business operations and so create more spending.  Our model says that while Cisco may be right about the fact that spending on networking is driven by business benefits, they’re wrong to say that those benefits arise from connecting stuff.  Applications enhance productivity, not information sources.

Cisco also made some comments on NFV, which confused a number of the financial analysts I talked with.  They suggested that NFV would actually open a bigger market for Cisco servers (true) and finessed the questions on whether that bigger market came at the expense of the market for higher-priced network devices.

I can’t take Cisco too much to task for this, because the truth is that NFV will have little negative impact in the near term, and there could in fact be significant positive impact.  Down the line as NFV concepts mature, there is likely a growing shift of budgets for operators from bit-pushing to service-creating.  Cisco could be a beneficiary of that, but only if it stakes out a rational NFV position.  It’s hard to say whether they have one at this stage; certainly they aren’t talking about it.  But then neither are its primary competitors, Alcatel-Lucent and Juniper.

Oracle’s potential shift toward a soft virtualization model could have implications for NFV, as it happens, and even for Cisco NFV.  Like SaaS, NFV is a framework where tenant software is under pretty tight control and thus would likely not require as rigid partitioning.  The NFV white paper started the body down the path of “virtualization” which most have taken to mean “hypervisors” rather than containers.  Might Oracle with its supercontainer (hypothetical, at this point) architecture jump into a premier position as a host for NFV?  The company, recall, did two acquisitions (Acme and Tekelec) that could be aligned with NFV aspirations.  It has all the platform components, and while on one hand Oracle NFV might step on Cisco’s plans, it might also help Cisco unseat VMware.  Which raises the question of whether Cisco might want to deploy its own container-based cloud stack down the line.  There’s open-source Solaris-modeled software out there, and Joyent (a Dell cloud partner even before Dell got out of its public cloud business) has a container-based cloud stack and public cloud service.  Things could get interesting.

The point with both Cisco and Oracle here is that the drivers of change in our industry are very real, but very different from the simplistic vision we tend to get fed by vendor PR mills.  Something big is very likely to happen in the cloud and in NFV in the next year, but it probably won’t fit our naïve preconceptions of cloud evolution.

A Tale of Two Clouds

With IBM announcing a bunch of “C-suite” analytics tools designed for the cloud and GE getting into big-data analytics, it’s hard not to think that we’re deep in the “build-the-buzz-meaning-hype” phase of big data.  Well, did we expect the market to be rational?  After all, we’ve pretty much wrung every erg of attention we could out of cloud and SDN already.  As always, the hype cycle is helping reporters and editors and hurting buyers.

It’s not that analytics isn’t a good cloud application.  In theory, it’s among the best, according to what users have told me in my surveys.  Two years ago, analytics was the only horizontal application that gained significant attention as a target for the cloud.  But today, analytics is back in the pack—having lost that early edge.  The challenge is the big-data dimension, and that challenge is also the major problem with cloud adoption overall.

If you look at cloud service pricing, even the recent price reductions for data storage and access don’t change the fact that data storage in the cloud is very expensive.  So given that, what’s all this cloud-big-data hype about?  Hype, for one thing, but leaving that aside there is a factual reality here that’s being swept (or, better yet, washed) under the rug.  We really have two different clouds in play in today’s market.  One is the resource cloud which we have long been touting.  Resource clouds are about hosting applications.  The second is the information cloud, which has nothing to do with resources and everything to do with processing architectures for managing distributed data.  I would submit that Hadoop, the archetypal “cloud” architecture, is really an information cloud architecture.  Further, I’d submit that because we don’t acknowledge the difference, we’re failing to support and encourage the developments that would really help big data and analytics advance.

In many ways, what the market is looking for is the long-sought “semantic web”, the superInternet that somehow understands critical data relationships (the “semantics”) and can exploit them in a flexible way.  We’ve not managed to get very far in the semantic web even though it’s been nearly 50 years since the idea emerged, but if we really want to make big data and analytics work, we need to be thinking about semantic networks, semantic knowledge storage, and semantics-based analytics that can be distributed to the data’s main storage points.  It’s superHadoop to make the superInternet work.

We have a lot of business problems that this sort of semantic model could help to solve.  Optimization of networking or production decisions, the target of some of the recent big-data announcements, is an example of the need.  Simple Dijkstra algorithms for route optimization are fine if the only issue is network “cost”.  Add in optimal server locations based on usage of a resource pool, policies to limit where things are put for reliability or performance reasons, and information availability and you quickly get a problem that scales beyond current tools.  We could solve that problem with superHadoop.  We might even be able to evolve Hadoop into something that could solve this problem, and more, if we focused on Hadoop as an information cloud architecture.
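To show how modest the starting point is, here’s a plain Dijkstra sketch in Python over a toy topology; it’s perfectly adequate when link “cost” is the only criterion, and it’s exactly the kind of tool that placement, policy, and information-availability constraints quickly outgrow.

import heapq

def dijkstra(graph, source):
    # graph: {node: [(neighbor, cost), ...]}; returns least-cost distance to each node.
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue
        for nbr, cost in graph.get(node, []):
            nd = d + cost
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                heapq.heappush(heap, (nd, nbr))
    return dist

# A toy metro topology with link costs.
topology = {
    "edge-1": [("agg-1", 2), ("agg-2", 5)],
    "agg-1":  [("core", 3)],
    "agg-2":  [("core", 1)],
    "core":   [],
}
print(dijkstra(topology, "edge-1"))  # {'edge-1': 0, 'agg-1': 2, 'agg-2': 5, 'core': 5}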

Do you believe in the future of the cloud?  If you do, then there may be no single thing that could be done to advance it as far as the simple separation of the information and resource clouds.  We need an IT architecture, a semantic information architecture, for the new age.  Yes, it will likely enhance the resource cloud and be enhanced by it in return, but it’s an interdependent overlay on the resource cloud, the traditional cloud, and not an example of how we implement it.  Big data and analytics problems are solved by the information cloud, not the resource cloud, and what we need to be doing now is recognizing that and looking for information-cloud solutions that go beyond the Hadoop obvious.

Can Alcatel-Lucent Steer the “Shift” Course?

Alcatel-Lucent announced its new strategy, and frankly I was disappointed in the articulation—or at least the amount of stuff that got articulated.  The “Shift” plan to me so far states the obvious, and that makes it less obvious whether Alcatel-Lucent really has a long-term strategy that can save it.

If you look at the high-level comments made, the sense of the moves Alcatel-Lucent intends is clear.  They’re going to trash R&D and sales emphasis on everything except ultra-broadband and IP, and in particular they are going to focus on “cloud infrastructure”.  This is the only possible strategy to address the declining differentiation in the networking space, and the accompanying price competition and margin erosion.  Get out of the commodity space and into something that’s not a commodity.  Seems sensible.

It’s not, because every sector in networking is circling the same drain; some are just deeper in the bowl than others.  Yes, Alcatel-Lucent can jump out of the spaces where they’re inevitably going to lose to Huawei and/or ZTE, but all that will do is buy them a little time.  The real question has never been whether the company would stubbornly try to defend its broad collection of product lines in the face of relentless commoditization, it’s been whether the company knew how to avoid commoditizing everything.  So far, they’re not giving us enough to judge whether they have that vision.

“The cloud” is probably the thing that Alcatel-Lucent executives would point to as their touchstone for future profit, and if we define the cloud as being the union of information technology and networking, they’re right.  What else is there, after all?  However, neither IT nor networking are all that financially healthy.  With CIOs reporting to CFOs in corporate organization charts, we’ve acknowledged networking as a premier cost center, not an innovation center.  In any event, it’s not likely that Alcatel-Lucent will try to get into computing.  Given that, just what is it that they could do in “the cloud”?

I think the only answer is the thing I’ve been calling the “supercloud”, the cloud created not by attempting to do hosted server consolidation but by the evolving needs of mobile broadband users.  “Point-of-activity empowerment” is the goal of everyone these days, and while we’re demonstrably meeting the early demand in that space we’re clearly not thinking much about evolution.  Absent a credible evolutionary strategy, mobile broadband is just another source of pressure for cheap bits, which means cheap networks.

Alcatel-Lucent knows something about this space.  Their concept of the “high-leverage network” is spot-on in that it expresses the fact that for network vendors to prosper, network investment has to prosper.  Leverage is a good way of describing that; you “leverage” your investment in many ways.  Pushing bits to support five different applications isn’t five leverage strategies, either.  It’s one strategy—pushing bits.  That means that service intelligence is mandatory for Alcatel-Lucent, and for the industry.  How do we get it?

Traditional cloud means hosting on the network.  That’s back to pushing bits, network-wise.  We have to think about creating things in the network that can differentiate and revalue it.  Even here, Alcatel-Lucent has past credible positioning in their APIs.  But they’ve not been able to make something happen with APIs and they’ve recently begun to pull back on some of their previous strategies—selling the API warehouse ProgrammableWeb, which they’d bought with great PR fanfare not that long ago.

Alcatel-Lucent still has the right stuff, tech-wise.  What they don’t have is an inspirational articulation of the vision of the network of the future and a clear map as to how their pieces get a network operator to that promised land.  It’s almost like they want to educate buyers and not inspire them, and sadly that doesn’t work for revolutions.  So effective marketing/positioning is a must for Alcatel-Lucent’s survival.

So, IMHO, is NFV.  This concept is, I think, bigger than even its own body acknowledges.  It’s a way to create a model for deployment and operationalization of the supercloud of the future.  That’s not the only ingredient for that cloud (you need a functional architecture to define what it can do and how you build it), but deploying something is obviously a prerequisite for using it.  Here is where Alcatel-Lucent needs to focus in the near term; here is where they will likely stand or fall in a technical sense.  It’s a bully pulpit for a concept that Alcatel-Lucent needs badly.

The big challenges that convergent supercloud-NFV has to face are less about deployment (which is an application of optimization and DevOps principles, complicated but understood) than about management and function modeling.  We need to be thinking about how network/service/cloud features are best conditioned for being deployed in a supercloud.  Operationalization is also key.  Flexibility often comes at the price of complexity, and that can quickly escalate to eat any possible profits from service deployment.  Operators understand operations, but they understand it in the old five-nines-regulated-monopoly framework.  Not everything can be ad-sponsored, and so the secret weapon of operators is that they know how to sell services people pay for.  They just have to figure out how to build new ones to sell, and how to operationalize that new service model in a suitable way.  And that’s what Alcatel-Lucent has to do, or its “Shift” won’t be a shift, but the beginning of a slow retreat.

Will IBM and Amazon Show Red Hat the Way to the Cloud?

Today we have two pieces of cloud-market change, and as we’ll see, it’s important to consider the two as parallel developments.  One involves Red Hat, and the other IBM.

A while back, I commented that Red Hat’s absence from the cloud space was puzzling from a strategy perspective and perhaps harmful to the company’s prospects.  Well, they’re fixing that by creating some packages based around their Red Hat Enterprise Linux (RHEL) and OpenStack.  What distinguishes Red Hat’s approach is that they’re integrating the cloud into their commercial Linux platform, one that’s been hardened to support high-value applications and even service provider missions.  They’re also likely going to harden OpenStack too, and provide the combination with solid professional support.  The question is whether they can create a differentiable model at this point in the market.

That’s also a question for IBM, who recently appealed a government decision to base the CIA cloud buy on Amazon rather than IBM despite the fact that IBM was cheaper.  The reason was that Amazon offered more “platform services” that added value and facilitated integration of the cloud and web-based applications and services.  In effect, the GAO review said that Amazon was more of a PaaS than IBM, which implies that PaaS is better than IaaS even if it’s a bit more expensive.

The PaaS/IaaS wars don’t directly impact Red Hat because it’s generally accepted that IaaS platforms are the most logical foundation for PaaS.  The problem is indirect; PaaS is “better” than IaaS because it can displace more cost in a public cloud, offers potentially better application development options in a private cloud, and above all has features—those platform services—that can differentiate a PaaS offering.  If PaaS has more features and more differentiators than IaaS on the service side, it has them at the platform level as well.

Which is what Red Hat now has to consider.  It has two cloud offerings on the table now.  One is aimed at large enterprises and public providers and focused on building big, efficient, clouds.  The other is for your typical hybrid-cloud enterprise.  Both missions could use a dose of platform services, as Amazon has proved.  What services will Red Hat settle on?  The initial announcement of Red Hat’s cloud was accompanied by a storage-integration announcement.  Red Hat’s storage will support OpenStack’s Block Storage (Cinder), Image Service (Glance) and Object Storage (Swift), which is good but not enough.  Red Hat will have to settle pretty quickly on a DevOps standard as well, and I expect something will come out in that area.  Similarly, I think Red Hat will be developing its own Quantum approach.  While all of this brings the major interfaces of OpenStack under one company roof, it doesn’t necessarily raise the bar for PaaS.

At this point in cloud evolution, I think Red Hat needs a bully pulpit, a specific mission that will drive its cloud.  In the cloud provider/carrier space that mission could be Network Functions Virtualization, and in the hybrid cloud area it could be supporting multi-dimensional elasticity in business applications.  Best of all, these missions could be converging on a single architecture.

NFV is a framework for hosting network features in the cloud (though the body is reluctant to accept that the cloud is the only platform that’s suitable).  This activity demands a combination of composition agility and operational integration that’s totally absent in the cloud today, and totally necessary if the cloud is to really support mission-critical apps.  Thus, both of Red Hat’s cloud packages could benefit from a dose of NFV-like orchestration and management.  NFV will also demand the integration of shared-function components (IMS, DNS) and per-user components (firewall, NAT) into a single service, as well as support for services that are all one or the other.  That’s a value to enterprise application architectures too, and the shared-function components might be a good way of describing platform services in general.
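Purely as an illustration (the descriptor structure and names are mine, not any standards body’s), here’s a Python sketch of that composition point: one service mixing shared-function components with per-user components, and a trivial expansion showing how differently the two scale at deployment time.

SERVICE = {
    "name": "managed-business-internet",
    "components": [
        {"function": "dns",      "scope": "shared",   "note": "one instance serves all tenants"},
        {"function": "ims-core", "scope": "shared",   "note": "one instance serves all tenants"},
        {"function": "firewall", "scope": "per-user", "note": "instantiated per subscriber"},
        {"function": "nat",      "scope": "per-user", "note": "instantiated per subscriber"},
    ],
}

def deployment_plan(service, subscribers):
    # Expand the descriptor into what actually has to be hosted and operated.
    plan = []
    for c in service["components"]:
        count = 1 if c["scope"] == "shared" else subscribers
        plan.append((c["function"], c["scope"], count))
    return plan

for function, scope, count in deployment_plan(SERVICE, subscribers=3):
    print(f"{function:10} {scope:9} x{count}")

The same descriptor could just as easily describe an enterprise application mixing shared platform services with per-department components, which is the convergence argument above.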

That, I think, is the key to Red Hat’s success, because Amazon/IBM just proved it to be.  PaaS differs from IaaS in the inclusion in the former of platform services.  Quantum and Glance and Cinder and Swift are precursors of what must become a wide range of platform services, some of which will be generally useful and some of which will be more mission-specific.  Obviously Red Hat can’t support them all, but they could build a framework for their support through the support of NFV.  Right now there’s no real articulated computer-vendor cloud-stack support for NFV out there—nor of course much more than a few blown kisses from other types of vendors, in fact.  This is a perfect opportunity to recognize NFV for what it is—the on-ramp to a future supercloud architecture.

I’m not seeing an NFV behind every bush here, I’m simply saying that NFV is one game that has almost universal potential, and you need that if you’re a late-comer.  You’ve got to jump ahead of the pack if you enter the market late.  Red Hat has definitely done the late-entry part and now has to get to the jumping part.  The evolution of application software demands a PaaS architecture that makes cloud services explicit platform services.  NFV provides a mechanism to build platform services.  QED.

What Does Telefonica Have that AT&T (Might) Want?

One of the more interesting M&A rumors is the story that AT&T had made a bid for Spanish telecom giant Telefonica, a move that was blocked (says the rumor) by the Spanish government.  Telefonica has since denied any overtures were made, and it seems likely that one or the other of these negatives would be enough to make a deal doubtful.  Still, one must ask “Why?”  Is such a deal logical under any circumstances?

In fundamental terms, Telefonica doesn’t seem to be exactly the poster child for outside M&A interest.  Spain is in the midst of a truly bad economic slump, and while telecommunications isn’t typically hit as hard during these slumps as other industries, there’s no question that the telecom giant is burdened by debt (as is Spain at large).

It’s also true that the EU telecom landscape has been replete with examples of Eurotelecom giants running to emerging markets to invest because margins and ROI are too low in their home territory.  That’s been much less a problem for US telecom companies, so it’s hard to see what AT&T would see in a Telefonica buy.  They’d likely do better in ROI terms by investing at home.

OK, then, what’s the benefit that might be driving AT&T’s interest?  I think it’s Telefonica Digital.  Somebody asked me recently who was, in my judgment, the most innovative of the carriers in facing the future and my response was “Telefonica”.  It’s not likely that AT&T would want Telefonica’s business, but they might want the innovation.  And they might yet get it, in one way or another.

Telefonica Digital has done a couple of fairly impressive things.  First, just by being there it’s an example of the first critical step a telco has to take to be a player in a modern notion of services.  You can’t make Bell-heads into Net-heads, so you have to keep the two partitioned organizationally so you can create a culture for the latter and offer a career path that would make sense.  Second, Telefonica Digital has come closer to framing a credible service-layer strategy than anyone else, even an equipment vendor.  Third, Telefonica Digital has been a leader in targeted OTT-modeled services, in critical areas like health care and digital payments.  They’ve made strides in digital content, and they’re big in Brazil and Latin America in general, a space that some US operators covet.

US operators are trying to become players in next-gen services, but the culture barriers to success have been formidable.  Arguably, the history of being a public utility has been the greatest political barrier and the barrier posed by integration of new services with OSS/BSS has been the most difficult technical barrier.  Telefonica has alleviated the first with its spin-out of Digital, and the story is that they’re working on the second as well.  If that’s the case, then they could be a highly valuable commodity.

AT&T is perhaps under the most pressure of the US operators.  They have a lower economic density than rival Verizon.  They also have a smaller percentage of enterprise headquarters sites than Verizon, and it’s the HQ that makes business network buying decisions.  Their business sites, overall, are less likely to be cloud compute candidates, which hurts that side of AT&T’s business service plans.  In short, they could use some insight into OTT service creation and the cloud, and that’s something that Telefonica could bring.

Or something that Telefonica Digital could bring.  If AT&T can’t buy Telefonica, could it do a deal for Telefonica Digital?  Could, in fact, that be the deal that AT&T’s been interested in all along?  Nobody would likely think that Spain would approve the sale of its national carrier.  But would Spain go along with a deal to sell an OTT subsidiary of that national carrier when the sale might well keep Telefonica solvent?  I think they might.  They would obviously have no problem with a deal that would license Telefonica Digital work to AT&T on a non-exclusive basis or create some other digital-layer partnership.

We may be seeing the first step in what will become a land rush as operators attempt to use M&A to gain more than just a tech nibble here and there.  The clock is running down on network profitability.  We have little time to frame a new business model, particularly considering that capex through 2016 will likely be up as operators try to use their last gasp of financial latitude to prep for the future.  M&A is a darn good way to get in the game again.