Reading into Alcatel-Lucent’s ProgrammableWeb Decision

Alcatel-Lucent has been in many ways the leader among network equipment vendors in service-layer strategy.  Their notion of the “high-leverage network” and their focus on APIs and developers for next-gen services have been, in my view, right on the money (literally).  Their positioning of those concepts, and as a result their ability to turn their vision into some bucks in the quarterly coffers, has been disappointing.  So they’ve changed their game now, starting with the divestiture of ProgrammableWeb, the API outfit they’d previously purchased to augment their developer strategy.

I’ve always been a fan of “application enablement” in Alcatel-Lucent’s terms, but I wasn’t a fan of the ProgrammableWeb thing, largely because I think it ignores the fundamental truth about developers, which is that they’re not all the same.  From the very first, network operators told me that they were less interested in exposing their own assets to developers than in creating their own service-layer assets to offer as retail features.  That requires APIs and developer programs, but below the surface of the network—down where infrastructure and service control live.  In fact, that is where Alcatel-Lucent now says it will focus.

This whole exercise demonstrates just how complicated it is for network operators and equipment vendors to come to terms with the software age.  For literally a century, we’ve built networks by connecting discrete boxes with largely static functionality.  A “service” is a cooperative relationship among these devices, induced by provisioning/management steps that are simple largely because the functional range of the devices is limited.  No matter how many commands you send a router or switch, it’s going to route or switch.  But make that device virtual, instantiate it as a software image, and now it’s a router or switch of convenience.  Add a functional element here and pull one out there and you have a firewall or a web server.  How do you build cooperative services from devices without borders?

When I go to NFV meetings or listen to SDN positioning or review my surveys of enterprises and network operators, what strikes me is that we’re all in this together, all trying to get our heads around a new age in networking, an age where role and role-host are separated, and so where role topology (the structural relationship among functional atoms in a network) isn’t the same as device topology.  Virtual networking is possible because of network advances, but it’s meaningful because of the dynamism that software-based functionality brings to the table.

There are players who purport to “get” this angle, and I’ve been talking to them too.  Any time I go to a public activity I get buttonholed by vendors who want to tell me that they have an example of this network-new-age situation.  In some ways they’re right, but there’s a difference between being a gold miner who uses geology to find the rock formations where gold deposits form and somebody who digs up a bag of gold in their back yard while planting a shrub.  Any change in market dynamic will create winners and losers just by changing the position of the market’s optimal point.  What separates these serendipitous winners and losers from the real winners and losers is what happens when the players see that the sweet spot has moved.  Do they run to the new one, or defend the old?

That’s Alcatel-Lucent’s dilemma now.  Their API strategy has been aimed at the wrong developers.  They picked up some baggage as a result, and now they’re shedding it.  Good move.  But did they pick up new insight when they dropped old baggage?  Do they understand what it means to support service-layer development now?  It’s more than saying that they’ll help the operators expose their assets, and more than saying that they’ll expose their own network APIs as assets to the operators.  What operators are saying is that they need to be able to build services in an agile way, reusing components of functionality and taking advantage of elastic pools of network-connected resources.  The future, in short, is a kind of blending of SOA principles and cloud computing principles.  When we build services, from here on out, we will be building these elastic SOA-cloud apps.

Nothing we have today in terms of infrastructure matches the needs of this future.  Static device networks won’t connect elastic application resources.  Element management systems make no sense in a world where an element is a role taken on by a virtual machine.  Blink and you’re managing a server; blink again and it’s a firewall or a CDN or maybe even a set-top box.  Schizo-management?  Provisioning means nothing; we’re assigning roles.  Service creation is software component orchestration.  The question for Alcatel-Lucent is whether they grasp where the reality of future services will be found, because if they don’t then they may have dropped the baggage of ProgrammableWeb, but they’ve picked up another heavy load that will reduce their agility and limit their ability to create relevance in a network environment that is not only changing rapidly, it’s institutionalizing a mechanism to permit changes to be completely unbridled—because that’s the goal.

But for Alcatel-Lucent’s competitors, the issue may be worse.  Alcatel-Lucent has at least shown it knows it’s bringing empty buckets to the fire and put them down.  Do Cisco or Ericsson or Juniper or NSN know that?  Is Alcatel-Lucent’s commitment to virtualize IMS, for example, an indication that they know that all network features that aren’t data-plane transport are going to be fully virtualized?  Do they know that NFV goals will eventually be met ad hoc whether the body meets them or not?  And do other vendors who have made no real progress in virtualizing anything or even talking rationally about the process have even a clue?  If they don’t, then even a detour through the backwaters of the too-high-level-API world may still get Alcatel-Lucent to the promised land before their competitors take up residence.

Two Tales of One City

The market giveth and the market taketh away, but in the main it’s probably the vendors’ own actions that make the difference.  We have an interesting proof point of that in two events yesterday—the end of the second NFV meeting in Santa Clara and the earnings call of Juniper, just down the road in the same town.  Same town but two different worlds.

NFV is a poster child for carrier frustration and vendor intransigence.  I’ve surveyed operators since 1991 and many who have followed my comments on the results will remember that about five years ago, operators were reporting themselves as totally frustrated with vendor support for the operators’ monetization goals.  Well, guess what?  They got frustrated enough to start an organization dedicated to the transfer of network functionality from devices to servers.  Nobody listened five years ago; maybe this time it will be different.

Juniper is a player that on the surface should be a major beneficiary of initiatives like NFV.  Juniper was the founder and champion of the “Infranet Initiative”, which became the “IPsphere Forum” and was later absorbed by the TMF.  This early activity wasn’t aimed at pulling functionality out of the network but rather at laying functionality onto/into it, admittedly using software and hosted elements.  Many of the agility, operationalization, and even federation needs of NFV hark back to those old IPsphere days.

But where is Juniper on NFV?  They’ve been bicameral.  The company has blogged about the topic as well as or better than anyone else in the vendor space, but in their formal positioning they have not only failed to exploit NFV (and thus to leverage the topical expertise gained from their past activities) but have actually taken NFV concepts and stuffed them into an SDN brown paper bag.  I commented at the time that this was an illogical step, and I think the explosion of interest in NFV proves that Juniper rode the wrong horse to the show on this one.

And perhaps on other things too.  The tagline to remember from Juniper’s earnings call came from Rami Rahim, who said “It’s clear that traffic is continuing to grow and this forms of course the foundation of much of our business. So it just comes down now to how much of a risk operators want to take or how hot they want to run their network before they want to invest. Clearly as long as that traffic continues to increase, which we see as increasing everywhere, that investment cycle especially in cost centers like the core will come, eventually.”  This sure sounds like “lie back and hope the money comes in”, which isn’t the kind of innovation-driven approach to the market that an aspiring player with a minority market share and a P/E multiple four times the market leader’s should be taking.

The contrast with the coincident NFV event is striking.  Let operators keep buying routers to carry traffic despite declining revenue per bit and return on infrastructure, even as those same operators convene an organization to disintermediate at least some of the devices you make.  Go figure.

NFV has its challenges, not the least being that the body is still dependent on the vendor community’s willingness to come up with solutions to fit the body’s requirements.  The goal of improving cost and agility by hosting network functions seems (and is) logical on the surface, but the devil is in the details.  If you replace a fifty-dollar residential gateway with three virtual machines and the intervening connectivity to link the functionality, you’ve likely created something whose capex/opex contribution is greater than what you started with.  It’s also not clear how functional agility offered by virtual residential gateways versus real ones would help sell new services to residential users.  Simple virtualization of networks on a device-for-device basis isn’t going to generate enough savings to matter, and the basic architecture of networks and services wouldn’t be changed.  If you’re going to do NFV, you have to do it with an eye to exploiting the cloud—which is the model of the new fusion of IT and networking.  The cloud, as a platform for applications, is an equally sound and flexible and cost-optimized platform for service components.  Because, gang, services are nothing but cooperating software application elements sitting on a network.
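
To make that capex/opex point concrete, here’s a minimal back-of-the-envelope sketch.  The only number taken from the text is the fifty-dollar gateway; every VM and connectivity cost below is a hypothetical assumption for illustration, not a measured figure.

```python
# Hypothetical comparison of a $50 residential gateway versus a per-device
# virtualized replacement built from three VMs plus chaining connectivity.
# All per-month figures are illustrative assumptions.

APPLIANCE_CAPEX = 50.00              # the fifty-dollar gateway from the text
APPLIANCE_OPEX_PER_MONTH = 0.25      # assumed power/support share

VM_COUNT = 3                         # e.g. routing, NAT, firewall as separate VMs
VM_COST_PER_MONTH = 2.00             # assumed share of server capex/opex per VM
CHAIN_CONNECTIVITY_PER_MONTH = 1.00  # assumed cost of linking the VMs

def total_cost(months: int) -> tuple[float, float]:
    """Return (appliance, virtual) total cost over the given horizon."""
    appliance = APPLIANCE_CAPEX + APPLIANCE_OPEX_PER_MONTH * months
    virtual = (VM_COUNT * VM_COST_PER_MONTH + CHAIN_CONNECTIVITY_PER_MONTH) * months
    return appliance, virtual

for months in (12, 36, 60):
    a, v = total_cost(months)
    print(f"{months:3d} months: appliance ${a:7.2f}  virtual ${v:7.2f}")
```

Unless the per-VM and connectivity costs are driven very low, the virtual version comes out worse, which is exactly the point about device-for-device virtualization not generating enough savings to matter.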

Everything that Juniper, or any other vendor, would need to fully realize the vision of NFV (even before the body is able to fully articulate how they expect that to work) is in the marketplace today in the form of proven technology.  Every insight needed to make network equipment into an optimum platform for the new network, the network the operators need to be profitable and continue to buy hardware, is not only visible but glaringly obvious.  That there were over 200 attendees at the NFV meeting suggests that carriers are committed and vendors are at least wary that the concept might just happen.  It will, because it has to.  It’s just a question of who it will happen with, and Juniper will have to take its eyes off the bits to smell the NFV roses.  So will everyone else.

Out with the Real, In with the Virtual

The attendance at the NFV meeting in Santa Clara seems a pretty solid indication that NFV has arrived in terms of being of interest.  It’s not a surprise given the support that’s obvious among the big network operators.  They run the meetings and are driving the agenda, an agenda that’s also clear in its goal of shifting network features from costly specialized devices to open platforms.

A move like that has its challenges.  We don’t engineer general-purpose servers to be either stellar data movers or high-reliability devices.  There is interest among both operators and vendors, right down to the chip level, in improving server performance and reliability, but the move can only go so far before it threatens to reinvent special-purpose network devices.  Every dollar spent making a COTS server more powerful and available makes it less COTS, and every dollar in price increase reduces the capital cost benefit of migrating something there.

I think it’s pretty obvious that you’re not going to replace nodal devices with servers; data rates of a hundred gig or more per interface are simply not practical without special hardware.  We could perhaps see branch terminations in server-hosted virtual devices, though.  How this limitation would apply in using servers to host ancillary network services like NAT or firewall is harder to say because it’s not completely clear how you’d implement these functions.  While we might ordinarily view a flow involving firewall, NAT, and load-balancing as being the pipelining of three virtual functions, do we actually pipe through three or do we have one virtual device that hosts them all with the pipeline managed only at the software level?  The latter seems more likely to be a suitable and scalable design.
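
Here’s a minimal sketch of that second option, the “one virtual device, software-managed pipeline” approach, assuming a trivial packet model and hypothetical firewall/NAT/load-balancing rules.  The point is simply that the chaining can live entirely inside one host, with no network hops between functions.

```python
from dataclasses import dataclass
import itertools

@dataclass
class Packet:
    src: str
    dst: str
    port: int

BLOCKED_PORTS = {23}                                      # firewall policy (assumed)
NAT_MAP = {"10.0.0.5": "203.0.113.7"}                     # inside -> outside (assumed)
BACKENDS = itertools.cycle(["192.0.2.10", "192.0.2.11"])  # load-balancer pool (assumed)

def firewall(pkt):
    # drop packets headed to blocked ports
    return None if pkt.port in BLOCKED_PORTS else pkt

def nat(pkt):
    # translate the inside source address if we know it
    pkt.src = NAT_MAP.get(pkt.src, pkt.src)
    return pkt

def load_balance(pkt):
    # round-robin the destination across the backend pool
    pkt.dst = next(BACKENDS)
    return pkt

PIPELINE = [firewall, nat, load_balance]   # the "pipeline" is pure software

def process(pkt):
    for stage in PIPELINE:
        pkt = stage(pkt)
        if pkt is None:                    # dropped by an earlier stage
            return None
    return pkt

print(process(Packet("10.0.0.5", "198.51.100.1", 443)))
print(process(Packet("10.0.0.5", "198.51.100.1", 23)))    # blocked -> None
```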

Availability issues also have to be looked at.  You can’t make a COTS server 99.999% available, but you could make multiple parallel hosts that available.  The challenge is that the combination wouldn’t be available in the same way as our original five-nines box.  A packet stream might be load-balanced among multiple interfaces to spread across a server complex, but unless the servers are running in parallel the result will still be at least a lost packet or two if one unit fails and you have to switch to another.  That wouldn’t happen if you were five-nines and didn’t fail in the first place.  As I said, it is possible to build a virtual application that has the same practical failure-mode characteristics and availability, but again you’re forced to ask whether you need to do that.  Do even modern voice services have to meet traditional reliability standards given how much voice is now carried on a best-efforts Internet or a mobile network that still creates “can you hear me?” issues every day at some point or another?  We’ll have to decide.
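
The parallel-availability arithmetic is worth a quick check.  Assuming (hypothetically) a three-nines COTS server and the usual independence math, 1 - (1 - a)^N, two hosts already beat five nines on paper, which is exactly why the real question becomes how the combination fails rather than whether the math works.

```python
# Combined availability of N independent hosts, each with availability a,
# for "at least one host up": 1 - (1 - a)^N.  The 99.9% per-host figure
# is an assumption for illustration.

def combined_availability(per_host: float, hosts: int) -> float:
    return 1.0 - (1.0 - per_host) ** hosts

PER_HOST = 0.999    # assumed three-nines COTS server
for n in (1, 2, 3):
    print(f"{n} host(s): {combined_availability(PER_HOST, n):.7f}")

# Two three-nines hosts give 0.999999 on paper (better than five nines),
# but a failover still drops in-flight packets, which a single five-nines
# box would not have done.
```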

Security may or may not be an issue with hosted functions, including hosting the SDN control plane.  If we assume that virtual functions are orchestrated to create a service, there are additional service access points created at the boundaries and these could in theory be targets of attack.  However, you can likely protect internal interfaces among components pretty easily.  A more significant concern is what I’ve called the DoR or Denial of Resources attack, which is an attack aimed at loading up a virtual function host with work in one area to force a failure of another service being hosted there.  If you can partition resources absolutely, this isn’t a significant risk either.

One area that could be a risk is where a data-plane interface can force a control-plane action and a function execution.  The easiest example to visualize is that of the SDN switch-to-controller inquiry when a packet arrives that’s not in the forwarding table.  The switch has to hand it off to Mother, and if you could force that handoff at a high rate by sending a lot of packets that don’t have a forwarding entry in a short period, you might end up loading down the controller or the telemetry link.
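
Here’s a sketch of one obvious mitigation, assuming a generic table-miss hook rather than any particular controller’s API: rate-limit the packet-in path with a token bucket so a flood of unmatched packets degrades locally instead of taking down the controller or the telemetry link.  The rates are illustrative.

```python
import time

class PacketInLimiter:
    """Token-bucket limiter on switch-to-controller inquiries (rates assumed)."""
    def __init__(self, rate_per_sec: float = 100, burst: int = 200):
        self.rate, self.capacity = rate_per_sec, burst
        self.tokens, self.last = float(burst), time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

limiter = PacketInLimiter()

def on_table_miss(packet) -> str:
    # hypothetical hook called when a packet has no forwarding entry
    if limiter.allow():
        return "send inquiry to controller"     # the normal packet-in path
    return "drop or default-forward locally"    # protects controller under flood
```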

I don’t think that virtual function or SDN security is going to be worse on the whole, but it will almost surely be different.  Same with availability and even performance.  There are things we can do in a hosted model that we can’t do in an iron-box model, after all.  Even if, as seems likely for migration/transition reasons, NFV first defines a network of virtual devices that mirrors the network of real devices, it can evolve to one where all network functions would appear to be performed by a single virtual superdevice.

That has operational issues of course.  If your goal is to evolve from a real-box network, you’ll likely need your virtual boxes to mirror the real ones even at the management interface level.  But you can’t let yourself be deluded into tracking failure alerts on virtual devices and dispatching real field techs to fix them!  A virtual device is “fixed” by instantiating it again somewhere else, and it might well be that this is done automatically without reporting a fault at all.  It probably should be.  And remember that if we have one virtual device doing everything, we have only one management interface and less management complexity!
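
A minimal sketch of that operational model, with a hypothetical host pool and health check: the “repair” is just a redeployment somewhere else, and no trouble ticket or truck roll is involved.

```python
import random

HOST_POOL = ["host-a", "host-b", "host-c"]     # assumed pool of servers

def is_healthy(instance: dict) -> bool:
    # stand-in for real heartbeat/telemetry checks
    return instance.get("alive", False)

def instantiate(role: str, exclude: str = "") -> dict:
    # the "repair": deploy the role on another host, no fault ticket raised
    host = random.choice([h for h in HOST_POOL if h != exclude])
    print(f"re-instantiating {role} on {host} (no field tech dispatched)")
    return {"role": role, "host": host, "alive": True}

def remediate(instance: dict) -> dict:
    if is_healthy(instance):
        return instance
    return instantiate(instance["role"], exclude=instance["host"])

vfirewall = {"role": "firewall", "host": "host-a", "alive": False}
vfirewall = remediate(vfirewall)
```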

The point is that the virtual world is different in that it’s whatever you want it to be.  Any kid who ever daydreamed knows that.  We’ll learn it in the real world too.


Where Now, NFV?

The majority of the current network hype has been focused on SDN, and either despite the attention or because of it, SDN hasn’t garnered much substance beyond the hype.  We have so much SDN-washing that it’s hard to see what’s even being washed any more.  Laundry fatigue in tech?  A new concept at last!

NFV is a newer concept than SDN, and one that so far doesn’t have a show with vendors exhibiting and issuing press releases.  There are vendors who are voicing support for NFV (Intel did so just last week) but so far the claims are muted and even a bit vague.  The second of the NFV global meetings is being held this week, and before the meeting may be a good time to review the issues the body will have to address.

The goal of NFV is to unload features from network appliances, presumably including even switches and routers, and host them in generic server farms.  This, operators hope, will reduce costs and help the operators overcome the current problem of profit squeeze.  It’s also hoped that the architecture that can support this process, which is where “network function virtualization” comes from semantically, will provide a framework for agile feature creation.  That could make operators effective competitors in a space that’s now totally dominated by the OTT and handset players.

A virtualized anything still has to find its way back to reality, obviously.  You start by defining a set of abstractions that represent behaviors people are willing to pay for–services.  You then decompose these into components that can be assembled and reassembled to create useful stuff, the process that defines virtual functions or a hierarchy of functions and sub-functions.  These atomic goodies have to be deployed on real infrastructure—hosted on something.  Once they’re hosted, they have to be integrated in that there has to be a mechanism for the users to find them and for them to find each other.  Finally, workflow has to move among these functions to support their cooperative behavior—the behavior that takes us back to the service that we started with.
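
Here’s a compact sketch of that chain, using a hypothetical service model and registry: a service decomposed into functions and sub-functions, deployed onto hosts, registered so the pieces can find each other, and then exercised by a workflow.  Names and endpoints are illustrative only.

```python
# Hypothetical decomposition of a service into functions/sub-functions,
# followed by deployment, integration (registration), and a workflow step.

SERVICE_MODEL = {
    "service": "managed-firewall",
    "functions": [
        {"name": "fw-dataplane", "sub": ["filter", "nat"]},
        {"name": "fw-policy",    "sub": ["rule-compiler"]},
    ],
}

REGISTRY = {}   # the "integration" step: where functions find each other

def deploy(function: dict, host: str) -> str:
    endpoint = f"http://{host}/{function['name']}"   # hypothetical address
    REGISTRY[function["name"]] = endpoint            # register for discovery
    return endpoint

def deploy_service(model: dict, hosts: list[str]) -> None:
    for fn, host in zip(model["functions"], hosts):
        deploy(fn, host)

deploy_service(SERVICE_MODEL, ["server1.example.net", "server2.example.net"])

# workflow: one function looks up and (conceptually) invokes another
print("fw-policy pushes compiled rules to", REGISTRY["fw-dataplane"])
```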

NFV, as a body, has to be able to define this process from start to finish to create NFV as a concept.  What, for example, are the services that we propose to address?  One example already given is the firewall service; another is content delivery network (CDN) services.  Even at this point, we have potential issues to address.

Firewalls separate endpoints from traffic by creating a barrier through which only “good” stuff can pass.  It follows that they’re in the data flow for the endpoints they serve.  So does this mean that we feed every site through a software application that hosts selective forwarding?  That might be practical up to a point, but servers aren’t designed to be communications engines operating at multi-gigabit speeds.  Intel clearly wants to make it possible, but is it practical, or should we be thinking about having a kind of switch-like gadget that does the data plane handling and is controlled by a virtual function that needs only to process rule changes?  Good question.
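
A sketch of that second option, assuming a hypothetical “gadget” API: the hosted virtual function handles only rule changes, and a switch-like data-plane element does the actual forwarding.

```python
class DataPlaneGadget:
    """Stand-in for hardware that forwards packets against a rule table."""
    def __init__(self):
        self.rules = []          # ordered (match, action) pairs

    def install_rule(self, match: dict, action: str) -> None:
        self.rules.append((match, action))

class VirtualFirewallFunction:
    """Hosted control element: never touches packets, only policy."""
    def __init__(self, gadget: DataPlaneGadget):
        self.gadget = gadget

    def apply_policy(self, policy: list[dict]) -> None:
        for rule in policy:
            self.gadget.install_rule(
                match={"dst_port": rule["port"]},
                action=rule["action"],
            )

gadget = DataPlaneGadget()
vfw = VirtualFirewallFunction(gadget)
vfw.apply_policy([{"port": 23, "action": "drop"},
                  {"port": 443, "action": "allow"}])
print(gadget.rules)
```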

Even higher up the tree in the conceptual sense is what we’re serving here.  If we need to have endpoints supported by firewalls it follows that we need some conception of an endpoint.  Who owns it, how is it connected in a protocol sense, how is it managed and who’s allowed to exercise management, what functions are associated with it (like firewalls)?  In software terms, an endpoint is an object and an enterprise network is a collection of endpoints.  Who owns/hosts the object that represents each endpoint, and who owns the association we’re calling an “enterprise network”?

We can do the same thing with CDNs.  We have a concept of a CDN service as something that delivers a content address (presumably that of an optimized cache location) to a user in response to the user having clicked on a URL.  One element of this, obviously, is that we have to resolve URLs, which is a DNS function.  Do we have a special DNS for this?  Does every user have their own “copy” or “instance” of DNS logic?  Remember, in enterprise firewall applications we likely had an instance of the app for each user site.  It’s not likely that will scale here.  Also, the DNS function is a component of many applications; is it shared?  How do we know it can be?  Is “caching content” different from storing something in the cloud?  How do we integrate knowledge of whether the user is an authenticated “TV Everywhere” client allowed to access the video?  Obviously we don’t want to host a whole customer process for every individual customer; we want to integrate an HSS-like service with DNS and storage to create the CDN.  That’s a completely different model, so is it a completely different architecture?  If so, how would we ever be able to build architectures fast enough to keep pace with a competitive market?
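
To make the shared-DNS question concrete, here’s a sketch of the request-routing step under the assumption that a single shared resolver serves every user, choosing a cache by the client’s metro.  The cache list, client mapping, and selection logic are all illustrative.

```python
# One shared resolver instance serving all users, rather than per-user copies.

CACHES = {                       # cache node -> (metro, capacity weight)
    "cache-nyc": ("new-york", 1.0),
    "cache-chi": ("chicago", 0.7),
    "cache-sfo": ("san-francisco", 1.0),
}

CLIENT_METRO = {"198.51.100.10": "new-york", "203.0.113.99": "chicago"}

def resolve_content(url: str, client_ip: str) -> str:
    """Return the cache that should serve this client's request."""
    metro = CLIENT_METRO.get(client_ip)
    # prefer an in-metro cache, fall back to the highest-weight cache
    for cache, (cache_metro, _) in CACHES.items():
        if cache_metro == metro:
            return cache
    return max(CACHES, key=lambda c: CACHES[c][1])

print(resolve_content("http://video.example.com/clip.mp4", "198.51.100.10"))
print(resolve_content("http://video.example.com/clip.mp4", "203.0.113.99"))
```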

You can see that I’ve filled a whole blog with questions about two possible applications in the first of five stages of execution.  That’s the NFV challenge, and it’s a challenge that only a great architecture can resolve.  So that’s what everyone—you and me and all the operators and vendors—needs to be looking for out of meetings like this week’s session.  Can NFV do a great architecture?

If they fail, does NFV fail?  Not likely.  There are way too many players behind this.  We may have a failure of process—what carrier standards group in the last decade has produced a commercially and technically viable standard in time to be useful—but we’ll have a result produced by the market’s own natural competitive forces even if we don’t create one by consensus.  I’d sure like to see consensus work here, though.  It would be a healthy precedent in an industry that needs collective action to face formidable challenges.


Is IBM Presaging the Death of Strategic Thinking?

IBM delivered a rare miss in their quarterly numbers, and a significant one at that.  While the company seemed to focus on execution issues and delays in getting contracts signed rather than the usual macro-economic conditions tech vendors like to blame, I think the problems are deeper for IBM.  And for the rest of the space.

From the first, at least according to our surveys that began in 1982, IBM has led the vendors in the strategic influence they exercise on customers.  In the last decade, though, IBM has steadily declined in influence.  They opened this century with enough influence, from mid-sized businesses up through enterprises, to drive projects over even combined opposition.  Now they can barely drive them without opposition, and of course opposition is mounting.  Worse for IBM, their influence is concentrated almost completely in the enterprise space.  You can see that in their numbers; hardware sales of mainframes were strong and everything else was weak.  “Everything else” represents the hardware classes that are suitable for the larger and broader market.

What’s responsible for this?  IBM, once the bastion of business-speak, is now seen as being able to articulate its message only to professional IT cadres.  Integrators told us that IBM can’t address the SMB space at all, even in marketing/advertising terms.  Absent marketing articulation, nobody can do anything at the sales level except play defense, and that’s what’s happening.  And you always lose eventually by playing defense.

Digging into IBM’s numbers and their call, you also catch the disquieting truth that some of the key components of their software strategy are losing steam.  Lotus, of all groups, turned in the best growth for them.  WebSphere, which absolutely has to be the framework whereby IBM would introduce new productivity paradigms, once grew at better than 20% and is now into single digits.  But it was hardware that dragged IBM down; only mainframes were above water for the quarter, and Lenovo has confirmed that IBM is looking to sell off its whole x86 server business.  Too much competition, too little profit.

Frankly, this kind of quarter makes me wonder yet again what IBM is doing backing OpenDaylight.  You can’t make money selling hardware, says IBM’s quarter.  You certainly can’t make money selling open-source software.  So do they plan to link a losing hardware business to free software to boost profits?  Somehow that seems illogical.  Do they plan to do what I suggested regarding OpenDaylight, which is to commoditize the lower part of the SDN market and focus on building the upper layers?  If they do, they need to have a lot more value to offer above the SDN controller, and I’m not sure where they think they’ll get it.  NFV is an example of a clear new-server-and-software application area, and yet Intel seems more aggressive with SDN/NFV positioning than IBM is.

For the industry, I think this should be a wake-up call, which is a good thing if true because as an industry we’ve been stuck in the cost-reduction stupids.  CEOs and CFOs of the tech world, unite!  You have only your profits and reputations to lose!  What happens to a company that can sell only by promoting cost reductions versus prior product generations?  They sell less, of course.  That means they start missing their quarters.  I’ve been harping on the fact that since the literal dawn of tech, we have had regular cycles of IT spending growth that corresponded to new productivity paradigms that created new IT benefits.  We had them, that is, until about 2002, when the cycles stopped.  They’ve not restarted since.  This is the problem that the IBM of old could have solved.  This is a problem that any respectable IBM competitor of old would have jumped on had IBM somehow missed the boat.  Nobody’s jumping; we’re still stuck in TCO-neutral here, promoting the notion that there’s nothing new and useful computers can do, so every technology enhancement has to lower costs.  It’s not taking us to a happy place, it never could have, and so I’m tired of everyone griping about stagnant sales.  If you don’t like them, get off your ass and come up with a value proposition other than “spend less”.

This has to be a critical issue for Cisco, too.  Microsoft, despite the issues it has, beat its estimates slightly.  Oracle, another rival, has a stronger software position and thus isn’t as exposed to the whole hardware problem.  Software, remember, is the link between humans and IT; people don’t have disk interfaces, so we need productivity intermediation that only software can provide.  Oracle bought Acme, so might they be getting ready to buy into some new UCC-based productivity thing?  Could be, and if so, where does that leave Cisco’s server side?

And Cisco’s network, of course.  I’m not among those who believe that SDN is going to destroy the network vendors.  I’m a believer that their focus on TCO is doing a great job of destruction on its own and doesn’t need help from the tech side.  SDN is an opportunity for network vendors, a way of creating a framework for point-of-activity empowerment that could represent that needed and long-delayed benefit driver for a new spending cycle.  But all these guys are playing SDN defense, linking it to operations cost management, which gets us back to lowering spending, bad quarters, and maybe new management teams.  And Cisco’s not the only vendor in this boat; every player who wants to sell to the enterprise has to either offer better benefits to drive higher spending or (surprise, surprise!) accept lower spending.

Businesses buy IT and networking because it makes people more productive.  The more productivity you drive, the more they can spend—on you.  Seems simple to me, but apparently a lot of senior management in the vendor space is finding it too complicated to deal with.  Maybe some new management teams really are in order here.

What Might Intel’s Open Network Platform Mean?

There’s a clear difference between dispatching an ambulance to an accident scene and chasing one there, as we all know.  There’s also a difference between a company reacting opportunistically to a market trend and a company actually shaping and driving that trend.  Sometimes it’s hard to tell the difference in this second area, and so it is with Intel’s announcement of a reference implementation of a server architecture for networking.

Trends like the cloud and SDN and NFV are driving servers into a tighter partnership with networking.  I’ve been saying for months that the future of IT was going to be created by the shift from IT as a network service access point to IT as a network component.  That’s what the cloud really means.  And Intel seems to know that, whatever is driving their interest, because they’re participating and not just product-washing.  In the NFV space, for example, they’re a more thoughtful and active participant than most of the network equipment vendors.

Intel’s Open Network Platform reference design includes the Wind River Open Network Software suite and a toolkit for creating tightly integrated data-plane applications.  The platform will implement an open vSwitch, and the toolkit means that other vSwitch architectures, including we think the Nuage/Alcatel-Lucent one, could be implemented easily on the platform.  So at a minimum, Intel may be voting with its R&D and positioning dollars that things like SDN and NFV are real.  At best, it might be taking steps that will actively drive the process.

One of the most important points I cited in Alcatel-Lucent’s Nuage launch was that the new SDN model the two companies promote is an end-to-end hybrid of virtual-overlay networking (what I’ve called “software-defined connectivity”) and real device-SDN networking.  The Intel platform seems to encourage the creation of a new model of highly functional virtual device, one that could form the branch edge of a network as easily as the server side.  This model would encourage the creation of application VPNs or what Alcatel-Lucent calls “software-defined VPNs”.  It could be deployed by enterprises, and also by network operators, and it could be linked by common central control down to policy-based or even route-based special handling at the traffic-connection level.

Perhaps the most profound impact of the Intel step could be the impact it has on NFV, and I don’t mean just the ability to create better server platforms to host virtual functions.  The value of the NFV concept, if it’s confined to network operators, will be slow in developing and limited in scope.  Intel might be framing a mechanism to link NFV to what it frankly should have been linked to from the first—the cloud.  NFV as a cloud element is NFV for enterprises, which is a much bigger market and a market that will move opportunistically with demand for cloud-specific services.  Thus, Intel might be at least attempting to single-handedly make NFV mainstream and not an operator science project that could take years to evolve.

The most general model for a network-coupled IT environment is an as-a-service model where all functional elements are represented by URLs and RESTful interfaces.  In such a model it doesn’t matter what platform hosts the functional element; they all hide behind their RESTful APIs.  That model is likely the ideal framework for NFV, but it’s also the ideal framework for the evolution of cloud services and the creation of a cloud virtual operating system that hosts a currently unimaginable set of new features and applications, for workers, and consumers.  This may be the NFV model Intel is thinking about.
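
As a sketch of what “everything behind a RESTful URL” might look like, here’s a trivial functional element (a hypothetical NAT-table lookup) exposed over HTTP with Python’s standard library.  Consumers see only the URL; whether the host is a cloud VM, a CPE box, or a carrier server is invisible to them.  The path scheme and payload format are assumptions for illustration.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

NAT_TABLE = {"10.0.0.5": "203.0.113.7"}   # assumed functional state

class FunctionEndpoint(BaseHTTPRequestHandler):
    def do_GET(self):
        # e.g. GET /nat/10.0.0.5 -> {"outside": "203.0.113.7"}
        parts = self.path.strip("/").split("/")
        if len(parts) == 2 and parts[0] == "nat" and parts[1] in NAT_TABLE:
            body = json.dumps({"outside": NAT_TABLE[parts[1]]}).encode()
            self.send_response(200)
        else:
            body = b'{"error": "not found"}'
            self.send_response(404)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # Any host that can run this server can take on the "NAT lookup" role;
    # consumers of the function only ever see the URL.
    HTTPServer(("0.0.0.0", 8080), FunctionEndpoint).serve_forever()
```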

The Intel step may put network vendors in the hot seat.  Alcatel-Lucent has already committed to a hybrid virtual-overlay-and-real-SDN-underlay approach, a model that tends to commoditize enterprise network hardware.  That’s fine for them because they’re not enterprise incumbents, but what do Cisco and Juniper and the other smaller enterprise players do?  Even for carrier applications like the metro cloud I’m always harping on, there’s a necessary marriage between the virtual-overlay stuff Intel’s ONP proposes to host and the metro Ethernet and fiber networks that build aggregation and backhaul.  A formalistic link between virtual-overlay and real device networks may be mandatory now, and that link to be useful has to elevate the former above Level 2, link it effectively to the cloud and to componentized software architectures, and then bind it in a general way to the real device networks that alone can move real traffic in high volumes.

Make no mistake, Intel’s ONP doesn’t replace specialized switching and routing and the major layers of transport networking.  An x86 platform running any UNIX derivative with the BSD enhancements could have done that for decades (and in fact the first Internet routers were servers like this).  We got specialized devices for traffic-handling because they’re better at that mission, and that’s almost surely true now and forever.  However, every time we add IT intelligence to a service we have a traffic-terminating mission from a network perspective, and that’s what Intel is optimizing for.  If they’re aggressive and clever in their support for things like Quantum, DevOps, SDN, and NFV, they’ll have a major leg up on an important server mission of the future.

Facing Networking’s Era of Change

We’ve already seen signs that mobile broadband is gutting at least the near-term PC sales, signs that Intel’s quarterly numbers only confirm.  We have lived for over thirty years in the personal computer age, and PCs have transformed just about everything in our lives and in business.  Now they’re dinosaurs.  My point is that if mobile broadband can shake the literal foundation of modern technology, it’s going to shake pretty much everything and we need to understand that.

Yahoo needs to understand that too.  Marissa Mayer, Yahoo’s CEO, said that it would take years to turn the company around.  They don’t have years.  If Yahoo wants to jumpstart itself, it will have to take advantage of a market revolution to do that, and the market revolution of our generation is right here, right now in mobile broadband.

This week, we’re having the ballyhooed ONF event.  Next week we have an NFV conference.  You can fairly say that both these activities are aimed at dealing with changes in networking, but I think it’s fair to ask whether either SDN or NFV is being driven by mobile broadband.  If they are, then we should see some signals of the shift in their early work.  If they’re not, then we have to ask whether either is going to meet its goals and its potential.

Operators have for five years now outlined the same three top priorities for monetization: content, mobile/behavioral, and cloud computing, in that order.  Their priorities have been set financially rather than technically; they saw content traffic as a big threat, so monetizing it was a big priority.  They saw the cloud as a business service and something outside their basic comfort zone, even in terms of setting financial goals, so they had it at a lower priority.  Over the five years since this started, though, the cloud has jumped into high gear with operators, and where we stand now is that the cloud monetization projects have outrun everything else.  That leaves mobile/behavioral opportunity, the thing mobile broadband is enabling, in last place.

You can see this in the SDN stories.  If you look at a mobile broadband deployment from a sparrow’s vantage point, you see cell sites springing up like spring flowers (well, spring flowers in a normal spring—we’re not seeing that many yet) and backhaul trunks spreading like galloping tree roots.  Where?  In the metro areas.  Mobile broadband is an access/metro problem.  So tell me this: where are the stories about how SDN is going to revolutionize that space?  We have SDN in the data center.  We even have (via Google’s work) SDN in the core.  Ericsson has told a basic SDN-metro story but only a basic one, and when other vendors have made what could have been SDN-metro announcements there was no metro in them.

In the NFV space, there is a double-barreled question in the mobile broadband area.  First, because the white paper the carriers issued at the launch of NFV focused on migrating functions from custom appliances to generic hosts, it tended to focus on stuff already being done.  Mobile broadband changes and opportunities aren’t represented in today’s appliances.  We’re actually searching for an architecture to support them, and logically the NFV architecture designed to host past-service elements in an effective way should also be tasked with supporting the future effectively.  But focusing on migrating existing features will miss the mobile/behavioral fusion that mobile broadband is driving, and that’s the biggest trend in all of networking.

The second point for NFV is the cloud.  That same initial white paper talked about virtualization as the host of the functions.  I pointed out from the first that the architecture for network feature hosting had to be broadened—the cloud is the logical vehicle.  This is especially true given that those operator monetization projects that involve the cloud advance twice as fast as those that do not, even when the projects aren’t aimed at offering cloud computing services of any sort.  Content cloud equals progress.  Mobile cloud equals progress.  NFV cloud is likely to equal progress too, so we need to see whether the group will accept that reality and embrace something that can move all its goals forward, while at the same time making the mobile-broadband-driven changes in the market an implementation priority.

Even the cloud has to change, though.  The conception we have of cloud computing today is just VM hosting of applications that were written before the cloud was even conceptualized.  Well, we’ve conceptualized it.  Will we keep writing applications in a way that demands the cloud morph into something that looks pretty much like legacy IT, or will we do things differently?  Yes, I know that today’s answer is “stay the course”, but that’s because vendors are all taking root in their current quarterly goals and becoming trees.  SDN and NFV will show startups where it’s possible to link new network visions and new cloud visions to new revenue opportunities.  That will include addressing the point-of-activity empowerment that mobile broadband enables by structuring applications to deliver just-in-time insight to the worker, whether they’re trying to make a sale or pry the cover off an inspection panel to start work.

Every network vendor, every IT vendor, is both empowered and threatened by the current trends, including Intel, Microsoft, Dell, HP, IBM, Cisco, Alcatel-Lucent, and yes Yahoo and Google.  We have seen the power of this change already.  We’ll see more of it, and more vendors will stand or sink based on whether they buck it or ride it.  This quarter is only the start.  More is coming.

Might a Deal for Dell be a Cloud Play?

A special note of concern for my friends in the Boston area.  I’ve spent a lot of time up there, and while all my personal friends seem safe a surprising number know others who were at least in the area of the blasts.  I’m thinking of you all, praying for your safety, and hoping that we can react to this event without losing the wonderful openness of Boston, and of America.

It’s generally bad financial practice to compete to buy into a declining industry.  We know that PC sales have been down, and the most recent data suggests they’re down sharply in the current quarter.  Nobody doubts that the reason is the smartphone and tablet, which are tapping off Internet use from PCs.  For those who use computers or appliances primarily to be online, that means there’s no need for PCs at all.  The Dish/Sprint deal, as I suggested earlier, is likely aimed at creating a mobile broadband ecosystem to couple with satellite broadcast, and this sort of thing could only facilitate a shift from PC to mobile.

And yet we have people wanting to buy Dell.  Why?  I think there are three possible reasons.  First, maybe they believe that the fear of flight from the PC is overdone.  Second, they might believe that Dell could establish itself in the tablet/smartphone space.  Third, they might believe that Dell’s server assets alone are worth the investment in an age of cloud transition.  Let’s look at the implications and probabilities of all three.

I doubt that many Dell suitors believe the PC is coming back, and I think that most likely believe that even the residual PC market (large though it might be) will be under relentless profit pressure.  To pick up Dell for the PC opportunity flies in the face of trends in PC usage and sales, and also of price and profit trends.  Furthermore, the biggest barrier to those who’d like to discard PCs in favor of appliances—even Chromebooks—is lack of always-on broadband.  We’re clearly heading for just that, and very quickly.  The only thing that separates a PC from a tablet is a hard drive for offline use and a keyboard.  We can add keyboards easily to tablets, and “offline use” is heading toward the same level of anachronism as text terminals and modems.

So might the Dell advocates be seeing a great smartphone/tablet opportunity?  Dell can’t possibly drive a new mobile OS; it’s doubtful that Microsoft is going to be very successful at that and questionable whether Blackberry can stay alive even as a former market leader.  New player equals new casualty.  So they’d have to build Android devices, given that Apple is hardly likely to license iOS to them, and Android tablets and smartphones are at least as commoditized as PCs.

But here we do have a possible angle.  Suppose Dell were to go after the featurephone space using a model like Mozilla’s Firefox OS?  The network operators would love that because they’re already spending too much subsidizing smartphones and they don’t get to showcase their own differentiation through those devices.  Same with tablets.  Might Dell be looking at providing those operators with products that are much more browser/cloud platforms than even the current devices?

That would bring us to the third possibility, which is that it’s Dell’s cloud potential that matters to potential buyers.  In my view, no server vendor is really in a position to drive the cloud to create a unique advantage if they push down low at the hardware level.  Similarly, it’s going to be difficult to drive a unique cloud position through cloud-stack software like OpenStack because everyone is jumping on the same bandwagon.  You have to get above the fray, move not to the cloud platform but to the cloud’s valuable services.  You have to move up to SaaS, to SOA-like implementations of service features.

Dell has some history up here in the cloud-value zone.  They have been a primary driver of cloud DevOps, for example, and DevOps is the key to creating operationalized cloud services of any sort—cloud computing or cloud-hosted service features.  Their M&A all seems to be focused on extending the cloud, adding stuff above the basic software stack.  Might they be looking at creating a cloud not for the simple (and unprofitable) mission of IaaS but rather at creating a cloud for profitable high-level service hosting?  Even one to support carrier activity like NFV?

If Dell were to do that, they could then link their cloud differentiation downward.  A Dell framework for featurephone service support, complete with developer tools, a cloud architecture that you could buy as a service from Dell or buy as a cloud-in-a-box for your own installation, would be a powerful element in a featurephone strategy.  You could address corporate mobility needs with such a platform too.  In other words, you’d have something that would leverage the presumption that the cloud was going to get bigger by going higher, by offering directly valuable features and services.  Nobody is really doing that now, and Dell could be the first.

At least now, they could.  The problem with this sort of opportunity is that it’s far from invisible.  Cisco and Oracle have very similar assets, and HP has identical assets.  While it’s not likely that Cisco and Oracle have specific interest in featurephones or tablets (Cisco had a tablet and killed it), HP surely does—and the HP brand in the tablet space is stronger than Dell’s.  Still, it’s hard for me to see a play on buying Dell that doesn’t follow a variation on this cooperative cloud theme.  There just doesn’t seem to be anything else on the table that could produce enough value.

A Maybe-Holistic SDN Model?

One of my biggest frustrations about SDN has been the lack of a complete top-to-bottom architecture.  All of the focus seems to be on the SDN Controller, and that’s an element that is a little functional nubbin that lies between two largely undefined minefields—the lower-layer stuff that provides network status and behavior and the upper-layer element that translates service requests into routes based in part on that status/behavior.  Now we may have at least a step toward a vertically integrated model.

Pica8 has announced an SDN architecture for the data center that’s vertically integrated to the point that it looks a lot like a cloud-provisioning model (Quantum) in terms of the functional boxes.  There’s an open switch abstraction (OVS) linked with a network OS and a hardware layer that adapts the central logic to work with various devices, including “white box” generic switches.  The current Pica8 announcement is focusing on the application of this architecture to the problem of data center networking, not so much for segmentation (though obviously you can do that) as for traffic engineering and creating efficient low-latency paths by meshing switches rather than connecting them into trees (the current practice with Ethernet) or turning them into fabrics.

This model of SDN application could be one of the sweet spots for SDN because it’s addressing a very specific issue—that cloud or even SOA data centers tend to generate more horizontal traffic without becoming fully connective in a horizontal sense.  In SOA, for example, you have a lot more intercomponent traffic because you have deployed separate components, but that traffic is still likely less than the “vertical” flows between components and users or components and storage systems.  In traditional tree-hierarchy switched networks, horizontal traffic might have to transit four or five layers of switches, which greatly increases the delay and the overall resource load.  Fabrics, which provide any-to-any non-blocking switching, waste some of that switch capacity by automatically supporting paths that have no utility, or are not even contemplated.
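
A rough way to see the horizontal-traffic argument, with tier counts assumed purely for illustration: count the switches a flow between two servers on different access switches has to transit in a tree versus a meshed design.

```python
# Assumed tier counts for illustration only.

def tree_switches(tiers: int) -> int:
    # worst case in a symmetric tree: up through every tier to the common
    # ancestor at the top, then back down the other side
    return 2 * tiers - 1

def mesh_switches() -> int:
    # meshed access switches: source switch connects directly to destination
    return 2

for tiers in (2, 3):
    print(f"{tiers}-tier tree: up to {tree_switches(tiers)} switches in path; "
          f"meshed: {mesh_switches()}")
```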

The Pica8 architecture is also interesting in that it at least offers the potential to combine real telemetry from the network and real service requests from data center/cloud software to create paths.  As I noted earlier, there are few models of SDN that provide the vertical stack even in limited form, so it’s heartening to see something come out.  The problem is that the data center model, while it may offer sweet-spot early positioning, doesn’t expose the full set of value propositions or issues.

Not every data center needs a fabric or mesh.  While we might want to believe that VM growth (private cloud or virtualization) or other architectural factors would change this, the fact is that data center networking needs are set more by total application traffic than anything else, and moving around where applications are hosted doesn’t impact traffic very much.  A major increase in application traffic would imply a much larger investment in IT resources, and it’s clear from the earnings reports that kind of growth isn’t happening.  It may, if our model of point-of-activity empowerment matures, but not yet.  Thus, data centers are not necessarily under a lot of growth pressure.

The dynamism of future applications will generate network agility requirements before it will generate traffic, but the question that Pica8 and everyone else will have to answer is how those requirements move out of the data center.  A rope staked to the ground on one end can move only in a circle.  If the edge of the network, the client devices and users, is still doing the same old thing then the changes in the data center will dissipate as they move toward the edge and the total network won’t change much.  Not much dynamism.  Even a zillion mobile clients hooking up to enterprise apps really doesn’t do anything that SSL connections to a web server for worker or even customer access don’t do.  You need a new application model that drives a new connection model, one that takes SDN out of the data center and rolls it all the way to the edge.

We need to be watching Pica8 at this point to see how it plans to support this sort of migration.  We also need to see how well it will address the metro-cloud opportunity that is the service provider equivalent of that enterprise network drive I called point-of-activity empowerment.  It’s a promising start but we need more progress to call it a win.

How to Judge the News at ONC

With the Open Networking Summit about to kick off, it’s obvious that there are going to be a lot of things going on with respect to SDN and cloud networking.  The problem we have, in my view, is that all of this is a race to an unknown destination.  We’ve picked apart the notion of SDN and we’re busy claiming supremacy in itty bitty pieces of it, but we’re unable to tie the pieces into a functional whole that would then justify (or fail to justify) SDN use.

Right now there are two big public issues in the SDN world; the controller process (largely because of the OpenDaylight project and some counterpunching by vendors like Adara) and “virtual networking” via overlay technology by players like Nicira/VMware and Nuage/Alcatel-Lucent.  People email me to say they can’t understand how these fit, either with each other or in the broader context of SDN value.

Let’s start with that.  Networks connect things, so it follows that the goal of a network architecture is to create a route between points that need to be connected.  There are two pieces to that process—knowing where to go and knowing how to make traffic get there.  The second piece is the individual device forwarding process; a packet with this header goes in this direction.  The first piece is a combination of a topology map of the network (one that locates not only the connecting points but also the intermediary nodes) and policies to decide which of what will certainly be a multiplicity of route choices should be taken.  In classical networking the topology map is created by adaptive discovery and the policies are implemented in a “least-cost” routing protocol that optimizes something like capacity or hops.
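
Here’s a minimal sketch of those two pieces, with an assumed topology and link costs: the map plus a least-cost policy yields one route out of the many possible ones.  The forwarding piece is separate, and shows up in the next sketch.

```python
import heapq

TOPOLOGY = {                       # node -> {neighbor: link cost} (assumed)
    "A": {"B": 1, "C": 4},
    "B": {"A": 1, "C": 2, "D": 5},
    "C": {"A": 4, "B": 2, "D": 1},
    "D": {"B": 5, "C": 1},
}

def least_cost_path(src: str, dst: str) -> list[str]:
    """Classic shortest-path computation over the topology map."""
    queue = [(0, src, [src])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == dst:
            return path
        if node in seen:
            continue
        seen.add(node)
        for nbr, link_cost in TOPOLOGY[node].items():
            if nbr not in seen:
                heapq.heappush(queue, (cost + link_cost, nbr, path + [nbr]))
    return []

print(least_cost_path("A", "D"))    # -> ['A', 'B', 'C', 'D']
```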

Classical SDN, the original research effort, works to replace the topology/policy stuff that’s currently distributed in devices with a centralized function.  That function needs then to control the forwarding in devices, which is what OpenFlow does.  The OpenFlow controller is a function that manages the exchange (via OpenFlow) with the devices.  It doesn’t decide on routes or policies, it only controls devices.  All that deciding and policyfying goes on north of those often-referenced “northbound APIs”.
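
Here’s a sketch of that division of labor, with a stand-in device interface rather than the real OpenFlow protocol: something north of the controller picks the path (the previous sketch, or any policy engine), and the controller’s only job is to turn it into per-device forwarding entries and push them down.

```python
def path_to_forwarding_entries(path: list[str], flow: str) -> dict:
    """For each switch on the path, produce 'flow -> next hop' state."""
    entries = {}
    for here, nxt in zip(path, path[1:]):
        entries[here] = {"match": flow, "forward_to": nxt}
    return entries

def push_to_devices(entries: dict) -> None:
    for switch, rule in entries.items():
        # in a real controller this would be an OpenFlow exchange with the device
        print(f"program {switch}: {rule}")

# the "northbound" decision, handed down from whatever does routes and policy
chosen_path = ["A", "B", "C", "D"]
push_to_devices(path_to_forwarding_entries(chosen_path, flow="10.1.1.0/24"))
```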

What does this have to do with virtual networking, Nicira, and so forth?  Nothing, frankly.  Virtual networking is a way of creating application-specific network partitions to isolate cloud applications and to make it possible to spin up a network that connects a community of components that are related to each other but have to be isolated from everyone else.  You don’t need OpenFlow or any sort of SDN principle to drive this because it’s a software-based tunnel overlay.  There are differences among the implementations, though.  Nicira has been presented dominantly as a VLAN strategy, limited to the data center.  Nuage and Alcatel-Lucent have presented a broader model that can emulate IP and port-level connectivity, which means it’s pretty easy to make their virtual networks run end to end, data center to branch or cloud to user.
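
A minimal sketch of what the overlay is actually doing, with simplified header fields rather than any specific encapsulation format: the physical network sees only host-to-host outer addresses, while a virtual network identifier keeps each tenant’s traffic partitioned.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class OverlayFrame:
    outer_src: str        # physical host address
    outer_dst: str        # physical host address
    vni: int              # virtual network identifier (tenant partition)
    inner_payload: bytes

def encapsulate(tenant_vni: int, host_src: str, host_dst: str,
                payload: bytes) -> OverlayFrame:
    # the underlay only ever routes on outer_src/outer_dst
    return OverlayFrame(host_src, host_dst, tenant_vni, payload)

def decapsulate(frame: OverlayFrame, expected_vni: int) -> Optional[bytes]:
    # traffic from another virtual network is simply invisible to this tenant
    return frame.inner_payload if frame.vni == expected_vni else None

frame = encapsulate(5001, "host-1.dc.example", "host-2.dc.example",
                    b"app traffic for tenant A")
print(decapsulate(frame, expected_vni=5001))   # tenant A sees its packet
print(decapsulate(frame, expected_vni=5002))   # tenant B sees nothing (None)
```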

The challenge that nobody is really talking much about is creating that high-level value, that central intelligence, that application-and-user specificity that would make all of this highly useful.  We need a connection mission to justify connectivity management, and all of the stuff that purports to be SDN is still picking at implementation details amid the fog of possible applications where nothing stands out.  Add some tools to OpenFlow and you can create core routing like Google did.  Add different tools and you can manage data center flows to better organize switches and eliminate the traditional multi-tier networks.  But you can do both these things without SDN, and if you want to do them with SDN you need more than an SDN controller.

We are starting to see some promising developments in the SDN world.  Alcatel-Lucent’s recent Nuage announcement is an advance because it makes it possible to visualize a clear delineation between virtual-network connectivity management (Nuage) and route management at the network device level (Alcatel-Lucent), but a delineation that provides for feedback from the former to the latter to create manageable networks.  The problem is that because we’re not owning up to the real issue, the fact that SDN needs a connection/application mission to be valuable, we don’t hear about these developments in the right context.

When you go to the ONC next week, look past the fights over how many SDN angels can dance on the head of a controller, to the top-to-bottom, end-to-end, vision that a given SDN vendor actually supports—supports with functionality and not with vague interfaces.  That will separate reality from SDN-washing.