MWC Blew a Chance at Greatness

I have to admit that I never was wild about trade shows.  Not only are they a madhouse, but if you’re an independent like me you end up paying to be somewhere that people can find you to tell you lies.  I got to the point where I was happy to see somebody who was just clueless rather than actively manipulative; my natural cynicism didn’t need more fuel!  The truth doesn’t sell, so if you’re interested in truth you have to get away from selling, which I think MWC proved.

The mobile operators have the same problem that all operators have, in fact that all businesses have—profit.  So here we have MWC, but how much guidance or insight did operators get on solving their profit problem?  A couple of operators told me that the only thing of any significance was the Firefox OS phone.  Sure, it’s been panned by journalists who have reviewed it because it’s not as fast or rich or cool as a nice iPhone or Galaxy, but all of those nice cool phones cost the operators a bagful of loot to subsidize.  They’d sure like to be able to keep some of the money since, after all, it is their wireless network that makes the darn things work in the first place.

The question that MWC should have answered and didn’t is simple:  “Why are OTTs able to launch new and profitable services over the network when we can’t launch them in the network?”  Telefonica, which was a big promoter of Firefox OS at the show, has actually answered that question better than the vendors who exhibited.  What you need is a service-layer architecture on which to build applications.  Ideally this architecture should be like the Advanced Intelligent Network (AIN) of old in that it defines in-network application behavior, because operators can differentiate themselves best from OTTs by being in, not over, the network.  Telefonica has been working to do just this thing.

One of the reasons I’m excited about Network Functions Virtualization (and not trade shows!) is that NFV is an architecture for hosting and connecting virtual functions that make up a service.  Yes, the initial target of NFV is cost reduction by replacing purpose-built (and vendor-proprietary, overpriced) appliances with generic servers.  The thing is, if you get an architecture for that, you can apply it to new stuff that’s never been sunk into appliances at all.  Mobile/behavioral symbiosis, what I’ve called “point-of-activity empowerment” of consumers and workers, has been identified by operators as number one or two on the monetization hit parade, so it’s sad that issues like expanding NFV into a full service layer didn’t get more play.  Yes, we had some NFV stories, and some SDN stories, but they were pap for the masses and not insight for the profit engineers.

But don’t get me wrong about cost management.  Operators are not likely to be able to profit in the long run by selling higher-layer services in competition with the OTT giants while running their networks at a loss.  The network has to be profitable, at least, or it drags the operator down even if they succeed at the higher-layer services game.

But cost reduction still has to be seen in a profit context, and so do technology changes.  SDN and NFV are slaves to the service layer for network operators, because we don’t call them “service providers” for nothing.  So is everything else.  There is no “backhaul market”, no “CDN market”, no “EPC market”; there is one infrastructure, one bottom line.  We have in network services the largest cohesive, cooperative, technical ecosystem in the world, literally.  We don’t keep it going by breaking off pieces and pretending we can address or sell them as one-offs.

“The Cloud” is the conceptual technical model of the service layer.  Whatever doesn’t live in devices lives in the cloud.  Whatever is too agile, too versatile, too market-reactive to be stuffed into silicon lives in the cloud.  Whatever profit sources will drive operators in the direction they want to go live in the cloud.  So we, and MWC, need to live in the cloud too, and we need to live there and not just open the old trench coat and flash those who do.  Every vendor who spouts a vacuous story about operations and TCO, or about SDN, or about cloud or mobile or metro, is working against the ecosystem we should be cooperatively building, and you can build something right in only one way—start with the plans.  This is the time to build the conceptual cloud of the future, the new model of network and IT combined, not separate.  Who will do that?  When we can answer that question we’ll know who wins and who loses.

IBM Does Almost-NFV

Sometimes you look for the wrong monsters under the bed.  IBM made an announcement today demonstrating that’s true in networking, and in two dimensions.  They came out with specific support for Network Functions Virtualization and they announced a successful partnership (with Connectem) to field virtualized mobile core infrastructure for Optus in Australia.  This shows that 1) a big computer vendor recognizes the potential for NFV and 2) once you virtualize functions they can be provided by a host of software developers.

Everyone is looking at SDN as the boogie man in network equipment sales, but my numbers are showing that while SDN will be strong in the data center over time and can also play a major role in the metro network, its potential is limited by the lack of central-control mechanisms.  There will be a respectable SDN market, but it’s not going to kill the network vendors’ profits or sales.

NFV could be another matter completely.  First, as IBM points out, shifting functions out of purpose-built hardware into software where they can be deployed quickly and instantiated based on load requirements makes networks more flexible.  Substituting servers for custom devices makes it cheaper.  And if software types see the potential for developing new NFV-based features to sell as service components (I blogged on this yesterday) then we could finally see a framework where network operators get a chance to raise revenues and not just cut costs.  That’s what IBM sees, clearly.
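To make the “instantiated based on load” point concrete, here’s a minimal sketch of the kind of scale-out logic an NFV orchestrator would apply to a pool of hosted function instances.  The class names, thresholds, and the “virtual-MME” example are all invented for illustration; nothing here comes from IBM, Connectem, or the NFV specifications.

```python
# Illustrative only: a toy scale-out decision for a hosted virtual function.
from dataclasses import dataclass, field
from typing import List


@dataclass
class VnfInstance:
    instance_id: str
    load: float = 0.0          # fraction of capacity in use, 0.0 - 1.0


@dataclass
class VnfPool:
    name: str
    scale_out_at: float = 0.75   # add an instance above this average load
    scale_in_at: float = 0.25    # remove one below this average load
    instances: List[VnfInstance] = field(default_factory=list)

    def rebalance(self) -> str:
        """Decide whether to add or remove an instance based on average load."""
        if not self.instances:
            return "scale-out"                      # always need at least one copy
        avg = sum(i.load for i in self.instances) / len(self.instances)
        if avg > self.scale_out_at:
            return "scale-out"                      # spin up another software copy
        if avg < self.scale_in_at and len(self.instances) > 1:
            return "scale-in"                       # retire one to save server cost
        return "hold"


# Example: a virtualized mobile-core element running hot triggers scale-out.
pool = VnfPool("virtual-MME", instances=[VnfInstance("mme-1", 0.9), VnfInstance("mme-2", 0.8)])
print(pool.rebalance())   # -> "scale-out"
```

The point is simply that once the function is software, capacity becomes a scheduling decision rather than a hardware purchase.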

But for all the potential here, IBM is still shooting behind the duck.  You can’t advance a strategy by holding your partner’s hat.  IBM needs an architecture for NFV that their partners would build to, because absent the architecture NFV ends up being islands of functionality lost in a disconnected cloud ocean.  And guess what?  IBM’s arch-rival Cisco knows that darn straight well.  Another problem IBM has is that if there was ever something that cried out for centrally controlled OpenFlow-flavored SDN, it’s metro mobile.  You have everything in mobile core signaling that you need to manage cell-hopping and addressing and backhaul and priority-and-profit-based offload in a central way, but there is no specific SDN link here.

The IBM angle opens another topic, which is IBM’s partner Juniper.  A lot of Juniper people go to bed at night praying IBM will buy them.  Juniper was recently reported as unsuccessfully trying to sell off its enterprise (security) business so it could focus on the service provider space.  The current story is that it wants to raise some capital in a private placement of stock to reignite enterprise.  Well, the service provider space is a shrinking target unless you count servers and software as part of it.  IBM has those already.  Network equipment is what’s shrinking, so IBM is supposed to buy Juniper to get in on the decline?  Show me how that’s smart.

IBM needs to have an NFV architecture, not just partners who have applications that look NFV-ish.  Cisco needs the same thing, and their weakness (which IBM can exploit) is that their slant on NFV isn’t going to threaten current equipment, which means it’s basically NFV neutered.  HP has both networks and IT and perhaps the best position from which to launch a true NFV/SDN/cloud story, but they’ve not launched it.  Juniper doesn’t have NFV at all, or at least they want to call their clearest NFV advances “SDN” instead.  If IBM wants to stave off Cisco, they need real NFV.  If Cisco wants to attack IBM in the service-provider space, they need real NFV.  If Juniper wants an enterprise strategy they will have to buy an NFV/cloud player, and then do something a lot more sensible with it than they’ve done with their assets so far.  Then all these guys have to tie NFV to SDN to complete the circle.

Which sets the marching orders for some other players.  The TMF has got to forget the notion of OSS/BSS in an NFV cloud and start talking DevOps or they’re defining a backhoe to weed a patio pot.  The ONF has got to start thinking about the real mission of central networking and the standards and protocols that would support that mission, not just about shoehorning OpenFlow into every real application, whether it fits or not.  The NFV people have to recognize they are building a platform-as-a-service definition of what a network application looks like in the cloud, and get to work on that task.  It’s software…programming.  How many abstractions can dance on the head of a pin?  Any number, but nobody will notice. We need real stuff here, from vendors and from standards bodies, and the things we need are as clear as the nose on your face, as the saying goes.

Did We Goof With OpenFlow, and How Might We Fix It?

We’re likely going to see a lot of “SDN announcements” in 2013, most of which will (like most of what we’ve seen already) be marketing crap.  The simple truth is that SDN is about central control over packet forwarding.  If you’re not centrally controlled, you’re not SDN.  If you aren’t about forwarding packets you aren’t SDN.  If you don’t want me to say “They’re not SDN!” about your stuff then don’t brief me—you might get lucky and I’ll overlook you.  But I digress.  If SDN is central control over packet forwarding, then you have to tune the control to the forwarding mission, and it may be that the archetypal SDN element, OpenFlow, is tuned to something less than the optimum mission.

The problem is that if you look at OpenFlow, you find a concept of “mother-may-I” forwarding in there.  The SDN device gets a packet for which it has no flow rule, so it sends it off to the central controller for analysis, and the controller presumably then sets up forwarding rules not only in the requesting SDN device but in other devices that would make up a path for packets of that type.  Obviously this is not scalable to Internet or even large enterprise size.  And nearly all the SDN types I talk with say “Hey, we know that; we’d build the routes from a route source without those central-hand-off packets.”  So problem solved?  Maybe not.
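To make the scaling concern concrete, here’s a toy model of that “mother-may-I” pattern.  It is not the real OpenFlow wire protocol or any actual controller API; the data structures and function names are invented for illustration, assuming a reactive controller that installs match/action entries along a computed path.

```python
# Toy reactive-forwarding sketch: a switch with no matching rule punts the
# packet to a central controller, which installs entries along the path.

def compute_path(topology, src, dst_ip):
    # Placeholder: a real controller would run shortest-path or policy routing here.
    return topology.get((src, dst_ip), [])


def controller_packet_in(topology, flow_tables, ingress_switch, packet):
    """Handle a packet the ingress switch could not match."""
    match = {"dst_ip": packet["dst_ip"]}                 # simplified match fields
    path = compute_path(topology, ingress_switch, packet["dst_ip"])
    for hop, out_port in path:                           # install along the whole path
        flow_tables[hop].append({"match": match, "action": ("forward", out_port)})
    return path


def switch_receive(flow_tables, switch, packet, punt_to_controller):
    """Forward on a table hit; otherwise ask the controller (the scaling problem)."""
    for entry in flow_tables[switch]:
        if entry["match"]["dst_ip"] == packet["dst_ip"]:
            return entry["action"]
    punt_to_controller(switch, packet)                   # one round-trip per new flow
    return ("punted", None)


# Toy usage: the first packet of a flow punts; the second hits the installed rule.
tables = {"sw-A": [], "sw-B": []}
topo = {("sw-A", "10.0.0.5"): [("sw-A", 2), ("sw-B", 7)]}
pkt = {"dst_ip": "10.0.0.5"}
punt = lambda sw, p: controller_packet_in(topo, tables, sw, p)
print(switch_receive(tables, "sw-A", pkt, punt))   # ('punted', None)
print(switch_receive(tables, "sw-A", pkt, punt))   # ('forward', 2)
```

The first packet of every new flow costs a controller round-trip, which is exactly why the approach can’t scale to Internet or even large enterprise size.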

If you look at OpenFlow even more closely, you’ll be struck by the fact that the protocol sends forwarding table changes in something very close to what a forwarding table would look like.  There are also discussions about how difficult it might be to “compile” the forwarding table changes into an actual table entry at high speed.  At the same time, you see that it’s necessary to get pretty specific about what parts of a packet header you look at and how you look, and also that it’s hard to cleanly define how OpenFlow might be used to command forwarding changes where the packets aren’t visible.  Think optics.  It’s clear that we’re seeing a compromise in design, a balancing between having a protocol exchange in a format easily converted into a forwarding entry and having one that’s more flexible.  And we’re seeing features that are more justified by an open connectivity model in a protocol likely to be used in a closed, non-volatile domain.

So here’s my point.  Suppose we were to take that domain mission and design a protocol for it.  We’d likely say that we need to have a generalized way of representing what we want to connect, not just packet mask-and-match, so perhaps we’d have several classes of flow designators, one being packet-header, a second being lambda in WDM, and perhaps a third being TDM channel.  We’d also likely want to think beyond forwarding as the only action.  OpenFlow already has to deal with encapsulating and “de-capsulating”.  Don’t we have other initiatives like NFV out there?  Doesn’t NFV work more on deeper inspection and doesn’t it have more actions than simply forwarding?
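As a sketch of what those flow-designator classes might look like, assume something like the following.  Every name here is invented for illustration; no standard defines these classes or actions.

```python
# Illustrative flow-designator classes: one abstract notion of a flow,
# specialized for packet headers, WDM lambdas, and TDM channels, with
# actions that go beyond simple forwarding.
from dataclasses import dataclass
from typing import Union


@dataclass
class PacketHeaderFlow:           # classic mask-and-match on header fields
    field: str                    # e.g. "ipv4.dst"
    value: str
    mask: str


@dataclass
class LambdaFlow:                 # a wavelength in a WDM system
    fiber: str
    wavelength_nm: float


@dataclass
class TdmChannelFlow:             # a time slot in a TDM hierarchy
    port: str
    timeslot: int


FlowDesignator = Union[PacketHeaderFlow, LambdaFlow, TdmChannelFlow]


@dataclass
class HandlingRule:
    flow: FlowDesignator
    action: str                   # "forward", "encapsulate", "mirror-to-vnf", ...
    target: str                   # output port, tunnel, or hosted function


rule = HandlingRule(LambdaFlow("fiber-7", 1550.12), "forward", "degree-2-east")
```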

I suspect that if we were to design a switch control protocol for the real mission, for all the kinds of traffic we might like to control, we’d end up with something more like an XML schema than like OpenFlow.  Yes, I know that some will cry out here that it would be terribly inefficient, but efficiency has to be measured against the total impact of the overhead it creates, and if you’re using OpenFlow to push lambda connections around you’re probably not doing so at the speed of a Keystone Kop directing traffic.  And if we had a more schema-oriented structure we could accommodate port parameterization, state and status exchange, and all of the stuff we’re kludging in—or ignoring—now.
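Here’s a rough sketch of what a more schema-oriented control exchange might carry, built with nothing more than Python’s standard XML library.  The element and attribute names are purely illustrative, not any real or proposed schema.

```python
# Illustrative schema-style control message: flow designator, action,
# port parameters, and a status request all in one extensible structure.
import xml.etree.ElementTree as ET

msg = ET.Element("handling-directive", domain="metro-east-7")

flow = ET.SubElement(msg, "flow", kind="lambda")       # could also be packet-header or tdm
ET.SubElement(flow, "fiber").text = "fiber-7"
ET.SubElement(flow, "wavelength-nm").text = "1550.12"

action = ET.SubElement(msg, "action", type="forward")
ET.SubElement(action, "target-port").text = "degree-2-east"

# Room for the things the text says are being kludged in or ignored today:
port = ET.SubElement(msg, "port-parameters", port="degree-2-east")
ET.SubElement(port, "admin-state").text = "up"
status = ET.SubElement(msg, "status-request")
ET.SubElement(status, "report").text = "optical-power, errored-seconds"

print(ET.tostring(msg, encoding="unicode"))
```

The win isn’t the XML itself; it’s that one extensible structure can carry flow designators of different classes, actions beyond forwarding, port parameters, and state/status exchange without a new protocol revision for each.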

No matter what, we are not going to software-host high-performance routers and switches.  Look at Cisco’s virtual router and its performance requirements; it needs to own pretty much all of the CPU resources of a typical multi-core server just to run.  Communications devices will be specialized in the data handling plane because it’s worthwhile.  That specialization has to be matched with flexibility at the mission level, and the matching means having some control software that can decode general statements of requirements and translate them to “handling rules” and not just forwarding rules.  So before we get too far with SDN, and before we get anywhere at all with NFV, we need to think about the question of the best way to control data-plane handling from off the box.

Keats Teaches Us a Network Lesson

MWC is kicking off today, and operators tell me their own hope for the show and for the year is a strategy to raise revenues and profits.  Well, I’m a poetry fan, and so I offer a snippet of a poem to guide them through their perusal of vendors’ wares, from John Keats:

Heard melodies are sweet, but those unheard are sweeter…

We’re hearing a lot in advance of the show, how SDN will do this or how M2M will do that or how the future is better backhaul or featurephones built on Mozilla’s new Firefox OS.  These things will all certainly play in the future, but they’re “heard melodies”, old news.  They’re also aimed almost completely at the cost side.  NOBODY in any market can surrender the notion of revenue growth without surrendering their future.  So we have to listen past the heard melodies to those (still) unheard.

M2M is what everyone talks about, but ultimately the problem with M2M is that it’s traffic not services unless you start thinking about what all those “Ms” are communicating about.  We can’t put a lot of control devices directly on the Internet anyway; you’ll have foreign hackers attacking traffic lights for heaven’s sake.  Are there applications that could justify a deployment of smart devices?  Sure, but unless the telcos start their M2M deliberations with the “A” word, meaning applications, they’re just feathering OTT nests with their investments.

Backhaul is traffic too.  Yes, you need towers to sell mobile, and backhaul to feed towers, but if there was ever a “heard melody” it’s traffic.  Operators have zero chance of building a future for themselves in networking if they think about nothing but traffic.  They’ll all end up reduced to providing accommodations for trade show attendees as a profit source.

Featurephones with Firefox OS?  Sure, and I like the concept of a smartphone OS that builds an API for developers by compositing local device APIs and remote service elements.  But again I catch the haunting strains of those heard melodies.  The issue isn’t the phone, it’s the remote service elements!  I can empower iPhones and Android phones and Windows phones and RIM (oh, sorry, Blackberry) phones with remote services, provided I have a way of differentiating them.

OK, NSN may have a part of that with their “Liquid Applications” and the notion of siting a server close to an eNodeB, but we still need to define a service model for these servers or they again become assets for Google or Amazon to run stuff in or from.  Here we may get a taste of our first unheard melody because forward deployment of intelligence is something we do today with CDNs.  CDNs are immune from neutrality regulation in the US by explicit FCC rule.

So here’s my thought, my score for the MWC unheard melody.  Operators are committed to things like Network Functions Virtualization (NFV) to pull service features out of appliances so they can be hosted in generic servers.  One of their specific targets is the CDN space.  So suppose we combine the notion of CDN, the notion of Liquid Applications, the notion of Firefox OS featurephones, and the notion of backhaul all into one glorious harmony under the auspices of NFV?  If we create an architecture that can manage the hosting of transplanted “heard melody” features of today, why can’t that same architecture host the unheard, the new stuff, the cloud stuff?

A Liquid Application built inside a properly designed NFV enclave would look like a part of the network just as CDN components do.  They’d be immune from sharing, and most significantly they’d be differentiable by being IN the network and not over the top of it.  The operators could deploy service components on the servers, offer them to featurephones and smartphones and dumbphones, and the components are the revenue stream. Not the phones, not the bits, not the “Ms” or any of those other tired heard melodies.

We have to get our eyes off our toes.  I’m not at MWC and a big reason is that I’m not interested in kissing off a week of work and flying to Europe just to hear vendors tell us what we have to buy to prepare for tomorrow.  It’s too late to prepare for tomorrow, and everyone in the industry should know that by now.  We will never reach the future by taking it one day at a time.  We in the telecom space have to prepare for the kind of future songs that Apple and Google and other OTTs have been hearing.  Those melodies may be unheard to the operators and network vendors, but they’re heard loud and clear by others, and we’ll never see that if what we think are those future unheard melodies are really the sounds of our own footsteps.

Cloud Lessons for HP, Alcatel-Lucent, and Even Google

We have three more reminders of the reality of transformation this morning; it’s hard at best and often impossible.  HP reported their quarterly numbers, Alcatel-Lucent got its new CEO, and Google launched a serious Chromebook.  The three represent different aspects of a common vision of the future, which is “the cloud”.

HP’s challenge is that the Internet has been the draw for incremental growth in PC usage, and of course you don’t need a PC to be on the Internet.  Since HP is a name in PCs and not a name in tablets or phones, it’s representing a minority segment of the device market.  No matter how much Whitman wants to talk about how well this or that PC has been received by shows and reviewers, the contraction in the PC market she talks about isn’t a 4Q phenomenon, it’s permanent.  The PC business is going to be an albatross for HP, and of course for Dell, unless they can make it more symbiotic with the cloud.

Google’s Chromebook Pixel is an example of how not to do that, in my view.  Here’s a gadget that will cost you substantially more than a typical laptop (I bought one recently, well-equipped, for about a third of Pixel’s list price), but it tries to hide functional deficits in coolness.  Apple has always been cool, but it’s also been highly functional in all its devices.  Pixel just doesn’t have the functionality needed to pull users away from tablets or PCs, which means it targets that fuzzy transitional space of people who want “more” than a tablet and “less” than a PC.

Solving both HP’s and Google’s problems is a matter for the cloud.  The thing I think is clear is that we really have to embrace the notion that “the cloud” goes all the way to the edge, to the appliance, in a resource sense.  An application in the future isn’t client/server, it’s cloud.  Functional elements are hosted where they can be, and run as needed.  The difference between a thin or thick client is (you guessed it) thinness or thickness in a function-hosting sense.  Microsoft may be heading in that direction with Office 365, but if you’re a Google or an HP you need to be leading the charge to the new vision, not waiting for Microsoft to pull your chestnuts.

HP has been sadly unfocused on its vision of the cloud, and in some ways their earnings call demonstrates the reason.  They’re all locked up in product-silo profit centers where everyone tries to make their own numbers and the company’s numbers are just what it all adds up to.  You can’t have technical integration of resources and users and not have business integration of the elements that support those communities.  Business integration means one goal, one cloud, for all.

You might wonder where Alcatel-Lucent fits into this picture, so let’s look at them now.  For literally half a decade, Alcatel-Lucent has been a leader in envisioning the service layer of the network.  What is a service layer?  It’s a cooperative community of resources and features/components that provides a means of dynamically assembling an experience from the functionally optimum pieces, then puts each piece in the cost/performance-optimized place to run.  In short, it’s a cloud.  The problem they’ve had is that they didn’t (and still don’t) articulate their cloud story to match their service-layer story.  We’re back to silos in an ecosystemic market.

What’s bringing this to a head for all of the players is the advent of the dynamic network duo of software-defined networking and network functions virtualization.  SDN technology is not going to spread itself over the Internet in a couple years.  Likely never, in fact.  What SDN technology will do is to support ENCLAVE networking, the networking of behind-the-scenes resources and elements that combine to create experiences.  It’s an intra-service-layer technology, and the bigger the service layer is the bigger SDN is.  NFV technology is going to define the framework in which resources and components combine, the architecture of the service layer, because you can’t pull functions from cooperative devices in a network, host them on servers as discrete elements that don’t communicate with each other, and expect you’ll end up with the cooperative behavior you started with.  Even enterprises will be impacted by NFV because NFV is the initiative most likely to create the platform-as-a-service architecture that real cloud applications will be written to.

HP should have seen that, and HP should have pushed itself in that direction while PCs were a cash cow and not carrion.  Google should have seen that and created the framework before they started pushing Chromebooks, before they asked users to buy into something that’s not the answer to the general market needs.  Alcatel-Lucent should have seen that and made their cloud the cloud of the real future and not just another cloud-computing service platform offering.

You can’t talk about how cows digest without talking about grass, you can’t talk about how grass grows without soil chemistry, you can’t talk soil chemistry without understanding the biology of the whole ecosystem whose refuse feeds the soil, and that gets you back to cows.  If the cloud is real, we’re all in it together, and we have to first look up to the cloud if we want to GO up.

Buzzword Games: Mobile, SDN, and NFV

MWC is nearly upon us, so it’s not surprising that we have more mobile-related news than usual.  In fact there are a couple of stories that offer not only the usual mobile/product slant but also say something about the companies making the announcements and the market they’re trying to address.

The first announcement was from IBM, with “MobileFirst”, a sweeping push at the mobile-device-in-the-enterprise space that includes everything from application development to device security.  It’s obvious that IBM is taking the space I’ve called “point-of-activity intelligence” very seriously, looking not only at the issue of how to empower workers with the what-they-need-and-where kind of insight that boosts productivity, but how to manage the delivery.

Mobility and point-of-activity empowerment are critical to that because you can’t make a worker more productive without altering how they work.  And when you do that you collect more information about work processes (big data and “mobile fusion” applications), change how computing relates to its users (the cloud), and change how networks have to support workers and resources (SDN, NFV, mobile/wireline convergence).  None of these things are created by point-of-activity empowerment, but all of them are impacted, and on an upward trajectory.

Another less obvious truth is that IBM is staking out a claim in enterprise mobility, a claim it’s creating and supporting from the software/IT side of the house.  If PofA empowerment is critical and if it boosts productivity benefits, it drives IT spending.  Anywhere more money will be spent is a place where players can gain both profits and market share.  Further, in the never-ending battle for account control between network and IT players, it’s the collision zone.  Rival Cisco’s recent pushes into mobility at the network level signal they want these productivity dollars too, and IBM is making it clear to Cisco that to get those dollars it will need SOFTWARE.  Hey, Cisco, you want to be an IT company?  Start thinking IT in mobile PofA empowerment, says IBM.  Good advice for Cisco, I think.

Juniper also made a mobile announcement, linked to its MobileNext (which was, for the record, announced before “MobileFirst”) approach.  MobileNext has been a bit of a poor stepchild of profit goals as far as Juniper’s relationship with the Street goes, but the fact is that Juniper needs a mobile/metro strategy desperately because that’s where the money is.  Since Juniper doesn’t have radio or IMS capability, they’re at a disadvantage in the mobile space, and so they need to get hot in mobile transport, which in the language of LTE means the Evolved Packet Core and the signaling and data components thereof.

What Juniper has announced is a virtualized form of the signaling-plane elements of EPC on their JunosV App Engine, with help from Hitachi.  The move would make it possible to host these signal functions (MME, SGSN) rather than to employ dedicated appliances to support them, which would make this a credible step toward implementing service provider goals for Network Functions Virtualization (NFV).

Which is where this gets strange—again.  Juniper’s SDN story, just amplified and the subject of several of my blogs, cast service chaining via the JunosV App Engine as the centerpiece of SDN.  It’s clearly an example of NFV, and so is this latest EPC-virtualization step, and yet again Juniper doesn’t mention NFV at all in its release.  Every other telecom equipment vendor I’ve talked to has something to say about NFV and their participation in it.  I get more search hits on my blog on “NFV” as a term than I do on “SDN”.  From an implementation perspective, doing NFV would give you the model to implement the centralized functions of SDN.  So how does this NFV-aversion make sense?  I have no idea, and even a phone conversation with Juniper’s top software guy didn’t resolve my confusion.

Apart from whether it makes sense to prance accidentally into a football game because you’re wearing pads and carrying a ball to a ballroom dance contest, the lack of a forthright NFV position is both a direct technical problem and a symptom of yet another market risk.  You can’t virtualize network functions as a one-off process; you need an architecture to define the component relationships in a cloud-distributed service-layer implementation.  This is something that I think the NFV ISG in ETSI needs to make a priority, because otherwise the efforts of vendors like Juniper (whether they are calling those efforts “NFV” or not) show that we’re going to get NFV-like elements offered as products before they can be put into an architectural context.  That would weaken the foundation of NFV and compromise its “openness” goal.

 

What Enterprises and Operators Want (and Need) From SDN

The question of what buyers might want from SDN is one that will become more important as this year passes, though my model says that SDN won’t be a major purchase factor until next year.  What’s helpful in the “early views” is getting a sense of what issues are resonant, because “issue ownership” is the key to effective positioning.  That’s something that faces every network vendor in the near future, with SDN and in general.

A Barron’s blog this week noted that enterprise IT specialists were generally more favorable toward Cisco’s hardware-centric vision of SDN than toward the Nicira virtual-network overlay model.  That’s consistent with the results I’ve gotten in my surveys of enterprises.  The main problem with the virtual approach for enterprises is that it focuses on segmentation, which is something a cloud provider may need for multi-tenancy but that an enterprise can’t necessarily find much of a use for.  They want SDN to improve performance and efficiency and to lower costs.

The big enterprise SDN question, as I’ve noted in earlier blogs, is how you get SDN out of the data center.  You can significantly improve data center network reliability, efficiency, and operating costs by eliminating the disorder of Ethernet-style bridging between switches.  That’s even more true if you extend the data center across multiple locations, which is fairly common in some industries (banking, health care).  The thing is, data center networking is a switching problem, an Ethernet problem, and vendors would really like to empower more IP because margins are better.  Their challenge is that with most enterprises consuming VPN services in the WAN, they don’t own the WAN infrastructure and thus can’t apply SDN to it.  Which makes “enterprise SDN” a carrier SDN problem.

Carriers are of two minds with SDN.  They see the cloud data center mission as clearly as the enterprises do, and they are increasingly aware of the enormous potential for SDN in the metro area.  The challenge they face with metro SDN is that while they understand from a mechanics perspective what it involves (OpenFlow fusion with optical is the number one area of interest), they’re trying to work through what the architecture of an SDN-based metro network would look like.  The big question-mark for them is how metro intra-cloud connectivity (for CDNs, cloud computing services, NFV, etc.) will mesh with the metro aggregation and wireless backhaul missions.  A close second in issue terms is how an SDN-driven metro network might look in 4G applications.  Is the EPC now “virtual”?

The cloud is of course the point of technical and business convergence for enterprise and carrier SDN.  At the cloud service level, one of the questions that’s coming up (both among enterprise cloud planners and among service provider cloud service leaders) is how cloud computing services will expand into the big leagues, the core application set.  One simple point we uncovered in our fall survey of enterprises is that they’re struggling to understand the relative benefits of virtualization, private cloud, or just componentized (SOA-like) software deployment.  Then they have to worry about how public cloud/hosting might impact this.

If a user decides to create applications that are elastic in terms of number of available instances and fail-safe in that they can be spun up on new resources (including public ones) when old ones fail, that doesn’t demand they have anything cloud-like at all.  Not even in the hosting, I would add.  Yes, the cloud is a more efficient option for application hosting providing that the application isn’t eating an entire server.  If it is, the user is going to “buy” that server completely (or, likely, more than that server in price terms) in their fees.  Public cloud works for low-utilization applications, in short.  Same with private cloud.
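A back-of-the-envelope version of that argument, with made-up prices just to show the shape of the trade-off:

```python
# Illustrative only: the numbers are invented, not real cloud or server economics.
DEDICATED_SERVER_PER_MONTH = 400.0     # hypothetical fully-loaded cost of owning a server
CLOUD_FULL_SERVER_PER_MONTH = 600.0    # hypothetical cost of renting a whole server's capacity


def monthly_cloud_cost(utilization: float) -> float:
    """Cloud cost if you only pay for the fraction of a server you actually use."""
    return CLOUD_FULL_SERVER_PER_MONTH * utilization


for util in (0.1, 0.3, 0.5, 0.67, 0.9):
    cloud = monthly_cloud_cost(util)
    better = "cloud" if cloud < DEDICATED_SERVER_PER_MONTH else "own server"
    print(f"utilization {util:.0%}: cloud ${cloud:.0f} vs owned ${DEDICATED_SERVER_PER_MONTH:.0f} -> {better}")

# With these made-up numbers the crossover is at 400/600, roughly 67% utilization:
# below it the cloud wins, above it you've effectively "bought" the server in fees.
```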

Enterprises are finding that out, and service providers are now asking just what would be needed to offer enterprises a value proposition to move those business-critical and higher-usage applications.  Probably elasticity and operations costs are a big part of any future value, and that may mean that SDN is a part of that future value too.  But what does an SDN service even look like?  Is it just like IP or Ethernet but with different QoS parameters?  Is it perhaps something that’s nearly-Ethernet or nearly-IP but has some different or special features?  Is management more a part of these services, particularly in cloud-friendly form?  How about provisioning and integration?

Security and application acceleration are two areas where both enterprises and service providers believe there’s a major-league opportunity for SDN and even for NFV-type hosting of service intelligence.  But the interesting thing here is that the number of enterprises who say they have seen or heard a smart solution for security or application performance management based on SDN is down below the statistical noise level.  I have yet to see a strategy for either one that was even interesting much less compelling.

A lot to think about, and talk about.  So why not start talking?

 

SDN Missions or SDN Madness?

In my blog yesterday I talked about the risks to SDN, with the largest being a disconnect between SDN missions and SDN implementations.  I got a couple of emails asking just what “SDN missions” might be, and I think that’s a fair question.  Missions reflect a benefit case, after all, and benefit cases are needed to drive SDN deployment.  I’ve been working through a market model of SDN, and while the full set of results won’t be available till next month, I do have some “mission” comments I can make now.

Whichever of the three “models” of SDN architecture you accept (central/OpenFlow, distributed/IP, or Nicira-like), the fact is that all of the models have limitations in terms of mission.  There is absolutely no reason to believe that complete central control of a service network at even the scale of a large enterprise could be made to work.  The notion that SDN might be less costly (operationally or in capex) than current networks can be defended in limited geographies like a data center, and can be postulated with reasonable confidence at the metro level, but it can’t be defended globally or even nationally based on what we have.  If you try to make an SDN value proposition you can make it inside a data center or perhaps in an IP core area (as Google has proved).  For anything broader you’ll have to wait a bit.

This is the core of the mission challenge.  It’s easy to propose to pack the family into the car and head out to Disneyworld, then get everyone to focus on the packing and route planning.  If you didn’t know where you were taking them things might be a bit more challenging.  We really need to have an SDN end-game to justify getting enterprise or operator planners involved in laying out their SDN direction.  Who is articulating that compelling destination?  Vendors like Cisco may offer an architecture for SDN but what are the specific driving benefits, the utopian future vision?

Looking at devices would be helpful in gaining a sense of SDN evolution.  If you want to do more than experiment with SDN today or deploy it in a data center, you have to do so by adding SDN/OpenFlow features to current hardware.  Well, that pretty well eradicates the cost advantage, doesn’t it?  I buy what I always did, I add a layer of SDN to it, and somehow that becomes cheaper?  How about benefits beyond device cost?  The original goal of SDN (before we SDN-washed everything we could get our hands on) was to centralize network control so as to improve utilization and operations efficiency.  Can that be done?  We don’t really have a complete picture to measure the prospective outcome.  And few vendors have articulated a complete vision of even a limited SDN mission.  None have articulated a complete vision of a universal SDN mission.  If SDN isn’t universal in scope, can it have profound operations benefits?

There are clear SDN values, real missions.  You can utilize SDN to improve traffic management in data centers and core networks to enhance availability.  You can utilize SDN to combine provisioned services with best-efforts infrastructure if you’re an operator.  You can tie SDN easily to tenants in a public cloud and to applications in a private cloud.  We could build a whole new vision of security based on SDN, a new model for cloud service infrastructure.  So where are the models for these things?  You tell me. Only a couple vendors have met even a basic test of functionality in their briefings to me on SDN.  Most tell me that other pieces are from other vendors.  In the end, nearly all vendors tell us that they are supporting the cloud revolution with their SDN plan, but if you define anything that connects software as SDN you’ve set a pretty low bar for yourself in supporting revolutions.

So am I becoming an SDN skeptic?  No, I’m becoming hype-shy, having been bitten multiple times.  I think, based on what I’ve gotten from buyers and vendors, I could draw out an “SDN destination plan” for the missions I outlined above.  I think others could do the same thing, but I don’t think we’re demanding that anyone do that.  We are so quick to declare SDN utility and interest because SDN is “new” and “revolutionary” that we aren’t insisting vendors offer more in SDN than simple entertainment.  Maybe that Disneyworld analogy was better than I thought.  SDN Fantasyland, anyone?

Assessing the REAL Risks to SDN

There’s been some talk recently about the risks to SDN, with the focus being that vendors will push their own proprietary visions and create something that’s not interoperable.  There’s a principle in risk management that says that it’s not helpful to attempt to mitigate risks that are below the level of risks you’re already accepting.  That’s a good guideline for SDN adoption, I think, because when you apply it you find that what seemed risky before isn’t so risky.  You also find out that there are risks, major ones, you hadn’t considered.

Let’s start with the risks frequently discussed—primarily that of SDNs becoming another proprietary pond.  The concept of “openness” is best applied to interfaces and protocols, meaning that an “open” architecture is one that relies on published/standardized interfaces and protocols.  Is SDN at risk of being less open?  It’s hard to see how.  Networks expose a bewildering array of these interfaces and protocols, and even today there are many that are non-standard.  For example, while most of the protocols used in IP networks today are standardized and thus open and interoperable, most of the INTERFACES are management APIs that are proprietary.

Will SDN interfaces be different?  Vendors today provide proprietary interfaces to their own devices because their devices are built to be different—to be differentiated.  Whether device management in an SDN world can be different will depend on whether SDN devices are standardized.  What I’ve called the “centralized” model of SDN could use OpenFlow to control forwarding, but there are no adequate device management standards for an OpenFlow device today.  If you’re running OpenFlow today you’re probably managing the devices with the same vendor-specific tools, because you’re probably implementing OpenFlow on traditional switches and routers.  SDN doesn’t add to risk here; it’s the same risk.

You could argue (and some will) that SDN creates a new level of interfaces, the interfaces designed for software control.  Even there I disagree.  There are two relevant sources of software control for networking—the Quantum interface of OpenStack and the generalized set of DevOps tools that are used as part of application lifecycle management in general, and increasingly for the cloud.  Quantum has a two-level model of plug-in, where a general tool is augmented with a set of vendor-specific interfaces and logic below.  We have support for this model now; it works for non-SDN networks, and it would work for SDN too.
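To illustrate the two-level idea, here’s a minimal sketch that mimics the shape of the Quantum plug-in model: a generic front end that cloud orchestration calls, delegating to a vendor-specific driver underneath.  The class and method names are invented for illustration and are not the real Quantum API.

```python
# Illustrative two-level plug-in: generic network service on top,
# vendor-specific drivers below.
from abc import ABC, abstractmethod


class VendorDriver(ABC):
    """The lower level: vendor-specific logic, one per switch or SDN product."""

    @abstractmethod
    def create_segment(self, name: str) -> str:
        ...


class OpenFlowDriver(VendorDriver):
    def create_segment(self, name: str) -> str:
        # Would program flow rules via a controller; here we just pretend.
        return f"openflow-segment:{name}"


class LegacyVlanDriver(VendorDriver):
    def create_segment(self, name: str) -> str:
        # Would configure VLANs through a vendor management API.
        return f"vlan-segment:{name}"


class NetworkService:
    """The upper level: what cloud orchestration (or DevOps tooling) calls."""

    def __init__(self, driver: VendorDriver):
        self.driver = driver

    def create_network(self, name: str) -> str:
        return self.driver.create_segment(name)


# The same call works whether the network underneath is SDN or not.
print(NetworkService(OpenFlowDriver()).create_network("tenant-a"))
print(NetworkService(LegacyVlanDriver()).create_network("tenant-a"))
```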

At the protocol level there is some risk of Balkanization that’s arising from the fact that consensus processes like standardization inevitably lag market requirements these days.  In part this is due to the fact that vendors probably work to rig the process for delay so they can differentiate at will, but I’ve worked enough with standards groups to know that delay is institutional there.  You don’t need deliberate obstruction.  OpenFlow doesn’t address management.  Given that, vendors will HAVE to find their own solutions because it’s unrealistic to assume the buyers will wait until standards are completely mature (most won’t even live that long!).

So are there no risks?  There are risks aplenty, they’re at a higher level than people talk about, and they are potentially more serious and insidious than any of the ones that are typically listed.

The primary risk is MISSION INCOMPATIBILITY WITH IMPLEMENTATIONS.  SDN is an architecture that, like all architectures, balances benefits against costs/trade-offs.  Cars and trucks have common components but different missions, and if you get one when you needed the other no amount of standardized pieces is going to take you back to the right choice.  I’ve pointed out that there are three widely accepted “SDN models”: the virtual-network-overlay Nicira model, the purist/centralized OpenFlow model, and the distributed/evolutionary IETF-favored model.  All of these produce what buyers call SDN, but none of them produce the same capability set and trade-offs the others do.  What point is there in talking about standardized implementation at the protocol/interface level when you’re not equivalent in terms of how the architecture is applied?

The second real SDN risk is in LACK OF CONFORMANCE WITH CLOUD EVOLUTION.  The cloud is what everyone agrees is driving SDN, but what does that mean in terms of where SDN is going?  It means we need to know where the cloud is going.  We have two trends in “the cloud” that are critical:

  • Applications of the future will be written for the presumption of hybrid cloud hosting.  How?  What is it that makes an application a native cloud app?  If we don’t know that then we don’t know whether SDN is tracking cloud progress.  If SDN isn’t tracking cloud progress, then SDN is disconnecting from its primary driver.  Much of today’s SDN focus is the cloud, and we obviously need to know where it’s going to know where SDN has to take us.
  • Services of the future will be built on network functions virtualization.  Like it or not, intentionally or not, network operators are building in NFV a model of distributed functionality, a “service layer” that defines network services in terms of basic devices and complex hosted/orchestrated elements.  There are already some operators (and vendors) looking at how orchestrated feature requirements influence connections and traffic flows.

If vendors articulated a full SDN architecture, something that covered all of the functional zones of SDN that I’ve outlined in my SDN tutorials on YouTube, then we could determine from their positioning just how they addressed these SDN risks.  Absent such a vision we have problems with SDN at a higher layer, not down where the focus of compatibility and interoperability is today.  We should be holding vendors accountable for addressing these functional zones in detail, and we are not.  THAT’s the big risk of SDN.

 

Brocade’s Maybe-Path-to-Greatness

Brocade announced its earnings yesterday, and they generally exceeded Street estimates.  I was interested in their quarter and their comments on it because Brocade is arguably the first of the second tier in network vendors, a player with a real chance to become a giant if they play their cards right.

The key comment from Brocade’s call was “Disruption in IP Networking is also creating business opportunities which Brocade is well positioned to address.”  That is absolutely true, because the growth of “the cloud” and the collateral focus of network operators on shifting logic out of devices and into servers is a fundamental change.  It’s also true that Brocade is in a good position to address the change because they have powerful Layer 2/3 devices (sales for which were up 14%, making it their leading product area).  But telling the truth may set you free; it doesn’t necessarily make you rich.

Here’s the formula for a Brocade success.  Think of it as a one-two punch in boxing.  Punch number one is HIT THE DATA CENTER CLOUD EVOLUTION and punch number two is HIT THE METRO CLOUD EVOLUTION.  They have the pieces to do this, but not yet the positioning.

Brocade’s VDX fabric is an excellent foundation for an evolution to the cloud, but if you look at their website you find, on the VDX 8770 page, a single comment about the cloud:  “Simplifies network architectures and enables elastic cloud networking with Brocade VCS Fabric technology.”  HO HUM!  Here’s the thing.  The data center is the hub of enterprise network spending.  Develop a compelling and differentiable data center positioning and you own the whole of the enterprise market.  The data center is also the hub of network operator service-layer spending.  Cloud data centers in metro areas will grow to include hosting for network virtual functions and for content and cloud services, and eventually the whole of the metro will be a logical or virtual cloud.  The cloud needs to be the focus of Brocade’s application of fabric technology, not just a throw-away line in a website bullet list.

Brocade’s Application Delivery Switches (ADX) are just about perfect examples of what a “gateway” in an OpenStack Quantum virtual network might look like.  The problem is that this particular application isn’t mentioned at all in Brocade’s webpage on the ADX, nor is it in the cloud-oriented white paper.  Enterprise networking in the future is arguably a combination of an ADX and a VDX in terms of block diagram, and that’s the combination that needs to be integrated.

SDN is likely a way of doing the integration.  Brocade has committed to support for OpenFlow on its MLX and VDX lines but there’s nothing in their latest white paper about the ADX.  The thing is that if you segment enterprise data center LANs you have to think of something innovative to do with the segmentation, otherwise it’s a nice figure on a data center block diagram.  Multi-tenancy isn’t an issue, so application-based segmentation is the logical strategy.  What better way to link that strategy out to workers than to create job-specific worker groupings in offices, link those to application-specific segments in the data center, and then join workers onto the job-group to obtain access to the applications that their type of job requires?  An ADX would be a nice way of doing that.
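In the simplest possible terms, that job-group-to-segment mapping might look like the sketch below.  The group names and segment IDs are invented for illustration; an ADX-style edge device would be the natural place to enforce such a mapping.

```python
# Illustrative mapping of job-specific worker groups to application-specific
# data center segments; a worker's access is the union of their groups' segments.
from typing import List, Set

JOB_GROUP_SEGMENTS = {
    "claims-processing": ["seg-claims-app", "seg-imaging"],
    "trading-desk":      ["seg-order-mgmt", "seg-market-data"],
    "branch-teller":     ["seg-core-banking"],
}


def segments_for_worker(job_groups: List[str]) -> Set[str]:
    """Resolve the data-center segments a worker may reach from their job groups."""
    allowed: Set[str] = set()
    for group in job_groups:
        allowed.update(JOB_GROUP_SEGMENTS.get(group, []))
    return allowed


# A worker who belongs to two job groups gets access to both sets of segments.
print(segments_for_worker(["claims-processing", "branch-teller"]))
```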

In the metro area, the thing I think everyone tends to miss is that there will be an explosion in the number of “data centers” created by operators’ decisions to host virtual functions.  The total addressable data center network market for metro-cloud data centers could be an order of magnitude greater than the enterprise market for fabric switches.  The ADX could serve as an on-ramp for Network Functions Virtualization data centers too.  So you can see that if you had a killer VDX strategy in the data center and a killer ADX strategy at the edge, you could tout support for the cloud, for SDN, and for NFV.  Brocade might well have that potential, but they aren’t positioning it in a compelling way.  You have to dig a bit to find references to “SDN” (it’s spelled out once in the “Solutions and Technologies” tab but it’s not there in its familiar initials form).   Nothing is said about NFV at all on the site, and Brocade could have a very good story there, something that’s essential to get all those metro-cloud data centers into your “Win” column.

Brocade has some really good SDN ideas; they have an SDN community site that focuses their outbound marketing material and would give them a darn nice bully pulpit to drive a strong SDN and NFV message, but the site doesn’t go quite far enough in positioning cloud, SDN, NFV, fabric, application delivery, and metro in one grand design (or two, if you want enterprise segmentation of the message versus operator).  If you’re the top hitter in the best of the minors, you break into the majors by taking advantage of the mistakes of the leaders.

And they’ve made them.  Cisco and Juniper are both very tentative about SDN because they have to be to avoid overhanging the router sales that are critical to them.  Juniper, as the smaller of the two, is Brocade’s logical target if they want to gain market share.  On the earnings call, Brocade’s CEO (Lloyd Carney, who came from Juniper) counterpunched with Juniper’s QFabric positioning, which is too narrow a focus to win with.  Juniper’s vulnerability lies in the combination of a need to defend routers and a single-minded focus on products when they should be talking ecosystems.  You don’t fight them by jumping into the path of their swing with a product comment, you fight them by picking your own optimum openings based on their weakness in STRATEGY.

Brocade is doing decently in the current market, growing in the switching/routing area.  They have fabric, they have application delivery, they have SDN and an SDN pulpit.  Go for it.