Cisco’s Clear Goal and Hazy Path

Cisco reported its numbers, and it’s fair to say they showed a company with promise but offered no guarantee that the promise will be met.  To me, a single comment from CEO John Chambers sets the tone:  “Cisco’s strategy, delivering integrated architectures that address the top business opportunities and the biggest market transitions, is differentiating us from our peers and will help enable us to achieve our goal of becoming the number-one IT company in the world.”  To read Cisco’s tea leaves we have to dissect this statement.

Let’s start with the GOAL, which is clearly to become the number one IT company in the world.  This, to me, is what differentiates Cisco from the rest of the pack of network vendors.  They alone accept the reality that network equipment cannot drive markets or sustain margins, and that there’s nothing that can be done, now or ever, to change that.  This is the “biggest market transition”: a shift from networking BEING the value to networking DELIVERING the value.  IT creates the services of the future, and the services are the drivers of profits and productivity.

Now we can look at the path to fulfillment.  It’s “delivering integrated architectures that address the top business opportunities and the biggest market transitions”.  To me this shows that Cisco is aware that networking IS becoming an IT game, and that this shift will create both a risk for traditional networking and an opportunity for a new kind of networking.  Cisco wants to jump on the latter to overcome the problems created by the former.

The goal is strong and in my view unarguable.  The path is the question.  Cisco, like everyone, must start the transition from where they are now.  For Cisco, that’s as an incumbent in a space that’s at tremendous near-term value risk—bit-pushing.  They want to get to a place where value can be sustained, the space they loosely call “IT”.  But IT isn’t a house, it’s an apartment building.  Living in IT is a value space to be sure, but right next door is another value trap.  Computer hardware is in the same place networking is.  That’s Cisco’s first big risk.

Cisco has servers, UCS.  What Cisco doesn’t have is software in the real sense.  Yes, they have software products, but even software is a dorm and not a comfy apartment.  In the software space we have a host of operations and other “system” software that has little better hope of generating margins than hardware does.  We also have things like UC that have struggled for a decade for relevance, and that are succeeding only to the extent that they’re cheap.  Lacking any real software to drive productivity, Cisco is unable to deliver new benefits for its servers.  That confines it to network-related applications, a good space to jump off from but not a good one to bet your future on.

Network-related applications present the second big risk for Cisco.  Network value is in fact shifting toward cloud-hosted service-layer functionality.  Right now Cisco could exploit that shift to expand the pool of “network-related applications”, but whatever value leaves networking in the process is value lost to Cisco as the major incumbent.  So Cisco is of two minds: do I defend devices and slow the migration of functionality to the cloud, or do I exploit the opportunity that migration creates for UCS?  It’s a clear choice, and it’s the choice Cisco doesn’t talk about because they’ve not made it.

Cisco’s SDN strategy is to build a kind of fence of APIs with onePK, which creates the “software-defining” part of SDN by providing service-control links.  Inside the black box of the APIs, Cisco has been working to promote an SDN vision that redefines what “SDN” stands for.  You think it’s “software-defined networking”?  The IETF material calls it “software-DRIVEN networking”.  The distinction is that the IETF model for SDN retains most or even all of the current IP universe, and focuses on “bridling” or bounding it by providing mechanisms for software to exercise direct control.  This is the “distributed” SDN model.  Cisco doesn’t call this out, but I think their ring-of-APIs approach is a good and clever way of addressing the shift of value to software while keeping as much of that value as possible safely inside the current boxes.
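
To make the “bridling” idea concrete, here’s a minimal sketch of a service-control link in that distributed model: software reads device state through an API layer and injects control decisions, while the device keeps its normal distributed behavior underneath.  The names are my own inventions for illustration, not Cisco’s onePK API.

```python
# Hypothetical sketch of "software-DRIVEN networking": an application
# steers an existing device through a ring of APIs rather than replacing
# its control plane. All names are illustrative, not Cisco's onePK API.

class DeviceAPIRing:
    """Stands in for a vendor's API layer around a router/switch."""
    def __init__(self, name):
        self.name = name
        self.policy_routes = {}          # software-injected overrides

    def read_state(self):
        # A real device would expose RIB/FIB entries, link load, etc.
        return {"device": self.name,
                "links": {"ge-0/0/0": 0.82, "ge-0/0/1": 0.15}}

    def inject_route(self, prefix, next_hop):
        # Override the device's own (distributed) routing decision.
        self.policy_routes[prefix] = next_hop

class SteeringApp:
    """The 'software' in software-driven networking."""
    def __init__(self, devices):
        self.devices = devices

    def rebalance(self, prefix):
        for dev in self.devices:
            state = dev.read_state()
            # Pick the least-loaded link as the next hop for this prefix.
            best = min(state["links"], key=state["links"].get)
            dev.inject_route(prefix, best)

edge = DeviceAPIRing("edge-router-1")
SteeringApp([edge]).rebalance("203.0.113.0/24")
print(edge.policy_routes)   # {'203.0.113.0/24': 'ge-0/0/1'}
```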

Look at Cisco’s numbers and you see the issue.  They’re more exposed in switching and routing than anywhere else, and yet that’s where their money comes from.  If they can’t hold margins on those devices, they run the risk of being the broad-jumper who tries to jump from the loose sand inside the pit instead of from the firm edge.  They need a foundation for their future, and it has to be where they earn their money in the present.

In carrier networking, the major risk battleground for Cisco is metro, and that’s why I think Ericsson’s SDN metro use case is so important.  If somebody big, somebody with good mobile credentials, gets metro just right, then Cisco can’t sell gear in the market where most gear will be installed, and can’t link cloud and mobile SERVICE value as easily to its products.  In the enterprise the battleground is the data center, where the cloud defines both the service of the network and the abstract infrastructure that fulfills user demands.  There, startups like Plexxi and incumbents like Brocade have a chance to take control of the key issue, which is how cloud data centers connect—to each other and to users.

Cisco’s stock price is just about flat in pre-market trading today.  That probably reflects the fact that financial analysts are not confident Cisco can reach its IT-number-one goal, not because the goal is unattainable but because they doubt Cisco can balance the need for a hard take-off point and a soft landing point in jumping between the old network and the new.

Is Ericsson in the Lead in SDN?

It’s nice to have some good news on the SDN front.  Ericsson took the opportunity to present its SDN strategy to me as a ramp-up to demonstrating some of the key elements at MWC in Barcelona the week of February 25th.  They also announced a cloud strategy that includes support for OpenStack, which is important because the cloud is the primary driver of SDN.  I’ve been looking forward to getting some SDN detail, because Ericsson is obviously a key network vendor and they’ve not said much in public about their plans.  As it happens, they have a good approach—arguably the most cohesive of any of the major vendors, and certainly more detailed than either Cisco’s or Juniper’s SDN stories of the last two weeks.

The foundation of Ericsson’s SDN vision is something their CTO said some time ago: that there is a difference between “SDN” in the basic industry sense and “SDN” in the carrier sense.  What that means to Ericsson is that SDN ties not only to the network in the abstract, but also to all of the legacy technologies currently used to create services and all of the OSS/BSS processes used to create the critical monetization cycle.  Monetization has to parallel any infrastructure investment cycle or there’s no ROI to drive the investment in the first place.  Not only is this a “holistic” SDN vision, it’s a holistic carrier monetization vision that envelops SDN, and that’s key because carriers are in the profit business, not the SDN business.

Ericsson’s SDN architecture has three layers: “Service Exposure and Application Ecosystem”, “Network and Cloud Management”, and “Common Control Plane and Protocols”.  What’s different about their vision is that all these layers contain both SDN and “legacy” elements, and their vision of an SDN network includes not only OpenFlow but the Policy and IMS protocols as well.  SDN is part of network infrastructure, in every respect.  Rather than having SDN technology create another ship in the capital/operations night, they put it aboard a functional framework with everything else, at every level.

The top layer of this is a set of what Ericsson calls “composed APIs”, which are effectively examples of the “Façade” design pattern: an API that stands in place of, and orchestrates, a series of lower-level APIs.  Ericsson’s goal is to make these APIs business-friendly so that developers/partners can build services without being exposed to low-level details of network operations that would raise developer costs, delay projects, and introduce potential stability/security risks.  Ericsson also plans to expose OSS/BSS elements like service metadata this way, which will allow developer APIs to tap into service/monetization management processes in parallel with the service creation activities normally represented here.
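
Since Ericsson invokes the Façade pattern explicitly, a quick sketch may help.  This shows the pattern itself, with invented interfaces rather than Ericsson’s actual composed APIs: one business-level call hides, and orchestrates, the low-level network and OSS/BSS calls beneath it.

```python
# Facade pattern sketch: a 'composed API' a developer/partner calls,
# which orchestrates lower-level network and OSS/BSS APIs underneath.
# All names are illustrative, not Ericsson's actual interfaces.

class NetworkControl:
    def allocate_path(self, src, dst, bandwidth_mbps):
        return f"path:{src}->{dst}@{bandwidth_mbps}Mbps"

class PolicyControl:
    def attach_qos(self, path, profile):
        return f"{path}+qos:{profile}"

class OssBss:
    def open_billing_record(self, customer, service_id):
        return f"billing:{customer}:{service_id}"

class ComposedServiceAPI:
    """The facade: one business-friendly call, many low-level ones."""
    def __init__(self):
        self.net, self.policy, self.oss = NetworkControl(), PolicyControl(), OssBss()

    def create_video_service(self, customer, src, dst):
        path = self.net.allocate_path(src, dst, bandwidth_mbps=50)
        path = self.policy.attach_qos(path, profile="video")
        record = self.oss.open_billing_record(customer, service_id=path)
        return {"service": path, "billing": record}

print(ComposedServiceAPI().create_video_service("acme-corp", "siteA", "siteB"))
```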

The Common Control Plane strategy makes it unnecessary to separately postulate an SDN architecture that offers control of the network while building on current devices and protocols, the “distributed SDN” model I’ve blogged about.  If standards emerge that allow such a model to evolve, that’s fine, but even if they don’t, the “services” and “routes” created by the Ericsson SDN model can combine OpenFlow with everything else, from RAN management to TDM trunks.  That also means that if OpenFlow standards evolve to provide for things like optical-path management, it’s not difficult to accommodate the shift.  Management processes operate through their own APIs to the Common Control Plane, which can exercise control via any or all of the protocols available.

The Common Control Plane notion and the fact that it would be routine for OSS/BSS and NMS processes to share the Network and Cloud Management layer mean that Ericsson has the hooks needed to receive an abstract SDN service model, to build the global network map and populate it with network status, to integrate IT resource status into the map, and to make network route decisions based on that combination.  As I’ve noted, they can then execute on the decisions using any of the protocols that link to the Common Control Plane, including the policy and gateway management protocols used in mobile/IMS.  Hold that “mobile” point; I’m coming back to it.
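
Here’s my own minimal sketch of the decision pipeline that combination implies; the structures and names are my illustration, not Ericsson’s design.  An abstract service model meets a status-populated network map, and the resulting decision is executed through whichever control protocol governs each resource.

```python
# Sketch of the decision pipeline implied above: an abstract service
# model meets a network map populated with status, and the resulting
# route decision is executed through whichever control protocol fits.
# Structures and names are my illustration, not Ericsson's design.

network_map = {
    # link: (protocol that controls it, available capacity in Mbps)
    ("coreA", "coreB"): ("openflow", 400),
    ("coreA", "ran1"):  ("mobile-policy", 150),
    ("coreB", "tdm1"):  ("tdm-mgmt", 45),
}

def decide_route(service_model, network_map):
    """Pick links with enough capacity for the requested service."""
    needed = service_model["bandwidth_mbps"]
    return [link for link, (_, cap) in network_map.items() if cap >= needed]

def execute(links, network_map):
    """Push the decision via the protocol that governs each link."""
    for link in links:
        protocol, _ = network_map[link]
        print(f"provision {link[0]}->{link[1]} via {protocol}")

service = {"name": "enterprise-vpn", "bandwidth_mbps": 100}
execute(decide_route(service, network_map), network_map)
# provision coreA->coreB via openflow
# provision coreA->ran1 via mobile-policy
```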

At MWC, Ericsson is going to present two use cases for their SDN model.  One is an implementation of the same “service chaining” that Juniper claims is the key to centralized control (see my blog yesterday).  Ericsson, on my briefing call, correctly characterized this as a combination of SDN (for connectivity) and Network Functions Virtualization (NFV), and they indicated they were active in the NFV space.  In their MWC demo they link the chained services via the SDN controller.  They also said they were in the process of providing a common virtual platform for hosting the service logic, one that would offer a generic x86 cloud platform option alongside the current purpose-built appliances.
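
For readers who haven’t followed the service-chaining discussion, a toy sketch may help.  It shows the concept only, with invented functions, not Ericsson’s (or Juniper’s) implementation: NFV supplies the hosted functions, and the SDN controller’s role is to fix the order traffic traverses them.

```python
# Minimal service-chaining sketch: NFV supplies hosted functions, SDN
# supplies the connectivity that threads a flow through them in order.
# Purely illustrative; not Ericsson's or Juniper's implementation.

def firewall(packet):
    return None if packet.get("port") == 23 else packet   # drop telnet

def nat(packet):
    packet["src"] = "198.51.100.1"                         # rewrite source
    return packet

def make_chain(*functions):
    """The 'SDN controller' role: fix the order traffic traverses."""
    def chained(packet):
        for fn in functions:
            packet = fn(packet)
            if packet is None:          # a function dropped the packet
                return None
        return packet
    return chained

chain = make_chain(firewall, nat)
print(chain({"src": "10.0.0.5", "port": 80}))   # passes, source rewritten
print(chain({"src": "10.0.0.5", "port": 23}))   # dropped by firewall -> None
```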

The second use case is what gets me back to my “mobile” point.  In this use case, Ericsson demonstrates the use of OpenFlow to create a metro/IP aggregation network that has the scalability benefits of IP but uses the centralized management of OpenFlow to control operations cost and complexity.  This use case makes the entire metro network look like a virtual IP router, connected upstream to the real service edge (Ericsson SSRs) and downstream to mobile, wireline residential, and business users.  Given that the Ericsson approach integrates the SDN elements with IMS elements, the configuration would support applications that merge metro configuration (including things like caching mobile content) and mobile/IMS control-plane elements with overall IMS admission control and policy management.  Not all of this will be shown in the demo (obviously), but the virtual-router metro-network model alone is a critical credibility asset.
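
A minimal sketch of that virtual-router abstraction, with invented structures since the demo internals aren’t public: many downstream ports, one logical device, and match/action entries standing in for per-box distributed routing.

```python
# Sketch of the virtual-router metro model: many access-side ports, one
# logical device, with OpenFlow-style match/action entries standing in
# for what would otherwise be per-box distributed routing. Illustrative
# only; not the actual internals of Ericsson's demo.

class VirtualMetroRouter:
    def __init__(self, upstream_edge):
        self.upstream = upstream_edge          # e.g., the service-edge SSR
        self.flow_table = []                   # (match-fn, out-port) pairs

    def add_flow(self, match, out_port):
        self.flow_table.append((match, out_port))

    def forward(self, packet):
        for match, out_port in self.flow_table:
            if match(packet):
                return out_port
        return self.upstream                   # default: send upstream

metro = VirtualMetroRouter(upstream_edge="ssr-1")
# Keep cached mobile content local instead of hauling it to the edge:
metro.add_flow(lambda p: p["dst"] == "cache.metro.local", "cdn-port-3")

print(metro.forward({"dst": "cache.metro.local"}))   # cdn-port-3
print(metro.forward({"dst": "example.com"}))         # ssr-1
```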

I doubt anyone who reads my blog fails to understand how much importance I assign to the metro application of SDN.  Metro is where the money is, where infrastructure investment remains strong, and where changes in demand and mobile technology are driving the introduction of new capacity and new devices.  This is the kind of fertile field SDN needs to make it big, and obviously the kind a vendor needs to make a big success out of SDN.  Ericsson is the first vendor to talk with me specifically about the metro-SDN application, and, as I’ve noted, the first to really show some of that “inside-the-black-box” SDN detail.  Given all of this, I wonder whether the MWC SDN demos might be where we start to see SDN-washing and SDN implementations separate.

Might We “Wash” the Good Out of SDN?

One of the frustrations of the SDN space (and, sadly, of a lot of spaces in the world of tech) is that companies sometimes get so obsessed with washing their products in the sacred water of some new (and press-worthy) technology that they miss a chance to make a good and valuable point about what they have.  We see two examples of this phenomenon, one from F5 and another from Juniper.

F5 announced it was acquiring LineRate, a company that says it specializes in “Layer 7+” SDN.  The problem with this, of course, is that 1) there are only seven layers in the OSI model, 2) the top four layers aren’t part of the network, they’re part of the endpoint, and 3) the widely accepted goals of SDN apply to the lower layers, the three that really make up the network.  That’s one reason I personally resist labeling the overlay-segmentation approach of firms like Nicira as “SDN”; they’re “software-defined connectivity”.

The irony is that underneath all this fuzzy logic LineRate has a real point about applying SDN, particularly in its “purist” or centralized/OpenFlow form.  Remember, SDN is really about how the network forwards packets, and there are a lot of things that happen in the network that aren’t packet forwarding in a direct sense.

If we go back to the OpenStack Quantum virtual network model, we see that a virtual network to them is a Level 2 VLAN with some service-creating elements attached.  In most cases, this model would define only the network inside the cloud data center.  If we look at LineRate’s approach, we can see that their vision of things like load-sharing fits perfectly as a feature of one of those “service-creating elements” of a Quantum VLAN, to wit the “gateway” that links it to the outside world—to the users.
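
As a sketch of that model (with illustrative field names, not the actual Quantum API schema), a virtual network is just an L2 segment with service elements attached, and a LineRate-style load balancer slots in as one more element:

```python
# Data-model sketch of the Quantum-style virtual network described
# above: a Level 2 segment plus attached 'service-creating elements',
# one of which is the gateway to the outside world. Field names are
# illustrative, not the actual Quantum API schema.

from dataclasses import dataclass, field

@dataclass
class ServiceElement:
    kind: str            # "gateway", "load-balancer", "dhcp", ...
    config: dict = field(default_factory=dict)

@dataclass
class VirtualNetwork:
    name: str
    vlan_id: int
    elements: list = field(default_factory=list)

    def attach(self, element):
        self.elements.append(element)

net = VirtualNetwork(name="tenant-42-app-tier", vlan_id=1042)
net.attach(ServiceElement("gateway", {"uplink": "provider-net"}))
# A LineRate-style L7 load balancer fits naturally as another element:
net.attach(ServiceElement("load-balancer", {"vip": "10.42.0.10", "pool": 8}))

print([e.kind for e in net.elements])   # ['gateway', 'load-balancer']
```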

It’s also fair to say that any network technology that segments the data center LAN is either designed for multi-tenancy or is going to have to segment on something other than tenants.  If it’s the former, it’s a pure cloud-provider approach; if it’s the latter, then the only thing left to segment on is applications.  And if you’re going to segment the data center by application, you could also think about segmenting WAN traffic by application, which is exactly what LineRate’s application performance management tools would do.

My point is that LineRate (and now F5) has a good position in the cloud.  They might even have a role in connecting user traffic to an SDN, and they certainly provide a reasonable NFV story.  So why not say all of this?  Hey, I just said it in less than 500 words, and if I can, so can competitors.

Getting taken out with the SDN wash isn’t just a LineRate/F5 problem.  Juniper released a new white paper in advance of an analyst event yesterday (which I missed because it’s too far to travel for a half-day event) to talk in more detail about its SDN strategy.  The paper turns out to be a reprise of VP Bob Muglia’s blog post after the partner event where Juniper introduced its SDN position.

The central point in any vendor’s SDN position is its implementation.  It’s easy to talk about SDN goals and philosophies, but at some point you have to get on the road and travel.  In the Juniper paper, that point is covered by the section “Getting from Here to There”.  Essentially it offers four steps to SDN: centralize management, extract services, centralize control, and optimize network and security hardware.  While some of these steps aren’t directly relevant to SDN implementation in my view, Juniper does at least include the truly critical element of any SDN strategy—centralizing control.

The thing is, it’s in this third and critical step, centralized control, where things really get squirrely.  For “centralize control”, Juniper jumps into a discussion of “service chaining”, or linking the virtual functions that Network Functions Virtualization creates into a sequence that creates an experience.  I blogged about this the first time Juniper referenced the concept at its partner conference, pointing out that it was really about NFV.  But in this context, the key point is that service chaining doesn’t define how you centralize control of forwarding, and that’s what a separate, centralized control plane does.

And here’s Juniper’s irony.  Juniper’s PSD CTO Michael Beesley did an SDN talk in Tokyo, and the slides were posted online.  In this talk, he offered a picture of SDN on Slide 9 that I think captures what SDN really has to be for Juniper: a fusion of current network architecture and centralized control using some emerging IETF protocols.  On Slide 13 the presentation describes “The Juniper Architecture” in a nice, explicit way—more explicit than most of Juniper’s rivals manage.  Juniper also has a solution brief, “Software Defined Networking: Extracting More Value from Networking Infrastructure with SDN”, from October 2012 that has the same elements and structure as the talk.  There is nothing about service chaining in either the slide deck or the solution brief, but there is detail on what Juniper sees as the core of SDN—changes to Internet protocols to centralize knowledge of network state (BGP, ALTO) and the use of MPLS and OpenFlow to selectively control traffic.  This is what I’ve called the “distributed” model of SDN, because it retains a reliance on current IP device behavior and adds an overlay for control.  It’s a perfectly good model, and Juniper has apparently thought about it in detail, and even presented that detail.  So why not present it now?  As was the case with LineRate, Juniper risks a real story getting brushed aside by an SDN-wash.
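
To show why the distinction matters, here’s a minimal sketch of the distributed model as I read it from the deck; the code is my illustration, not Juniper’s implementation.  The controller centralizes knowledge of state (the role the BGP/ALTO feeds would play) and intervenes only for selected traffic (the MPLS/OpenFlow role), leaving default forwarding alone.

```python
# Sketch of the 'distributed' SDN model: the controller centralizes
# knowledge of network state (the role BGP/ALTO feeds would play) and
# intervenes only for selected traffic (the MPLS/OpenFlow role), while
# devices keep forwarding normally for everything else. Illustrative,
# not Juniper's implementation.

class StateCollector:
    """Stands in for BGP/ALTO feeds of topology and cost."""
    def snapshot(self):
        return {"path-a": {"cost": 10, "loaded": True},
                "path-b": {"cost": 30, "loaded": False}}

class SelectiveController:
    def __init__(self, collector):
        self.collector = collector
        self.overrides = {}            # flow -> engineered path

    def engineer(self, flow):
        state = self.collector.snapshot()
        # Only override when the default (cheapest) path is congested.
        default = min(state, key=lambda p: state[p]["cost"])
        if state[default]["loaded"]:
            alternate = min((p for p in state if not state[p]["loaded"]),
                            key=lambda p: state[p]["cost"])
            self.overrides[flow] = alternate   # pushed via MPLS/OpenFlow
        return self.overrides.get(flow, default)

ctl = SelectiveController(StateCollector())
print(ctl.engineer("video-flow-1"))   # path-b: the default was congested
```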

As an industry, we’re way too easy in accepting extensions to basic ideas like SDN.  SDN started with firm principles and goals, and it’s now just a big attractive industry billboard.  That puts us at risk of achieving SDN by “acclamation”: everyone declares they support it, we stop developing anything because, hey, we’ve already met the goal, and what we’ve done all along becomes the “new network”.  Is that what we want?  Not me, but you can speak for yourselves.

The Data Center Network and the Metro Cloud

Mellanox Technologies just announced a couple of “data center connect” systems designed to let companies use fiber to link InfiniBand switches, creating linked “cloud” data centers at distances of up to about 50 miles.  This development makes what I think are two key points about cloud computing: first, we need to think outside the data center when we talk about it, and second, a switch technology (InfiniBand) has been available to support it for some time now, and it hasn’t set the world on fire.

Let’s take that second point first.  InfiniBand has exactly the same mission as a general “fabric” switch.  Companies like Mellanox have been supporting it for over a decade, and probably many (most) of my readers have never heard of them.  They’re arguably the market leader in InfiniBand, which is the most-deployed fabric technology, and they’re a shadow in the networking market.  Which proves that if you see the data center network of the future as a simple extension of past trends, fabrics aren’t going to mean squat.  But they will matter, so you have to look at something more revolutionary than evolutionary, which brings us to the cloud.

Cloud computing is a resource-pool architecture, one that’s designed to break down the hard association between application and hardware that exists today even in nearly all virtualized data centers.  Cloud computing is also an architecture designed to alter the relationship between workers and resources, to permit the closer coupling of compute-based tools to activity (what I’ve been calling “point-of-activity intelligence”).  This new-style productivity support creates new benefits, which drives new spending, and it also creates more horizontal traffic.  The cloud is also the model that’s emerging for the service logic of the next-gen network.  Carrier interest in SDN and NFV proves that carriers want a cloud-hosted model for all of the network intelligence above simple forwarding.

Cloud computing as a service, if it fulfills its promise, would generate enough demand to justify deployment in every metro area with reasonable opportunity.  Unless we presume zero competition, we need to presume competitive overbuild too.  Furthermore, if we assume that SDN and NFV go forward as operators want them to, we’ll be deploying hosted service-layer functions not only throughout these cloud-opportunity metro areas but in the remaining ones as well.  All of these service-layer clouds will have to be made up of distributed “data centers”, likely located in all the current central offices and service points of presence.  That’s a lot of data centers.

Building a distributed virtual cloud from discrete data centers is likely to require very high connect bandwidth.  In fact, it’s very possible that the most important driver in designing a metro network will be intra-cloud traffic, because this traffic will require QoS control, which best-efforts Internet service doesn’t provide.  The data center switch in this kind of picture isn’t just a data center switch, it’s an element in a metro-cloud matrix, and it has to be architected for that purpose.

We see, in today’s data center networking market, a division.  Some vendors, including Mellanox Technologies, have a fabric but not a metro-cloud fabric, because they don’t have the broader positioning to support that larger mission.  Some vendors, including Cisco and Juniper, have a metro-cloud vision that’s inherited more from the fact that their switching implicitly supports interconnect than from any real design for the mission.  A very few (one that we know of, Plexxi) have an architecture that’s arguably designed for the metro-cloud mission.  We think Juniper is probably (belatedly) pushing its QFabric toward that mission.

The metro-cloud mission is a mission of SDN and NFV, which means you have to be able to articulate a complete SDN position in order to claim support.  Right now, we don’t have any articulations that fully meet the test, as I’m finding as I develop our comprehensive view of SDN models, implementations, and market impact.  What everyone interested in SDN, NFV, next-gen network architectures, metro, or the cloud should be looking for in the coming months is a maturation of the vendors’ metro-cloud visions, and their effectiveness in making their SDN strategies both logical metro strategies and logical cloud strategies.  Will any meet the test?  We can hope!

What To Look For in the New Network

The end of a week is my usual recap time, an opportunity to try to collect rumors and minor items and combine them with broader and more visible ones to find some interesting trends.  This week, the question that seems to come out of this process is “What’s happening with network equipment vendors?”

We all already know that Alcatel-Lucent suffered a loss this quarter and that its CEO is leaving.  Oracle bought Acme Packet.  We’re now hearing that the joint Nokia/Siemens NSN venture might be breaking up, either sold off (perhaps with private-equity help, or even to a private-equity buyer) or absorbed by Nokia alone.  There’s certainly enough evidence of business change to suggest something important underneath, and also to suggest more activity and even M&A to come.

Network equipment is not, overall, a healthy market.  Moody’s says that the focus of telecoms is toward returns to shareholders and away from capex and investment, which is hardly a good sign on the demand side.  Cisco hasn’t reported yet, but for the rest of the space there continue to be major issues with government spending, telco spending, and in many geographies even enterprise spending.  The challenge of the space is simple: you can’t have radical increases in spending and high profit margins in a market that’s focused on managing cost.  Anyone who really wants to be a long-term player (or even survivor) in the network equipment space needs to be thinking about how to boost the benefit case for what they sell.

So how?  Well, for the service provider the “benefit” is new service revenue.  Right now, you can argue that offering a cool phone with a good subsidy does more to get you customers (and retain them) than the network.  AT&T is consistently rated at the bottom in service quality and yet it’s doing pretty well with iPhones.  For the enterprise, “benefit” is incremental productivity gain.  Right now, we’re in a decade-long slump in finding new ways of improving productivity through technology.

I think it’s clear that software is the key to both provider and enterprise benefit-building.  Software is what couples hardware to experiences, both for consumers and for workers.  Those who have it can get closer to the top of the food chain and claim a larger share of the overall margins, and they can also drive the bus in terms of demand creation.  Lower down, you lose differentiation and pricing power, and you’re also a slave to the higher players’ ability to make the business case to the ultimate buyer—consumer or worker.  That’s why Oracle’s decision to buy Acme Packet is important.  Oracle is a software giant, and Acme offers them a way of hooking software to network behavior in a host of different ways.

The challenge for Oracle is then the same challenge that Alcatel-Lucent and NSN have been facing.  Both NSN and Alcatel-Lucent have “software”, in that they have a software framework for mobile broadband services and also an architecture (IMS, largely) on which services can be built.  So if software is king, why have these two players not been able to capitalize on their credentials, and why does Cisco seem to be stealing the software thunder?

Cisco has always been the master of positioning, with Chambers putting pretty words on even mediocre concepts.  It’s not that Cisco doesn’t have substance; it’s that they realize you don’t sell a consumer or worker on a “benefit” by giving them a laundry list of technology elements, references to standards, and project roadmaps.  You sell them by INSPIRING them, which Cisco has done and which neither Alcatel-Lucent nor NSN has been able to do.  Nor, of course, have the other competitors in the vendor space, like Ericsson, Juniper, and Huawei.  So the lesson of software is that to validate it you have to present an inspiring vision first, and once they’re hooked you get into the educational process.  Otherwise you’re a teacher and not a salesperson.

So should we now look for a mad software-vendor rush to acquire network companies?  I don’t think so.  First, most of the network companies out there are in spaces where software has no compelling hook.  The service layer is where experiences are created, and while Acme has most recently focused on VoIP and SBCs, it was also a DPI player and a broader service play in the past.  In any event, the key to creating a software hook to a service is being able to recognize a service or application, which means DPI, sessions, application acceleration, and all that related good stuff.

So what do we look for?  Well, if I were a second-tier network vendor or a specialty player in the application or services area of the network, I’d be positioning myself as an element in a future network where virtualized functions and centralized software are the rule.  Some have that capability but nothing in the way of positioning—they lack that grand vision we now know is essential in creating any credible link between your product and the top of the demand chain.  Thus, look for some serious positioning grooming from the remaining VoIP players, application acceleration players, SDN controller players, and SDN startups in general.  And look quickly, because whatever position you hold in SDN and NFV in 2013 is probably the one you’ll have to live with, in these critical early days of the market.

Alcatel-Lucent: Stop Predicting the Future and Shape It!

Alcatel-Lucent, battered by high costs, declining revenues, and internal tension since the merger, is now going to have to find another CEO.  Ben Verwaayen is stepping down, having failed to turn the company’s fortunes around.  That he didn’t is beyond dispute.  Whether he could have is open to debate in my view, and whether a new CEO will do any good is the biggest question of all.

With all of networking slipping into the status of “plumbing”, there’s no question that something radical was needed to recondition Alcatel-Lucent.  The company has an enormous exposure to the low-margin side of a market with shrinking margins overall, which is never a good thing.  It has a very expensive R&D process that, to be a contributor to profit, would need to focus on areas where Alcatel-Lucent’s customers could make money, and it didn’t have that focus.  The company has arguably been the leader in developing a service-layer position that WOULD have helped customers transform, but the articulation was never equal to the products.  And Verwaayen was never the guy to fix any of these problems, in my view.

If you look at the telecom equipment scene today, you can only conclude that the most important attributes of a vendor CEO are charismatic drive and naked aggression.  That pretty well describes Cisco’s Chambers but none of the other players in the network-vendor-CEO game.  It’s not that the CEO has to be the mouthpiece of the company as Chambers is, but the CEO has to be the driver of company culture, which is the thing Alcatel-Lucent has to change.

Changing culture means making the company into a team that masters the positioning-marketing-sales trajectory.  Networks are a cooperative community of smart elements, and you can’t sell them in product silos for that reason.  Any successful networking company has to start with a vision of HOW IT MAKES ITS CUSTOMERS PROFITABLE that is compelling and realistic at the same time.  That vision then has to translate into specific “transformation initiatives” that are service-targeted, and those initiatives finally implicate specific products.  You find people who say things like “we sell IMS” or “we sell routers”.  No you don’t.  You sell profits for your buyers, which creates profits for you.

To make this work you have to focus on some compelling target, and that target is created by the intersection of technical initiatives like SDN and NFV, architecture initiatives like the cloud, and service initiatives like mobile broadband.  I’ve said for over a year that there is only one place where these come together, and that’s the metro network.  Alcatel-Lucent has to transform itself by transforming metro, because that’s where all the money is going to be spent.  But if you do a Google search on the company and metro networking, you get not a vision for the network but a bunch of product pitches.  And none of them focus on the cloud, SDN, or NFV, despite the fact that Alcatel-Lucent is active in all three areas.

There is nothing important happening that’s more than 40 miles from the customer.  Not now, not ever.  Why?  Because in a mass market for content and experiences and compute services, there’s nothing valuable that isn’t OPTIMALLY positioned close to the buyer.  How many movies will people watch?  A bunch if they cost nothing, but far fewer if they have to pay, and if something costs nothing it earns nothing.  And those few movies will simply be cached in CDNs.  Same with the cloud: if there’s a big opportunity, then the opportunity is big enough to justify metro data centers.

Alcatel-Lucent has all the pieces, but they won’t show us the picture on the front of the puzzle box.  While it’s not impossible to put a puzzle together without knowing what the final picture looks like, it’s a darn sight harder than it should be, and many won’t bother.  With Alcatel-Lucent, many haven’t, and that’s the problem.  A problem that could be fixed in ONE CALENDAR QUARTER if they were really determined to fix it.

This isn’t a test of intelligence or even of product depth.  This is a test of WILL.  Does Alcatel-Lucent have the will to be radical instead of conservative?  They either prove that in 2013 or they will likely never be able to.  Other players, from Ericsson and Huawei to Juniper and Brocade and Extreme and Ciena and Tellabs, have the opportunity to do the right thing too.  Any productive move any of these other guys makes will occupy a step on the value-proposition stepping-stone bridge across the market stream.  That will mean Alcatel-Lucent has to get wet to move forward, and wetness is only a small step from drowning.

Dell, the Cloud, and the Lesson of History

Dell’s decision to buy itself out of being a public company (with private-equity help) generated a gratuitous slap from HP, but it’s clear that there ARE real questions about Dell’s future.  The thing is, the same questions can be asked about HP’s future too, and maybe about the future of tech as we know it.

I’ve been in tech a long time, and one thing that’s been pretty constant and obvious is that computers get faster every year, and that the unit cost of computer performance has been falling sharply and continuously.  Even in the software space, there’s a continuing need to find new stuff that can be added to a program to justify a buyer’s purchase of a new version.  LAN pioneer Novell, for example, fell into the business abyss because once you’d done file and printer sharing, there wasn’t much left that users were willing to keep paying upgrade fees to get.

PCs have the same issue.  Somebody at a chip company told me that since the 1990s, over 85% of the increased power of desktop/laptop processors has gone into the GUI.  How much processing does somebody’s Word or Excel job take, after all?  The Internet was a boon to PCs because it created a new application to drive PC sales, but when tablets and smartphones came along, they sucked the Internet opportunity right out of the PC space.

Then we have the cloud.  Smartphones and mobile devices encourage a “thin-client” application framework, whatever we call the server side.  And the more you pull out of a device and host somewhere else, the fewer features differentiate that device.  A browser-in-a-box may be the future.  HP is making Chromebooks, after all.

And pushing that value into servers?  Not hardly.  A server today is something you run software on, and differentiation there is getting more difficult every day even without the cloud.  Factor in concepts like virtualization and cloud stacks and you see that applications don’t even run on “servers” any more; they run in virtual partitions that software creates.  And if you think something virtual is invisible, think how invisible the thing that hosts that virtual something is!

The challenge for the computer space is that value and differentiation are fleeing hardware, period.  The successful players in the IT space are those who have found something else to sell: a combination of software, integration, professional services, market expertise, industry expertise.  How many people would say that an IBM box is better than someone else’s box?  But lots say that IBM is better as a company, and the financials show they’re right.  Cisco’s success in servers is obviously not due to the feature-for-feature excellence of its servers, but to Cisco’s ability to make sense of the hosted network-related activities that buyers are looking to support.

Do you see some network-industry similarities here?  We have SOFTWARE-defined networking.  We have network functions VIRTUALIZATION.  There is the same declining unit cost of the “product” (bits, in this case), driven by a diminishing ability to differentiate that product.  Are we seeing network vendors in the early stage of the same commoditizing decline in hardware, the same push toward software?  I think we are.  I think Oracle believes that’s their opportunity with Acme Packet—to grab a piece of the future by grabbing the hardware segment that’s best able to leverage software: the service layer.  I think Cisco believes it has to become an IT company for the best of all reasons: it can’t stay a network company and grow the way it needs to.

Nothing matters in computing now but software.  Within five years, nothing will matter in networking but software.  Every single network company today has to be measured not by its incumbency (remember what HP’s and Dell’s incumbencies were?) but by its software agility.  And by that measure we’re not seeing a lot of super-giants.

Remember Novell?  Eventually even software has to go beyond the basics.  Network software used to mean network management software, but no more.  Benefits drive growth; cost control manages diminution.  What’s needed in both networking and IT isn’t software that makes stuff cheaper, it’s software that makes stuff HAPPEN.  Who gets that?  Maybe Oracle will gel software and service-layer hardware.  Maybe Cisco will gel network and IT hosting software/hardware.  Maybe some other player will find the magic formula that propels them into the lead.

One decision arguably made IBM king of computing.  It could happen again in networking.

Details and Destinations

The devil, they say, is in the details.  The difference between motion and progress, they say, is a sense of goal or destination.  Well, we have a couple of examples in the news that demonstrate the truth of these statements.

We’re hearing more and more about a plan by FCC Chairman Genachowski to use some reclaimed spectrum to create a “super-WiFi” network that could span the country and compete with wireless services.  Obviously, OTT players like this idea a lot, and obviously the network operators don’t like it at all.  The obvious question is whether this is a good idea, but while the question is obvious, the answer is a lot more complicated.

First, it’s my view that Genachowski hasn’t fully shed his VC past; he tends to favor positions on communications issues that are good for the Internet/OTT community and less good for the network operators.  That position has its risks when we consider that network operators are already reducing capital budgets wherever possible, and that they’re pressing for things like settlement with content players and OTTs.  We may WANT next-to-free Internet at gigaspeeds, but getting there is going to take a sophisticated combination of an improved business model and enlightened public policy—if it’s even possible.  International regulators have long recommended that telecom regulations be written to balance consumerism with the health of the industry.  Genachowski seems to have his seesaw out of whack.

Second, there’s the question of how this new super-WiFi would work.  Obviously free spectrum doesn’t mean free networking; money would still have to be spent to deploy the WiFi radios and to backhaul them to the Internet.  There are security and performance issues to consider.  It’s almost certain that the new spectrum isn’t in the current WiFi band (it’s supposed to penetrate structures better and have longer range, implying a lower frequency), so current WiFi devices wouldn’t work with it.

Presuming that super-WiFi goes forward, it presents some significant risks.  Just the threat of a free national WiFi service could reduce the incentive to invest in wireless infrastructure by threatening future returns.  The threat would be magnified by the fact that the new band needs special devices; it’s likely that even OTT players like Google would rush to provide them, and they would likely not work with standard cellular services.  For operators, getting their own customers onto the new band could require re-issuing devices or providing dongles or docking units.

And who builds the free network?  We’ve had “muniFi” attempts before, and they’ve been almost uniformly unsuccessful except in costing taxpayer dollars.  I’m really not happy with this new super-WiFi concept, because it seems to me there’s been nowhere near enough planning to secure success, but plenty of talk to start generating risks to investment.  Where are the details?  What is the real goal?  If the FCC has a serious desire to do this, it needs to go about it the right way, the complete-story way, which so far it has not.

Then we have Cisco and SDN.  The company finally said something more about its SDN strategy, but it didn’t amount to much beyond added detail on the SDN controller and a commitment to support SDN on more devices (no surprise there either).  Cisco did talk earlier about a network-partitioning application for the controller, and it’s now adding more monitoring capabilities, reasonable given its M&A in the monitoring space.  But what Cisco hasn’t done is fill in the details inside the onePK APIs, or talk about just what its end-game for SDN is.

I’ve been saying for months that any plausible SDN strategy has to support three functional layers and two “models”.  The “Cloudifier” feeds a service model to “SDN Central”, where it’s combined with the network map/model that the “Topologizer” produces, and from that combination you get the specific commitment of resources needed.  It’s hard to see how you can deliver SDN without this, and yet we don’t have specifics on how this functionality would be provided or where the information needed to populate the maps and control everything would come from.  Monitoring?  Sure, but how?  There are IETF standards in at least the consideration stage here; are these what Cisco intends?  If so, then say it.  If not, then say what IS intended.
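
Since these are my own labels, I can sketch how they’d compose; the internals are invented for illustration:

```python
# Sketch of the three functional layers named above, using my own
# labels: the Cloudifier turns a request into a service model, the
# Topologizer maintains the network map/model, and SDN Central combines
# the two into a specific resource commitment. Internals are invented.

def cloudifier(request):
    """Service model: what the cloud/application side wants."""
    return {"endpoints": request["endpoints"], "bandwidth": request["mbps"]}

def topologizer():
    """Network map/model: what the network can actually offer."""
    return {("dc1", "dc2"): 1000, ("dc1", "user-edge"): 200}

def sdn_central(service_model, network_map):
    """Combine the two models into a commitment of resources."""
    src, dst = service_model["endpoints"]
    capacity = network_map.get((src, dst), 0)
    if capacity >= service_model["bandwidth"]:
        return {"commit": (src, dst), "reserved": service_model["bandwidth"]}
    raise RuntimeError("no feasible resource commitment")

model = cloudifier({"endpoints": ("dc1", "dc2"), "mbps": 500})
print(sdn_central(model, topologizer()))
# {'commit': ('dc1', 'dc2'), 'reserved': 500}
```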

Absent details on the “how” of Cisco’s SDN, we really don’t have that goal-line vision that turns motion into progress.  We have controller applications that are logical steps to anything, meaning nothing in particular.  All this hoopla is overkill for network-slicing and monitoring.  What is this all leading to?  The funny thing is that I think Cisco knows; it understands exactly what it’s planning and where it’s going.  It may be that it’s the best-prepared of the vendors to get there.  But gosh, Cisco, I still can’t persuade myself to ride your bus if I don’t know how the interior is laid out and I can’t read the destination from the sign on the front.

What’s in the Oracle and Acme Packet Deal?

Oracle announced its intention to buy Acme Packet, a company it describes as a leader in session border control technology for carriers and enterprises.  On the surface, this would appear to be a deal targeting the VoIP, mobile/LTE, and UC/UCC spaces, and surely that dimension of the M&A will worry network giant Cisco, which has a major stake in using UC/UCC as a lever to spring into a key role with servers and IT in both the enterprise and SP spaces.

Sessions are a decent way of creating a secure and controlled information pathway, not only for voice/video and UC/UCC but in theory for anything that requires special handling.  That’s why IMS proponents have pushed for IMS-session services like RCS rather than traditional best-efforts web services.  The question has always been whether this special handling, which obviously requires incremental infrastructure, is something users will pay for.  In the broad market, the answer may be a clear “No”.  But the deal may go beyond that.  UC/UCC, after all, is a market that’s reportedly been on the verge of taking off since some of the current executives in the space were teens.  Oracle may realize that and be looking at something else.

Hypothetically, something like SDN and NFV.  In fact, if ever there was a deal that literally screams “Network Functions Virtualization!” it’s Oracle’s decision to buy Acme Packet.  Or at least that’s what it had BETTER signal if Oracle hopes to get any bang for its bucks out of the acquisition.  And speaking of “hopes”, the move into SDN/NFV would dash more than a few M&A hopes among other network vendors.  Oracle is one of two IT giants (the other being IBM) with no networking position; it’s a software company with server technology acquired in the Sun deal.  With Cisco and HP striving to be full-service network/IT players, many have expected Oracle to respond by acquiring some switch-and-router vendor.  I never saw that as logical, frankly, because margins on switching and routing have nowhere to go but down, and most players don’t want to buy into a declining market.  Oracle may be proving my point, for themselves and maybe for IBM too.

The service layer is another story.  Certainly UC/UCC is a space where session value would be easier to establish than it might be for content delivery or website access, so it’s very likely there are some at Oracle who see an immediate opportunity in session-level services for UC and LTE.  Security management is one of the things Acme itself has been on an M&A kick to support, and it fits well with Oracle’s growing portfolio of management tools.  Of course, we’ve heard this song before, for session services and UC/UCC alike.  There may be deeper assets within Acme Packet.  Recall, for example, that Acme Packet used to be a big player in DPI, and they still have quite a bit of DPI technology in their DNA.

SDN and NFV are both about pulling functionality out of network devices to dumb the devices down, and in our January issue of Netwatcher I described the logical culmination of that trend as a Policy-Handling Engine, or PHE, with cloud-hosted control logic.  If you wanted to build a PHE from boxes available today, what Acme produces would be a decent start.  Thus, Oracle may be looking not at “networking” as it is today, dominated by bit-pushing, but as it will be in the future, dominated by service-pushing.  The difference between bits and services is context, and the best source of context in the market today is deep packet inspection combined with policy management.
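
A toy sketch of the PHE idea, with invented classifications and policies: DPI supplies the context (what is this traffic?), policy management supplies the decision (what should happen to it?), and the control logic is meant to live in the cloud.

```python
# Sketch of the Policy-Handling Engine (PHE) idea: DPI supplies the
# context ('what is this traffic?') and policy management supplies the
# decision ('what should happen to it?'). Classification rules and
# policies here are invented for illustration.

def dpi_classify(packet):
    """Toy DPI: derive application context from packet contents."""
    payload = packet.get("payload", b"")
    if payload.startswith(b"INVITE"):
        return "sip-session"
    if b"HTTP" in payload:
        return "web"
    return "unknown"

POLICIES = {
    "sip-session": {"action": "prioritize", "class": "realtime"},
    "web":         {"action": "forward",    "class": "best-effort"},
    "unknown":     {"action": "inspect",    "class": "quarantine"},
}

def policy_handling_engine(packet):
    context = dpi_classify(packet)           # bits -> service context
    return context, POLICIES[context]        # context -> handling policy

print(policy_handling_engine({"payload": b"INVITE sip:bob@example.com"}))
# ('sip-session', {'action': 'prioritize', 'class': 'realtime'})
```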

My view here is that this deal is too big for Oracle to tie entirely to UC/UCC and SIP sessions.  But even if it is, there are going to be major ripples not only in the UC/UCC and SIP spaces but in the overall network equipment space as well.  Oracle is a very big player with enormous assets in the critical software space, and networking is a market crying out for software smarts—in more ways than one.

“So Take a Letter to Tellabs…”

Tellabs has, like other network companies, seen its sales slide over the last couple of years, and like other companies it has decided to cut staff, drop a product, and focus on (you guessed it!) mobile and SDN.  The big question is whether it’s too late.  Tellabs had actually been focusing on mobile with its 9100 and 9200 products, but it never really got the former going, and the latter never really came out.  So what’s different now?

The big question for Tellabs, and frankly for a lot of other players, is what COULD BE different.  If you look at vendor goals for SDN, combined with how operators see it and how they see the companion concept of network functions virtualization, you see that this can’t be about mobile; it has to be about metro, and metro is a complicated problem to solve.

Some think of “metro” as a city or Standard Metropolitan Statistical Area, and old-hand telco vets like me think of it as a Local Access and Transport Area, which means roughly the area within which calls could be “local”.  Today, changes in infrastructure and business structure have merged the definitions a bit.  The US, for example, has about 250 “metro” areas, and there are over 1,200 worldwide.  Generally they’ll have a population numbering in the high six to seven digits, and generally there’s some point in the area that serves as a commercial and communications center.  Where people are, money is.

The metros of the wireline day had about 40 central offices on average, each serving about 25,000 people.  There are anywhere between 10 and several hundred times that number of mobile sites in a metro, and with LTE we’re quickly getting to a point where a mobile tower has more bandwidth than a residential CO did even in the early age of DSL.  Mobile networks are the fastest-growing component of networking by far.  But we’re also delivering content from local CDNs, and increasingly we’ll be working with local clouds.  Along the way, we’re changing the model from classical networking to funnel networking.  Everything profitable travels 40 miles or less, to fewer than 100 destination points.  That’s not connection, it’s aggregation.
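
Back-of-envelope, using only the numbers above (and a stand-in of 300 for “several hundred”):

```python
# Back-of-envelope on the paragraph's own figures; the 300 multiplier
# is a stand-in for "several hundred", not a measured value.
central_offices = 40
people_per_co = 25_000
print(central_offices * people_per_co)       # 1000000: the "high six to
                                             # seven digits" metro population

low, high = 10, 300                          # mobile sites vs. CO count
print(central_offices * low, central_offices * high)
# 400 12000 -- hundreds to thousands of mobile sites funneling into
# fewer than 100 destination points: aggregation, not connection
```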

The battle in the market today is over how we funnel stuff, in an equipment sense.  The big vendors would like operators to build massive IP/MPLS networks in their metro areas instead of using Ethernet and BRASs.  They want the cloud to extend IP.  When Tellabs commits to an SDN-and-mobile course, it’s committing to a metro strategy that would stand off the router vendors’ determined drive to spread routers all the way to the horizon, and inward into the metro area right to the CO.

SDN has to be not a mobile strategy but a metro strategy, because mobile and metro and cloud and content are all one delivery network, driven by a play on the old saw of politics: “All profit is local”.  We are going to use SDN principles to flatten aggregation from three layers to one virtual layer spanning optics to IP.  The metro is going to become a big, distributed cloud data center.

For Tellabs, or anyone who wants to play in the big leagues, that means you can’t have a mobile strategy without a cloud strategy, without an NFV strategy, without a metro strategy.  This is all one market, and it’s the only market that’s going to have any respectable capex behind it within a couple of years.  You win here, or you lose.  That’s a very tough market to go after when you’re a low-level network player already cutting products and staff.  This is a time for naked aggression, and Tellabs is not historically known for that sort of thing.  The question for Tellabs now is “Are you ready?”  I hate to see any company sink unnecessarily, but this is not going to be an easy play for Tellabs.

Nor for the other players in the space.  The network of the future is going to have to be a LOT cheaper to build and to operate, and that means it’s going to be less profitable to sell on a unit-device basis.  So you have to sell EVERYTHING that goes into it, from optics to software.  The players who get that point, and who quickly drive to a leading position in all those critical sectors, will be the ones who avoid the Sycamore-and-Tellabs post-mortems down the road.