The Year of the Cannibal

We’re at the end of 2012, a year many of us will remember less than fondly if for no other reason than the politico-economic foibles in both the US and Europe.  Call it “The Year of Class Warfare” if you like, economically speaking.  We should also remember it as a very important transitional year, and if we need a name for it in the tech space, I suggest “The Year of the Cannibal.”

There’s lots of talk about technology, new technology, for 2013.  We have carrier WiFi and Ethernet, we have virtual networking and SDN, we have good old cloud computing and virtualization.  Articles speculate about 100G Ethernet, vectored DSL, and other developments, but what gets missed all too often is that these trends are driven not by new revenue opportunity but by cost management.  The challenge of our industry is that we’ve missed the boat on benefits.

We did a study for Wall Street years ago on how growth in IT investment and GDP growth related.  Most people would think intuitively that the result was the good old marketing hockey stick, an exponential explosion.  It’s not, it’s a sine wave.  What the chart below

IT Spending Growth relative to GDP Growth, baselined to historical average

shows is that we’ve had periods (cycles, if you will) where some factor drove IT investment to rise much faster than GDP, and other periods when it’s sunk below the long-term average, which is our baseline for this chart.  If I go back to our surveys for the period since 1982, I can glean the primary causal data on this phenomenon: we find new paradigms that drive productivity growth and so unlock a new set of benefits.  As companies exploit these new paradigms to garner some much-needed ROI, they drive spending faster than historical averages.  If you look at the timing of these cycles you see that they roughly coincide with the advent of distributed computing and the Internet, certainly two things that have changed our perception of the value of technology.

Look now at the end of the chart.  We hit a big bottom coincident with the NASDAQ crash, the “bursting of the tech bubble” in 2000, and since then we’ve been unable to muster anything convincing to drive up the rate of IT spending relative to GDP.  We’re now hovering at about 85% of historical levels, meaning that IT GROWTH is expanding slower than GDP growth is.  Remember, this chart shows the rate of change in GROWTH, and it shows that when productivity value is on the table we accelerate the rate at which IT spending grows.  It’s not been on the table, and that’s what makes this the Year of the Cannibal.  “Growth” comes from eating somebody else’s share.  Eventually everything gets eaten.
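To make the chart’s construction concrete, here’s a toy calculation of the kind of index it plots: the ratio of IT spending growth to GDP growth each year, expressed against the long-term average of that ratio.  All the numbers below are invented for illustration; they are not our survey data.

```python
# Toy illustration of the chart's construction: ratio of IT spending
# growth to GDP growth, indexed to its own long-term average (= 100).
# All numbers are invented for illustration only.

it_growth = [6.0, 7.5, 4.0, 3.0, 2.5]   # annual IT spending growth, %
gdp_growth = [3.0, 3.0, 2.5, 2.5, 2.5]  # annual GDP growth, %

ratios = [it / gdp for it, gdp in zip(it_growth, gdp_growth)]
baseline = sum(ratios) / len(ratios)     # long-term average ratio

# Values above 100 mean IT growth is outpacing its historical
# relationship to GDP; below 100 means it's lagging that relationship.
indexed = [round(100 * r / baseline, 1) for r in ratios]
```

Run on these made-up numbers, the index starts above 100 and sinks well below it, which is the shape of the tail end of the real chart.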

This is the challenge that faces the vendors in 2013, and that faces “new technologies” in 2013 as well.  If the cloud, if SDN, represents nothing but a reduction in costs then these new things will erode the TAM for everything in tech and we’ll become cheap as dirt.  If these developments advance the VALUE of technology, by improving productivity and driving benefits upward, then we can have another of those waves of growth—a wave that’s already long overdue.


Two Giants Try the Beanstalk…Again

Today we have two indicators that even giant companies have to face reality, meaning have to face the future evolution of their markets and the consequences of their past strategies.  Apple (seriously) is said to be looking at an “iWatch” and Cisco has created a software group.  OK, you probably don’t find the two comparably interesting, but they’re comparable in their drivers.

Apple has made itself into an appliance company.  Since the iPod, its growth has come because it’s been able to define a follow-on gadget that it could create a cool version of, a version that would appeal to relatively affluent buyers and command high margins.  They’ve engineered for that market to perfection.  But the problem with appliances is that any one you get reduces the need for similar ones down the line, and so you either have to expand your addressable market by inventing new appliances or go down-market and lose margins.  So you jump to TVs or iWatches, but the problem is that these are still appliances and your next jump will be even harder.

Cisco has made itself into a box company.  For two decades they’ve ridden the wave of network growth, but the problem with connectivity as a business model is that the valuable connections get made first and the marginal utility of later connections is harder to establish.  That’s not just because the later connections offer less ROI, either.  It’s also because what you need to gain the “utility” of those connections is increasingly a software-set context of productivity.  We got as far as we could get with bit-pushing, and Cisco then went to server-pushing.  The only thing is that servers are still boxes.

Apple’s decision to look hard at TV or watches is probably at this point nothing more than an exploration, an effort to see whether their winning model can win for just one more cycle.  Cisco’s decision to collect software into a group (the Service Provider Software and Applications Group) is probably a reflection of a view that software needs to have a combined voice.  For both companies it doesn’t address the fundamental question, which is what the market they’re committed to be in will demand of them in the future.  That’s a big shift from thinking about what you can milk from it.

Every device Apple adds to its repertoire further advances the Cause of the Cloud, because symbiosis among the devices becomes the natural limiting factor to how big a device ecosystem can be.  And you need a device ecosystem or your past successes can’t pull through your future successes.  But Apple is arguably the most behind of all the major players as far as cloud-committed strategy is concerned.  Why?  Because cloud-centric planning contaminates appliance-centric selling.

Cisco has had software groups before.  Is it possible that spreading software across a bunch of managers would erode its value?  Sure it is.  The problem is that SPSAG doesn’t fix that spread; it’s simply collecting the service-provider-oriented M&A results and some existing software into a common position.  Cisco needs to be embracing the reality, which is first that the cloud is the vision of evolution for ALL intelligence and second that there’s no such thing as a cloud architecture for providers and another for consumers.  They’re part of the same food chain, just at the opposite ends.

We are in the most innovative industry the world has ever known, but we seem to be losing our edge.  Big companies like Apple and Cisco are just feathering the old nests, and VCs are funding startups that are simply playing to short-term consumer trends that depend on a set of business and technical relationships those VCs aren’t interested in advancing.  Is innovation dead?  Probably not, but it’s looking like a lot of the innovators are.

More on “SDN Domains”

Some of you emailed me after my last blog to ask about the notion of “SDN domains,” so I thought this would be a good time to take it up.  First let me say that this is simply my own way of looking at SDN evolution; to my knowledge it’s not sanctioned officially by the ONF or other bodies.

Data centers, especially cloud data centers, are logical examples of SDN domains.  In fact, anything that could look like a Level 2 LAN is a logical example because these “subnetworks” are typically connected to the larger IP world through a default gateway.  LAN addressing works inside them, but IP addressing works overall.  This model could fit SDN handling nicely too.  In fact, any Level-2-like network could be an SDN domain, and that offers what I think are the most significant expansions of the SDN opportunity space.  There are two such areas; metro and branch office.

In the metro space, we have a classic example of a domain because the goal of metro networking is aggregation and not connection.  Users don’t usually talk to each other, they talk with a point of presence for a higher-layer service.  Their traffic is just backhauled/aggregated to those POPs.  Since metro demands optical and electrical handling to complete a service connection, a logical goal for SDN is to smoosh them down into a single logical layer that can be managed at the path/route level and not by media type.  Operators are enthralled with this mission and groping for a path to realize it.

In the branch space we see what might be the logical enterprise extension of the data-center-cloud concept of SDNs.  Branches are also Level 2 networks in most cases, and so the basic model of a domain with a default gateway fits there.  But if you think about it, SDNs could partition branches into “functional domains” by class of employee.  That would align the domains with the applications each class of worker needs.  If data centers (clouds) were similarly partitioned by application, you could then use a simple forwarding rule to link each application to the branches’ worker-class domains.  It changes the whole notion of security and networking.
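The forwarding rule in question could be very simple.  Here’s a sketch of how a central controller might express the link between branch worker-class domains and data-center application domains; the domain and application names, and the rule structure, are entirely hypothetical, not any vendor’s or standards body’s API.

```python
# Illustrative sketch (hypothetical names, no vendor API): a central
# controller's rule that connects branch "functional domains" (by
# worker class) to the data-center application domains they may use.

ALLOWED = {
    "tellers":  {"teller-app", "email"},
    "advisors": {"crm", "portfolio", "email"},
}

def forward(worker_domain, app_domain):
    """Forwarding action for traffic from a branch worker-class
    domain toward a data-center application domain."""
    if app_domain in ALLOWED.get(worker_domain, set()):
        return "forward"   # admit the flow into the app's domain
    return "drop"          # the connection simply doesn't exist
```

The security point is that tellers can’t even reach the portfolio system; access control becomes a property of connectivity itself rather than something bolted on afterward.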

Even in the Internet, as Google has shown, we can create SDN domains inside BGP routers as long as we can produce the edge conditions needed to interface with the rest of the IP world.  That, in fact, is the most general of the SDN domain’s “general rules”.  You have to fit an SDN domain into a service context, and so while inside the domain you can do what you like with SDN principles, at the border you have to look like an element in the service network you’re supporting.

I think that the SDN domain concept is critical for another reason; we need to make the cloud SDN-aware if SDNs are going to mean anything other than a way to commoditize hardware and reduce cost.  Central control can bring service awareness and value, but only if you can link the “service” to the thing that’s being served, which is always going to be the application.  My branch domain example illustrates that, I think.  The OpenStack Quantum interface is “domain-aware” in a limited sense, and we need to expand the Quantum notion to embrace application and service domains.  I don’t think that would be difficult to do at the technical level; somebody just has to publish some models and take some thought leadership positions.

So what could SDN domains do?  They could create a “network”, including the Internet, that was made up of SDN domains acting as virtual devices.  They could open the door to adopting SDN traffic handling and connection management on a wide scale.  They could support today’s network protocols like IP, and also support future internetworking and network addressing mechanisms.  It might be very smart to think about what SDN domains could do now, as we’re on the cusp of moving the Internet to IPv6 and as we struggle to address some of the commercial issues arising from the evolution of OTT services, especially content.

Could SDN domains undermine current network vendors?  Sure, but I think no more than any standards do.  Operators’ Network Functions Virtualization initiatives are far more likely to impact network vendor sales/profits because they have the explicit goal of unloading network features from devices to servers to reduce costs.  SDN goals of central control don’t necessarily demand that the devices are totally dumb, or that the central logic is open-source.  There are still many paths to differentiate inside SDN domains…if vendors will look past their incumbencies to see them.

It’s Not Where SDN is Going, it’s Where the NETWORK Is!

Light Reading is asking the question “Where is SDN Going?” today, and I’d like to propose we ask a broader question, which is “Where is the network going?” as we wind down 2012.  The reason is that SDN is a network trend being driven by network forces, and you have to get to the heart of the food chain to understand the menu.

We don’t have settlement on the Internet.  Given that reality, we don’t have any mechanism to prevent financial arbitrage of Internet transport and connection resources.  The landscape of the Internet would be very different if we had settlement, and not “very bad” as most suggest.  We’d have less pressure on operators to build out the access network, we’d have less pressure on privacy because we’d not rely as much on ad funding, and we’d have fewer VCs sucking the blood of innovation with mindless startups.  But I digress.  The point is that absent a mechanism to value bits, bits become non-valuable, and that means transport/connection services are loss leaders.  That raises two questions: what is the profit source they presume to lead to, and how do we minimize the damage of the loss-leading process?

The cloud is the goal of pretty much everyone these days, and the cloud is best visualized as an IT architecture to deliver the “service” component of network services using the connectivity fabric of the Internet.  The two concepts are complementary, in other words.  We’re moving toward a cloud-based execution of network services of all types—services above and beyond transport and connection.

There are drivers of change created by the cloud, the most significant of which is the creation of “domains” where connectivity and traffic handling rules are different.  In these domains, applications drive the network, and the notion of software-defined networking (SDN) and virtual networking address the issues these domains create.  One such domain is the data center, but it’s only a subset of the real domain issue, the “cloud core” where interprocess cooperation composes the experiences we’ll pay for in the future.

On the cost side we also see SDN, driven by another set of issues.  Networks, even without the cloud, face a different set of reliability/availability issues today.  “Least-cost” routing, for example, has to consider the cost of disruption more than the cost of steady-state operation.  With the cloud included, we face the reality that the cost of a service, and the availability of the experience it brings its users, depends more on IT resources than on the delivery fabric.  Both these forces drive us toward considering the network more as a software resource.  They also demand we answer the question “does this allow us to dumb down devices to reduce costs?”
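The disruption point can be made with a toy calculation.  If each link’s weight blends steady-state cost with the expected cost of a disruption, the “cheapest” path can flip.  The numbers and the weighting formula below are invented purely to illustrate the idea, not any routing protocol’s actual metric.

```python
# Sketch of "least-cost" routing that prices in disruption: each link
# gets a composite weight, and path choice can flip once disruption
# risk is included.  Numbers and formula are illustrative only.

def link_weight(steady_cost, failure_prob, disruption_cost):
    # expected cost = running cost + expected cost of a disruption
    return steady_cost + failure_prob * disruption_cost

def path_weight(links):
    return sum(link_weight(*link) for link in links)

# Two candidate paths: A is cheap to run but fragile, B costs more
# in the steady state but rarely breaks.
path_a = [(1.0, 0.10, 50.0), (1.0, 0.10, 50.0)]
path_b = [(3.0, 0.01, 50.0), (3.0, 0.01, 50.0)]

best = "A" if path_weight(path_a) < path_weight(path_b) else "B"
```

On steady-state cost alone, path A wins easily; with disruption priced in, path B is the least-cost choice, which is exactly the shift in thinking the paragraph describes.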

Network Functions Virtualization is a more general response to the cost question.  If we have cheap industry-standard servers made cheaper by creating cloud resource pools, why could we not offload everything but the data-plane functions of network devices onto these pools?  The result would be cheaper handling of the connection-related features of the network.  NFV is cloud for connection-related features, in short.  It’s also obviously aimed at cutting costs, both by reducing the cost to host features and by reducing the differentiators that let a given vendor charge more for gear.
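The split NFV proposes can be sketched in a few lines: everything except packet forwarding runs as ordinary software on servers, and the device keeps only a table lookup.  The function names and data structures here are purely illustrative, not NFV specification terminology.

```python
# Sketch of the NFV split: control logic hosted on servers, devices
# reduced to the data plane (a forwarding-table lookup).  Names and
# structures are purely illustrative.

# Hosted (server-side) function: computes forwarding-table entries
# from topology and policy, the way a hosted control plane would.
def compute_forwarding_table(topology, policies):
    table = {}
    for dest, next_hop in topology.items():
        if policies.get(dest, "permit") == "permit":
            table[dest] = next_hop
    return table

# Device-side data plane: nothing but a lookup.
def forward(table, dest):
    return table.get(dest, "drop")

topology = {"10.0.0.0/8": "port1", "192.168.0.0/16": "port2"}
policies = {"192.168.0.0/16": "deny"}

table = compute_forwarding_table(topology, policies)
```

All the differentiable intelligence lives in the hosted function; the box itself is a commodity, which is both the cost appeal and the vendor threat the paragraph describes.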

SDN can’t move on cost alone, or on cloud alone, and NFV or virtual networking have the same constraint.  All of this is aimed at creating a survivable service ecosystem, and that demands a combination of revenue upticks and cost reductions.  The network can never be a real loss leader because if it is, operators start their service-layer competition with a deficit they have to make up.  That’s a deficit the OTT players would not share, and so they’d have the advantage.  No amount of new revenue will stop operators from pressuring vendors to reduce network cost.  That’s why Cisco is smart in saying they have to become an IT company.  Even in the NETWORK of the future, IT is going to be the value-creating, competitively differentiating, element.  It didn’t have to be that way; five years ago operators tried to get vendors to fix their problems.  The vendors didn’t respond, and now it’s too late to make the future of the network be about networking.  It’s all the cloud now.

How Red Hat Could be a Cloud Contender

Red Hat reported its numbers, which were good, and its stock took a nice pop in after-hours trading.  Red Hat also announced something that it’s been needing to announce for some time, a cloud strategy that goes beyond simply saying they can support some open-source cloud stack.  They’re a poster-child for the companies in both the network and IT spaces today; you need a cloud strategy or you risk irrelevance.  And it has to be a decent one.

The good news for Red Hat is that theirs might BE decent.  The company has purchased ManageIQ, which is essentially what I call a “DevOps” play, a tool provider who links applications to provisioning rules that can deploy, expand, and contract an application’s resources in the cloud.  The specific target for Red Hat is the hybrid cloud, and that makes sense for two reasons.

First, there’s zero chance that everything in IT is going to the cloud.  The rate of migration of existing applications, according to my model, will never reach even 25%.  That means that the role of the cloud has to be something far different from the role of consolidating servers or chasing nickels in cost-based substitution of IaaS for virtualization.  As we develop new value propositions for the cloud, these new applications will have to interwork with the IT assets the cloud will—CAN—never displace.  Thus, hybrids.

The second reason this makes sense for Red Hat is that they’re primarily an enterprise play.  Cloud providers tend to deal directly with the cloud stack software players, or they’re themselves founding members of standards groups and open-source projects.  Giants like VMware have gotten an edge in the hybrid world by having a play on both sides of the cloud business fence.  Red Hat can only cover one side, so it needs an angle.

The VMware reference here is also relevant, because arguably Red Hat is a competitor with VMware.  It’s not that VMware offers exactly what Red Hat does, but that both of them are courting the same buyers, and there’s really only one driver for strategic IT change.  If it’s cloud, you need to be there.  VMware’s claim to fame in the cloud space is based on two factors: virtualization incumbency and superior operations tools.  Red Hat can hardly match the first factor, so they have to go after the second.

The big question for Red Hat isn’t so much whether ManageIQ is a credible strategy to address the hybrid cloud (it is) as whether it’s a long-term value proposition.  Hybrid clouds are the way of the future, and so the question is what the architecture of the future would look like.  It’s not going to look like server consolidation and IaaS.  It’s probably going to look a lot more like SOA, and in fact in this department RED HAT HAS A LEAD over players like VMware.

SOA componentization of applications facilitates elastic deployment and it also facilitates composition-based worker empowerment.  Remember that my vision of the future is POINT-OF-ACTIVITY EMPOWERMENT through a combination of mobile broadband and the cloud.  I can build this sort of thing based on SOA, as long as I modernize the SOA processes to the new mission.  So that’s what Red Hat should be thinking about.  How can they make ManageIQ into not only evolutionary DevOps but revolutionary “SOAOps?”  If they get that right they can suddenly become a true power in the cloud, and their competitor VMware becomes a sauropod munching leaves and waiting for extinction.  Will Red Hat see the light?  That’s another of those “we’ll see” questions.
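To make the “SOAOps” notion a little more concrete, here’s a sketch of a DevOps-style descriptor that binds an SOA component to its provisioning rules, so one set of rules can deploy, expand, and contract it.  The descriptor fields and thresholds are hypothetical, not ManageIQ’s actual model.

```python
# Illustrative "SOAOps" sketch: a descriptor binds an SOA component
# to elastic provisioning rules.  Fields and thresholds are
# hypothetical, not any real product's schema.

descriptor = {
    "component": "order-entry",
    "min_instances": 2,
    "max_instances": 8,
    "scale_at_load": 0.75,   # expand above this average load
}

def target_instances(desc, current, avg_load):
    """Decide how many instances the component should run, within
    the descriptor's bounds, based on observed average load."""
    if avg_load > desc["scale_at_load"]:
        return min(current + 1, desc["max_instances"])
    if avg_load < desc["scale_at_load"] / 2:
        return max(current - 1, desc["min_instances"])
    return current
```

The point is that the same declarative rules serve both evolutionary DevOps (deploy it) and the revolutionary case (elastically reshape it as point-of-activity demand moves around).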

Google Loses its (STB) Network Edge

Gosh, somebody other than Cisco is doing M&A; that’s news in itself!  What may be even more interesting is that Google is selling off its Motorola Home business to Arris.  That creates a new competitor for Cisco, of course, but it may also speak volumes about how IP TV is going to evolve.

All the research I’ve done, and that which I’ve been able to glean from sources like Nielsen, shows that viewers who watch channelized TV (who, despite the hype, number in the great majority) would prefer to have IP video integrated with their TV delivery.  They want minimal technical fiddling to switch to viewing streamed video and they’d like to see channel guides integrated, with “personal virtual channels” that would contain offerings drawn from online programming.  The critical step of creating a unified platform would be most easily accomplished in the STB.

If an STB could “know” about online video in the same way that Roku, for example, does, then it could simply make streaming a “channel”, and it could probably also (with the help of cloud intelligence) make up a composite channel guide and virtual channels.  This would seem to be a great strategy for Google to follow, getting them to a position where they’d be able to drive streaming video, but clearly it isn’t. Why not?
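The composite guide itself is a simple merge.  Here’s a sketch of the idea: channelized listings and online offerings go into one guide, and a “personal virtual channel” is just a filtered view of it.  All the channel names and titles are invented for illustration.

```python
# Sketch of the composite channel guide: streaming becomes just
# another "channel", and a personal virtual channel is a filtered
# view of the merged guide.  All data here is invented.

linear = [("ch2", "Evening News"), ("ch5", "Movie Classics")]
streaming = [("svc-a", "Indie Documentary"), ("svc-b", "Cooking Show")]

# One guide, regardless of delivery mechanism.
guide = [{"source": s, "title": t, "kind": "linear"} for s, t in linear]
guide += [{"source": s, "title": t, "kind": "stream"} for s, t in streaming]

def virtual_channel(guide, keywords):
    """A personal channel: every guide entry matching the viewer's
    interests, wherever it's delivered from."""
    return [e["title"] for e in guide
            if any(k in e["title"].lower() for k in keywords)]

picks = virtual_channel(guide, ["documentary", "cooking"])
```

The hard part isn’t the merge, of course; it’s getting the metadata about online video into the STB at all, which is where the cloud intelligence comes in.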

First off, people don’t buy their own STBs.  They get them from network operators who are normally both their source of the Internet (unprofitable for the operator) and channelized TV (profitable).  So how interested are these operators in bridging the consumer to a new consumption model or offering features that let OTTs compete with operator VoD or PPV?

Second, the model of streaming supplementing channelized video isn’t nearly as attractive to Google or other OTTs as the notion (however unrealistic) that channelized video could be DISPLACED by streaming.  Does Google want to hold Comcast’s or Verizon’s hat forever?  Google TV, and Google’s FTTH venture, are aimed more at validating an IP-OTT model.  Motorola Home is a distraction.

Arris is likely to take the business in much the same direction Cisco has with Scientific Atlanta, and that offers a reason for Cisco’s interest in selling Linksys.  If you want to be a player in the STB space you have to stop thinking about the consumer and start thinking about the CUSTOMER, who happens to be the network operator.  So Arris will focus on cable infrastructure as an end-to-end ecosystem, as Cisco has.  That raises interesting questions down the line; can they hope to compete with Cisco without offering more transport gear than they do now?  And did Google make a bad choice by selling Motorola’s STB business or by buying Motorola?

Actually they apparently made a bad choice period.  I never believed that the Motorola deal was about Google getting patents, and now it seems maybe it wasn’t, because Arris gets a license.  Was it about building handsets and tablets?  Not so far.  Underneath it all, Google is groping for a vision of the future just like Cisco is, just like HP is, and just like Apple is.  Buying companies is an expensive way to test the market’s waters.  Sitting on your duff is an expensive way to judge the pace of market change too—you’re too late if you get rolled over.  I think we’ve underestimated just how convulsive a period of market change we’re entering.  I think vendors have been particularly egregious in their misjudgments.  I think they’re starting to pay the price, and they’ll continue to do so through 2013.

Speaking of 2013, our Netwatcher Annual Technology Forecast issue will be released this weekend, and as usual this will have our forecasts for next year, a look at the world beyond next year, and survey results outlining what we found from business buyers of technology and services providers.  The winds of change can be seen in the surveys, and so they’re accommodated in the forecasts.  We may have a wild ride next year.

We will not be publishing on Monday December 24th, so I want to wish all my readers and clients the very best for this Holiday Season.  I’ll resume blogging on the 26th.

The Maybe-Secret-Mission of BroadHop

Cisco’s still on an M&A tear, this time buying up BroadHop, a company who provides specialized policy control software, particularly what’s now called the Policy/Charging Rules Function (PCRF) and was formerly known as the Policy Decision Point or PDP.  These are elements of a mobile multimedia infrastructure designed to provide for service quality control, and that’s led most to speculate that Cisco “needed” this for their mobile strategy or simply wanted to keep all the money in the deals.  BroadHop was a supplier to Cisco’s ever-growing carrier WiFi strategy.

I think there may be more to this.  Cisco’s not typically a company who buys up something that they already get on a simple partnership basis.  Anyway, the whole mobile policy control process is based on 3GPP standards, which would hardly seem a margin-building framework.  So what might this be?  I think BroadHop might just be a link in Cisco’s complicated (convoluted?) SDN evolution.

No, BroadHop doesn’t do OpenFlow.  But if we think of “SDN” as a black box, then the way it would be viewed by users would be based on its properties and not its implementation.  Benefits build business cases, and technologies are relevant only to the extent that they impact benefits and costs.  In the case of SDN, Cisco has (correctly, I think) assessed the buyers and found that what they want from SDN is application control of network behavior.  Whether that’s done with OpenFlow or by having some magical, mystical process invoked is of much less concern.  So Cisco, who doesn’t like the notion of having all of networking subsumed into a bunch of commodity forwarding engines fed over OpenFlow, is touting an SDN-over-what-I-sell-you approach.  That’s logical.

Just for starters, the fact that Cisco doesn’t want to build its SDN vision on OpenFlow should give pause to those who think it’s buying BroadHop for mobile reasons.  Remember those 3GPP standards?  Why is OpenFlow a threat and not those from 3GPP?  But if you dig deeper you can see another point.  If you need to control network behavior down at the bit-moving level and you don’t want to be branded with the scarlet letter “P” for “proprietary”, then you need to pick an industry-sanctioned approach.  The 3GPP Policy and Charging Control (PCC) framework fits that bill, and the fact that everyone thinks of it as a pure mobile play is all the better.

Suppose that Cisco applied BroadHop principles to EVERYTHING in networking and not just to mobile?  Now we have a policy-based mechanism for controlling connectivity and QoS, and that’s pretty darn close to what users would call a “software-defined network”.  If we add in the management/telemetry smarts that Cisco has been buying up, you get a picture of a new layer of service control—the policy and telemetry layer.  That’s the functionality that links to the “software” and defines the network.  Absent BroadHop, there’s no structured mechanism to turn central policies into distributed network handling decisions—except OpenFlow.  With a BroadHop PCRF, and with Cisco implementations of Policy/Charging Enforcement Functions (PCEFs) on its network devices, Cisco can communicate central knowledge of network status (from telemetry) and software needs (from DevOps tools and virtual-network interfaces like OpenStack Quantum) to the masses.  Furthermore, they can wrap their program around other vendors, because who among them will refuse to support the 3GPP mobile multimedia initiatives?
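The PCRF/PCEF pattern described here is easy to sketch: one central function turns network status and application needs into decisions, and many distributed enforcement points apply whatever was pushed to them.  The rule structures and class names below are illustrative only; they are not the 3GPP PCC schema or any BroadHop interface.

```python
# Sketch of the PCRF/PCEF pattern: central decision, distributed
# enforcement.  Rule structures are illustrative, not 3GPP schema.

# Central PCRF: turns application needs plus network status
# (telemetry) into per-flow decisions.
def pcrf_decide(app_class, congestion):
    if app_class == "interactive" and not congestion:
        return {"qos": "low-latency", "gate": "open"}
    if app_class == "bulk" and congestion:
        return {"qos": "best-effort", "gate": "throttle"}
    return {"qos": "best-effort", "gate": "open"}

# Device-side PCEF: enforces whatever the central function pushed.
class Pcef:
    def __init__(self):
        self.rules = {}

    def install(self, flow_id, decision):
        self.rules[flow_id] = decision

    def handle(self, flow_id):
        return self.rules.get(flow_id,
                              {"qos": "best-effort", "gate": "open"})

pcef = Pcef()
pcef.install("flow1", pcrf_decide("interactive", congestion=False))
```

Nothing in this pattern is inherently mobile; that’s the point.  Apply it to every flow in the network and you have central software shaping distributed behavior without a packet of OpenFlow.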

Obviously I can’t know what’s in Chambers’ mind here.  Obviously there may be a value to mobile policy control that Cisco sees as sufficient to justify the deal.  But I’m skeptical of a tactical acquisition in a flood of strategic ones.  Let’s take time out from examining the Theory of Relativity and toss a ball around, then go back and invent Unified Field Theory?  Anyway, Cisco does say that BroadHop fits into its ONE architecture, which is where its SDN stuff goes.

Nothing is said on Cisco’s blog about SDN with respect to the deal, and that’s also interesting.  They’ve not been shy about SDN-washing other M&A.  Instead they gave the tired old clichés about Internet traffic expanding (at least they didn’t say “exponentially” or I’d have puked).  So maybe BroadHop is the secret sauce, the thing that tips the whole strategy to competitors.  Or maybe I’m dreaming.  We’ll just have to see how this one develops.

For Cisco, the big question is whether BroadHop specifically, and their SDN strategy in general, is defending the right fort.  Oracle reported their numbers today, and they beat estimates, sending their stock up after hours.  The interesting thing was that despite their CEO’s defense of the Sun acquisition, Oracle’s hardware sales declined by 16% y/y.  Oracle’s servers run Oracle’s software, and even there, software can’t pull them through.  If Cisco’s SDN strategy is a boxes-first strategy in disguise, can that strategy succeed even if it’s perfectly executed?


Linksys and Optics: Common Ground

Cisco’s decision to buy Linksys 9 years ago was hailed by most as an indication the company was going to expand into the hotter consumer broadband space.  It was clear to some at Cisco even then that margins on the business were going to be low, and so Linksys was a kind of arms-length business unit.  Now apparently it’s no business unit at all; Cisco is rumored to have engaged a banker to look for bidders to take Linksys off its hands.  Obviously consumer-market TAM isn’t what it was cracked up to be.

Alcatel-Lucent, as part of its efforts to sell off some assets, has provided more information on its optical business, and business analysts were astonished by the low margins they were facing there.  Here we are in the age of broadband, with everyone seemingly demanding bits left and right, and you can’t make money selling capacity?  Obviously Internet traffic isn’t the driver of build-out that it was cracked up to be.

You can see the theme here.  The industry hasn’t been particularly good at figuring out what’s going to be valuable, and that has created some serious issues and disguised some others.  Everyone is paying a price for a simplistic view of the market.

No mass market can sustain margins, simply because you can’t achieve a mass market except by the tightest pricing and most extensive advertising and distribution.  Cisco was right in thinking that the consumer broadband space would be hot, but wrong in thinking that the heat could be exploitable by a company who has probably never in its history wanted to be in a space with razor-thin margins.

Cisco and the cloud?  Under my theory, Cisco’s success with cloud-exploitation depends on its success in promoting the model of the cloud that I’ve been advocating—not because I’ve been advocating it but because a cloud whose market is those who want to save money is a cloud on a direct course to commoditization.  You can’t sell commodity services from high-margin infrastructure.  Cisco needs to make its cloud vision into what it says it wants to have–a software vision.

NSN’s optical decision and Alcatel-Lucent’s recent optical disclosures are also examples of narrowthink.  First, Internet bandwidth has been dipping in unit cost and profit for over a decade, at an average of about 50% per year.  As I noted above, you can’t sell low-margin service from high-margin infrastructure, so operators are building out by doing everything in their power to pressure prices down.  That hits the optical space in a direct way, obviously, because it means only the cheapest gear gets sold.  But the indirect force is the most insidious, and it’s the metrofication I’ve been talking about.

Nobody makes money on bits any more.  So anyone who makes money has to move up the value chain, to selling content, etc.  We wouldn’t have a prayer of having revamped cable networks or FTTH absent the delivery of channelized TV to make the process pay.  The Internet would never be able to do that.  But valuable content experiences are few in number, so you can cache the content locally to avoid rolling your access bandwidth needs deeper into the network.  Twenty years ago, the core had more combined capacity than the edge.  Now it’s reversed, and it will only get further polarized in favor of metro as we go.

What does this have to do with optics?  Well, metro optics is very different because it’s a pure aggregation play, and because the fiber paths follow a logical tree structure.  The majority of fiber in the metro doesn’t need that fancy DWDM stuff because the total downstream traffic won’t justify it.  We aren’t doing less optics; in fact, we’re deploying more fiber miles.  But we’re doing more CHEAP optics for the metro mission.  The application of SDN principles to metro to create a virtual-single-layer opto-electrical union is driven entirely by the need to reduce deployment costs for wireless cells (WiFi and 3/4G).  Core networking will never again be a good business, optics or routing.
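The capacity reversal between edge and core falls out of simple arithmetic once caching enters the picture.  Here’s a toy model of a metro aggregation tree; every number in it is invented for illustration, but the shape of the result holds for any plausible values.

```python
# Toy model: a metro aggregation tree where local caches intercept most
# content, so only a fraction of edge demand ever reaches the core.
# All figures below are invented, illustrative assumptions.

edge_nodes = 1000          # access/aggregation points in one metro area
demand_per_edge_gbps = 10  # peak downstream demand at each edge node
cache_hit_rate = 0.8       # share of traffic served from metro caches

combined_edge_capacity = edge_nodes * demand_per_edge_gbps
core_facing_traffic = combined_edge_capacity * (1 - cache_hit_rate)

print(f"Combined edge capacity: {combined_edge_capacity} Gbps")
print(f"Traffic reaching core:  {core_facing_traffic:.0f} Gbps")
```

With an 80% cache hit rate, the core carries only a fifth of the aggregate edge demand, so edge capacity dwarfs core capacity and cheap, tree-structured metro optics carry most of the load.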

Is this bad?  That depends on how you define badness.  It’s certainly going to give the consumer more of what they want—at least for a while—but it’s also going to change the vendor landscape and the provider landscape forever.  The Internet as a seamless pool of bandwidth won’t exist; some will miss the old populism of publishing when they find that those who can’t cache are second-class citizens.  But all of this is a consequence of those simplistic views.  We can’t get everything we want, and everything that’s interesting isn’t real.

My Take on the Cloud of 2013

GigaOM has published its vision of the cloud for 2013, and it’s a decent document overall.  Still, there are places I think need further comment, and that’s a decent way to open the last week before the holidays, and for many the last working week of 2012.

The first point in the GigaOM commentary is that this is the year that the public cloud has to be proved for enterprises.  I agree, but I think it’s going to be more complicated than dealing with security or reliability concerns.  Anyone who thinks all the current “business critical” apps are going to the public cloud is dreaming, first of all.  The key is to find the correct role for the public cloud, because to insist it’s the be-all-end-all is to end it, in credibility terms.  The success of the cloud won’t happen in 2013, but what will likely happen is that we’ll come to understand that hybridizing SOA applications and creating a cloud-based virtual operating system that envelops enterprise and public resources equally is the only approach that can win.  Maybe that will be enough to get that approach moving…finally.

It WILL be a make-or-break year for HP, largely because of my first point.  HP needs, more than anyone, a vision of “the cloud” that’s accommodating to everything from tablets to servers, public to private.  If you’re the biggest IT company then you need really big visions to turn yourself around.  HP has made the mistake so many companies make, which is to let themselves get compartmentalized in a marketplace where the buyer wants ecosystems.  You can’t solve problems one box at a time, no matter what the box is.

OpenStack is also facing a big year, and again because of my first point.  Arguing over who can produce the best IaaS is like arguing over how many angels can dance on the head of a pin.  That’s not where the cloud’s success will be.  OpenStack doesn’t mandate a specific virtualization model but so far it’s pretty tightly coupled with IaaS as a model of service.  If OpenStack can’t extend itself to the PaaS space, if it can’t extend platform services and create that virtual operating system, then someone will do it above them.  If that happens, then OpenStack is simply part of the plumbing.

The next point, I’d argue, reinforces my first.  Infrastructure extends beyond the walls of the data center.  It doesn’t eliminate the data center, and it just as much extends the cloud INTO the data center as it does pull the data center out into the cloud.  There is only one IT architecture in the future, and that architecture doesn’t know boundaries or administrative domains or technology barriers.  Because the cloud is what conceptualizes the new model, it’s up to the cloud to create it.  What current cloud technology does is just mimic the data center.  Imitation is the sincerest form of flattery, but it’s a lousy way to create a high-margin, high-value future.  The cloud has to find its wings.

Software-defined-everything doesn’t get easier, but it’s the inevitable result of the need to create that boundary-less infrastructure and that virtual operating system.  The purpose of SDN, for example, is to make the network cooperate explicitly with the IT model instead of serving as a dumb independent substrate.  We can’t build distributed platforms without control of what does the distributing.  But the goal of SDN has to be to slave network behavior to the cloud model, and absent that model there’s little chance that SDN can reach its full potential.  Like everything else in the world of the cloud, SDN depends on finding its place in a glorious whole.  Otherwise it’s an un-summed part.

So given that all these predictions hang together on the presumption that there’s a universal cloud out there somewhere, why don’t we see more of it?  Part of the answer is that everyone is trying to both protect their base and sustain near-term momentum.  You can’t sell a buyer a box that has to live for five years if that buyer thinks the future is going to be radically different from the present in some unknown way, at some undetermined point.  Thus, we ask buyers to accept evolution in the abstract only and bet that current technology will somehow lead in the right direction, even if we can’t define just what direction that is.

We are approaching the grandest of all IT visions with the most near-sighted of all possible mindsets as we enter 2013.  Even our vision of next year is mired in our limited perceptions, our limited aspirations.  Grand dreams created every revolution in computing and networking in the past, not scrabbling in the dirt for a few crumbs by hosting server consolidation and telling ourselves it will matter more next year.  Do you want me to say nice things about your company next year?  Then show me your dreams.

Can Alcatel-Lucent Be Networking’s Comeback Kid?

Alcatel-Lucent has secured the financing it needed to shore up its balance sheet and prevent a messy problem, but the company still faces the same demons it’s faced since the merger.  Those problems have been enough to stall a giant with the best strategic influence of all network vendors and the best strategic product set.  Can they now be solved?

In our surveys of strategic influence, Alcatel-Lucent is the runaway winner in terms of raw score, scoring a full third higher than their nearest competitor.  They’re also the most consistent, with good scores in every aspect of network infrastructure.  Finally, they’ve managed to stay engaged on every single major hot monetization issue in the market.

In product terms, Alcatel-Lucent has the only complete service-layer architecture, the best framework to link PaaS cloud deployment to service features, the only respectable IMS evolution strategy, a good content story and excellent mobile story, and what’s emerging as a powerful cloud and SDN story.

So why aren’t they kicking posterior?  Here’s my insight.  Last night, I dreamed of umbrellas.  Why?  Because when you’re focused on picking low-hanging fruit one tree at a time you create a forest of trees that look like umbrellas.  The Alcatel-Lucent merger occurred just as the industry was transitioning from one that tolerated cost to one that was obsessed with controlling it.  They were immediately faced with the challenge of creating value for their shareholders, and the obvious path of consolidation pitted every business unit against every other one in proving its immediate value.  That focused everyone on grabbing those low apples.  You don’t need a strategic ladder if you’re stuck on the ground, so they never really managed to tell a story that valued all the leadership positions they had.  Sales people were focused on doing deals not building relationships, and that helped for a while but built up strategic disengagement like genetic load.

Remember that third-higher score I mentioned?  Well, the problem is that it’s been dropping every single survey for the last three and a half years.  Remember those strategic assets they have?  The problem is that while customers know about the parts, they don’t understand the assembly instructions.

There’s always good and bad news, so let’s get to that.  The bad news is that when you mess something up for half a decade or more, there’s a powerful reason to believe you’re structurally incapable of fixing it.  One reason may be “Bell Labs”, which gives Alcatel-Lucent the heart of a market-disconnected geek and turns its innovation culture into abstract experimentation.  If ever there was a company that needed to turn the product management whips on the R&D masses, Alcatel-Lucent is that company.

The good news is that their problems are truly cosmetic.  The right product management and marketing and positioning and articulation could cure everything in two quarters.  A trivial example is Old Spice.  If you can sing pretty in today’s market, you can BE pretty.

The specific place where Alcatel-Lucent needs to focus is the cloud, and most specifically the cloud/network boundary.  There are compelling opportunities with SDN and NFV, created in no small part because the interest has far outrun the reality.  Alcatel-Lucent is in a position to create an SDN story that easily migrates upward into being a story about the service architecture that will support both future network services and future cloud applications.  Such a story, if compelling, would make Cisco’s server position much less significant as a tool in gaining influence and engagement.  Cisco itself is likely to be coy in this area to protect its base, but if Juniper does something compelling (a big IF) with the Contrail assets and if they can position it in an arresting way (a bigger IF) they could force Cisco’s hand.  And they could also limit Alcatel-Lucent’s opportunity.  For Juniper, you can’t knock off number one if you can’t get past number two.

So to go back to Alcatel-Lucent and its prospects, it’s simply a matter of will.  I know of no company whose problems could be solved so easily, whose future could be so bright.  They are the only player in networking who could truly derail Cisco’s plans and who could almost single-handedly reshape the industry in a stroke.  I would love to think that they would do that, but as Alcatel-Lucent (and competitor Juniper) has shown in the past, there is no inertia so strong as company-culture inertia, and none so destructive.