Learning SDN by Picking Blackberries

Here’s a question to ponder: “How is Blackberry like most vendors’ SDN strategy?”  The answer is “too late to the revolution”.

RIM, which changed its name to “Blackberry” to reflect the market reality of where brand loyalty lies, is looking to re-launch more than a name.  They have the almost-insurmountable task of making their new phone and new OS relevant in a market that’s already strongly polarized into two camps—and Blackberry isn’t either one.

All of this stems from RIM (as it was then) committing what I think is the cardinal-and-yet-all-too-common market sin: holding back on an aggressive move in a changing market.  Had RIM simply announced something iPhone-like when the iPhone first came out, there might well have been no Android, and certainly no crisis on the company’s horizon.  Now?  Well, in some sense it’s too late.

Some pundits are saying that the key to Blackberry’s future will be in luring app developers.  Not true.  There’s unlikely to be any path for Blackberry that leads to developer success approaching Android, much less Apple.  There’s a developer dimension in any successful re-launching of Blackberry, but the key is the cloud.

Right now, Blackberry’s competition is using the cloud for simple storage-and-sync applications.  Blackberry can’t even match that, right now, because they say they’re working on their own approach.  Well, that approach had better be a much more “cloud-as-a-feature-host” model, because that’s the only way it will matter.  Sync has been done, like apps.  What has not been done is a handset that uses the cloud as a functional extension, not a memory extension.  That’s what Apple should do and likely won’t do (till it’s too late) because Apple doesn’t want to devalue its own handset-is-cool model. For Blackberry, immobility of an opponent is an invitation to attack…or it should be.

Blackberry could do truly cloud-hosted UC, setting up an email and IM/Jabber/Text client that would mediate all of the user’s channels and let Blackberry stand at the intersection.  It sort of does that now with its single view of mail or IM regardless of accounts.  But instead of pulling all the junk to the device, Blackberry should make the connections inside the cloud, construct a policy-based view, and export the view to the device.  If they did that they would be able to selectively support even Apple or Android devices (via HTML5).  They could then look at other applications that could benefit from having a cloud agent—which is nearly any productivity app.
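
To make the “construct a policy-based view in the cloud” idea concrete, here’s a minimal sketch, in Python, of the kind of mediation agent I mean.  Everything in it—the message shape, the channel names, the policy rule—is invented for illustration; none of it is a Blackberry API.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Message:
    channel: str   # "email", "im", "sms" -- illustrative channel names
    sender: str
    body: str
    urgent: bool = False

def policy_view(messages: List[Message], on_mobile: bool) -> List[Message]:
    """Build the exported view in the cloud: the device never sees the
    raw per-channel feeds, only the result of this policy."""
    if on_mobile:
        # Hypothetical rule: on a small screen, suppress bulk email and
        # pass everything urgent or conversational.
        return [m for m in messages if m.urgent or m.channel != "email"]
    return sorted(messages, key=lambda m: m.channel)

# The cloud agent pulls from all channels; the device gets one view.
inbox = [
    Message("email", "boss@example.com", "Status report?", urgent=True),
    Message("email", "newsletter@example.com", "Weekly digest"),
    Message("im", "colleague", "Lunch?"),
]
for m in policy_view(inbox, on_mobile=True):
    print(f"[{m.channel}] {m.sender}: {m.body}")
```

Because the exported view is just data, delivering it to an HTML5 client on an Apple or Android device is no harder than delivering it to a Blackberry, which is exactly the selective-support opening described above.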

Speaking of apps, there’s that app dimension I promised was there.  Blackberry needs to refine its target market first, though.  You can’t just say “I’m a consumer handset”; that’s like being a one-size-fits-all shoe.  There are segments of the consumer market that haven’t committed convincingly to Apple or Android.  Identify those segments, Blackberry, and then find the apps THOSE SEGMENTS want, without trying to woo every app developer.  Blackberry’s original success was built on one app—email.  Going from one-app to all-apps is too big a step.

What does this have to do with vendor SDN strategies, you may ask.  Well, go back to that cardinal sin I opened with.  You can’t come late to a revolution.  It’s not easy to analyze the SDN strategies of the key vendors in the network equipment space, but it’s fair to say that so far 1) they lack details, 2) they’re more defensive than revolutionary, and 3) they’re broadening their SDN appeal by broadening what “SDN” means rather than broadening their opportunity through functional advances.  And remember my earlier blog; you can’t make a market bigger by segmenting it differently.

Most vendors are in SDN-neutral here.  Cisco did nothing much at its partner event, blowing a big chance.  Juniper did NFV instead of SDN at its event.  Alcatel-Lucent seems to be following Cisco’s detailed-at-the-edge-vague-inside approach.  Ericsson has articulated the pieces of SDN and seems to be embracing a radical OpenFlow model in its experiments and conference demos, but it’s not productized.  Huawei has demonstrated prototype applications in optical-SDN control.  HP has delivered an OpenFlow controller and some SDN applications, but seems curiously reticent about pushing its own, potentially leading, position.  NSN, with its new focus on wireless, seems to be working to find out whether SDN matters to it, and if so how.  Smaller vendors like Brocade, Extreme, and Infinera are more SDN-proactive, and of course the startups are positively SDN-strident.

Is there an “Apple” emerging here in the SDN space to relegate a major vendor to “Blackberry” status, far behind and hoping to re-launch itself?  There could be.

VMware’s Cloud Plans: Cloudy?

VMware reported its quarterly numbers, which were light in terms of revenue growth, and this caused the stock to take quite a tumble in after-hours trading.  The experience of VMware is interesting, I think, because it illustrates a couple of market realities we forget all too often.

Reality number one is that market segmentation doesn’t create actual markets.  Reality, however we divide it up, has to equal the same thing in the end.  If we look at virtualization, for example, the reality is that it’s been driven almost totally by server consolidation.  The server explosion hurt capital budgets a little but hurt operations costs a lot.  Virtualization capitalized on that.  The problem is that if you look at IaaS-style cloud computing you find a lot of commonality with virtualization.  The top things that drive IaaS project success are low utilization of resources by the target application, and high unit labor costs for support.  Those are the same things that drive virtualization.
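
Those two shared drivers reduce to simple arithmetic.  Here’s a sketch with hypothetical numbers; none of these figures come from any survey or report.

```python
def annual_savings(servers: int, utilization: float,
                   support_cost: float, server_cost: float) -> float:
    """Toy consolidation model: pack the same work onto fewer boxes
    and bank the labor and hardware of every box eliminated."""
    needed = max(1, round(servers * utilization))  # post-consolidation count
    eliminated = servers - needed
    return eliminated * (support_cost + server_cost)

# Low utilization and high unit labor cost dominate the result -- and
# the same two inputs dominate an IaaS business case.
print(annual_savings(servers=100, utilization=0.15,
                     support_cost=4000.0, server_cost=1500.0))  # 467500.0
```

Swap “consolidation” for “IaaS migration” and the same two inputs drive the answer, which is exactly the commonality problem.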

The server-consolidation problem is a problem of the past, in a way; reality number two is a problem of the future.  VMware believes that its future success will be driven by the software-defined data center, hybrid cloud deployment, and direct-to-user SaaS services.  I don’t disagree, but these concepts are no less interdependent than the virtualization and IaaS markets.

VMware’s purchase of Nicira makes it a natural player in the virtual networking or software-defined data center play.  The problem is that this is something most valuable to cloud providers because the primary application of a virtual-network-segmented cloud data center is the separation of tenants.  VMware needs an APPLICATION-SEGMENTATION not tenant-segmentation strategy for its data centers, or it can sell its solution only to cloud providers.

Application segmentation plays to hybrid clouds too.  If I’m going to hybridize a cloud, I need to decide what resources are public, what are private, and what my strategy for cross-allocation of applications to this dual pool would be.  I also have to accommodate the componentization of the applications and the inter-process traffic.  That’s a network problem, but also a component addressing and workflow problem.  Orchestration of SOA or SOA-like components isn’t part of VMware’s current portfolio.
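
Here’s a rough sketch of what an application-segmentation, cross-allocation decision might look like, with invented component attributes standing in for real workload data:

```python
from dataclasses import dataclass

@dataclass
class Component:
    name: str
    sensitive: bool      # regulated data stays private in this toy policy
    peak_to_avg: float   # burstiness; bursty loads suit public capacity
    chattiness: float    # inter-process traffic intensity, 0..1

def place(c: Component) -> str:
    """Toy policy assigning one application component to a side of
    the hybrid pool."""
    if c.sensitive:
        return "private"
    # Bursty, loosely coupled components are the classic public fit;
    # chatty ones pay a network penalty for crossing the boundary.
    if c.peak_to_avg > 3.0 and c.chattiness < 0.3:
        return "public"
    return "private"

for c in [Component("web-front", False, 5.0, 0.2),
          Component("billing-db", True, 1.2, 0.8),
          Component("report-gen", False, 4.0, 0.1)]:
    print(c.name, "->", place(c))
```

Note that the hard part isn’t the placement rule, it’s the orchestration and addressing of the components once they’re split—the piece missing from VMware’s portfolio.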

And direct-to-user SaaS collides with all of this.  First, SaaS tends to be sold mostly to SMBs that lack a strong centralized IT culture, or into isolated application areas.  The buyers aren’t IT professionals, so how do they deal with the issues of hybridization?  If they’re going to SaaS because they don’t have centralized IT, what is it that they’re hybridizing their public cloud services with?  Are SaaS services easily hybridized in any case?  These are all questions that need to be answered for buyers, and so VMware will need to answer them.

Looming over all of this is the Big Problem of the cloud.  Why, if the cloud is so wonderful, would we continue to write applications that aren’t cloud-specific?  Server consolidation and those basic IaaS-type drivers are valid only for as long as there remain applications that expect to run on single servers.  Will developers blindly write for those old server models, and buyers continue to install them, just so they can migrate to virtualization or IaaS?  If we know clouds equal sharing of resources in an orderly way, if we know that clouds mean creating distributed resource pools across organizational and public/private boundaries, why wouldn’t we write apps designed for that?  And once we did, would that not diminish the value of the cloud and virtualization tools that are designed to fix those old single-server problems?

First and foremost, what VMware needs to have is what everyone in the cloud needs to have, and what I’ll assert none have today.  That’s a vision of the cloud AS AN END-GAME.  We’re migrating to the cloud, and we’re stuck in how we’re getting there and not what we’ll find when we arrive.

The Right Kind of Cloud Vision

Anyone who has read my blog for a while knows that I’m a believer in the thesis that the cloud changes everything.  Yes, I believe it’s over-hyped (what isn’t these days), but it represents the reformulation of the partnership between networking and information technology, and in particular it provides the framework for us to direct vast information resources to what I’ve been calling the “point of activity”.  Whatever changes are driven in networking and in IT will be driven by the cloud.

But that, of course, means the cloud has to be driving somewhere, and THAT in turn means we have to bull through the crap to come to terms with what cloud computing will really mean and really do.  That’s critical in assessing who’s really doing it, or helping to do it.

We’ve had shared computing for longer than most who read this blog have been alive, much less been in the space.  I worked on a “timesharing” computing project back in the ‘60s, for example.  The great majority of those who have an Internet presence have always relied on “hosting”.  My point is that when the concept of cloud computing first came along, it was really supposed to be a new model, not a name that people would apply retrospectively to all manner of stuff that had been going on for decades.  Yet today we read about the “cloud market” in volume terms that can only be justified by a presumption of mental disorder or by the inclusion of a bunch of “stuff-that-wasn’t-the-cloud-but-is-now-because-it-makes-a-better-report”.

Why all the hype?  The total incremental revenue opportunity for the “real cloud” is nearly the same as the total of current provider revenues.  And the players that will realize this opportunity?  Well, let’s look at the Internet today.  We have Google and Netflix and we have ISPs.  Who’s realizing the content opportunity?  No, I’m not saying that the carriers are incapable of being cloud players, I’m saying that the opportunity in the cloud space is up at the higher layers, with “software” and “experiences” as a service.  Sure you can build all that stuff on IaaS or PaaS, but until you do you’re scrabbling for the scraps, profit-margin-wise.  Realizing the cloud’s potential is going to be about CLOUD APPLICATION DEVELOPMENT and not moving stuff to the cloud.

I’ve talked about the fact that realizing cloud opportunity means creating what’s effectively a virtual OS in the cloud.  Let me get more specific on what would be needed.

First, a recognition that the software framework for the real cloud opportunity will look a heck of a lot more like SOA than like virtualization.  I’m not saying that virtualization might not be used for resource partitioning, but we don’t write apps to be virtualized.  We DO write SOA apps to be componentized and distributed, and the sooner we realize that’s what the cloud of the future is doing, the better.
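
As a reminder of what “written to be componentized and distributed” means at the smallest scale, here’s a self-contained sketch of one stateless component behind an HTTP interface; any number of copies can run anywhere in the pool.  It’s a generic illustration, not any vendor’s framework, and the business logic is a placeholder.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class PriceQuote(BaseHTTPRequestHandler):
    """One functional component: stateless and network-addressed, so an
    orchestrator can run any number of copies anywhere in the pool."""
    def do_GET(self):
        quote = {"sku": self.path.strip("/"), "price": 42.0}  # stand-in logic
        body = json.dumps(quote).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # Every instance is equivalent; the address, not the box, is the identity.
    HTTPServer(("0.0.0.0", 8080), PriceQuote).serve_forever()
```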

Second, a model to visualize how individual experiences or services are built in a “virtual container” with secure internal communications and then delivered into the outside world using an open network.  The OpenStack Quantum model is a good start for this, but it’s not complete in describing how the in-container and from-outside views of the application components and resources are isolated and how that would scale profitably.

Third, a model of federation across these virtual containers.  Applications and experiences don’t live in a vacuum, they’re created from partnerships and delivered in complicated user-orchestrated behavioral symphonies.  How do the pieces get identified, trusted, and linked?  How can players provide packaged capabilities or draw on them?  Will we create programmatic chaos and then expect unity here?  The right federation approach makes black boxes and interfaces the way we build experiences.  The wrong approach makes every experience a custom development.
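
The “black boxes and interfaces” point can be stated as a contract.  Here’s a sketch, with invented method names, of the minimum a federated capability would have to expose:

```python
from abc import ABC, abstractmethod

class FederatedCapability(ABC):
    """Hypothetical contract: if every partner exposes exactly this,
    experiences compose across providers without custom development."""

    @abstractmethod
    def describe(self) -> dict:
        """Machine-readable statement of what the capability does."""

    @abstractmethod
    def authenticate(self, credentials: dict) -> str:
        """Establish trust; returns an opaque session token."""

    @abstractmethod
    def invoke(self, token: str, request: dict) -> dict:
        """The black box itself: inputs in, outputs back, no internals."""
```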

Fourth, a model of the network that supports all of this.  SDN and NFV are attempts to frame a new network model, but neither of them is grounded solidly enough in the reality that absent the cloud and the revenues and services it offers, there’s no economic framework to pay for the changes and we’re back to grubbing nickels and dimes in cost savings to pay for a lunch.

Finally, devices.  An optimum cloud host isn’t a standard server, or a virtualized server.  What is it?  An optimum cloud-consuming appliance could be a current tablet or smartphone only if you believe in a happy accident.  What would that gadget be DESIGNED to be like?  And what’s left of network devices when you virtualize all your network functions and centralize all your route control?  Yes these devices are in all kinds of different spaces, but they are either “one” to the cloud or we don’t have the cloud vision right at all.

Cisco: Out of Consumer and into…

Last week, Cisco turned away from one of its most important-at-the-time market-expanding initiatives when it sold Linksys off to Belkin.  It was the latest in a long string of retreats from the consumer space, the market sector that’s clearly the largest.  Some think that the retreat was a bad idea, but I’m not sure that’s the case, and I think that a Cisco acquisition also made last week demonstrates how Cisco now sees its future.

Consumer broadband was clearly on a roll when Cisco bought Linksys, and at the time it was certainly a useful acquisition, but I think there was one fundamental question Cisco needed to answer when it made the deal: “What next?”  The problem with home routers and DSL modems is that they’re invisible.  Some carrier support people told me a couple of years ago that most users with network problems couldn’t even find the devices to see if they were plugged in or turned on.  Here’s a Marketing 101 question for you: “What features differentiate an invisible device?”  That should have been a Cisco question.  They needed to make the home network the hub of VISIBLE services, immediately.  They didn’t do that, and so the devices commoditized.  At this point it’s too late to get a do-over, so you may as well cut your losses, which is what Cisco is doing.

The thing is, the notion of “visible services” is still valid.  Now, though, the focus of those visible services is inside the network—the cloud.  Furthermore, this notion of visible services is combining with the explosion in the number of on-network devices and the mobility of those devices to create a network of bewildering potential complexity.  Repeating a quote by Larry Page on the Google earnings call, “We are living in uncharted territory”.  Darn straight, and “uncharted” is only a step away from “prohibitively expensive”.

Cisco says it wants to be an IT company, but that’s a little white lie.  They want to be a network company with ownership of the increasingly large market space created by the intersection of the network and IT.  They want to manage the complexity of that intersection, to operationalize it, to exploit it as a platform for creating carrier revenue and business productivity benefits.

The key element here is a different vision of network services, something that makes the network a direct partner of the application and not just a fabric that is presumed to be in place.  That latter vision disconnects the network from direct application-driven benefits, which goes back to that invisibility problem.  I think Cisco’s SDN strategy is aimed at creating, in onePK, a set of APIs to provide easy application control of network behavior.  They’re taking the notion “software-defined” in SDN seriously.
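
I haven’t verified the actual onePK classes or calls, so here’s a purely hypothetical sketch of what “application control of network behavior” looks like in spirit.  Every name in it is invented; none of it is the real onePK binding.

```python
# Hypothetical sketch -- these classes and calls are invented for
# illustration and are NOT the real onePK bindings.
class NetworkElement:
    def __init__(self, address: str, credentials: tuple):
        self.address, self.credentials = address, credentials

    def connect(self) -> "Session":
        return Session(self)

class Session:
    def __init__(self, element: NetworkElement):
        self.element = element
        self.requests = []

    def request_path(self, src: str, dst: str, min_bandwidth_mbps: int):
        """The application states what it needs; the network element,
        not the application, works out how to honor it."""
        self.requests.append((src, dst, min_bandwidth_mbps))
        print(f"path {src} -> {dst} reserved at {min_bandwidth_mbps} Mbps")

# An application asks the network for behavior instead of assuming the
# fabric is simply there:
session = NetworkElement("192.0.2.1", ("user", "secret")).connect()
session.request_path("10.1.1.5", "10.2.2.9", min_bandwidth_mbps=25)
```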

The next element is a necessary offshoot of the first.  If you are going to give applications better access to network services, you have to operationalize those services to meet application specifications, not fob off management to the applications.  The network has to be a lot more autonomous, and some of that can be accomplished by having service management elements running off those same APIs, some by collecting better data on network behavior, and some by introducing new controllable aspects of networking.  You can see a lot of Cisco’s recent M&A in these requirements, including its latest pick-up of self-optimizing-network player Intucell.

The remaining piece of Cisco’s plan to dominate the IT/network intersection is the implementation of SDN principles INSIDE the network and not just at the API boundary.  I’ve said from the first that there are only two ways to “define” an SDN; from the bottom with new forwarding principles and a connecting protocol (OpenFlow) or from the top as a set of network services to applications and the cloud that offer better control over network behavior.  Cisco was smart by being the first of the vendors to adopt the second option, but whether you start at the top or the bottom you’ve got to cover the waterfront eventually.

For Cisco, there’s another truth.  Whatever their SDN is, it’s still invisible, still plumbing.  While onePK is a credible link between the cloud and SDN, there’s still the question of what’s in the cloud: what experiences and productivity gains are driving the investment.  What do you see when you look upward from onePK?  Remember John Lennon’s song “Imagine”.  “Above us, only sky.”  That’s a nice image for a song, but it’s not going to cut it for the cloud, and those cloud applications that consumers want are the target Cisco still needs to hit.  Other competitors don’t have to look as aggressively to the cloud for TAM as Cisco does, and so the real question this week is not whether Cisco fleshes out a bit of the middle of its SDN strategy, but whether it starts to stake out a position in building those VISIBLE features—in the cloud.

Microsoft and Juniper: Cases of Cloudaphobia?

Microsoft and Juniper both reported their numbers yesterday, and when I looked at their stocks pre-market it happened that both were up exactly the same percentage.  Interesting, because both companies’ futures depend on the cloud, and neither company is fully exploiting that reality.

Microsoft’s Windows numbers were up for the quarter and off for the six-month period, their Server and Tools numbers were up both quarterly and six-month, as was their online business.  Office and Entertainment were both off slightly.  It’s hard to get a detailed picture of the lower-level breakdowns, so we don’t know exactly how well Microsoft is doing in the cloud (Azure) and we don’t know how RT and Phone are doing either.

I think Windows 8 was a big gamble that has yet to pay off, and I think the reason is that they didn’t gamble right.  First, they were playing the wrong game at the Market Casino, and second they were betting too little.

Microsoft sees the future as PCs-versus-tablets, and that’s wrong.  It’s locally anchored computing versus cloud computing.  Tablets are useful to the extent that we conceptualize future experiences as being hosted and network-delivered.  If we really don’t have a future in the cloud we don’t have a future for cloud appliances.  Microsoft is missing the real opponent.  You can’t win by fighting Goliath’s retainer, you have to kill the giant.

You also can’t win by wrestling the giant to death.  Microsoft had a chance to redefine computing as a completely symbiotic device-to-cloud relationship, one where power migrated to where it was needed and where resources were hosted where they worked best.  The key product in this proof point was Windows RT, because it was the most different appliance Microsoft offered.  But even Phone could benefit from this notion, and so could Azure.  The problem is that RT is the worst of the Windows launches, and there’s now a danger that Microsoft’s tablet strategy will reduce to offering a detachable keyboard.

In business/enterprise software Microsoft is still strong, but even there they should be stronger.  Again, the cloud is the key issue, and Microsoft’s cloud vision is still tentative and inconsistent.  The most significant thing Microsoft has done in the cloud is to begin to open Windows clouds up to third parties, and to accent the value of PaaS in hybrid clouds.  But where do they say that, and to whom?

Here’s the thing.  If you want to be a success in the cloud you have to offer three things.  First, a cloud service of your own, to serve as an on-ramp to a broader customer commitment.  Second, a platform that’s available to other cloud providers and that will federate with any of your platform customers or your own cloud on demand.  Third, a platform that enterprises can buy to join the federation as a hybrid cloud user.  Microsoft has the first and the last in the bag, but they’ve been soft-pedaling the middle one.  Only in the last year have they had sanctioned third-party cloud capability.  Weak bet, because it weakens Azure, and what weakens Azure weakens Microsoft.

Juniper also reported their numbers, and the result was a kind of tactical good and strategic bad.  Juniper’s execution has clearly improved and they’re getting some traction with their new products (QFabric, PTX), but Street analysts are still a bit in a quandary on the strategic side, and so am I.

The issue for Juniper, in financial terms, is that they have about a 40x P/E multiple in an industry where the average is about 10.  Put in simple terms, that means future earnings are expected to just blow the market away; at a constant share price, earnings would have to roughly quadruple merely to bring the multiple down to the industry average.  The challenge is figuring out how that’s going to happen.

Routing and security products were down for the full year versus 2011, and switching and security were down in the quarter versus last year.  Software was down y/y and q/q.  The service provider sector was a bit stronger and enterprise a bit weaker, but Juniper beat most estimates and their guidance was a bit above midpoint as well.  If all they’d done was report the numbers, it would have been hard to get much from the call at all.  You could say that Juniper needs to get security moving, because that’s the sector where they get the most enterprise traction.

They talked about security too, but (not surprisingly, given it was an earnings call) in a financials-and-execution frame.  What I think was missing from the discussion is what’s also missing from Juniper’s security positioning.  Security changes radically with the advent of what I’ll call the “cloud trio” of cloud hosting, SDN, and network functions virtualization (NFV).  I think a stronger cloud story would have helped Juniper in both spaces, and I also think it would be easy for them to tell it based on nothing more than the unification of what they already have.

Juniper also mentioned SDN, which I find interesting because SDN technology isn’t going to have a major impact on spending in 2013, though SDN positioning is critical for 2014 and beyond.  The problem, as I noted in my blog on their SDN story, is that they don’t have one yet.  I’ve looked at a couple of Juniper presentations on SDN besides the Partner Conference pitch, and I think it would be easy to cobble elements from those stories into a really nice SDN picture.  The Partner pitch, which as I’ve noted was really about NFV, could then have been folded in to create a truly powerful positioning.  That wasn’t done, and the SDN mention on the call seems out of place given that.  Is Juniper countering Cisco’s SDN stuff next week?  Are they trying to answer Street concerns about margin pressure from SDN?

This gets back to the 40x P/E multiple.  If you’re going to explode in earnings in the future, you have to have a revolutionary paradigm to put wind in your sails/sales.  Technically, Juniper is fully prepared for the cloud, for SDN, for NFV.  But when they say they’re “well positioned” I have to disagree; they’re not positioning at all.  What should the position be?  “The Cloud changes EVERYTHING!”  Juniper is so much better than it sings, and that’s a challenge when it faces Cisco, a vendor that often balances those two in the opposite direction.  Suppose Cisco actually DOES something?


Apple: Not About the iPhone

The trouble with earnings seasons (and there are four of them annually so it seems we’re always in one) is that there are a lot of data points you could talk about, many of which are significant.  They’re also often disconnected, making it hard to blog about them without writing a book every day.  So if I pick and choose differently than you might…sorry!

Apple has to top everyone’s list, including my own.  Apple’s numbers were good but its iPhone sales were lower than expected and there’s a good indication that the company is seeing a shift of sales from premium to value lines.  The fact is that nobody should be surprised by this.  The PC market has tended to shift to lower-cost units over time too, for example.  The challenge for Apple is to prove it also foresaw the change and has done something about it.  Just fielding an iPhone 5s or something isn’t the proof point needed, either.

We have smartphones, for the most part, not to wave them around to show our coolness but to use them as a portal for information access and communications.  The smartphone grew out of the value of the Internet, proved out in more fixed environments, and the recognition that value would be helpful while mobile.  So the gadget was a means to an end, and with all the hype over this or that model or feature or OS or vendor, we tend to lose sight of the fact that this is an on-ramp we’re talking about, not the Highway to the Sun.

To me, the critical point for Apple to address is the fact that just as the Internet changed mobile, mobile is changing the Internet.  We have a much more app-driven view of “online” today than we have a page-driven view.  You can see that with Windows 8 tiles and Google’s intention to retire iGoogle in favor of an app model.  Because developers create apps, and because Apple has a large installed base acquired early in the market cycle, it’s always had the most apps.  And if an app is a window on the online world, increasingly a window on the cloud, then Apple could be said to have the most “cloud”.  Except of course that it doesn’t.

What’s the difference between a military column and a mob?  Organization.  What Apple needed, and still needs, to do is to create not only the appliance side of the developer equation but the cloud side.  We know from SOA evolution that software is increasingly an orchestration of functional components based on a set of rules/policies.  The online experiences of the mobile future, the stuff I’ve been calling “point-of-activity intelligence”, are similarly orchestrations of functional components.  The symbiosis among these components is what creates the organization, the community, that makes an organized military formation more powerful than a mob of the same size.  Apple, by defining the organizational/orchestration rules that bind their vision of the cloud, could control that symbiosis and profit from it.  So could Google, or Microsoft.
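
If the orchestration point seems abstract, here’s the smallest possible sketch of the idea: a rules table that binds functional components into an “experience”.  The components and the rules are invented placeholders, not anyone’s product.

```python
# Invented placeholder components for illustration.
def locate(ctx):    ctx["place"] = "downtown"; return ctx
def recommend(ctx): ctx["pick"] = f"cafe near {ctx['place']}"; return ctx
def notify(ctx):    print("suggest:", ctx["pick"]); return ctx

# The "organization": a policy naming which components compose an
# experience, and in what order.  Owning this table is owning the symbiosis.
EXPERIENCES = {
    "lunch-helper": [locate, recommend, notify],
}

def orchestrate(name: str, ctx: dict) -> dict:
    for step in EXPERIENCES[name]:
        ctx = step(ctx)
    return ctx

orchestrate("lunch-helper", {"user": "me"})
```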

Apple TV and Google Glass are examples of the next generation of this same issue.  As long as we focus on the instantiation of the capability and not the production of the value, we miss the point.  Augmented reality is useful to the extent that you can provide the augmentation; how you do that isn’t important until the utility of the added information can be validated.  A new generation of TV that uses the same HDTV screen and shows the same material isn’t going to be much of a revolution, even if we want to believe that streaming versus linear RF is revolutionary.  The value will come in how we can personalize it, socialize it, integrate it with the rest of our lives.  And this isn’t a Glass or TV problem, it’s a cloud problem.

To me the most significant thing going on right now in the appliance space is Google Glass.  First, it creates a whole new concept of “online”.  Virtual reality used to be a choice versus “real” reality.  With Glass the two are blended into a single visual experience.  That opens a host of applications that we’ve never been able to deliver.  It doesn’t create them, though.  The real importance of Glass is that it’s impossible to realize the value of Glass without creating the value in the cloud.  The collection of real-time information, the merging of that information with knowledge and policy, will be what makes augmented reality distinctively different and more useful.  Google’s asset in Glass is, in no small part, the fact that it forces Google to look at the mechanics of point-of-activity intelligence.  Apple COULD look, but nothing is forcing it to, and for Apple-watchers, an indication that they’re looking at the experience now and not just the appliance is the critical signpost for success.


Signposts on IBM’s and Google’s Paths

Tech got some semi-good news in two earnings reports yesterday, from Google and IBM.  I insert the “semi-“ because the quarter measures the past, which is only an indirect indicator of the future.  The most significant insight from the reports is that the politically driven economic slump we had in the holiday period last year didn’t stall businesses and didn’t completely kill consumer interest either.  It’s not a signal that the stupid “cliff” impasse didn’t matter, but at least it suggests it wasn’t fatal—yet.

Google reported slightly better-than-expected earnings, but the big news was that the critical CPC (cost-per-click) ad metric lost only 6% rather than the 15% it had lost the prior quarter.  Sharp declines in CPC indicate advertisers aren’t willing to pay as much for ads, which means they’re less confident about a return.  Whatever the reason, CPC declines for Google are bad for everything online that depends on ad revenue.  The Google revenue line was light, though, demonstrating that advertising is impacted by consumer confidence, which is still fragile.

Another interesting piece of news is that Motorola’s loss for the quarter lightened by about $200 million, but some of the improvement was the exclusion of the Home group that Google is selling off.  The take-away here is that a company like Google can’t just jump into areas like phones and hit a home run.  In fact, manufacturing isn’t the kind of high-margin business Google likes.  It seems to think it needs to be an appliance player in the long run to keep Android competitive with Apple.

Getting rid of Home may say something too.  Here’s an interesting progression.  First step: you can’t make money on broadband home access without video.  Second step: you can’t do home video without a set-top box.  Third step: Google is getting rid of its STB.  So…fourth step: Google isn’t serious about large-scale gigabit broadband?  That seems a pretty inescapable conclusion to me.

IBM’s numbers were good, no question about it.  Earnings and revenues were up.  Free cash flow was up, and so were gross and net margins.  On the hardware side, the mainframe revenues were up 56% and POWER systems off by 19%.  Software was up 4% with branded middleware up 5% and WebSphere up 11%.  Tivoli Security was up 16% and Rational up 12%.  Cloud revenue was up 80%.  Obviously IBM continues to perform, but there are some less obvious but interesting insights here.

First, IBM’s hardware growth is focusing on the big data center area, the “mainframes” that people have been declaring dinosaurs for decades.  What IBM is doing is integrating mainframes with adjunct elements to create vertically integrated computing.  They’re also fitting cloud into data center instead of trying to displace the latter with the former.  The point is that if we believed the cloud story we hear in the market today, IBM’s numbers would make no sense.  Thus, we shouldn’t believe it.

Second, IBM is reflecting a polarization in business computing that may be one factor in our distorted view of the market.  IBM has very large customers who spend a lot of money and who justify dedicated account teams.  It doesn’t have to be a market evangelist the way firms like HP, which must sell into the broad market, do.  The lack of marketing impetus creates a limit on how big and broad IBM can get, but it also helps it sustain stable margins and growth.

Third, even in uncertain financial times IBM is demonstrating that you can make money on tech.  Why?  Because IBM more than anyone else sells into BUSINESS BENEFITS and not technology features.  They understand that if you want a buyer to spend more, you have to show them a bigger and more compelling benefit case.

We might want to consider this perspective in network equipment, a space IBM supports with OEM deals and not its own products.  Look at IBM’s positioning of hardware and software and you find, as I’ve said, alignment with buyer productivity and profit.  Look at network positioning now.  We tell carriers to buy more gear to carry traffic whose revenue per bit is declining by 50% per year.  We tell enterprises to buy more gear to improve connectivity or to support business video collaboration, which are technical goals imperfectly tied to buyer benefits.

I want to come back to a quote from Google’s Page on their earnings call:  “People carry a super computer in their pocket all the time. In fact, we feel naked without our smartphone. And many users have more than one device: a laptop, a phone, and a tablet. We are living in uncharted territory. It’s a new kind of computing environment.”  IBM’s mobility efforts have leveraged this reality effectively for the enterprise, and Google’s success depends on its leveraging that same principle for the consumer.  They’ve done that with Maps, which demonstrates that they can make a cloud concept work even for iOS users.  They need to do that beyond Maps.  And IBM needs to do its magic, eventually, beyond its big-account base.  Page is right, this is a new kind of computing environment, and no matter how you tack goodies onto the past, you’ll never reach the future without striking out on your own.

This Week’s Data Points Point to Metro

The future is written in the data points of the present, so let’s start today by looking at some of those data points, then reading some tea leaves.

NSN is said to be looking to issue nearly a billion dollars in bonds in a move that may well be a precursor to the joint venture becoming an independent company.  There aren’t any rumors of specific new projects or programs to be funded, which would suggest the funds would be used to pay down current debt.  Still, it’s not impossible that some new stuff, even M&A, might be included in the billion.

This isn’t too far from the Alcatel-Lucent capital-improvement program that was undertaken recently using private equity investment.  Both Alcatel-Lucent and NSN are traditional companies with traditional issues of a large worker base, a large number of low-margin products, and a large number of hungry competitors with lower cost points.  NSN has so far been focusing its response on narrowing its product family, which has the collateral effect of reducing headcount.

Ericsson, competitor to both Alcatel-Lucent and NSN, just had an S&P downgrade in outlook, though its BBB+ bond rating was affirmed.  It’s been looking to somehow exit its ST-Ericsson semi deal and also sold off a patent portfolio to a third-party patent firm (many would say “troll”).  Expectations are that Huawei will pass Ericsson to take the largest network vendor slot in the current quarter.

Now it’s interesting that this is all happening as Telefonica Digital announces a new “Smart M2M” platform that it developed for the machine telemetry space.  What’s interesting is that this is the sort of application that network vendors would have been expected to field and sell.  Not only that, you’d have expected network vendors to have established an architecture to deploy these new applications on.  You can be sure that Telefonica Digital is doing that (yes, a bit retrospectively but still doing it).

The FCC wants every metro area to have at least one gigabit Internet provider.  Presumably this is going to stimulate the other operators to match that provider’s speed and launch a new age of Internet and OTT (a prospect of great interest to the FCC Chairman, a former VC).  Never mind that the great majority of users don’t want to pay for even the current highest-tier services, or that objective speed reports from video providers show that the difference between gigabit and 20-meg services in video download terms is about 10%.

Verizon issued its earnings report, and its profits were hit because of subsidies on smartphones.  At the same time it saw a 6.6% increase in data revenues for mobile services.  Video and other content that would otherwise be raising costs are contributing to profits in mobile because of usage pricing.  Bit-pushing is OK as long as you’re not giving everyone a free ride.

Got the tea?  Now let’s start reading.

First, whether we like to admit it or not, we’re entering the end-game of unlimited usage.  When data usage growth is the driver of mobile ARPU growth and nothing else is performing, it doesn’t take dazzling deductive logic to see that we’re going to end up tightening up the limits on usage even for wireline.  Operators are not going to invest in creating losses.

Second, nobody believes that usage pricing will be popular, so operators will try to alleviate the pressure by cutting their costs radically and by offering services beyond Internet bit-pushing. Their challenge in the former is to break the traditional paradigms of networking, paradigms the operators believe have been manipulated by network vendors to assure rising spending on the vendors’ gear. Their challenge on the service side is twofold; they have to identify what might work and they have to be able to create it at a low enough cost to make the service marketable and profitable.

Third, the trio of the cloud, SDN, and NFV are important to both the cost side and the service side, and operators are happy to ride either side to at least temporary victory in an overall profit sense.  Yes, cloud computing can make money but no, IaaS isn’t a good long-term high-margin opportunity so you have to look beyond it.  But beyond it lies the problem of creating “experiences-as-a-service” based on a cloud-like architecture that bonds computing and networking in new ways.  This is about benefits, guys—the “why do I do it?” has to precede the “how do I do it?”

Nothing meaningful is going to happen in networking without redefining the IT/network relationship, which will have the effect of redefining networking AND IT.  To redefine that relationship means focusing on the area of the network where the benefits of change are most easily realized.  You all know I believe that’s the metro network.  Content delivery is a metro CDN application.  Mobile point-of-activity intelligence for consumer or enterprise is a metro cloud application.  Cloud computing itself, in any credible large-scale form, is a metro application.  Where we should be focusing today is on how to embrace the cloud, SDN principles, and NFV in the metro network.  Only that is going to give the network vendors like Alcatel-Lucent or NSN something to spend their cash on with a fair return.  Only that is going to give operators a good shot at high return on infrastructure.  Only that is going to change the benefit case that either drives or limits our industry.


Is There a Revolution in our Revolutions?

The notion of “revolution” is always exciting, sometimes useful, occasionally destructive.  The notion of two or three of them at once tips the scales into the latter category in my view.  We have been looking at “the cloud revolution” for several years, we’ve just started “the SDN revolution” and now we’re facing “the NFV revolution”.  I propose that the latter two are driven by the first, the cloud, and that these two are destined to become one.  So while we need to understand each of our “revolutions” the most important thing is understanding how they’re joining up.  Because, friends, the future network is ONE NETWORK and not three.

The cloud defines a new relationship between applications and resources, and thus defines a new mission for networks.  All revolutionary changes demand a massive influx of benefits to drive them, and the cloud is the embodiment of these new benefits.  No, it’s not the crap about IaaS displacing internal IT, but the way the cloud defines the fulfillment of point-of-activity intelligence—a market that holds over two trillion dollars in incremental service revenues.

An immediate result of cloud consideration is SDN.  I’ve blogged before that we have three “accepted” models of SDN.  One is the OpenFlow “purist” model that actually replaces adaptive discovery to build forwarding tables with explicit central control.  One is the “virtual network model” that abstracts the connection network above Level 3 and thus disconnects it from real devices (vSwitch).  One is the “distributed” model that controls the network not with OpenFlow but with protocols/standards that evolve the current Ethernet and IP.  If you attempt to extract a sense of mission from this combination, you find the only really common link is “the cloud”.  All of the models of SDN presume a tighter linkage between network and application processes to facilitate a new union of IT and networking.  There’s a cloud above everyone’s view of SDN.
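
To ground the “purist” model, here’s a minimal controller sketch using the open-source Ryu framework—my choice for illustration; nothing above names a framework.  The switch builds no forwarding state on its own: on connection, the controller explicitly installs the one rule in use.  A real deployment would compute per-flow entries centrally rather than this catch-all flood.

```python
# Run with: ryu-manager thisfile.py  (requires the Ryu package)
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3

class CentralControl(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def on_switch_connect(self, ev):
        dp = ev.msg.datapath
        ofp, parser = dp.ofproto, dp.ofproto_parser
        # Explicit central control: match everything, flood it.  No
        # adaptive discovery ever runs on the switch itself.
        match = parser.OFPMatch()
        actions = [parser.OFPActionOutput(ofp.OFPP_FLOOD)]
        inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS,
                                             actions)]
        dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=0,
                                      match=match, instructions=inst))
```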

The emerging notion of Network Functions Virtualization (NFV) is our most recent “revolution”.  The basic notion of NFV is simple: separate the logic associated with a higher-level service from the data-plane connection that delivers it.  Network functions are virtualized, meaning made platform-independent, and then hosted in a resource pool.  That pool would, to most of us, be “a cloud”.  The purpose of NFV is to decouple network features from monolithic devices and in doing so make the network more responsive to changes in mission, which would of course represent changes in market needs.  So there’s a cloud above NFV too.
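
The NFV premise in miniature: a “network function” written as ordinary, platform-independent software, so it can be dropped onto any host in the pool rather than burned into an appliance.  Here’s a toy admission filter, with the address list standing in for real service logic:

```python
import socketserver

ALLOWED_PREFIXES = ("10.1.", "192.168.")  # stand-in for a real policy table

class ToyAdmissionFilter(socketserver.BaseRequestHandler):
    """A trivial 'virtual function': admit or reject connections by
    source address.  Nothing here depends on special hardware, which
    is the NFV point -- host it on any server in the pool."""
    def handle(self):
        src = self.client_address[0]
        ok = src.startswith(ALLOWED_PREFIXES)
        self.request.sendall(b"ACCEPT\n" if ok else b"DROP\n")

if __name__ == "__main__":
    # The data-plane connection stays in the network; only the service
    # logic has moved onto a host.
    socketserver.TCPServer(("0.0.0.0", 9000),
                           ToyAdmissionFilter).serve_forever()
```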

If SDN and NFV are both cloud-driven, are they convergent?  If we presume that the “software definition” that’s driving the “network” in SDN is a set of application functions that are today at least implicitly supported in the network devices, then NFV is a superset of SDN.  In my view, SDN is a subset of NFV that’s focused on the specific “service” of topology management and route selection.  That there is an overlap at least is acknowledged by the NFV white paper.  So let’s look out past the inevitable (to me) convergence of SDN and NFV and focus on the clearly more complicated goal of NFV.  It postulates the offloading of “service logic” from network appliances to be hosted on servers.

To me, this goal raises two questions.  First, what will define the architecture of the platform-as-a-service that hosts this functionality?  You can’t write software in a vacuum; we can’t use a different set of principles to unload each possible network feature from devices and host it in the cloud.  Second, what’s left in the network when you remove the “virtual functionality”?  You can never dispense with the network as a collection of devices, so in the NFV world of the future, what exactly are those devices?  If the goal of NFV is to simplify network devices and make them more flexible and responsive, you’d have to assume that the network device is changed by the removal of the virtual functions.

This whole SDN/NFV thing is critical for “networking” because whatever you believe about their relationship the two show we are transitioning from a world of “IT and network” to a world of “IT/network” that we’ve called “the cloud”.  Where the IT/network balance will fall, both in terms of technology contribution and in terms of future capex, will determine the fate of network vendors, network operators, and of course network professionals.

Out in the great beyond, where will cloud networking, SDN, and NFV combine?  In a new service layer that’s cloud-like and a new device that’s not switch or router but something more?  I’ll be rolling out the general requirements of that device in this blog too, and I’ll be interested in what you think.


Contradiction: The Street’s View of Networking in a Nutshell

It’s sometimes nice to end a week with some analysis of the Street’s view of networking.  To hear that there are sometimes contradictions is likely not going to surprise you, and I think contradictions often expose some interesting truths.  They also give me a chance to lay out a financial view of industry health.

JP Morgan issued a research note that said, despite many notes and stories to the contrary, that carrier capex is actually poised for an increase.  The reason they give is that the ratio between capex and sales has actually been increasing, and that means that as sales now grow capex might explode (I paraphrase, of course).  In the same note, they say that SDN is going to have an impact on enterprise networks between now and 2015, shrinking the data center switching market by as much as a third.  In summary, their view is that there are structural (like “server refresh cycle”) drivers to grow carrier capex, and that SDN will hurt the only robust area of enterprise network spending.  Got it?

At almost the same time, we heard a rumor that AT&T was looking to buy a European wireless carrier.  Hungry for new customers in a saturated market, faced with flat-to-declining ARPU in the US, AT&T would flee to Europe.  The presumption behind this is that a company looking, as all network operators do, to long-term growth, would see their best option as buying a cheap carrier in Europe, not buying infrastructure to convert into service revenue.

Well, the contradiction here is a good place to start.  According to JPM, here’s AT&T, with new customers drying up and ARPU in the mobile space at a plateau, preparing to decline.  So what do they do?  Increase capex!  Yes, they sort of said that they were doing some of that in certain areas, and so have other operators.  In a competitive market you sometimes have to spend a bit more than you’d like.  But nearly everyone who’s looked at the numbers says that operators are simply not going to spend a lot on equipment, particularly in wireline.  It’s hard to square an operator deciding to run to Europe for growth with the notion that things are rosy enough to justify network investment here.  Net?  I think that the capex-to-sales argument is lame. It’s not going to create a rising tide to lift all network boats.

It’s surely not lifting all boats in the enterprise, because according to JPM the SDN juggernaut is going to hit the data center and put pressure on Cisco, which has more enterprise exposure than the other network vendors.  This, by 2015, is supposed to cut data center switching revenue by a third!

Well, I don’t think so.  My surveys say that enterprises believe the data center is the HOTTEST space for the next three years, the place most likely to get more spending rather than less.  Cloud deployment is a driver for both enterprise data centers and provider data centers, and of course the gear is the same for both.  The market is so good it’s a shame SDN is going to eat it, right?  JPM expects, their report says, that vSwitch (Nicira) will likely win.  Hey, here’s some news: vSwitch is an overlay technology that doesn’t displace a dollar’s worth of gear!  You have to use the normal network gear just like before.  And vSwitch is mostly a multi-tenant solution, which isn’t a problem most enterprises have.  I’m scratching my head here.

What, then, is real?  Let’s look first at business networking.  Enterprises are going to be doing a number of things, from SOA and virtualization of old to private cloud of today, that will drive up data center spending.  Big data and analytics also drive it up.  My model says that that trend will continue through at least 2017.  Yes, we will see SDN features and factors play a part in buying decisions starting as early as this year, but the important point is that SDN in the data center is not driven by cost reduction goals but by performance and availability benefits, and the need to support IT changes (SOA, cloud) already funded.  Fabric switches, the big trend in the data center, aren’t necessarily any cheaper than stacks of LAN switches and certainly not 30% cheaper.

For the network operator, it’s even simpler.  Every single profitable thing an operator does with a customer has a traffic “range” of less than 40 miles.  Content is a metro application.  Mobile/behavioral or point-of-activity intelligence is wireless, and wireless is metro.  Cloud is metro.  There will be an enormous transformation of network spending to focus on metro.  That is going to help fiber players and device vendors to the extent that they understand that the future metro network will be architected to make the whole of metro look like a cloud data center.  The revenue available to fund this is huge, so capex isn’t the problem.  The problem is that vendors still can’t tell operators a sensible story that links the business value of operator infrastructure to spending decisions.  No credible benefit equals no secure spending.  So net?  Operators are likely to keep their foot on the capex brakes for at least a couple more years because, based on past history, vendors won’t present them with any new options to justify new network spending.

Overall, though, spending on network equipment is going to decline over time as it refocuses on areas where good ROI can be obtained.  As I said, there are no large-scale, good, secure spaces there, which matters a lot to giants like Cisco who can’t hope to gain market share.  That’s why Cisco wants to be an IT giant; they need more revenue opportunity than the network can provide.  Smaller players can gain market share, and startups can perhaps create genuinely explosive value propositions, but these will have to be framed in the metro model I described for operators, and in the “enterprise SDN” model I’ve blogged about.  A few are nibbling at these positions, but nobody has them nailed down.  Go back to the error of JPM: there is no structural capex-to-sales revolution to save the industry.  It has to save itself, which is very possible but not likely to be accomplished by putting the ship on automatic pilot and going to sleep.