The CloudNFV Proof-of-Concept Was Approved by the ETSI ISG!

I’m happy to report that the CloudNFV project’s Proof-of-Concept application to the ETSI NFV ISG has been approved.  We were the first group to make formal application and the first to be approved (as of today, we’re the only ones in either category, but I expect many more through the first quarter).  It’s a validation of CloudNFV, of course, and of the work of our team.  It’s also a validation of the ETSI NFV work that launched our CloudNFV initiative back in the late fall of 2012.  Finally, it’s a validation of the work done by some service-modeling pioneers at the TMF, whose GB942 principles are the foundation not only of CloudNFV but of what I think is the only solution to managing virtualization—anywhere.

The ISG was born out of openness, the desire to open up services by replacing proprietary elements with hosted software, the desire to create the future by selecting among the best standards of the present, and most of all the desire to let the buyers of infrastructure define the playing field so the sellers (individually or collectively) could never dominate it again.  The ISG defined the issues, the use cases, the functional framework, for the future of networking.

CloudNFV from the first sought to build on the ISG work, to validate it within the broadest possible framework of service creation and operations, and to incorporate the other critical revolutions of our time—the cloud and SDN.  All this while not only sustaining but extending the open principles of the ISG.  We are not, nor have we ever intended to be, a “replacement” or “alternative” to the ISG’s approach.  We’re an implementation, an extension of their principles into adjacent domains where the ISG can’t realistically go because the scope is too large to hope to progress it in time.  Our extensions are based on the work of both the TMF (GB942 and GB922) and the IETF (i2aex).

We are a software architecture that has now become a software implementation.  We are running now; we have demonstrated that to many of the founding operators in the ISG.  Our goal is to develop a software framework that best meets the goals of operators, goals that include but are not limited to those of the ISG.  We are an open software architecture too; we’ve demonstrated that by launching a website, an integration program, and a wealth of tutorial material.  We’ve also demonstrated it by the fact that our PoC includes not only the original six companies (my own CIMI Corp, 6WIND, Dell, EnterpriseWeb, Overture, and Qosmos) but two integration partners (Metaswitch and Mellanox) who have joined us through our open approach, and a TMF/OSS integration partner in Huawei, who has supported our operations evolution even without contributing technology to our platform.

The hard work of dozens of people in the ISG and the TMF is also a foundation for what we’ve done, even if their names aren’t on the PoC submission.  Two operators, Telefonica and Sprint, have agreed to sponsor our PoC, and the representatives of these operators are also key figures in the critical SWA and MANO groups within the ISG.  We don’t promise that we have conformed or will conform completely to what any of these bodies develop, but we promise to present our approach, justify any differences, and work with all the bodies involved in good faith to come to the answer that’s best for the industry.

Our PoC is now posted on our website (see the link above) in keeping with our open approach.  It defines 16 scenarios to support its eight broad goals, so it’s no trivial NFV-washing.  These together define a framework in which we’ll prove out the initiatives of the NFV ISG in an implementation that fully integrates with service creation, deployment, and management.  We’ll be starting PoC execution in mid-January, reporting on our results beginning in February, and contributing four major documents to the ISG’s process through the first half of 2014.  We’re already planning additional PoCs, some focusing on specific areas and developed by our members, and some advancing the boundaries of NFV into the public and private cloud and into the world of pan-provider services and global telecommunications.

These advances will require insight and effort on the part of every one of the companies who have contributed to taking us to this point.  They will also require further involvement from other companies, involvement we will secure through our commitment to an open approach.  We’ve defined an open and adaptable service orchestration process that works not only with NFV elements but also with legacy devices and services.  We can manage what we deploy, whether it’s built from software or from switches and routers.  We can even mix cloud applications like CRM or Unified Communications with network features to build services that are both of the cloud and of the network.  We want help with advancing and proving out all of these things, and our call for participation is still open.  It’s working, too; we have two integration partners involved in this PoC and six others in some stage of development.  Even more would be great.

In the first quarter we’ll release open specifications for Service Model Handlers that can link any management system that can deploy software or connect network elements with our high-level deployment orchestration and management model.  This will create an open, agile, network and cloud connection and bind SDN and NFV with open interfaces.  It also supports the hardware of any vendor.  We have also proposed a TMF Catalyst to extend our integration potential upward into operations processes in a fully open way.  We hope that interested parties in both these spaces will engage with us to test our theories in future public processes like this PoC and the TMF Catalysts.
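As an illustration of what a Service Model Handler contract could look like once the specifications are released, here is a minimal Python sketch.  The class and method names are my own invention for the example, not part of any published CloudNFV document.

```python
from abc import ABC, abstractmethod

class ServiceModelHandler(ABC):
    """Hypothetical plug-in contract: adapts one management system
    (an SDN controller, a cloud stack, an EMS for legacy gear) to the
    high-level deployment orchestration and management model."""

    @abstractmethod
    def can_handle(self, element_type: str) -> bool:
        """Report whether this handler can deploy the given model element type."""

    @abstractmethod
    def deploy(self, element: dict) -> str:
        """Deploy the element via the underlying system; return a deployment id."""

    @abstractmethod
    def status(self, deployment_id: str) -> str:
        """Return the management state of a prior deployment."""

class LegacyCliHandler(ServiceModelHandler):
    """Toy handler standing in for a legacy-device adapter."""
    def can_handle(self, element_type: str) -> bool:
        return element_type == "legacy-router"
    def deploy(self, element: dict) -> str:
        return f"cli-{element['name']}"
    def status(self, deployment_id: str) -> str:
        return "active"

def dispatch(handlers, element):
    """Orchestration picks the first handler that claims the element type."""
    for h in handlers:
        if h.can_handle(element["type"]):
            return h.deploy(element)
    raise LookupError(f"no handler for {element['type']}")
```

The point of the pattern is that orchestration stays generic: supporting a new controller, cloud stack, or any vendor’s hardware means writing one more handler, not changing the deployment model.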

The foundation of NFV is the virtual function logic available for deployment, and there’s a real risk that this space will be as proprietary as network appliances.  We think it should be fully open to developers of all types.  To help that along, we’ll release a specification for an open Virtual Function hosting framework, something that can bring every piece of network code that can run in the cloud today and any application developed for a cloud platform, into the NFV world.  That includes specifically open-source software.  There’s no proprietary lock-in for network functionality in our architecture, which is how we think it should be.

Our commitment to open virtual functions is demonstrated by the fact that our PoC and Catalyst processes are based in part on Metaswitch’s Project Clearwater IMS project, a project that’s open-source.  We have onboarded it to NFV with no modifications required for Project Clearwater software, but we can also demonstrate how to build APIs that virtual network functions can exercise to improve operations and management, using familiar web-service interfaces that work for every software platform and programming language in use today.
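To picture the kind of web-service management interface a virtual network function might expose, here is a toy sketch.  The paths and fields are invented for the example; they are not Project Clearwater’s actual interfaces.

```python
class VnfManagementEndpoint:
    """Toy stand-in for a VNF's management web service: a dispatcher
    mapping REST-style paths to JSON-shaped responses, as any HTTP
    framework in any language could implement."""

    def __init__(self, name):
        self.name = name
        self.instances = 1  # current horizontal-scale count

    def handle(self, method: str, path: str, body=None):
        # GET /status reports health for management integration.
        if method == "GET" and path == "/status":
            return {"vnf": self.name, "state": "running",
                    "instances": self.instances}
        # POST /scale lets operations adjust capacity.
        if method == "POST" and path == "/scale":
            self.instances = int(body["instances"])
            return {"vnf": self.name, "instances": self.instances}
        return {"error": "not found"}
```

Because the contract is just HTTP and JSON, it works for every software platform and programming language in use today, which is the point of using web-service interfaces rather than anything platform-specific.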

There are people who don’t think that something like this can be done in an open way; that it’s too large an undertaking to be advanced except by some giant vendor with a proprietary goal that will deliver a big payday.  Hopefully the fact that an open project has received the first formal approval as an ETSI PoC says something about the power of open architectures.  We’re going to continue to prove the skeptics wrong, and we ask like-minded people to contact us and be a part of that proof.  This PoC approval is a critical step for CloudNFV, but it’s only the beginning.

The World According to Cisco’s Analyst Event

I know I’ve blogged a lot this week about industry conditions, but there’ve been a lot of developments in that space and we’re all in this together, industry-wise, in any case.  Cisco’s analyst event offers more fodder for the discussion and also I think some clarity on the choices that face us.

At one level, Chambers repeated a theme he’s used so often that it could be set to music.  He talks of market transitions and macro conditions and all the other good stuff, and I still think that dodges the real conditions.  Look at the financial reports of Cisco’s customers.  Industries are reporting record profits in many cases, and few are saying they’re starved for cash or credit.  Yes, we’re in a period of economic uncertainty, but when in the last decade has that not been true to a degree?  The thesis that Cisco (and, to be fair, most of Cisco’s competitors) seems to strike is “If things were just a little bit better, buyers who can’t justify spending more on networking because it’s not building a business value proposition would surrender and spend anyway.”  I don’t think that’s realistic.

The Street has its own quirky views here, though.  They want to claim that Cisco faces problems created by SDN.  My surveys say that’s nonsense in a direct sense; there’s no indication that SDN adoption this year or next has any impact on equipment sales.  What is true is that SDN has become a proxy for the real problem, which is that nobody will invest in something if it presents a dwindling return.  If you ask, “Will SDN hurt Cisco?” the answer is “No!”  If you ask “Will the buyer fears of dwindling network ROI both spur interest in SDN and in simply holding back on spending?” the answer is “Yes!”  SDN isn’t the cause of Cisco’s malady; it’s just a symptom.

Chambers did address technology, even technology change.  He talks about how Cisco is facilitating the cloud, partnering with others to help build integrated solutions.  But if you have a market that’s starved for ROI, you have to boost the “R” if you want the “I” to improve.  Cisco, like most other vendors in the network and IT spaces, is offering guidance on how to do something with tech when buyers want to know how to do something different, something better.

The challenge Cisco has in the “betterness” dimension is twofold.  First, their new market areas like servers may offer additional revenue opportunity, but they also offer (as UBS Research pointed out recently with respect to Cisco’s UCS) lower margins.  The Street will beat Cisco up just as fast because margins are dropping as they will if sales are challenged.  Second, even if Cisco knows what the buyer really wants and needs (which I think they do, and so do all their competitors, at least deep in their psyche) that doesn’t mean that it’s easy to offer it.  Cisco is like all vendors forced to kiss Wall Street ass every quarter, and any sign of weakness tumbles the stock.  That’s why I’ve been saying that we have a financial market that rewards tactical behavior.  Well, anything to improve the fundamental buyer economics of network deployment is hardly likely to be “tactical”.

The most substantive thing Chambers said, the most positive thing, was that Cisco would be making a lot of acquisitions in the software space, not for “material” (meaning P&L) reasons.  That implies they’re making them for strategic reasons.  That’s critical because if we’re building the network of the future from SDN principles or NFV principles or cloud principles, how the heck did we overlook software?  Cisco, to be a real IT company or even a real network company with a future, needs to be a software company, and so do their competitors.

And the software isn’t a software replacement for switching and routing.  OVS won’t change the game much, and “defining” the network is really about defining the network’s missions and services.  Cisco is right in believing that it’s more important to get current network technology to slave itself to a software mission than to support legacy network missions with software instead of hardware.  But they’re wrong in thinking that building onePK to allow for software control is enough.  Without those higher-level service functions, without novelty and innovation in what networks can do for us, application control of the onePK sort is a bridge to nowhere.

That’s what we should be looking for in pronouncements by Alcatel-Lucent or Juniper too.  I pointed out that Juniper, under its current CEO, was supposed to be turning itself toward software.  It didn’t happen.  Now that CEO and his senior software protégé are out and a new team is coming in.  This could drive Juniper down the right road.  Alcatel-Lucent is also promising a new focus, but their CEO isn’t promising that focus will be on software either.  And unless you have a software vision in networking these days, you’re decoupled from all the trends you say you’re supporting.  Live by the box, die by the box.

Citi’s View of Networking: Is it Right?

Citi has just initiated research coverage of the networking space, and I think it’s useful to look at what they say and compare it with what my own surveys and modeling suggest.  Wall Street, after all, makes money for people by guessing right on stock trends.  The big “however” here is that sometimes it makes money when the company isn’t making any; stocks aren’t a perfect indicator of fundamentals.  Let’s see what Citi says, then, and make some assessments on fundamentals.

With coverage initiated on 16 network companies, Citi puts eight in the “neutral” category (Ciena, Corning, Ericsson, Garmin, Infoblox, Juniper, MSI and Polycom), assigns three “Sell” ratings (Blackberry, Brocade, and Cisco), and five “buy” ratings (Alcatel-Lucent, F5 Networks, Nokia, Qualcomm and Riverbed).  Let’s look at what seems to be the overall thesis, then at some of the ratings that might seem a bit of a surprise.

According to Citi (we interpret) the macro conditions in the networking industry for the next couple of years will be unfavorable.  Both enterprises and service providers are likely to spend to save and not spend to advance, which will keep revenue growth limited.  Opportunities that develop will focus on narrow segments of the market, so people with a narrow footprint will tend to either hit (and be rated well) or miss (and be rated poorly) those good places.

Radical technology changes are not going to contribute as much opportunity as buyer-side initiatives that are most likely to perpetuate the status quo.  If Citi believed that SDN or NFV would present a major opportunity, they’d cast somebody likely to capitalize on these technologies as winners, which they do not.

How does this thesis compare with my own predictions?  Generally, pretty well.  I think that buyers are interested in a new model for networking but they’re not seeing a complete solution to their problems or a complete addressing of their own opportunities.  They will kick tires in 2014 and frankly be happy to do more, but they don’t think vendors will present them with the product mix they need for a radical change.  They’ll buy influenced by things like SDN and NFV (they’ll demand a “strategy” or “architecture” and an evolution path) but they won’t buy a lot of either.

Among the buy/sell calls, the ones I think reinforce my view are the “Neutral” rating they give Juniper and the “Sell” they give Brocade.  Brocade has actually gained traction on SDN and NFV in 2013, as I’ve noted in the past.  If you assume that traction could be linked to sales on a broad scale, there’d be no reason to put Brocade in the “Sell” column.  With Juniper, the problem is that while the analysts think it has an opportunity, the new CEO is, they say, a wild card.  Which says that the current Juniper strategy was a loser and they don’t know whether the new guy will do any better.  That implies that SDN and NFV aren’t market trends that will shoot up like the old hockey stick, but rather are things that will require a lot of vendor prep.

How about the most “controversial” calls; Cisco as a “Sell” and Alcatel-Lucent as a “Buy”?  If you polled people in the industry (and maybe even on the Street) you’d likely get a lot of pressure to switch those labels.  But is Citi right?

Cisco’s challenge is that they’ve built up their sales over time in no small part by increasing total addressable market (TAM).  Offer more and you can sell more.  However, this is clearly a diminishing-returns thing because you necessarily go further afield from both your comfort zone/skill set and the magnitude of the opportunity as you keep picking those low apples.  I think Citi believes that 1) Cisco can hardly gain much market share in its core markets and 2) it’s running out of attractive alternative places to be.  I’m not sure they have this one right.

My surveys, as I noted yesterday, show that Cisco has by far the best influence trajectory in the cloud space, and has ridden those gains to match IBM in overall IT influence—a significant accomplishment.  IBM built its success—the best long-term success in all of tech—out of account control and buyer influence so you have to give it a chance to work for Cisco in my view.  I also think that Cisco has an appropriate strategy for SDN and NFV for the dominant player—make it into an evolution of strategy because the buyer is unlikely to be presented enough benefits for revolutionary change by any competitor.  This is why Juniper is the potential Cisco-killer in the mix; it’s pretty likely they’re the only player who could create a revolutionary paradigm and make it stick (the management changes Citi cites do create uncertainty, but the old management would IMHO have certainly failed here).  Net?  I’d have given Cisco a Neutral.

Then there’s Alcatel-Lucent, who has a toe in so many ponds that they’re starting to look like moss.  I think that broad exposure should have scared Citi under their thesis of specialized opportunity, but instead they like them.  Why?  It comes down to the fact that the new CEO has “brought focus” to key areas.  In short, they seem to be saying that we like Alcatel-Lucent because they’re going to toss off all the sectors of networking that clearly aren’t going to be sterling opportunities and focus on the only ones that likely have a shot.  I’m on the fence here too.

On the positive side, I do think that focusing on areas like wireless/LTE is important.  However, Citi doesn’t capture what I think my surveys show is the key point.  You can’t win in networking unless you can stretch out to win in the data center.  You can’t win in the data center if you don’t win in the cloud.  While Alcatel-Lucent has a cloud story (CloudBand) that I like, it’s not positioned as effectively as it would have to be in order for it to create the kind of influence trajectory Cisco has.  You’d have to give Alcatel-Lucent a “Buy” on purely technical grounds—they suffered more in stock price than they deserved.  The problem is that Alcatel-Lucent is way up YTD and Cisco is virtually unchanged.  If you believe that Alcatel-Lucent could be the SDN and NFV disruptor that Juniper could be (possible, not easy, for them as much as for Juniper) then you can justify a speculative “Buy”.  On valuation I’d give them a “Sell”, so that nets to neutral for me.

I think the important thing about the Citi coverage is that it’s showing an industry starved for innovation and unlikely to generate enough from any player to move the ball.  Given all the opportunities to innovate that I believe are out there, that’s sad.

Vendor Rankings on Buyer Influence, and What They Can Do About It

We’ve surveyed buyers since 1982 for enterprises and 1991 for network operators, both to find out what they’re planning to do and to find out what they think of vendors.  I’ve been sharing some of the findings on plans and attitudes, and this seems a good time to share some on vendor influence.  You’ll see why as we go along.

Who’s the top player in tech, influence-wise?  That question would have been easy to answer at any point in the last 30 years—IBM.  It’s not so easy today, because another player has gained ground and IBM has lost it.  In the fall, Cisco and IBM were in a dead heat for overall influence on tech buyers.  Cisco’s aspirations to be the number one IT company are coming to realization, if influence is a measure of future ability to drive purchases.

But just as interesting is the fact that Cisco is gaining by losing.  Or at least losing less.  The fact is that since 2010 vendors have lost influence with their customers.  Cisco has lost less than rival IBM, less than HP or Juniper.  They’re losing a race to the bottom, which puts them nearer the top.

What buyers say is causing the decline in vendor influence is a factor I’ve already blogged on.  They believe that vendors are simply pushing boxes at them without regard for whether the buyer can make a business case.  In the service provider space especially, vendors are seen as not supporting buyer business transitions.  Cisco gained ground on rivals not so much because they did better at this transition support but because they introduced UCS.  If you look at past trends, Cisco would be in trouble in terms of account control if they hadn’t added servers to their portfolio.  The data now suggests that Cisco’s next big push had better be in software; buyers think Cisco will sink by 2016 without a stronger software strategy.

If you look at the losers in influence, you see some common threads.  One is that scope helps.  Companies with broader product lines generally exercised more influence than those with narrow product lines.  That’s not surprising, I think; if you can talk to a buyer about everything they need, you’ll talk to them more often and have more shots at gaining traction.  Another factor is marketing/positioning.  Vendors like Cisco who are seen as marketing machines tend to do better than vendors like Juniper who are seen as being inept in positioning their offerings.

IBM may be the biggest poster child for the value of marketing.  Buyers in both the enterprise and provider spaces say that IBM’s website and public positioning are muddy, confused, and uninspiring.  Big Blue does well face to face, but the problem is that there are only so many major accounts that can justify a full-court press sales-wise.  As IBM has come to depend more on SMBs for revenue and profit and on channel sales for engagement, it becomes more dependent on marketing to get its message out.  It’s not working, as my numbers have shown.

Buyers also don’t like management shifts and confusion.  HP leads the parade in terms of loss of influence and every down-tick corresponds to a new management foible.  An average buyer expects 4.8 years of useful life from a piece of tech gear, and if you don’t know what your seller is going to be doing a week from now there’s certainly grounds for concern.  That raises questions for companies like Alcatel-Lucent, IBM, HP, NSN, and now Juniper who have recently made key management changes and/or ownership changes.

So what can we say about what vendors should do next?  For IBM and HP and Juniper, it seems clear that what they need more than anything else is better positioning and marketing.  None of these companies score well with buyers on the critical measure of “does the company fully exploit its own technology and benefits?”  IBM and Juniper have been sliding in this metric for quite a while and so a reversal is critically important for them.  If they don’t reverse their trend line it’s almost certain that Cisco will take over as the most influential player.

Juniper’s big chance comes at the end of this month when their new CEO takes over.  An encouraging sign IMHO is that software head Bob Muglia has announced his departure, following his CEO mentor Kevin Johnson out of Juniper.  I think that this pair took Juniper in a decisively wrong direction.  What I don’t know is whether new management will do any better.  If Juniper presses Cisco hard on effective innovation in networking they’ll erode Cisco’s influence and give IBM a chance.  Juniper could be a spoiler, and that of course could have a major impact on Juniper’s sales and profits.

For the giant telco vendors like Alcatel-Lucent, Ericsson, and NSN, things are complicated.  Full-spectrum network product lines have made it almost impossible for any of these companies to weather changes, because everything is a zero-sum game when you support new and old technologies at the same time.  Ericsson has been leveraging professional services and OSS/BSS, which is smart, but they are not innovators by nature and they are at serious risk as technologies like SDN and NFV mature.  Those technologies will demand innovation, and that’s where Alcatel-Lucent and NSN can catch up.  I see both these Ericsson competitors trying to hew out a position in SDN and NFV.  Nothing imaginative yet, but it could happen.  And, of course, Huawei is in the wings waiting for the opportunity to simply price them all out of the market.  Stand still and Huawei will sell your niche out from under you.

So that’s a summary of where we are (subscribers to our journal Netwatcher will get a full report later this month).  A lot could happen next year, but we’ve had fairly stagnant and unimaginative productization in networking for a while now.  We have to shake off the cobwebs or the trends of the past will fossilize into the pebbles of the future.

IT Players’ Plans, and Buyers’ Thoughts, on SDN and NFV

One of the comments that was posted on one of my blogs (on LinkedIn) was that it was surprising that the IT vendors were not heard from much regarding SDN/NFV.  I agree at one level; IT is the obvious beneficiary of this whole software-defined-stuff initiative set.  However, there are IT vendors involved in the process and some might even be considered quasi-active.  It’s just not totally clear what their intentions are.

One example of this is OpenDaylight.  Among the big IT names, IBM, Microsoft, and Red Hat are platinum members, and Dell, HP, and Intel are silver.  Certainly this would qualify as a form of SDN support.  In the NFV ISG we find IT giants HP, IBM, Intel, Oracle, and Red Hat, so it would seem that the IT guys are aware of and involved to some extent in both activities.

Where the intentions question comes in is my perception that a lot of the IT companies are on the bench rather than on the field.  It’s not so much that they don’t support the notion of SDN or NFV as that they aren’t ready to step up and do something specific.

Arguably, Pica8 has an interesting notion (for SDN) that could be applied to both SDN and NFV—a “starter kit”.  Many of the IT players sell packaged configurations, so why not sell an SDN or NFV stack or package?  Operators in my fall survey told me that they would like to see NFV offered by an IT vendor.  Enterprises still think the network vendors are the best source of SDN and they have no significant current interest in NFV.  I think that they’d be interested if someone painted an enterprise-centric NFV picture (which is actually quite easy to do) but they’re not seeing that now.  A kit for either one could be a game-changer if it was correctly formulated and offered by an “expected” source.

Correct formulation?  It has to be something that plugs into current network/IT systems with clear points of integration and manageable efforts.  It has to be “play” as well as “plug” meaning it should include a sniffing component that figures out what’s there and makes recommendations.  It has to have multiple points of application within a network, be capable of starting off in any of them, and still eventually build and coalesce into a unified end-to-end strategy.  Islands of SDN or NFV don’t cut it according to my survey.
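The “sniffing” component above can be pictured with a rough sketch of a recommendation pass; the inventory schema and the recommendation rules here are invented purely for illustration.

```python
def recommend_migrations(inventory):
    """Toy 'sniffing' pass for a hypothetical starter kit: given a
    discovered inventory of network elements (fields invented for
    this example), flag first candidates for SDN or NFV migration."""
    recs = []
    for dev in inventory:
        if dev.get("role") == "appliance":
            # Purpose-built appliances are natural NFV hosting candidates.
            recs.append((dev["name"], "candidate for hosting as a virtual function"))
        elif dev.get("openflow_capable"):
            # Switches that already speak OpenFlow can join an SDN island.
            recs.append((dev["name"], "candidate for SDN flow control"))
    return recs
```

A real kit would obviously discover the inventory itself and apply far richer rules, but the shape is the point: start from what’s there, recommend a first step, and leave room for the islands to coalesce later.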

Expected source?  Enterprises want network revolutions extended by network vendors or at least by vendors with a strong network story.  They’d love Cisco or HP because both have network gear and servers.  They’d largely accept VMware or Brocade or Dell as well.  Network operators want, as I noted, IT players because they’re far from convinced the big network vendors are sincere, so they’d like to see HP and IBM and Dell and Red Hat do something, in that order.

According to both enterprises and network operators, their hopes for a plug-and-play solution to SDN and NFV have so far been in vain.  In the operator space, only a bit over 10% say they are aware of a cohesive SDN or NFV strategy from a major IT vendor (HP gets the most mentions for one).  In the enterprise space, the “I-know-of-one” responses are at the statistical noise level.  Which is interesting given that some of the IT vendors actually purport to have at least an SDN story for enterprises.  HP again gets the nod in terms of the most mentions, but as I said, the data is at the noise level for the enterprises at this point.  Most of them are still seeing SDN as a network play and looking to their network vendors.

The plug-and-play idea suggests that a big problem with both SDN and NFV is the fear of integration.  Buyers do not perceive either technology to be mature enough to be installed without specialized skills and even modifications or customization.  Despite the fact that arguably a successful NFV implementation would make installing virtual stuff as easy or easier than installing real boxes, the buyers are not so far seeing it that way.  They may want that kind of easy transition but they don’t apparently think it’s currently available.

This, I think, is why almost three-quarters of enterprises and over two-thirds of carriers say that SDN did not advance materially in their shop in 2013 and fewer than half of either category believe it will advance materially in 2014.  This, despite the fact that both enterprises and operators say (by 90% or better) that SDN would be valuable for them and almost 100% of operators say NFV would be.  Among enterprises, the largest reason given for lack of progress is that “products aren’t ready”.  Among operators, it’s “lack of standards”, “management integration”, and “support from major vendors”, with all three getting almost identical scores.

So are we going to fix this in 2014?  I think it would be possible.  There are efforts underway to create cohesive implementations of both SDN and NFV that could be the foundation for a plug-and-play solution.  There may be enough competitive pressure placed on IT vendors to stimulate them to offer something, and of course any entry into the market by the IT guys would spur network vendors to do something too.  It’s one of those at-the-starting-gate-waiting-for-a-move moments, in short.  If one moves, all will.  If none moves?  Well, you can figure that out too.

Can Operations Join Top-Down and Bottom-Up in SDN and NFV?

With news that Intel is announcing a platform (“Highland Forest”) for hosting network functions for SDN and NFV and an HP exec is taking the role of chairing the ONF’s “northbound API” group, it would seem that our world of “software-defined-everything” is taking on new life.  I hope so, but there’s still a question of whether we’re attacking the right problems.  We may be seeing less SDN/NFV-washing, but we’ve still got a lot of light rinsing going on.

SDN and NFV are starting from the bottom and working their way upward.  We have low-level technology solutions well-defined, and yet here we are just taking up those critical northbound APIs.  How do you build value for something when it’s disconnected from rational paradigms for use?  The problem, of course, is that the opposite is true too.  How do we create credible value for SDN or NFV if we don’t understand how to evolve them from the trillion dollars or so of current investment?  So we’re left with the old “groping the elephant” paradigm; we don’t have a truly systemic view of the network of the future, and so we’ve got a lot of things happening that depend on the completion of a task not everyone is eager to face.

Not eager?  Why, you might wonder, would network vendors and others be unhappy with a complete SDN/NFV story?  Because it’s pretty clear that the world economy is recovering and that’s created a hope for a tech spending rebound in 2014.  This is not the time when anyone wants technical planners at network operators or enterprises to start humming “Let the old earth take a couple of whirls” when it comes to planning projects and launching spending initiatives.  The focus of the industry right now is not to build value for either SDN or NFV but to demonstrate that what you buy now can be fit into a puzzle for SDN and NFV later on.  That way, you buy now—so they hope.

I think that there is some credibility to the notion that a new hardware platform could be valuable in an SDN/NFV age.  However, I also think that we have ample proof that current software technology from companies like 6WIND (who I’ve mentioned in prior blogs) can provide data plane acceleration to COTS servers.  Before we declare that we have to move into new hardware combinations, we need to understand why this approach wouldn’t be better overall.  After all, it would preserve the value of current server technology.

I also think there’s credibility to the standardization of northbound APIs.  However, I have to wonder if, as you build up from your skyscraper foundation toward the sky, you might not encounter a point where knowing what you expected the roof to be was as important as knowing what’s holding you to the ground.  Can we realize those APIs fully with no clear vision of what we mean by “SDN services?”  If we try, do we not risk building APIs that can do little more than support our current conception of networking?  There is no SDN or NFV revolution if we use different technology to create the same services at the same cost points.

Over the last year, I’ve watched as operators have matured their views of the network of the future.  It started, arguably, with frustration with their vendors boiling over into a “replace proprietary with COTS” model.  Who can blame them?  For almost eight years now operators have been asking for support in business model transformation and not getting it.  But by the summer these same operators were recognizing that capital savings won’t do the job; they have to look to profound opex changes and savings.  Now the leaders are saying that won’t be enough either: they need service agility and the ability to quickly frame offerings to suit the needs of a market that’s increasingly tactical and consumeristic.

All of these new things are at the top of the service food chain, above not only “Level 3” but above all the OSI layers.  Truth be told, they are service creation and service management activities and not traditional networking activities at all.  Do we believe the operators as they describe the future?  I think we have to, and if we do believe them we have to start thinking about how the stuff above the hardware platforms, even above those northbound APIs, creates the benefits that these operators will demand if they’re to invest in SDN or NFV at all.  If we can’t build down from those benefits to meet the bottom-up-VMs-and-OpenFlow model that’s evolving, if we can’t secure both evolution and our driving goals, then we’re watching a PR extravaganza and not an industry revolution.  In that case, Cisco’s problem with “infrastructure SDN” wouldn’t be that it was too conservative for market needs but that it was unnecessarily radical.

And do you know what?  The same battle between sustaining current spending and securing the future is taking shape up there in the management and service creation layers.  I think I recounted my experience of sitting in a big meeting with a Tier One and listening to one person say that NFV had to support the current operations practices and investment while the person sitting next to them said they needed NFV to quickly replace both.  The TMF is actually grappling with some changes in its architecture that would acknowledge the business and service reality of the network of the future.  I’m not getting a lot of comment from the OSS/BSS guys to suggest that they’re rushing out to make that same sort of thing happen at the product level.

Here’s my suggestion to everyone.  The evolution/revolution balance is set by benefits.  If the future is really, really valuable then there’s a lot you can tolerate in terms of costs and write-downs to get there.  If it’s only marginally valuable, then you can’t fork-lift even a small pallet.  By not looking to the skies here, by not specifically framing our path to the value of both SDN and NFV in the long run, we’re making it harder to justify even tiny little steps in the present.  I believe firmly in the value of both, and so I’m determined to get those benefits on the table.  That’s my promise for 2014.

Nudges that Could Add Up to a Big Push

Well, it’s catch-up Friday today, and there are a number of items that cropped up this week but didn’t make the cut for dedicating a blog entry.  If you put them together you can see some forces acting to create industry change—not in a giant push but in nibbles.

There’s a report that FCC Chairman Wheeler has said that he believes that OTT players should be able to pay ISPs for premium handling.  If true, this is a pretty significant policy shift from the Genachowski camp and the current Neutrality Order.  I’ve never liked the “consumer-must-pay” approach because I think it reduces the incentives for investing in better Internet infrastructure, and so reversing that view would in my view help the industry—not to mention providing consumers with better video and enterprises with better cloud access.

The problem with the “consumer-must-pay” approach is that consumers will in the main elect to roll the dice on quality.  The supplier of video or cloud, on the other hand, would very likely want to use service quality as a differentiator.  If the OTT supplier can pay when they want, then the ISPs are likely to make QoS available.  If the consumer has to pay (and they won’t) nobody will even offer QoS.  That’s why reversing the current approach would almost surely increase the flow of revenue from OTT to ISP, which would help fund network enhancements.

The argument on the other side (which Genachowski was perhaps a bit too ready to accept as a former VC) is that smaller OTTs might not have the money to pay for QoS.  That’s like saying “We won’t allow BMWs to be sold because everyone can’t buy one.”  If the VCs want to fund a content or cloud company, let them expect to pay for premium carriage if that’s what the market ends up with.  We’re at the stage where we need innovation more on the network side than in new sources of OTT video or new cloud providers.

Another interesting note is that Juniper made two announcements of enhanced product functionality, one relating to improvements to its Junos Pulse mobile agent technology and the other to VPN capabilities.  The changes to Pulse provide the basis for creating a cooperative mobile-management ecosystem, something that Juniper could have done three years ago (and should have) but that’s now critical given that Juniper has ended its mobile-specific product initiative (MobileNext).  The VPN changes provide for application-specific rather than device VPNs.  While this is also positioned at the mobile level, it could be a step to something important.

If you look at SDN applications you realize that we’re kind of playing with half the deck.  We have SDN solutions for the data center and application-specific networks there, but we can’t connect to application users remotely on a per-application basis.  If we had that capability we could build a whole new model of application networking and security, one where communities of users with the same access rights were connected to communities of applications.  Combinations other than those explicitly allowed would just not connect, which creates an explicit rather than a permissive model of communications.  It’s critical for full exploitation of mobility but important for everything.
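A minimal sketch of that explicit connectivity model may help; the user names, community names, and allow-list below are invented for illustration and don’t reflect any real product API.  The key property is that connections are denied by default, and only explicitly allowed user-community/application-community pairs connect:

```python
# Hypothetical sketch of an "explicit" rather than "permissive" model:
# users and applications are grouped into communities, and any pairing
# not on the allow-list simply does not connect.

# Assumed, illustrative data.
USER_COMMUNITIES = {
    "alice": "engineering",
    "bob": "finance",
}
APP_COMMUNITIES = {
    "build-server": "engineering-apps",
    "ledger": "finance-apps",
}
# Explicit allow-list of (user community, app community) pairs.
ALLOWED = {
    ("engineering", "engineering-apps"),
    ("finance", "finance-apps"),
}

def may_connect(user: str, app: str) -> bool:
    """Deny by default; permit only explicitly allowed community pairs."""
    pair = (USER_COMMUNITIES.get(user), APP_COMMUNITIES.get(app))
    return pair in ALLOWED

print(may_connect("alice", "build-server"))  # True
print(may_connect("alice", "ledger"))        # False
```

The point of the sketch is the default: in a permissive network you enumerate what to block, while here anything not enumerated as allowed never connects at all.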

The obvious question is whether Juniper now intends to make an application-specific networking push based on its Pulse collateral, something that would be a true differentiator.  At an even higher level, does this mean that the new Juniper CEO (who takes over next month) is going to drive Juniper not to consolidate costs until the company implodes, but rather to focus on innovation?  I said before that how Juniper goes in terms of consolidate-versus-innovate will have a major impact on the competitive dynamic and thus on the industry, so we need to see how this one plays out.

We’re also seeing some contradictory attitudes on Huawei emerging.  On the one hand, Huawei’s CEO is saying that they are going to stop even trying to sell equipment to US carriers because of US government suspicion that Huawei might be an on-ramp for spying from China and the PLA in particular.  On the other hand, the UK has approved a Huawei security center.  You could argue that the US feels it’s at a greater risk here, or that they know something that the UK doesn’t.  You could argue that Huawei is a victim of a combination of US politics and lobbying by US networking companies (Cisco, of course, comes to mind).  Cisco is said to believe that its success in China is being impacted by its lobbying against Huawei here.

I’ve talked to carrier engineers worldwide and I’ve yet to find one who believes that a network vendor like Huawei could or would build a back-door into their equipment to create an opportunity to spy or to interfere with network operation.  Most say that if you wanted to disrupt a network, you’d disrupt it the same way you’d gain access to a power plant or a defense database or a list of usernames and passwords—hacking.  The hacking risk, which has also been identified with China, is a far larger risk that we’re already facing.  Is a back-door risk even real, much less a significant incremental one?

Cisco needs a level playing field in China, and that’s probably not going to happen if Huawei doesn’t have one in the US.  I think we can expect to see either a shift in policy here, or a hardening, and either of these will be a force in shaping how the industry goes in 2014.  Huawei unbridled will put enormous pressure on vendors who have been able to dodge Huawei’s pricing power in the US market.  Cisco bridled in China will inevitably hurt its numbers, and if all US networking companies were to be treated in China as Huawei is here, it could knock a noticeable amount of profit off their balance sheets.

The market in 2014 is going to be a sum of forces, big and little.  As we move into Q1 we should have a better notion of where they’re going to push us all.

Who Wins in SDN/NFV? Maybe Professional Services Groups!

Hardware, whether network or IT, is commoditizing in the view of most in the industry and on Wall Street.  Software licenses now make up less than a quarter of the revenue of some major “software companies” and the open source movement is making credible progress toward making a big chunk of it free.  What’s left?  The answer is “professional services”, and we are already seeing signs that the future of tech might belong not to the people who make stuff but to those who can make stuff work.

In networking, we have all of the credible telecom equipment vendors moving to become more like professional services companies.  Ericsson, for example, has made no bones about its position that this is its revenue/profit future.  We’ve seen most of the big network vendors launch at least embryonic professional services efforts.  While you could argue that this was simply an example of the universal desire to increase total addressable market (TAM), you could also argue that it represents a growing realization that this may be where the money really is.

Government data on tech spending has always divided the total pie into hardware/software, networking, and professional services, and in many of the last ten years the sizes of these pieces have been roughly the same.  Most of the professional services spending in the past has gone to the big consulting and integration companies, like Accenture, but there’s a key factor in market evolution that’s created incentive for the vendors themselves to get into the business—an incentive beyond direct professional services profits.

In a recent Gartner report, HP and Cisco were seen as being the server players who were “thriving”, and that’s pretty consistent with my survey results this year.  What’s interesting is that the buyers of servers (enterprise or telco) say that the big factor in the deals is specific experience of the seller in integrating servers into a complete solution for the buyer.  In short, professional services.  If services can pull through hardware/software sales, then anyone who’s a vendor darn sure better start thinking about it.

Another issue is that of margin pressure.  I remember my first IBM PC—it had two 5.25-inch floppy drives with about 160KB of capacity each, a text-only monitor, and 128KB of RAM, and it cost about four thousand dollars.  Retail margins were about 35%, so the store made twelve hundred bucks on the sale.  The local retailer could afford to spend a little time helping get the (for the time, enormously powerful) system running and even involved IBM Boca to help.  Today the full sale price of a low-end desktop would be about a third of the retail margin of my first system, and the gross margin is less than 20%.  Nobody is going to hold your hand for that; they won’t even blow you a kiss.  Thus, as hardware/software commoditizes, the loss of profit reduces “included” services.

The problem is that in a commodity market you can make more money only by selling at that lower market price to more people.  A mass market is an illiterate market, so they need more support and not less.  What we’re now finding is that this has divided the market for tech into the consumer space—where the motto is “fire and forget”—and the business buyer who now has to buy support incrementally.  And of course that support doesn’t require manufacturing costs or shipping or repair.

The support-and-integration-centricity of our tech markets isn’t getting less, it’s growing.  Look at SDN and NFV, both technologies that purport to call for open solutions.  The effect of that openness is to drive down the prices of the components of both technologies, or even make some of them open-source.  That means that anyone who wants to do a real SDN or NFV implementation is going to have to pull together pieces to create a cohesive system.  Can they do that?  Internal literacy on both SDN and NFV today is at the statistical noise level.  The biggest SDN and NFV opportunities probably accrue to the companies who can be the integrators—fit the puzzles together with confidence and take responsibility for what they create.

I expected to see this trend, and I also expected it might empower the current giants of software consulting and integration—the Accentures, for example.  Instead, what my surveys are showing is that the buyers typically want to cede the professional services associated with a tech project to a vendor with a lot of product skin in the game.  Part of that is the fact that the understanding of how to build and sell a given product builds some understanding of deployment and use.  Part is that buyers who used to think of independent consulting/integration firms as being “unbiased” now think of them as first having hidden relationships with vendors and second being interested only in padding their bottom lines by adding unnecessary costs.  Two-thirds of enterprises said they believed that independent third-party firms were less likely to offer them the best strategy than a vendor integration organization.

It may be that professional services will be the differentiator among “vendors” as we move into the era of the cloud, SDN, and NFV.  I don’t think that this will mean that we move to a world where every hardware element is a nameless white box given a personality by an independent integrator, but rather into a world where the name on the box stands for integration skill and problem-space knowledge rather than for manufacturing.

If that’s true, this will be a massive shift.  It’s hard to build a professional services story from simple cost reduction—why not reduce costs by reducing the professional services spending?  On the other hand, productivity and other “benefit-enhancing” projects demand that problem-space knowledge and that might force sellers to value true solutions and benefits.  It could, over time, help restore innovation (maybe of a different kind) to our industry.

Analyzing the Wall Street View of Networking

It’s always fascinating to get a Wall Street view of networking, so I was happy to review the William Blair tech outlook.  While I don’t always agree with the Street, they certainly have capabilities that I don’t in analyzing the financial trends.  Even where we disagree, there’s value in contrasting the Street view with the view of someone who does fundamentals modeling (me).

The report paints a picture of the industry that I’d have to agree with.  There are pockets of opportunity (WiFi is one, created by the onrush of smartphones and tablets that have data demands that simply can’t be satisfied through traditional mobile RANs) and some specific areas of systemic technical risk (SDN obviously, but NFV more so and I think Blair overplays the former and underplays the latter).  Things like storage and some chips, technologies that are low on the food chain and directly impacted by demand factors, are better than things higher up.  That’s true in the OSI sense too; optical transport is a better play than the higher layer.

In the cloud, I’m happy to say that the report conforms to the survey of enterprises I just completed.  That survey shows that enterprises are using IaaS but are targeting SaaS and PaaS for future cloud commitment.  The simple truth is that everyone who’s run the numbers for the cloud recognizes that the problem with IaaS is that it doesn’t impact enough cost.  If you have low utilization for any reason, IaaS is enough to build a business case for the cloud.  If you have more typical business IT needs then you need to be able to target more cost.  Maintaining software on a third-party-hosted VM isn’t different enough from maintaining it on your own.
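A back-of-the-envelope calculation shows why IaaS “doesn’t impact enough cost.”  All of the percentages below are illustrative assumptions, not survey figures; the point is only the structure of the arithmetic, not the specific numbers:

```python
# Illustrative sketch: IaaS can only displace the hardware/facilities
# slice of an IT budget, which caps the achievable savings.

annual_it_budget = 1_000_000   # total IT spend (assumed)
hardware_share = 0.20          # slice IaaS can displace: servers, power, space (assumed)
iaas_discount = 0.30           # savings on that slice from cloud hosting (assumed)

iaas_savings = annual_it_budget * hardware_share * iaas_discount
print(f"IaaS saves {iaas_savings / annual_it_budget:.1%} of the budget")  # 6.0%

# SaaS/PaaS also displace software licensing and operations/support cost,
# so they can address a much larger slice of the same budget (assumed splits).
saas_addressable_share = 0.20 + 0.25 + 0.30  # hardware + licenses + ops
print(f"SaaS/PaaS address ~{saas_addressable_share:.0%} of spend "
      f"vs {hardware_share:.0%} for IaaS")
```

Under these assumed splits, even a generous IaaS discount moves only a few percent of total spend, while SaaS/PaaS can attack a majority of it—which is consistent with the enterprises’ targeting.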

Another area where I think the Blair comments make sense is in the monitoring space, but there I’m not sure it goes far enough.  The fact is that as networks come under revenue-per-bit pressure there’s a need to optimize transport, which tends to drive up utilization and create greater risks for QoS and availability.  The normal response to this is better management/monitoring, but the problem I see here is that getting information on network issues isn’t the same as addressing them.  “Monitoring” is an invitation to an opex explosion if you can’t link it to an automated response.

Service automation is something dear to my heart because I’ve worked on it for a decade now (IPsphere, the TMF SDF, ExperiaSphere, NFV, CloudNFV) and come to understand how critical it is.  The foundation of lower opex is service automation.  The foundation of service automation is a service model that can be “read” and used as a baseline against which network behavior can be tested and to which that behavior can be made to conform.  We’re still kicking the tires in terms of what the best way to model a service might be.  We’re even earlier than that in the critical area of linking a model to telemetry in one dimension and to control in another.  That’s an area we’ve been working on in CloudNFV, and one where I think we might be able to make our largest contribution.  Blair, for now, seems too focused on the telemetry part.
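The model-telemetry-control linkage described above can be sketched abstractly.  Everything in this example is a hypothetical illustration—the metrics, thresholds, and remediation stubs are invented, not CloudNFV code.  The shape is what matters: a service model states desired behavior, telemetry reports observed behavior, and the automation layer generates control actions to restore conformance:

```python
# Hypothetical service-automation loop: model vs. telemetry vs. control.

# The "readable" service model: desired behavior per service (assumed metrics).
service_model = {
    "vpn-gold": {"max_latency_ms": 20, "min_uptime": 0.999},
}

def read_telemetry(service: str) -> dict:
    # Stand-in for real monitoring; returns observed metrics.
    return {"max_latency_ms": 35, "min_uptime": 0.9995}

def remediate(service: str, metric: str) -> str:
    # Stand-in for real control actions (reroute, scale out, redeploy...).
    return f"reroute {service} to meet {metric}"

def enforce(service: str) -> list:
    """Compare observed behavior to the model and emit control actions."""
    desired = service_model[service]
    observed = read_telemetry(service)
    actions = []
    if observed["max_latency_ms"] > desired["max_latency_ms"]:
        actions.append(remediate(service, "max_latency_ms"))
    if observed["min_uptime"] < desired["min_uptime"]:
        actions.append(remediate(service, "min_uptime"))
    return actions

print(enforce("vpn-gold"))  # latency is out of spec, uptime is not
```

Focusing only on `read_telemetry` is the trap I’m describing: without the model as a baseline and the `remediate` path as an automated response, monitoring just generates more data for humans to act on, which is opex, not service automation.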

In terms of network infrastructure, I think the Blair theme is that there are things that directly drive traffic and thus would encourage deployment of raw capacity.  I agree, and I’d add to that the fact that as bandwidth becomes cheaper at the optical level, the value of aggregation to secure bandwidth efficiency at the electrical level reduces.  That’s particularly true when, as I’ve already noted, the higher layers tend to generate a lot of opex just keeping things organized and running.  SDN is a theme, IMHO, because of this factor.  If you can simplify the way that we translate transport (optical) into connectivity (IP) then you can reduce both capex and opex.  The question that’s yet to be answered is whether SDN processes as they’re currently defined can actually accomplish that because it’s not clear how much simplification they’d bring.

Network infrastructure is where NFV comes in, or should.  The Blair view is that NFV addresses the “sprawling number of proprietary hardware appliances”, which is certainly one impact.  In that sense, NFV is an attack on an avenue equipment vendors hoped to exploit for additional profits.  As services move up the stack, vendors move from switches/routers to appliances—or so the hope would be.  But I think that NFV is really more than that: it’s a kind of true cloud DevOps tool, something that can automate not only deployment but also that pesky task of service automation that’s the key to opex reduction.

I’ve blogged before that opex savings are what have to justify NFV, and operators are now starting to agree with that point.  The challenge is that while COTS platforms are cheaper than custom appliances in a capex sense, the early indications are that they might well be more expensive in an opex sense, and unless the capex savings are huge and the opex cost differential is small, the result would be a net savings (at best) too small to justify much change.  I think that the success of NFV may well depend on how easily it can be made to accommodate (or, better yet, drive) a new management model.  The success of that will depend on whether we can define that new model in a way that accommodates where we are and where we need to be at the same time.

Blair’s picks for tech investment are largely smaller players, and that fits the theme I opened with.  Networking is in the throes of a major systemic change that will challenge most those who are most broadly committed to the space.  If you’re wide as a barn, some of the pellets of a double-barreled broadside of change are bound to hit you somewhere.  But even narrow-niche players have their issues.  Strategic engagement with the buyer seems, in both carrier and enterprise networking, to be very hard to sustain with a narrow portfolio.  So the fact is that while all big players are challenged, all little players are narrow bets made in an industry whose directions and values are still very uncertain.  For sure, we’re in for an interesting 2014.