Vendor Rankings on Buyer Influence, and What They Can Do About It

We’ve surveyed enterprise buyers since 1982 and network operator buyers since 1991, both to find out what they’re planning to do and what they think of vendors.  I’ve been sharing some of the findings on plans and attitudes, and this seems a good time to share some on vendor influence.  You’ll see why as we go along.

Who’s the top player in tech, influence-wise?  That question would have been easy to answer at any point in the last 30 years—IBM.  It’s not so easy today, because another player has gained ground and IBM has lost it.  In the fall, Cisco and IBM were in a dead heat for overall influence on tech buyers.  Cisco’s aspirations to be the number one IT company are coming to realization, if influence is a measure of future ability to drive purchases.

But just as interesting is the fact that Cisco is gaining by losing.  Or at least by losing less.  Since 2010, vendors have lost influence with their customers, and Cisco has lost less than rival IBM, less than HP or Juniper.  Everyone is losing a race to the bottom, and losing it more slowly puts Cisco nearer the top.

What buyers say is causing the decline in vendor influence is a factor I’ve already blogged on.  They believe that vendors are simply pushing boxes at them without regard for whether the buyer can make a business case.  In the service provider space especially, vendors are seen as not supporting buyer business transitions.  Cisco gained ground on rivals not so much because they did better at this transition support as because they introduced UCS.  If you look at past trends, Cisco would be in trouble in terms of account control if they hadn’t added servers to their portfolio.  The data now suggests that Cisco’s next big push had better be in software; buyers think Cisco will sink by 2016 without a stronger software strategy.

If you look at the losers in influence, you see some common threads.  One is that scope helps.  Companies with broader product lines generally exercised more influence than those with narrow product lines.  That’s not surprising, I think; if you can talk to a buyer about everything they need, you’ll talk to them more often and have more shots at gaining traction.  Another factor is marketing/positioning.  Vendors like Cisco, who are seen as marketing machines, tend to do better than vendors like Juniper, who are seen as inept at positioning their offerings.

IBM may be the biggest poster child for the value of marketing.  Buyers in both the enterprise and provider spaces say that IBM’s website and public positioning are muddy, confused, and uninspiring.  Big Blue does well face to face, but the problem is that there are only so many major accounts that can justify a full-court press sales-wise.  As IBM has come to depend more on SMBs for revenue and profit and on channel sales for engagement, it has become more dependent on marketing to get its message out.  It’s not working, as my numbers have shown.

Buyers also don’t like management shifts and confusion.  HP leads the parade in terms of loss of influence, and every down-tick corresponds to a new management foible.  An average buyer expects 4.8 years of useful life from a piece of tech gear, and if you don’t know what your seller is going to be doing a week from now, there are certainly grounds for concern.  That raises questions for companies like Alcatel-Lucent, IBM, HP, NSN, and now Juniper, all of which have recently made key management and/or ownership changes.

So what can we say about what vendors should do next?  For IBM and HP and Juniper, it seems clear that what they need more than anything else is better positioning and marketing.  None of these companies score well with buyers on the critical measure of “does the company fully exploit its own technology and benefits?”  IBM and Juniper have been sliding in this metric for quite a while and so a reversal is critically important for them.  If they don’t reverse their trend line it’s almost certain that Cisco will take over as the most influential player.

Juniper’s big chance comes at the end of this month when their new CEO takes over.  An encouraging sign IMHO is that software head Bob Muglia has announced his departure, following his CEO mentor Kevin Johnson out of Juniper.  I think that this pair took Juniper in a decisively wrong direction.  What I don’t know is whether new management will do any better.  If Juniper presses Cisco hard on effective innovation in networking they’ll erode Cisco’s influence and give IBM a chance.  Juniper could be a spoiler, and that of course could have a major impact on Juniper’s sales and profits.

For the giant telco vendors like Alcatel-Lucent, Ericsson, and NSN, things are complicated.  Full-spectrum network product lines have made it almost impossible for any of these companies to weather changes, because everything is a zero-sum game when you support new and old technologies at the same time.  Ericsson has been leveraging professional services and OSS/BSS, which is smart, but they are not innovators by nature and they are at serious risk as technologies like SDN and NFV mature.  Those technologies will demand innovation, and that’s where Alcatel-Lucent and NSN can catch up.  I see both these Ericsson competitors trying to carve out a position in SDN and NFV.  Nothing imaginative yet, but it could happen.  And, of course, Huawei is in the wings waiting for the opportunity to simply price them all out of the market.  Stand still and Huawei will sell your niche out from under you.

So that’s a summary of where we are (subscribers to our journal Netwatcher will get a full report later this month).  A lot could happen next year, but we’ve had fairly stagnant and unimaginative productization in networking for a while now.  We have to shake off the cobwebs or the trends of the past will fossilize into the pebbles of the future.

IT Players’ Plans, and Buyers’ Thoughts, on SDN and NFV

A comment posted on one of my blogs (on LinkedIn) said it was surprising that the IT vendors haven’t been heard from much regarding SDN/NFV.  I agree at one level; IT is the obvious beneficiary of this whole software-defined-stuff initiative set.  However, there are IT vendors involved in the process, and some might even be considered quasi-active.  It’s just not totally clear what their intentions are.

One example of this is OpenDaylight.  Among the big IT names, IBM, Microsoft, and Red Hat are platinum members, and Dell, HP, and Intel are silver members.  Certainly this would qualify as a form of SDN support.  In the NFV ISG we find IT giants HP, IBM, Intel, Oracle, and Red Hat, so it would seem that the IT guys are aware of, and involved to some extent in, both activities.

Where the intentions stuff comes in is that my perception is that a lot of the IT companies are on the bench rather than on the field.  It’s not so much that they don’t support the notion of SDN or NFV as that they aren’t ready to step up and do something specific.

Arguably, Pica8 has an interesting notion (for SDN) that could be applied to both SDN and NFV—a “starter kit”.  Many of the IT players sell packaged configurations, so why not sell an SDN or NFV stack or package?  Operators in my fall survey told me that they would like to see NFV offered by an IT vendor.  Enterprises still think the network vendors are the best source of SDN, and they have no significant current interest in NFV.  I think they’d be interested if someone painted an enterprise-centric NFV picture (which is actually quite easy to do), but they’re not seeing that now.  A kit for either one could be a game-changer if it were correctly formulated and offered by an “expected” source.

Correct formulation?  It has to be something that plugs into current network/IT systems with clear points of integration and manageable effort.  It has to be “play” as well as “plug,” meaning it should include a sniffing component that figures out what’s there and makes recommendations.  It has to have multiple points of application within a network, be capable of starting off in any of them, and still eventually coalesce into a unified end-to-end strategy.  Islands of SDN or NFV don’t cut it, according to my survey.
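
To make that concrete, here’s a minimal sketch of what the “sniff and recommend” piece of such a kit might look like.  Everything in it (the device roles, thresholds, and recommendation rules) is my own hypothetical illustration of the concept, not anyone’s actual product:

```python
# Hypothetical sketch of a starter kit's "plug and play" discovery pass.
# Device roles, thresholds, and recommendation rules are illustrative only.
from dataclasses import dataclass

@dataclass
class Device:
    name: str
    role: str           # e.g. "firewall", "load-balancer", "edge-router"
    utilization: float  # 0.0-1.0, as reported by existing management tools

def discover_inventory():
    """Stand-in for the sniffing step; a real kit would walk the
    SNMP/NETCONF inventories already present in the network."""
    return [
        Device("fw-east-1", "firewall", 0.35),
        Device("lb-dc-2", "load-balancer", 0.15),
        Device("edge-07", "edge-router", 0.80),
    ]

def recommend(devices):
    """Turn discovered inventory into candidate first steps, so the kit
    can start anywhere and grow toward an end-to-end strategy."""
    advice = []
    for d in devices:
        if d.role in ("firewall", "load-balancer") and d.utilization < 0.5:
            advice.append(f"{d.name}: lightly loaded {d.role}, candidate "
                          "for an NFV-hosted virtual function")
        elif d.role == "edge-router" and d.utilization > 0.7:
            advice.append(f"{d.name}: congested edge, candidate for "
                          "SDN-managed traffic engineering")
    return advice

for line in recommend(discover_inventory()):
    print(line)
```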

Expected source?  Enterprises want network revolutions offered by network vendors, or at least by vendors with a strong network story.  They’d love Cisco or HP because both have network gear and servers.  They’d largely accept VMware or Brocade or Dell as well.  Network operators want, as I noted, IT players because they’re far from convinced the big network vendors are sincere, so they’d like to see HP and IBM and Dell and Red Hat do something, in that order.

According to both enterprises and network operators, their hope of a plug-and-play solution to SDN and NFV is vain so far.  In the operator space, only a bit over 10% say they are aware of a cohesive SDN or NFV strategy from a major IT vendor (HP gets the most mentions).  In the enterprise space, the “I-know-of-one” responses are at the statistical noise level, which is interesting given that some of the IT vendors actually purport to have at least an SDN story for enterprises.  HP again gets the nod in terms of mentions, but as I said, the enterprise data is in the noise at this point.  Most enterprises still see SDN as a network play and look to their network vendors.

The plug-and-play idea suggests that a big problem with both SDN and NFV is the fear of integration.  Buyers do not perceive either technology as mature enough to be installed without specialized skills, and even modifications or customization.  Despite the fact that a successful NFV implementation would arguably make installing virtual stuff as easy as or easier than installing real boxes, buyers so far aren’t seeing it that way.  They may want that kind of easy transition, but they apparently don’t think it’s currently available.

This, I think, is why almost three-quarters of enterprises and over two-thirds of carriers say that SDN did not advance materially in their shop in 2013 and fewer than half of either category believe it will advance materially in 2014.  This, despite the fact that both enterprises and operators say (by 90% or better) that SDN would be valuable for them and almost 100% of operators say NFV would be.  Among enterprises, the largest reason given for lack of progress is that “products aren’t ready”.  Among operators, it’s “lack of standards”, “management integration”, and “support from major vendors”, with all three getting almost identical scores.

So are we going to fix this in 2014?  I think it would be possible.  There are efforts underway to create cohesive implementations of both SDN and NFV that could be the foundation for a plug-and-play solution.  There may be enough competitive pressure on IT vendors to stimulate them to offer something, and of course any entrée into the market by the IT guys would spur network vendors to do something too.  It’s one of those at-the-starting-gate-waiting-for-a-move moments, in short.  If one moves, all will.  If none moves?  Well, you can figure that out too.

Can Operations Join Top-Down and Bottom-Up in SDN and NFV?

With news that Intel is announcing a platform (“Highland Forest”) for hosting network functions for SDN and NFV and an HP exec is taking the role of chairing the ONF’s “northbound API” group, it would seem that our world of “software-defined-everything” is taking on new life.  I hope so, but there’s still a question of whether we’re attacking the right problems.  We may be seeing less SDN/NFV-washing, but we’ve still got a lot of light rinsing going on.

SDN and NFV are starting from the bottom and working their way upward.  We have low-level technology solutions well-defined, and yet here we are just taking up those critical northbound APIs.  How do you build value for something when it’s disconnected from rational paradigms for use?  The problem, of course, is that the opposite is true too.  How do we create credible value for SDN or NFV if we don’t understand how to evolve them from the trillion dollars or so of current investment?  So we’re left with the old “groping the elephant” paradigm; we don’t have a truly systemic view of the network of the future, and so we’ve got a lot of things happening that depend on the completion of a task that not everyone is eager to face.

Not eager?  Why, you might wonder, would network vendors and others be unhappy with a complete SDN/NFV story?  Because it’s pretty clear that the world economy is recovering and that’s created a hope for a tech spending rebound in 2014.  This is not the time when anyone wants technical planners at network operators or enterprises to start humming “Let the old earth take a couple of whirls” when it comes to planning projects and launching spending initiatives.  The focus of the industry right now is not to build value for either SDN or NFV but to demonstrate that what you buy now can be fit into a puzzle for SDN and NFV later on.  That way, you buy now—so they hope.

I think that there is some credibility to the notion that a new hardware platform could be valuable in an SDN/NFV age.  However, I also think that we have ample proof that current software technology from companies like 6WIND (who I’ve mentioned in prior blogs) can provide data plane acceleration to COTS servers.  Before we declare that we have to move into new hardware combinations, we need to understand why this approach wouldn’t be better overall.  After all, it would preserve the value of current server technology.

I also think there’s credibility to the standardization of northbound APIs.  However, I have to wonder if, as you build up from your skyscraper foundation toward the sky, you might not encounter a point where knowing what you expected the roof to be was as important as knowing what’s holding you to the ground.  Can we realize those APIs fully with no clear vision of what we mean by “SDN services?”  If we try, do we not risk building APIs that can do little more than support our current conception of networking?  There is no SDN or NFV revolution if we use different technology to create the same services at the same cost points.
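
To make the risk concrete, here’s a contrast between two hypothetical styles of northbound request.  Neither is the ONF’s actual API (that’s exactly what’s still being defined); the first barely rises above the foundation, while the second starts from a vision of the service:

```python
# Two hypothetical styles of "northbound" request; neither is a real ONF
# API.  The gap between them is the gap between building up from OpenFlow
# and building down from a vision of SDN services.

# Style 1: barely above the foundation -- the caller must think in flows,
# which locks the API into our current conception of networking.
flow_request = {
    "match": {"ip_dst": "10.1.2.0/24", "tcp_dst": 443},
    "actions": [{"output_port": 7}],
    "priority": 100,
}

# Style 2: built down from the roof -- the caller states the service
# outcome and leaves flow and path decisions to the controller.
service_request = {
    "service": "secure-app-connect",
    "endpoints": ["users:finance", "apps:ledger"],
    "sla": {"latency_ms": 20, "availability": 0.9999},
}

print(flow_request, service_request, sep="\n")
```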

Over the last year, I’ve watched operators mature their views of the network of the future.  It started, arguably, with frustration with their vendors boiling over into a “replace proprietary with COTS” model.  Who can blame them?  For almost eight years now operators have been asking for support in business model transformation and not getting it.  But by the summer, these same operators were recognizing that capital savings won’t do the job; they have to look to profound opex changes and savings.  Now the leaders are saying that won’t be enough either; we need service agility and the ability to quickly frame offerings to suit the needs of a market that’s increasingly tactical and consumeristic.

All of these new things are at the top of the service food chain, above not only “Level 3” but above all the OSI layers.  Truth be told, they are service creation and service management activities, not traditional networking activities at all.  Do we believe the operators as they describe the future?  I think we have to, and if we do believe them we have to start thinking about how the stuff above the hardware platforms, even above those northbound APIs, creates the benefits that these operators will demand if they’re to invest in SDN or NFV at all.  If we can’t build down from those benefits to meet the bottom-up-VMs-and-OpenFlow model that’s evolving, if we can’t secure both evolution and our driving goals, then we’re watching a PR extravaganza and not an industry revolution.  In that case, Cisco’s problem with “infrastructure SDN” wouldn’t be that it was too conservative for market needs but that it was unnecessarily radical.

And do you know what?  The same battle between sustaining current spending and securing the future is taking shape up there in the management and service creation layers.  I think I recounted my experience of sitting in a big meeting with a Tier One and listening to one person say that NFV had to support the current operations practices and investment while the person sitting next to them said they needed NFV to quickly replace both.  The TMF is actually grappling with some changes in its architecture that would acknowledge the business and service reality of the network of the future.  I’m not getting a lot of comment from the OSS/BSS guys to suggest that they’re rushing out to make that same sort of thing happen at the product level.

Here’s my suggestion to everyone.  The evolution/revolution balance is set by benefits.  If the future is really, really, valuable then there’s a lot you can tolerate in terms of costs and write-downs to get there.  If it’s only marginally valuable, then you can’t fork-lift even a small pallet.  By not looking to the skies here, by not specifically framing our path to the value of both SDN and NFV in the long run, we’re making it harder to justify even tiny little steps in the present.  I believe firmly in the value of both, and so I’m determined to get those benefits on the table.  That’s my promise for 2014.

Nudges that Could Add Up to a Big Push

Well, it’s catch-up Friday today, and there are a number of items that cropped up this week but didn’t make the cut for a dedicated blog entry.  If you put them together you can see some forces acting to create industry change—not in a giant push but in nibbles.

There’s a report that FCC Chairman Wheeler has said that he believes that OTT players should be able to pay ISPs for premium handling.  If true, this is a pretty significant policy shift from the Genachowski camp and the current Neutrality Order.  I’ve never liked the “consumer-must-pay” approach because I think it reduces the incentives for investing in better Internet infrastructure, and so reversing that view would in my view help the industry—not to mention providing consumers with better video and enterprises with better cloud access.

The problem with the “consumer-must-pay” approach is that consumers will in the main elect to roll the dice on quality.  The supplier of video or cloud, on the other hand, would very likely want to use service quality as a differentiator.  If the OTT supplier can pay when they want, then the ISPs are likely to make QoS available.  If the consumer has to pay (and they won’t) nobody will even offer QoS.  That’s why reversing the current approach would almost surely increase the flow of revenue from OTT to ISP, which would help fund network enhancements.

The argument on the other side (which Genachowski was perhaps a bit too ready to accept as a former VC) is that smaller OTTs might not have the money to pay for QoS.  That’s like saying “We won’t allow BMWs to be sold because everyone can’t buy one.”  If the VCs want to fund a content or cloud company, let them expect to pay for premium carriage if that’s what the market ends up with.  We’re at the stage where we need innovation more on the network side than in new sources of OTT video or new cloud providers.

Another interesting note is that Juniper made two announcements of enhanced product functionality, one relating to improvements to its Junos Pulse mobile agent technology and the other to VPN capabilities.  The changes to Pulse provide the basis for creating a cooperative mobile-management ecosystem, something that Juniper could have done three years ago (and should have) but that’s now critical given that Juniper has ended its mobile-specific product initiative (MobileNext).  The VPN changes provide for application-specific rather than device VPNs.  While this is also positioned at the mobile level, it could be a step to something important.

If you look at SDN applications you realize that we’re kind of playing with half the deck.  We have SDN solutions for the data center and application-specific networks there, but we can’t network to application users remotely on a per-application basis.  If we had that capability we could build a whole new model of application networking and security, one where communities of users with the same access rights were connected to communities of applications.  Combinations other than those explicitly allowed would just not connect, which creates an explicit rather than a permissive model of communications.  It’s critical for full exploitation of mobility but important for everything.
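
Here’s a thumbnail of what that explicit model might look like in practice.  The group names and policy structure are my own illustration, not anything Juniper has announced for Pulse:

```python
# Hypothetical sketch of explicit (default-deny) application networking:
# connectivity exists only where a user community is explicitly mapped to
# an application community.  Group names are illustrative.

ALLOWED = {
    ("engineering", "source-control"),
    ("engineering", "build-farm"),
    ("finance", "ledger"),
}

def may_connect(user_group, app_group):
    """Explicit model: anything not listed simply never connects, unlike
    the permissive model, where anything not blocked does."""
    return (user_group, app_group) in ALLOWED

assert may_connect("finance", "ledger")
assert not may_connect("finance", "build-farm")  # no grant, no path
```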

The obvious question is whether Juniper now intends to make an application-specific networking push based on its Pulse collateral, something that’s truly a differentiator.  At an even higher level, does this mean that the new Juniper CEO (who takes over next month) is going to drive not to consolidate Juniper’s costs till the company implodes, but rather focus Juniper’s innovation?  I said before that how Juniper goes in terms of consolidate/innovate will have a major impact on the competitive dynamic and thus on the industry, so we need to see how this one plays out.

We’re also seeing some contradictory attitudes on Huawei emerging.  On the one hand, Huawei’s CEO says the company will stop even trying to sell equipment to US carriers because of US government suspicion that Huawei might be an on-ramp for spying from China, and the PLA in particular.  On the other hand, the UK has approved a Huawei security center.  You could argue that the US feels it’s at greater risk here, or that it knows something the UK doesn’t.  You could argue that Huawei is a victim of a combination of US politics and lobbying by US networking companies (Cisco, of course, comes to mind).  Cisco is said to believe that its success in China is being impacted by its lobbying against Huawei here.

I’ve talked to carrier engineers worldwide, and I’ve yet to find one who believes that a network vendor like Huawei could or would build a back-door into its equipment to create an opportunity to spy or to interfere with network operation.  Most say that if you wanted to disrupt a network, you’d disrupt it the same way you’d gain access to a power plant or a defense database or a list of usernames and passwords—hacking.  The hacking risk, which has also been identified with China, is a far larger risk, and one we’re already facing.  Is a back-door risk even real, much less a significant incremental one?

Cisco needs a level playing field in China, and that’s probably not going to happen if Huawei doesn’t have one in the US.  I think we can expect to see either a shift in policy here, or a hardening, and either of these will be a force in shaping how the industry goes in 2014.  Huawei unbridled will put enormous pressure on vendors who have been able to dodge Huawei’s pricing power in the US market.  Cisco bridled in China will inevitably hurt its numbers, and if all US networking companies were to be treated in China as Huawei is here, it could knock a noticeable amount of profit off their balance sheets.

The market in 2014 is going to be a sum of forces, big and little.  As we move into Q1 we should have a better notion of where they’re going to push us all.

Who Wins in SDN/NFV? Maybe Professional Services Groups!

Hardware, whether network or IT, is commoditizing in the view of most in the industry and on Wall Street.  Software licenses now make up less than a quarter of the revenue of some major “software companies” and the open source movement is making credible progress toward making a big chunk of it free.  What’s left?  The answer is “professional services”, and we are already seeing signs that the future of tech might belong not to people who make stuff but to those that can make stuff work.

In networking, we have all of the credible telecom equipment vendors moving to become more professional-services companies.  Ericsson, for example, has made no bones about its position that this is its revenue/profit future.  We’ve seen most of the big network vendors launch at least embryonic professional services efforts.  While you could argue that this is simply an example of the universal desire to increase total addressable market (TAM), you could also argue that it represents a growing realization that this may be where the money really is.

Government data on tech spending has always divided the total pie into hardware/software, networking, and professional services, and in many of the last ten years the sizes of these pieces have been roughly the same.  Most of the professional services spending in the past has gone to the big consulting and integration companies, like Accenture, but there’s a key factor in market evolution that’s created an incentive for the vendors themselves to get into the business—an incentive beyond direct professional services profits.

In a recent Gartner report, HP and Cisco were seen as being the server players who were “thriving”, and that’s pretty consistent with my survey results this year.  What’s interesting is that the buyers of servers (enterprise or telco) say that the big factor in the deals is specific experience of the seller in integrating servers into a complete solution for the buyer.  In short, professional services.  If services can pull through hardware/software sales, then anyone who’s a vendor darn sure better start thinking about it.

Another issue is that of margin pressure.  I remember my first IBM PC—it had two 5.25-inch floppy drives with about 160KB capacity each, a text-only monitor, and 128KB of RAM, and it cost about four thousand dollars.  Retail margins were about 35%, so the store made fourteen hundred bucks from the sale.  The local retailer could afford to spend a little time helping get the (for the time, enormously powerful) system running, and even involved IBM Boca to help.  Today the full sale price of a low-end desktop would be about a third of the retail margin on my first system, and the gross margin is less than 20%.  Nobody is going to hold your hand for that; they won’t even blow you a kiss.  Thus, as hardware/software commoditizes, the loss of profits reduces “included” services.

The problem is that in a commodity market you can make more money only by selling at that lower market price to more people.  A mass market is an illiterate market, so they need more support and not less.  What we’re now finding is that this has divided the market for tech into the consumer space—where the motto is “fire and forget”—and the business buyer who now has to buy support incrementally.  And of course that support doesn’t require manufacturing costs or shipping or repair.

The support-and-integration-centricity of our tech markets isn’t shrinking, it’s growing.  Look at SDN and NFV, both technologies that purport to call for open solutions.  The effect of that openness is to drive down the prices of the components of both technologies, or even make some of them open-source.  That means that anyone who wants to do a real SDN or NFV implementation is going to have to pull together pieces to create a cohesive system.  Can they do that?  Internal literacy on both SDN and NFV today is at the statistical noise level.  The biggest SDN and NFV opportunities probably accrue to the companies who can be the integrators—fit the puzzles together with confidence and take responsibility for what they create.

I expected to see this trend, and I also expected it might empower the current giants of software consulting and integration—the Accentures, for example.  Instead, what my surveys are showing is that buyers typically want to cede the professional services associated with a tech project to a vendor with a lot of product skin in the game.  Part of that is the fact that understanding how to build and sell a given product builds some understanding of deployment and use.  Part is that buyers who used to think of independent consulting/integration firms as “unbiased” now think of them as, first, having hidden relationships with vendors and, second, being interested only in padding their bottom line by adding unnecessary costs.  Two-thirds of enterprises said they believed that independent third-party firms were less likely to offer them the best strategy than a vendor integration organization.

It may be that professional services will be the differentiator among “vendors” as we move into the era of the cloud, SDN, and NFV.  I don’t think that this will mean that we move to a world where every hardware element is a nameless white box given a personality by an independent integrator, but rather into a world where the name on the box stands for integration skill and problem-space knowledge rather than for manufacturing.

If that’s true, this will be a massive shift.  It’s hard to build a professional services story from simple cost reduction—why not reduce costs by reducing the professional services spending?  On the other hand, productivity and other “benefit-enhancing” projects demand that problem-space knowledge and that might force sellers to value true solutions and benefits.  It could, over time, help restore innovation (maybe of a different kind) to our industry.

Analyzing the Wall Street View of Networking

It’s always fascinating to get a Wall Street view of networking, so I was happy to review the William Blair tech outlook.  While I don’t always agree with the Street, they certainly have capabilities that I don’t in analyzing the financial trends.  Even where we disagree, there’s value in contrasting the Street view with the view of someone who does fundamentals modeling (me).

The report paints a picture of the industry that I’d have to agree with.  There are pockets of opportunity (WiFi is one, created by the onrush of smartphones and tablets with data demands that simply can’t be satisfied through traditional mobile RANs) and some specific areas of systemic technical risk (SDN obviously, but NFV more so; I think Blair overplays the former and underplays the latter).  Things like storage and some chips, technologies that are low on the food chain and directly impacted by demand factors, are better positioned than things higher up.  That’s true in the OSI sense too; optical transport is a better play than the higher layers.

In the cloud, I’m happy to say that the report conforms to the survey of enterprises I just completed.  That survey shows that enterprises are using IaaS but are targeting SaaS and PaaS for future cloud commitment.  The simple truth is that everyone who’s run the numbers for the cloud recognizes that the problem with IaaS is that it doesn’t impact enough cost.  If you have low utilization for any reason, IaaS is enough to build a business case for the cloud.  If you have more typical business IT needs then you need to be able to target more cost.  Maintaining software on a third-party-hosted VM isn’t different enough from maintaining it on your own.

Another area where I think the Blair comments make sense is in the monitoring space, but there I’m not sure it goes far enough.  The fact is that as networks come under revenue-per-bit pressure there’s a need to optimize transport, which tends to drive up utilization and create greater risks for QoS and availability.  The normal response to this is better management/monitoring, but the problem I see here is that getting information on network issues isn’t the same as addressing them.  “Monitoring” is an invitation to an opex explosion if you can’t link it to an automated response.

Service automation is something dear to my heart because I’ve worked on it for a decade now (IPsphere, the TMF SDF, ExperiaSphere, NFV, CloudNFV) and come to understand how critical it is.  The foundation of lower opex is service automation.  The foundation of service automation is a service model that can be “read” and used as a baseline against which network behavior can be tested and to which that behavior can be made to conform.  We’re still kicking the tires in terms of what the best way to model a service might be.  We’re even earlier than that in the critical area of linking a model to telemetry in one dimension and to control in another.  That’s an area we’ve been working on in CloudNFV, and one where I think we might be able to make our largest contribution.  Blair, for now, seems too focused on the telemetry part.
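
Here’s a bare-bones sketch of the loop I’m describing: the service model as a readable baseline, telemetry compared against it, and deviations driving control rather than landing on a console.  The model contents and function names are my own shorthand, not CloudNFV’s actual interfaces:

```python
# Hypothetical service-automation loop: the service model is the baseline,
# telemetry is tested against it, and deviations drive automated control
# actions instead of piling up on an ops console.

service_model = {
    "vpn-customer-42": {"latency_ms": 25},   # modeled (promised) behavior
}

def read_telemetry(service):
    # Stand-in for the telemetry linkage; a real system would pull from
    # probes, device MIBs, or streaming counters.
    return {"latency_ms": 41}

def remediate(service, issue):
    # Stand-in for the control linkage -- reroute, scale out, redeploy.
    print(f"{service}: automated action taken for {issue}")

def reconcile(model):
    for service, baseline in model.items():
        observed = read_telemetry(service)
        if observed["latency_ms"] > baseline["latency_ms"]:
            remediate(service, "latency above modeled baseline")

reconcile(service_model)
```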

In terms of network infrastructure, I think the Blair theme is that there are things that directly drive traffic and thus would encourage deployment of raw capacity.  I agree, and I’d add to that the fact that as bandwidth becomes cheaper at the optical level, the value of aggregation to secure bandwidth efficiency at the electrical level reduces.  That’s particularly true when, as I’ve already noted, the higher layers tend to generate a lot of opex just keeping things organized and running.  SDN is a theme, IMHO, because of this factor.  If you can simplify the way that we translate transport (optical) into connectivity (IP) then you can reduce both capex and opex.  The question that’s yet to be answered is whether SDN processes as they’re currently defined can actually accomplish that because it’s not clear how much simplification they’d bring.

Network infrastructure is where NFV comes in, or should.  The Blair view is that NFV addresses the “sprawling number of proprietary hardware appliances”, which is certainly one impact.  In that sense, NFV is an attack on an avenue equipment vendors had hoped to exploit for additional profits.  As services move up the stack, vendors move from switches/routers to appliances—or so the hope would be.  But I think that NFV is really more than that.  It’s a kind of true cloud DevOps tool, something that can automate not only deployment but also that pesky management task of service automation that’s the key to opex reduction.

I’ve blogged before that opex savings are what have to justify NFV, and operators are now starting to agree with that point.  The challenge is that while COTS platforms are cheaper than custom appliances in a capex sense, the early indications are that they might well be more expensive in an opex sense, and unless the capex savings are huge and the opex cost differential is small, the result would be a net savings (at best) too small to justify much change.  I think that the success of NFV may well depend on how easily it can be made to accommodate (or, better yet, drive) a new management model.  The success of that will depend on whether we can define that new model in a way that accommodates where we are and where we need to be at the same time.
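
To see why the opex differential can dominate, here’s a toy calculation.  The numbers are mine and purely illustrative; they are not survey results:

```python
# Toy NFV business-case arithmetic with assumed (illustrative) numbers:
# even a healthy capex cut can be erased by a modest opex penalty.

capex_share = 0.30      # assumed: 30% of total cost of ownership is gear
opex_share = 1 - capex_share

capex_savings = 0.25    # assumed: NFV cuts equipment cost by 25%
opex_penalty = 0.12     # assumed: early NFV islands cost 12% more to run

net_savings = capex_share * capex_savings - opex_share * opex_penalty
print(f"Net savings: {net_savings * 100:+.1f}% of TCO")  # about -0.9%: a wash at best
```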

Blair’s picks for tech investment are largely smaller players, and that fits the theme I opened with.  Networking is in the throes of a major systemic change that will most challenge those who are most broadly committed to the space.  If you’re as wide as a barn, some of the pellets of a double-barreled broadside of change are bound to hit you somewhere.  But even narrow-niche players have their issues.  Strategic engagement with the buyer seems, in both carrier and enterprise networking, to be very hard to sustain with a narrow portfolio.  So the fact is that while all big players are challenged, all little players are narrow bets in an industry whose directions and values are still very uncertain.  For sure, we’re in for an interesting 2014.

Could the FCC’s VoIP Initiative Help Vendors?

I mentioned yesterday that the cable industry was regulated differently from telcos, and one of the differences is that cable is “capped” on maximum subscribers per provider.  Many would like to see that regulation lifted to support M&A more broadly.  In the telco space, there’s also an effort for regulatory change underway, this one linked to the transition of telephony to IP.

The new FCC Chairman (Wheeler) gave a speech on “The IP Transition: Starting Now” in mid-November.  On the surface, the issue may seem very simple; telephony has been based on time-division multiplexing technology that makes little or no sense in an age when the same consumer is being supplied with Internet access that could serve to carry voice at a very low marginal cost.  The problem is that the FCC has long taken the position that it regulates services and not technologies, and so even taking up “IP” rather than “voice” or “the Internet” is a stretch for the body.  That raises the question of where it might stretch to, and how it might impact the market.

Circuit-switching is different from packet-switching (which is what IP is) in a couple of ways significant in a regulatory sense.  One is that a circuit is a dedicated path between a provisioned pair of endpoints—one being a central-office switch and the other being your local access loop.  That means that there’s no ambiguity as to where a given call originates.  In packet switching, a path is determined ad hoc by the address of the endpoints, and a packet voice user can get a different address depending on when and how they connect.  It is more difficult to pinpoint the exact location of a call, and this is what creates some of the E911 issues.  When you decide to use Skype or Google Voice you are sternly told that you don’t have local exchange voice services and 911 service, and you can’t port a landline number to such a VoIP service because there’s a fear consumers might lose 911 without knowing it.

Another thing that’s different in packet voice is that the resources for plain old telephone service (POTS, as it’s known) are dedicated once the signaling phase of the call completes and the connection is made.  With packet voice, packets continually vie for attention in the network, and unless the network has QoS capability the traffic is passed best-efforts, which means call quality can be variable.  Many who have commented to the FCC on the IP transition don’t like that variability; they want their POTS.  But the big question here is whether the legislation requiring priority calling for first-responders in emergencies can be accommodated.

A more general challenge is that operators were declared to be monopolies in POTS and were forced (in the US by the Telecom Act of 1996, and by similar “privatization” regulations elsewhere) to unbundle the elements of their voice plant—a plant built under regulated monopoly protection.  Other companies now lease loops and build their own services on top, and these companies fear that a decision to eliminate POTS would eventually eliminate the unbundling regulations they depend on.

The FCC’s efforts will have to address the four issues of public safety, universal access, competition, and consumer protection (as one commissioner delineated them).  It’s likely that the FCC will take some decisive steps to modernize how regulation impacts the IP transition, but it’s also true that the FCC (like all federal commissions) is effectively a court of fact and not a legislative body.  It acts within the law, which means it can exercise discretion or drive change only where the law permits.  I think the FCC has the latitude to do what it thinks best here, but Congress can always step in, and we all know how that can mess things up!

If we assume that there is an FCC policy shift that drives change, the result would likely be an evolution toward the notion that IP access is the new POTS dial tone.  Universal service then means IP access, and it could be provided by copper, copper/fiber hybrids, FTTH, or 3G/4G depending on the situation and cost.  The Internet and voice services, and likely other services as well, become “true services” on top of IP access.  To provide for separation and QoS differences as needed by the services, we’d likely see more SDN principles deploy.  It’s not that SDN is the only way to support IP voice (obviously it’s not because we have it now and almost never use SDN for any part of it), but that a massive new QoS application would justify a massive shift in technology, which is essential to validate SDN on a large scale.
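
As a sketch of that “IP access is the new dial tone” structure (the service names, QoS classes, and numbers are illustrative assumptions, not anyone’s service plan):

```python
# Hypothetical sketch of "IP access is the new dial tone": one access
# pipe, delivered however economics dictate, with voice, Internet, and
# other offerings riding it as true services.  All values are assumed.

ip_access = {
    "delivery": "FTTH",  # could equally be copper, hybrid, or 3G/4G
    "services": {
        "voice":    {"qos_class": "priority",    "bandwidth": "100 kbps"},
        "internet": {"qos_class": "best-effort", "bandwidth": "50 Mbps"},
        "home-monitoring": {"qos_class": "assured", "bandwidth": "256 kbps"},
    },
}

for name, profile in ip_access["services"].items():
    print(f"{name}: {profile['qos_class']} at {profile['bandwidth']}")
```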

NFV could also be a beneficiary of a VoIP shift, though I do want to point out that cloud-hosted functionality for shared technology elements like those in VoIP doesn’t necessarily require any form of NFV to deploy.  What is likely to be true is that the notion of IP access carrying multiple services would encourage the evolution of more and more of those services, and hosting the functionality is the logical path to take.  Where that hosting is more single-user and dynamic in nature, NFV principles could be critical.

The normal process of creating regulatory change starts with a Notice of Inquiry, followed by a draft Order and a final Order.  The FCC launched the process last year, and Wheeler says the goal now is to have a draft Order ready to present to the Commissioners in January.  That could mean the final Order would be published next year.  It’s very possible it would be appealed (to the federal appeals courts and even the Supreme Court), but the courts generally defer to the FCC in technical matters as long as the Commission follows the letter of the law.  This might mean a regulatory stimulus behind the IP voice transition, and that means vendors need to be thinking about how it would impact them—and how it could be exploited.

Could Cable Operations Modernization Make the Industry an NFV Power?

“Cable” has always been seen as different from “telecom”.  The industry is regulated differently, its primary profit sources are different, it has very different infrastructure, and its OSS/BSS/NMS tools and strategies are also different.  Most insiders in the SDN world say that cable operators are less interested in SDN than telcos, and virtually everyone says they’re less interested in NFV.  Finally, there are signs that cable and telco are converging in some ways and diverging in others.

Everyone knows that “cable” used to mean “cable TV” because that was the original profit model for the providers, just as POTS voice was the original profit driver for the telcos.  The broadcast-tuned delivery infrastructure (CATV cable, from which the industry takes its name) proved a more cost-effective platform for broadband delivery than copper-loop DSL, and cable took an early lead in broadband.  Most industry sources would tell you that the cable “pass cost”, meaning the cost to run infrastructure past a potential subscriber, was about 60% that of the telcos, and the potential service revenue (because of TV) was much higher.  This made cable an early Wall Street darling.

Less so now.  Cable’s pass-cost advantage is diminishing as DSL technology improves, telcos turn to deeper fiber (including FTTH), and opex overwhelms capex in terms of contribution to total cost of the plant.  Insiders in the cable space admit that the industry has done much less than the telcos to create an automated plant, and they’ve fought a decades-long battle with a bad customer service image for (at least in part) this reason.

Things are changing on the revenue side too.  Programming networks charge TV providers more for content, the cost of original programming is rising, and VoD in any form is creating a lot of confusion on the advertising front.  Internet video generates between 2% and 4% as much per-view revenue as channelized TV does, and yet the combination of increased use of mobile devices and the general not-at-home lifestyle of a big chunk of the video market has shifted focus to Internet or on-demand delivery.

I’ve worked with the cable industry for a couple of decades, and I think it’s fair to say that 1) virtually every player in the industry is looking at how to change its cost and profit models to best fit the current market, and 2) virtually none has gotten far with that transformation so far.

The biggest issue cable faces is mobility.  It’s not just that mobile services are almost exclusively the province of telcos; rather, with the WiFi capability of virtually every smartphone and tablet, cable actually has a way of getting into the mobile space without becoming a mobile operator.  They seem somewhat paralyzed by that choice, and that’s hurting them.  Telcos have just as good a shot at becoming WiFi providers as cable companies do, and most of the telcos have been exploring hotspot support for 3G/4G offload for ten years or more.  The cable guys are just getting started.

Mobility is an area that hits cable right in its most vulnerable spot, which is operations.  I can remember days not so long ago when cable techs would tell horror stories about going to remote vaults to find a spiders’ nest of cables and systems and no record of how things were connected or even whether they were working.  Things have improved recently, but in our survey of network operators almost 90% of telcos said they had “satisfactory” or “very satisfactory” operations systems in place, while only 28% of cable companies made that claim.  Less than 4% of telcos rated their operations software as “unsatisfactory”; almost 40% of cable companies did.

If you don’t like your operations tools, you’d likely be a candidate for a next-gen OSS/BSS system, right?  Well, the cable companies do think that they need those tools but they admit that they aren’t pushing the button on next-gen operations quite yet.  One reason they give is that OSS/BSS tools and even standards activities are biased toward the traditional telco space, in no small part because the focus of telco investment today is 3G/4G mobile service, which few cable operators are directly involved with.  Another reason is that cable companies lack the technology-optimization culture that has helped the telcos along.  Remember, telcos were worrying about what the optimum structure of a lineman’s tool belt was as cable guys were trying to get their contracted installer resources to actually show up on a call.

My view is that the NFV stuff will be the watershed for cable and operations.  CableLabs, the R&D arm of the industry, is a member of the ETSI ISG but the big operators are not directly represented there.  Some in the industry believe that they’ve ceded too much to CableLabs and are now reluctant to make operations and technology a differentiator because it is a shared activity.  One big cable operator said, “If you want a signpost that says cable is taking NFV seriously, look for one of the big players to join the ISG directly.”  That’s not happened yet.

It likely will happen in 2014, in my view.  The push by Google, Amazon, and Apple into what’s effectively VoD creates a major challenge for cable operators.  If they don’t have any real network assets of their own to address the mobile user, they can’t really differentiate themselves from this group except by offering things like sports programming on mobile devices.  The satellite guys have been attacking that particular option for some time, which means cable is at risk for being squeezed.  Cable is also late in supporting home control and other cloud-based consumer applications.  And what makes all of this NFV material?  Well, you need modern operations to be competitive in the space.

NFV could give cable an opportunity to build and deploy a new network model that pulls through a new operations model.  Being underinvested in OSS/BSS is a liability for them today, but it becomes an asset if a massive change in OSS/BSS strategy opens up.  NFV might open one up, and so cable might be on the cusp of recognizing it should be an NFV leader.

SDN and NFV Benefits: It’s a Matter of Scope

Stories about network transformation tend to focus on capex, despite the fact that network operators have consistently indicated that opex is likely more important.  One reason for this is that opex is one of those giant fuzzball areas where you can make almost any claim and get the numbers to work in your favor.  Another is that the whole OSS/BSS space tends to be dull, in no small part because every time you try to talk about it, the example you get is “billing”.

Carol Wilson of Light Reading did a story on CloudNFV yesterday that brings home some of the realities of operations and the challenges of next-gen networking overall.  The focus of the piece is how the concept of CloudNFV evolved as the project matured, and in particular how the project found it necessary to expand its scope to cover enough of the network problem set to be able to present some true benefits.  My role in CloudNFV is known and I’m not going to reprise it here, but I do want to make some points about that critical question of “scope”.

Let’s say that I’m a builder of nice upscale homes, maybe four or five thousand square feet.  These homes contain all manner of carpentry, electrical work, plumbing, flooring, painting…you get the picture.  So now let’s say that somebody invents a new way of doing bathroom floors that claims to reduce floor cost by 25%.  That kind of thing might induce me as a builder to run out and commit to the new approach.

The problem is that a bathroom floor isn’t the product here; a home is.  I have to explore at minimum two critical questions.  First, does that new floor paradigm impact the cost of the surrounding/supporting elements?  Suppose the floor costs 25% less but the cost of running plumbing through it doubles.  Second, is this paradigm of flooring applicable to a larger part of the house?  I can’t make a decision on my floor atomically; I need to think in broader scope.

Remember the “first telephone problem”?  It goes, “Phones will never be successful because nobody will ever buy the first one—there’d be nobody to call.”  NFV and SDN are not going to sweep into networking overnight and displace legacy technology, for the very good reason that this legacy stuff has nearly five years of residual depreciation to be accounted for.  We’ll have pockets of new stuff embedded in the cotton ball of legacy networking for years to come.  That means that the business cases for SDN and NFV will have to be met inside an operations framework that’s been established by legacy gear for an almost-epochal period of time.

The “First SDN” or “First NFV” has to make a business case, but it has to make it when there’s just a little island here and there.  If opex savings are the goal, how do these islands pay back?  The majority of the network won’t be “new”, and the new stuff will, if anything, present higher costs because it’s different.  So what does this mean?  It means that if we’re going to shift the justification for SDN or NFV from capex to opex, as nearly all operators say we must, we need to understand how opex benefits can actually be generated.  In our just-completed fall survey, all but one Tier One said opex improvements were the benefits that would drive both SDN and NFV forward.  But even if we know how SDN or NFV can achieve these benefits inside their little initial enclaves, how do those benefits manifest in the network at large?

There are two pieces to NFV, conceptually.  One is the issue of creating network features from virtual functions—what we could call “incremental NFV”.  This is what the ETSI ISG is working on.  The other issue is creating a management framework that can not only sustain current opex costs/practices as increments of NFV deploy here and there, but actually create a new paradigm for management overall—a paradigm that accommodates NFV islands and then rewards operators for deploying them.

It should be clear to everyone that if we were to define NFV management by simply creating virtual versions of every current device, creating virtual MIBs to correspond with real MIBs, and then linking up to the same management systems we had all along, there would be no change in operations practices and no change in opex.  So why do we continually hear about that approach?  It can never deliver meaningful opex savings.

The TMF may have the critical elements here.  The GB922 specification, known as the “SID”, has proved (in CloudNFV) to be a highly useful framework for modeling customer services and service elements.  The GB942 specification, sometimes called the “NGOSS Contract”, defines how a data model of a service that includes resource commitments can then become a conduit for channeling management events to the right resource lifecycle processes.  The challenge is that neither of these specs is used this way today.  I think they have potential far beyond anything we’ve tried to exploit so far, and I hope NFV (CloudNFV and every other implementation) can exploit that potential.
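
Here’s a thumbnail of how I read the GB942 “contract as conduit” idea, expressed in code.  The data structure and process names are my shorthand for the concept, not the TMF’s SID schema or any real implementation:

```python
# Hypothetical shorthand for the GB942/NGOSS-Contract pattern: the service
# data model binds each service element's events to the lifecycle process
# that should handle them, so events steer to automation, not to a console.

contract = {
    "service": "business-vpn",
    "elements": {
        "access-link": {"on_fault": "reroute_access"},
        "vFirewall":   {"on_fault": "redeploy_vnf"},
    },
}

def handle_event(contract, element, event):
    """Use the service model as the conduit: look up which lifecycle
    process owns this event for this element, then dispatch it."""
    process = contract["elements"][element]["on_" + event]
    return f"dispatching {process} for {element}"

print(handle_event(contract, "vFirewall", "fault"))
print(handle_event(contract, "access-link", "fault"))
```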

I think that the right answer to operationalizing our future network, including a network that’s rich in SDN and NFV capabilities, is going to be based on the principles of GB922/942.  I also think that as we adopt these principles, both the NFV ISG and the TMF are going to have to make some accommodations to the principle of management unity.  The most efficient operations practices are those that work for everything.  Every exception is a cost center.  Only effective mechanisms for abstraction can automate unified management over evolving infrastructure.  To me, the most critical lesson that CloudNFV and other NFV implementations can teach us is how we model services so that we achieve efficient operations without creating resource-specific service definitions.  We’re not addressing that now.  We need to be.