Juniper’s Numbers Shed Light on the Value-versus-Optics-Driven Revolution Choices

Juniper had their earnings call yesterday, and the refrain was hardly unexpected given general market trends and Juniper’s recent earnings calls.  CEO Shaygan Kheradpir started by saying that the company had made significant progress and that the disappointing results were due to “industry headwinds”.  Those may be different words from the ones the last Juniper CEO used, but the sentiment is the same.

Like IBM, whose focus on near-term shareholder value above all else was the subject of a recent editorial, Juniper is spending money to buy back shares and raise the dividend, both of which have the effect of boosting or at least propping up the stock price.  However, the measures aren’t resulting in unqualified success in sustaining share price, and they’re killing Juniper’s long-term prospects.

Of all the companies in the networking industry, Juniper should be the one facing the coming transformation with the most confidence.  Since its founding about 20 years ago, Juniper has had a reputation for technical excellence, and over those years it has done some of the most innovative and forward-looking things any vendor ever did.  Juniper’s IPsphere initiative, about a decade ago, was in fact a harbinger of the SDN and NFV changes, and it addressed aspects that are critically important but have yet to be tackled by either of those revolutions.  Juniper predicted cloud computing, and it actually had a product that separated control-plane processing and offloaded it onto a server blade.  It even developed a chip set that could have blurred the line between traditional network hardware and software.

Juniper’s problems arose, IMHO, in part because of a management change—Kevin Johnson and the “Microsoft club” took power—and in part because of the crash of 2008 that put pressure on spending.  Nearly all the critical insights that Juniper had were wasted for lack of execution or improper positioning.  The company’s M&A launched opportunities in both mobile and content delivery, and in both cases the efforts came to nothing.  This, at the very time when both technologies were on the cusp of revolution.

IPsphere and Juniper’s chip and product architectures should have prepared them for SDN and NFV, but the company waffled, even to the extent of positioning its NFV announcements as SDN instead.  Their new CEO, brought in (so it’s said) by a board under pressure from activist investors, decided to “build shareholder value” through financial and accounting means, and with that decision, funds that could easily have established Juniper as a leader in SDN and NFV went to buying back shares instead.  As I said, it hasn’t done all that much for the share price.  Juniper’s stock is trading about where it did at the launch of NFV in 2012, and where it did on the one- and two-year anniversaries of that event.

All of the companies in the network equipment space have booted SDN/NFV to at least some degree, IMHO, though Juniper may have the distinction of selling itself shorter than any of the rest.  There’s a valuable lesson to learn from them, though, especially when you consider them in contrast to players like Ciena and Infinera.  If you are selling gear above Layer 1 of the network, then you are in the intelligent network services business whether you acknowledge it or not.  Carriers are shifting their strategies for basic connectivity/transport down the OSI stack, in favor of capacity creation and away from the layers that have traditionally supported bandwidth management.  Understanding why is the key to fixing this dilemma, and fixing it is essential if you supply switches or routers.

The electrical layer of a packet network is really about aggregation and multiplexing.  Since the seminal RAND Corporation work on packet switching in 1964, networking has advanced in features and performance by dividing user flows into packets and multiplexing those packets onto a web of physical (soon optical) trunks.  This provided full connectivity without dedicated paths, something that was essential when cost per bit was so high.  But optical prices in cost-per-bit terms have fallen sharply while the cost of managing electrical-layer networks has risen sharply.  In more and more cases it makes sense to dumb down the electrical layer and simply throw bits at the problem.  That’s one of the fundamental forces in the networking space today.  We’re learning to do with agile optics what we used to do with core routers.  Soon we’ll move that understanding further out into the metro networks, where most of the investment is anyway.  When that happens, the impact on the electrical-layer gadgets will be profound.

NFV and SDN have been portrayed as further steps in the diminution of the switching/routing layers.  That view is likely why Juniper and others didn’t jump into either of the two as aggressively as they should have.  But the perception that the only thing you can do with new technology is cut costs is really a bias of CEOs and CFOs who, facing what they see as a steady long-term revenue decline, think the only way to raise their own profits is to reduce headcount.  NFV, and to a lesser degree SDN, are ways of linking new services, higher-layer and OTT-like services, to the network.  They are what network operators need to have, and what enterprises need to have as well.  But you’re not going to get them to fulfill this truly valuable and revolutionary mission by handing the future of SDN and NFV over to bean-counters.  You need revolutionaries.

Everyone has had some of these people, including or even especially Juniper.  I suspect that at this point, with the CEO’s strategy for the company’s future made clear, most of Juniper’s have moved on to greener (read “nearly any other”) pastures.  In the good old days these people would have started companies, but VCs these days only want quick social-media flips, not incipient revolutions.

Ten years ago, Juniper decided not to get into optical.  Now they’ve decided, at least implicitly, not to get into the SDN/NFV revolution.  To be fair, as I said, so have most of their competitors.  But there is going to be a network revolution over the balance of this decade.  It will either transform Layers 2 and 3 into the service value network or transform those layers into a simple on-ramp to agile optics.  You either build valuable bits or you just push bits around.  Which mission do you want, dear vendor?  Juniper, sadly, has made its choice.


VCE Wars and Market Wars

The relationship between Cisco, VMware, and EMC has been a great source of sound bites in the last year or so.  When EMC and Cisco partnered to create VCE, a kind of orphan acronym that originally stood for “Virtual Computing Environment” but which the VCE “About” page fails to define, there was speculation that the union could be revolutionary.  Well, the union is pretty much off with the announcement that EMC will buy most of Cisco’s stake and that VCE will join EMC in the same status as VMware.  The revolution?  Let’s see.

The basic notion of VCE is that cloud infrastructure and virtualization are first and foremost the big thing in IT in general and the data center in particular.  I’ve noted that in my surveys, it’s clear that for enterprises today data center planning and networking are driving the bus for all IT and network spending, and that companies with influence in the data center influence everything.  Even product areas they don’t sell into, in fact.

The second notion behind VCE is that the cloud is a vast whirling assembly of stuff whose integration can easily get totally out of control.  Building a cloud or adding to one could easily become a task so formidable that planners would break into sweats and take the day off.  The VCE solution is packaged infrastructure—Vblock Systems.  Vblocks have models just like the mainframes of old, but they contain not just compute but also storage and networking elements.  They are hardware platforms out of the box, things you can align to your business needs and augment as required.

I was never sure how sensible this approach was, frankly.  It’s not that the basic notions behind the venture are false, but that the project seemed doomed to fall short of providing what was really needed, in part because of the fundamental competitive rift I opened this blog with.  The issue isn’t simply that VMware, with its acquisition of Nicira (a virtual network play), is a Cisco competitor.  The problem is that VMware is driving a notion of networking and application-building that Cisco thinks threatens its own efforts to grow its server business and salvage as much traditional routing/switching as possible.  Remember, whoever rules the data center rules the budget.

At any rate, the Cisco/VMware dynamic makes it hard to build a truly one-stop-shop “block” solution for virtualization and the cloud, because the real solution would have to include all the platform software.  Go to the VCE website and look at the product descriptions and you find more mentions of Microsoft and SAP and Oracle than of VMware.  While it’s not impossible to create a block/platform strategy that’s missing the software platform ingredients, it’s darn sure sub-optimal.  It also raises the question “is this just another hardware player?”

IBM, no slouch at reinventing itself in times of turmoil, has determined that the commodity (x86) server business is something you can’t be in and be profitable.  Frankly, I think they decided you can’t be in any hardware sector and be profitable, but they can’t strand their proprietary-platform users.  So just what is it that VCE is supposed to do?  Replace IBM as a server company?  Combat HP at a time when turmoil in the market and in HP’s business strategy weakens it?  Darn if I know, but VCE, starting as it did as the classic horse designed by committee, seemed from the first to have little going for it.  I think the current situation proves it didn’t.

So what now?  VCE, as an EMC satellite, can definitely create a software-centric vision of a virtualization/cloud platform, but what’s different about that without specific compute hardware?  Will they continue to incorporate Cisco UCS (forget what they say; this is a period of maximum defensive positioning for everyone)?  Will Intel, another minority investor in VCE, step up and field its own commercial server line (risking alienating its current server customers)?  And in any event, how would a software-centric positioning of a Vblock look different from the positioning of VMware’s offerings?

One potential benefit for EMC of a satellite VCE would be as a framework for absorbing (rather than spinning out) some of its strategic elements, like VMware.  VMware’s own growth is becoming more problematic as the market matures, and storage is still something that fits into an architecture rather than forms one as far as enterprise buyers are concerned.  It would certainly help EMC to be able to tell a cohesive story, to frame a combination of things as strategic when any individual piece of that pie was going to look pedestrian.  But EMC would have to do that positioning, or have VMware do it, and neither of them has been very good at singing and dancing.

That may be the foundational reason behind the Cisco/EMC split.  Cisco, marketer extraordinaire, may have concluded that either 1) VCE would fall on its face, embarrassing Cisco and leaving a hole in its positioning, or 2) Cisco’s own efforts would make VCE into a powerhouse that eventually ended up combined with VMware to pose a greater threat than before.

All of this of course ignores the two foundation points; data center control and increased platform complexity.  Neither of these is going away, which means that every vendor with any product in networking or IT is eventually going to have to address them.  Do software-centric players (which means IBM, Microsoft, Oracle, VMware) build up a notion of the platform of the future as software that envelopes and commoditizes servers further?  Do hardware players try to bundle software with hardware to undermine the influence of companies who want to provide software only as an independent overlay?  Do network vendors embrace server/software integration or try to stand against it and position against the current trend?

Probably all of the above, but as you likely guessed, my view is that none of these is the right approach.  If complexity is the problem, then something that can directly manage complexity is the solution.  I don’t think the NFV ISG undertook their task optimally, but that doesn’t mean they didn’t contribute some critical insights.  The notion of MANO (management/orchestration) may be the key to the kingdom here, no matter what kingdom you’re aiming for.  Automation of the relationship between application components, resources, and users could be the glue that binds all we have today into a new software architecture.  That would be a prize for any of the vendor groups I’ve named, enough of one to make VCE and EMC a true strategic player if they seize it, or a true strategic loser if they don’t.


VMware, Virtualization, the Cloud, and Application Evolution

VMware reported its quarter, and while the company beat expectations overall, the report still raises several questions and doesn’t answer some of the bigger holdover ones.  I’ve been talking about the “Novell effect” in prior blogs, and it’s obvious that VMware faces the risk of simply saturating its market.  While there are exits from that risk point, it’s not clear either that they’d sustain growth or that VMware is capable of driving the good outcomes.

Over the last couple of years, VMware’s growth has come increasingly as a result of large deals with enterprises and governments—what the company and the Street call “ELAs” or enterprise license agreements.  The shift toward revenue derived from megadeals has made VMware’s revenue more lumpy, which means that a small shift in timing can have a big impact on the quarter.  However, you also have to consider the old adage that the problem with selling to the Fortune 500 is that there are only 500 of them.  Targeting big companies inevitably results in your running out of buyers, and so the Street is always looking for signs of that saturation softness.  The ELA lumps can look like those signs, so everyone’s skittish.

The company doesn’t do itself any favors.  The issues associated with ELA lumps must have been known but were only just articulated, making it seem as though management was holding back or that the current story is just “spin” they came up with.  On the flip side, though, the Street doesn’t understand the real issues and so is looking for the wrong problems downstream.

Virtualization is a tool to help companies utilize hardware better, and it’s valuable because the power of new servers continues to grow while the needs of applications are growing more slowly.  Historically, companies tended to run each application on its own dedicated box because a server was needed per application, but as server power grew there was a desire to “consolidate” servers.  The important points here are 1) that virtualization is simply a mechanism for sharing servers among applications that weren’t designed to share nicely, and 2) that consolidation eventually has to run its course.

VMware, like other companies (Novell comes to mind), has addressed the inevitable by expanding its scope to include networking, storage, and so forth.  Novell added directory management and other things to file/printer sharing; same principle.  The problem, of course, is that the more complicated your offering is, the longer your selling cycle is and the more difficult it is to prove to buyers that the incremental gains from upgrading are worth the cost.  The longer and more consultative the sales cycle, the more revenue per deal is needed to justify the effort, and eventually you get mega-ELAs and lumps.

The cloud, which the Street is perpetually worried about, is less a risk to VMware than a potential path of escape.  The notion that the public cloud would replace or retard virtualization growth isn’t complete myth, but it’s certainly exaggerated.  The megadeals that VMware now does are with companies whose internal economies of scale are simply too good to make public cloud services a slam dunk financially.  Yes, cloud consideration could further complicate sales cycles, but it’s not a killer.  Private cloud, meaning the addition of application-agility tools, could help VMware by adding a feature that most buyers think is valuable.  Yet VMware is rarely seen as a cloud leader, in no small part because it has avoided being a big public cloud player.

On the call, VMware’s CEO Pat Gelsinger talked about “fluid IT”, which is not an unreasonable way of describing where we’re headed or what’s needed in the way of server platform tools.  However, what he’s describing is, at the minimum, an argument for cloudbursting and, at the other extreme, a world where applications are built to be more directly coupled to workers’ activity and thus more agile.  In either case, you can argue that this is a story about the cloud, DevOps and MANO, and eventually a whole new collaborative/empowerment model for workers.  That could be a nice story.

But VMware doesn’t tell it.  They have treated the cloud defensively almost from the first, and have allowed open-source tools like OpenStack to steal the thunder of the cloud in both the public cloud space and among cloud planners in those big enterprises VMware covets.  They have been laggards in DevOps, simply supporting it and not trying to drive it, and they’ve been almost non sequiturs in NFV, an activity that could well be producing (perhaps by accident) the orchestration model for applications in the future.

Any multi-tasking operating system lets applications share hardware.  Linux containers offer enough isolation of applications to suffice in single-tenant clouds like the private clouds that VMware users are contemplating.  You don’t need platform uniformity to cloudburst if you’ve designed everything right.  There is no natural reason for rigid hardware virtualization to keep expanding into the future unless you believe that server-efficiency issues justify a band-aid forever and will never drive changes to application design that render those issues moot.

The Street is right that VMware faces major risks, and right to think that management is groping more than driving the bus here.  They’re wrong thinking that public cloud is the threat; public cloud is yet another mechanism to address utilization issues and challenges for smaller companies in sustaining required IT skills in-house.  What VMware has to do isn’t defend itself against cloud, but use cloud evolution to expand its own opportunity.  And even there, it will have to look increasingly to the tools that make it possible to build and deploy agile apps and not to the simple platforms that they run on.

Nobody thought that LAN operating systems like Novell’s NetWare could ever run out of steam, and yet it should have been clear from the first that if networked-resource features like file and printer sharing were valuable to users, OS providers would simply include them.  That’s the risk now for VMware; they could become the lemonade stand sitting at the roadside where a new mega-restaurant is being built.  Being a one-stop shop can’t become being a one-product shop.


Can Apple and Verizon Push Tech into a New Age?

Today we had the interesting combination of Apple and Verizon quarterly reports, and it’s worth looking at the two in synchrony because of the (obvious, we hope) linkage between the service provider space and the premier provider of consumer mobile technology.  There is in fact likely a bit more synchrony than you’d think.

Verizon slightly beat on revenue and slightly missed on earnings, but the most interesting contrast was between wireless services (up 7%) and wireline (down 0.8%).  It’s clear that for Verizon as for most of the big telcos, it’s wireless that’s making the money.  Verizon is also talking about its capex focus on capacity augmentation for 4G LTE, rather than about how much more it’s spending on FiOS or wireline in general.

For about a decade, carriers have found revenue per bit higher in the wireless space for the simple reason that wireless users pay more for broadband bits than wireline users do.  As long as that’s true, money is going to flow more into wireless, but it’s also clear when you look at global trends that all of the factors that have driven down revenue per bit on the wireline side are already pushing at wireless.  Unlimited usage plans, even with data metering to prevent users from running amok, can only do for wireless what they did for wireline years ago—commoditize bits.

In some regions, notably the EU, we’re also seeing regulatory factors pressure wireless.  Roaming charges have been a way for wireless carriers to make their own services more sticky, but the EU has been at war with high roaming rates, so that factor is both putting revenue at risk and creating competitive churn at a higher rate.  You don’t see as much of that in the US, of course, but were there to be a major shift in neutrality policy that enveloped wireless services, there might be a similar regulatory-driven impact here.

The churn issue is where Apple comes in, I think.  What all wireless operators want is for their customers to renew contracts by habit, but in most cases the renewals are tied to handset upgrades.  Apple users are the most likely to upgrade their phones, and so Verizon and others have tended to focus their new plans on Apple.  Verizon rival Sprint, for example, has a new plan that offers low-cost unlimited usage for the iPhone 6 only.  Apple is essentially building the prizes in the Cracker Jack boxes, and so of course it’s riding the wave of carrier churn management.

Underneath the current situation are a couple of complications for both companies, and some unique ones for each.  Top of the joint issues is what I’ve called the “Novell problem”, which means that users can’t be made to upgrade constantly unless you can radically grow utility with each upgrade.  That, we know from other experiences in tech, isn’t likely to be possible.  However, churn-driven upgrade policies could be curtailed even before users succumb to feature boredom if competitors like T-Mobile and Sprint stop putting the pressure on the major providers by promising special deals to users who upgrade their phones.

Most operators are already reporting ARPU growth has plateaued or even reversed, so only increases in wireless TAM or market share will help drive up revenues.  Clearly those sources aren’t going to contribute to radical shifts at this point.  LTE build-out is nearing maturity too, which means that capex augmentation in wireless will be less likely.  Wireless is becoming less a growth business, less a driver of infrastructure change, for the simple reason that it’s just another bit-based service and will surely be impacted by the same forces that have already hit wireline.  To my mind this tamps down speculation that Verizon might spin off its FiOS business, which I’ll get to in a minute.

Apple’s challenge, then, is what happens when one of the forces driving handset upgrades begins to lose steam.  It’s unlikely other forces will emerge, and so over time iPhone sales will taper.  Apple’s tablet numbers were a disappointment, and it’s clear that the Mac isn’t going to drive the company to double-digit stock price appreciation either.  That makes Apple dependent on new markets, such as the wearables its watch offering represents.  The fact that Apple indicated it would not break out watch revenues once the product becomes available suggests it’s worried that a lack of hockey-stick growth there will sour investors.

Verizon’s challenge is similar.  Its major revenue growth comes from the consumer side; revenues from non-consumer sources were off.  All the forces we’ve talked about make it unlikely that consumer revenues will grow forever; wireless market shares will plateau and FiOS penetration has slowed as well since Verizon isn’t entering new areas.  It’s this force that is putting pressure on capex.  If you can’t raise revenues every year to justify stock price increases, you have to cut costs.  While network equipment vendors don’t like this equation for their customers, they apply it relentlessly in their own businesses.

What I think is indicated here is the exploration of a more valuable relationship between mobile devices and mobile services.  This has to go beyond the simple substitution of payment by iPhone for payment by credit card, too.  The question is what the service might be, and I think it’s likely that it would involve combining user presence/context in the broadest sense with voice-based personal assistant technology like Siri or Cortana.  The challenge in this transition will be daunting, though, and not for technical reasons.

If you combined personal assistant technology (speech recognition) with improved AI-aided contextual search, you could generate something that could approach the movie “Her” in sophistication and utility.  However, this is a marriage of cloud and handset, and that means there’s likely to be a war to determine where the most valuable elements of the relationship would be hosted.  Apple would obviously like the differentiation to be in the handset, but operators like Verizon would rather it be in the cloud.  Further, hosting complex analysis in the handset would mean pumping more data out there, which in an unlimited-usage mobile world would raise carrier costs.

Apple was up slightly, pre-market.  Verizon was down slightly.  No big movements, because there’s no big news.  The question for investors, and for those of us who watch telecom overall, is whether the forces that have driven the two in semi-synchrony will now separate them, and whether that separation will then contribute to tech under-realizing its potential even further.


IBM and the Great Tech Decline

According to one of the financial pundits, “IBM needs revenue growth.”  Forgive me, but that’s not going to win any profundity ribbons, gang.  Every public company needs revenue growth unless it wants its stock to decline at some point.  Recognizing that is probably less useful than recognizing a blue sky on a clear day.  What would be useful would be for companies who need revenue growth to look at the implications of that need.

Buyers will spend more on IT if they can obtain a benefit from the spending, such that the excess of the benefit over the cost exceeds their ROI target.  Probably every CFO on the planet and most first-year students of accounting or economics would know that.  So it follows that if you want to sell more stuff so your revenue increases, you should provide your buyers a greater benefit.

The “benefit” of IT is in almost all cases associated with improved productivity, which means that it makes people work more efficiently.  I’ve personally watched our industry progress from punched-card processing that analyzed transactions after the fact, to real-time systems, and now to point-of-activity empowerment via mobile technology.  The problem is that we tend to think of these steps along the path as being driven by technology.  We had mainframes, then minicomputers, then PCs, then smartphones.  We forget that the reason these technologies succeeded is that they filled a need, provided a benefit.

For much of the summer and fall, the Street has been whining about the lower levels of carrier spending.  Why are operators dragging their feet on capital spending?  It must be some complicated financial cycle, perhaps the juxtaposition of several refresh cycles that all happened to be on the wane at the same time, right?  Well, why couldn’t it be the fact that operators are earning less revenue per bit every year, and are thus not inclined to increase spending per bit?  Again, it comes back to benefits.

For IBM, though, the big revenue shortfall they just reported is very bad news.  Of all the tech companies we’ve ever had, IBM has been the very best at navigating the twists and turns of the relationship between IT and benefits.  I’ve blogged at past IBM earnings announcements that IBM seems to have lost its way, and I think now it’s clear that they have.  What is less clear is whether IBM really dropped the ball—the problem is IBM’s alone—or whether IBM’s challenges are simply a litmus test for the ills of the industry.

Tech in the last 20 years has been revolutionized by populism in a business sense.  We have seen an enormous expansion in the market for computers and for networking because we’ve extended computers and networking to the masses.  But where do you go after you’ve done a mass market?  An intergalactic market?  That’s Cisco’s challenge with its “Internet of Everything” story; it’s hard to see what the next tag line would be.  And this escalation is foolish for a reason that isn’t simply semantic: increasing revenues by increasing total addressable market will eventually run out of steam because you end up addressing the total market.  At that point, you have to increase ARPU.

In enterprise IT and networking, which is where IBM is, increasing “ARPU” means harnessing more productivity gains so you can justify higher IT investment.  The problem IBM has had for the last five years is that, as my own surveys have consistently shown, it lost strategic influence with buyers by not being on the leading edge of IT justification.  There was a time when IBM’s views commanded senior management attention automatically; those times have passed.  IBM needs to get back in the lead by showing it has new, credible, and different strategies for applying IT to productivity.  They started, belatedly, down a track to work with Apple to develop mobile symbiosis with IT for point-of-activity empowerment, but all they’ve done so far is announce the deal.  They need to show results.

For networking and IT in the broader sense, it’s a consumer market.  The problem with that consumerism is that growing TAM can be accomplished only by lowering costs, which means that there’s a downward pressure on ARPU at the same time there’s a hope of growth in the total market base.  We know from enterprise IT that you can saturate a market—there are only so many “enterprises” possible.  It should be obvious that it takes years to build a new consumer, and that at some point you can’t look to population growth as your strategy for revenue gains.

IBM in IT and Cisco in networking have both been sticking their heads in the sand.  IBM has wanted to believe that companies would figure out their own paths toward higher per-worker investment; they have not done that.  Cisco has wanted to believe that consumer pressure to do more online would force the network operators to spend more on switching and routing even if their ROI on those investments falls sharply.  We are seeing, in both networking and in IT, the signal that this sort of silliness just can’t be tolerated for much longer.  The market will impose logic on firms who refuse to impose it on themselves.

But what I find the most astonishing about all of this is the response toward SDN and NFV.  Networking is an enormous market.  IT is an enormous market.  SDN and NFV create a kind of shunt between the two, to allow an IT company to become a networking giant or vice versa.  Talk about TAM gains; you could potentially increase the size of either the networking or IT markets by stealing from the other.  Cisco could become “the new IBM”.  IBM could become the “next Cisco”.  With a stroke, the problems of one space could be solved by robbing from the other.  So why is nobody doing this?

You can see my concern here, I’m sure.  Have we gotten to the point where we want easy answers and social-media flips instead of incumbent insight or real investment in a revolutionary technology?  For all the hype about the relentless progress of SDN and NFV, the fact is that neither is progressing relentlessly.  We can see, in both areas, the need of network buyers to do better.  Does “better” mean cutting costs by 30% or 40%?  If that’s the case, then winning for the Ciscos of the world will look a lot like losing.  Same with IT; IBM can introduce cloud computing and new software paradigms, but unless those introductions bring new benefits, they will succeed only if they lower costs to the buyer.  That means lowering IT spending.

IBM is telling us that we have a systemic problem of benefits here.  We’d better listen while there’s time.


Irresistible Forces, Immovable Objects, SDN, and NFV

SDN and NFV are revolutions in a sense, at least.  They both offer a radical new way of looking at traditional network infrastructure, and that means that were they to deploy to their full potential they could change not only how we build networks, but what vendors make money on the deployments.  A lot of really good, and bad, things can happen to people in the networking business.

But what will happen may be a lot more pedestrian.  The future of SDN and NFV is a classic example of the “irresistible force and immovable object” paradox.  It made for a nice song in the days of my youth (“Something’s Got to Give” for those interested, a theme I’m sure you know I’ll be returning to here).  It’s likely to make for a messy five years or so, and thus it’s important we understand what the “forces” and “immovable objects” are in my analogy.

Let’s start with the force of declining revenue per bit.  Networks produce bits as their product.  Yes, there can be higher layers that deliver experiences, but the network itself is a bit-producing machine.  Network equipment moves bits around, and as the price of those bits declines in marginal terms (price per bit), there is corresponding pressure on equipment vendors to provide the bit-producing gadgets more cheaply.  SDN and NFV are not transforming carrier capex; what’s transforming it is financial pressure.  If SDN and NFV don’t present any more favorable cost points, either something that does will be explored or investment in bits will eventually slow and stall.

There’s a financial immovable object, too, which is the residual value of current infrastructure.  Operators have hundreds of billions of dollars of sunk costs, assets they’ve purchased with the expectation of a long useful life, so they’re written down over a period of years.  If an asset with two years remaining in a five-year depreciation cycle is replaced, you bear 40% of the original cost of that asset (assuming straight-line depreciation) when you toss it.  That’s over and above what the new gadget costs.  That means that any new approach that comes along will likely have to be gradually phased into the network to avoid big write-downs.  That, in turn, creates pressure on buyers to stay the course, and in particular to manage their expected benefits and costs very carefully to avoid having to go through another replacement cycle a year or so down the road.
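
A minimal sketch of that write-down arithmetic, with straight-line depreciation assumed and a purely illustrative purchase price:

```python
# Rough sketch of the write-down arithmetic described above (straight-line
# depreciation assumed; the figures are illustrative, not operator data).

def residual_fraction(years_remaining: float, depreciation_period: float) -> float:
    """Fraction of the original purchase price still on the books."""
    return max(0.0, years_remaining / depreciation_period)

# An asset with two years left in a five-year cycle still carries 40% of its cost.
asset_cost = 1_000_000          # illustrative purchase price
write_down = asset_cost * residual_fraction(2, 5)
print(f"Write-down on early replacement: ${write_down:,.0f}")   # $400,000
```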

In the network operator space, over the last ten years, we’ve experienced rapid declines in revenue per bit.  Wireline broadband was impacted first, but you can see that as users shift more to mobile services we’re also seeing declines in return on infrastructure for mobile.  It’s these declines that are putting pressure on the vendors, not onrushing SDN or NFV.  Some operators are fleeing their home markets to invest in services on other continents because they can earn a better return there.  We got rid of regulated monopolies in public networking in the ‘90s, and so the operators of today have as much a right to earn a profit and shareholder return as the vendors do.  That may turn them into businesses less dependent on bit-pushing.

Obviously we get now to the “something’s got to give” phase.  If you define return on infrastructure as your total service revenue divided by your total cost of network ownership, you have to raise the former, lower the latter, or both.  What could be done?

Capital cost management is an obvious approach, and in fact it’s the one that drove the founding of the NFV Industry Specification Group about two years ago.  The notion was simple; shift from custom devices to hosted software-based features on commodity servers and save money.  The problem is that operators quickly realized that the gains likely from this source were less than 25%.  One operator’s response to that was “If I want those kinds of savings I’ll just beat Huawei up on price.”

Why isn’t 25% enough?  In part because of the financial inertia existing infrastructure poses; we can’t change the network quickly so it would take three or four years for net benefits from NFV to be fully available.  In part because NFV is a lot more complicated than just sticking a gateway device at the edge of a customer network.  Read the ISG’s public documents and you see all the steps they want to support—scaling, optimization of selection of locations…you get the picture.  Complexity costs, and even in today’s networks the total operations costs make up about half of TCO.  Suppose we cut capex by 25% and raise opex by 30%?
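
To see how that trade could play out, here's a back-of-the-envelope check that assumes the rough 50/50 capex/opex split of TCO mentioned above; the percentages are the ones in the question, and everything else is illustrative:

```python
# Back-of-the-envelope TCO check: capex and opex assumed to split TCO roughly
# 50/50, as noted above.  All figures are illustrative.

capex_share, opex_share = 0.5, 0.5
capex_change, opex_change = -0.25, +0.30   # cut capex 25%, raise opex 30%

new_tco = capex_share * (1 + capex_change) + opex_share * (1 + opex_change)
print(f"New TCO relative to today: {new_tco:.1%}")   # 102.5% -- a net cost INCREASE
```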

This is how we got to our current vision of what SDN and NFV could do—they could improve operations efficiency and enhance service agility, raising revenues.  These benefits are in a sense more credible than capex reductions—we can’t disprove their value.  The only problem is that we can’t really prove it either.  Just how do we improve operations efficiency with SDN or NFV?  According to operators, they’ve not run trials that can establish the operations efficiency of NFV or SDN, largely because there is at the moment no firm way of operationalizing either one of them.  We’re hearing a lot about how faster deployment could lead to higher service revenues, but what it really leads to is revenue acceleration.  If somebody orders a corporate VPN with a bunch of features and you can deliver it two weeks earlier, you get two weeks of revenue.  I saw an article that said that could increase service revenues by 4%.  Wrong.  It only applies to new services, and it presumes that the customer would be prepared to accept the service two weeks earlier in all cases.  My spring survey this year asked buyers how much two weeks’ improvement in service availability would change their total spending for the year, and the result was chilling: a quarter of one percent was the consensus.
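
A quick illustration of why acceleration isn't the same as uplift; the share of revenue coming from new orders is an assumption I've picked purely for the example:

```python
# Why faster delivery mostly accelerates revenue rather than adding to it.
# Illustrative figures only.

annual_service_revenue = 100_000_000   # total billed revenue for the year
new_service_fraction   = 0.10          # only new orders benefit from faster turn-up (assumed)
weeks_pulled_forward   = 2

extra = annual_service_revenue * new_service_fraction * (weeks_pulled_forward / 52)
print(f"One-time acceleration gain: {extra / annual_service_revenue:.2%} of annual revenue")
# ~0.38% -- in the same ballpark as the quarter-of-one-percent buyers reported,
# nowhere near a 4% uplift.
```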

We are almost exactly two years from the seminal white paper that launched NFV, and over three years from the dawn of SDN.  We’re busily defining protocols and processes, but are we addressing those irresistible forces and immovable objects?  Not yet, but I think we will be.  Every day that we move forward is a day of greater pressure on operators to improve revenue per bit and return on infrastructure, and a day of lower guidance from equipment vendors whose own plans are starved by buyer budget constraints.  Watch this market, particularly operations, in 2015 because (let’s all sing it!)  “Something’s got to give, something’s got to give, something’s got to give!”


How To Tell NFV Software from NFV Vaporware

We’re getting a lot of NFV commentary out of the World Congress event this week, and some of it represents NFV positioning.  Most network and IT vendors have defined at least a proto-plan for NFV at this point, but a few are just starting to articulate their positions.  One is Juniper, whose reputation as a technical leader in networking has been tarnished over the last few years by insipid positioning and lack of management direction.  Juniper’s story is interesting because it illustrates one of the key problems with NFV today—we can’t easily assess what people are saying.

Juniper, in a recently published interview, is promoting an open NFV architecture, meaning multi-vendor and exploiting open-source software.  OK, I’m with them so far.  They define three layers to NFV: the data center layer, the controller layer, and the services layer.  That roughly corresponds with the three areas of NFV I’ve been touting from the first—NFV Infrastructure, management/orchestration, and VNFs—so I can’t fault that structure either.  The problem with the Juniper position comes when you define the layers in detail and map them to the architecture.

NFVI is more than just a collection of hardware, or every data center and network would be NFVI and we’d be a long way toward deploying NFV already.  The key requirement is that whatever resources you claim as your contribution to NFVI have to be represented by a Virtual Infrastructure Manager (VIM).  A VIM takes a resource requirement from the management/orchestration of a service order and translates it into the set of commands/APIs that will actually commit the resources and establish the desired behavior.  Thus, any time a vendor says they support NFV and touts a data center or infrastructure layer, they should offer a specific VIM for it.
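
To picture that translation, here's a minimal Python sketch of what a VIM boundary might look like; the class and method names are my own illustration, not anything drawn from the ETSI documents:

```python
# Hypothetical sketch of the VIM boundary: an abstract resource request on top,
# infrastructure-specific commitments underneath.  Names are illustrative only.

from abc import ABC, abstractmethod

class VirtualInfrastructureManager(ABC):
    @abstractmethod
    def allocate(self, resource_request: dict) -> dict:
        """Turn an abstract MANO resource request into concrete commitments."""

class CloudVIM(VirtualInfrastructureManager):
    """Fronts a cloud stack (Nova/Neutron-style APIs, say) for hosting requests."""
    def allocate(self, resource_request: dict) -> dict:
        # e.g., boot a VM of the requested size and attach it to the named network
        return {"host": "vm-1234", "network": resource_request["network"]}

class SdnVIM(VirtualInfrastructureManager):
    """Fronts an SDN controller for connectivity requests -- which is why a
    controller belongs under a VIM, not in place of an Orchestrator."""
    def allocate(self, resource_request: dict) -> dict:
        # e.g., ask the controller to set up forwarding between two endpoints
        return {"path": (resource_request["from"], resource_request["to"])}
```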

Does Juniper?  Well, this illustrates the next level of complexity.  Remember that the middle level of NFV is the management/orchestration (MANO) function.  This is where a service, specified in some abstract form, is decomposed into resource requests which are passed through the VIMs.  The Orchestrator function is explicit in the ETSI NFV ISG’s end-to-end model, so it has to be explicit in any vendor NFV architecture as well.  Juniper’s control layer, which sits where MANO sits in the real model, is based on their Contrail SDN controller.

SDN controllers are not Orchestrators, no matter whose you’re talking about.  In fact, SDN controllers could probably be placed in the NFVI zone, given that they are one way of commanding network service creation.  So you need a VIM to drive an SDN controller, and you still need an Orchestrator to drive a VIM.

Orchestrators are a pet peeve of mine, or should I say the lack of Orchestrators is.  If you look closely at NFV you see that there are really two levels of orchestration—service orchestration and resource orchestration.  The latter is used to commit resources to a specific service feature, and the former to meld service features into a cohesive service.  OpenStack is a resource orchestration approach, and you know that because you can’t define every possible service in OpenStack; you can only define the pieces of a service that are cloud-hosted.  Even there, the ISG specs call for things like horizontal scaling and optimization of hosting selection based on service criteria, which OpenStack doesn’t offer.
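
Here's an illustrative rendering of the two levels, reusing the hypothetical VIM interface sketched above; the service model format is invented for the example:

```python
# Illustrative two-level orchestration: a service orchestrator decomposes an
# abstract service model; resource orchestration commits each piece via a VIM.

service_model = {
    "name": "business-vpn-with-firewall",
    "features": [
        {"type": "connectivity", "vim": "sdn",   "params": {"from": "siteA", "to": "siteB"}},
        {"type": "hosted-vnf",   "vim": "cloud", "params": {"image": "firewall", "network": "vpn-1"}},
    ],
}

def orchestrate(model, vims):
    """Service-level orchestration: walk the model, hand each feature to the
    right resource-level orchestrator (the VIM)."""
    return [vims[f["vim"]].allocate(f["params"]) for f in model["features"]]

# Example wiring, using the sketch classes above:
# vims = {"sdn": SdnVIM(), "cloud": CloudVIM()}
# print(orchestrate(service_model, vims))
```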

There are some vendors who offer their own Orchestrators.  HP and IBM on the IT side and Alcatel-Lucent on the network vendor side have presented enough to demonstrate they actually have orchestration capability.  I’ve contended that you can combine open-source functionality (OpenTOSCA, Linked USDL, and implementations of the IETF’s i2aex) to produce most of orchestration (80% in my estimate based on my ExperiaSphere project work), so I’d have no objection to somebody calling out an open and integrated strategy based on this or some other credible combination of components.  Juniper doesn’t do that, however, and that’s true of most vendors who claim “orchestration”.

Then we get to the VNFs, and here we have issues that go beyond vendor representations.  One of the biggest holes in the current NFV approach is that it never took a top-down look at what it expected VNFs to be, or where they’d come from.  As I pointed out a couple of blogs back, VNFs should be considered as running inside a platform-as-a-service framework that was designed to present them with the management and connection features the software was written for.  There is no way to make NFV open and generalized if you can’t provide a framework in which VNF code converted or migrated from some other source can be made to run.  What exactly does it take to run a VNF?  What do we propose to offer to allow VNFs to be run?  If either of those questions could be answered, we could then say that code that met a given criteria set could be considered VNF code.  We can’t say that at the standards level at this point, nor do vendors like Juniper indicate what their framework for a VNF is.  Thus, nothing open is really possible.
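
Here's one hedged way to express the kind of platform contract I'm arguing for, as a statement of what any VNF could count on from its host; the interface is my own illustration, not something from the ISG:

```python
# One possible statement of a VNF "platform contract": the connection and
# management services any conforming VNF could count on.  Entirely illustrative.

from abc import ABC, abstractmethod

class VnfPlatform(ABC):
    @abstractmethod
    def get_port(self, name: str):
        """Return a connection endpoint the VNF can send/receive packets on."""

    @abstractmethod
    def publish_status(self, metrics: dict) -> None:
        """Expose VNF health/load to the (shared, mediated) management system."""

    @abstractmethod
    def get_config(self) -> dict:
        """Deliver the parameters this VNF instance was deployed with."""

# Any code that can be wrapped to run against a contract like this -- whether it
# began life as a Linux daemon, an appliance feature, or purpose-built VNF
# software -- would qualify as a VNF.  Without such a contract, "open" has no test.
```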

What’s frustrating to me about all of this is that here we are getting the third annual white paper on the progress of NFV and we’re still dancing with respect to what makes up an NFV product.  I can’t believe that Juniper or the other vendors who are issuing NFV PR don’t know what I’ve just outlined.  I can’t believe that people covering the vendors’ announcements don’t know that you have to prove you’re doing something not just say you are.  Yet here we are, with relatively few actual NFV implementations available and any number of purported ones getting media attention.

We think NFV is a revolution.  Likewise SDN.  They’re not going to be if we can’t distinguish between a real implementation of NFV (which right now could be obtained from Alcatel-Lucent, HP, and perhaps Overture) and something that’s still more vaporware than software.  Juniper, if you have something, do a PowerPoint that maps what you offer to the ETSI document and defend your mapping.  Same for the rest of the vendors.  I’m waiting for your call.


Are NFV and Cloud Computing Missing the Docker Boat?

Often in our industry, a new technology gets linked with an implementation or approach and the link is so tight it constrains further evolution, even sometimes reducing utility.  This may have been the case with cloud computing and NFV, which have been bound from the first to the notion of harnessing units of compute power through virtual machines.  The truth is that other “virtualization” options have existed for ages, and some may be better suited for cloud and NFV applications.  We should probably be talking less about virtual machines and more about containers.

Hardware-level virtualization, meaning classic virtual machines, takes a host and partitions it via hypervisor software into what are essentially separate hardware platforms.  These act so much like real computers that each runs its own operating system, and facilities in the hypervisor/virtualization software make them independent in a networking sense as well.  This approach is good if you assume that you need the greatest level of separation possible among the tenant applications, which is why it’s popular in public cloud services.  But for private cloud, even private virtualization, it’s wasteful of resources.  Your applications probably don’t need to be protected from each other, at least no more than they would be if run in a traditional data center.

Linux containers (and containers based on other OSs like OpenSolaris) are an alternative to virtual machines that provides application isolation within a common OS instance.  Instead of running a hypervisor “under” OS instances, containers run an isolation shell on top of a single OS, partitioning the use of resources and namespaces.  There is far less overhead than with a VM because the whole OS isn’t duplicated, and where the goal of virtualization is to create elastic pools of resources to support dynamic componentization of applications, the difference can add up to (according to one user I surveyed) a 30% savings in server costs to support the same number of virtual hosting points.  This sort of savings could be delivered in either virtualization or private cloud applications.

For NFV, containers could be an enormous benefit because many virtual network functions (VNFs) would probably not justify the cost of an autonomous VM, or such a configuration would increase deployment costs to the point where it would compromise any capex savings.  The only problem is that the DevOps processes associated with container deployment, particularly container networking, are more complicated.  Many argue that containers in their native form presume an “instance first” model, where containers are built and loaded and then networked.  This is at odds with how OpenStack has evolved; separating hosting (Nova) and networking (Neutron) lets users build networks and add host instances to them easily.  In fact, dynamic component management is probably easier with VMs than with containers, even if the popular Docker tool is used to further abstract container management.
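
The contrast in deployment order looks roughly like this; the client objects and calls below are hypothetical stand-ins, not real OpenStack or Docker APIs:

```python
# Pseudocode contrast of the two deployment orders discussed above.  The client
# objects and their calls are hypothetical stand-ins, not real OpenStack or
# Docker APIs.

def network_first(cloud):
    """OpenStack-style: define the network, then attach instances to it."""
    net = cloud.neutron.create_network("service-chain-net")
    vm = cloud.nova.boot("vFirewall-image", networks=[net])
    return vm

def instance_first(container_host):
    """Native-container style: start the container, then wire it up afterwards."""
    ctr = container_host.run("vFirewall-image")
    container_host.connect(ctr, "service-chain-net")   # networking as an afterthought
    return ctr
```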

There’s work underway to enhance container networking and DevOps.  Just today, a startup called SocketPlane announced it would be “bringing SDN to Docker”, meaning to provide the kind of operational and networking agility needed to create large-scale container deployments in public and private clouds and in NFV.  There are a few older and more limited approaches to the problem already in use.

Containers, if operationalized correctly, could have an enormous positive impact on the cloud by creating an environment that’s optimized to the future evolution of applications in the cloud instead of being optimized to support the very limited mission of server consolidation.  They could also make the difference between an NFV deployment model that ends up costing more than dedicated devices would, and one that saves capex and perhaps even could enhance operations efficiency and agility.  The challenge here is to realize the potential.

Most NFV use cases have been developed with VMs.  Since in NFV the management of virtualization hosting and networking is the responsibility of the Virtual Infrastructure Manager (VIM), it is theoretically possible to make containers and container networking (including Docker) work underneath a suitable VIM, which means it would be possible in principle to make containers work with any of the PoCs that use VM hosting today.  However, this substitution isn’t the goal, or even in scope, for most of the work, so we’re not developing as rich a picture of the potential for containers/Docker in NFV as I’d like.
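
In miniature, the substitution point might look like this; the client and its calls are again hypothetical stand-ins:

```python
# Hypothetical sketch of the substitution point: a VIM that deploys containers
# instead of VMs behind the same allocate() boundary an orchestrator already uses.
# The client object and its calls are illustrative, not a real Docker API.

class ContainerVIM:
    def __init__(self, container_client):
        self.client = container_client   # stand-in for a Docker-style client

    def allocate(self, resource_request: dict) -> dict:
        """Same request format a VM-based VIM would accept; different realization."""
        instance = self.client.run(resource_request["image"])
        self.client.connect(instance, resource_request["network"])
        return {"host": instance, "network": resource_request["network"]}

# Because nothing above the VIM boundary changes, containers could in principle
# be slotted under existing VM-based PoCs without touching the orchestration layer.
```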

One of the most significant questions yet to be addressed for the world of containers is the management dimension.  Anyone who’s been reading my blog knows of my ongoing concerns that NFV and cloud management is taking too easy a way out.  Shared resources demand composed, multi-tenant management practices and we’ve had little discussion of how that happens even with the de facto VM-based approaches to NFV and cloud services.  Appealing to SDN as the networking strategy doesn’t solve this problem because SDN doesn’t have a definitive management strategy that works either, at least not in my view.

The issues that containers/Docker could resolve are most evident in applications of service chaining and virtual CPE for consumers, because these NFV applications are focused on displacing edge functionality on a per-user basis, which is incredibly cost-sensitive and vulnerable to the least touch of operations inefficiency.  Even in applications of NFV where edge devices participate in feature hosting by running what are essentially cloud-boards installed in the device, the use of containers could reduce the resource needs and device costs.

While per-user applications are far from the only NFV services (shared component infrastructure for IMS, EPC, and CDNs are all operator priorities) the per-user applications will generate most of the scale of NFV deployments and also create the most dynamic set of services.  It’s probably here that the pressure for high efficiency will be felt first, and it will be interesting to see whether vendors have stepped up and explored the benefits of containers.  It could be a powerful differentiator for NFV solutions, private cloud computing, and elastic and dynamic application support.  We’ll see if any vendor gets that and exploits it effectively.


Service and Resource Management in an SDN/NFV Age

I mentioned in my blog yesterday that there was a distinct difference between “service management” and “resource management” in networks, and it’s worth taking some time to explore this because it impacts both SDN and NFV.  In fact, this difference may be at the heart of the whole notion of management transformation, the argument on whether we need “new” OSS/BSS approaches or simply need changes to current ones.

In the good old days of TDM networks, users had dedicated capacity and fixed paths.  That meant that it was possible to provide real-time information at a highly granular level, and some (like me) remember the days when you could get “severely errored seconds” and “error-free seconds” data.  When you got a service-level agreement (SLA) it could be written down to the conditions within an hour or even minute, because you had the data.

Packet networking changed all of this with the notion of a shared network and oversubscription.  One of the issues with TDM was that you paid 24×7 for capacity and might use it only for 20% or so of that time.  With packet networks, users’ traffic intermingled and this allowed more efficient use of resources.  It also meant that the notion of precise management information was forever compromised.  In packet networks, it would be very difficult and expensive to recover the exact state of routes and traffic loads at any given time.  Operators responded by extending their SLA guarantee periods—a day, a week, a month.  Packet networking is all about averages, including management of packet networks.

This is where the service/resource management differences arose.  The common principle of packet networks is to design for acceptable (within the collective SLAs) behavior and then assume it as long as all the network’s resources are operating within their design limits.  So you managed resources, but you also sought to have some mechanism of spotting issues with customer services so that you could be proactive in handling them.  Hence, the service/resource management split; you need both to offer SLAs and reasonable/acceptable levels of customer care and response.

The ability to deliver an SLA from a shared-resource packet network depends in large part on your ability to design the network to operate within a given behavioral zone, and to detect and remedy situations when it doesn’t.  That means a combination of telemetry and analytics, and the two have to be more sophisticated as the nature of the resource-sharing gets more complicated.  To the extent that SDN or NFV introduce new dimensions in resource sharing (and both clearly do), you need better telemetry and analytics to ensure that you can recognize “network resource” problems and remedy them.  That gives you an acceptable response to service problems—you meet SLAs on the average, based on policies on violations that your own performance management design has set for you.

However, SDN and NFV both change the picture of resource-sharing just a bit.  First, I’ll use an SDN example.  If you assign specific forwarding paths to specific traffic from specific user endpoints, from a central control point, you presumably know where the traffic is going at any point in time.  You don’t know that in an IP network today because of adaptive routing.  So could you write a better, meaning tighter, SLA?  Perhaps.  Now for NFV, if you have a shared resource (hosted facilities) emulating a dedicated device, have you created a situation where your SLA will be less precise because your user thinks they’re managing something that’s dedicated to them, and in fact is not?

In our SDN example, we could in theory derive pretty detailed SLA data for a user’s service by looking at the status of the specific devices and trunks we’d assigned traffic to.  However, it raises the question of mechanism.  Every forwarding path (route) through an SDN network has a specific resource inventory, and we know what that is at the central control point.  But is the status of the network the sum of all the route states?  Surely, but how do we summarize and present them?  Management at the service level should now be viewed as a kind of composite: a gross state derived from average conditions based on some algorithm, but with a drill-down to path-level state as needed.  That’s not what we have today.  And if SDN is offered using something other than central control, or if parts of the network are centralized and parts are not, how do we derive management then?

In NFV, the big issue is the collision of today’s management practices and interfaces with virtual infrastructure.  A user can manage a dedicated device, but their management of a virtual device has to be exercised within the constraints imposed by the fact that the resources are shared.  I can never let a user or a service component exercise unfettered management of a resource that happens to host a part of their service, because I have to assume that could compromise other users and services.
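A minimal sketch of what that mediation might look like, assuming a hypothetical proxy class and a made-up backend; no real NFV platform API is implied here.

# Hypothetical sketch: user management of a virtual device mediated by
# policy, so no operation can touch the shared host directly.
ALLOWED_USER_OPS = {"get_status", "get_counters", "reset_counters"}

class VirtualDeviceProxy:
    """Exposes a device-like management surface while blocking anything
    that could affect other tenants sharing the underlying resources."""
    def __init__(self, tenant, backend):
        self.tenant = tenant
        self.backend = backend    # whatever actually hosts the VNF

    def invoke(self, operation, **kwargs):
        if operation not in ALLOWED_USER_OPS:
            raise PermissionError(
                f"{operation} is not permitted on shared infrastructure")
        # Scope the call to this tenant's slice of the resource.
        return self.backend(self.tenant, operation, **kwargs)

def fake_backend(tenant, operation, **kwargs):
    return {"tenant": tenant, "operation": operation, "result": "ok"}

proxy = VirtualDeviceProxy("tenant-42", fake_backend)
print(proxy.invoke("get_status"))
# proxy.invoke("reboot_host")  # would raise PermissionError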

All of this adds up to a need for a different management view.  Logically what I want to do is to gather all the data on resource state that I can get, at all levels.  What I then have to do is to correlate that data to reflect the role of a given resource set in a given service, and present my results in an either/or/both sense.  On the one hand, I have to replicate as best I can the management interfaces that might already be consumed for pre-SDN-and-NFV services.  They still may be in use, both at the carrier and user levels.  On the other hand, I have to present the full range of data that I may now have, in a useful form, for those management applications that can benefit.  This is what “virtualizing management” means.
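To illustrate (and only illustrate) that either/or/both presentation, here’s a hypothetical Python sketch.  The record formats, the MIB-like field name, and the rollup rule are assumptions, not a proposal for any standard interface.

# Hypothetical sketch of "virtualizing management": correlate raw resource
# records into a per-service view, then present it two ways -- a summary
# shaped like a legacy device interface, and the full detail for newer tools.
def service_view(service_id, resource_records, bindings):
    """bindings maps service_id -> the resource ids it currently uses."""
    detail = {rid: resource_records[rid] for rid in bindings[service_id]}
    worst = max(r["status"] for r in detail.values())   # 0=ok, 1=warn, 2=fail
    legacy = {"ifOperStatus": "up" if worst == 0 else "down"}  # MIB-like shape
    return {"legacy": legacy, "detail": detail}

records = {
    "vm-3":    {"status": 0, "cpu": 0.41},
    "trunk-9": {"status": 1, "utilization": 0.87},
}
bindings = {"svc-100": ["vm-3", "trunk-9"]}
print(service_view("svc-100", records, bindings))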

What we need to have, for both SDN and NFV, is a complete picture of how this resource/service management division and composition process will work.  We need to make it as flexible as we can, and to reflect the fact that things are going to get even more complicated as we evolve to realize SDN and NFV fully.


Here’s What I Mean by Top-Down NFV

I’ve talked in previous blogs about the value of a top-down approach to things like NFV, and I don’t want to appear to be throwing stones without offering a constructive example.  What I therefore propose to do now is to look at NFV in a top-down way, the way I contend a software architect would naturally approach a project that in the end is a software design project.

Top-down starts with the driving benefits and goals.  The goal of NFV is to permit the substitution of hosted functionality, running on virtualized resources, for the functionality of traditional network devices.  This substitution must lower costs and increase service agility, so it must be suitable for automated deployment and support.  A software person would see this goal in four pieces.

First and foremost, we have to define the functional platform on which our hosted functionality will run.  I use the qualifier “functional” because it’s not necessary that virtual network functions (the hosted functionality, in NFV terms) run on the same OS or physical hardware, only that they have some specific functional resources that support them.

I contend that the goal of NFV can be achieved only if we can draw on the enormous reservoir of network features already available on popular platforms like Linux.  Therefore, I contend that the functional platform for VNFs has to be directed at replicating the connection and management framework that such a network feature would expect to have, and harnessing its capabilities to create services.

Second, we have to define a compositional abstraction that permits the creation of this functional platform.  A functional platform would be represented by a set of services offered to the VNFs, like the service of connectivity and the service of management.  These services have to be defined in abstract terms so that we can build them from whatever explicit resources we have on hand.  This is the approach taken by OpenStack’s Neutron, for example, and also by the OASIS TOSCA orchestration abstraction.
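A toy sketch of the idea, using invented class names; this isn’t Neutron or TOSCA syntax, it just mirrors the principle that the composition process sees only the abstract service, never the concrete resources that realize it.

# Hypothetical sketch: abstract "services" offered to VNFs (connectivity here),
# each buildable from whatever concrete resources are on hand.
from abc import ABC, abstractmethod

class ConnectivityService(ABC):
    @abstractmethod
    def attach(self, vnf_id: str) -> str: ...

class OverlaySubnet(ConnectivityService):
    """One possible realization; a VLAN- or SDN-based one would do as well."""
    def attach(self, vnf_id: str) -> str:
        return f"{vnf_id} attached to overlay subnet 10.0.0.0/24"

def compose_service(vnf_ids, connectivity: ConnectivityService):
    # The composition only sees the abstraction, never the realization.
    return [connectivity.attach(v) for v in vnf_ids]

print(compose_service(["vFirewall", "vRouter"], OverlaySubnet()))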

A compositional abstraction also represents what we expect the end service to be.  A “service” to the user is a black box with properties determined by its interfaces and behavior.  That’s the same thing that a service to a VNF would be, so the compositional abstraction process is both a creator of “services” and a consumer of its own abstractions at a lower level.

We host application or service components inside an envelope of connectivity, so compositional abstractions have to include the network models that applications actually use today.  We build subnets, Ethernet VLANs, IP domains, and so forth, so we have to be able to define those models.  However, we shouldn’t limit the scope of our solution to the stuff we already have; a good abstraction strategy says that I could define a network model called WidgetsForward with any forwarding properties and any addressing conventions I find useful, then map it to the elements that will produce it.  A compositional abstraction, then, is a name that can be used to describe a service that’s involved in some way with a functional abstraction.
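To show what I mean, here’s a hypothetical sketch in which WidgetsForward is just another named model mapped to a recipe of elements.  The model properties and recipes are invented for illustration only.

# Hypothetical sketch: named network models, including a made-up
# "WidgetsForward", each defined by its properties and mapped to the
# elements that will produce it.
NETWORK_MODELS = {
    "IpSubnet":       {"forwarding": "L3", "addressing": "IPv4"},
    "EthernetVlan":   {"forwarding": "L2", "addressing": "MAC"},
    "WidgetsForward": {"forwarding": "custom-policy", "addressing": "labels"},
}

MODEL_RECIPES = {
    "IpSubnet":       ["router", "dhcp"],
    "EthernetVlan":   ["switch", "vlan-tag"],
    "WidgetsForward": ["sdn-controller", "flow-rules"],
}

def instantiate(model_name):
    props = NETWORK_MODELS[model_name]
    recipe = MODEL_RECIPES[model_name]
    return f"{model_name} ({props['forwarding']}) built from {recipe}"

print(instantiate("WidgetsForward"))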

The third thing we have to define is a resource abstraction.  We have resources available, like servers or VMs, and we need to be able to define them abstractly so we can manipulate them to create our compositional abstractions.  If we have a notion of DeployVNF, that abstraction will have to operate using whatever cloud hosting and connectivity facilities are available from a particular cloud infrastructure, but we can’t let the specific capabilities of that infrastructure become visible to our composition process, or we’ll have to change our service compositions for every resource variation.
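A minimal sketch, assuming two invented backends, of how a DeployVNF abstraction could shield composition from resource variation; the class names and return strings stand in for real infrastructure calls.

# Hypothetical sketch: DeployVNF as an abstraction whose realization depends
# on the infrastructure at hand; the composition layer calls the abstraction
# and never sees which backend did the work.
from abc import ABC, abstractmethod

class DeployVNF(ABC):
    @abstractmethod
    def deploy(self, image: str) -> str: ...

class CloudDeploy(DeployVNF):
    def deploy(self, image: str) -> str:
        return f"booted {image} as a VM"       # stands in for real API calls

class BareMetalDeploy(DeployVNF):
    def deploy(self, image: str) -> str:
        return f"provisioned {image} on a dedicated server"

def compose(deployer: DeployVNF):
    # Same composition regardless of which infrastructure is underneath.
    return deployer.deploy("vFirewall-1.2")

print(compose(CloudDeploy()))
print(compose(BareMetalDeploy()))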

Here we have to watch out for specific traps, one of which is to focus on device-level modeling of resources as our first step.  I don’t have anything against YANG and NETCONF in their respective places, but I think that place is in defining how you do some abstract resource thing like BuildSubnet on a specific network.  You can’t let the “how” replace the “what”.  Another trap is presuming that everything is virtual just because some things will be.  Real devices will be part of any realistic service likely for decades to come, and so the goal of resource abstraction is linked to the goal of functional abstraction: what we create with VNFs has to look like what we’d create with legacy boxes.

The final thing you need is a management abstraction.  We’re forgetting, in many NFV implementations, something that operators learned years ago with router networks.  Any time you have shared resources, you have to acknowledge that service management and resource management are not the same thing.  Composing services based in whole or in part on virtual resources is only going to make this more complicated, and how we would manage services we’ve composed without collaterally composing a management view is something I don’t understand, largely because I don’t believe it’s possible.
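Here’s a hypothetical sketch of composing a management view alongside the service itself; the descriptor fields and the “worst-of” rollup are assumptions, intended only to show that the management recipe is produced at composition time rather than bolted on later.

# Hypothetical sketch: every composed service carries a recipe for deriving
# its state from the shared resources it happens to use.
def compose_with_management(service_id, components):
    service = {"id": service_id, "components": [c["name"] for c in components]}
    management_view = {
        "service": service_id,
        # which resource feeds to watch, and how to roll them up
        "feeds": [c["resource_feed"] for c in components],
        "rollup": "worst-of",
    }
    return service, management_view

svc, mgmt = compose_with_management("vpn-7", [
    {"name": "vFirewall", "resource_feed": "host-12/vm-3"},
    {"name": "access",    "resource_feed": "edge-router-9"},
])
print(svc)
print(mgmt)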

Management abstractions are critical to functional platforms because you have to be able to provide real devices and real software elements with the management connections they expect, just as you need to provide them with their real inter-component or user connections.  But the connection between a VNF or router and its management framework has to be consistent with the security and stability needs of a multi-tenant infrastructure, which is what we’ll have.

If you look at just this high-level view, you can see that the thing we’re missing in most discussions about NFV is the high-level abstractions.  We have come close to making a mistake that should be obvious even semantically.  “Virtualization” is the process of turning abstractions into instantiations, and yet we’re skipping the specific abstractions.  I contend that we have to fix that problem, and fix it decisively, to make NFV work.  I contend that we’ve not fixed it with the ISG E2E conception as yet, nor have we defined fixing it as a goal of OPNFV.  This isn’t rocket science, folks.  There’s no excuse for not getting it right.
