Ericsson’s Pre-MWC Announcement Sweep and NFV

It used to be that trade shows were the best place to make new announcements because the media was all there, presumably captivated by the potential of all those vendors and products.  Lately the opposite has been true; there are simply too many voices shouting to be heard and prospects get clickitis from even trying to follow the flood of URLs that emerge.  Ericsson is taking this to heart by making a big announcement (a series, in fact) ahead of MWC, and it’s worth seeing how it might tie into industry conditions, competition, and futures.

Ericsson is interesting among the major network vendors because it’s really a “network equipment vendor” only in the mobile space.  The company relies much more on a combination of OSS/BSS software (from its acquisition of Telcordia) and professional services.  On one hand, that could be a plus in the NFV space because it’s not much at risk of having its hardware displaced, and operations efficiency is the sweet spot for NFV.  On the other hand, NFV is a wide-scope initiative with much of the early PoC work focused on areas where Ericsson isn’t an incumbent.

Ericsson’s MWC pre-announcement seems aimed at shoring up its position, and one reason may be that at least some vendors (Oracle for one) are planning NFV announcements at MWC.  Why not rain on their parades a bit?  However, the most interesting thing about what Ericsson is doing is who it seems to be targeting, and why.  For more we have to look at what they’ve announced.

The first step in Ericsson’s positioning sweep is what they call Digital Telco Transformation, which is a combination of their two strategic priorities, OSS/BSS software and professional services.  The goal is summed up by their head of consulting and integration:  “Only a holistic approach that reinvents the telco operating model can ensure operators avoid major business model disruptions and realize their digital telco vision.”  The key phrases are “holistic” and “reinvents the telco operating model”.

Because of their narrow equipment footprint, Ericsson is vulnerable to the death-of-a-thousand-cuts fragmentation of NFV momentum that all these PoCs create.  They need to focus on the problem at a higher level than equipment, which means they have to focus on operations as the unifier of all the NFV stuff going on.  “Holistic” means “across all services and equipment” to Ericsson, and “reinventing the telco operating model” means starting the transformation to NFV with an operations transformation.  The net is that operations is driving operators to the future in Ericsson’s view.

This operations focus is interesting because up to this year, operators were driving PoCs out of their science and technology activity with the CIOs largely on the sidelines.  That’s been changing this year as I’ve recently pointed out, and Ericsson is surely playing to that change and to this new and critical constituency.  One that, by the way, Ericsson is in a good position to leverage.  If operations efficiency is a key to making a broader business case for NFV (as I think it is), then the OSS/BSS people are the ones to work with.

Another thread in the same tapestry is the second Ericsson positioning step.  Expert Analytics 15.0 is aimed at tracking customer satisfaction through the service transitions NFV and other developments would create.  The tool creates a feedback loop from customer perception and satisfaction to service automation and change, and it also involves OSS/BSS and the CIO.  Gathering information from many sources and driving automated processes sure sounds like a broad strategy.

Thread three is App Experience Optimization, which targets the app user and aims to optimize the app experience and operator profits in tandem.  This is really a professional services tie-in, something that brings Ericsson’s integration and optimization expertise to bear.  It extends that notion of a broad process that involves a lot of stuff, and it ties things into mobile services more tightly.

The fourth and fifth elements are products, both aimed directly at mobile infrastructure.  One enhances Ericsson’s solutions in mobile backhaul routing and the other in CDN.  Ericsson is integrating these network product elements with the operations announcements in a new software release and through some customized portfolios.  This appears to me to be a second anchor—exploit operations, exploit mobility—to a new positioning that’s intended to do what I called “supersetting” in yesterday’s blog (referencing Cisco’s own SDN and NFV approach).  Ericsson likely believes that if you generalize the problem and involve operations, you can defeat point-PoC NFV initiatives from other vendors, all of whom are reluctant to be “holistic” on NFV at all for various reasons, largely tactical.

While Ericsson makes a few connections between their announcement-fest and NFV it’s pretty obvious that NFV isn’t the specific focus.  It sure seems to be waiting in the wings, though.  Pushing this out now, well ahead of MWC, might also tune reporters at the show to the importance of operations in NFV, which is at minimum going to force those who have an operations story to accent that piece of their announcement, perhaps soft-pedaling areas where Ericsson has nothing comparable.  Those who don’t have an ops story could even risk negative coverage.

It’s harder to say whether the Ericsson strategy will actually pay off in NFV traction.  While it is true that the CIO is an ever-more-important player in NFV, it’s also true that they haven’t exactly been the prime movers in NFV so far.  Operators are conflicted about whether operations needs to be leading the charge or changed out completely in favor of something else.

It’s also true that OSS/BSS vendors, including Ericsson, have yet to deliver on a real NFV story, though some have come close or promised it.  But of course if you’re Ericsson you may reason (as I believe Cisco has) that if the price of advancing NFV is advancing your own disintermediation, why bother?  By positioning their suite of announcements as operations enhancements and business model transformers, they’re elevating their story out of the NFV clutter.  They may also be elevating it out of NFV relevance, though.

The tangible weak point is the MANO thing.  Operators generally believe that service automation is about MANO, that NFV has introduced that key concept to the picture, and that MANO isn’t a part of OSS/BSS.  That’s what leads some operators to see a completely new operations picture orchestrated around MANO.  Some also see MANO creeping up into the OSS/BSS, and Ericsson doesn’t seem to be taking a position on that critical point.

It’s hard to see how Ericsson would expect to win over competitors who could actually deliver an NFV product unless it could deliver the heart and soul, at least, which is MANO.  Ericsson may be hoping that the OPNFV effort (open-source software for NFV, under the Linux Foundation) will create that software, or that it will at least marginalize those who attempt to provide an NFV product.  That might be true if 1) OPNFV gets something done quickly and 2) it’s comprehensive enough to deliver on the expected NFV benefit case.  It doesn’t seem likely either condition will be met this year.

Maybe I’m a conspiracy theorist here, but it seems to me like we’re seeing a lot of maneuvering around NFV right now, and I’m inclined to believe that it’s because operators are making progress with the business case.  If that’s true, then we’re approaching the point where lack of a coherent NFV position could be a real problem for vendors who want to sell to operators.  And if it is, then a lot of vendors who have been blowing kisses at NFV and engaging in vacuous partner programs (as opposed to programs where a key player contributes real value) are going to have to at least buy a ring and practice getting down on their knees to a strong partner.  It’s too late to build something at this point.

Can Cisco Win the SDN/NFV Race by Not Running?

One of the “in-passing” comments John Chambers made on Cisco’s earnings call was this:  “We are pulling away from our competitors and leading in both the SDN thought leadership and customer implementations.”  Interestingly there’s a strong element of truth in that statement despite the fact that Cisco is widely regarded as being anti-SDN, and that truth and the reason behind it may have important consequences beyond SDN.  Cisco raises the question of whether less is more in addressing “revolutionary” technology.

Most people think of SDN in terms of the Open Networking Foundation (ONF) and the OpenFlow protocol.  This combination is supposed to revolutionize networking and the vendor space by changing the focus from adaptive L2/L3 protocols and existing switch/router products to central control of forwarding and white-box devices.  Obviously vendors like Cisco (and all its competitors) are less than happy about that particular outcome, but many of the competitors have at least blown kisses at ONF/OpenFlow SDN because it might help them gain market share at Cisco’s expense.

From the first, Cisco’s SDN strategy has been one I’ve always described as supersetting.  You defend yourself against a market development by enveloping its goals in a larger goal set whose implementation you can control better.  Cisco came out with Application Centric Infrastructure (ACI) to do just that.  All network forwarding behavior is a black box to the users of the network, so Cisco embraced the visible consequences of SDN in ACI, which taps the same benefits and thus lessens incentive for users to migrate.  They even added concepts like a richer management and policy-based control framework, things lacking in formal SDN.  Finally, they made ACI an evolution of current network technology and not a centralized white-box-and-standards-geeks revolution.

What supersetting as a concept relies on is the fact that revolutionary change tends to expose users to costs faster than to benefits.  A user looking at their first deployment of purist OpenFlow SDN has to get a controller (new), learn to use it correctly (new operations practices), get white-box switches (from a new vendor), and then stick this whole thing inside a network that still largely works the old way.  Cisco says “Hey, suppose we could give you what this SDN revolution thing is supposed to deliver, but without any of that expensive novelty?”  You can see how that would be appealing.
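
To make the “new controller, new practices” point concrete, here’s a minimal sketch of purist central control using the open-source Ryu controller (my illustration choice; nothing in the ONF material mandates any particular controller).  Every switch that connects gets one lowest-priority rule that punts unmatched packets to the controller, so the device itself decides nothing.

```python
# Minimal OpenFlow 1.3 controller app (Ryu): each switch that connects
# gets a default rule sending unmatched traffic to the controller, so
# forwarding decisions are made centrally rather than by the device.
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3

class CentralControl(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def on_switch_connect(self, ev):
        dp = ev.msg.datapath
        parser = dp.ofproto_parser
        ofp = dp.ofproto
        match = parser.OFPMatch()  # wildcard: matches everything
        actions = [parser.OFPActionOutput(ofp.OFPP_CONTROLLER,
                                          ofp.OFPCML_NO_BUFFER)]
        inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS,
                                             actions)]
        dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=0,
                                      match=match, instructions=inst))
```

Run under ryu-manager, that’s the whole “new operations practice” in embryo: the forwarding rules live in one place and the switches are interchangeable, which is exactly the novelty (and the cost) Cisco’s pitch avoids.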

Even if buyers are willing to look ahead a bit and value the new networking model, there’s still a set of issues.  We know for sure that you can build a data center network from purist SDN.  We know you can build a core domain that way.  We know that it absolutely would not scale to the level of the Internet and we can’t even prove it would work at the scale of a decent-sized enterprise VPN.  SDN “domains” have to be small enough for controllers to be able to handle, but it’s not clear how “small” that is.  It’s even less clear how you connect the domains, unless you want to fall back on traditional L2/L3 concepts, that is.

NFV faces an almost-identical challenge.  What we can do with NFV today not only isn’t revolutionary, it’s often not even particularly useful.  That’s largely because the early NFV applications simply don’t impact enough cost or generate enough new-service prospects to move either cost or revenue overall.  You have to go big, but operators, like most buyers, will not overhaul their whole technology model for five or ten or even twenty percent reductions in cost.  I’ve quoted a good operator friend on this before:  “If we want a 20% savings we’ll just beat Huawei up on price!”

That problem of cost and benefit timing applies to NFV too, perhaps even more than to SDN.  In both SDN and NFV one of the biggest unresolved issues is the relationship between an “enclave of the new” and the vast network of the old.  The specific challenge for NFV is that the necessary scope of benefits is much larger—opex efficiency, new service revenues, and capital savings.  NFV raises early risks, creating more of the very thing that Cisco saw as a vulnerability of SDN-a-la-OpenFlow and then exploited with ACI.  Might Cisco have similar supersetting plans for NFV?

They might, indeed.  Their Tail-f acquisition gives them a YANG/NETCONF combination that lets them control multi-vendor infrastructure.  Not only does this help their SDN supersetting by giving Cisco a way to extend ACI benefits across other vendors’ equipment, it provides Cisco with a solution to the end-to-end service control challenge that NFV has so far declined to address.
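
For a sense of what YANG/NETCONF control of multi-vendor gear looks like in practice, here’s a hedged sketch using the open-source ncclient library as a stand-in (ConfD itself is a different product; the device address, credentials, and interface payload are all hypothetical, though ietf-interfaces is a standard YANG model):

```python
from ncclient import manager

# Hypothetical YANG-modeled config payload: set a description on an
# interface via the standard ietf-interfaces model.
CONFIG = """
<config xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
  <interfaces xmlns="urn:ietf:params:xml:ns:yang:ietf-interfaces">
    <interface>
      <name>GigabitEthernet0/0</name>
      <description>Provisioned by service orchestrator</description>
    </interface>
  </interfaces>
</config>
"""

# Hypothetical device address and credentials.
with manager.connect(host="192.0.2.1", port=830, username="admin",
                     password="admin", hostkey_verify=False) as m:
    # edit-config against the candidate datastore, then commit: the
    # transactional pattern on devices that support a candidate store.
    m.edit_config(target="candidate", config=CONFIG)
    m.commit()
```

The point of the sketch is the leverage: the same protocol and model grammar works against any vendor’s box that publishes a YANG model, which is what makes it a plausible end-to-end control layer.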

If you marry ACI with Tail-f’s ConfD approach and then stir in a little OpenStack, you’d have something a lot more useful than NFV would be alone.  It’s not fully responsive to market needs in my view, but it’s better than “standard” NFV.  And Cisco just announced that a single-northbound-API version of ConfD would be free, making it either a nice camel’s proverbial nose or a kiss blown at purist NFV just as Cisco’s OpenFlow support is a kiss for purist SDN.  Or both.

The most vulnerable period of any new technology is its initial adoption, because of that mismatch of pace-of-costs versus pace-of-benefits I opened with.  Most buyers probably realize this; remember my comment that one of the top three NFV questions is “Where do I start?” even among those who have started?  I also think that the reluctance or inability of both standards groups and vendors to take a big, full bite of the issues up front contributes to this problem.  Any uncertainty about starting a revolution tends to send people back to the coffee shop, favoring evolutionary approaches.

Cisco now has a policy-management strategy for handling SDN and NFV services.  It’s not what I’d endorse as ideal, but it’s there, and better than a chasm of omission such as we get from many other vendors.  Cisco also has a management/orchestration strategy with that same qualifier of “better than nothing”.  They are singing well in a market where most of their competitors are tongue-tied.  And they have servers and a cloud presence.  All that adds up to their being a force in NFV in the same way as they’ve been a force—a winner—in SDN.

I think Cisco is winning in SDN and their supersetting strategy could win in NFV too.  What could change the opportunity landscape for a competitor?  Here’s my list of requirements a Cisco competitor would have to deliver on, and I want to point out that these same requirements exist for the cloud and SDN as well.  Thus, it’s these three things that will determine whether Cisco itself has a lock on being the “next Cisco”, or whether somebody could still ride a revolution to that heady goal.

First and foremost, strong service modeling using the “declarative” approach that defines objects and goals rather than scripting steps.  This is something that HP and IBM can deliver for sure today (Cloudify probably can as well), and there may be others who can but haven’t supplied me the documentation.  All deployment and management has to be based on this modeling.
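
To illustrate the distinction, here’s a minimal sketch of my own construction (not any vendor’s schema): the model declares objects, goals, and dependencies, and a generic interpreter derives the steps.

```python
# Declarative service model: the model says "what"; a generic
# interpreter owns the "how".  Names and structure are hypothetical,
# loosely TOSCA-flavored.
SERVICE_MODEL = {
    "service": "business-vpn",
    "nodes": {
        "vpn-core": {"type": "connection.l3vpn",
                     "goal": {"state": "active"}},
        "edge-firewall": {"type": "vnf.firewall",
                          "goal": {"state": "active", "instances": 2},
                          "requires": ["vpn-core"]},
    },
}

def deploy(model, handlers):
    """Walk the model in dependency order, driving each node to its goal."""
    done = set()
    def visit(name):
        if name in done:
            return
        node = model["nodes"][name]
        for dep in node.get("requires", []):
            visit(dep)
        # The handler for the node's type decides the steps; the model
        # never scripts them -- that separation is the declarative point.
        handlers[node["type"]](name, node["goal"])
        done.add(name)
    for name in model["nodes"]:
        visit(name)

deploy(SERVICE_MODEL, {
    "connection.l3vpn": lambda n, g: print(f"provision {n} -> {g}"),
    "vnf.firewall":     lambda n, g: print(f"instantiate {n} -> {g}"),
})
```

Because deployment and management both read the same model, a management action is just another walk of the tree against a different goal state, which is why I say everything has to be based on the modeling.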

Second and nearly as critical, integrated management that includes legacy infrastructure.  That means you have to be able to deploy services edge to edge, and also to manage both legacy and NFV elements commingled in whatever way operators find useful.  HP has this for sure, and I think Alcatel-Lucent, Oracle, and Overture have it as well.  I think IBM could supply it but I’m less sure about them.

Third, federation of administrative “zones”.  Operators will need to partner with other operators, and likely with cloud providers as well.  Many will have to create arm’s-length subsidiaries that will need to build cooperative services across the border.  Everyone will have to deal with how NFV domains link with legacy VPNs, the Internet, etc.  Here I can’t name vendors because I don’t have enough information on anyone’s solution.  In fact, I have doubts that anyone has a full solution here yet.

Federation has been ignored by vendors because it’s not as important in making the business case for NFV at present.  It may be absolutely critical for getting that first NFV application or trial to expand, not only because partnerships are routine among operators but because early NFV enclaves are likely to develop from independent trials, and they’ll have to link up quickly or risk creating silos.  Remember that absent NFV- or SDN-specific federation approaches you end up falling back on adapting legacy interconnect, and that favors Cisco’s evolutionary model because it works with existing gear by definition.

Little steps reduce risk in one sense and expand it in another.  It’s fine to shuffle to success as long as your movement adds up to progress toward the ultimate goal.  In SDN, Cisco is exploiting the shuffling of others by simply embracing the goal without the revolution.  That could work in NFV and even in the next generation of the cloud, where cloud-specific apps will remake the landscape.  It may be that standards groups and Cisco competitors alike have gotten too used to incrementalism as a means of controlling cost and risk, and don’t recognize that related notion of “critical mass”.  You have to generate enough early benefits to justify ongoing commitment and growth in any new technology.  Cisco’s SDN supremacy is proof that it didn’t happen in the SDN space, and supersetting isn’t out of gas yet.

Why Mobility and NFV are the Cloud’s Best Friends

A recent research report continued on a theme that’s become a bit of a cloud computing mantra—we’re exiting the “early adopter” phase of the cloud and heading into the main event.  In some sense this is true, because we are certainly in the “early” phase of the cloud.  In another sense it’s misleading because almost all big companies whose spending dominates IT overall have been cloud adopters for a year or more.

The biggest issue with the early adopter model, IMHO, is its intrinsic presumption of homogeneity of the cloud market model itself.  We assume that adoption is working against a static goal, that users are on a learning curve that will take them to the same place at the end.  Suppose, though, that it’s not the user that’s changing but the cloud?  If in fact the cloud is a moving target, then how would you know if users were “early” to the cloud or perhaps not even there yet if measured by the evolving cloud paradigm?  What is the actual evolution of the cloud model, and where are we in that process?

If we start at the top (as I’m always inclined to do) then we can divide cloud computing into two main benefit models—cost-based and feature-based.  The cost-based model of the cloud says that it can do the stuff we do today at a lower cost.  The feature-based model says the cloud can do stuff we don’t do today at all.

There has always been a problem with the cost-based benefit model.  If you look at the essence of cost-based cloud adoption, it is this:  “I can create an IT service so much cheaper than you can that I can earn a respectable ROI on my service and still offer you enough savings to offset your concerns.”  That’s obviously possible, but also obviously something that gets harder as you move out of “early adopters” with special cost situations and into the mainstream.  Economy of scale isn’t linear, as I’ve pointed out before.  The curve plateaus, and that means that every win makes the next one harder to come by.
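
Here’s a trivial numeric sketch of that plateau.  The power-law shape and the exponent are my assumptions for illustration, not measured data:

```python
# Back-of-envelope: assume unit cost falls with scale on a power law
# (exponent is an assumption) and watch the increment shrink.
def unit_cost(servers, base=100.0, exponent=0.25):
    return base / servers ** exponent

for n in [100, 1_000, 10_000, 100_000]:
    print(f"{n:>7} servers: unit cost {unit_cost(n):6.2f}")
# Each 10x step in scale buys a smaller absolute saving than the last,
# which is the plateau that makes each new cost-based win harder to find.
```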

If the cost-based cloud is indeed under pressure you’d expect warning signs, and I think they’re there for all to see.  If IaaS margins and value propositions are under more threat, we should see more focus on PaaS and SaaS because these two cloud models displace more costs and thus build benefits for a broader community.  That’s exactly what we do see.  You’d expect to see cloud pricing fall as providers try to get volume adoption kicked off, and we see that too.  You’d expect to see cloud providers looking more to augment basic cloud revenues as their own margins shrink, and new “platform service” enhancements to the cloud seem to be announced every day.

IaaS, in my model, runs out of gas short of 10% penetration into overall IT spending.  If you add in SaaS and PaaS you can get cost-based cloud up to just under 25%, but beyond that the model says that savings are too low for buyers and profits too low for sellers.  By 2020 you need a transition to a feature-driven model of the cloud or you hit the wall.

Feature-based cloud opportunity seems simple and obvious, even inevitable, on the surface.  It’s easy to identify things (like mobility) that are opportunity drivers.  The problem is that these drivers have been around for a year or more and nothing of any consequence has happened.  At least, so the enterprises say.  Almost all of them recognize that there are special application planning and development techniques associated with building cloud-specific apps that could drive a feature-based cloud explosion.   They just don’t know what they are.

Some startups do.  Many, in fact, have built custom applications for the cloud and are exploiting cloud capabilities extensively.  Most social-network companies fit this model, for example.  The problem of course is that the skills involved are in great demand and enterprises can’t afford them, nor would there be enough to drive a buyer-side revolution.  It would have to be up to vendors…

…who in the main are happy to stay the course with regard to application design.  It makes sense to milk your R&D as far as you can, and if everyone else has that same mindset you have little to fear.  To be fair, though, it’s easier to think cloud revolution in a narrow social-network application range than for a broad enough enterprise market to make things interesting.

The other issue with evolving the cloud to being feature-driven is that the architecture of the cloud today is all wrong.  Most cloud providers have a few data centers designed to be (you guessed it) cost-efficient, and there is little attempt to integrate big data or contextual resources.  The latter of these conflicts with the former; contextual mobile apps demand fairly local processing for short control loops and response time.  We need a rich intra-cloud network and that’s hard when you don’t even have intra-cloud infrastructure outside a data center.

The feature-driven model of the cloud seems to need the very thing that the cost-driven model doesn’t want—distribution of resources.  There are two paths that might be taken to that goal—one by exploiting the opportunities of mobile contextual services and the other by exploiting Network Functions Virtualization.  In both cases there’s an incentive to push processing more to the edge, and to distribute processes and data co-equally.  That distributable-process notion seems to be the major difference between a cloud-specific application and one that just runs in the cloud.

Users of the cloud aren’t in the early-adopter phase, then.  The cloud is in the early-adoption phase, and we get out of that when we can enlighten enterprises and enrich cloud infrastructure at the same time and at a reasonable pace.  I don’t think any evolution of the traditional cloud model is going to get us to that point, but I think that either mobility or NFV might well do it.  That would make these two network technologies the best friends the cloud ever had.

Some Progress to Report in the NFV Business Case

There’s some good news on NFV, which I’m happy to be able to report since good news (other than hype) is hard to come by these days.  I reported early this year that operators’ CFOs had told me in the spring of 2014 that they were so far unable to make a conclusive business case for NFV.  They also said that they were not of the view that current PoCs would enable them to do so.  Over the last three weeks those same operators have told me that they are in fact seeing more clarity on the business case.

Where we stand today is that about half the operators I’d talked with in 2014 now believe that they could make a business case for NFV deployment based on the evolution of their current tests and trials.  The difference between then and now is that operators say they’re getting a lot more involved with operations integration in their testing, and that operations was the weak link in their business cases up to now.

An even more heartening piece of news is that two-thirds of operators now say that their CIO is getting engaged in the NFV trial process, representing the OSS/BSS side of things.  While many network vendors are squeamish about CIO involvement because they don’t deal with that side of the house much, the fact is that without strong support from operations there was never much of a chance of making NFV work.

Most of this progress has come along since late November, when operators completed the annual technology review that feeds their 2015 budget processes.  A couple operators told me that this process is what brought what one called “clarity” to their trial planning and was instrumental in getting the CIOs involved.  The general view was that this fall’s tech planning will likely focus on broader deployment, which means 2016 could be a good year for NFV types.

I got a couple of other interesting comments from operators while talking about the trials, and so I want to summarize them here.  I can’t say too much in detail without violating confidences.

First, it seems clear that trials are dividing into those that are starting to get broader CxO buy-in and those that are still science projects.  About a third of all PoCs seem to fall into the latter category and I’d guess that as many as two-thirds of these won’t advance beyond the lab this year.  All I can say is that the ones that have made the most progress are those that involve one of the key vendors I’ve talked about.

Second, even where CIO involvement has solidified and operations progress has been made, there is still an indication that the scope of the projects is pretty limited.  While the operators themselves may not see it this way it’s my own view that most of the cases where the business case for NFV can likely be made will still not prove the NFV case broadly.  Early applications “covered” by the tests probably won’t involve more than about 10% of capex maximum, which means NFV has to be pushed further into other areas to make a big impact.  As far as I can see that big impact can be proved in only five or six trials.  Everyone else may have to dance some more before they get a broader sign-off.

Third, we are starting to see a polarization in NFV announcements just like we have already noted for trials.  Some vendors are involved enough with the “real” activity, the stuff with a business case behind it, to be able to frame products or product changes in the light of real opportunity.  The rest are still just singing in the direction of the nearest reporter/editor.

Fourth, we’re starting to see some pressure and desperation among vendors, even sadly a few who are actually doing good things.  Operators have long sales cycles and a lot of backers/VCs and even internal company sponsors are just not able to stay the course much longer.  We will almost surely see some casualties in the space late this year or early in 2016.

Fifth, I’m starting to see some winners.  Some companies—a very few—are stepping up and doing the right thing in trials and in customer engagements.  My favorite vendor has generated a lot of good stuff now, documents so thorough and relevant that it is amazing that so little is known about it.  Perhaps those who really have the right stuff aren’t interested in letting it all out yet, or perhaps companies are still groping for a way to make something as technical as NFV into a marketing story that’s not going to take rocket scientists (or at least computer scientists) to understand.

So are we ready for field trials?  Even the more optimistic operators aren’t betting on trials before perhaps September, though a few say that if things were to go just soooo…well, perhaps mid-summer.  Personally I think that as many as three operators could go to full field trials by mid-July if they and their vendors did the right thing, but there’s a lot of work still to be done, particularly on the integration front.

That’s my key take-away, in fact.  Yes, we are finally making progress, but a full solution to agile, efficient operations is still elusive.  A few players could get there for sure, and a few more could get close enough to do a convincing trial and complete their work in the fall.

There are still a few troubling signs, even among operators who are making progress.  The question “Where do I start with NFV?” still rates among the top three in the most-advanced group of operators.  What they’re asking is for help identifying the places where NFV can make an early difference at the business level, and that shows that planning for NFV is still more technically driven.  The fact that they’re asking vendors the question is troubling given that there is no simple answer, no strategy that works for even a majority of operators.  You have to understand your own business to understand how to improve it.

And yet…I’m more hopeful.  It’s better to be wondering about NFV business cases than worrying over details of standards and specifications in a vacuum.  NFV has absolutely no merit if it doesn’t have business value, and we’re further along in deciding where that value can be found and how prevalent those situations are in various markets and operators.  That’s good progress.

Finding the Opportunity to Justify Agility

In recent blogs I’ve been arguing that we’re focusing too much on the technology of new services and not enough on the actual opportunity.  There are scads of technological suggestions on where new revenue can be located, but where do you find opportunity?  That’s a question that’s always being asked by anyone who sells anything, including network and cloud providers and vendors.  The only sure way to find out whether someone is a prospect is to present them with an optimum value proposition and see if it resonates, but that’s obviously not a scalable scenario.  There are better approaches, and that’s the topic for today.

I’ve studied buyer behavior since 1982 in a survey sense, and in 1989 I built a decision model that uses survey data to forecast stuff.  The model was designed to incorporate what I learned about how decisions are made, and a high-level view of that process is essential as the opening round of any opportunity targeting approach.

The process of turning random humans or companies floating in space into committed customers is what I call trajectory management.  All these things are jumping around in Brownian motion, with no structure or organization, no cohesion, no way to target them with sales efforts efficiently.  What you have to do is get them in sync, or at least get a useful community of them in sync.  You want them moving toward your marketing/sales funnel in an orderly way, so they intersect it, are guided into a common mindset, and turned over to the sales process.

In modern times, trajectory management is based on a progression:  editorial mentions sell website visits, website visits sell sales calls, sales calls sell products.  Buyers become “suspects” when they see your name in the media, they become prospects when you engage them on your website and with “anonymous collateral”, and they become customers through a personal contact.  You can’t jump over steps in this progression, so you have to be very wary of presenting too much information in the early part of the progression.  You will never sell a product because it’s mentioned in Light Reading or Network World, and trying to get enough into a story for that to happen will cause the story and your effort to fail.  Get the story, get the website visit, get the sales call.

Where does “targeting” come in, then?  Everywhere, largely for a combination of collateralization and prioritization.  You need to understand the messages that resonate most and most quickly, and you need to understand who will likely pull the trigger on a deal fastest given a particular value proposition.  All of this is like jury selection; it’s not an exact science but people still pay a lot of money to find stuff that could help.

Most countries have economic resources available online, and that’s true of the US of course.  The combination of tools that works the best is economic information by industry, particularly on employees and capital spending, and distribution of industries geographically.  With this combination you can do a lot to improve your chances of making the right statements, building the right products, targeting the right prospects with the right message.  You can also learn a lot about business behavior.

A logical question to ask when you’re talking about the opportunity for a product or service is the size of the market the new thing fits in, preferably by industry and geography.  Government data offers this in those two phases I mentioned, but running through a complete example would take too much time, so let me look only at the first phase in detail and describe the second in general.

If you think that cloud computing is the transfer of current data center apps to public cloud services, you could presume the biggest spenders on centralized IT would be the best targets.  The US market data says that North American Industry Classification System (NAICS, the successor to the Standard Industrial Classification or SIC) code 522, which is credit intermediation and related activities, has the highest central IT spending of all sectors.  Wholesale trade, renting/leasing, and retail trade follow (excluding the IT industry itself).  Many of these wouldn’t be considered cloud prospects, so an analysis like this could open up some new possibilities.

Suppose you think that the real opportunity factor is prospects for SaaS?  Obviously the government isn’t going to survey on that topic, but what they do survey on is the spending on integration services.  If you’re a company who relies on integrators you may well be unable to acquire and sustain an IT organization of your own, so you’re a great prospect for a cloud service that gives you the applications you want in an as-a-service package.  Top industry there is retail trade, which is interesting because they’re also high on the ranking in terms of central spending.

For doubters, let me point out that the retail industry has been a big consumer of architected services for electronic data interchange—EDI, the stuff that transfers purchase orders and payments and so forth.  Data on current SaaS spending is hard to break out from company sources, but my surveys have shown retail firms to be early adopters of SaaS as well.

How about opportunities for network services and NFV-based services?  These are likely tied pretty closely to what companies are spending on network equipment.  The industries leading that category are (again omitting IT and networking companies themselves) our old friend credit intermediation followed by miscellaneous scientific and technical services.  This last category also ranks high in use of integration services, which would suggest it’s a good target for network-as-a-service.

Or perhaps you see network and cloud opportunities arising where companies have a very large investment in personal computers and distributed elements?  Two old friends here, retail trade and miscellaneous scientific and technical services.  So the same top industries show up using this metric as well.

This total-spending measurement is a good opportunity gauge, but it may not reflect the level of acceptance.  We can also look at firms by how much of their IT budget is spent on centralized IT, distributed IT, networking, and integration.  In terms of percent spent on central IT the banks and credit firms top the list and retail is down in the standings quite a bit.  The industries with the largest percentage of budget spent on distributed IT are petroleum and coal products and printing and related activities.  Networking as a percent of total spending is highest for machinery firms, food-beverage-tobacco, and our miscellaneous scientific and technical services.  Integration spending is highest as a percentage in construction, ambulatory health care, and apparel.

Whatever measure you use to rank industries, the next step is to get a density map of that particular NAICS across your prospecting geography.  If you know you want to target our favorite “miscellaneous scientific and technical services” you look for metro areas or other points where that NAICS is found most often.  If you have access to more refined data (typically from commercial rather than government sources) you may be able to get the density plot for headquarters locations only, which is better since in most cases technology buying is done from the company HQ.
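
Here’s a hedged sketch of that two-phase flow in code: rank industries on a spending metric, then map where the winning NAICS concentrates.  All the numbers are invented for illustration; real inputs would come from Census or BEA economic series.

```python
import pandas as pd

# Phase one: rank industries by a spending metric (values made up).
industry = pd.DataFrame({
    "naics": ["522", "42", "44-45", "5417"],
    "name": ["credit intermediation", "wholesale trade",
             "retail trade", "scientific R&D services"],
    "central_it_spend_musd": [5200, 3100, 2900, 1800],
})
top = industry.sort_values("central_it_spend_musd",
                           ascending=False).iloc[0]

# Phase two: density of that NAICS by metro area (values made up).
density = pd.DataFrame({
    "metro": ["New York", "Charlotte", "Chicago"],
    "naics": ["522", "522", "522"],
    "establishments": [4100, 1900, 1700],
})
targets = density[density.naics == top.naics].sort_values(
    "establishments", ascending=False)

print(f"Top industry: {top['name']} (NAICS {top.naics})")
print(targets.to_string(index=False))
```

Swapping the ranking column for integration spending or networking spend, per the alternatives above, reuses the same pipeline with a different opportunity hypothesis.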

With knowledge of NAICS and some further insight on things like how much the NAICS outsources and how its budgets are spent overall, it’s possible to determine what messages would likely resonate.  My surveys show that companies with very centralized IT have strong CIO organizations and are less likely to favor things like SaaS, where companies with more distributed IT are the opposite.  That helps not only in generating collateral for sales use, but also in deciding the kind of reference accounts that would be considered most valuable.

The point of this is that NFV and SDN and the cloud are all selling into a world where IT is the rule and not the exception, and by understanding how IT dollars are currently spent, you can optimize getting some of those dollars for your early SDN or NFV services.

Reading Cisco’s Earnings Through a VMware NFV Lens

Cisco came out with their numbers, and while I could talk about them by themselves, I think it might be more helpful to consider them in light of some other news, which is VMware’s decision to get serious about NFV and even Facebook’s tossing in a new data-center switch.  I’ll mention some Cisco-numbers commentary where appropriate, but I want to open with some quotes from John Chambers:  “Let’s start with a basic question, like why Cisco?”, and end with “We are well positioned for a positive turn in either service provider or emerging markets. But we are not modeling those turns for several quarters despite the better results we saw in this last quarter….”

VMware and Cisco have a lot of competing dynamics and a lot of common factors.  Most recently, NFV has joined the list of technologies both must support, a response to market conditions on the provider side.  The reason is obvious; my model has been showing that NFV is likely to be the largest single source of new data center deployments for the next decade, and also likely the largest single source of new server deployments.  It will consume the most platform software and the most new management tools, and frame cloud computing’s future.  That’s a heady mix; too much upside to ignore.

The challenge for everyone in the space is that Facebook proves that if you build data centers for net-centric use, you may find it easier to design your own stuff, or at least pick from commodity white-box options (Cyan and Pica8 had some new developments in these areas earlier this week too).  The obvious solution is to rely on symbiosis to get a piece of the new action.  If you can support the NFV revolution that’s driving change you might stave off the wolves.

For VMware, there may be another dimension.  Nothing feels better than to rub a rival’s nose in a dirty dog dish, and that’s less risky when the rival is chained.  Cisco has demonstrated that because of its hardware dependency, software-defined anything is considered a risk to be managed, not an opportunity to be seized, an attitude that somewhat limits Cisco’s aggressive options.  VMware has no incumbency in network equipment and so can dance wild and free into the virtual age.

Let’s get to John’s early question, “Why Cisco?”  This has two faces, one of which is why Cisco is doing what it always does, which is be a fast follower.  The pressure that Cisco’s need to sustain its near-term hardware revenue stream creates for the company is significant.  Most of what Cisco announces these days boils down to 1) assertions that the “Internet of xxx”, where “xxx” is anything exciting and different (Chambers did his Internet of Everything routine on the call), generates a boatload of new traffic operators have a Divine Mandate to carry or 2) enhancements to switching/routing protocols to create SDN-like features without having to adopt white-box SDN.  The second face is “Why Cisco?” in the sense of why buy from them, and for network operators it’s clear the answer has to be “because they’re supporting where we need to go”.  That’s clearly at odds with the first interpretation of that same question.  They’d like Cisco to lead in NFV.  They are not, which is probably a factor that VMware has noticed.

However, VMware has its own issues.  Their stock has been in general decline over the last six months as financial rags tout views like “Virtualization in Decline!”  Under pressure from open-source software in the cloud, they can hardly afford to ignore an opportunity like NFV, but their traction opportunities are a bit slim.  Cisco is a server vendor, which means they have a big financial upside from all those servers and data centers.  VMware can hope for a platform software upside on a hosting deal, but servers cost more than platform software, particularly when so much operator focus is on open-source versions of what VMware would like to license to them.

If you look at NFV objectively it’s hard to see why Cisco would find it such a threat.  Because NFV could at least admit standard boxes under the NFV tent through generalization of the Virtual Infrastructure Manager (which HP and Oracle among others have already proposed), it could build a layer above both legacy boxes and evolved network device notions like SDN.  That might reduce operators’ interest in changing out gear by making it unnecessary to switch switching (so to speak) just to talk to it from NFV.

Part of the explanation may be Cisco’s beating the revenue line, which came about because it increased revenue 7% y/y against a weak comparable quarter last year.  It’s hard not to see this as a possible indication of a rebound in carrier capex, especially if that’s what you devoutly hope it is.  Stay the course a few quarters longer and 1) Chambers can retire on an up or 2) he won’t have to retire at all.  Take your pick, Chambers-watchers.  But even Chambers has to admit that nothing is turning around for the next couple quarters, and their forward growth guidance of about 4% reflects that.

VMware is probably salivating over Cisco’s business challenges, but it has its own issues.  The cloud has jumped more on OpenStack than VMware hoped, because public cloud has been much more valuable than private cloud, which VMware could have supported easily (and did, belatedly IMHO) as an evolution of virtualization.  Cisco took the OpenStack approach, but while aggressive action there might at the least have handed VMware its head by advancing OpenStack more visibly, Cisco has been more interested in differentiating UCS than in promoting standards.

UCS growth was strong in Cisco’s quarter, up by 40% compared to switching’s 11% and routing’s 2%.  But UCS was a bit under $850 million when routing was about $1.8 billion and switching double that.  You can see that Cisco needs more UCS but doesn’t need it to indirectly (via SDN and NFV) cannibalize switching and routing revenue.  And above all it doesn’t need any weakening of account control.

Now we have an NFV war, which pits a new (and thus totally hazy) VMware offering against a Cisco offering that’s always seemed more like an NFV billboard than a product.  Cisco doesn’t sing a bad NFV song (it sounds in fact a lot like Oracle’s story on the surface, including the TMF-ish CFS/RFS angle) but they aren’t singing it loud and strong, as another song goes.  VMware has a chance to take a lead in positioning…if…

…they can figure out something to say.  Obviously parroting Cisco’s story with more verve and flair would be better than nothing (or even than Cisco) but probably not enough given that others in the space are demonstrating substance.  VMware needs to join my club of players who actually can do something to drive NFV, not the club of NFV hype artists (a much larger group).

So does Cisco, not only to compete with VMware, but also to compete with Oracle and Alcatel-Lucent and HP and even Overture.  There are big companies with big stakes in the ground leaping on the NFV bandwagon daily, and it’s hard to imagine none of them have taken the decision seriously.  I suspect that Cisco did too, and took the tactical short-term approach.

The stakes are rising now and it’s time to get to the last quote from the call that I opened with.  Chambers admits that the service provider market isn’t going to turn around in a few quarters.  Earth to John: It’s never going to “turn around” in the sense of returning to investment patterns of the past.  You just gave yourself a couple of quarters to face facts, as so many of your competitors are obviously doing.  I’d get moving if I were you.

They could, too.  Cisco’s NFV story may be a billboard but I think I know the company well enough to know that what’s missing is intentions and not capabilities.  They almost certainly have as much as Oracle or Alcatel-Lucent, at the least, and they have a lot of inventive people.  And their desire to preserve the present could align pretty well with operator goals not to sink themselves in accelerated write-downs.  If Cisco could get the orchestration/operations story right they could come out of this quickly.  Faster than VMware for sure, unless they bought their position.

That’s my advice for VMware.  Unless you already have a product sitting somewhere waiting for an organization to spring up, you’re going to need to start spending like a sailor on M&A and hope that John waits just a little bit longer.  Hope, but don’t bet on it.

Oracle’s NFV Orchestration: Does it Stack Up?

NFV as a technology has captivated a lot of people.  For it to be a “revolutionary technology” it has to do something revolutionary to the way we create network services and purchase network infrastructure.  That obviously has to start with some set of NFV products, created by credible sources and delivered with a compelling vision that wins over risk-averse decision-makers.  Such a source has to find a benefit for themselves too, and this kind of win-win has been hard for vendors to frame so far.

Especially for software vendors.  There are three companies in the market that we would say are decisively on the software side of the vendor picture—IBM, Microsoft, and Oracle.  None of these guys have been powerhouses in NFV positioning; in fact all have been non sequiturs, at least until now.  Oracle is now stepping up in NFV with an announcement it made yesterday, and the fact that underneath the covers Oracle is a major cloud provider, a major application provider, and even a provider of highly credible carrier-level servers and operating systems software makes it even more interesting.

Oracle laid out its basic NFV approach last fall, and since all NFV approaches map to a degree to the ETSI E2E architecture it wasn’t revolutionary.  What they are now announcing is the details on their Network Service Orchestrator.  To put that term into perspective (meaning ETSI perspective) it’s kind of a super-MANO NFVO, but that’s probably not the way to look at it.  The best part of Oracle’s talk is that they’ve put both NFV overall and their own stuff on a kind of vertical stack from OSS/BSS down to infrastructure, so let me use that reference as a starting point to talk about Oracle’s strategy.

In the Oracle vision, OSS/BSS and the TMF world guide a series of service lifecycle processes that focus on what gets sold and paid for.  They also coordinate craft activity, so things about service ordering that involve real humans doing stuff like installing CPE are up in this top layer.  In the TMF world, this layer builds toward the Customer-Facing Service, a term I’ve used a lot.  The TMF has recently seemed to be working around or away from the CFS concept, but it may be making a comeback with the TMF’s ZOOM modernization process.

Below CFS in Oracle’s stack is the Resource-Facing Service (RFS), which is also a TMF term.  According to Oracle’s model, ETSI is a means of realizing RFSs, so the ETSI processes start below the RFS.  In a diagrammatic sense, Oracle is saying that their OSS/BSS offerings cover the top of the service-to-resource stack, and that their new Network Service Orchestrator will cover the bottom, with a critical overlap point at RFS.  If you’ve followed my blogs and my work in ExperiaSphere you know that I believe in CFS/RFS and believe that service-to-resource boundary is critical, so I’m in favor of this positioning.

Using this structure and this service-resource stack as their foundation, Oracle then applies service lifecycle management examples to explain what they’re up to, which again is something I do myself so I can hardly criticize it.  The end result is a strong presentation, positioning that is clear at the high level, and one that is anchored firmly in both the ETSI stuff and the TMF stuff.  Oracle is the only NFV player to offer a highly OSS/BSS-centric vision and to build a story from top to bottom that’s fairly clear.

Oracle also passes a couple of my litmus tests for sanity in VNF positioning.  They don’t say that OpenStack is NFV orchestration.  They have a generic VNF Manager (VNFM) and not VNF-specific ones.  They have analytics and policies.  There’s enough here to prove that Oracle isn’t just blowing smoke at the world.  The bad news is that I can’t validate a lot of the detail from their material, and there are questions that in my mind demand validation.  There’s not enough detail to prove it works.

Let’s start with NSO.  When Oracle introduced ETSI MANO as a concept they cited the notion that there were three layers of orchestration within it—MANO, VNFM, and VIM.  I don’t disagree that there are three “orchestrable elements” represented, but it’s not clear to me why we have three independent orchestrations going on there.  Nor is it clear how these three orchestrations are coordinated in the Oracle approach.  My own vision is to model everything at all three levels using a common modeling language.  Since I don’t have any details on Oracle’s modeling approach, I can’t offer a firm answer to what they do, though their material suggests to me that there may actually be three different models and orchestrations here.

If you count the OSS/BSS service-level stuff there could even be four.  Oracle cites an example of a deployment that might involve a piece of CPE in some cases or a deployment of a virtual element in others.  Obviously if CPE has to be deployed in a truck roll, you’d have to manage the craft activity via OSS/BSS, but suppose there’s CPE there already, or that the service choice between a physical and virtual function is inside the network, a matter of whether a given user is in a zone where NFV is available or not?  Then operations really doesn’t have to know—it’s a deployment choice to parameterize or to host a VNF.  And all of these decisions have to be modeled, orchestrated, if you’re going to choose between rolling a truck and pushing functionality out to virtual CPE.  Is this orchestration option four, with yet another modeling and orchestration toolkit?  It would make more sense to consolidate orchestration here, I think.  If event-driven OSS/BSS is a goal then why not fold it into your functional-MANO stuff?  Maybe they do, but I can’t tell from the material.
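
To make the alternative concrete, here’s a minimal sketch of the single-model approach I’m arguing for (all node types and names are hypothetical, not Oracle’s): NFVO-, VNFM-, VIM-, and even OSS-level elements are nodes in one tree, decomposed by one engine, so the physical-CPE-versus-VNF choice is a model edit rather than a fourth orchestration stack.

```python
# One modeling grammar across all the "orchestration" levels: a single
# traversal dispatches each node to the right handler by its level tag.
MODEL = {
    "name": "vcpe-service",
    "level": "nfvo",
    "children": [
        {"name": "firewall-vnf", "level": "vnfm",
         "children": [
             {"name": "fw-vm", "level": "vim", "children": []},
         ]},
        # Swap this node for another "vnfm" subtree and the truck roll
        # disappears; nothing above the model changes.
        {"name": "physical-cpe", "level": "oss", "children": []},
    ],
}

HANDLERS = {
    "nfvo": lambda n: print(f"compose service {n}"),
    "vnfm": lambda n: print(f"manage VNF lifecycle for {n}"),
    "vim":  lambda n: print(f"ask infrastructure manager to host {n}"),
    "oss":  lambda n: print(f"dispatch craft/OSS task for {n}"),
}

def orchestrate(node):
    """One engine, one grammar; the level tag picks the handler."""
    HANDLERS[node["level"]](node["name"])
    for child in node["children"]:
        orchestrate(child)

orchestrate(MODEL)
```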

The third point is the service/resource boundary, the overlap point for OSS/BSS and NSO.  There are a lot of powerful reasons why the service and resource domains have to be separated and a lot of benefits that could accrue once you’ve done that.  Some of these are truly compelling, significant enough to deserve mention and collateralization.  In fact, these benefits are the real glue that binds the lifecycle process set Oracle uses.  They are not mentioned, much less collateralized.

But there is a lot of promise here.  Oracle’s cloud strategy is based in no small part on SaaS delivery, which means it has to face at the cloud level many of the same deployment and operationalization issues that operators face with NFV.  Solaris is a premier operating system for the cloud, with strong support for containers and big data built right in.  Oracle has database appliances that could be used to manage the collection, analysis, and distribution of operations information.  They even have servers, though their positioning suggests they’re not going to get hidebound on the question of using Oracle iron with Oracle NFV.  The point is that these guys could really do a lot…if all this stuff is as good in detail as it is in general.  So I’m not carping as much about their concepts as I am about their collateral.

Oracle has done the best job so far of positioning its NFV plans for public consumption.  Their slides are clear, they cover the right high-level points, and they demonstrate a grasp of the problem overall.  In that regard they’ve beat out all the other players, including those whom I’ve given the nod as the top contenders in NFV.  However, NFV has to be more than just a pretty face.  My own standard for rating NFV implementations is that I have to have documentation on a point to fully validate it.  Oracle did offer me a deeper dive briefing but as I told them, I can’t accept words alone and no further details in document form were provided.

I can’t give Oracle an NFV rating equal to HP or Overture, both of whom have given me enough collateral to be confident of their position.  I can’t even give them as much as I’d give IBM, whose SmartCloud Orchestrator is the only example of TOSCA modeling, which I think is the best approach for NFV.  They fit in my view of the NFV space where Alcatel-Lucent fits, a company who I believe has the stuff to go the distance but who’s not been able to collateralize their offering enough for me to say that for sure.

Of course, they don’t have to sell me, they have to sell the network operators.  I know they have decent engagement in some NFV PoCs, and Oracle is going to make a push for their NFV strategy at Mobile World Congress and again at the TMF meeting in Nice in June.  These events will probably generate more material, and I’ve asked Oracle to provide me with everything they can to explain the details.  If I get more on them from any of these sources, I’ll fill you all in on what I see.  In the meantime, it will be interesting to see if Oracle’s entry causes some of the other NFV players to step up their own game.  Singing a pretty song doesn’t make you a pretty bird in the tree, but absent the song nobody will know you’re there at all.

My Thermostat Doesn’t Want to Talk to You

OK, I have to admit it.  There is nothing on the face of technology today that fills me with the mixture of hope and admiration and disgust and dismay like the Internet of Things.  Even the name often annoys me.  It brings to mind the notion of my thermostat setting itself up a social network account.  The hype on this topic is so extreme, the exaggerated impacts so profound, that I’d despair and call it all a sham were it not for the fact that there is real value here.  We’re just so entrapped in crap that we’re not seeing it, and we’ll never prepare for the real parts if we can’t get past the exaggerations.

Home control is a big part of the IoT, right?  Suppose that we were to make every light, every outlet, every appliance, every motion sensor or infrared beam in every home a sensor.  How much traffic would this add to the Internet?  Zero.  None.  Suppose we were to place similar controls in every business.  How much traffic?  None.  Even if we were to add sensors to every intersection, add traffic cameras, add surveillance video to every storefront we’d add little or nothing.  We already have a lot of this stuff and it generates no direct Internet impact at all.

Control applications aren’t about broadcasting everything to everybody, they’re about letting you turn your lights on or off without getting up, or perhaps even turning them on automatically.  You need to have sensors to do that, you need control modules too, and controllers, but you don’t need to spread your home’s status out across the Internet for all the world to see.  Sensors on sensor networks don’t do much of anything, and most controllers don’t do anything to traffic either.

How about the fact that “home control” can sometimes be exercised from outside the home?  There are times when you might want to turn your home lights on from your car as you drive onto your street.  There are times when you might want your home to call you (to tell you your basement is wet) or call the police or fire department.  The thing is, we do all of this already, and it’s not like your home is going to call you every minute even if it could.  The fact is that in a traffic sense the IoT is kind of a bust.  Similarly, it’s not going to add to the list of things that need IP addresses; most sensors and controllers use non-IP protocols, and those that do use IP typically sit on private addresses that aren’t visible on the Internet anyway.  If you call your home from your cell to turn your lights on, you’re likely only consuming a port on an IP address you already have.
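
To make the addressing point concrete, here’s a minimal sketch of a hypothetical home controller; the port number, the address notes, and the LIGHTS_ON command are all invented for illustration:

```python
import socket

def serve_lights(port: int = 8443) -> None:
    """Hypothetical home controller.  In practice it sits on a private
    RFC 1918 address (say, 192.168.1.50); the router forwards one port
    from the home's existing public IP, so remote control consumes no
    new Internet addresses and almost no traffic."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind(("0.0.0.0", port))    # listen on the LAN side
        srv.listen(1)
        conn, _addr = srv.accept()     # one short call, one existing port
        with conn:
            if conn.recv(64).strip() == b"LIGHTS_ON":
                conn.sendall(b"OK\n")  # hand off to the local control bus
```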

Besides being an obviously great yarn to spin, does the IoT actually offer anything, then?  I think it does, and I’ve blogged about a part of what it might offer in the past.  We could expect to see control domains (sensors and their controller) abstracted as big-data applications, available through APIs for the qualified to query.  We could expect to see some of the real-time process control stuff, which is what self-drive vehicles are conceptually related to, generate “local” traffic that might even get outside our control network and touch the Internet.  But traffic isn’t going to be where IoT impacts things, nor will addresses or any of the stuff you always hear about.

The biggest impact of IoT on networking is in availability.  If I want to turn on my home lights as I drive up, I don’t need a lot of bandwidth but I darn sure need a connection.  If I were to find that most of the time my lights-on process failed, I’d be looking for another Internet provider.  If I’m expecting my car to turn left when it’s supposed to and it runs forward into the rear of a semi because it lost Internet connectivity, I’m going to be…well…upset.

The Internet is pretty reliable today, but most home alarm systems don’t use it; they call out over wireline or wireless on what’s a standard voice connection, because that’s more likely to work.  Unless we want the Internet of Things of our future to keep phoning home (or out from home) that way (how pedestrian is that?), we first have to be sure the Internet can do the kind of stuff that’s already bypassing it for availability reasons.  But would we pay enough for that availability improvement?

A second big impact is latency.  My car is moving down the street, and the sensors on the corners and lamp-posts tell the Great Something that another vehicle is approaching on the side street.  If it takes two or three seconds for the message to get to the controller, be digested, get back to my car, and be acted on, and I’m moving along at 60 mph, then I’m a couple hundred feet along before the recommended action can be taken.  Even a hundred milliseconds is nine feet at my hypothetical speed.  I can hit somebody in that distance.
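
To put numbers on that, here’s a quick Python sketch of the arithmetic (60 mph and the delays are just the examples above):

```python
# Feet a vehicle covers while a control message round-trips.
MPH_TO_FPS = 5280 / 3600  # 60 mph is 88 feet per second

def feet_traveled(speed_mph: float, loop_delay_s: float) -> float:
    """Distance covered during one sensor-to-controller-to-car loop."""
    return speed_mph * MPH_TO_FPS * loop_delay_s

for delay_s in (3.0, 2.0, 0.1):
    print(f"{delay_s * 1000:6.0f} ms at 60 mph -> {feet_traveled(60, delay_s):5.1f} ft")
# 3000 ms -> 264.0 ft, 2000 ms -> 176.0 ft, 100 ms -> 8.8 ft
```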

Related to this is the third impact, which is jitter.  Whatever the character of a control loop, the worst thing is a lack of stability in its performance.  You can design around something or avoid using it if it stays where it is, but if it jitters all over the place you find yourself stopping at the stop sign one minute and hitting the semi the next.  That sort of uncertainty wears you down and surely reduces the sale of self-drives.

Home controllers offer what I think is the logical offshoot of all these issues.  Why do we have them?  Because you have to shorten control loops where real-time responses are important.  Rather than try to create a sub-millisecond Internet everywhere, the smart way is to host the intelligence, the controller, closer to the sensors and control elements.  So what we need to be thinking about with IoT isn’t traffic on the Internet or addressing or even Internet latency and jitter; it’s process placement.
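
Here’s a minimal sketch of that placement, with all class and method names hypothetical: the real-time loop lives beside the sensors, and only non-real-time summaries travel upstream:

```python
class LocalController:
    """Hosts the control loop next to the sensors it serves, so the
    real-time path never crosses the Internet."""

    def __init__(self, upstream) -> None:
        self.upstream = upstream  # slower, wider-scope parent process

    def on_sensor_event(self, event: str) -> None:
        action = self.decide(event)          # local, sub-millisecond path
        self.actuate(action)
        self.upstream.report(event, action)  # non-real-time summary only

    def decide(self, event: str) -> str:
        return "lights_on" if event == "motion" else "no_op"

    def actuate(self, action: str) -> None:
        pass  # drive the local control modules; nothing leaves the home


class UpstreamProcess:
    def report(self, event: str, action: str) -> None:
        pass  # aggregate for analytics; latency here doesn't matter
```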

Network standards won’t matter to IoT.  What matters is inter-controller communication, big-data abstraction APIs for raw information access, and the like.  My car controller has to talk to its own sensors, to local street sensors, to route processors, traffic analyzers, weather predictors, and so forth.  None of this is going to create as much traffic as a typical teen watching YouTube but it will create a need to define exchanges, or we’ll have cars running into areas where they can’t talk to the local control/sensor processes and whose drivers are probably watching YouTube too.
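
As a sketch of what one of those big-data abstraction APIs might look like (the domain, fields, and query are invented for illustration):

```python
from dataclasses import dataclass

@dataclass
class IntersectionSummary:
    intersection_id: str
    vehicles_per_minute: float
    signal_phase: str

class ControlDomainAPI:
    """Facade over one sensor/controller domain.  Qualified outsiders
    (a route processor, a traffic analyzer) query summaries; the raw
    sensor chatter never leaves the domain."""

    def __init__(self, readings: dict) -> None:
        self.readings = readings  # what the local controller collects anyway

    def query_intersection(self, intersection_id: str) -> IntersectionSummary:
        r = self.readings[intersection_id]
        return IntersectionSummary(intersection_id, r["vpm"], r["phase"])

api = ControlDomainAPI({"5th-and-main": {"vpm": 22.0, "phase": "green"}})
print(api.query_intersection("5th-and-main"))
```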

Making stuff smarter doesn’t necessarily mean it has to communicate, or communicate on a grand global scale.  What is different about “smart” versus “dumb” in a technology sense isn’t “networked” versus “non-networked”.  Being connected isn’t the same as being on the Internet.  The key to smartness is processes and the key to extending real-time systems over less than local space is process hierarchies.  Processes will be talking to processes, not “things to things” or “machines to machines”.  This is about software, like so much else in networking is.  It’s about the cloud, and it’s process hierarchies and cloud design that we need to be thinking about here.  The rest of the news is just chaff.

Our thermostats don’t need to talk to each other, but if yours has something to say about this feel free to have it call mine, and good luck to the both of you getting through.

From Service Management to Service Logic: SDN/NFV Evolution

Just as kids put teeth under their pillows and hope for quarters, or dig in their yards hoping to find pirate treasure, financial planners at operators are looking to NFV and SDN to generate revenue.  Cost savings are fine as a means of delaying the marginalization of the network and bridging out of a difficult capex bind, but only new revenue can really save everyone’s bacon.  The question is whether the financial planners’ aspirations are any more realistic than those of our hypothetical quarter-and-treasure-seeking youth.

It’s really hard to say with confidence what services will be purchased in the future, even five years out.  Survey information on this sort of thing is totally unreliable; I know that from over 30 years’ experience.  I’ve made my own high-level guess in prior blogs: we have to presume that business IT spending growth will come from improvements to productivity gained by harnessing mobility.  Rather than tell you that again, I propose we forget the specifics for a moment and look instead at a very important general question.  Not “what” but “how”?

“Service agility” is the ability to present service offerings that match market requirements.  If we knew what those requirements were we wouldn’t need to be agile; we’d just prepare for the future in a canny way.  The fact is that even if we knew the next big thing, we’d have to worry about the thing after it.  Service consumers are more tactical than traditional service infrastructure.  What we need to know now is what the characteristics of the string of next-big-things-to-come are, so we can build SDN and NFV correctly (and optimally) to support them.

Sometimes it’s really helpful to engage in “exclusionary thinking” for something like this.  So, we’ll start with what they are not.  We are not going to get new revenue from selling the same stuff.  We are not going to get new revenue from selling new stuff whose only value is to allow customers to spend less.  We are not going to get new revenue from sources that could have given us new revenue all along had we but asked for it.  New revenue may not come from my guess on mobile empowerment but it’s going to come from something that is new.

If we follow this thread we can look with a somewhat critical eye at the current NFV activities, at least insofar as supporting new revenue is concerned.  Virtual CPE is not new revenue.  Hosted IMS is not new revenue.  In the SDN world, using virtual routers instead of real ones, or OpenFlow instead of adaptive routing, isn’t new revenue either.  All of these things are worthy projects in lowering costs and raising profits, but cost management (and those who depend on it) vanishes to a point.  We have to see these as bridges to the future, but we have to be sure we can support the future as it gets closer.

“Old revenue” is based on two things: the sale of connectivity and the sale of connection-related features.  New revenue delivers stuff; it provides information and decisions.  Thus, it’s much more likely to resemble cloud features than service elements.  Services will be integrated with it, and in particular we’ll build ad hoc network relationships among process elements that are cohabiting in a user’s information dreams, but we’re still mostly about the content rather than the conduit.

If we look at SDN in these terms we realize that SDN isn’t likely to generate much new revenue directly. It’s important as a cloud enabler or as an NFV enabler, depending on what you see the relationship between these two things being.  NFV is definitely a support element too, but it’s supporting the dynamism of the features/components and their relationship with each other and with the user.  We’re using NFV to deploy VNFs, components, VNF-as-a-service, and so forth.  We’re using SDN to connect what we deploy.

What is that?  Not in terms of functionality but in terms of relationships?  There seem to be two broad categories of stuff, one that looks much like enterprise SaaS and the other that looks quite a bit like my hypothetical VNFaaS or what I’ve called “platform services”.  Amazon is building more and more value-added components in AWS to augment basic cloud IaaS, to the point where you could easily visualize a developer writing to these APIs as often as they’d write to common operating system or middleware APIs.  These combine to frame a model of service where composition of services is like application development.
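
Here’s a sketch of what that composition-as-development model might look like; every API name is a hypothetical stand-in for an operator’s hosted platform services:

```python
class PlatformServices:
    """Stand-ins for AWS-style value-added APIs an operator might expose."""

    def identity(self, user: str) -> dict:
        return {"user": user, "verified": True}        # hosted identity feature

    def location(self, user: str) -> dict:
        return {"cell": "cell-42"}                     # hosted location feature

    def firewall_policy(self, user: str) -> dict:
        return {"default": "deny", "allow": ["corp"]}  # hosted security feature


def compose_mobile_worker_service(apis: PlatformServices, user: str) -> dict:
    """A 'service' built the way an application is: API calls, not boxes."""
    return {
        "who": apis.identity(user),
        "where": apis.location(user),
        "policy": apis.firewall_policy(user),
    }

print(compose_mobile_worker_service(PlatformServices(), "some-user"))
```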

This more dynamic model of service evolves over time, as companies evolve toward a more point-of-activity mission for IT.  Some near-term services will have fairly static relationships among elements, particularly those that support communities of cooperating workers or that provide “horizontal” worker services like UC/UCC.  Others will be almost extemporaneous, context- and event-driven to respond to conditions that change as fast as traffic around a self-drive car.

In general, faster associations will mean pre-provisioned assets, and as activities move from being totally per-worker-context-driven to a more planned/provisioned model we’ll move from assembling “services” from APIs and VNFaaS to assembling them in the more web-like sense by connecting hosted elements.  In between we’ll be deploying service components and instances on demand where needed and connecting them through a high-level connection-layer process that looks a lot like “overlay SDN”.

NFV’s MANO is about deployment instructions; it models the service’s functional relationships only indirectly.  As you move toward extemporaneous cloud-like services you are mapping functions and functional relationships themselves.  In many, likely even most, cases the “functions” don’t have to be deployed because they’re already there.  It’s a classic shift from service management to service logic, a boundary that’s been getting fuzzier for some time, though the shift has been gradual and largely unrecognized.

This is a new kind of orchestration, something that’s more akin to providing a user with a blueprint that defines their relationship with information and processing, and the relationship between those two things and all of the elements of context that define personalization because they define (at least in a logical/virtual sense) people.  Think of it as an application template that’s invoked by the user on demand, based on context and mission, and representing the user through a task or even entertainment or social interaction.  Identity plus context equals humanity?  Perhaps in a service sense that’s true.
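
As a sketch of such a template (everything here is hypothetical), a blueprint binds identity and context to processes that are mostly already running:

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class ServiceBlueprint:
    """A template the user invokes on demand; it maps functional
    relationships rather than issuing deployment instructions."""
    name: str
    required_processes: List[str]  # mostly already running, not deployed
    bind: Callable[[Dict], Dict]   # context -> service bindings

def invoke(bp: ServiceBlueprint, identity: str, context: Dict) -> Dict:
    # Identity plus context: the service represents the user in a task.
    return {
        "service": bp.name,
        "user": identity,
        "bindings": bp.bind(context),
        "processes": bp.required_processes,
    }

result = invoke(
    ServiceBlueprint("commute-assist",
                     ["route-processor", "traffic-analyzer"],
                     lambda ctx: {"near": ctx["location"]}),
    identity="some-user",
    context={"location": "5th-and-main"},
)
print(result)
```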

Both NFV and SDN need to be expanded, generalized, to support this concept but I’d submit that we were as an industry, perhaps unwittingly, committed to that expansion anyway because of the cloud.  NFV is the ultimate evolution of DevOps.  SDN is the fulfillment of the ultimate generalization of OpenStack’s Neutron.

Yes, Neutron.  OpenStack is probably more important in the model I’m describing here than SDN controllers, because the latter will be abstracted by the former.  TOSCA is more important than YANG because the latter is about network connections and the former is about cloud abstractions.  We’re not heading to the network in the future, but to the cloud.

Don’t let this make you think that I believe NFV is superior to SDN.  As I indicated above, both are subordinate to that same force of personalization.  Whatever services you think are valuable in the future, and however you think they might be created, they’ll be aimed at people and not at companies or sites.  Both SDN and NFV have blinders on in this respect.  Both see the future of networking as being some enhanced model of site connection when mobility has already proved that sites are only old-fashioned places to collect people.  It’s the people, and what they want and what makes them productive that will frame the requirements for NFV and SDN in the future.

Alcatel-Lucent’s Strategy-versus-Tactic Dilemma

Alcatel-Lucent released its quarterly numbers before the bell this morning, and their results illustrate the complexity of the business of networking these days.  Do we look at things tactically, as current financial markets do?  If we say “No!” to that, then can we agree on what a strategic look would offer us?  Let’s see.

Tactically speaking, what Alcatel-Lucent’s numbers showed was a company whose IP business did well while other sectors largely languished.  IP routing was up 15% in constant currency, IP transport (meaning largely optics) was up by 3%.  Given that many of Alcatel-Lucent’s rivals were seeing negative growth in these same areas, they didn’t do badly here.

Elsewhere things weren’t as bright.  Revenues in IP Platforms, the service and operations layer, were off 15%.  Access was off by 11% overall, and that includes wireless.  Revenues were down overall, and while the quarter was judged as good by the Street it’s because cost-cutting helped profit more than lower revenue hurt it.

The overall picture here is that Alcatel-Lucent is in fact delivering on its transformation, and I believe that’s true.  It’s also marching to the future one quarter at a time, and so it’s fair to ask whether the transformation is taking them to a future where numbers can continue to be positive.  That’s a much harder question to answer because it demands we look at the model of networking not today but perhaps out in 2020.

It is very clear that we will need bits in the future, so it’s clear that IP transport is not going away.  In fact, Alcatel-Lucent and the whole sector are delivering more bits every year.  They’re just not delivering a lot more bucks.  Optical transport doesn’t connect directly to service revenues, feature differentiation is difficult to sustain, and competitive price pressure can only grow.  IP routing is a smaller business for Alcatel-Lucent today than IP transport, so in order for the revenue numbers to turn around, IP routing has to grow significantly over that five-year period or Alcatel-Lucent has to find another growth area.

Where have we heard network operators telling us that their plan is to build more network routing?  Every major initiative operators have undertaken in network modernization has been directed at doing more with optics and less with IP.  IP is the layer where content traffic growth, regulatory changes, and complexity increases have created the biggest threats to return on infrastructure.  Can Alcatel-Lucent deliver radical new revenues from that space?  Not without something to push things along a different path.

That path would have to be buried somewhere in what the company classifies as IP Platforms.  The challenge IP has is that the Internet is not highly profitable today, is getting less profitable every year, and probably can’t reverse that trend any time soon.  And the Internet is where most IP equipment goes.  To invest more in IP rather than continuing to try to invest less, operators would have to earn more on their investment, turning around the converging revenue-per-bit and cost-per-bit curves we’ve all seen.

I’ve said in earlier blogs that I believe the future of the network will be an agile opto-electric substrate at the bottom, the cloud at the top, and virtualized L2/L3 in the middle, tied to more agile optics below and hosted in the cloud above.  That model might well end up spending more on L2/L3, but not on routers as boxes.  It would mean spending more on virtual routers, virtual functions, and the servers to host them on.  Alcatel-Lucent does not make servers, and that’s the company’s big strategic problem.  They have to face off in the new age of IP beyond 2020 against network competitors like Cisco, who have servers and can win something through the transformation to hosted services.  They also have to face off against IT vendors who offer hosting options and even virtual routers/functions and don’t bother with network equipment at all.  That means they have to do really, really well in SDN, NFV, and the cloud, but do it without the automatic gains that being able to sell the servers would generate.  I commented in my analysis of how various classes of vendors would do in NGN that the network equipment types would face a challenge because of the natural losses they’d suffer as money shifts from funding physical network devices to hosting virtual ones.  Alcatel-Lucent faces that risk.

Which is why I find their performance in IP Platforms troubling.  This is where the company needed double-digit growth: massive evidence that they were gaining traction in the service-software part of infrastructure, where new revenues could be created and from which symbiotic tendrils could be extended to pull through equipment.  So far the financial signals don’t validate that they’re getting traction there.

CloudBand is Alcatel-Lucent’s biggest hope; operations is next.  That’s because first and foremost we’re building revenues from future services by hosting stuff in the cloud, and second because what we’re not hosting there for revenue’s sake, we’re operating from there to manage costs.  SDN and NFV are important because they represent technology and standards trends that define these hosting-to-service relationships and also frame the operations challenges that all future services will have to meet.

In the cloud/CloudBand area, Alcatel-Lucent has what I believe to be a strong product set and good capabilities, but their ability to describe what they have is extraordinarily weak.  Of the three vendors who I rate as likely being able to drive NFV to deployment, they are the only one for whom I have to stress the word “likely” because I just can’t get the details needed to offer an unqualified response.  And hey, Alcatel-Lucent, if you don’t sell servers you have to be able to present the cloud in some kind of awe-inspiring software package or you have little chance of being a player.

On the operations side, Alcatel-Lucent doesn’t match rivals Ericsson or Huawei in terms of OSS/BSS tools and capabilities.  They may have plans and capabilities to link the cloud layer to operations in a good or even compelling way, but those plans and capabilities are among the things I don’t have details on.

Could it be that Alcatel-Lucent is pushing on IP routing as a segment because it’s where they have growth today, and holding back in areas like SDN and virtual routing that could be seen as a threat to their IP routers?  If that’s the case, then they’re betting that routers will carry the company in the future, and I have no doubt that they can’t.

From the time when Alcatel and Lucent became Alcatel-Lucent, I’ve groused over their insipid positioning and weak marketing.  I’m still grousing, and I think it’s past time for the company to deal with that problem.  People facing a major transformation of revenue and technology, as operators everywhere are, want to follow what they perceive as a leader.  For Alcatel-Lucent, the time to qualify for that role is running short.  Routing can’t sustain them forever.