The Net on Cisco’s Numbers: It’s Time to Think Above UCS

Cisco reported their numbers last night, and while their results were better than expected, they showed a drop in both profits and revenues.  That’s not the kind of indication that Cisco likes, and so it was pretty certain that Chambers would be raising the hope of change.  In the call, his “keynote” statement was:

Our conviction around how we’re evolving Cisco is strong and resolute. You’ve seen us deliver incredible innovation, make bold moves in the market to capture future opportunities and disrupt our competitors and ourselves when necessary. We remain committed to do the right thing to increase our long-term strategic value to our customers and advanced Cisco toward our goal of becoming the number one IT Company.

Cisco knows it needs to evolve, to make bold moves, to capture future opportunities and be disruptive “when necessary”.  But first and foremost, Cisco needs to make its numbers, beating at least slightly the expectations of the Street.  That’s not really that easy to do these days because of the dreaded problem of TAM, or Total Addressable Market.

Service providers have historically spent about 18 cents of every revenue dollar on capex.  Since operator revenue growth is slow, that means Cisco’s base opportunity is growing relatively slowly as well.  At the same time, competitors (especially Huawei) are driving down prices and eating some of Cisco’s market share.  Cisco is forced to cut costs to avoid a highly negative trend in gross margins, and so far they’ve succeeded in doing that.  But if Cisco wants to be a market leader and sustain its strategic “Number One IT Company” goal, they have to broaden their base and increase their TAM.

UCS was a big success story for Cisco in the quarter, with revenues up 29%.  The problem is that margins on UCS are lower than Cisco’s average, one reason why giant rival IBM sold off its x86 business.  Cisco has been able to drive UCS forward largely by leveraging network-related opportunities arising in its own installed base.  If Cisco decided to broaden its server thrust, it would encounter competitors who had a longer-standing position in the new spaces, and likely that would further erode UCS margins.  Switching, which is the data center stuff we know is critical, is off 6%, showing that competition and lack of differentiation are hurting Cisco where they can’t afford to be hurt.

Cisco needs a bold move that expands TAM and margins at the same time, and to me that means they need that “platform strategy” I’ve blogged about this week.  All my survey data shows that both enterprise and service provider buyers are having a difficult time creating a trial of next-gen technology or next-gen services that actually builds credibility for the business quality of the solution and not just the technology.  If you could bundle more business value into your product, you’d have a leg up on competitors who rely on the customer to do the heavy lifting up what’s clearly a pretty high ladder to major changes and opportunities.  Platforms start with data center support, and Cisco actually has a better hardware portfolio in the data center than IBM now has.  But, as we’ll see, it’s not enough.

Chambers said that Cisco generated about $3.2 billion in free cash flow, and spent $3 billion of that on dividends and stock buyback.  That means that Cisco is spending a heck of a lot just protecting share price.  Their reduction in OpEx of about 6% means they’re shedding people and product areas too, and this reduces their ability to innovate, even where it would be possible to do so without compromising near-term sales.

Everyone wants Cisco to jump out and win in “SDN”, but some realize that if you follow the current arguments on SDN, its success is inevitably and explicitly going to reduce TAM, the very TAM Cisco needs to grow.  SDN is a way of doing connectivity cheaper based on current sales/trial manifestos, and that’s not a good thing for the market leader to support.  SDN, in OSI terms, is a trip down the stack at the very time when operators are demanding new services and revenues that live much higher up.  SDN is not the right strategy for Cisco; it’s a good tactic to help bootstrap the right strategy.

I can’t help but think that Cisco’s big opportunity is in the NFV space.  No, I’m not saying that the sun, rising in the east, will exhibit the letters “NFV” on its face tomorrow AM, or any other time.  NFV isn’t a directly consequential issue, but it’s a poster child for the needs of the market today and the way those needs can be addressed in an evolutionary way, which is what Cisco has to have.

We aren’t entering the cloud era, or the SDN era, or even the NFV era.  We’re entering the virtual era.  Virtual means that we abstract functions to make them portable, composable, and agile across a pool of technically and geographically diverse resources.  This notion is going to transform applications.  It’s going to transform services, and it’s also going to transform the very foundation of infrastructure by creating a whole new set of relationships, issues, and optimalities.  The fact is that we don’t know what this new stuff will look like, not exactly.  We do know its properties, and that’s where Cisco could stand tall.

Cisco’s platform future could be solidly based on a set of layers that augment what we have—hardware, Linux, OpenStack—with the kind of platform-services-middleware stuff that will support the virtual era.  NFV is exposing the key issues, even if the body doesn’t necessarily see that or even want it.  In fact, I see in the discussions over what the NFV ISG becomes after the formal sunset of the first phase in January 2015, the first hints that the ISG sees the outlines of the “New NFV”.  Cisco hasn’t been exactly bombarding the NFV space with support and relevant follow-up.  It’s time they did something there.

Most of all, it’s time Cisco platformized their own cloud, SDN, and even network evolution.  The future of infrastructure is more like a vast swirling mass of specks of this and that, collected ad hoc to do something valuable.  That’s not what we have today.  Cisco is one of the few vendors who actually has most of the pieces needed to create this new superplatform.  So, John, if you’re serious about being disruptive when necessary, here’s your disruptive opportunity.

How do Operators “Think of themselves as Software Companies?”

According to a Light Reading article yesterday, Metaswitch’s CEO told an audience at their service provider event that operators needed to start thinking of themselves as software companies.  The comment provoked some backlash, on the grounds that it’s been said before.  True, but it’s also true, at least to a degree, that the comment is valid.  Perhaps it’s this “to a degree” thing that’s making it hard for the operators to follow this advice, so let’s look at what “software” means to operators to see inside the “software company” point.

First, let me say that there is zero chance that software will make up even a third of total operator capital spending.  While software-based virtualization and feature hosting will impact more than half of all capital projects by the end of next year, the impact will be one of supporting symbiosis and not one of replacement.  The biggest impact of software, not surprisingly, will come in higher layers of service.  Today, these are subsystems like CDNs and IMS, but in the future they’ll include cloud services.  The reason software is important isn’t that it’s universal, it’s because higher-level services are where revenue and profit growth can occur.  Transport/connectivity, as a business, isn’t going to get any better.

The second point here is that “thinking of themselves as software companies” is dangerous to take literally.  It’s actually rather hard for most of the big operators to be software companies in the sense that we’d normally use the term.  In both the US and EU, operators cannot collude even to decide on standards.  The NFV activity launched a year and a half ago was hosted in ETSI because it had to be, or operators would have faced regulatory challenges for violation of anti-trust laws.  Think of what they’d face were they to cooperate to build software!  And building their own software for something of the scale of SDN, NFV, and the cloud is clearly not economical.  The fact is that what’s needed is for the operators’ vendors to think of their customers as software consumers rather than as device consumers.

But OK, semantics aside, what kind of thinking are we talking about?  Well, a software-centric vision of future infrastructure thinks of all service features not explicitly transport/connection as being hosted software objects and not devices.  That doesn’t mean that these hosted software objects don’t need hardware, or that hardware isn’t still the largest component of the budget.  It means that the flexibility, customer impact, and feature content of a service are created from hosted stuff.

It doesn’t necessarily mean “the cloud” either.  The optimum software strategy is not one where traditional devices/appliances are excluded.  It’s one where the platform is abstracted in a way similar to that used in the cloud.  Not identical, because for quite some time there will be more “hosted stuff” hosted on devices and dedicated platforms than on any virtualized infrastructure.  A service is a collection of functions first, and the instantiation of those functions individually in/on optimal platforms.  The criteria for optimization will change radically as services deploy because, for first-cost reasons and to limit exposure and risk, operators will initially have to deal with small software-defined commitments that can’t justify mega-data-center deployments.  We have to evolve into those deployments based on the success of the business case.
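
To make that concrete, here’s a minimal sketch of the “functions first, platforms second” idea.  Everything in it (the class names, the costs, the candidate platforms) is my own hypothetical illustration, not any vendor’s or standards body’s model, but it shows why small early commitments favor devices while scale favors the pool.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Platform:
    name: str            # an edge appliance, a dedicated server, a shared cloud pool
    first_cost: float    # up-front cost of committing to this platform
    unit_cost: float     # incremental cost per hosted instance

@dataclass
class FunctionSpec:
    name: str                      # e.g. "virtual firewall"
    candidates: List[Platform]     # platforms this function could be instantiated on

def place(function: FunctionSpec, expected_instances: int) -> Platform:
    """Pick the cheapest platform for the expected scale of deployment."""
    return min(function.candidates,
               key=lambda p: p.first_cost + p.unit_cost * expected_instances)

edge_box = Platform("dedicated edge appliance", first_cost=0.0, unit_cost=100.0)
cloud_pool = Platform("shared cloud pool", first_cost=5000.0, unit_cost=10.0)
firewall = FunctionSpec("virtual firewall", [edge_box, cloud_pool])

print(place(firewall, expected_instances=5).name)    # small commitment: the appliance wins
print(place(firewall, expected_instances=500).name)  # at scale: the pool wins
```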

Functional assembly of components to build services isn’t a new thing.  We build this kind of application every day in the enterprise space.  What makes operators different is both the scale of their operation (complexity increases in proportion to the workflow segments in a task) and the need to be highly efficient to ensure optimum margins and prices at the same time.  I’ve been arguing for explicit tools to support functional segmentation and composition because of that need for optimization.  There’s little value in establishing a highly agile software-driven framework for services if you can’t make it valuable enough to sell at a price users will pay and efficient enough to be profitable to you.

It also makes no sense to fetter your agile future by tying it to current practices, and that’s where I think Metaswitch’s CEO makes his strongest point.  One operator already told me that in the future they think they’ll have ten times the number of “service architects” they have today, focused on assembling all this good functional stuff and creating the bridges that build resource commitments for each function when it’s ordered.  They’ll have a tenth the operations hands on the ball, though, and since there are a heck of a lot more operations types than architects, the net here will be a significant reduction in operations handling, if “handling” is taken to literally mean human touch.

One impact of this is to completely redefine what “service trial” means.  Operators have traditionally looked at service introduction as a massive process of progressive trials, needed because it was going to mean a huge investment in technology.  Once you have a software hosting and service management framework in place, you don’t even think about that sort of trial.  You take something you’ve tested for stability and float it in that great pool of evolving demand—because you don’t care much if the boat sinks and you have to start over.  You’ve spent some electrons and a few software bucks, but you can afford to let the market speak for itself.  That’s what “service agility” means.

It also redefines “self-service”.  We talk today about an NFV revolution that lets an operator move a customer from a physical branch access device with static features into a virtual branch access device.  “With static features” could apply just as well to the virtual device, because despite the flexibility that NFV promises, most operators are still stuck in order-form mode.  What you can buy isn’t limited by what your operator can do, it’s limited by what your operator can sell.  We don’t sell virtual, agile stuff today, and we’ll never sell it if we presume that customer service reps are on the phone taking feature orders.  They have to let the customer turn things on and off at will.  We are already heading there; some operators like AT&T have announced this sort of capability.
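
Here’s a minimal sketch of what “turn things on and off at will” could look like in code.  The portal, catalog, and orchestrator here are hypothetical placeholders of my own, not AT&T’s or anyone else’s actual API; the point is that the customer’s toggle drives instantiation directly, with no service rep in the loop.

```python
# Hypothetical feature catalog: feature name -> the virtual function that implements it.
CATALOG = {"firewall": "vFW-image-1.2", "wan-optimizer": "vWANopt-image-3.0"}

class SelfServicePortal:
    def __init__(self, orchestrator):
        self.orchestrator = orchestrator   # whatever deploys and tears down VNFs
        self.active = {}                   # (customer, feature) -> instance handle

    def set_feature(self, customer: str, feature: str, enabled: bool) -> None:
        """Turn a catalog feature on or off for a customer, no CSR in the loop."""
        key = (customer, feature)
        if enabled and key not in self.active:
            self.active[key] = self.orchestrator.deploy(CATALOG[feature], customer)
        elif not enabled and key in self.active:
            self.orchestrator.tear_down(self.active.pop(key))
```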

“Thinking of themselves as a software company” means thinking of themselves as a company whose future will depend on exploiting the strengths of software and not the limitations of infrastructure.  That’s the transformation.

Has Red Hat Reached Far Enough?

At their conference yesterday, Red Hat announced an expansion of (or redefinition of) its deal with eNovance that takes aim at the carrier cloud and (explicitly) the NFV space.  The deal is aimed at contributing to OpenStack to make it a more suitable platform for carrier missions.  So, yes, this is a data point in my “platformization” theme.  But the questions for today are “Why?” and “Will it work?”

I ran my market model on NFV last year, and the results were interesting.  While the current state of both buyer literacy and products makes precision impossible, the model said that if we had an optimum realization of NFV benefits to drive an optimum deployment, we could see an additional 30,000 to 100,000 data centers generated by carrier deployments.  This would dwarf the incremental opportunity for data center growth available anywhere else.  Since every data center is at least a small pool of servers, software platforms, and data center switches, it’s pretty clear that this is a heck of a carrot to dangle in front of a revenue-and-profit-starved market.

I’ve said before that the Wind River people may have been the ones to validate the notion of carrier cloud and NFV as a platform opportunity, with their Carrier Grade Communications Server.  Given that they’re owned by Intel they have no shortage of backing and credibility, and they’ve jumped out in front in the space.  If that space is in fact the biggest new pie in the market, Red Hat can hardly ignore it.  That’s the “Why?”

The corollary question is “Why now?” and the best answer to that is that the NFV ISG is winding down its first-phase work.  We are getting close to the point where specifications would be released, and close to the point where early trials could be expected.  Everyone knows that trials tend to mature into deployments unless the vendors behind them screw up, so you need to be in on the ground floor.  Part of getting there is also PR and positioning, which get harder to develop as the field gets crowded and the easy topics are all written to death.  I think it will be very difficult for vendors to gain real control of NFV and carrier cloud issues much beyond the end of this year, if they don’t have it already.

The problem for Red Hat is that execution won’t be easy.  The original NFV value proposition, reflected in the first NFV White Paper back in the fall of 2012, was an easy one for somebody like Red Hat.  You take commodity servers and use them in place of expensive SDPs and appliances.  Every appliance you displace offers a benefit, and the more you do the more you save.  There are no first-cost issues and no worries about integrating trial enclaves.  Within a year, though, the operators themselves no longer believed in that story.  They found that the complexity of NFV would almost certainly raise operations costs enough to marginalize or eliminate capex savings.  You need a bigger story.

If the cloud is the logical host of NFV and carrier features and services, then it isn’t unreasonable to say that carrier cloud could evolve from something like OpenStack.  OpenStack has the ability to deploy and connect stuff, and if you made it bigger and better you could make it more efficient and thus secure lower opex, right?
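
As a concrete grounding for “deploy and connect stuff,” here’s a minimal sketch using openstacksdk-style calls.  The cloud name, image, and flavor are assumptions for illustration, and note that it covers deployment mechanics only, which is exactly the limitation I’m about to get to.

```python
import openstack

conn = openstack.connect(cloud="carrier-cloud")   # the cloud name is an assumption

# Create an isolated network for the virtual function's traffic.
net = conn.network.create_network(name="vnf-data-net")
conn.network.create_subnet(network_id=net.id, ip_version=4,
                           cidr="10.10.0.0/24", name="vnf-data-subnet")

# Boot a server to host the virtual function and attach it to that network.
server = conn.compute.create_server(
    name="vfw-instance-1",
    image_id=conn.compute.find_image("vnf-firewall-image").id,   # hypothetical image
    flavor_id=conn.compute.find_flavor("m1.medium").id,
    networks=[{"uuid": net.id}],
)
conn.compute.wait_for_server(server)
```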

Well, probably not as much as you might think, which is where Red Hat risks going wrong.  I’ve pointed out before that the current NFV target benefits of greater service agility and lower operations costs aren’t easily achieved by addressing functional islands of larger services built on legacy infrastructure.  At the same time, we’re not likely to see NFV get to the point where it drives COTS platforms into access, transport switching, routing, and optics missions.  We’re darn sure not going to fork-lift a whole metro into NFV-dom to do a trial.  Thus, we risk having the benefits proved in early trials be not only insufficient to justify my “optimum” NFV deployment, but insufficient even to fund the next step or to fully justify the first trial itself.

Red Hat can make structural orchestration better.  They can improve the carrier-grade quality of OpenStack, but it’s a real stretch to imagine that they can extend OpenStack beyond the cloud.  I’d also point out that OpenStack management doesn’t fit well within the framework of operator OSS/BSS/NMS, and obviously it doesn’t integrate with the management of legacy elements or the automated (and hypothetical) tools used to build those new and agile services.  What’s needed is a higher layer, a vision of unification of all the network technologies into a common virtualization-friendly service toolkit.  eNovance doesn’t have it, Red Hat doesn’t have it.

The good news for Red Hat is that nobody else has it either, at least not fully and generally.  Some are telling a story that, when fully populated with products, might well do the job.  Alcatel-Lucent is an example, as are Cisco and in particular HP.  Overture Networks, I’ve pointed out, actually has an implementation of structural orchestration that could be made to climb up to the functional level if they (or a partner) decided to drive it there.  There are enough open source tools out there to form as much as 85% of what’s needed for orchestration, and I know of at least one major vendor who has been dabbling with the stuff and seems on the fence with regard to commitment.  But they’ve not committed, and so for a period of time Red Hat still could expand their vision to include the critical mass of benefits.

NFV could be a revolutionary technology, but nothing deploys by being technically revolutionary; it deploys by being benefit-revolutionary.  Progress toward a goal is helpful only if it can credibly extend far enough to start funding itself with benefits, and islands of NFV aren’t going to do that if the benefits operators are looking for are systemic and not isolated.

Should We All Wear Platforms?

Read the stories about the IT world and you can’t escape a couple of truths.  One is that hardware is commoditizing; IBM sold its x86 business and everyone is talking commodity servers.  Another truth a bit harder to pick out but no less real is that platforms in the broader sense are definitely coming into their own.

A lot of this is a sort of industrial inevitability.   If you make auto parts by stamping them out in the millions you don’t create a do-it-yourself auto industry (“See that bin over thar?  Do your thing!”), you simply change the limiting factors.  Buyers who see their server costs fall by half are dismayed to find out their IT costs didn’t fall much, but that’s because other stuff that added up to more was still there in the picture.

Platforms are my name for the plug-and-play mission-based configurations that could be hardware/software or just software designed for commodity hardware.  The idea is that somebody with a problem can buy a prefab solution.  Since somebody with a problem is likely somebody with a budget, that adds up to sales success.  And the more complicated the path from basic tools to solutions is, the more valuable the platform is.

For the cloud, SDN, NFV, and even things like our “Internet of Things”, we’re obviously complicated enough for all practical purposes.  That’s what I think is behind the recent VMware/EMC strategy to create platforms.  Not only do they ease the problem-to-solution transition, they also guarantee a vendor with a lot of piece parts an opportunity to easily fight off best-of-breed-driven fragmentation of the opportunity.

VMware isn’t the only company who is proposing platform-based selling.  Red Hat’s Enterprise Linux has been a platform from the first.  Recently, though, others seem to be taking platformization to a more detailed level.  VMware is one, but another example is the Intel/Wind River Carrier Grade Communications Server.  Cloud platforms, carrier platforms—both are examples of a refinement to a general platform model.  It’s not going to stop there either; IBM, HP, and Oracle are all working toward an almost vertically refined platform strategy, because it works.

The interesting question this all raises is how pervasive the trend will get, and how that will impact the vendors.  In particular, you have to wonder how cloud, SDN, and NFV strategies will be impacted by platformization, either in a positive sense (somebody is promoted by the trend) or a negative one (somebody is marginalized).

A data center solves an IT problem, not a computer or software.  If platforms are good then it would seem that the trend could benefit players like HP, IBM, Oracle, and SAP who could offer a fairly complete strategy if they adopted open-source tools as a foundation.  The challenge is that most everyone has data centers already, so you have to look not so much at what solves IT problems as what solves new IT problems—ones that don’t already have some data center support in place.  That’s why things like the cloud, SDN, and NFV are important.  They are incremental issues that admit to incremental solutions, and that opens the door for packages.

The obvious question is “What kind of packages?”  I agree that “cloud” is one of those incremental issues, but singing “Cloud!” at the top of your lungs doesn’t make you an opera star.  The key to “package success” is to get astride a compelling value proposition.  Where are they to be found?  It’s never easy, but in some areas it’s easier than others.

SDN and NFV, according to buyers, are justified on cost management and agility.  In the main, those are operational issues, which suggests that the sweet spot in package deployment for SDN and NFV lies on the operations side.  At present, nobody has made a compelling case for having addressed that space, as I’ve noted in prior blogs.  We’re stuck in “structural orchestration” and we need to rise up to the functional level to augment what we have.

In the cloud, it’s more complicated.  I think you can make a case for functional-based operations changes there, but the question is whether a cloud based only on cost reduction can ever be really successful.  Benefit augmentation is always safer, but at least for now we have a notable lack of insight into what a benefit-driven cloud might look like, or how it might be different from what we have.

A cloud driven by benefits is a cloud that does new stuff, things we either can’t do effectively today or can’t do at all.  We’ve had virtually no dialog on what this kind of new stuff might look like.  It seems likely that the benefit cloud would be a kind of PaaS on which valuable applications and services would be created, but without knowing anything about the specific applications and services, it’s hard to say what the specific features of the PaaS might be.

It’s also hard to say whether this “benefit cloud” might not nibble at the edges of the portion of the SDN and NFV value propositions that relate to “Service Agility.”  Does a platform that facilitates value-based applications not also provide support for new service revenues?  The cloud, in its public cloud instantiation, is all about service revenues.

The people who have to care about this sort of thing are on either the leading edge or the trailing edge of the platform revolution.  Red Hat, Wind River and VMware are arguably the best-positioned to create a benefit-driven platform, no matter whether you call the effort “SDN”, “NFV”, or “the cloud”.  Alcatel-Lucent, Cisco, HP, and IBM are all somewhat at risk, but for different reasons.  Alcatel-Lucent and Cisco have specific software issues that make it harder for them to present themselves as a platform vendor.  Who thinks of either as software powerhouses?  Alcatel-Lucent, at least, has some obvious platform directions out of mobile services and even NFV.  Cisco can say “Cloud” and they’re one of many.

HP and IBM have too much legacy in the IT and applications area, and are inclined to be too conservative with respect to platform positioning.  All these guys, though, have the common problem of “toppyness”; there are too many different application- or service-linked choices to make or defend.  It’s easier if you have some middle-level option (like operationalization), which is why I think Cisco needs to retune its cloud, SDN, and NFV story to grab what they can there.  That’s apart from the risk that VMware obviously continues to generate.

I can’t say whether platforms are for-sure the way of the future.  I can say that they’re a way that somebody can control the future.  Let’s see who, if anyone, steps up!

Can Alcatel-Lucent “Shift” into “Charge!” Mode?

Alcatel-Lucent became the latest network equipment vendor to announce their numbers, and the company made some very helpful progress in its “Shift” program to reorient its efforts into areas like mobile and IP, where future profit growth could be hoped for.  However, it didn’t have a breakout quarter, and that’s not unreasonable given the early state of its transformation efforts.  What Alcatel-Lucent did was to keep the wolves at bay in order to create a chance of success this year.  Now they have to actually create it, and their risks are a bit of a poster-child for the industry at large.

If you look at the numbers, Alcatel-Lucent shows signs of a shift in capital focus by operators.  While mobile was still strong, wireline optical numbers were better and so was the IP routing picture.  What this suggests is that operators have indeed been augmenting capacity for IP traffic and that the augmentation is perhaps a bit more optical-focused (Alcatel-Lucent’s optical switch did much better, for example).  I’m of the view that we’re seeing a shift from bandwidth conservation (an electrical-layer function) to bandwidth production (an optical-layer function).  If traffic growth is still accompanied by declining revenue per bit (as operators all report it is) then there would be a logical drive to build flatter, more optically intensive, infrastructure.  That would also manage operations costs by reducing the number of layers you manage.

Responding to operator (customer) pressure is important for any network vendor, but at the same time Alcatel-Lucent faces the question of whether they can increase the operators’ spending organically.  Otherwise they fight for market share in a pond that’s evaporating.  Even the recent Alcatel-Lucent focus on “cloud” or “NFV” versions of IMS and EPC can be argued to be offerings that will appeal to buyers to the extent that they set new and lower price points.  Again, that’s not bad in the current market, but in the long run it’s only fanning the pond to generate more evaporation.

There are two areas where Alcatel-Lucent could make some hay.  One is on the SDN side with Nuage, and the other is in the NFV/cloud area where their CloudBand offering is perched.  Nuage is one of the very best of the SDN approaches, with more strategic potential than perhaps anything out there from anyone of any size.  CloudBand is one of the very best carrier cloud and NFV approaches too.  My concern is that in both cases, Alcatel-Lucent is hammering on tactical issues and not facing up to the broader strategic ones.

A good Alcatel-Lucent plan for success would look something like this.  I start by shedding all of the stuff that’s never going to contribute to my long-term revenue growth.  Project Shift is doing that, so you can fairly say that Alcatel-Lucent has been successful with this step.  My second step is to apply strategic technologies to the tactical mission of cost reduction by focusing on lowering opex so that I’m delivering additional benefits without reducing capital spending on what I sell.  The third step is to use those same strategic technologies to augment revenues for operators thus allowing them to spend more on equipment.

Obviously Alcatel-Lucent is in stage two here, and there is a pretty clear indication that the notion of opex management isn’t as easy as they might hope.  Ericsson has a strong commitment to operations enhancements and Ericsson owns Telcordia, the largest of the old-line OSS/BSS vendors.  They had a bad-ish quarter, which suggests that operationalization to improve TCO without reducing capex hasn’t been able to drive their professional services or OSS/BSS businesses up enough to create overall gains.  In fact, Alcatel-Lucent did better than Ericsson did.  To me, this means that the hardest part of a three-step program to solidify growth is that second step.

I’ve noted in an earlier blog that next-gen management and orchestration was arguably the key to operationalization, but also that it could easily become a “category eater” by forcing all of the changes in network technology and service direction into a single harmonious something that would blur boundaries between the operations categories (BSS, OSS, NMS, and EMS).  What I wonder is whether the smartest move for Alcatel-Lucent might be to encourage that category-eating effect.

Nuage could create a completely new model for cloud networking, and my work with NFV has convinced me that such a model is needed.  When we have both virtual services and virtual resources, and when the virtual resources are often created by linking virtual devices that are tenants on a common pool, an awful lot of traditional IP principles are tossed out because they don’t work well.  It could be that for SDN the most compelling mission is supporting the second-stage mission of operations enhancement by supporting optimum connectivity models.  Nuage is well-equipped for that, but so are other SDN strategies.  You can argue which is best in this mission but the most convincing answer would be “The one that explicitly accepts the mission and demonstrates its fulfillment.”  Nobody does that yet.

CloudBand could create a completely new model too.  MANO or Management/Orchestration, as I’ve pointed out, has to address the creation and management of virtual stuff on virtual stuff.  Yes, SDN support of the connectivity dimension of this mission is an independent opportunity, but there’s never a better seat during the plowing of new fields than on the top of a big tractor.  The furrows are the last place you want to be, waiting for something to happen.  Like most orchestration stories, CloudBand is focused mostly on what I call structural orchestration.  Overture’s Ensemble OSA stuff proved that you can take a structural mission and serve it with tools that can do functional orchestration as well.  Why not look at doing that within CloudBand, Alcatel-Lucent?

New demands for revenue and the success of the OTT players have combined to create a future for network services that makes connectivity—the old mission—blasé.  If the services of a network are transformed, it is illogical to assume the network itself isn’t transformed in turn.  That’s the high ground that the second and third stages of business progression have to be aimed at seizing.  Alcatel-Lucent has nice jumping-off points, but no convincing momentum there.  So, Alcatel-Lucent, it’s time to stop talking about “The Shift” and start talking about “The Charge!”

In economic news, we’re still in the doldrums in terms of real market momentum.  There are clear indications that the global economy is recovering, and clear indications that the financial industry is torn between getting in on the ground floor and running screaming with some near-term profits clutched in their hot little hands.  Large-cap stocks have been seen as safer, and even in tech there are indications that the traditional players in the IT and networking spaces are seen as attractive.  But smaller caps have been depressed by fear that the recovery is either not real or is anchored on different principles, principles that might make jumping too far into a momentum trend a major risk.  We need more earnings reports, in other words.

Are Policies the Universal Constant for SDN, NFV and the Cloud?

When Cisco proposed a policy-exchange protocol as a piece of OpenStack, it raised yet again the profile of the notion of policies as the solution to our network ills.  Since then, I’ve had some pretty high-level people suggest that in the critical area of cloud, SDN, and NFV network evolution we could solve all our problems with “policy”.  Well, maybe.  I think there’s certainly a risk that we’ll embark on a policy hype wave, like we’ve ridden so many other exaggerations recently.  Let’s try to nip it in the bud (futile, I know, but hey, we have to try!).

The basic premise of policies in networks is that of “managed autonomy”.  If you look at a network as a vast cooperative, you can see that simply inducing effective cooperation becomes an issue at some point.  Where that point is depends on how big, complex, and multi-service the network is.  On something like the Internet or even a big private network, you need to have some way to support scaling without an explosion in complexity.  Policies are a proposed strategy.

In a “policy-driven” network, what you do is essentially to set goals rather than to exercise specific control over how those goals are achieved.  Don’t tell every box in a metro network what you want it to do on an individual basis, tell logical collections of boxes how you want them to behave collectively, and define your instructions so that your logical collections’ behaviors add up to systemic efficiency and effectiveness.
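
Here’s a minimal sketch of “set goals, not device commands.”  The policy fields and the domains are hypothetical illustrations of mine; the point is that the controller hands each logical collection a target behavior and leaves the per-box details to the collection itself.

```python
from dataclasses import dataclass

@dataclass
class Policy:
    max_latency_ms: float
    min_availability: float    # e.g. 0.9999
    max_utilization: float     # keep headroom for failures and surges

class LogicalCollection:
    """A group of boxes: a metro domain, a vendor enclave, an SDN island."""
    def __init__(self, name: str):
        self.name = name

    def apply(self, policy: Policy) -> None:
        # How the goal is met internally (MPLS-TE, OpenFlow rules, admission
        # control) is this domain's business; the outside sees only the goal.
        print(f"{self.name}: enforcing {policy}")

for domain in [LogicalCollection("metro-east"), LogicalCollection("sdn-enclave-7")]:
    domain.apply(Policy(max_latency_ms=20.0, min_availability=0.9999, max_utilization=0.7))
```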

One example of policy-driven networks that we’ve had around for years is the mobile network.  In IMS networks, the routine strategy is to design the network for suitable behavior under specific conditions, then to regulate actual conditions in the network by regulating what you allow into it.  Admission control is a policy mechanism, in short.

The current interest in policies comes from the drive toward SDN and NFV, and even the cloud.  The reason is that none of these technologies will deploy universally.  There will be little enclaves where the business case can be made—inside a service, a geography, a collection of equipment from a single vendor.  How do we then create any harmony in the service overall when we can’t expect technical harmony?  One approach would be to say that each of our enclaves was one of my “logical collections”, give it a policy to define its target behavior, and then mediate overall service behavior by coordinating these policies.

For today’s service evolution/revolutions, policies could be a kind of NNI or federation strategy.  Instead of making it possible for SDN elements to talk to each other at a level detailed enough to support coordinated traffic management, give every logical collection instructions on how to be a “black box” with specific properties.  Outside the box there’s no need to worry about how those properties are achieved.  That solves problems of how to make SDN and NFV and legacy talk to each other.

I’m a fan of policies.  I’m a fan of steering wheels.  My two fan-doms are alike in that both are necessary things for effectively meeting a goal, but neither is sufficient.  I commented when Cisco came out with OpFlex that it was a useful step toward a useful goal, but not a complete definition of the route.  That’s actually true of policies overall.  What policies really are, in terms of how we’d use them in modern networks, are an element in an abstraction strategy and an opening into orchestration.  Without recognition of these connections, policies are just another media show.

Abstraction?  Well, if our community of cooperative things is given a set of policies to allow it to internally meet an externally visible goal, then something that receives and enforces the policy is simply implementing an abstraction.  The community is a black box, with properties seen from the outside that are enforced by policies on the inside.  However, black boxes are not only opaque to specific measures, they’re opaque to everything, including whether policies are used at all.  An SDN enclave can meet its goals any way that works.  Policies could be one way, but not necessarily the only way.

The orchestration side?  Well, think for a moment about a network made up of our black-box abstractions.  We have services to build, so what happens?  Policy guys will tell you that we’ll send policies to the black boxes that represent the goal-based definition of the role of each in the service.  Gosh, that sounds a lot like orchestration to me.  In fact, if we don’t have some kind of formal service orchestration to the level needed to identify which of our black boxes are actually involved in the service, we don’t know who to send policies to.  Should we send them to everyone?  Should every enclave or black box know about every service?  That’s starting to sound like the problem of connection-oriented sessions at the scale of the Internet.
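
Here’s a minimal, hypothetical sketch of that point.  Before anyone can “send policies,” something has to decide which black boxes participate in a given service, and that selection step is orchestration no matter what we call it.

```python
DOMAINS = {  # black boxes and their (hypothetical) policy endpoints
    "metro-east": "https://metro-east.example/policy",
    "core": "https://core.example/policy",
    "sdn-enclave-7": "https://enclave7.example/policy",
}

SERVICE_MODEL = {
    # service name -> the black boxes that actually play a role in it
    "vpn-acme-corp": ["metro-east", "core", "sdn-enclave-7"],
}

def send_policy(endpoint: str, policy: dict) -> None:
    print(f"policy {policy} -> {endpoint}")    # stand-in for a real southbound API

def dispatch_policies(service: str, policy: dict) -> None:
    """Send the policy only to the domains the service model says are involved."""
    for domain in SERVICE_MODEL[service]:      # orchestration: who participates?
        send_policy(DOMAINS[domain], policy)   # policy: what they must achieve

dispatch_policies("vpn-acme-corp", {"max_latency_ms": 20, "min_availability": 0.9999})
```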

Then we have the question of what happens if something doesn’t fulfill its mission.  How does somebody tell the Master Policy Agent in the Sky that they screwed up?  What does the MPAitS do about it?  We’re inventing management here, so now we have management and orchestration, which some people think they’re getting rid of by adopting policies.

We can’t get rid of them no matter what we adopt, because complex multi-national, multi-tenant services are inherently hierarchies and are always going to have to be addressed as a series of nested abstractions.  We can argue over how to communicate with the abstractions, but not about the fact that we need them and somehow need to communicate with them.

Any rational orchestration model will have a mechanism to define how, at a given level in the service hierarchy we need to create, the role of each element is coerced from the behaviors of what the element represents.  Policies are one way to do that, and so Cisco in that sense is doing us all a service by pointing that out.  But we need to ask Cisco to draw the whole picture, and we need to ask those who don’t like what Cisco does (or just don’t like Cisco, or just want to disagree with Cisco as a matter of “policy”) to define their whole picture too.  You can’t choose policies as your strategy if you don’t know the whole picture, including the alternatives.

Wind River CGCS: How High Can You Go?

One of the (many) implicit contradictions in SDN, NFV, and even cloud deployment is the conflict between infrastructure capital cost and infrastructure TCO.  The issues are created by a well-known truth, which is that it’s more expensive to make something work well than to just make it work.  How much more?  How good is “well?”  Those are the issues.

One area where there seems to be some happy agreement is the notion that network infrastructure will be held to a higher standard of availability and operationalization than a simple corporate server platform.  I blogged last year about the fact that “NFVI” or the NFV infrastructure framework would, for servers and the associated platform software, have to be capable of better networking performance than standard systems.  I also blogged more recently about the Wind River Carrier Grade Communications Server (CGCS), a platform intended to combine the right open-source ingredients to create a suitable framework for SDN, NFV, and perhaps even carrier cloud.

Wind River has now provided benchmark data on the specific topic of vSwitch performance.  This is especially critical to NFV because most commercially valuable NFV deployments would have more horizontal integration of components than traditional IaaS cloud apps would have, and so would exercise vSwitches rather heavily.  The data shows a 20x performance improvement with up to 33% fewer CPU cycles, which equates to having more residual processing capacity available for hosting VNFs.  Jitter is reduced and performance scales predictably (and almost linearly) with the number of cores allocated.
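
As a back-of-the-envelope reading of those two figures (assuming they apply to the same workload, which is my assumption rather than a claim from the benchmark), the per-packet cost of switching falls by roughly a factor of thirty:

```python
throughput_gain = 20.0            # 20x the packets moved
cpu_cycles_ratio = 1.0 - 0.33     # new CPU consumption as a fraction of the old

cycles_per_packet_ratio = cpu_cycles_ratio / throughput_gain
print(f"cycles per packet fall to about {cycles_per_packet_ratio:.1%} of the original")
# ~3.4% of the original per-packet cost, roughly a 30x reduction, which is where
# the "residual processing capacity available for hosting VNFs" comes from.
```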

There’s no doubt that a traditional NFV hosting platform would benefit from this sort of thing, which is achieved without special NICs or hardware acceleration.  The interesting question the story raises is whether this means that a CGCS with the right NICs and dedicated to switching via OVS might perform well enough to displace a legacy switch or router in a wider range of applications.

Operators are fairly interested in the idea of having branch or service-edge-routing functions displaced by their virtual equivalent.  They are becoming more interested in broader use of hosted switching/routing, though my latest survey shows that they are more willing to accept virtual network switch/routing elements where the element is dedicated to a single customer.  The question of performance is one reason why “more interested” hasn’t become “highly interested”—it is in fact the largest reason.  There are others.

Next on the issue hit parade is the intersection between availability and operationalization.  It’s fairly clear to operators that there are benefits to being able to spin up router/switch instances ad hoc, create distributed functionality to improve availability, and significantly improve MTTR by being able to stamp out another collection of virtual functions to replace a failed element, faster than you could pull out a legacy box and put in a new one.  What is less clear is just how big these benefits are and what would be required operationally to make it happen.

A fixed platform like CGCS has an advantage in that its “code train” is organized, synchronized, and integrated for stability.  I’ve run into many cases where network integration has been limited by issues in versioning various software elements to support a common set of network features or services.  That addresses some of the variables in operationalization and availability calculation, but not all.  The work of the NFV ISG is demonstrating that a virtual switch or router is just the tip of an operational infrastructure iceberg.  There are a lot of questions raised by what might be lurking below the surface.

All forms of redundancy-based availability management and all forms of virtual-replacement MTTR management rely on an effective process of replacing what’s broken (virtually) and rerouting things as needed.  In many cases this means providing for virtual distributed load-balancing and failover.  We know this sort of thing is needed, and there are plenty of ways to accomplish it, but we’re still in the infancy of proving out the effectiveness of the strategies and picking the best of breed in cost/benefit terms.
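
Here’s a minimal sketch of that replace-and-reroute loop.  The monitor, orchestrator, and balancer interfaces are hypothetical stand-ins of mine, not any product’s API, but they show where the effectiveness question actually lives: detection time, instantiation time, and the repointing of traffic.

```python
import time

def repair_loop(monitor, orchestrator, balancer, poll_seconds=5):
    """Detect failed virtual function instances, spin up replacements, and
    repoint traffic; MTTR is bounded by detection plus instantiation time."""
    while True:
        for instance in monitor.failed_instances():
            replacement = orchestrator.instantiate(instance.blueprint)  # new copy of the VNF
            balancer.repoint(old=instance, new=replacement)             # reroute the traffic
            orchestrator.decommission(instance)                         # clean up the failure
        time.sleep(poll_seconds)
```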

This is where Wind River might take a step that elevates it from being a kind of shrink-wrapped middleware player to being a network infrastructure solutions player.  There is no reason why other operationalizing features or tools couldn’t be added to CGCS.  If I’m right in my assertion that a complete high-level management and orchestration system can be created using open standards and open source software, then there’s a path for Wind River to follow in that critical direction.

Imagine a complete orchestration, operations, and management solution integrated into a server platform for SDN, NFV, and the cloud.  Now, rolling out virtual stuff for any mission becomes a lot easier and the results become a lot more predictable.  Pre-integrated operational features, if they’re vertically integrated with the rest of the infrastructure and horizontally integrated across domains (separated for administrative reasons, performance, or because they represent different operator partners) could make all of NFV from top to bottom into a plug and play domain.  That would be a significant revolution in NFV, perhaps enough to move the whole notion of NFV along much faster.

Five of every seven operators who admit to looking at NFV say that they believe they would like to field-trial something in 2014 and have production deployments at a scale of at least a third of their target footprint by the end of 2015.  If that scale of deployment were reached, it would make NFV the largest incremental builder of datacenter complexes in the market.  However, almost all of the operators with these lofty hopes say they don’t expect them to be realized.  Issues of operational integration are at the top of the list of reasons why.

Wind River has proved that you can make a server into a network platform.  They are very close to the point of being able to prove that servers can be a network too, and the only barrier to that is the operational integration.  Get that right, and they have the secret sauce.

And remember, Intel owns Wind River.  I noted in a recent blog that the commoditization trends on the hardware side could have the effect of driving hardware momentum down to the chip level, and even encourage chip players like Intel to become server/platform integrated providers.  That would truly drive change in network operator deployment of all of our revolutionary technologies.

What SHOULD Be Next for Nokia?

Nokia, having completed its consolidation of the NSN partnership and sold off handsets to Microsoft, now faces the big question—“Where now?”  The “New Nokia” seems to be a network equipment company, but it seems to me that if you look at Ericsson’s quarter you’re left with the realization that they might not have enough network equipment.

Ericsson and Huawei are the two giants in the telecom industry, and the two companies are quite different apart from their national roots.  Ericsson has been making a highly publicized move toward professional services, and Huawei has always been a price leader in hardware.  Sure, both companies have both capabilities, but what they ballyhoo is where they see themselves heading.  But Ericsson’s quarter was weak, and when it was announced I speculated that their problem might be too much service and not enough product.  If that’s true, it might be demonstrating what Nokia needs to do now.

Look out over the telecom landscape these days and you see something that (botanically speaking) looks much like a long-neglected garden.  For almost a decade now, operators have been telling the world that the Internet pricing model and the growth of consumer service based on all-you-can-eat was devastating their revenue per bit.  Vendors listened, sighed in compassion, whipped out their order books, and said “Gee that’s a shame.  How many boxes can we put you down for this quarter?”  Cisco still thinks like that; traffic growth is all you need to justify carrier spending.  Profits are for the vendors.

When you look at initiatives like the cloud, SDN, and NFV, what you’re really seeing is a buyer community who feels abandoned by their sellers, pressured by market forces, and increasingly determined to make their own way in the world.  The culmination of this feeling is the drive toward the use of open-source components for future network infrastructure.  There has been operator interest in Open Source for five or six years now, but never with the management support behind it that we see now.  Operators are rebelling against inaction in the most effective way possible, which is embracing the commoditization by embracing free components.  Free stuff, after all, is always better for the buyer.

Ericsson got into professional services at the early part of the transformation shift, but they’re now facing the pressure of the new initiatives launched to disintermediate vendors.  In particular, operators are suspicious of what seems to them to be a desire to build solutions for things like the cloud, SDN, and NFV one buyer at a time.  They reason that if you want to support a network revolution you should have some product skin in the game.  Not only does that prove your own commitment to the market, it lowers the amount of customization (professional services) and integration (more professional services) needed by creating packaged functionality instead of custom stuff.  So Ericsson, lacking any conspicuous assets in the new revolutionary spaces, has a harder time pushing services.

Where this all matters to Nokia is in that point of “product skin in the game”.  Nokia has 4G assets and not much else.  Yes, we need 4G for mobile services but we also need shoes or tires, and nobody in the networking space sees those things as opportunities.  Truth be told, mobile has always been subject to the same kind of commoditization as wireline, and that’s becoming clear in looking at the business behavior of the mobile providers.  That means that doubling down on its mobile assets makes zero sense for Nokia, which means that the recent suggestion (by Nabila Popal, research manager at IDC MEA) that Nokia might buy Alcatel-Lucent is probably not a good idea.  Similarly, despite the expansion announced in the Nokia/Juniper alliance for mobile cloud, buying Juniper for its products wouldn’t do Nokia much good.  It just increases their exposure to a product area that’s commoditizing.

A more sensible alliance might be represented by the Alcatel-Lucent/NTT/Fujitsu announcement of a carrier-optimized (NFV-ready) server platform.  The point is that if you want to do cloud, or NFV, or run SDN controllers, or host open-source software, you need to have servers.  Cisco and HP, co-rivals to Nokia in the telecom world, both have servers and so they have automatic credibility in some of those key revolutionary areas.  And that’s the important point.  Nokia doesn’t need product representation, it needs strategic representation.  With Alcatel-Lucent, it gets some nice assets but it gets an awful deep pool of bathwater surrounding the strategic baby.  With Juniper it gets nothing, strategically.  Servers are “strategic” for the future, but they’re already commoditizing (IBM sold off their x86 business to Lenovo, remember?).

Nokia, at this point, needs to become a strategic-something-company if they want to defend their now-retired NSN roots (which are about the only roots they have left).  Their under-representation in hardware could be an asset if hardware isn’t where you need to represent yourself.  But what software is strategic?  Remember, operators want open-source stuff.  Remember that Ericsson, who had a rather bad quarter, is the largest provider of OSS/BSS.  The fact is that it’s going to take some careful positioning to find a niche where software value-add can be demonstrated, and sustained.

Is the cloud just a cheaper model for server consolidation?  Is SDN just a way of pulling in white-box switches with lower margins?  Is NFV going to cut the heart out of network-device differentiation by sucking out all the incremental functionality?  If the answer to these questions is "Yes!" then Nokia has no good choices, and neither do any of its competitors.  Networking, as a separate industry, is doomed.  Why?  Because consolidating network functionality onto servers will at least still demand servers, so the server vendors survive.  What's left in demand in the way of network equipment after such a consolidation would look like a giant ASIC.  So add Intel or Broadcom to the list of possible winners.

You probably think I’m going to tout some new software area as the salvation of all these threatened players.  I’m not, because it’s probably too late for any giant company to take a simple software route into profit.  Nokia or any other vendor would need to become either a broad IT-like company—like IBM or HP—or they’d need to become a little software/services company.  Buying somebody like Alcatel-Lucent or Juniper at this point only increases their risk.  It would be best for them, and for Ericsson, and even for Alcatel-Lucent and Cisco and Juniper, to start thinking small, to consolidate around a core of truly valuable intellectual property, and grow out from it to cover product spaces that can be pulled through at a profit.  Bigger traditional networking companies are only bigger dinosaurs, looking at that dazzling light in the sky.

Management/Operations: New Category or Category Eater?

Amdocs, the giant OSS/BSS provider, turned in a good quarter last week.  Operations software is a long way from glamorous, but it's also a long way from irrelevant—Amdocs' quarter shows that.  The relevance of OSS/BSS is one reason why it might play an important role in the evolution of service management and orchestration.  The lack of glamour is one reason it might not.

I’ve mentioned in earlier blogs that the process of management/orchestration can be visualized as the creation of functional service objects from cooperating sets of resources, usually drawn from a pool.  The beauty of this model, which I’ve endorsed for better than half a decade, is that it allows for the creation of a kind of management two-way mirror.  From the top, the customer side, you can exhibit functional management properties, because functionality is what customers expect to get from services.  Their vision of their service is a reflection of themselves.  From the resource side, the network operations side, you see not only the resource commitments behind the functions but also through to the customer.

One of the attributes of abstraction using functional objects is that you could theoretically create a lot of different kinds of abstractions that add up to the same function-to-resource mapping.  A simple example is that I could model a VPN as a single “god-box” function, or as a collection of virtual devices.  The reason this notion of a floating representation of functional abstraction is important to vendors is that the level at which you abstract determines what part of network/operations software gets all the new management/orchestration roles.  That determines which vendors might win, or lose, big.
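
To make the idea of a floating abstraction concrete, here's a minimal sketch in Python, assuming a simple dictionary-based model; the resource names, the "maps_to" and "children" fields, and the committed_resources() helper are hypothetical illustrations, not any standard's data model.  It shows the same VPN expressed as a single "god-box" function and as a collection of virtual devices, with both abstractions committing exactly the same resources.

RESOURCES = ["edge-router-1", "edge-router-2", "core-mpls-mesh"]

# Option 1: the VPN as a single "god-box" functional object.
vpn_as_god_box = {
    "function": "VPN",
    "maps_to": RESOURCES,
}

# Option 2: the same VPN modeled as a collection of virtual devices.
vpn_as_virtual_devices = {
    "function": "VPN",
    "children": [
        {"function": "vEdge-A", "maps_to": ["edge-router-1"]},
        {"function": "vEdge-B", "maps_to": ["edge-router-2"]},
        {"function": "vCore",   "maps_to": ["core-mpls-mesh"]},
    ],
}

def committed_resources(model):
    # Flatten a functional model into the resources it actually commits.
    found = list(model.get("maps_to", []))
    for child in model.get("children", []):
        found.extend(committed_resources(child))
    return found

# Both abstractions commit the same resources.
assert sorted(committed_resources(vpn_as_god_box)) == \
       sorted(committed_resources(vpn_as_virtual_devices))

The mapping to resources is invariant in the sketch; what changes is the level at which management "sees" the service, and that's what decides who gets the orchestration role.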

Let’s say that you build a “VPN” by assembling virtual functions to create virtual devices, then unite these into a single functional element called “VPN”.  In this approach, the OSS/BSS systems have no role in the lower-level orchestration processes at all, and in fact “see” nothing different between a VPN created by a legacy NMS and routers, one built on SDN, and one built based on virtual functions.  The responsibility for orchestrating behaviors of resources into functions of services lies entirely below the OSS/BSS.

On the other hand, let's assume that we build our VPN directly from real or virtual devices, with no "VPN" artifact at all.  Now services assemble lower-level, real-resource behaviors at the OSS/BSS level, which means there's good news and bad news for vendors.  The bad news is that the OSS/BSS players now have to do potentially a lot of work to get next-gen, virtualization-based services to market.  The good news is that doing that work makes them valuable.

There’s a kind of zone of possibilities in terms of where management/orchestration could lie.  At the bottom, you could imagine network processes building functional abstractions from resources and exporting them directly to service operations processes.  At the top, the service operations processes could reach down all the way to resources.  Anything in between is also possible, which is why management/orchestration could be considered to be nearly anything, deployed by nearly anyone.

Logic does provide some limits to this, though.  If we orchestrate resources directly to create services, then we're either forced to integrate network-technology-specific processes into the service definitions themselves, which creates total chaos in a network with multiple vendors and technologies, or to build a structure of successive abstractions, starting from the top and working down to the point where we divide the abstractions based on how they're implemented.  Now a "VPN" has to exist, and it has to spawn the three implementation options I've noted: legacy devices, SDN, and virtual functions.
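
Here's a brief, hypothetical sketch of that top-down decomposition, again in Python; the IMPLEMENTATIONS table, the policy argument, and the recipe strings are assumptions made purely for illustration.  The abstract "VPN" object exists in every case, and only at the point where implementation matters does the model branch into the legacy, SDN, or virtual-function option.

IMPLEMENTATIONS = {
    "legacy": ["provision real routers via the NMS"],
    "sdn":    ["push forwarding rules via an SDN controller"],
    "nfv":    ["deploy virtual functions", "chain and connect them"],
}

def decompose_vpn(sites, policy):
    # The abstract "VPN" is always present; the policy decides which
    # implementation branch is spawned underneath it.
    return {
        "function": "VPN",
        "sites": sites,
        "realization": IMPLEMENTATIONS[policy],
    }

print(decompose_vpn(["NY", "London", "Tokyo"], "nfv"))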

What that means is that we’re really not changing what gets done, which is the gradual decomposition of logical elements into physical resource commitments, only changing where it gets done.  In fact, if you had an agile and flexible way of doing this analysis/synthesis (depending on whether you’re moving down or up, respectively) you could stick it anywhere between resources and OSS/BSS and create what the other guy at the other end needs.  Logically speaking, next-gen management/orchestration is one of those few examples of what analysts doing magic-quadrant charts love—a new product category.

The problem with this whole new-product-category (NPC) idea is that we can't have a claim without claimants.  In the past, startups would be ballyhooing the media with their entries in the new group, eager to draw that first x/y chart with their stuff as the sole inhabitant of the magic upper right.  Today, no VCs are willing to fund anything with a general and complicated mission, so no claims are being made.  Even vendors who actually do most of what's needed here are reluctant to step up and admit it, for fear it involves them in a black hole of sales cycles and pre-sale support, an effort that is unlikely to pay back.

Amdocs, like most OSS/BSS players, is content to presume that there will be enough of our NPC functionality below them to make it unnecessary for them to do much work to accommodate the cloud, SDN, or NFV.  To the OSS/BSS guy, it’s all about billing and charging in the end anyway.  That’s where pedestrianism catches up with them, because there is a decided risk associated with letting all the new value live outside your own product world.  The risk is that all that new value devalues you.

It’s my view that the only way to make our NPC of management/orchestration work is to adopt a state/event presumption.  Every functional element is a little state/event process that handles events based on the current operating state, and this is done by vectoring an event to a specific process depending on that state.  Instead of “workflow” we have a true real-time handler.  For each virtual construct, including the ones that represent real resource commitments, you build your logic by assembling processes into a state/event table.
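
Here's a minimal sketch of that state/event presumption, with hypothetical states, events, and process names; it's meant to illustrate the structure, not any particular vendor's implementation.  Each (state, event) intersection vectors to a discrete process and returns the next state.

def start_deploy(element):   print(element + ": committing resources")
def confirm_active(element): print(element + ": service is live")
def run_recovery(element):   print(element + ": fault detected, recovering")
def log_and_ignore(element): print(element + ": event ignored in this state")

# The state/event table for one virtual construct; every cell is just a
# process reference plus the state to move to afterward.
STATE_EVENT_TABLE = {
    ("ordered",    "activate"): (start_deploy,   "deploying"),
    ("deploying",  "ready"):    (confirm_active, "active"),
    ("active",     "fault"):    (run_recovery,   "recovering"),
    ("recovering", "ready"):    (confirm_active, "active"),
}

def handle_event(element, state, event):
    # Vector the event to the process selected by the current state.
    process, next_state = STATE_EVENT_TABLE.get((state, event),
                                                (log_and_ignore, state))
    process(element)
    return next_state

state = "ordered"
for event in ["activate", "ready", "fault", "ready"]:
    state = handle_event("vpn-instance-17", state, event)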

Which means that any process that works is fine.  Which also means that operations incumbents, who have relied on workflow-driven symbiosis among their multiple elements to create the infinitely large camel's nose that pulls through their whole offering, are now point-competitors for every intersection in the state/event table.  It completely redefines operations by making service, business, and network operations just squares on a common NPC chessboard.  The NPC is no product category at all; it's a category eater.

You’d think this would be all the more reason for OSS/BSS players to want to jump in.  A revolution succeeds regardless of who plays in it or drives it, but it's more likely to pay off for you if you're in the driver's seat.  Here's where the lack of glamour comes in.  The fact is that SDN, NFV, and the cloud are not disruptive technologies for OSS/BSS, despite what many say.  We could manage the operations of a total-SDN or total-NFV network the same way we manage legacy technology, using intermediary layers as I've described here.

What is disruptive to OSS/BSS is the motivation behind all of these new technologies.  Operators can't make enough profit the old way, and so they need radical cost-reducing strategies, or revenue-enhancing strategies, or both.  Operations is changed at the fundamental level by what you're operating, and we're going to be operating different networks and services.  This is a high-level, fundamental change.  The kind of people who contemplate billing-system enhancements for a living aren't rewarded for thinking about that kind of shift, and that's exactly what all the OSS/BSS guys and standards groups need to be thinking about.