Is it Time for the Rise of “Upperware”?

At Light Reading’s Big Telecom Event, Cisco SVP of Global Service Provider Delivery Cedrik Neike said that telcos have to transform themselves into platform providers.  Well, telcos would agree at least that they have to transform themselves into something other than what they are now—connection and transport providers.  Maybe platform providers would be better, but the question is just what a platform provider is and does…and perhaps most important what one would buy in the way of equipment.

If Cisco wanted converts to their view they could have picked a more flattering term.  The term “platform” is usually taken to mean “that on which something is built” and that meaning would create a few scowls in operator circles I’m familiar with.  It sounds like the operator is being asked to get out of commodity bit-pushing and into some other commodity business.  The OTTs have milked your bandwidth long enough so give them something else to milk!

A rose by any other name, though.  The truth is that some operator executives yearn for the role of platform provider, because it would get them out of the business of merchandizing new stuff to consumers, which network operators have not proved to be particularly facile at doing.  It’s also true that for many operators, regulatory requirements would put them in the platform business whether they wanted it or not; retail OTT-like services would either have to be pure OTT plays or exploit something about the underlying network.  In the latter case they’d likely have to be offered at reasonable rates to OTTs too.  That begs the question of whether it’s worth being a direct retail player.

It doesn’t answer the question of what a platform provider is or does, though, and it doesn’t suggest just what platform-building would look like either.  Cisco seems to see the cloud as the platform, and I would agree that the cloud, meaning cloud computing on a large scale, seems the inevitable destination of operator infrastructure evolution.  However, that’s about as insightful as saying that the sun rises.  Some specific detail would be helpful.

Operators have three paths they could take toward platformization.  First, they could build cloud infrastructure as they’ve built networks in the past and let buyers purchase capacity at utility rates.  Second, they could build some other kind of infrastructure that requires significant utility-like levels of investment, like IoT, and make a wholesale service of it.  Third, they could build a series of “platform services” by exposing elements of current networks and information gathered from the network.

In all of these cases, it seems likely the operators would have to devise some special sauce to coat their offerings with, both to boost margins and to drive rapid acceptance.  The logical thing to do would be to build “PaaS-like” platforms of APIs upward from whatever low-level infrastructure and information the operator exposes.  If these facilitated the development of services, they could provide an easy path to revenue for startups and earn at least some revenue for the telcos too.

It’s not that the cloud isn’t valuable, or that SDN doesn’t do connections.  We have architectures that can probably do what the future calls for.  The challenge is making those architectures pay off quickly.  Middleware in the traditional sense may be obsolete as a solution, because if we want our platform to be successful we need to quickly empower people who have less skill and less tolerance for delay and cost.  Hence, I think we need upperware.

Upperware is what I’m (jokingly) calling a set of problem-directed tools that get a developer or service architect quickly to value.  I blogged about two possible upperware examples recently: Alcatel-Lucent’s Rapport and HP’s IoT Platform.  These are not general-purpose service platforms, but rather are something that gets you part-way there—close enough to be functional quickly and earn some revenue.  If the network operators had platform tools of that type, I think they could attract a following.

One of the challenges of upperware is that it has to be problem-specific.  General-purpose tools are like general-purpose programming languages.  You can do anything in them, but the price you pay for all that adaptability is difficulty in using the tool, which means it’s more expensive.  We’ve had “database programming languages” for ages, designed to do database access simply.  We have web development tools, and tools for most other specialized tasks.  The thing is that even these kinds of “horizontal” tools may be too specific.

That doesn’t mean that we don’t need lower-level stuff (dare we say “underware?”) that would provide a kind of tool-for-tool-builder repertoire, and an architecture to define the ways that all the layers relate.  Think of upperware as building on foundation capabilities, one of which would be to call for dynamic deployment (NFV) and another to call for dynamic NaaS (SDN).  We could also define information interfaces to allow various contextual, behavioral, and social/collaborative information to be presented to upperware applications.

The upperware concept exposes one of the issues that NFV and cloud computing both need to face more squarely.  It’s clear that utility in either of these technologies depends on the ability to quickly assemble functional atoms into useful services or applications.  Well, there’s more to that than just stuffing something into a VM or container, or doing a service chain.  You need to think of “class libraries” of “foundation classes” that build upward to create our problem-specific upperware tools.
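
To make the “foundation classes” idea concrete, here’s a rough Python sketch, purely illustrative and my own invention rather than any vendor’s API, of low-level deployment and connection atoms wrapped into a problem-specific upperware class that a service architect could use directly.

```python
# Hypothetical sketch: foundation classes wrapped by a problem-specific
# "upperware" class.  All names are illustrative only.

class DeployableFunction:
    """Foundation atom: something NFV could host (an image to deploy)."""
    def __init__(self, image):
        self.image = image

    def deploy(self):
        # A real system would call an NFV/cloud orchestrator here.
        return f"instance-of-{self.image}"

class ConnectionService:
    """Foundation atom: a connection SDN could realize as NaaS."""
    def connect(self, endpoints):
        return f"net[{','.join(endpoints)}]"

class CollaborationSession:
    """Upperware: a problem-directed tool built from the foundation atoms."""
    def __init__(self):
        self._mixer = DeployableFunction("audio-mixer")
        self._net = ConnectionService()

    def start(self, participants):
        instance = self._mixer.deploy()
        network = self._net.connect(participants + [instance])
        return {"mixer": instance, "network": network}

# A service architect works at the upperware level, not with the raw atoms:
print(CollaborationSession().start(["alice", "bob"]))
```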

You can probably see that there are a lot of ways of creating upperware, meaning a lot of different places you could start.  For example, you could focus your upperware on information exposure, the presentation of data to services/applications to help refine an experience and make it more contextual and valuable.  That might be HP’s approach.  You could also focus it on relationships, on the collaborative exchanges between people.  That may be what Alcatel-Lucent intends.

It would be nice to be able to present some upperware plans for a variety of players, because it would be nice to think that vendors were projecting their customers’ needs for new revenue into vendor product plans in a realistic way.  Beyond my two examples here, though, I can’t see much going on.  That is almost certain to create problems down the line, and you can see a likely problem spot when you look at the notion of the “service catalog”.

Most NFV supporters envision an architect grabbing stuff from a service catalog and stringing it into a service.  Functionally that’s a good approach, but how exactly do the pieces end up fitting?  We could connect a router to a server or a switch in a dozen different ways, most of which would do nothing useful.  If we expand our functionality to software components, knowledge of traffic status, or the temperature in the basement, you can see how the chances of random pairing producing utility fall to near zero.  Call it what you will, we need upperware to create function communities that can then build upward to something interesting.
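
As a purely illustrative sketch of that “function community” point (the catalog entries and interface types below are hypothetical, not from any standard), here’s one way catalog items could declare typed interfaces so an assembler rejects meaningless pairings up front.

```python
# Hypothetical sketch: catalog entries declare typed interfaces so an
# assembler can reject meaningless pairings before deployment time.

CATALOG = {
    "edge-router":   {"offers": {"ip-flow"},        "consumes": set()},
    "firewall-vnf":  {"offers": {"ip-flow"},        "consumes": {"ip-flow"}},
    "temp-sensor":   {"offers": {"telemetry-feed"}, "consumes": set()},
    "traffic-probe": {"offers": {"telemetry-feed"}, "consumes": {"ip-flow"}},
}

def can_chain(upstream, downstream):
    """True only if the upstream offers something the downstream consumes."""
    return bool(CATALOG[upstream]["offers"] & CATALOG[downstream]["consumes"])

print(can_chain("edge-router", "firewall-vnf"))   # True: a useful pairing
print(can_chain("temp-sensor", "firewall-vnf"))   # False: a random pairing
```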

And here again we have the question “Where does upperware come from, mommy?”  This sort of thing is clearly out of scope for the NFV ISG, for ONF, for the TMF, even if we assumed the bodies could make a respectable contribution if they tried to go there.  Do we need (perish the thought!) another forum?  Or do we just need vendors making their own contributions?

The broader a problem is, the less likely it is that we can sit down a bunch of interested parties and hammer out the details.  Groups aren’t doing what they’re currently tasked with doing in a complete and timely way, so they’re unlikely to be the answer to a call for additional structure in our service thinking.  I’m hoping both Alcatel-Lucent and HP frame their stories out fully, and I’m hoping other vendors develop their own upperware responses, whatever they decide to call them.

What We Can Learn from OPNFV’s Arno Release

The OPNFV project released its first (“Arno”) build, and it may be more noteworthy for how the pieces it contains were selected than for its functionality.  There’s logic to how OPNFV did its work, but there are also risks in the order and timing of its releases.

Arno is a platform that includes the infrastructure (Network Functions Virtualization Infrastructure or NFVI) and the Virtual Infrastructure Manager (VIM) components of the ETSI ISG’s E2E model.  The combination offers the ability to deploy and connect VNFs in a cloud architecture based on OpenStack and OpenDaylight.  The features are based on the ETSI ISG’s Phase 1 work, and the model does not include orchestration or management (MANO, VNFM).

OPNFV is, as a group, in a challenging position.  Open source development is often done from the bottom up, simply because it’s hard to test something that sits on top of the functional stack and looks downward into nothing but vague specifications.  If the software design is done top-down, which I’d contend is the normal practice for the software industry, the bottom-up development doesn’t pose any major risks.

In the case of NFV, the problem is that ETSI’s work was also bottom-up.  The understandable desire to complete a model of the technical core of NFV—deployment and management of virtual functions—led the body to limit its scope, excluding things like control of legacy elements, end-to-end service management, operations integration, and federation of elements to/from other providers or partners.  Some of these may be addressed in Phase 2, but all of them create requirements that probably could be more easily addressed by being considered up front, before the high-level architecture of NFV was laid out.

OPNFV has a commitment to tie its activities to the work of the ISG, which means that the development will also have to contend with these issues as it evolves.  That might involve either making significant changes to achieve optimal integration of the features/capabilities, or compromising implementation efficiency and flexibility by shoehorning the capabilities in.

One interesting aspect of the OPNFV Arno release that demonstrates this tension is that Arno includes NFVI and VIM, which in my view implies that the two go together.  In the ETSI E2E model, though, VIMs are part of MANO, which is otherwise not represented in Arno.  I’ve argued that VIMs should have been considered to be part of NFVI from the first because a VIM is essentially the agent that transforms between an abstract model of a deployment/connection and the realization of that abstraction on actual resources.

The good news here is that OPNFV may be quietly rectifying a wrong, and that this may be an indication it will continue to do so.  The bad news is that it’s a wrong that should never have crept in, and that significant divergence from the ISG specification as OPNFV evolves invites the kind of disorder that creeps into development projects that don’t have a firm spec to begin with.

Let’s go with the good first.  In the real world, infrastructure for NFV is totally abstracted, meaning that a VIM proxies for all of the stuff it can deploy/connect on or with.  This seems to be the approach OPNFV takes, but if that’s so then one thing I’d like to see is how multiple VIM/NFVI plugins would actually work and be connected with MANO.

If there are multiple VIM-plugins, then MANO presumably has to be given a specific plugin to use in a specific situation, meaning that MANO models would have to have selective decomposition so you could pick the right plugin.  If MANO doesn’t do that, then there has to be some sort of generic plugin that further decomposes requests into the individual infrastructure-specific plugins needed depending on configuration, service parameters, etc.  I’m not aware of either of these capabilities in the current MANO model.
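
Neither capability exists in the current MANO model as far as I know, but the second option, a generic plugin that decomposes requests across infrastructure-specific plugins, might look something like this rough sketch; the plugin classes and selection keys are my own invention, not anything from the specs.

```python
# Hypothetical sketch: a "generic VIM plugin" that picks an
# infrastructure-specific plugin based on the decomposed request.

class OpenStackVIM:
    def deploy(self, request):
        return f"openstack: deployed {request['function']}"

class LegacyVIM:
    def deploy(self, request):
        return f"legacy-ems: configured {request['function']}"

class GenericVIM:
    """Routes each request to the right plugin by infrastructure type."""
    def __init__(self):
        self._plugins = {"nfv": OpenStackVIM(), "legacy": LegacyVIM()}

    def deploy(self, request):
        return self._plugins[request["infrastructure"]].deploy(request)

vim = GenericVIM()
print(vim.deploy({"infrastructure": "nfv",    "function": "vFirewall"}))
print(vim.deploy({"infrastructure": "legacy", "function": "edge-router"}))
```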

Then there’s management.  If a VIM carries infrastructure upward into orchestration, management needs to know how that carrying chain is handled so that resources actually involved in a service can be associated with the service.  I don’t know how that’s supposed to work either.

Finally there’s federation.  If I have a VIM representing resources, why not have a VIM represent federated resources?  Does the current approach have the necessary hooks to be used across organizational boundaries?  Don’t know.  So to summarize, we have a lot of potential for good here but not enough information to realize it fully.

On the bad side, the challenge is both that we may have insufficient specifications to back up development, and that we may have too many.  The ETSI ISG, for example, has tended to look at service and resource data in an explicit sense, as most of the TMF SID applications do.  You need a field called “maximum instances per load-balancer” so you define it.  The problem with that is that you end up with a billion or so fields, most of which are irrelevant to a given service, and every new service or feature is likely to spawn a chain of new fields.  What developers would do is simply declare variable attributes, in an XML-like way, and say that any “field” is fine as long as something fills it and something else uses it.  No need for names or pre-defining.

So what does the OPNFV group do about this?  Do they do the right kind of development or follow the ETSI model explicitly?  Right now we don’t have the answer because nobody has really defined what the northbound interface of a VIM should look like in detail.  In both my NFV projects (CloudNFV and now ExperiaSphere) I modeled this stuff as either XML (or a derivative) or TOSCA and presumed a structure that defined an “abstraction-type” and then type-dependent details.  I’d like to see somebody say what the ISG/OPNFV presumption is.
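
For illustration only, here’s roughly what I mean by an “abstraction-type plus type-dependent details” structure with free-form attributes; this is a sketch of the modeling idea, not the ISG’s or OPNFV’s actual schema.

```python
# Hypothetical sketch: a VIM-facing abstraction with a declared type and
# free-form, type-dependent attributes (no global field dictionary needed).

abstraction = {
    "abstraction-type": "hosted-function",
    "attributes": {          # whatever fields the decomposing VIM understands
        "image": "vFirewall-2.1",
        "scaling": {"min": 1, "max": 4},
    },
}

def decompose(model):
    """A VIM dispatches on the type and interprets only its own attributes."""
    handlers = {
        "hosted-function": lambda a: f"deploy {a['image']} x{a['scaling']['min']}",
        "connection":      lambda a: f"connect {a.get('endpoints', [])}",
    }
    return handlers[model["abstraction-type"]](model["attributes"])

print(decompose(abstraction))
```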

NFV has a way to go before it defines all the functions and features needed to make its own benefit case.  Where OPNFV could help is to address the places where that benefit connection is anemic.  But if the body wants to stay “technical” then the key point to look at would be the abstractions that define service models and that should define the interface between VIMs or other “infrastructure managers” and MANO.  You are, service-wise, what you orchestrate, and you orchestrate what you model.

The big challenge for OPNFV remains time.  Operators need something by the critical 2017 revenue/cost crossover, and markets are advancing with per-vendor solutions whose scope matches benefit needs, in part because nobody has a sanctioned open approach that does.  At some point, further work will get so far behind the state of the market it will be too late to influence things.

Two Vendor Models for Building New Services with NFV

There are a lot of ways to get to LA from NYC and a lot of ways to realize the service agility benefits expected for NFV.  Some paths are likely better than others in both these examples, and in both cases it may be necessary to travel a little to know what the best choice is.  Thus, it’s interesting to note that the two leading vendors in NFV have different approaches, and that we’ll likely see some proof points of both in the next six to nine months.

Service agility is one of the most credible goals for NFV, and it’s also the goal that could in theory generate the most benefit.  Capex savings are unlikely to amount to even $20 billion per year.  Opex efficiency-driven savings could reach $100 billion providing that NFV principles were extended universally to all infrastructure and all operations levels.  The revenue stream associated with agile services based on cloud principles is, by my current modeling, over $2.5 trillion, with a trillion coming from the enterprise space and the remainder from consumer mobile and content services.  Some of this could be addressed without NFV, but NFV would be critical in wringing the most profit from it all, and in addressing all but the low apples.

The problem we have is that most of the service agility discussions relating to NFV are crap.  When you wade through the morass you find two basic ideas—adding higher-level network features to services and bandwidth on demand.  Buyers have told me for decades that things like firewalls and NAT are per-site features that don’t change much once you have them deployed, and that their only interest in bandwidth on demand is lowering overall network spending.

I’ve speculated in the past that there are many new services that will emerge out of mobile users and their focused, contextual, view of network services and information resources.  There’s little point in my going over this opportunity side again.  What’s worth looking at is how vendors seem to be approaching the question of how you really could create services with NFV and grab some of those bucks.

From an architecture perspective, NFV isn’t enough to support highly flexible assembly of services from features.  Yes, you can deploy and connect elements with NFV, but while “chaining” works for things that have a simple “cascade-in-the-data-path” relationship with each other, it doesn’t support microservices and APIs fully.  Future services, that two-and-a-half trillion bucks, have to be seen more as transactional activities or workflows, not connections.  So what you need is a kind of platform-as-a-service framework that lives on top of NFV, meaning that NFV deploys and manages it, but that provides services to build feature applications.  Alcatel-Lucent and HP have each shown they have a vision for that.
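
As a rough illustration of the difference (mine, not either vendor’s), a feature application in that kind of framework would code against platform services rather than against deployment mechanics; the platform class and calls below are hypothetical.

```python
# Hypothetical sketch: a "feature" written as a workflow over platform APIs
# that NFV deploys and manages, rather than as a service chain.

class PlatformServices:
    """Stand-in for the APIs a telco platform might expose to developers."""
    def locate(self, user):
        return "cell-4431"            # e.g. drawn from mobility control

    def notify(self, user, message):
        print(f"to {user}: {message}")

def parking_alert(platform, user):
    """The feature is transactional logic, not a connection."""
    if platform.locate(user) == "cell-4431":     # user is near the garage
        platform.notify(user, "Your parking spot expires in 10 minutes")

parking_alert(PlatformServices(), "alice")
```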

Alcatel-Lucent’s approach is embodied in its Rapport product.  Rapport is a session-services framework that lets operators or even enterprises build applications (especially collaborative ones) based on IMS mobility control.  It leverages Alcatel-Lucent’s virtualized IMS/EPC elements and is compatible with CloudBand and NFV.  You can manage services using the Motive operations tools.

IMS is hardly news (it dates back to about 1999), but it has been a target for NFV from the first, and it is an asset that operators own and would like to leverage for differentiation.  Rapport is also targeting the same opportunity that Google’s Fi MVNO strategy seems to target, augmenting basic mobile voice services with what are essentially extended forms of custom calling features (remember “custom local area signaling services”?)  Since transactional services based on sessions would generate a massive increase in signaling traffic, agile and scalable IMS would make sense.

The HP approach is different.  They see new services being tied at least in part to new exploitable information resources, most notably to IoT.  In their model, the network connections to resources, their controllers, and the repositories and analytics that store and analyze them are all bundled into a kind of “USB-of-services” that you can plug in as needed.  Each of these plugins exposes a set of APIs that then become available to service-creating applications.

This model is more “cloud-like” or “Internet-like” in that it exposes APIs for presumably open consumption either on the Internet or in some VPN subset of users/resources.  For now, at least, there’s no specific model of how the services would relate to each other or to the plugin microservices.  HP may have plans for “superservices” like IMS to plug into this (they’ve talked about MVNO-based IoT so they’re clearly thinking about it) but they’ve not exposed them as yet.

There’s a common thread in all of this, I think; several in fact.  High-level services are going to be built more like cloud applications than like connection services.  NFV can play a role in that where dynamism (either transient services or the need for scalability) is a significant requirement.  High-level services are going to be built inside a “sandbox” of APIs.  We don’t know what exactly will stick with respect to this approach.  We’re hoping somebody will tell us.  I like both the Alcatel-Lucent and HP approaches in no small part because they’re bold adventures into a world we need to get to, but don’t quite know how to describe.

Cloud services, to be optimally deployed, useful, and profitable, have to become highly agile and tactical.  So do NFV-deployed features.  So do SDN connections.  The future differs from the present largely in that concentrations of humans, computing, and connectivity associated with traditional worker relationships and homebound consumers are increasingly irrelevant.  With the elimination of natural human mash-ups we also lose opportunity to address those humans as a group.  Personalization is a natural part of addressing mobility.  Agility is the fundamental requirement of a personalized service set.

Network and IT/cloud vendors approach this differently, which is one of the interesting things about the Alcatel-Lucent and HP positions.  It’s hard to say whether networking or IT has the most natural connection, or even to say that there’s enough of a difference to matter.  If we have to transform both, then we’d end up starting where it’s convenient, or compelling.

It’s also interesting that both of these approaches nibble at the tenuous border between service management as a pre-service conditioning of resources and service logic as a real-time management of workflows.  As networks and the cloud become more tactical, it’s going to get harder to distinguish between coordinating deployment and simply running the application.  Might, for example, a transaction direct the deployment of its own resources?  That will generate some interesting shifts in both worlds.

Alcatel-Lucent’s Rapport addresses the model for personal connection in a world without branch offices and couches (philosophically, at least) and HP’s IoT and plugin model for information-driven services addresses how those mobile/personal users interact with the world around them.  We’ll have to do both of these things, and more.  The winning approach may be the one that can validate its vision of the future and link it to profitable steps in the present.

Will the Rush of M&A Around OpenStack Drive Big Changes?

OpenStack is hot.  IBM is going to acquire Blue Box and Cisco is acquiring Piston.  You could look at this as a kind of consolidation signal, but I think it has other implications.  In fact, it might signal some SDN maturity, some NFV questions, and some cloud directions.

OpenStack is obviously the premier approach to cloud deployment, despite the fact that the project has been growing like crazy, changing often, and generating a little angst on the part of vendors and users for limitations in functionality and perceived lack of stability.  One thing that’s clear from the two OpenStack M&A deals is that big-time cloud pressure is going to land on OpenStack and that it’s going to get a lot better.  Of course, if OpenStack is already winning, what would all that betterness mean?

From a competitive perspective, obviously it would mean a lot.  OpenStack would crush alternative approaches, frankly, and that’s what is likely to happen very quickly.  If you have a cloud product or a cloud strategy, it better be based on OpenStack at this point.  I’d guess that CloudStack, Eucalyptus, Nimbus, and OpenNebula proponents have six to nine months to prepare some graceful shifts in strategy, after which time they’ll end up being even more hurt by OpenStack momentum than they’ve been so far.

For vendors who have already accepted OpenStack as the cloud answer, this is good news in one sense and a warning of risks to come in another.  Obviously Cisco and IBM are in this camp, but so are giants like Alcatel-Lucent and HP.  The problem they all face is that differentiation of cloud strategies will now become more important and more difficult at the same time.  Nobody wants to be selling exactly the same product as a competitor, and so all the OpenStack-committed players are going to be casting around for something that makes them different, better.

For the cloud industry, public and private, a clear OpenStack victory may be important primarily because it will push OpenStack and Amazon’s AWS/EC2 into a clear face-off.  Competition between Amazon and OpenStack clouds will drive down basic IaaS pricing, which I think is going to create a public-cloud focus on things that can extend profits.  That will be true both for Amazon, whose AWS services already include a lot of optional value-added features, and for OpenStack, which has been slow to develop any specific PaaS or SaaS extensions.

The impact on the industry overall is complicated.  It’s even more complicated when you consider that SDN’s OpenDaylight is also generating a lot of M&A interest.  Maybe all these acquiring vendors see the future.  Maybe they are all just knee-jerking to the same doctor’s mallet.  Whatever the case, I think what we’re seeing has a common theme, and that is the notion that the future will demand more agility and more personalization than the present.

Wireline networking isn’t going to change very quickly because what you network with it is either branch offices or homes.  Nobody is going to build either of these just to consume more efficient connectivity, so it’s fair to say that favorable pricing and features won’t build new customer connections.  You’d have to sell more services to the same location.

The services we know how to sell to those locations are bulk transport.  That’s not going to be more profitable for operators in the future, so it won’t be profitable for vendors to support it.  What we’re looking for is stuff that’s above the connection/transport zone, and that means features and applications.  Networking these things is different from networking branch offices and homes, because there’s a variability in higher-level service relationships that branches and homes don’t generate.  The cloud and NFV justify SDN.

Mobility is the instrument of personalization, and also what justifies the cloud, and that’s the causal chain that ultimately changes everything.  A hundred workers sitting in a branch office present a fairly consistent network requirement.  If those same hundred workers run out into the wide world and connect via mobile networking, it’s a whole new ball game.  Even if those workers did the same jobs, they’d need a different level of support from networking in the form of connectivity, and from IT in the form of productivity enhancement.  This is what injects change, in the form of dynamism, into the network market.

It’s also what’s injecting the motivation for M&A.  I think vendors are starting to see that things are changing.  Yes, they’re in denial.  Yes, they see the symptoms and not the sweep of the future.  But they see something, and they’re reacting to it.  Most of the moves so far are tactical in nature; “We’re hearing a lot about OpenStack lately, so maybe we should do/know something there.”  It would be dangerous for us to assume that Cisco had suddenly changed its view of the future because it bought an OpenStack player, or that IBM did that.

Cisco probably does see that there are going to be a lot of clouds, and that services and applications are probably going to have to span them.  “Intercloud” is Cisco’s cloud strategy so that’s probably good at one level, but they have to make Intercloud more than a title on a PowerPoint.  Piston might help in that regard.

IBM probably realizes that private cloud is as likely (or more likely) to be driven by business operations as by the CIO, and that they need a nice wide on-ramp with an easy gradient if they want to succeed in private and hybrid cloud.  There are much broader implications to the notion of business-driven cloud than the ease of using the cloud stack software, but IBM would have to change a lot to address these.  So they’ll focus, via Blue Box, on what they feel they’re ready to accept.

If you have a sense of déjà vu here it’s not surprising.  We have at the same time a vision of a radically different future and a bunch of people who are tying themselves to the starting gate in the race to own it.  This isn’t a great surprise, of course, but it does raise the question of whether these early movers in OpenStack are really attacking anything or simply preparing a defense.  That would raise the question of who the attacker might be.  Amazon might yet end up a winner here, simply because a drive toward OpenStack and OpenDaylight by major vendors, driven more by turf protection than opportunity realization, might well stall OpenStack progress completely.

There’s also the impact on NFV.  I said years ago, in the ETSI NFV dialogs leading up to work on the specs, that if all NFV proposed to do was to deploy VNFs they might as well simply reference OpenStack.  There’s a project within OpenStack to telco-ize the model, and if enough vendors jump to support OpenStack that project might get enough backing to tap off the opportunity for “limited-scope” NFV.  That would force differentiation to address higher-level orchestration and management issues, which most vendors are still dabbling in.

What this means is that all this OpenStack stuff could be very important indeed, and something we’ll all have to keep an eye on.

HP Shows a Bold Vision for the Future of NFV, SDN, and the Cloud

SDN, NFV, and the cloud all suffer from a common challenge.  You have to start somewhere and exercise prudence on creeping scope, but the changes all could make in IT/network practices are so profound that you need to look beyond early applications in search of both an architecture and a mission broad enough to cope with growth.  I’ve said several times recently that current NFV projects in particular were perhaps examining the toes of the technology too much rather than stepping boldly toward the future.  That puts more pressure on vendors to make that broad business case.

HP is IMHO the functional lead in NFV; their stuff is the most complete and the most credible of all the solutions I’ve examined.  This week at their Discover event in Las Vegas, they opened up a bit on their vision of the future, both for their OpenNFV approach and for how NFV marries to SDN, the cloud, and future operator business models.

One of the critical issues in NFV is how NFV deployment in the explicit ETSI sense of virtual functions and hosting can be integrated with legacy elements of the same service, SDN components, and perhaps most critically operations and management systems.  HP proposes this be done using a combination of “universal orchestration” and an extended model of infrastructure management.

HP’s NFV Director has been enhanced to offer analytics integration and a dual-view option for service management or OSS/BSS versus an NFV view of management data.  It’s underpinned by a universal data model for structuring services and resources alike.  I like the unified approach, though I confess it might be a personal bias since I’ve not really been able to line up a divided-model approach against an equally good integrated-model approach to see if the benefits are real.

The new version of the model supports both the relatively limited MTOSI states and more generic, flexibly defined states.  If you can make the former work, the result is easier to integrate with TMF-level material like SID, but I think that MTOSI states are probably not adequate to represent complex NFV behavior like horizontal scaling.  The model also supports the notion of a “virtual VIM” that lets you model multiple options for deployment, like NFV versus legacy.

The extended infrastructure manager concept, which I’ll call “IM” just to save typing, is based on creating a unified data model to represent infrastructure that can generate services—you can model SDN, NFV, and legacy with the same model, which means that a common IM approach above the data model can link everything to higher-level functionality.  It’s this approach that gives HP’s OpenNFV product breadth to address a service end-to-end.

This all links to some of the future-trend stuff.  HP talked about a “service factory” and “mall” approach.  Architects can assemble stuff in the service factory and wring it out, sending it to the mall for retail offers.  The services can take advantage of underlying transport, higher-layer connectivity, network-related features like firewalls, network information gleaned from service operations, and data from other sources (more on this in a minute).  This creates a multi-layer operator business model rooted in traditional networking and building up to a more OTT-like picture.

HP’s view is that as time goes by, NFV will become more “cloud-like”, not so much by changing any of its technologies but by supporting services and applications that evolve as operators look for new revenues—for new stuff to sell in the “mall”.  They offered a vision of how that might work, too.  Users, both consumer and business, are evolving their position on what “the network” has to offer them as they become more dependent on mobile devices.  Mobile users move through what look like “information fields” created by their own behavior (social networking, for example) and by the sum of their environment.  These fields intersect with the users’ interests to create what the HP slides call a “Decision Fabric”.  From this fabric, you can build applications that support not knowledge but action, something that’s of higher value.

You can get information fields from what you already have.  Location-based services are an example, and so is that social-media context.  You can also visualize new developments like IoT as an information field, and in fact you can see the difference between the Decision Fabric approach and conventional IoT models pretty easily.  IoT in the HP view is almost a self-organizing concept, a marriage of data from sensors and analytics presented not as a bunch of IP addresses but as a service.

In fact, they shared the architecture of their IoT solution, and that’s exactly how it works.  You have a collection of devices on public or private networks that contribute to a repository of IoT knowledge that can be passed through filters in real time or in the form of analytic results.  While this model is specifically aimed at IoT, HP said that it would be the general model for their drawing data into their Decision Fabric.
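
A toy version of that flow, where sensors feed a repository that consumers query through filters and analytics rather than by device address, might look like the sketch below; it’s my own illustration of the architecture as described, not HP’s code.

```python
# Hypothetical sketch: IoT presented as a service -- consumers query a
# repository through filters instead of addressing individual sensors.

from statistics import mean

repository = []          # events land here from public or private networks

def ingest(sensor_id, kind, value):
    repository.append({"sensor": sensor_id, "kind": kind, "value": value})

def query(kind, reducer=mean):
    """Analytic view: callers never see sensor addresses, only results."""
    return reducer(e["value"] for e in repository if e["kind"] == kind)

ingest("garage-12", "occupancy", 0.75)
ingest("garage-17", "occupancy", 0.25)
print(query("occupancy"))    # 0.5 -- an aggregate "information field"
```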

They also said that the OpenNFV tools would be used to deploy the components of the Decision Fabric, and that it was applications like this that would shift NFV from provisioning relatively fixed pieces of service technology to coordinating highly dynamic services.  That’s where NFV is most useful of course; you don’t need a lot of service management technology to deploy something under a two-year contract.

The HP positioning on NFV, SDN, and the cloud poses an interesting challenge for the industry and for operators.  It may also challenge HP itself, all because of the fact that it extends the current rather narrow notion of NFV, brings in related technologies like SDN and the cloud, and magnifies the need to orchestrate everything related to service deployment and behavior and not just virtual functions.  A big story can have big benefits, but it’s also more complicated.

That’s the root of the dilemma we have with SDN, with NFV, and with the cloud.  We have glorious future visions of these things and we have a generally pedestrian view of their near-term evolution.  The cloud is going to change everything, yet most think of it as hosted server consolidation.  SDN is a revolution, but it’s a revolutionary way of building the very same services we already consume.  NFV is about orchestrating for dynamic service creation, but we think it will be dynamic creation of the very functional pieces we already have in a forty-dollar network access box you can buy at a technology store.

You can’t leap into a totally new world of networking, not in an industry with capital cycles of five to seven years on the average.  But you can’t evolve by just doing the same old stuff either.  I think what HP’s visions prove is that there are solutions out there that can create a real future for NFV and not just an extension of the present.  That’s true for other vendors too, and so the question for the industry and the vendors may now be whether they can show buyers that future without scaring them into immobility, or whether they can lead them step by step to the right place without ever telling them just where that is.  If one or the other can’t be done, then all our revolutions risk diffusing into footnotes on the technology story.

Alcatel-Lucent Has More NFV Game than it Shows

In a couple blogs last week I was critical of Alcatel-Lucent’s positioning of a new product (NSP) and their expressions of SDN/NFV opportunity in a white paper done in conjunction with AD Little.  I also raised questions about their positioning overall, and their ability to deliver on SDN/NFV benefit needs.  I had a chance to talk at length with them about their portfolio, and the good news is that they have a lot better technology than I thought.  The bad news is that they’ve been presenting it even more inconsistently than I’d feared.

NFV poses a lot of vendor challenges, and one we don’t often think about is that it can cross over product-group boundaries.  A complete NFV implementation that does everything I think is essential would have to include operations tools, NFV MANO and VNFM, a VIM/NFVI strategy, SDN, and legacy device control via something.  You’d also likely need some VNFs.  However, an ETSI-compliant NFV implementation could need only the MANO/VNFM/VIM/NFVI story.  Almost every NFV supplier out there takes a holistic view of NFV and touts as many things as they have from my full list.  What I’ve found out is that Alcatel-Lucent doesn’t do that, which means that if you look at their NFV material you don’t see things that operators and I would all put into the NFV story.  You’d think Alcatel-Lucent didn’t have the stuff, in fact.

If you want to see a complete picture of NFV from Alcatel-Lucent, there is one, but you don’t find it in NFV material.  Instead you have to look at Motive and in particular SURE, and neither of these is easy to find on the company website or easy to connect with either Alcatel-Lucent’s SDN or NFV strategy.  Here are some helpful links to save you the problems I had!  Motive is HERE and SURE is HERE.

Motive is the family of products that Alcatel-Lucent aims at service management and operations.  If you look at the components of Motive you see all the familiar operations stuff but you don’t see specific links to emerging critical technologies like SDN, NFV, and the cloud.  It’s not that they aren’t there, but you have to dig even deeper, into the modeling approach itself, which is SURE.  There’s a little blurb on SURE on the Alcatel-Lucent site, but it doesn’t even scratch the surface.  I got some slides on it but they’re NDA, so I’ll have to try to verbalize.

SURE is the acronym for Motive Service and Unified Resource Engine, which the company presents as a “data model” but which is in effect an operations architecture.  An Alcatel-Lucent slide on SURE shows it as being a TMF-modeled service management framework that actually looks quite a bit like the package that Oracle has announced as an SDN strategy.  Interestingly, Alcatel-Lucent’s SURE is actually a bit more modern and arguably more complete even than Oracle’s, even though Alcatel-Lucent seems not to present it in an NFV/SDN context.

My secret slides show SURE as a data model that integrates the Motive components with services and with each other.  SURE and Motive create a complete service management framework that includes both customer-facing service (CFS) and resource-facing service (RFS) models that correspond to the TMF modeling and also generally to my own CloudNFV and ExperiaSphere models of service and resource domains.  The models that make up services are “fractal” in that either CFS objects or RFS objects can decompose into lower-level objects of the same type.  CFSs can also cross the service/resource border and link to RFSs that then commit the resources.
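
To show what “fractal” decomposition means here, a minimal sketch of the TMF-style CFS/RFS idea as I read it (my own rendering, not Alcatel-Lucent’s data model) follows.

```python
# Hypothetical sketch: customer-facing (CFS) and resource-facing (RFS)
# service objects, either of which can decompose into more of the same,
# with CFSs allowed to cross the border and bind to RFSs.

class ServiceObject:
    def __init__(self, name, facing, children=None):
        self.name = name
        self.facing = facing          # "CFS" or "RFS"
        self.children = children or []

    def decompose(self, indent=0):
        print("  " * indent + f"{self.facing}: {self.name}")
        for child in self.children:
            child.decompose(indent + 1)

vpn = ServiceObject("Business-VPN", "CFS", [
    ServiceObject("Site-Access", "CFS", [          # a CFS decomposing into a CFS...
        ServiceObject("Access-Circuit", "RFS"),    # ...which binds to RFSs
        ServiceObject("vCPE-Firewall", "RFS"),
    ]),
    ServiceObject("Core-Transport", "RFS"),
])
vpn.decompose()
```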

Under the SURE model structure, Alcatel-Lucent explicitly places SDN, NFV, and legacy elements.  Each of these is represented by what could be called a resource or infrastructure manager.  I think that would suggest that each could then be considered the realization of an RFS.  You could organize, orchestrate, and manage hybrid SDN/NFV/legacy services with the SURE/Motive approach, which raises two interesting questions.

The first and most obvious question is why Alcatel-Lucent doesn’t tout this in their SDN and NFV positioning.  I was able to find the Motive approach on their website only with some difficulty after knowing it existed and what it was called.  I didn’t get any comment from operators on Motive in association with SDN/NFV, and in fact a couple of operator transformation plans I just received outline an almost perfect mission for Motive and identify vendor candidates, but don’t list Alcatel-Lucent among them.  SURE, which should be the heart of an Alcatel-Lucent SDN/NFV positioning, gets perhaps a paragraph on the website, and its description isn’t even all that helpful.

I don’t want to get hung up on the lack of positioning, though I think the fact that key operator staff don’t even know about the stuff demonstrates there’s an issue.  With Motive and SURE, Alcatel-Lucent is surely superior to Oracle and nibbling at the functional heels of (IMHO) market leader HP.  There’s enough substance here for Alcatel-Lucent to drive an effective NFV engagement and to take rational NFV services to field trial and deployment.  Not many vendors can say that.

The second question is whether an approach like this (which Oracle also takes) that separates OSS/BSS orchestration from whatever goes on in NFV/MANO is better than one that integrates orchestration across everything that needs orchestrating.  It’s an unfair question at one level because the NFV ISG doesn’t consider operations or legacy network elements in-scope, though it seems to be expanding at least its legacy strategy for connections in association with NFV deployment.  If the standard doesn’t consider broad orchestration a goal, you can’t fault vendors for not broadly orchestrating.

Well, maybe you can.  I think that there are significant advantages to be gained from a common data model running from the top of a service to the bottom.  This structure means that all of the tools for modeling and linking process elements are common, which means that it’s easy to make a model hierarchical.  If you have to cross modeling approaches, you first have to define an open boundary and second have to ensure that you have everything you need passed across it.  Even then there is a concern that you may have different visibility/management tools at different places.

Despite the possible benefits of a common model, vendors who have it seem to be uneasy about trotting it out in association with NFV.  I suspect the issue comes down to sales.  The value of a common model is linked to broad application of orchestration.  If you have engagement in the NFV PoCs, do you want to expand the scope of your effort to involve new carrier constituencies and potentially new and complicated issues?  Or do you want to press forward, order-book in hand?

The problem is that I don’t think that blind momentum on current PoCs is going to take operators, or NFV, where we need to go.  I think it’s going to take harmonized efforts involving operations, the cloud, SDN…and NFV.  We can’t get those efforts if all the pieces except NFV are walled off.  Silos are what we’re trying to get rid of, so we don’t need to invent a philosophical siloing level for our key technologies.

Alcatel-Lucent has other hidden-gem assets, some of which I’ve seen demonstrated in a very impressive way.  Their virtual IMS and Rapport strategy is exceptionally strong I think, and could be easily coupled with a strong NFV and operationalization positioning.  You can get all of that with the right combination of people but not on the website or in any public marketing material.

The whole is supposed to be greater than the sum of the parts, which is true in Alcatel-Lucent’s sense but hard to determine given that the parts are difficult to locate.  While Alcatel-Lucent is far from the only vendor to have under-positioned their NFV-and-SDN-related assets, they have more to lose by doing so.  I saw, at an HP event this week, a broadening of NFV scope and positioning.  Given that HP is Alcatel-Lucent’s primary functional rival in NFV, Alcatel-Lucent needs to start singing their own multi-product song—in harmony.

Making the Most from Intel’s Altera Deal

The decision by Intel to acquire custom-chip-and-FPGA vendor Altera is another indicator that networking may be driving incremental opportunity in the server space.  As the WSJ points out, more than half of Intel’s operating profits come from servers, though personal-system chips account for most of the company’s revenues.  You can’t afford to ignore a change in your profit-cow’s pasture, but what exactly is driving the changes?

There’s a lot you can do with Altera silicon, of course, but the big applications probably relate to networking.  FPGAs are great ways of doing those little complicated communications-oriented functions that tend to bog down traditional servers.  If servers are going to serve in a network mission, you’d expect them to need that special stuff.

There is no question that servers have been getting more network-centric.  That’s been true since we exited the age of batch processing, in fact.  There’s no question that higher-speed connections to servers have started to impact server design, or that pools of servers linked in a virtualized data center or a cloud will impact the role of networks in connecting servers, and impact servers’ network interface design.  Both of those have been true for some time too.  I think we have to look further.

One obvious factor in the shift in server design is the transfer of functionality from network devices to servers and server pools.  Both NFV and SDN would have the effect of making servers more “appliance-like” in what they did, and Altera’s FPGA technology has always been targeted in part at custom appliances and network devices.  The only question is whether the trend is large enough to justify Intel’s M&A decision.

I’ve been modeling NFV impact for a couple years now, and my numbers have been pretty consistently showing that optimal deployment of NFV would generate about 100,000 new data centers worldwide, making NFV the largest single source of new data centers.  It’s hard to say how many servers would be involved here because of the variety of density options, but I think it’s fair to say that we could be looking at tens of millions.  Nobody would sneeze at a market that big.

Intel seems to have grasped the scope of NFV opportunity some time ago.  Wind River’s Titanium Server (formerly Carrier-Grade Communications Server) targets the VNF hosting market and is the basis of a number of partnerships with server vendors.  Intel is also a Platinum member of OPNFV, the group driving an open-source reference implementation for NFV.

If we have all these new servers, we may have something that on the surface seems to defy NFV’s notion of “COTS”, because specialized server needs can justify specialized server technology if there’s enough volume to fill a bunch of servers.  Resource efficiency is critical in NFV and the cloud and you don’t want to segment resource pools unnecessarily, but a requirement for special data handling that’s pervasive enough justifies its own pool.  Thus, a failure of Intel to address the need could fill a pool of NFV servers with another vendor’s chips.

Whether all this interest, including Altera, will pay off for Intel may depend on the pace and manner of NFV adoption—the question of what an “optimal deployment” of NFV would be.  The simple reality of any technology shift is that it happens to the extent that its benefits pay back on the investment needed to support the shift.  Given that the primary benefits of NFV deployment—operations efficiency and service agility—are not in fact delivered by in-ETSI-scope NFV, that may pose some questions for Intel as it digests Altera.

Perhaps the biggest question is whether the SDN track to success could be as valuable as the NFV track, or whether it could hurt Intel and servers in the long term.  I pointed out in a couple earlier blogs that if you took the “manager” concept of NFV and pulled it out to create a plug-and-play version of infrastructure and manager, you could then drive that combination without NFV and direct it at pure NaaS missions.

If you operationalize NaaS using OSS/BSS orchestration from any of the vendors who supply it, you could deliver at least a big piece of the operations efficiency and service agility benefits that NFV needs.  You could boost revenues with NaaS.  Given that, and given the fact that vendors like Cisco might love the idea of proving out network efficiency and revenue benefits with legacy devices, might you reduce the incentive to move to NFV?  It depends.

I think that as services evolve to incorporate more mobile/behavioral elements, network infrastructure will evolve toward the cloud.  Operationalizing the cloud is clearly a mission NFV could support, since VNFs and elements of mobile/behavioral services would look much the same.  The trick is to make this happen, because of those pesky NFV scope issues I’ve talked about.

For Intel, that may be the challenge to be faced.  Legacy network practices have blundered along in the traditional circuit/packet model and they could be on the verge of escaping that mold.  NFV could be instrumental, even pivotal, in making that escape, but I think it’s becoming clear that other stuff could also force the change in the network service and infrastructure model.  That wouldn’t prevent a cloud revolution for operators, but it could divide the transformation process into two phases—operations-driven and service-cloud-driven.  The result might be a delay of the second phase because the first could address (for a time) the converging revenue/cost-per-bit curves.

Intel needs to have the future of networking set by NFV and the cloud.  That means that they need to drive NFV PoCs toward field trials that include service-lifecycle-management, operations efficiency, and agility.  And that may not be easy.

In nearly every operator, NFV PoCs are totally dominated by the standards and CTO people, and the scope of the trials has been set at what the term “PoC” stands for in the first place—proof of concept.  Can “NFV” work?  Somebody will have to promote broader operations integration here, and Intel would be a very logical source.  They could do that in three ways.

The first is to encourage both the NFV ISG and the OPNFV group to formally encourage PoC extension or new PoCs to focus on operations and infrastructure-wide orchestration.  This is a given, I think, but the problem is that there will likely be enormous resistance from network vendors.  I think more will be needed.

The second approach would be to work closely with server vendors to take a broader and more aggressive stance on the scope of NFV.  HP has a very strong NFV suite and is a server partner of Intel and Wind River, for example.  Alcatel-Lucent and Ericsson have their own server strategies based on Intel.  Could Intel promote a kind of NFV-server-vendor group that could develop its own tests and trial strategy that’s aimed at broader orchestration and agility goals?

The final approach would be to actually field the necessary technology elements as Intel or Wind River products.  To have this approach generate any utility, Intel would have to preempt any efforts to encourage standards progress and even perhaps create some channel conflicts with its current server partners.  I think this is a last-resort strategy; they might do this if all else failed.

To my mind, Intel is committed to Door Number Two here whether it realizes it or not, largely by default.  Otherwise it’s exposed to the risk that NFV won’t develop fast enough to pay off on the Altera investment, and also to the risk that NFV itself will either be delayed in realizing those heady server numbers, or fail to realize them altogether.  I don’t think those would be smart risks for Intel to take.

How SDN Models Might Decide the “Orchestration Wars”

One of the interesting things about SDN is that it may be getting clearer because of things going on elsewhere.  We still have a lot of SDN-washing, more models of what people like to call “SDN” than most would like, but there’s some clarity emerging on just how SDN might end up deploying.  I commented on SDN trends in connection with HP’s ConteXtream deal last week, and some of you wanted a more general discussion, so here goes!

There have been three distinct SDN models from the first.  The most familiar to most is the ONF OpenFlow “purist” SDN, the one that seems to focus on SDN controllers and white box switches.  The second is the overlay model, popularized by Nicira, which virtualizes networks by adding a layer on top that’s actually a superlayer of Level 3, not Level 4 as some would like to think.  The final model is the “software-controlled routing/switching” model, represented in hardware by most vendors (Cisco in particular) and in software by virtual-router products like Vyatta from Brocade.

Virtual switching and routing, and overlay SDN, are all the result of a desire to virtualize networks just like we did with computing.  In effect, these create a network-as-a-service framework.  That can be done with ONF-flavored SDN too, but the white-box focus has tended to push this model to a data center switching role, and to a role providing for explicit forwarding control in the other models.

Too many recipes spoil the broth more decisively than too many chefs.  Diversity of approach and even mission isn’t the sort of thing that creates market momentum.  What I think is changing things for SDN is a pair of trends that are themselves related.  The first is rapidly growing interest in explicit network-as-a-service, both for cloud computing and for inclusion in retail service offerings.  The second is NFV, and it may be that NFV is why the NaaS interest was finally kindled in earnest.

NFV postulates infrastructure controlled by a “manager” of some sort.  Initially this was limited to a virtual infrastructure manager, but many in the ISG are now accepting a “connection manager” responsible for internal VNF connectivity, and some are accepting that this might be extended to support connection to the user.  The important notion here is the “manager” concept itself.  You have to presume (the ISG isn’t completely clear here so “presuming” is a given) that a manager gets some abstract service model and converts it into a service.  That’s a pretty good description of NaaS.

If a manager can turn an abstraction into connections under the control of MANO in NFV, it’s not rocket science to extend the notion to applications where there’s no NFV at all.  I could use an NFV VIM, for example, to deploy cloud computing.  I could use a “connection manager” to deploy NaaS as a broad retail and internal service.

In NFV, most would think of the connection manager as controlling SDN, meaning that there’s an SDN controller down below.  That would likely be true for NFV inter-VNF connections, and it could also be true for edge connections to NFV services.  But logically most “connections” beyond the NFVI in NFV would be made through legacy infrastructure, so connection managers should be able to control that too.  Some would use OpenDaylight, and others might simply provide a “legacy connection manager” element.

It’s this that makes things so interesting, because if we can use connection managers to create NaaS and we can have legacy connection managers, we can then use legacy infrastructure for NaaS.  The manager-NaaS model then becomes a general way of addressing infrastructure to coerce it into making a service that can be sold.
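
A minimal sketch of that manager-NaaS idea (the class names and the single realize() call are my own invention, not anything from the ISG) shows the same abstract connection model handed to whichever manager fronts the infrastructure that will carry it.

```python
# Hypothetical sketch: different "connection managers" realize the same
# abstract NaaS request over SDN-controlled or legacy infrastructure.

class SDNConnectionManager:
    def realize(self, model):
        # Would push forwarding rules via an SDN controller.
        return f"sdn-flows for {model['endpoints']}"

class LegacyConnectionManager:
    def realize(self, model):
        # Would drive provisioning on existing routers and switches.
        return f"legacy provisioning for {model['endpoints']}"

def network_as_a_service(model, managers):
    """The caller sees only the abstraction; the manager hides the how."""
    return managers[model["domain"]].realize(model)

managers = {"dc-fabric": SDNConnectionManager(), "wan": LegacyConnectionManager()}
print(network_as_a_service({"domain": "wan", "endpoints": ["siteA", "siteB"]}, managers))
print(network_as_a_service({"domain": "dc-fabric", "endpoints": ["vnf1", "vnf2"]}, managers))
```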

In TMF terms, this might mean that “managers” are providers of resource-facing services.  If that’s true, then orchestration at the service level, meaning within OSS/BSS, might be able to do all of the higher-level orchestration involved in NFV.  “Higher-level” here would mean above MANO, above the process of deploying and controlling virtual functions.

Oracle sort-of-positioned itself in this camp with their NFV strategy.  I commented that it was the most operations-centric view, that it had TMF-style customer-facing and resource-facing services, and that it seemed to be positioned not as a limited implementation of VNF orchestration but as a broader approach, perhaps one “above” NFV.

I’ve been saying for quite a while that you need total cross-technology, vertically-integrated-with-OSS/BSS orchestration to make the service agility and operations efficiency business cases for NFV.  There have always been three options for getting there.  First, you could extend NFV and MANO principles upward.  Second, you could extend OSS/BSS downward.  Third, you could put an orchestration stub between the two that does the heavy lifting and matching between the environments of OSS/BSS and NFV.  How would an SDN-and-legacy NaaS model influence which of these options would be best, or most likely to prevail?

It might not change much, even if the NaaS story comes about.  The NFV ISG has taken a very narrow position on its mission—it’s about VNFs.  If you presume that the evolution to NFV comes about because services are converted from appliance/device-based to VNF-based, then the easiest way to orchestrate would likely be to extend MANO upward.  If you presume that NFV deploys to improve service agility and operations efficiency, then orchestration has to provide those things, and even if you orchestrated VNF-based versions of current services you’d still have the same operations problems unless something attacked that area too.

There’s some pressure from operators conducting NFV trials to broaden the trial to include operations, and also some to demonstrate specific efficiency and agility benefits.  However, these trials and PoCs are based on the ISG model of NFV and so they’ve been slow to advance out of the defined scope of that body.  Operators haven’t told me of any useful SDN orchestration PoCs or trials, and most of the operations modernization work in operators is tied up in long-term transformation projects.

That’s what’s creating the race, IMHO.  NFV could win it by growing “up”, literally, toward the higher operations levels and “out” to embrace legacy elements and broader connection-based services.  SDN could win it by linking itself convincingly to an operations orchestration approach, and OSS/BSS could win it by defining strong SDN and NFV connections for itself.

Who will win is probably up to vendors.  OSS/BSS has always moved at a pace that makes glaciers look like the classic roadrunner.  NFV is making some progress on generating a usefully broad mission, but not very quickly.  So I’m thinking that the question will come down to SDN.  Can SDN embrace an orchestration-and-manager model?  The competitive dynamic that might be emerging is what will answer that question.

How HP’s ConteXtream Deal Might Change the Game

Hint:  It’s not how you think!

HP is certainly at least one of the functionality leaders in the NFV race, and the fact that they’re an IT player is important to senior management at many operators.  They’ve won what’s arguably the most important NFV deal yet (Telefonica), and they’re on track to deliver convincingly on operations integration.  In one regard, though, they reminded me of the Tin Man in the Wizard of Oz; they lacked a heart.

SDN may be the critical heart of any NFV deployment, and HP’s SDN position was “referential” in that they supported OpenDaylight.  That’s not enough for some operators, and HP has now fixed that by acquiring ConteXtream.  The move may signal some new HP aggression in the SDN/NFV space, and it may even move the ball in terms of “network as a service.”  Or it may be a simple tactical play, one that could even go wrong.

ConteXtream provides a form of “overlay SDN” not unlike the Nicira model that first popularized SDN as a concept.  That approach offers three potentially significant benefits.  First, overlay SDN is infrastructure-independent and so it doesn’t force HP to take a position on network equipment technology.  Second, ConteXtream implements OpenDaylight and so it reinforces HP’s strategic SDN commitments but realizes them in a form operators can buy and deploy (and they have done so already).  Finally, overlay SDN models are fairly easy to make end-to-end.  All you need is some software to do the encapsulation/decapsulation at any access point and you have a virtual network that behaves much like a VPN but can be deployed with incredible agility and in astonishing numbers.
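
To show why that is simple, here is a toy illustration of the overlay principle: wrap traffic in a virtual-network identifier at the access point and unwrap it at the far end.  The header layout is invented for illustration; real overlays use standard or vendor encapsulations such as VXLAN.

```python
# Toy illustration of the overlay principle: the header layout here is
# invented; real overlays use standard or vendor encapsulations such as VXLAN.
import struct

def encapsulate(vnet_id: int, payload: bytes) -> bytes:
    # Prepend a 4-byte virtual-network ID to the original packet or frame.
    return struct.pack("!I", vnet_id) + payload

def decapsulate(wire: bytes):
    # Strip the ID off again at the far-end access point.
    vnet_id, = struct.unpack("!I", wire[:4])
    return vnet_id, wire[4:]

# Any two access points that agree on the wrapper share a private virtual
# network; the underlay just forwards opaque packets and never has to change.
frame = encapsulate(7001, b"customer traffic")
print(decapsulate(frame))
```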

There’s also a tactical issue that ConteXtream addresses.  HP has been forced to work with SDN from other players, including major competitors, at key NFV accounts.  This obviously keeps the competitors in play where HP would like to have complete control, and having their own SDN strategy could be a big step in that direction.

The best of all possible worlds would be that HP takes the ConteXtream deal for tactical benefits and strategic potential, but it’s simply not possible to tell whether that’s the case.  In his blog on the acquisition, Saar Gillai (head of HP’s cloud and NFV business) cites both service-chaining benefits and subscriber connection benefits for ConteXtream, and that could be an indicator that HP intends to use the deal both to support its current PoC activity (where other SDN vendors are sticking their noses in) and also to address broader SDN service issues.

The connection between all of this and NaaS is the big question, for SDN and for NFV.  There is no question that SDN is critical “inside” NFV where connections among VNFs have to be made quickly, efficiently, and in large numbers.  If you have SDN in place for that purpose, it would make sense to use SDN to provide connections outside the NFV enclave too, and that could open not only a broader NaaS/SDN business but also expand the scope of NFV.

We have virtual networks today using legacy technology; VPNs based on IP/MPLS and VLANs based on Ethernet standards.  The problem with the approach is that these networks are based on legacy technology; they have to be supported at the protocol/device level and there are limitations in their number and the speed at which you can set them up.  Overlay SDN can add virtual networking to legacy networking, removing most of the barriers to quick setup and large numbers of users without changing the underlying network or even requiring any specific technology or vendor down there.  I’ve blogged a number of times about the benefits of application-specific and service-specific virtual networks; overlay SDN can create them.  You could have NaaS for real.
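
Here is a short sketch of why that matters operationally: an overlay NaaS layer can mint per-application or per-service virtual networks as pure software state, with no device reconfiguration and no hard ceiling like the 4094-ID VLAN limit.  The class and method names are illustrative only.

```python
# Sketch of an overlay NaaS control point: virtual networks are just software
# state, so they can be created per application or per service on demand.
# Names are illustrative; a VLAN underlay, by contrast, tops out at 4094 IDs.
import itertools

class VirtualNetworkService:
    def __init__(self):
        self._ids = itertools.count(1)
        self.networks = {}

    def create(self, owner: str, purpose: str) -> int:
        vnet_id = next(self._ids)
        self.networks[vnet_id] = {"owner": owner, "purpose": purpose, "members": set()}
        return vnet_id

    def attach(self, vnet_id: int, endpoint: str) -> None:
        self.networks[vnet_id]["members"].add(endpoint)

naas = VirtualNetworkService()
crm_net = naas.create("acme", "crm-application")
naas.attach(crm_net, "branch-17")
naas.attach(crm_net, "datacenter-east")
print(crm_net, naas.networks[crm_net])
```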

This summary demonstrates three possible values of ConteXtream: you can connect VNFs with it, extend NFV to users with it, and build it into cloud and telecom services even without NFV.  HP seems likely to be committed to both NFV-related missions, and if they were to embrace the third, pure-SDN mission and add in their operations-and-legacy orchestration capability, they could build services that support new NaaS models, which could start a legitimate SDN arms race to replace the current PR extravaganza we tend to have.

The competitive dynamic between HP and Alcatel-Lucent might be a factor in that.  Nuage is still in my view the premier SDN technology, but as I’ve noted Alcatel-Lucent has tended to soft-ball Nuage positioning, perhaps out of fear of overhanging their IP group’s products and perhaps simply because of product-silo-driven positioning that I’ve already commented on.  HP fires a shot directly at Alcatel-Lucent with the ConteXtream deal, one they might not be able to ignore.

If Alcatel-Lucent takes a more aggressive NaaS position, it would follow that Juniper and Cisco could also be forced to respond in kind.  It’s possible that virtual-router leader Brocade (with Vyatta) could then respond, and all of this could create a new model for network connectivity based on SDN.

NFV could be impacted by this evolution too.  Overlay SDN doesn’t provide a direct means of coupling connection-layer QoS to the network layers that could actually influence it.  You can do that with operations orchestration of the type used in NFV, which could pull NFV orchestration up the stack.
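
A rough sketch of that coupling problem follows, with invented names and policy fields: the overlay itself can only express a desired QoS class, and some orchestration layer has to translate it into a request the underlay can act on.

```python
# Rough sketch only: the overlay expresses a QoS class, and an orchestration
# layer translates it into a request the underlay could act on.  The map
# and policy fields are invented for illustration.
QOS_MAP = {
    "gold":   {"dscp": 46, "bandwidth_mbps": 50},
    "bronze": {"dscp": 0,  "bandwidth_mbps": 5},
}

def bind_overlay_to_underlay(vnet_id: int, qos_class: str) -> dict:
    """What an orchestrator might hand to an underlay controller or policy manager."""
    return {"virtual_network": vnet_id, "underlay_policy": QOS_MAP[qos_class]}

print(bind_overlay_to_underlay(7001, "gold"))
```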

Oracle’s positioning might also play in this.  Oracle has been pushing a TMF-and-operations-centric vision for NFV, but its Oracle SDN strategy for enterprises includes firewall, NAT, and load-balancing features that are most often considered part of NFV service chaining.  Augmented SDN could then be seen as a way of bridging some NFV features to the enterprise.  Since both Alcatel-Lucent and HP have SDN now and since both have NFV features, might this presage an “NFV lite” that adds some capabilities to the enterprise?  Remember that Alcatel-Lucent’s Rapport makes IMS a populist-enterprise proposition.

The net here is that an arms race with SDN could actually open another path to both operations orchestration and service opportunity.  In fact, you could secure a better cost/revenue picture from operations orchestration and SDN in the near term than from NFV, presuming you did NFV without those opex orchestration enhancements.  Some might think that means that NFV is at risk of being bypassed, but it’s not that simple.

In the long term, service provider infrastructure is a cloud on top of dumb pipes.  We have over a trillion dollars a year on the table from cloud-based services so dynamic that NFV will be needed to deploy and orchestrate them no matter whether we call them “network functions” or “cloud applications”.  What NFV is at risk for is the loss of a valuable early deployment kicker.  We can do without NFV today, but we’ll be sorry tomorrow if we try to do that.

How Operator Constituencies are Groping the SDN/NFV Elephant

I often get emails on my blogs from network operators (and of course network vendors too, but those are another story).  One of the things I get from those emails that I find particularly fascinating is the difference in perspective on SDN and NFV between the pillars of power in operator organizations.  We talk all the time about what “operators” think or want, but the fact is that their thoughts and wants are not exactly organized.  Ask a given operator a question on their plans for SDN or NFV and you’ll get different pictures depending on who and where you touch, just like the classic joke of “groping the elephant” behind the screen and identifying the parts as trees, cliffs, snakes, etc.  I thought it might be interesting to sort the views out, particularly with some feedback on the comments I made in Tuesday’s blog.

One group of people, the “standards and CTO” people, see SDN and NFV as technology evolutions, primarily.  To their minds, the value proposition for them is either fairly well established or not particularly relevant to their own job descriptions, and the question is whether you can make the technologies work.  This group generally staffs the industry initiatives like the ONF and the NFV ISG, and they’re focused on defining the way that things work so that SDN and NFV can actually do what legacy technology already does, and more.

Within this group, SDN and NFV deployment is seen as something for 2018 to 2020, because it will take that long to go through the systematic changes in infrastructure that deployment of either would require.  This group generally sees SDN as having an earlier application than NFV, and as requiring the fewest enhancements or changes to specifications to make it work.  Among the S&CTO crowd, almost 80% think SDN is already in the state needed for deployment.  NFV is seen as ready for deployment by less than half that number, though both technologies are thought to be ready for field trials.

The greatest challenge for both SDN and NFV is seen by the S&CTO group as “product availability.”  For SDN, the feeling is that a mature controller and switch/router products (including “white box” devices) from credible sources will have to be available before full deployment would be considered.  The group sees vendors as dragging their feet, and in more than half the cases actively obstructing the standards process for both SDN and NFV.

The second group is the CIO and Business/Operations group.  This group believes that current activities for both SDN and NFV border on “science projects” because the linkage of the new technologies and their expected behaviors to OSS/BSS has not been effectively defined.  That means that from the perspective of CIO&O, the value proposition for SDN or NFV deployment is incomplete.

Whose fault is that?  Most everyone’s.  Three-quarters of the group think that the ONF and the NFV ISG have dropped the ball on management and operations integration.  They believe that proposing new infrastructure demands proposing effective management/operations models for it.  A slightly smaller percentage thinks that the TMF has dropped the ball, having simply taken too long and shown too little insight in moving toward a better operations model.  Almost all of them blame vendors for what they see as the same-old-same-old attitude toward operations/management—let the other guy do it.

For this group, the biggest problem for SDN and NFV is the lack of a suitable operations framework to support the technologies themselves, much less to deliver on the scope of benefits most hope for.  One in nine think that SDN/NFV can be operationalized based on current specifications.  One in eleven think that operations efficiency benefits or service agility could even be delivered based on current ops models.  Almost exactly the same number think that vendors will have to advance their own solutions, and this group holds out the most hope for vendors who have OSS/BSS elements in their portfolios.

The third group is the CFO organization, and this group has very different views from both the other groups.  For CFOs, the problem isn’t and never was “SDN or NFV deployment” per se.  They don’t see technology deployment in a vacuum; it has to be linked to a systematic shift in business practices—what has been called “transformation”.

Transformation is apparently a cyclical process.  Operators’ CFO groups say that transformation has been going on for an average of a decade, though at first it was focused on “TDM to IP”.  It’s enlightening to hear what they think is the reason for cyclical transformation: eight out of ten said the IP transformation didn’t deliver what they’d hoped for in terms of operations efficiency and service agility.  To many of these, the lesson is not to tie “transformation” as a goal to a specific technology realization.  Rather, you have to tie technologies to the goal.

That’s where the problem lies.  Fewer than one in twelve in the CFO group thought that SDN or NFV had a convincing link to transformation.  Interestingly, less than a third thought it was a priority to create such a link, which shows that CFOs and their direct reports are more interested in business results than in promoting technologies.  And if you press, more than half the CFO group thinks that the right approach to transformation should be “top down”, meaning it should focus on service lifecycles and operations and not on infrastructure.  Not surprisingly, this group tends to take an operations-centric view of the technology needed, and they also believe that vendors will have to play a key role in bringing the transformation about; standards are too slow.

The final group is Network Operations, the group responsible for ongoing management of the network.  This group (perhaps unsurprisingly) has the least cohesion in the views they express.  In the main, they see the issues of SDN and NFV to be in another group’s court for now.  However, I do get a couple of consistent comments from the operations group.

The first is that they believe that NFV and SDN are both more operationally complex than the alternatives, and that little or nothing has been done in trials or PoCs to address this.  At this point they’re seeing this as a pure network management problem, not a customer care problem.

The second comment from operations is that vendors who supply traditional gear are telling their operations contacts that both SDN and NFV won’t roll out suitably until 2016.  There are no reports of vendors suggesting they consider SDN/NFV alternatives in current purchases, or consider holding back on deals to prevent being locked into an older technology.

If you dig through all this, there are a couple of themes that stand out.

First, operators themselves are not organizing their own teams around an NFV strategy and engaging all their constituencies.  Even where a group cites something that is clearly identified with another (CFO people and service lifecycle, for example) there is little or no attempt to coordinate the interests.  As a result, there is no unified view of either SDN or NFV across all the constituencies and no solid broad support for either concept.  Few operators are making SDN/NFV a cross-constituency priority, and some are planning “transformation” without any specific goal to employ either SDN or NFV in it.

Second, too much time has been and is being spent proving something nobody seems to doubt, which is the concept of NFV.  The problem is that the benefit case for SDN and NFV necessarily cuts across all these groups, and nothing is really uniting them.  Most of this problem can be attributed to the relatively narrow scope of both SDN and NFV standardization; both are too limited to cover enough infrastructure and practices to make a convincing business case.

Third, this is getting way too complicated.  Even the CTO team thinks that the work of the respective standards bodies is making both SDN and NFV more complex and likely more expensive to implement.  Some operations people noted that they were being told to “forget five-nines” with traditional networks at the same time as the standards people were trying to ensure that every aspect of reliability/availability was addressed through service definitions, redundant VNFs, failovers, and so forth.  They wonder how such a setup would ever even meet current costs, and those costs have to be lowered.

Vendors are finding themselves on the horns of a dilemma.  On the one hand, the current trials are too narrow to be likely to advance the cause of SDN or NFV.  One operator said that both technologies were at risk of becoming “the next ATM.”  On the other hand, an attempt by vendors to broaden the trials not only works against their primary CTO-group contacts’ interests, it introduces potential delay.  So do you rush forward to support an engagement that doesn’t have convincing backing or funding, or rush to the funding and potentially lose the engagement?

I think it’s truth time here.  Fewer than 15% of lab trials have historically resulted in deployment, so simple statistics say that betting the current processes will succeed through inertia alone is risky.  Not only that, the current “benefit-and-justification gap” screams for a vendor who is willing to face reality, and once somebody prominent charts a course to real validation of the SDN/NFV business case, they’ll leap into a lead that may be hard for competitors to overcome.