Nokia and the Lesson of Progression

Yesterday, Nokia and Alcatel-Lucent combined, and as a company name at least the latter is no more.  The question now is whether the merger will accomplish anything.  Many, myself included, have wondered if the Alcatel-Lucent combination wasn’t one of the classic cases where the whole ended up being a lot less than the sum of the parts.  Can Nokia avoid that, and if so what will they have to do?

The first and most obvious point is that Nokia has to avoid the “It’s not going to happen to me” syndrome.  The fact is that the forces that made the Alcatel-Lucent marriage difficult are forces that Nokia will almost surely face too.  Complacency will be fatal.

The big risk to Nokia in this area has already arrived.  Go to the website and read their mission statement; it’s the sort of thing any company hack could have produced.  Do they think they have no need to communicate a real vision?  If that is the case they probably have about six months to get smart, after which the fate of this deal will be sealed.

Second, Nokia needs to understand that even the combined company cannot win in a commoditizing market.  Huawei is the company that wins in those conditions, period.  Since mergers are usually driven in large part by the notion that the greater efficiencies they create are an adequate answer to a commoditizing market, it’s likely Nokia is falling prey to exactly this error.  Is the purpose of the deal just to live a little longer?  If so, then do whatever you like.  If not, then you need a plan to address commoditization.

Which brings me to the third point. The senior partner has little in the way of game-changing assets to play here.  If Alcatel-Lucent had ended up on top, I’d shake my head at their decades-long struggle for unity but I’d at least believe that the new company had the pieces needed to win and an understanding of the process.  Nokia is the senior partner.  Can their management now tell all their product units that their role is to fade gracefully as technology generated in their new partner gradually takes over?  That’s a tall order.

Nuage is a great SDN story that Alcatel-Lucent has hidden under a bushel from the first.  Will Nokia now bring it into the light?  Alcatel-Lucent’s NFV strategy is one of six that could make the business case.  Huawei has another, and you know Huawei is going to push theirs forward.  SDN and NFV are two technologies that could address that commoditizing market, but can Nokia drive either, or does it even want to?

There’s not a lot of time to find that out, which introduces point four.  If an elephant takes a long time to turn, two tied together will take even longer.  Alcatel-Lucent has been frustratingly slow to respond to market conditions.  Nokia has given them a run for their money in the un-agility race.  How will the two companies do together?  Management decisions of great import will have to be made right now and those decisions implemented just as fast.

It is very obvious that SDN and NFV are trying to mature as a market.  Yes, vendors either don’t like the idea of the two at all, or at the least are happy to let the old earth take a couple of whirls before anything happens.  Yes, operators are still groping for a path forward.  Despite all this, we are seeing small steps toward a respectable position on both SDN and NFV, and we surely will achieve Enlightenment in 2016.  Will Nokia be able to lead as the market picks up?

Or inspire, bringing us to point five.  Alcatel-Lucent and Nokia fought it out for the title of Least Inspiring Marketing and Positioning just as they’ve fought for the Turtle Award for Agility.  Rival Huawei isn’t holding records for inspiring marketing either, but they are the price leaders in a commoditizing market and they’re leading in the technologies that could arrest commoditization.  Who needs to sing and dance?  Nokia does, because they have to make a point to buyers that they know what’s happening and how to make it happen better.  Neither Alcatel-Lucent nor Nokia separately could do this.  Can the new company, with everyone watching their backs and jobs, do better?

Which brings me to the last point, the tension between bureaucracy and leadership.  If a camel is a horse designed by committee, both Nokia and Alcatel-Lucent have distinguished themselves by designing those committees.  Market leadership, as Apple has consistently shown, is not a consensus process.  Somebody has to emerge as the inspirational leader, not launch another study, yet the only technically strong and politically unified group in the new organization is Bell Labs.  Will they inspire, or study as usual?

Alcatel-Lucent and Nokia are not truly symbiotic because they don’t really complement each other by filling in critical voids in the other’s position.  Despite this, they are now one, and two cohabiting organisms really have only three possible relationships—symbiotic, parasitic, and indifferent.  Nokia needs to reflect on this, and take a path that accentuates the relationship outcome they want even knowing it’s not the natural course.  The other two relationship options would make this merger just another step on the path to marginalization.

There were once three independent companies here, then two, and now only one.  You don’t have to be a mathematics genius to see where that progression is leading.  Few consolidations driven by commoditization are survivable in the long term, and Nokia will have to demonstrate very quickly that this one is different.

ADVA Buys Overture, but for What Reason?

In yet another sign that NFV is evolving, ADVA has purchased Overture Networks, one of the six vendors who I believe have the tools needed to make a business case for NFV deployment.  ADVA joins optical rival Ciena (who purchased Cyan and its Blue Planet platform for SDN/NFV) in the ranks of network equipment vendors who hope to capitalize on an NFV transition from lab to field trials and deployment.  But it’s not clear whether this M&A is really a sign of hope, because it’s not clear that ADVA has any profound NFV strategy.

Overture is a carrier Ethernet company with a long history, but one that never took it to prominence; they remained a small fish in a big pond in their core carrier Ethernet device market.  In 2013, Overture planned to introduce service automation features in a product family called “Ensemble”, and in the spring of 2013 when I was looking for CloudNFV partners, Overture was one of the first firms to step up.  They developed their own NFV solution in 2014, and it was comprehensive both in its ability to orchestrate legacy equipment and to integrate management/operations tools.  As a result, they were one of the six firms I said could make an NFV business case.

Overture had great NFV capability but they still seemed a bit mired in that old CE space.  They didn’t jump aggressively into positioning their NFV assets, and perhaps as a result of this they didn’t generate an exit strategy for themselves during the hope-and-hype period of NFV.  Now, with NFV facing the real challenges of deployment, operators are doing what they traditionally do and looking for bigger partners.  Overture may simply have shot behind the duck in positioning, even with a drop-dead great feature set.

So now, the next act.  But what is that?  If you look at Overture’s acquisition (for a mere, though rumored, $35 million) in the light of ADVA buying the ability to make the NFV business case, it’s a bargain for sure.  But does ADVA want to make that business case?  Unlike Ciena, who knew something about NFV, ADVA has given no indication of much internal knowledge of the topic.  Thus, it’s far from clear that ADVA expects to fully exploit their new Overture assets, which for a time were the most functionally complete implementation of NFV that anyone had.

Here’s a pullquote from the release page on their website:  “ADVA Optical Networking can now deliver CE-based cloud services in a hybrid or NFV-pure-play environment – something nobody else in the industry can offer.”  That’s an undershoot; the very same undershoot that characterized Overture’s own positioning.  Overture can make an NFV business case—one of six who can.  If you look at NFV as being only carrier-Ethernet-cloud, you’re restricting the application of NFV so severely that being able to make a full business case is a doubtful value.

The press release and company comments so far lead me to think that the Overture acquisition was more likely stimulated by Ciena’s NFV moves than by any specific NFV commitment by ADVA.  Ciena apparently decided about a year ago that they needed to play in the NFV game, and their first effort was to develop a “Ciena-hosts-for-a-slice-of-the-business” approach.  They’d establish a pay-as-you-earn program with partners and offer it in hosted form to operators.  It didn’t work and at the time I said I didn’t think it would—for two reasons.

Reason one is that VNF providers really don’t want somebody like Ciena controlling their deals.  Many VNF providers want to differentiate themselves by offering operators a use-license rather than an unrestricted license at a higher cost.  Ciena disintermediates them.  But reason two is the big problem; operators don’t want VNFs on a use-license or revenue-share basis.  Some would accept that for a time to ease the early cost of NFV services, but very few like it as a longer-term model.

Ciena didn’t give up on NFV, and went on to buy Cyan.  They’ve taken the latter’s Blue Planet and built a Ciena NFV strategy around it, augmenting the product in features/development and even more in marketing/positioning.  And for ADVA?  So let me see; my major competitor is making a name for themselves in NFV and I’m sitting on my hands?  What’s wrong with this picture if you’re ADVA?

Ciena has to see this announcement as a real softball.  Overture couldn’t functionally match the long-term results of Ciena’s Blue Planet development program, but it would certainly be able to get to the same places in time and it might have a better near-term feature set.  If ADVA came out swinging on this, with very aggressive development and positioning, they could be a threat to Ciena’s plans.  But every day that goes by without any profound NFV statements from ADVA is an opportunity for Ciena to ask customers, analysts, and media whether ADVA is as serious about NFV (and even SDN) as Ciena is.

This is the challenge of interpreting this deal in a nutshell.  Ciena brought its Cyan deal out with aggressive positioning and a strong development promise.  ADVA has yet to say anything Overture couldn’t have said before the deal; in fact they’re saying much the same thing.  If ADVA intends to exploit Overture aggressively they could raise the level of full-business-case NFV competition.  If they have no such intentions they may well simply be taking one of the full-business-case players off the table.  Better then that someone with NFV intentions had picked Overture up.

All too much NFV positioning today is based on factors that can’t drive real success for NFV or for the vendors involved.  The firms who can make the business case are still largely bogged down in PoCs and trials that lack (by the trial sponsors’ and even participants’ own admission) any operations/management integration.  Suppose two full-business-case NFV players went at each other on making the full business case?  That would be the hope for this deal, but I can’t point to any comment from ADVA that makes me hopeful.

There’s going to be a shift in NFV in 2016, either positive or negative.  I’d sure like to see a signal that ADVA was going to contribute to the positive possibilities, and I’ll let you know if I do.

Signs of Progress in Some SDN/NFV Announcements

Sometimes really important things reach the news but in pieces, and that may be the case with two items in SDx Central.  Huawei, says one article, is beefing up its open source credentials with new hires.  Fujitsu, according to a second article, has announced a new SDN controller that uses two levels of abstraction.  It’s not clear reading the pieces that these two items are related beyond an open-source reference, but I think they are, and that the relationship is a milestone in the evolution of both SDN and NFV.

Operator interest in open source is real, but there’s a big defensive component to their thinking.  Virtually every operator, and every Tier One I’ve talked with, thinks that network equipment vendors are dragging their feet on the evolution to SDN and NFV to protect their own business models.  Open source lets operators base their evolutionary strategies on software that (in theory) vendors don’t control.  The reason for the qualifier is that vendors dominate open source as much as they dominate standards bodies, for the same reason.  More bodies and money to spend.  But it’s harder to drive collective development in your own selfish direction than it would be to drive your own development that way, so operators are hoping that open source will weaken vendor strangle-holds on progress.

Open source is a priority, but there are a lot of open-source projects out there.  Huawei’s two hires, the topic of the first piece, are involved in OpenDaylight and OpenStack, which are surely open source but are more directly related to NFV when you take them together.  Which, in timing terms, is what Huawei has done.  If you were planning a big NFV push, you might have a specific interest in ODL and OpenStack.

But Huawei already has an NFV strategy.  That’s where the second piece comes in: Fujitsu is interested in a two-level abstraction for ODL, meaning that the northbound interface is abstracted (pretty much as usual) and the southbound interface (to the devices) is abstracted as well.  This second level of abstraction means that you can use a modeling language (YANG) to map devices to the southbound model, and that the ODL controller actually talks to the abstract southbound interface only—the actual device mappings are handled by the model.
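
To make the dual-abstraction idea concrete, here’s a minimal Python sketch (the names and structure are mine, not Fujitsu’s): the controller core talks only to an abstract southbound interface, and per-device adapters, selected by a model the way a YANG mapping would select them, handle each device’s native dialect.

  # Minimal sketch of a two-level southbound abstraction (illustrative only).
  class SouthboundAdapter:
      """Abstract southbound interface; the controller talks only to this."""
      def apply_forwarding_rule(self, rule: dict) -> None:
          raise NotImplementedError

  class OpenFlowAdapter(SouthboundAdapter):
      def apply_forwarding_rule(self, rule: dict) -> None:
          print(f"OpenFlow: FLOW_MOD match={rule['match']} action={rule['action']}")

  class LegacyCliAdapter(SouthboundAdapter):
      def apply_forwarding_rule(self, rule: dict) -> None:
          print(f"CLI: route add {rule['match']} via {rule['action']}")

  # The "model": device name -> adapter, standing in for a YANG-driven mapping.
  DEVICE_MODEL = {
      "sw-edge-1": OpenFlowAdapter(),
      "rtr-core-7": LegacyCliAdapter(),
  }

  def push_rule(device: str, rule: dict) -> None:
      # The controller never sees the device type, only the abstraction.
      DEVICE_MODEL[device].apply_forwarding_rule(rule)

  push_rule("sw-edge-1", {"match": "10.0.0.0/24", "action": "port:3"})
  push_rule("rtr-core-7", {"match": "10.0.1.0/24", "action": "192.168.1.1"})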

Still looking for the tie-in?  Well, Dave Lenrow, the guy Huawei got from HP, happens to be perhaps THE go-to guy for the hot topic of intent modeling, including as a northbound interface for ODL.  Intent modeling seems the basis for the Fujitsu dual-abstraction model too.  So one thing we seem to have here is a common thread of intent modeling, a linkage between the Huawei and Fujitsu stories that’s more complicated (and important) than the open-source connection.

For Huawei, intent modeling could be a critical addition to its NFV strategy.  Every operator (including both AT&T and Verizon in the US) is increasingly aware that you can get to NFV operations and agility benefits only if you can bind both legacy and NFV elements into a common infrastructure pool.  That’s a specific goal in a preso Verizon recently made: start your evolution by creating an operations framework that abstracts the infrastructure away.

For Fujitsu, the dual-level abstraction is critical if they’re to have either SDN or NFV success because they have to be able to work with other vendors’ gear to get their foot in the door.  The dual-abstraction approach lets them address any kind of gear while at the same time keeping the core implementation of the SDN controller clean.  It also facilitates a common management view from the devices upward, because the southbound model can handle the management abstraction too.

So for sure, abstraction is getting critical.  I’d be even more specific, as I’ve said: it’s intent-model-based abstraction that’s getting critical.  As I said above, and as I’ve said in prior blogs, you can’t achieve operations efficiency or service agility if all you can operationalize is that tiny NFV enclave you’ve started with.  The intent-model approach lets you build a top-down model of NFV that harmonizes easily with OSS/BSS, and at the same time ensures you can fit all the bottom-level stuff into the picture, even if you don’t know for sure what that stuff is yet.
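
To illustrate what such a top-down, intent-modeled element might look like, here’s a hypothetical sketch (the field names and the decomposition policy are mine): the model states what is wanted and what is guaranteed, and says nothing about whether legacy boxes or VNFs deliver it.

  # Hypothetical intent model for one service element (illustrative names).
  # The intent says what the element must do and guarantee; how it is
  # realized (legacy device vs. hosted VNF) stays hidden inside the black box.
  vpn_intent = {
      "type": "L3-VPN",
      "endpoints": ["site-A", "site-B", "site-C"],
      "sla": {"availability": "99.99%", "latency_ms": 40},
      "management": {"alarms": "per-service", "billing": "usage-based"},
  }

  def realize(intent: dict) -> str:
      # A stand-in for whatever decomposition policy the operator applies;
      # the same intent can map to legacy or to NFV infrastructure.
      if len(intent["endpoints"]) > 2:
          return "decompose to an NFV-hosted vRouter mesh"
      return "map to an existing MPLS VPN (legacy)"

  print(realize(vpn_intent))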

Intent modeling is one thing that unites SDN and NFV because it’s valuable for both and because it’s probably essential in providing a means to adopt SDN to support an NFV evolution.  The NFV ISG report makes the recommendation that intent modeling be looked at more broadly within NFV in part because of its obvious value in SDN/NFV integration.

As important as intent modeling is to both SDN and NFV, there may be a still-higher-level issue on the table for both Fujitsu and Huawei, which is virtual networking.  We are only now discovering something that should have been clear at least two years ago, which is that simplistic forwarding graphs aren’t ever going to solve NFV networking challenges.  The IETF, as I pointed out in a prior blog, has issued a nice paper on that topic, but the real proof of the pudding is the fact that both Amazon and Google have sophisticated virtual networking tools as part of their cloud offerings, and NFV is really functions in a cloud.

Virtualization in any form doesn’t work without network virtualization because you can’t have elastic, agile resource assignments if every time you add or move something you lose touch with it.  Amazon led the industry in recognizing this and adopted “elastic IP addresses,” which allowed software to map an “external” address representing access to an application/component to a private IP address that could be changed as needed when something was scaled or redeployed.  Google carried this further with Andromeda, which recognized that cloud applications really live in multiple “worlds” at once, and that in many cases the same element has to have different addresses and access rules in each.
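
Here’s a minimal sketch of the elastic-address idea (my simplification, not Amazon’s actual implementation): the externally visible address stays stable while the private mapping behind it changes under scaling or redeployment.

  # Simplified sketch of elastic-address mapping (not Amazon's real design).
  elastic_map = {"app.example.com": "10.0.5.17"}

  def redeploy(external: str, new_private_ip: str) -> None:
      # Scaling or moving a component just updates the mapping; clients
      # using the external address never lose touch with it.
      elastic_map[external] = new_private_ip

  def resolve(external: str) -> str:
      return elastic_map[external]

  print(resolve("app.example.com"))   # 10.0.5.17
  redeploy("app.example.com", "10.0.9.3")
  print(resolve("app.example.com"))   # 10.0.9.3 -- same external identity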

Neither SDN nor NFV has adopted this notion, and in fact even OpenStack lags commercial giants like Google and Amazon in supporting network virtualization for the cloud.  I think that Huawei’s hiring of David Lenrow and their collateral hire of Chris Donley from CableLabs could well be aimed at taking a leading role in network virtualization.  It’s also possible that Fujitsu is heading that way as well, because abstraction at multiple points in SDN is helpful in deploying and managing those multiple virtual-network worlds.

Why is all this happening now?  That’s the big question and I think the answer is obvious.  Operators have finally recognized that the PoC framework for NFV is proving little or nothing useful because it has holes in the critical places where benefits have to be proven.  If you look at VNF on-boarding, just the first step in an NFV service, you see that there are major issues in preparing software as a VNF because even simple points like how you address management elements or how you address dynamically moved/scaled VNFs aren’t standardized.  Intent models and virtual networking could fix all of that.

If Huawei could address intent modeling and virtual networking fully, it would have an SDN implementation so good that it would be a real leader there, and that would carry over into connectivity and addressing in NFV.  If Fujitsu could abstract any set of equipment under a common intent-model umbrella it could play a role in evolving any operator’s infrastructure toward both SDN and NFV.  That’s great for these two vendors, but it’s a bit of an indictment for the others because, as I said, none of this should have come as a surprise.  Maybe what that proves is that the operators are right—vendors are dragging their feet to protect their business model.  The only problem with that theory is that some of the vendors, like HP (from whom Huawei stole Lenrow), don’t have any network sales to speak of that would be at risk in an SDN/NFV transition.  So maybe the issues with other vendors are just a matter of tunnel vision.  If so, then these two announcements should widen everyone’s perspective in a hurry.

Putting “Services”, “Networks”, “Agility”, and “Transformation” into Perspective

The big question for network operators is less how they might use SDN or NFV than how they might build their next-gen infrastructure to support an evolved business model.  That question has been asked for a full decade now, and tellingly it was first called “transformation”.  The term reflects a top-down vision that’s absent in most operator plans today.  Interestingly, we’re starting to see some of the results of top-down plans, and we need to understand not only what’s happening but why it’s important.  The challenge is that top-down is inherently less focused (operators each have their own businesses), and also that there’s a question of where you want the “top” to be.

SDN and NFV are technologies, not services.  They are part of a family of technologies (including “IP convergence”) that emerged as technical options and matured through a long process into network deployment.  Historically, the operators have started this long process with lab tests run by the Office of the CTO (or a similar title), and only about 15% of the stuff that gets a start ever goes on to be deployed in volume.  SDN and NFV are in this lab-trial-by-CTO process now.

About five years ago, we saw a second pathway to change emerging.  Some operators, including both US giants AT&T and Verizon, kicked off executive-committee-level projects to rethink how they built networks.  To many, especially in the vendor community, these projects seemed rudderless, but what was different about them was that they started at the top, identified business goals, and then tried to work downward to realize them.

You could argue we have a meet-in-the-middle problem now, because the bottom-up CTO stuff has yet to build to a convincing business case, and until very recently the top-down stuff didn’t show any signs of being mapped to something you could buy and install.  In 2015, though, we saw some changes in this picture.  A few operators advanced, not by bridging the gap between two sets of projects, but by driving the top-down approach to the bottom where it could be deployed.

AT&T and Verizon have both talked publicly about their initiatives, though not always in the same top-down way that they were planned.  I’ve seen presentations from operators in Europe and Asia that follow the same general model:

  • They start with the fact that service competition is really happening not between operators but between all operators and the OTT players.
  • They point out that OTTs have the advantage of a very agile fast-fail approach to services, largely because their position in the network stack lets them be software-driven rather than dependent on long-life-cycle devices.
  • They postulate an evolution to a software-based model, often calling their goal exactly that or sometimes calling it “virtualization”.
  • They frame an approach that is very operations-centric both in focus and in how it evolves.

One thing that emerged from these top-down players in 2015 was a universal recognition that you needed to base future services on abstractions, and that these abstract service components had to first be mapped to legacy infrastructure.  That need for legacy mapping is implicit in some operator presentations and very explicit in others, but it is part of every top-down pitch I saw last year.  One result is that these operators tend to see “transformation” as being operations-driven rather than technology-driven.

Oracle played this trend quite well in 2015, making itself one of the six who could make a business case for NFV by casting its NFV strategy in an operations mold.  Amdocs has just started an even more directed activity, blogging about the value of “intent models” in abstracting elements of services.  Huawei, who has a credible NFV story and also an OSS/BSS story, hired away HP’s intent-model guru.  These latter two developments happened just this month, which I think shows there’s momentum developing.  If we do see OSS/BSS providers adopting intent-model abstraction, it could shift the momentum to them, even for pure NFV applications.

The challenge for operators in balancing “network” and “service” is that somebody has to do the network, and if you’re elected then you have to make the network work efficiently before you worry too much about competing above it.  If you carry a loss into the higher layers from the network, when the OTTs dance along on top wild and free, you’re in trouble.  A collateral challenge is that the network does provide some services, albeit the traditional ones with declining margins, and some of the service layer requirements would impact network-level service requirements.  That’s particularly true when you consider “agility.”  Operators have tended to get bogged down in network agility, thinking about composing connections and network features, but not the stuff above, where the OTTs are making the money.

If operators share a weakness of focusing too much on network agility and not service agility, they share another in focusing on IoT as a gravy train to ride to 5G revenues as every device on the globe becomes a cellular customer.  They have partnerships, including developer partnerships, and they call out analytics as a key element, but they don’t establish an architecture defining how IoT could deploy, how it could harness current sensors and controllers, and so on.  This, from players who emphasize protection of their own assets in a network-level migration!

Despite all of this, operators are really committed to transformation, and in fact many are transforming.  From AT&T, for example, we have insightful applications of SDN (and, I hear, eventually NFV) in Carrier Ethernet services and infrastructure.  It’s clear that the pathway to agile networks and services is to create virtual networks at the physical level via SDN, partitioning current transport assets.  You then pair them with hosted (initially, at least, on the premises) features to create business-level services.  We also have their “Smart Cities” initiative, which lacks both transitional and architectural detail.  AT&T has done a good job of pushing the limits of SDN and NFV, but it still needs to be pushing services at a higher layer.

AT&T’s initiatives seem to meld nicely with the MEF’s “third network” approach.  Most operators see broad collaboration among the businesses they serve as a fertile opportunity, and Ethernet services could provide the connection reliably, provided that the boundaries of the VLAN could be set more dynamically and that there was a higher-level mechanism for controlling access to applications and information.  As I said last week, contextual and collaborative processes seem the way of the future, and one of the network implications could be this “third network” as a connector between contextual process hosting points and IoT information repositories.

Operators’ tendency to focus on “network agility” has aroused media scorn; they say that operators don’t understand how OTTs can set up a service in a minute while operators take a long time.  Well, it’s easy to add hosting onto a connection service somebody else is responsible for.  Adding functions and processes to the connection services themselves creates major issues in security, privacy, and stability, and challenges in how to mediate development so it becomes symbiotic and not incompatible.  Someone who commented on a past blog of mine pointed out an IETF draft from the NFV Research Group that talks about the need to work out connection models for virtual functions, identifies some of the limitations of current models, and offers a suggestion for the future.  I agree, of course; I’ve argued that we have a simplistic vision of “service graphs” that draws them but never manages to define the connection process suitably.  That’s created what’s likely the largest integration problem for VNFs today.

And it could get even more complicated as we move into “service” agility in a true sense.  IoT and other high-layer services are going to look web-like, as microservices, and they’ll need even more architectural order or developers will never be able to use them.  Some operators make microservice evolution a part of their goal set, but they don’t define how they’ll get there.  AT&T’s IoT framework does include a programming language for analytics access, but there’s still a need to paint the total picture, including how current sensor data and even non-IoT data can be introduced.

Operators know from long experience that you need a very well-defined architecture to build services.  I believe that intent modeling and microservices will define that architecture, but it’s not defined yet and it’s not clear to me what processes will define it.  In the meantime, operators are continuing with their top-down initiatives, creating network agility and hoping that the rest of the puzzle will come together.

The network-agility focus has rejuvenated some network vendors, who can claim a role in NFV because they can support SDN agility and connectivity and can host functions in an edge device.  That, combined with the fact that a network-agility focus necessarily involves network equipment, may be promoting a stronger position for the network vendors at the expense of the IT vendors.  But if Oracle and Amdocs work their magic on the operations/abstraction side, even the network vendors would be cut off from the business case, which of course is likely the goal for these two and also for Ericsson.

An OSS/BSS-centric approach creates the very real risk that abstracting infrastructure would actually allow operators to gain efficiency and agility without changing much at the network equipment level.  That’s a point I raised in a blog last week, and network vendors might well think this was a good outcome.  The problem is that once you wrap functionality in an intent model, it becomes difficult or impossible to differentiate implementations.  It’s dark in that black box!  That’s why intent modeling is so important for NFV integration; there’s no other way to do it in my view.  Vendors may therefore have to balance forces.

Operators are doing that already.  We will have transformation projects in 2016 that may include some SDN or NFV, but will be focused higher up.  What I’m afraid of is that they will still not focus high enough, that they won’t see future services (including and especially IoT) as a kind of PaaS community that the operators themselves will have to define—using of course intent-model principles and microservices.  If we miss that service layer it won’t stall transformation, but it will focus it on network efficiency and agility and perhaps foreclose operators from ever getting a real seat at the new-revenues table.

Contextual Processing and the Future of Network Services

It seems to me that if you read the tea leaves of current carrier plans you see that the potential of new things like SDN and NFV is being inhibited by “old-think”.  If we try to build a new model of a Carrier Ethernet network, we’re limited in the benefits we can bring relative to the original model.  But businesses network sites, not people, and we have established business communications, application usage, and collaboration practices based on the old network model.  If benefits come from above, then these three high-level things are going to have to change for us to get the most out of a new network technology down below.

The big mistake we make in thinking about the future is looking at it through a lens of limitations.  I can still read Hollerith card codes, and I remember “batch processing”.  That’s fine as long as I don’t cast the limits of those days into consideration of today and tomorrow.  But the past does provide us with an opportunity to take a fresh view of things too.  We didn’t always have IT, business or personal networking.  How did we evolve from nothing to something?  Part of it involved making IT into a real-time work partner or life partner.  But how exactly does that happen?

The answer in my view lies in a fundamental truth about people, perhaps a truth profound enough to be one of the criteria for humanity.  We are contextual beings.  What makes human thought a powerful tool is that we interpret things as a whole, in context.  We see conditions, we see stimulation, we see possible future outcomes and risks, and we decide.

In the early days of IT the worker and the computer were not partners; in many cases the worker never interacted with computers at all, even where some IT applications were in place.  People used computers to record what had been done, and so in a sense we had big data applications before we had “little data” because the goal of early applications tended to be more analytic than productivity support.  To make productivity support better, we brought the IT processes closer—meaning we made the IT processes contextual partners.

IoT is forcing us to look at this contextualness thing more closely, because what IoT would really do is allow applications to absorb more context without troubling humans to provide it.  If I can use IoT to know when I’m looking at the right panel in a factory, I don’t need human processes to do that and those human processes are then relieved of a task that inhibits doing the work.

I’ve blogged about IoT and context from a network perspective and from a cloud perspective.  There’s another question beyond that, which is whether we may have reached a point where what we need is contextual IT rather than trying to fit IT into a context.  If humans like context, if human work and entertainment is increasingly contextual, why not deliver context-as-a-service?

Suppose you’re strolling down the avenue taking some air and getting some exercise.  How are you different from an alter ego that’s looking to buy a gift or trying to find the location of a meeting you’re late to?  Same you, same avenue, same stroll, but different context.  This doesn’t seem important when you think about your stroll in traditional terms because you know what you’re trying to do, but you can see that personal or business services directed at you would have to be different for the three missions, or they’d be irrelevant at best and intrusive at worst.

If we had contextual processes inside our network, we could accept a general mission from you and then build an appropriate context around what you are doing and what the partner processes are doing for you.  If you’re strolling, the process could point out interesting byways or diversions, points of interest.  If you’re shopping the processes could direct you on a route that passes the largest number of suitable retailers, or the lowest prices.  If you’re looking for a business meeting, of course, you want to get guidance on the fastest path, including when to cross over to maximize the timing of traffic lights.  Network services, IoT or beyond, have to do what’s useful.

There are a lot of challenges to contextual partnership, though.  To start off, you have to decide whether you are adopting a “pull” or “push” model, meaning whether the user being supported asks for information from the service (directly, or inferentially because of the mission), or whether the service pushes information at the user.  Obviously, the more the user has to do the less incrementally useful the service is.  Why not just give them a search engine?  So we either have to anticipate user needs based on something, or push events and then policy-filter them to ensure we’re not intrusive.
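
As a sketch of the push alternative (everything here, missions and event kinds alike, is invented for illustration), events flow toward the user but a mission-derived policy filter decides what is relevant enough to deliver:

  # Hypothetical push-with-policy-filter context service.
  MISSION_FILTERS = {
      "strolling": lambda e: e["kind"] == "point_of_interest",
      "shopping":  lambda e: e["kind"] == "retail_offer",
      "meeting":   lambda e: e["kind"] in ("route_update", "traffic_light"),
  }

  def deliver(mission: str, events: list) -> list:
      # Push everything at the filter; only mission-relevant events pass.
      keep = MISSION_FILTERS[mission]
      return [e for e in events if keep(e)]

  events = [
      {"kind": "retail_offer", "text": "Sale at the corner shop"},
      {"kind": "route_update", "text": "Cross now to make the light"},
      {"kind": "point_of_interest", "text": "Historic facade ahead"},
  ]

  print(deliver("meeting", events))   # only the route update gets through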

Calling, viewing videos, shopping, and many other things can now be cast as context-management activities.  Personal communications already divides logically into what could be called “chats” and what could be called “collaboration”.  A chat is an event, but a collaboration is a sharing of context.

All of this argues strongly for an agent in the cloud, a context partnership that’s valuable in part because it’s explicit.  The user, out for a stroll/buy/meeting, tells the phone/partner what is happening.  The partner process then proceeds to stimulate and filter as appropriate, via the phone.  The intermediary partner unloads the enormous data load associated with maintaining context and also provides security and privacy.

The obvious consequence of this approach is a significant increase in cloud-hosted applications as elements of a service.  This isn’t as much an NFV process as it is a multi-tenant server process, though it may be useful to employ NFV to scale some components and even to instantiate those partner/agent processes as needed.  There’s also a need for a lot more intra-cloud bandwidth to ensure you don’t tell someone to cross to make a light thirty seconds after it changed.  There are more impacts, but to consider them we first have to look more at those contextual-partner processes.

I mentioned in a prior blog that the logical consequence of software-agent-based web activity would be an explosion in microservices, each designed to provide (in our example here) an element of contextual information, contextual filtering of events, etc.  The satisfaction of contextual requirements seems to demand something that almost looks like dynamic service composition.  NFV postulates the creation of services by the assembly of functions, but the associations are presumed persistent.  With contextual processing it would appear these associations would have to be ad hoc, almost momentary.
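
A sketch of what that ad hoc composition might look like (the microservice names are invented): the context goal selects the set of microservices, and the association lasts only as long as the request does.

  # Invented sketch of ad hoc microservice composition driven by a context
  # goal; the assembly is momentary, not a persistent service graph.
  CATALOG = {
      "geo_position":  lambda ctx: f"position fix for {ctx['user']}",
      "poi_lookup":    lambda ctx: "3 points of interest nearby",
      "retail_offers": lambda ctx: "2 offers on this block",
      "route_timing":  lambda ctx: "fastest path, light changes in 20s",
  }

  GOAL_TO_SERVICES = {
      "stroll":  ["geo_position", "poi_lookup"],
      "shop":    ["geo_position", "retail_offers"],
      "meeting": ["geo_position", "route_timing"],
  }

  def compose(goal: str, ctx: dict) -> dict:
      # Assemble only for this request; nothing persists afterward.
      return {name: CATALOG[name](ctx) for name in GOAL_TO_SERVICES[goal]}

  print(compose("shop", {"user": "alice"}))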

This raises an interesting question, which is whether agile service management principles for NFV could—if properly extended—become agile service logic principles.  Is contextual processing a kind of self-authoring application where the high-level model is set by the “context goal” and the rest is optimized based on conditions?  If so it’s a mixture of analytics, complex event processing, flow machines, and more.

I think it’s inescapable that context-partner applications and the microservices that support them will rise sharply.  Not only are they driven by mobility, they’re also driven by IoT and by corporate interest in harnessing the next wave of tech-driven productivity improvement (which they’ve awaited for 15 years now!).  Those changes will, of course, drive changes in networking—not so much in spending on infrastructure as in where the money goes.  Access is still essential, but IP backbone could be reduced to accommodate greater emphasis on metro intra-cloud applications for context processing.  It could generate a more SDN-ish mission, which would tend to accelerate operator plans for SDN—plans I’ll be blogging about next week.

How Many “NFV Benefits” are Really Specific to NFV?

One thing that is probably clear to everyone who reads about SDN or NFV these days is that there is no real consensus on what either actually does, or should do.  There’s a lot of confusion out there, inhibiting a strong consensus on what SDN or NFV can do, separately or together, and what other things might be done around them, especially in the area of service management.  I want to look at service management today, and also look at a first hint of how operators might step beyond NFV to make a business case for change.

The service management problem has been around for almost 15 years, believe it or not.  Any “virtual network” poses a management challenge because the functional view of the network/service is different from the resource view.  In VPNs or VLANs, users get what the acronym suggests—a private network.  It’s not; it’s produced by segmenting a multi-tenant network.  So when users exercise “management” you have to be able to recognize the difference in views and ensure that users see the state of their own service and don’t interfere with the services of others.
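
A toy illustration of that view dichotomy (my construction, not any OSS product): a single resource-level fault touches many tenants, and the management layer must translate it into per-tenant service views that never expose the shared infrastructure.

  # Toy functional/resource view split: one shared trunk, many tenants.
  SHARED_TRUNK_TENANTS = {"trunk-22": ["vpn-acme", "vpn-globex"]}

  def service_views(resource_fault: str) -> dict:
      # Translate one resource fault into per-tenant service alarms; each
      # tenant sees only the state of their own "private" network.
      affected = SHARED_TRUNK_TENANTS.get(resource_fault, [])
      return {svc: "service degraded" for svc in affected}

  print(service_views("trunk-22"))
  # {'vpn-acme': 'service degraded', 'vpn-globex': 'service degraded'}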

The more virtual you get, the more issues you create with this view dichotomy.  SDN introduces a wider range of “virtual devices” because it permits very detailed forwarding control, enough to define even different L2/L3 services.  NFV creates virtual devices by hosting software on various server/device types.  In both cases, we have that same functional/resource separation.

This service management challenge has grown up at a time when established principles of service management are being questioned.  It’s all about SLAs, right?  Well, the Internet is by far the largest data service globally, and it’s best-efforts.  VPNs over the Internet have been growing as users discover the lower cost of the service makes up for the fact that there’s no solid SLA.  Applications can be made more resilient to QoS variations, and best-efforts can be good enough for all practical purposes.  So do you need to manage services at all, or might you simply plan a network for the traffic you think you’ll carry and then fix problems at the resource level, or through capacity planning?

This is how we manage the Internet, and in truth many IP and Ethernet services as well.  You do some capacity planning, you exercise traffic management based on aggregate user flows, and you have what could be called a “statistical SLA”, one where there’s a goodly number of “9’s” (but not all five!) over a long-ish period like a month, but no real near-term guarantees.  Remember T1 lines and “error-free seconds” or “severely errored seconds?”  Forget that level of granularity these days.  We’ve already accepted lower levels of availability and QoS, and lower prices would likely induce even further trade-offs here.
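
For scale, here’s the simple arithmetic behind those “9’s” (a computation of mine, not operator data):

  # Allowed downtime per 30-day month for a given number of "9's".
  MONTH_MINUTES = 30 * 24 * 60  # 43,200 minutes

  for nines in (3, 4, 5):
      availability = 1 - 10 ** -nines
      downtime = MONTH_MINUTES * (1 - availability)
      print(f"{availability:.3%} -> {downtime:.1f} min/month of downtime")
  # 99.900% -> 43.2 min; 99.990% -> 4.3 min; 99.999% -> 0.4 min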

If we were to view services in the future as being totally best-efforts, if we believed that they never required us to associate customer-experience with resource-state, we could solve SDN and NFV’s management challenges easily.  And there may be those who believe that, and they may be right.  I don’t dispute the fact that this view could be rational, only the notion that we can accept the independence of service and resource management without accepting the baggage.

With SDN and NFV, but primarily with the latter, we add yet another factor to the mix.  The business case for NFV is based on a mixture of three benefits—capex reduction, opex reduction, and “service agility” meaning improved service-to-market adaptation in features and timing.  NFV was not targeted at replacing everything with virtual functions hosted on something, though.  The targets have really been “higher-layer” features like security.  Virtual CPE is such an obsession within the NFV community that they tend to frame all their examples in those terms.  Yet you can’t eliminate devices to terminate a service, only simplify them, and you can’t address enough cost through capex replacement alone under those conditions.  That means NFV’s business case relies on operations-related factors.  That’s also true for SDN.

Neither SDN nor NFV considered service management or operations to be in-scope.  As a result, neither has defined a “new” operations or service management relationship.  That leaves us with the old one, which was that OSS/BSS systems talked to devices or device management systems.  If that’s the case, then SDN and NFV should only generate “virtual devices”, and it is on this point that all the various service management forces collide.

If we consider SDN and NFV to be builders of virtual devices, then we’re saying that they are the technologies that make the function-to-resource mapping, which means that whatever we know about the relationship between functional or service management and resource management has to come from SDN or NFV.  Where, you recall, operations management in all forms is out of scope.

Virtual-device management is easiest where the functional/resource relationship is simple, but the problem is that “basic business vCPE” is a small-potatoes application for anyone but business MSPs.  If operators want to make larger changes in costs or revenues and can’t broaden the vCPE base, they might find it useful to mingle vCPE with SDN.  vCPE is a successful early application of NFV, and it can be linked to an SDN overlay (as AT&T does with its on-demand switched Ethernet) to build a service based on virtualization.  With vCPE we have tenant-specific hosting at the edge, which makes the management connection easy.

The AT&T initiative is my topic for the “think outside the NFV box” award.  The question, which AT&T and others are working to answer, is whether you can gain satisfactory operations efficiency and service agility using a mix of SDN, NFV, and legacy infrastructure changes, and do so on a broad enough scale to impact costs.  This could be an example of building an NFV justification by stepping out of NFV, by assigning the potential benefits to something else.

So AT&T’s Ethernet service, using SDN to partition low-level network services, could be a giant step toward a broader simplification.  Note that AT&T has been clear that the “interior” of this service doesn’t involve virtualizing functions at all.  It’s clear, though, that if you were to add VNF-hosted edge routing to it, you’d transition to a VPN service.  It’s a step on what might be a road to radical change.  You could also segment IP or even pure optics with SDN, creating virtual wires that then combine with edge-hosted (or even selectively centralized instances of) routing and switching to build services, providing you circle back to the management model.  Service management changes in operations, coupled with a management model for the new configuration, would realize all the benefits that every technology that proposes to change infrastructure must realize to make a business case.

How would this model, combining OSS/BSS changes and edge-hosted alternatives to traditional L2/L3 infrastructure, impact SDN?  Highly positively; there’d be a lot more of it.  NFV?  This approach doesn’t yet address applications beyond business virtual private network/LAN services.  It doesn’t yet harmonize usefully with mobile infrastructure.  There are a lot of “yets” here, a lot of potential to shift the focus of operators from simply “deploying NFV” to making much broader network change that NFV would be only a piece of.  It is possible that something like AT&T’s service plans could pull a lot of business drivers out of NFV, limiting it to vCPE, and add them into SDN and operations systems.

It is really too early to say that something like AT&T’s Ethernet service evolution is a signal that operators are expecting less from NFV.  The problem is that “real” NFV has to build a highly efficient resource pool and a highly efficient pool of operations processes, both of which demand convergence in approach even though early service trials are all being done per-service.  Would we have committed more to NFV had we resolved all of the business case issues last year?  I think so.  Will we commit less to NFV if we don’t solve them in 2016?  I think the proof of that is already happening, at AT&T and elsewhere.

On the vendor side, the kind of shift from pure NFV to opportunistic marriages of NFV and collateral virtualization of the service layer of carrier networks using SDN would certainly generate less NFV hosting.  My model says that an optimally efficient NFV deployment would create about a hundred thousand incremental data centers worldwide.  The SDN-and-vCPE model would create only about 14% of that.  That says that IT vendors with NFV aspirations will need to try to frame something more impactful with NFV, or risk a major loss of opportunity.

Inside the ETSI NFV ISG Report on SDN/NFV

Standards documents are definitely not entertaining reads, even important ones.  The ETSI ISG published its “Report on SDN Usage in NFV Architectural Framework” prior to the last ISG meeting, and I’ve been reading through it.  There are a lot of interesting and useful things in it, and some things that I think are problematic, but it’s hard to dig them out.

Since the topic of SDN/NFV symbiosis is critical, I want to try to restructure the view to make it easier to absorb, and comment on some of the recommendations.  “Recommendations” is important, because this document was really aimed at supporting and evolving the NFV-to-SDN relationship, not coming to all the necessary conclusions.

We all know that the recognized vision of SDN is the conversion of abstract application-level connection requirements to commitments of network resources to fulfill them.  You might, in high-level SDN-speak, say “Give me these nine points on a LAN or subnet” and expect the necessary OpenFlow commands would be generated (by an SDN controller) to bring that about.

One of the challenges that this simple vision of SDN brings to the SDN/NFV symbiosis discussion is that this capability can be mapped into NFV in a variety of ways, which I think are best viewed as being “dimensions” of a relationship:

  1. The “who’s the user?” dimension. Is NFV a user of SDN service, is the target user a customer or another application, or are both possible?
  2. The “who deploys?” dimension. Is NFV using SDN facilities already in place, or is NFV expected to deploy/manage SDN elements?
  3. The “who federates?” dimension. Are multiple SDN domains connected within SDN and so provide multi-domain services, or is NFV responsible for federation across SDN domains?

I’m going to try to organize these dimensions into something coherent that can still be related to the ETSI material.  If you want to follow that relating process yourself, please refer to sections 4.3 and 4.4 of the ETSI document.

SDN could broadly be seen as an NFV resource, an NFV service, or both.  If SDN is a resource then it’s represented in some way by a Virtual Infrastructure Manager, and that relationship could involve “deep” or “shallow” placement of the SDN controller.  In deep placement, SDN is simply an in-place connection capability like that created by switches or routers, which is why the ISG says that this is where SDN acts like a physical network function.  In deep placement, NFV wouldn’t “see” SDN directly at all.  The VIM would presumably ask a lower-level management system for connectivity and that system would interact with SDN to create it.  In shallow placement, the SDN controller is part of the VIM itself, meaning that SDN control is a VIM service extended downward to compatible devices.
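
Here’s one way to picture the deep/shallow distinction in code (a sketch of my own reading of the document, not an ETSI-defined interface):

  # "Deep" vs. "shallow" SDN controller placement, as I read the document.
  def nms_request(endpoints):
      # Stand-in for an existing management system that may or may not
      # use SDN internally; NFV never sees the controller at all.
      return f"NMS built connectivity for {endpoints}"

  class DeepPlacementVIM:
      """SDN is invisible: the VIM asks a lower-level management system."""
      def connect(self, endpoints):
          return nms_request(endpoints)

  class ShallowPlacementVIM:
      """The SDN controller is part of the VIM itself."""
      def __init__(self, controller):
          self.controller = controller
      def connect(self, endpoints):
          return self.controller.create_paths(endpoints)

  class StubController:
      def create_paths(self, endpoints):
          return f"controller built paths for {endpoints}"

  print(DeepPlacementVIM().connect(["vnf-a", "vnf-b"]))
  print(ShallowPlacementVIM(StubController()).connect(["vnf-a", "vnf-b"]))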

If SDN is a service of NFV, then the implication is that NFV is offering not just connectivity that happens to be available using SDN, but offering SDN itself.  To me, that means that NFV is offering an SDN controller and SDN vSwitches as VNFs and that the “SDN service” would look something like a service chain, where the controller VNF has to be connected to the vSwitch VNFs in some way.  NFV would also have to offer management, including horizontal scaling if that were a capability to be included.

The more difficult case is the “all of the above” case.  In theory, NFV could deploy SDN-as-a-service and then utilize the connection features the new “SDN service” offers to build other services.  This is the concept I called “Infrastructure Services” in both CloudNFV and ExperiaSphere.  The challenge here is that SDN is first deployed and then composed as though it had been there all along, which implies that the deployment of an infrastructure service could create a service catalog entry, a VIM, or something else.  How this gets done dynamically in such a way that other services could reference it isn’t covered in the ETSI specs.

Customer targeting and deployment responsibility are obviously related fairly closely.  They may also be related to the functional scope of the SDN controller being used.  For example, where the controller is designed to use multiple southbound protocols to allow an abstract service to be built using a mixture of SDN and legacy devices, that higher-level role argues not only for a “shallow” definition of placement of the controller but also for a broader-scope VIM.  In other words, as the SDN controller gets more functional it can absorb the functions of several VIMs, or the need for a VIM to separate service requests by domain.  In the extreme case you could argue that a supercontroller might be able to fulfill any connection request through a single VIM.

This leads us to the federation dimension.  If a service is to be extended across operator boundaries or other domain boundaries, there are two basic possibilities.  One is that the service model decomposes into elements that are linked to a series of VIMs, some of which are out of domain and referenced logically.  In this case, federation is done by NFV’s MANO or MANO-like elements.  The other possibility is that SDN makes the whole connection across all boundaries look like a single-domain service, in which case it’s SDN’s responsibility to manage the federation.

The determining factor here is likely to be the extent to which federated elements are purely connection elements, meaning whether a cross-domain service requires only connections in other domains or hosting of VNFs in other domains.  If the latter is the case, then it probably doesn’t do much good to be able to offer SDN connectivity services that span domains.  It would be smarter to let NFV manage federation.

These issues aren’t totally clear in the document, but they’re not easy to be clear on and I think they’re covered respectably.  There are some important areas where I think the document fell short.

The most significant of these is in the area of intent modeling.  The document notes that some of the bodies in the SDN space (ONF and ODL) have adopted intent-modeled northbound interfaces.  It also comments that it’s recommending that intent-modeled interfaces be further studied for NFV.  Gosh, people, we’re way beyond the “further study” point here.  The SDN alignment document clearly shows that intent modeling is the way everyone is going, which means that before NFV does anything else in terms of connection service definition, or perhaps does anything else in service modeling at all, it needs to adopt an intent-model framework for all its abstraction/virtualization interfaces.

Every day, the ISG is working on stuff that is at risk of being a barrier to intent-model adoption, or at least of creating unnecessary work in harmonizing pre-intent-model recommendations with intent modeling.  This document, to me, shouts out a warning to the ISG to stop diddling on intent modeling and bring it forward immediately as the basis for the interfaces between MANO and the VIM, between NFV and SDN, between OSS/BSS and MANO, and for service modeling in general.

Second, the recommendation of the document is that there be an interface defined between the SDN controller and MANO.  This implies that NFV somehow has to deal with the controller differently from other applications, and I do not believe that’s necessary.  In fact, I think it’s risky.  First, we don’t have such an interface now, and how much work and time will be expended creating one is unknown.  We’re running out of time to make a business case for both SDN and NFV.  Second, it forces us to address a truly knotty problem, one I’ve described before, which is the difference between service address space and control/management address space.

Security and stability for NFV and SDN alike depend on being able to separate control and management traffic from the user/service data plane.  If that’s not done then the foundation interfaces of SDN and NFV are subject to attack/hacking.  Improper address space management also risks contamination of tenant separation strategies.  This raises questions with respect to SDN’s mission in NFV, or in how NFV handles it.
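
A trivial sketch of the separation rule (my illustration; the document defines no such mechanism): before any control or management connection is made, verify it never touches the service data plane’s address space.

  # Control/management vs. service address-space separation (illustrative).
  import ipaddress

  CONTROL_SPACE = ipaddress.ip_network("172.16.0.0/12")  # control/management
  SERVICE_SPACE = ipaddress.ip_network("10.0.0.0/8")     # user/service data

  def allow_control_connection(src: str, dst: str) -> bool:
      # Control traffic must stay inside the control space; anything that
      # crosses into the service plane is refused.
      return all(ipaddress.ip_address(a) in CONTROL_SPACE for a in (src, dst))

  print(allow_control_connection("172.16.1.5", "172.16.9.9"))  # True
  print(allow_control_connection("172.16.1.5", "10.2.3.4"))    # False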

The existence of a MANO interface to an SDN controller mandates connectivity with MANO, but how can that be harmonized with the fact that the controller is not a real part of NFV?  If we deploy a controller for multi-tenant use (whether NFV deploys it or not) we have to ask whether the MANO connection provides for separation of the tenants, or whether it would be possible that the activities of one service relating to SDN control might impact others.

If we deploy SDN using NFV, and we have to make the connections to the vSwitches or instantiate copies of the controller, we’re in particularly deep water.  In most cases, probably, we’re overlaying SDN forwarding on top of some established (likely L2, Ethernet) connectivity.  Is that connectivity then shared?  Remember, a new SDN controller is an application of VNFs, but it also may support multi-tenancy.

I think most of the questions on SDN/NFV coexistence could be addressed in the context of intent modeling.  I think intent modeling would also go a long way to defining an open framework for NFV, which we clearly don’t have now, and it would even help to build a business case.  The SDN/NFV paper is right to call out the need to look at intent modeling, but it should have been in there from the start.  If we kick off processes to address intent modeling and address spaces now, it’s better than nothing, but it’s going to take time and operators want something to happen with NFV this year.

SDN, NFV, and SD-WAN: Better Together

Since networking is a kind of massive cooperative behavior set, it’s not surprising that the critical networking technology “revolutions” we face have a relationship.  What is surprising is that the relationship isn’t all that clear.  Software-defined networking (SDN), network functions virtualization (NFV), and software-defined wide-area networking (SD-WAN) share some technology elements and they may also be related in a business opportunity sense.  In fact, that relationship might be critical and so we need to look at it a bit more deeply.

SDN, if we accept the "purist" notion of white-box switches or virtual switches controlled by the OpenFlow protocol, is really about using central device control to define routes in a network.  Each device is told how to handle packets based on their headers, and the combination of handling instructions creates the routes that carry traffic.
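
Here's a sketch of that central-control idea.  No real controller or OpenFlow library is involved; the dictionaries just mimic the match/action shape of flow rules, and the "route" is nothing more than the sum of the per-device rules.

```python
# Illustrative sketch of centralized route control: the controller pushes a
# match/action rule to each switch, and the rules together form the route.
# These dictionaries only mimic OpenFlow's match/action shape; no real
# controller or protocol library is used.
flow_rules = {
    "switch-1": {"match": {"dst_ip": "192.168.1.0/24"}, "action": "output:port2"},
    "switch-2": {"match": {"dst_ip": "192.168.1.0/24"}, "action": "output:port5"},
    "switch-3": {"match": {"dst_ip": "192.168.1.0/24"}, "action": "output:port1"},
}

def install_route(rules):
    # In a real deployment these would be OpenFlow FLOW_MOD messages sent
    # from the controller to each device it controls.
    for switch, rule in rules.items():
        print(f"{switch}: if {rule['match']} then {rule['action']}")

install_route(flow_rules)
```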

You can see that SDN is about routes, which means it’s about the connectivity that networks provide.  You can use SDN if you “own” a network, whether it’s a LAN or WAN.  That doesn’t necessarily mean that you have to own fiber and microwave and so forth, because SDN could be applied to networks built from tunnels as well as those built from real physical facilities.  You don’t necessarily need white boxes either; many legacy switches/routers will accept OpenFlow, and you can use virtual switches or routers.  You do need to own switching/routing elements.  The largest SDN application today is inside the cloud, where SDN is used to connect cloud components in a data center.

NFV is about using software features, hosted on something, instead of specialized devices or appliances, to provide some network feature set.  What NFV, as a standard or spec, does is describe how to deploy, connect, and manage these software features.  What features?  Generally, NFV targets higher-layer network features like firewalls, encryption, or VPN encapsulation.  It could also be used to deploy service elements like DNS and DHCP, and even instances of virtual switches and routers.
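
If you reduced that description to a data structure, you might get something like the hypothetical, much-simplified descriptor below, with one section for each of the three things NFV covers.  Every field name is invented for illustration; the real ETSI templates are far more elaborate.

```python
# A hypothetical, much-simplified descriptor for a hosted firewall VNF,
# organized around the three things the NFV specs describe: deployment,
# connection, and management.  All field names are invented.
firewall_vnf = {
    "deploy": {
        "image": "vfirewall-2.1.qcow2",  # the software feature to host
        "vcpus": 2,
        "memory_gb": 4,
    },
    "connect": {
        "ports": ["wan-side", "lan-side"],  # links NFV must provision
    },
    "manage": {
        "health_check": "/vnf/health",   # hypothetical monitoring hook
        "scale_out_threshold": 0.8,      # load level that adds an instance
    },
}
```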

NFV, then, is about service features/elements.  It follows that you can deploy SDN elements using NFV, and if you think about it you can also presume that SDN could be used to connect NFV elements once they were deployed onto something like a server.  In an NFV application, you could make use of SDN almost anywhere you needed a connection.  The question is where NFV comes in.  NFV was designed for network operators who deployed services made up of complex features.  It could be used by enterprises in theory, and we’ll get a bit more into that below.

SD-WAN is a bit of an outrider here and perhaps the hardest of the three to actually understand.  The basic principle of SD-WAN is that it’s possible to build a “virtual network” on top of one of many different physical networks or network services, and as long as that virtual-physical mapping works, the stuff on and inside the virtual network wouldn’t know what was being used.  Thus, you could build a “VPN” with some sites connected via MPLS VPN services and others connected through Internet tunnels, or you could mix both services at some or all sites and use them to back up or augment each other, or for different classes of traffic.

Unlike SDN and NFV, which are about putting stuff in the network in some way, SD-WAN relies on the stuff at the edge of the network.  It's the SD-WAN edge element that creates the virtual network, manages how it connects users/sites, and maps it to whatever physical connections/services are available.  That includes not only virtual-to-physical traffic mapping but also management of the network service options, so that virtual network users don't get involved with the plumbing.
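
Here's a sketch of the edge element's core job, with made-up policy values and link names: pick a physical underlay per traffic class, fail over when a link is down, and never let the virtual network above see which path was used.

```python
# A sketch of the SD-WAN edge element's job.  Link names, latencies, and
# policies are all made up for illustration.
underlays = {
    "mpls":     {"up": True, "latency_ms": 18},
    "internet": {"up": True, "latency_ms": 45},
}

policy = {
    "voice": ["mpls", "internet"],   # prefer MPLS, fail over to Internet
    "bulk":  ["internet", "mpls"],   # cheap path first for bulk traffic
}

def pick_underlay(traffic_class: str) -> str:
    # The virtual network above never sees which physical path was chosen.
    for link in policy.get(traffic_class, ["internet"]):
        if underlays[link]["up"]:
            return link
    raise RuntimeError("no underlay available")

print(pick_underlay("voice"))  # -> mpls
underlays["mpls"]["up"] = False
print(pick_underlay("voice"))  # -> internet (automatic failover)
```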

Interestingly, SD-WAN might exercise more SDN principles for enterprises than SDN does.  The original SDN concept, from Nicira, was an overlay network that didn’t force any changes to the connection infrastructure in place, whether it was Level 2 or 3.  Overlay networks let users manage connectivity independent of transport, segment networks without operations penalties, and accommodate multiple service choices.  SD-WANs codify all of this.

The introductions here present the “basic” affinities among the three technologies, but we’re actually circling through some changes that could create even more affinities.  We could look at these changes through the lens of any of these technologies, but I’m going to use SD-WAN as the framework for reasons that (I hope) will become obvious.

SD-WAN is an edge technology, as I've noted.  Edge technologies can be deployed using NFV via what's now called "service chaining" and "virtual CPE" (vCPE).  The idea is that if you need SD-WAN you could load that feature into an edge device, alongside security and other tools.  This hasn't been talked about much to date, but IMHO it might be the best argument you could make for vCPE, because while most enterprise sites already have basic network features like security tools implemented in CPE, few have yet adopted SD-WAN.
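
Here's a toy version of that service chain, with stand-in functions for the hosted features; the point is just that SD-WAN drops in as one more feature in an ordered chain inside the edge device.

```python
# A toy service chain for vCPE: packets traverse hosted features in order,
# and SD-WAN is simply one more chained feature.  All functions are
# stand-ins for real VNFs.
def firewall(pkt):
    return pkt if pkt.get("allowed", True) else None  # drop disallowed traffic

def encryptor(pkt):
    pkt["encrypted"] = True
    return pkt

def sd_wan_edge(pkt):
    pkt["underlay"] = "mpls"  # virtual-to-physical mapping happens last
    return pkt

service_chain = [firewall, encryptor, sd_wan_edge]

def process(pkt):
    for feature in service_chain:
        pkt = feature(pkt)
        if pkt is None:  # a feature in the chain dropped the packet
            return None
    return pkt

print(process({"dst": "10.1.1.1"}))
```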

Another reason operators should love SD-WAN as a vCPE mission is that SD-WAN builds on VPN services without nailing them to a specific implementation.  You could build VPNs the current way with MPLS, or with Internet overlays, or with SDN and embedded virtual routers—you name it.  This is useful for operators even in-region where they have to contend with multiple access options or where access is evolving.  It’s critical out of region, of course, because you could offer Internet extensions to MPLS (or other provisioned VPN models).

SDN can also be used to provision the forwarding processes within an SD-WAN device, just as it is already used to set up virtual switches.  And since, as I just noted, SDN can serve as a service option inside an SD-WAN, operators could bridge between SDN-based and traditional implementations of business (or, in theory, residential) services.  Some operators might see this bridging as a risk; buyers might adopt SD-WAN to wean themselves off MPLS-based VPNs, and that might be why we don't see SD-WAN taking a leading role in NFV vCPE interests and trials.

In the longer term, I think that SD-WAN may need to be a bit more accommodating itself.  The virtual-overlay-network nature of SD-WAN seems to get swept under the rug in a lot of SD-WAN product positioning.  That may be because vendors fear other approaches, perhaps even ones growing out of more traditional vCPE VPN solutions, could steal their thunder.  The problem with this protectionism is that if, as I believe, we're moving toward a newer model of business networking based on virtual wires and virtual switches/routers, then virtual overlay networking is going to boom in its own right.  It would be smart for the SD-WAN vendors to think about getting in on that now.

The fact that SD-WAN symbiosis with SDN and NFV hasn’t been developed so far isn’t a technical issue but one reflecting the complex business situation that prevails.  Network operators and the major incumbent network equipment vendors alike have reason to fear SD-WAN because it could dilute spending on traditional VPNs and VPN-related equipment.  It’s difficult now for a technology concept to gain traction without strong backing from big players who will pay for advertisements and analyst reports.  We do have a number of credible SD-WAN solutions, though, and if some of these vendors can make the connection with SDN and NFV, they might be able to move the ball on this at last.

2016: The Year of the Hop

It’s the first working day of 2016, and I guess I have a kind of obligation to talk about what to expect the rest of this year on the biggest issue facing telecom—operator return on infrastructure.  We stand a year from the time when operators said that falling revenue per bit and insufficient progress in cost reduction would put their revenue/cost line in crossover territory.  The question is whether there will be progress in 2016, or whether we’ll enter 2017 with the real risk that investment in broadband services will slow.

Most of the pressure for performance is going to fall on Network Functions Virtualization (NFV) because it was really NFV that was tasked to solve the problem of crossover in the first place.  While in fact SDN has just as good a chance of reducing costs (capex and opex), nobody really seemed to expect that to happen.  SDN was promoted more for the impact it might have on the network vendor establishment, as a kind of rising up of the masses to smite the mighty.  We’ll see as I go along whether SDN might end up doing a bit more.

For now, though, let's see where NFV is.  We have a specification set from the NFV ISG that was never broad enough to cover the credible set of benefits needed to answer the operators' revenue/cost challenge.  That was known by operators even in the fall of 2013, when the second white paper on NFV was published.  The problem was that operators themselves could not "collude" (as the anti-trust lawyers would have called it), and that left only the option of participating in industry groups along with the vendors, the very vendors whose revenues would necessarily fall if cost control was successful.  Guess how that went?

Nothing is going to wrest control back to the operators, not in the ISG and not in any current or yet-to-come open-source group.  We are left in 2016 with four possible paths:

  1. NFV could be adopted as a service-specific strategy in a very limited number of areas, like business services. The total impact of that would be, by my calculation, less than 2% of total operator capex/opex.  You'd have to be able to grow this early success into something bigger.
  2. Operators could accept that the lower-level ISG standards won’t cut it, and look to OSS/BSS systems to build a business case umbrella over the basic deployment mechanisms the ISG worked out. But the TMF hasn’t moved any faster or more credibly than the ISG, and it’s also primarily a body of vendors.
  3. The non-network vendors, whether involved in NFV currently or not, could step in. These are the IT vendors like Canonical, Dell, HP, IBM, Red Hat, and Oracle.  They have everything to gain from NFV success and absolutely nothing to lose.  Support of software/hardware players could create an open community that would crack the current NFV deadlock, or simply provide a proprietary solution knowing that there’s no credible open alternative coming.  But only two of these IT giants (HP and Oracle) have substantial NFV capability, and neither has been effective in promoting systemic change.
  4. Operators could, through integration projects and their evolution to deployment, create de facto solutions that would become standards by acclamation if they work. But these operators have failed to create consensus, or even to bring out all the critical issues, in current standards and open projects.  Could they do better?

The question, obviously, is whether any of these things could move the ball on NFV progress.  To do that, they'd have to produce a business case not at the service level alone, but for broad deployment.  That's true for two reasons.  First, only broad deployment would change costs much.  Second, only broad deployment can deliver operations benefits; capex can be saved on a narrow front, but opex can't, because you have to modernize practices operator-wide.  Any of these paths to NFV success could work, but at least one has to develop significantly this year or NFV could lose steam.

Another question is whether operators, now down to the wire on their revenue/cost convergence, will start to look at other options.  This is where SDN comes in.

What we’ve proved so far with SDN is also a bit disappointing, perhaps even more so given that it came along first and has broader applicability.  We can already use SDN in cloud data centers, and we can also use it inside IP core networks.  For operators that’s not enormously helpful because they don’t have that many cloud data centers and they don’t spend much of their capex and opex on core IP networks.  Is there no more to it than this?

There is, of course, and we've made an industry out of ignoring what's there.  SDN can do new things: build truly new services, not just create another way of doing the services we've had all along.  SDN can also build new networks, meaning define a whole new model for WAN infrastructure.

The OSI model builds networking from what we know we have to what we think we need.  We have wires, so there's a physical layer, and that's a good unequivocal place to start.  What grew above Level One was based on less solid evidence.  Remember that this came about in 1974, and we presumed the need for data-link error recovery because data links were unreliable.  That gave us Level 2 (later adapted to LANs as well).  We needed services, and that gave us Level 3, and so forth.  Each layer built on the ones below, using the capabilities designed there.  Evolutionary networking, feature-wise.

Evolution doesn’t always produce logical things, only workable ones.  Look at Emperor Penguin reproduction.  What SDN could do is take us to a place where the features of networking were redistributed, where what we thought was happening below changed so radically that what was needed above would be revolutionized too.  Example:  If I have wires that are error-free, that redirect themselves to provide capacity as needed, how many Level 2 and 3 features can I then render obsolete?  I don’t need to invent perfect transmission to do this, only invent virtual-layer agility at Level One, which I can do with SDN.

There’s more.  Look at operations cost and service agility.  We hear a lot about how NFV could revolutionize costs and time-to-deploy, but where exactly do we find the realization of these benefits, assuming we realize them?  Most of what we need is service automation that ends up being exercised above the infrastructure.  If we assume that intent models can define NFV deployments, SDN connectivity, and legacy service elements, why couldn’t we reap the benefits of service automation without any of that other stuff?  Why not modernize services and let infrastructure go as it must, shifting in technology direction as convenient under our new umbrella of intent models and service virtualization?  SDN and NFV might not be needed to drive change at all.
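
Here's a sketch of that umbrella argument, with purely illustrative dispatch and names: the service automation layer works against the intent, and whether NFV, SDN, or legacy provisioning sits underneath is a detail it never sees.

```python
# A sketch of the intent-model umbrella: service automation works on the
# intent, and the realizing technology underneath is interchangeable.
def realize_with_nfv(intent):
    return f"NFV deployment for {intent['service']}"

def realize_with_sdn(intent):
    return f"SDN connectivity for {intent['service']}"

def realize_with_legacy(intent):
    return f"legacy provisioning for {intent['service']}"

REALIZERS = {
    "nfv": realize_with_nfv,
    "sdn": realize_with_sdn,
    "legacy": realize_with_legacy,
}

def automate(intent, available_tech):
    # The automation layer never changes; only the realizer does.
    return REALIZERS[available_tech](intent)

print(automate({"service": "vpn-site-12"}, "legacy"))
print(automate({"service": "vpn-site-12"}, "sdn"))
```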

We have a three-legged race here with the legs on different racers.  None of these things is really running; you can see that from the truly pathetic progress we've made.  Hopping is about it.  But the thing is, if there's a finish line and a prize to be won, even hopping will get somebody across it eventually.  That's my forecast for 2016: the Year of the Hop.

Operators don't really know how to do any of this stuff, even the best of them.  That shouldn't be surprising given that they are buyers of technology, not producers of it.  But somebody has to tell the story of next-gen networking for us to get it, and the most logical players (the network equipment vendors) are probably far more likely to want to suppress that story than to support it.  What makes NFV more likely to drive change is not that it's a technically better option (it is not) but that there are powerful players who can sponsor NFV for the good business reason that they would win big in an NFV transition.  In the SDN or service automation area, though, we could have a second-tier player step up.  Brocade and Ciena have good SDN positions, and there are many second-tier OSS/BSS players too.

I think that one of these three developments will come along and prove decisive enough to drive change in 2017: maturity of the NFV business case, realization of SDN's potential, or a service-automation reframing of service management.  Maybe more than one.  Which one(s) is beyond predicting, because there are too many forces (many totally illogical) involved.  We'll just wait and see.  Meanwhile, Happy New Year to you all!

What Early M2M Can Teach Us About Modern Technology Revolutions

Years ago I was a member of the Data Interchange Standards Association (DISA), involved in electronic data interchange (EDI).  This body provided message interface specifications for common business transactions, and because of that you could say it was a very early (and widely successful) example of M2M.  I was thinking about EDI yesterday, and wondering whether there are useful connections to draw among EDI, M2M, IoT, and even NFV.

In the old days, people used to send purchase orders and other commercial paper by express mail, largely because electronic copies would 1) be subject to arbitrary changes by any party, and 2) be uninterpretable by anything but a human without some format standards.  That kind of defeats the purpose of the exchange, or at the minimum limits its benefits.  EDI came about largely to address these two points.

One of the founding principles of EDI was a guarantee of authenticity and non-repudiability.  Somehow, you had to be sure that if you sent an electronic something, your recipient didn't diddle the fields to suit their business purposes.  The recipient, in turn, had to be sure that their copy would hold up, like a real document, as a representation of your intent.  EDI achieved this by using a trusted intermediary, a network service that received and distributed electronic commercial transactions and was always available as the source of the authentic message exchange if there was a dispute.

Message authenticity is critical in just about everything today.  Commercial EDI is still lively (more than ever, in fact), but we're now looking for other mechanisms for guaranteeing authenticity.  The most popular of the emerging concepts is the blockchain mechanism popularized by Bitcoin.  One of the things that could make blockchain useful is that it can be visualized as a kind of self-driving activity log, a message whose history, actions, and issues follow it and can always be retrieved.
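
For the curious, here's a minimal hash-chain "activity log" in the spirit of that idea, built only on Python's standard library.  It's a sketch of the principle, not Bitcoin's actual protocol: each entry carries a hash of its predecessor, so tampering with history breaks every later link.

```python
# A minimal hash-chain activity log: each entry hashes its predecessor,
# so any change to history invalidates everything after it.
import hashlib, json

def add_entry(chain, action):
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    entry = {"action": action, "prev": prev_hash}
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()).hexdigest()
    chain.append(entry)

def verify(chain):
    for i, entry in enumerate(chain):
        expected_prev = chain[i - 1]["hash"] if i else "0" * 64
        body = {"action": entry["action"], "prev": entry["prev"]}
        ok = (entry["prev"] == expected_prev and
              entry["hash"] == hashlib.sha256(
                  json.dumps(body, sort_keys=True).encode()).hexdigest())
        if not ok:
            return False
    return True

log = []
add_entry(log, "PO-1234 issued")
add_entry(log, "PO-1234 acknowledged")
print(verify(log))  # True
log[0]["action"] = "PO-1234 issued for 100 truckloads"
print(verify(log))  # False: history no longer checks out
```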

If we start to visualize applications as loose chains of microservices, a worthy cloud vision for sure, we have to ask how we’d know that anyone was who they said they were and whether any request/response could actually be trusted in a commercial sense.  For services like SDN and NFV there’s the problem of making sure that transactions designed to commit or alter resources are authentic and that changes made to services that impact price can actually be traced back to somebody who’s responsible for paying.

I think we see the future of IT and networking too much through the lens of past imperfections.  IT’s history is one of moving the computer closer to the activity, culminating in my view in what I’ve called “point-of-activity empowerment”.  I’ve lived through punch-card and batch processing, and one thing I can see easily by looking back is that the difference between past and future, in application terms, is really one of dynamism.  I have to connect services to workers as needed, not expect workers to build their practices around IT support or (worse) bring their results to IT for recording.

The problem is that dynamism means loose coupling, widely ranging information flows, many-to-many relationships, and a lot of issues in authentication.  We've looked at cloud security largely in traditional terms, but traditional applications won't make the cloud truly successful.  We need those dynamic, point-of-activity applications, and we need to solve their security problems in terms appropriate to a dynamic, high-performance, loosely coupled future.

The issue of formatting rules, what we'd call APIs today, was also important because it was always assumed that EDI linked software applications, not people.  You had to provide not only a common structure for an electronic transaction but also assurance that field codings were compatible.  The classic example from some old DISA meetings was someone who ordered 100 watermelons and had it interpreted as 100 truckloads!

One thing that tends to get overlooked here is that microservices can be a substitute for structured information.  If we think of a purchase order, for example, we think of a zillion different fields, most of which have specific formatting requirements we need to encode properly.  If we viewed a PO as a series of microservices, we'd have only a couple of fields per service.  The biggest difference, though, is that it's common to think of web service results in a format-agile way, so we have practices and tools to deal with the problems.

The web, JSON, XML, and similar related stuff also provide us guidance on how to deal with data exchanges so that structure and content are delivered in parallel.  There are also older approaches to the same goal in SOA, but one thing that seems to me to be lagging is a way of providing access to information in a less structured form.  It's not as simple as "unstructured data" or even "unstructured data analytics".  The less structure you have in information, the more free-form and contextual your application has to be for its users/workers.

The only logical way to disconnect from this level of complexity is to abstract it away.  If we have to provide workers with information to support a decision, we have to facilitate their ability to interpret a wide variety of information in many different formats.  If, on the other hand, we simply supply the worker with the answer they're looking for, then the structural issues are all buried inside the application.

This might sound like we’re limiting what workers can know by saying that “answer services” will hide basic data, but remember that if we stay with the notion of microservices we can define many different answer services that use many different information resources.  And answer services, since they provide (no surprise!) answers and not data, are inherently less complex in terms of formats and structures.
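
Here's a sketch of an answer service, with a hypothetical internal store and question: the caller gets the answer, and the structured purchase-order detail never leaves the service.

```python
# A sketch of an "answer service": the caller gets the answer, not the data.
# The store, fields, and question are all hypothetical.
_PO_STORE = {  # internal store holding all the messy structured detail
    "PO-1234": {"lines": [{"sku": "WMELON", "qty": 100, "unit": "EACH"}],
                "status": "acknowledged"},
}

def will_order_arrive_by(po_number: str, date: str) -> bool:
    """Answers the worker's actual question; the PO's formats and
    structures never leave the service."""
    po = _PO_STORE[po_number]
    # Real logic would consult shipping and calendar data; this sketch
    # just treats an acknowledged order as on time.
    return po["status"] == "acknowledged"

print(will_order_arrive_by("PO-1234", "2016-02-01"))  # -> True
```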

There used to be EDI networks, networks that charged enormous multiples over pure transport cost just to supply security, authentication/repudiation management, and structure.  Imagine how efficient our EDI processes would be today had we applied our technology changes simply to changing how we do those things, rather than to whether we now need to do them.  Modernizing networking, modernizing IT, means breaking out of the old molds, not just pouring new stuff into them.