Alcatel-Lucent Offers a Bottom-Up Metro Vision

While vendors are typically pretty coy about public pronouncements on the direction that networking will take, they often telegraph their visions through their product positioning.  I think Alcatel-Lucent did just that with its announcement of its metro-optical extensions to its Photonic Service Switch family.  Touting the new offerings as being aimed at aggregation, cloud, and content missions, Alcatel-Lucent is taking aim at the market area that for many reasons is likely to enjoy the most growth and provide the best opportunities for vendors.  It’s just shooting from cover.

Networking isn’t a homogeneous market.  Infrastructure return varies, obviously, by service type, so wireless is generally more profitable than wireline, business more profitable than residential, and higher-level services more profitable than lower-level ones.  Operators will spend more where profits are greater, so there’s an emphasis on finding ways to exploit higher return potential.  Unfortunately, the universality of IP, and the fact that broadband Internet is merging wireline and wireless to a degree, work against service-based targeting.  Another dimension of difference would be helpful, and we have it with metro.

I’ve noted in past blogs that even today, about 80% of all profitable traffic for operators travels less than 40 miles, meaning that it stays in the metro area where it originates.  Cloud computing, NFV-based services, and content services will combine to raise that percentage over the next five years.  If NFV achieves optimum deployment, NFV data center interconnection alone would be the largest single source of cloud-connect services.  Mobile EPC transformation to an SDN/optical model, and the injection of SDN-based facilitation of WiFi offload and integration, are another enormous opportunity.

Aside from profit-and-service-driven changes, it’s obvious that networking is gradually shifting focus from L2/L3 down to the optical layer as virtualization changes how we build high-level connectivity and virtual switches and routers displace traditional hardware.  It’s also obvious that the primary driver of these changes is the need to deliver lower-cost bit-pushing services in the face of steadily declining revenue per bit.

Given that one of Alcatel-Lucent’s “Shift” focus points was routing, the company isn’t likely to stand up and tout all of this directly.  Instead of preaching L2/L3 revolution from above, they’re quietly developing more capable optical-layer stuff and applying it where it makes the most sense, which is in the metro area.  The strategy aims to allow operators to invest in the future without linking that investment to displacement of (or reduction in purchasing of) legacy switches and routers.  Unlike Juniper, who tied its own PTX announcement to IP/MPLS, Alcatel-Lucent stays carefully neutral with its approach, which doesn’t commit operators to metro IP devices.

One of the omissions in Alcatel-Lucent’s positioning was, I think, negative for the company overall.  They did not offer specific linkage between their PSS metro family and SDN/NFV, though Alcatel-Lucent has highly credible solutions in both these areas.  Operators don’t want “service layer” activities or applications directly provisioning optical transport, even in the metro, but they do want service/application changes to influence transport configuration.  There is a significant but largely ignored question of how this comes about.  The core of it is the extent to which optical provisioning and management (even based on SDN) are linked to service events (even if SDN controls them).  Do you change transport configuration in response to service orders or in response to traffic when it’s observed, or maybe neither, or both?  Juniper, who has less strategic SDN positioning and no NFV to speak of, goes further in asserting integration.

I’m inclined to attribute the contrast here to my point on IP specificity.  Juniper’s approach is an IP “supercore” and Alcatel-Lucent’s is agile optical metro.  Because of its product portfolio and roots, Juniper seems determined to solve future metro problems in IP device terms, whereas Alcatel-Lucent, I think, is trying to prepare for a future where spending on both switches and routers will inevitably decline (without predicting that and scaring all their switch and router customers!).  Alcatel-Lucent can presume “continuity” of policy; transport networks today are traffic-engineered largely independently of service networks.  Juniper, by touting service protocols extending down into transport, has to take a different tack.

I’d hope that Alcatel-Lucent takes a position on vertical management integration in metro networks, even if they don’t have to do so right away.  First, I think it would be to their competitive advantage overall.  Every operator understands where networking is heading; vendors can’t hide the truth by not speaking it.  On the other hand, vendors who do speak it have the advantage of gaining credibility.  Alcatel-Lucent’s Nuage is unique in its ability to support what you could call “virtual parallel IP” configurations where application-specific or service-specific cloud networks link with users over the WAN.  They also have a solid NFV approach and decent OSS/BSS integration.  All of this would let them present an elastic approach to vertical integration of networks—one that lets either management conditions (traffic congestion, failures) or service changes (new service orders pending that would demand an adjustment at the optical layer) drive the bus.

With a story like this, Alcatel-Lucent could solve a problem, which is their lack of a significant server or data center switch position.  It’s hard to be convincing as a cloud player if you aren’t a server giant, and the same is true with NFV.  You also face the risk of getting involved in a very expensive and protracted selling cycle to, in the end, see most of the spending go to somebody else.  A cloud is a server set, and so is NFV.  Data center switching is helpful, and I like Alcatel-Lucent’s “Pod” switch approach but it would be far stronger were it bound into an interconnect strategy and a Nuage SDN positioning, not to mention operations/management.  That would build Alcatel-Lucent’s mass in a deal and increase their return on sales effort and their credibility to the buyer.

Most helpful, perhaps, is that strong vertical integration in a metro solution would let Alcatel-Lucent mark some territory at Cisco’s expense.  Cisco isn’t an optical giant, doesn’t like optics taking over any part of IP’s mission, doesn’t like purist OpenFlow SDN, NFV…you get the picture.  By focusing benefits on strategies Cisco is inclined to avoid supporting, Alcatel-Lucent makes it harder for Cisco to engage.  De-positioning the market leader is always a good strategy, and it won’t hurt Alcatel-Lucent against rival Juniper either.

I wonder whether one reason Alcatel-Lucent might not have taken a strong vertical integration slant on their story is their well-known insularity in product groups.  My recommended approach would cut across four different units, which may well approach the cooperation vanishing point even today.  But with a new head of its cloud, SDN, and NFV effort (Bhaskar Gorti), Alcatel-Lucent may be able to bind up what has traditionally been separate for them.  This might be a good time to try it.

More Signposts Along the Path to an IT-Centric Network Future

I always think it’s interesting when multiple news items combine (or conflict) in a way that exposes issues and market conditions.  We have that this week with the Cisco/Microsoft cloud partnership, new-model servers from HP, a management change at Verizon, and Juniper’s router announcements.  All of these create a picture of a seismic shift in networking.

The Cisco/Microsoft partnership pairs a Nexus 9000/ACI switching system with the Windows Azure Pack (a Microsoft product) to provide hybrid cloud integration of Microsoft Azure with Windows Server technology in the data center.  The software from Microsoft has been around a while, and frankly I don’t think there’s any specific need for a Nexus or ACI to create a hybrid cloud, since that was the mission of the software from the first.  However, Microsoft has unusual traction in the hybrid space because Azure is a PaaS cloud that offers easy integration with premises Windows Server and middleware tools.  Cisco, I think, wants to take advantage of Microsoft’s hybrid traction and develop its UCS servers as a preferred strategy for hosting the premises part of the hybrid cloud.

This is interesting because it may be the first network-vendor partnership driven by hybrid cloud opportunity.  Cisco is banking on Microsoft to leverage the fact that Azure and Windows Server combine to create a kind of natural hybrid, and that this will in turn drive earlier deployment of Azure hybrids than might be the case with other hybrid cloud models.  That would give Cisco street cred in the hybrid cloud space.  The IT strategy drives the network.

One reason for Cisco’s interest is the HP announcement.  HP has a number of server lines, but Cloudline is an Open Compute-compatible architecture that’s designed for high-density cloud deployments, and it would also be a darn effective platform for NFV.  HP has a cloud of its own, cloud software for private clouds, and the number-one server position.  If HP were to leverage all its assets for the cloud, and if it could pull hybrid cloud opportunity through from both the public cloud provider side (through a hybrid-targeted Cloudline positioning) and from the IT side (through its traditional channels), then Cisco might see its growth in UCS sales nipped in the bud.

A Microsoft cloud alliance won’t help Cisco with NFV, though, and that might be its greatest vulnerability to HP competition in particular.  Even before Cloudline, HP had what I think is the best of the major-vendor NFV approaches.  Add in hyperscale data centers and you could get even more, and my model still says that NFV will generate more data centers in the next decade than any other application, and perhaps sell more servers as well.  I’d be watching to see if Cisco does something on the NFV side now, to cover that major hole.

NFV’s importance is, I think, illustrated by the Verizon management change.  CTO Melone is retiring, and the office of the CTO will then fall under Verizon’s CIO.  Think about that!  It used to be that the CTO, Operations, and CMO were the big names.  The only people who called on the CIO were OSS/BSS vendors.  Now, I think, Verizon is signaling a power shift.  CIOs are the only telco players who know software and servers, and software and servers are the only things that matter for the future.

Globally, CIOs have been getting more involved with NFV, but now I think it’s fair to say they may be moving toward the driver’s seat.  That’s a dynamic that will require some thinking, because of the point I just made on what CIOs have historically been involved with.  OSS/BSS vendors have the most engagement with CIOs, yet OSS/BSS issues have taken a back seat from the very first meetings of the ETSI ISG.  Might this shift impact vendor engagement?  It won’t hurt HP because they have a strong operations story, and obviously Ericsson and Alcatel-Lucent do as well, but Cisco will have to do a lot more if operations is given a major role.  Of course, everyone will have to address OSS/BSS integration more effectively than they have if the guy who buys the OSS/BSS systems is leading the NFV charge.

Speaking of network vendors, we have Juniper.  Juniper has no servers, and they don’t have a strong software or operations position either.  They can’t be leaders in NFV because they don’t have the central MANO/VNFM components.  I think they represent what might be the last bastion of pure networking.  Cisco, Alcatel-Lucent, Ericsson, Huawei all need more bucks and more opportunity growth than switching and routing can hope to provide.  All of them, as contenders for leader status in network equipment, will have to expand their TAM.  Juniper is likely hoping that with the rush to servers and software, there will be opportunity remaining in the network layers.

Will there be?  Truth be told, it won’t matter for Juniper because there are no options left.  They can’t be broader players now; time has run out.  The union of IP and optics, at least part of the focus of their announcements, is inevitable, and it will cap the growth of IP and Ethernet alike.  That union will work alongside virtual routing and switching, driven by SDN and NFV at the technical level and by operators’ relentless pressure to reduce capex and opex.  It’s hard to see how a switch/router company only recently converted to the value of agile optics can win against players like Alcatel-Lucent or Ciena or Infinera or Adva, all of whom have arguably better SDN and NFV stories.

There are other data points to support my thesis that we’re moving toward the “server/software” age of networking.  Ciena already announced an NFV strategy, and now so has Adva.  Alcatel-Lucent’s CEO said that once they’re done “shifting” they will likely focus more on services.  Logical, given that professional services almost inevitably become more important as the rather vast issues of the cloud, SDN, and NFV start driving the bus.  Few vendors will field comprehensive solutions, and operators want those.  They’ll accept consortium insurance where specific vendor solutions just aren’t available from enough players to give the operators a comfortable competitive choice.

All of these points demonstrate the angst facing network vendors, but adding to that is the fact that Huawei is running away with the market, racking up 20% growth when almost all the competition is losing year-over-year.  It’s Huawei that in my view renders the pure networking position untenable for competitors; everyone else will lose on price, and network equipment differentiation is now almost impossible.  For five years now, vendors have played Huawei’s game, focusing their attention on reducing costs while the price leader in the market sharpens its blade.  It may be too late to change that attitude, though Cisco at least is certainly trying.

We have a true revolution here.  It’s not the platitudes we read about, it’s the relentless march of commoditization driven by that compression of revenue/cost curves.  It’s the shift from monolithic, specialized network hardware to hosted software with greater agility.  We are moving to an IT-driven future for networking, and there is no going back now.

What OPNFV Needs to Address

OPNFV, the Linux Foundation open-source project for NFV, is getting ready to release its first preliminary code.  Everyone, including me, is rooting for a success in creating first a reference implementation of NFV that’s functionally complete, and second an open-source framework for that implementation.  My concern is what it’s always been: do we know that the “reference implementation” is “functionally complete”?  I’d like to offer some comments to OPNFV and its members on what is needed, which I’ll call “principles”.

First and foremost, I think operators globally would agree that NFV should embrace any and all hosting resources.  We are advancing server technology with low-power and edge-deployed designs, and we’re optimizing virtualization with things like containers.  It’s less desirable to standardize on a platform than to standardize a way to accommodate whatever platforms operators find helpful.  The key to achieving this is what I’ll call an open infrastructure management model, and it has four requirements:

  1. The implementation must support multiple VIMs, with the VIM to be specified by the model used to drive management and orchestration (MANO). All VIMs must accept a common “input” from MANO to describe what is to be deployed so that all VIMs are portable across all implementations of MANO.
  2. Resources to be used for hosting/connecting VNFs as part of NFV Infrastructure (NFVI) must be represented by a VIM that supports the common input structure described above.
  3. If it is determined that some hosting options may not be available for all NFVI choices, then the VIM must be capable of returning a response to MANO indicating that a deployment request cannot be executed because one or more features are unsupported by the specified VIM.
  4. Operators will need NFV to control legacy connection resources either within the NFVI data centers or in the non-NFV portion of a service. This means that there should be “network managers” (to use a term that’s been suggested in the ETSI ISG) that look in most ways like VIMs but support only connection requests rather than both connection and hosting.  I suggest that, given the similarity, the concept of an Infrastructure Manager with two (current) subclasses—VIM and Network Manager—is appropriate.

We should be viewing these IMs as “black boxes”, with defined interfaces but flexibility in assigning the processes internally.  What goes on inside an IM is, I think, opaque to the process of standardization.  Yes, we need one or more reference implementations, but if the goal of an IM is to represent an arbitrary set of controllable resources, we have to give the IM latitude in how it achieves that goal.
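To make the requirements above a bit more concrete, here is a minimal sketch of what a common IM “input” contract might look like.  To be clear, the class names, fields, and feature strings are my own illustration, not anything from the ETSI spec or from OPNFV code; the point is only that MANO hands every IM the same request structure, and an IM can report that it can’t satisfy a request (requirement three) without exposing its internals.

```python
# Hypothetical sketch of an Infrastructure Manager (IM) contract.  Names and
# structures are illustrative only, not from the ETSI spec or OPNFV code.
from abc import ABC, abstractmethod
from dataclasses import dataclass, field


@dataclass
class DeploymentRequest:
    """Common 'input' any MANO implementation hands to any IM (requirement 1)."""
    package_id: str                  # what MANO wants deployed or connected
    model: dict                      # the package model, rendered to a dict
    features_required: set = field(default_factory=set)  # e.g. {"containers"}


@dataclass
class DeploymentResponse:
    accepted: bool
    unsupported_features: set = field(default_factory=set)  # requirement 3


class InfrastructureManager(ABC):
    """Black-box IM: defined interface, opaque internals."""
    @abstractmethod
    def deploy(self, request: DeploymentRequest) -> DeploymentResponse: ...


class VirtualInfrastructureManager(InfrastructureManager):
    """Handles hosting and connection (a VIM in ETSI terms)."""
    SUPPORTED = {"vm", "containers"}

    def deploy(self, request: DeploymentRequest) -> DeploymentResponse:
        missing = request.features_required - self.SUPPORTED
        if missing:
            return DeploymentResponse(accepted=False, unsupported_features=missing)
        # ...internal logic (OpenStack, containers, whatever) is opaque to MANO...
        return DeploymentResponse(accepted=True)


class NetworkManager(InfrastructureManager):
    """Handles connection requests only (requirement 4), e.g. legacy networks."""
    SUPPORTED = {"line", "lan", "tree"}

    def deploy(self, request: DeploymentRequest) -> DeploymentResponse:
        missing = request.features_required - self.SUPPORTED
        if missing:
            return DeploymentResponse(accepted=False, unsupported_features=missing)
        return DeploymentResponse(accepted=True)
```

The body of deploy() is deliberately vestigial here; that’s the black-box point.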

With these requirements at the bottom layer, we can now move upward to management and orchestration.  Here, I believe that it’s important to recognize that the ISG’s work has defined MANO and VNFM separately, but if you read through the material it’s fairly clear that the two are features of a common implementation.  At one time (and even now, for some members) the ETSI ISG used the term “package” to describe a deployed unit.  A package might be a complete service or service element or just a convenient piece of one.  For my second principle, I think that OPNFV has to recognize that MANO and VNFM operate collectively on packages, and that the definition of a package must provide not only how the package is deployed on resources but how the management connections are made.  I also think that “packages” are fractal, meaning that you can create a package of packages or a package of VNFs/VNFCs.

The question at the MANO/VNFM level is how the package is modeled.  It seems to be possible, based on my experience at least, to model any package by defining a connection model and then identifying the nodes that are so connected.  We have LINEs, LANs, and TREEs as generally accepted connection models.  A package might then be two NODEs and a LINE, or some number of NODEs on a LAN or in a TREE.  With the right modeling approach, though, we could define another model, “CHAIN”, that would be a linear list of nodes.  Thus, a connection model could represent any generally useful relationship between nodes.
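To illustrate what a fractal package built on connection models could look like in practice, here is a small sketch.  The structure and names are mine, purely for illustration, and not a proposed standard or anyone’s actual model format; whatever syntax eventually carries it, the shape would be similar.

```python
# Illustrative-only sketch of a fractal "package" model with connection models.
from dataclasses import dataclass, field
from typing import List, Union

CONNECTION_MODELS = {"LINE", "LAN", "TREE", "CHAIN"}


@dataclass
class VNF:
    name: str


@dataclass
class Package:
    name: str
    connection_model: str                                  # one of CONNECTION_MODELS
    nodes: List[Union["Package", VNF]] = field(default_factory=list)  # packages or VNFs/VNFCs


# Two NODEs and a LINE...
access_pair = Package("access-pair", "LINE", [VNF("firewall"), VNF("nat")])

# ...nested inside a larger package whose nodes sit on a LAN (the fractal part).
service = Package("business-vpn", "LAN", [access_pair, VNF("router"), VNF("dpi")])
```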

There are a lot of ways to define this kind of model.  Some vendors already use XML, which I’ve also used on a project.  Others prefer TOSCA or YANG.  I think it would be helpful to have a dialog to determine whether operators think that their services would be defined hierarchically as a package tree using a single standard modeling semantic, or whether they’d be happy to use anything that works.  I suspect that the answer might lie in whether operators thought service definitions could/should be shared among operators.

If a standard model approach is suitable, then I think that models could be the input to IMs.  If it’s desired to support multiple model options, then IMs will need some standard API to receive parameters from MANO.  Otherwise IMs would not be portable across MANO implementations.

Going back to the VNFM issue, I believe in the concept I’ve called “derived operations” where each package defines its own MIB and the relationship between that MIB and subordinate package or resource MIBs.  I still think this is the way to go because it moves management derivation into the model rather than requiring “manager” elements.  I’m willing to be shown that other ways will work, but my third principle is that OPNFV has to define and provide a reference implementation for a rational management vision.
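Here is a minimal sketch of the derived-operations idea, assuming a package MIB is just a set of variables computed on demand from subordinate package or resource MIBs.  The variables and derivation rules are hypothetical; the point is that the derivation lives in the model rather than in a separate “manager” element.

```python
# Hypothetical sketch of "derived operations": a package MIB whose variables
# are expressions over subordinate package/resource MIBs, evaluated on demand.

resource_mibs = {
    "vm-1": {"availability": 1.0, "latency_ms": 4.0},
    "vm-2": {"availability": 0.0, "latency_ms": 7.0},   # this instance is down
}

# The package model declares how its own MIB derives from its children.
package_mib_model = {
    "availability": lambda kids: min(k["availability"] for k in kids),
    "latency_ms":   lambda kids: sum(k["latency_ms"] for k in kids),
}


def derive_mib(model, children):
    """Build the package's MIB view from subordinate MIBs, per the model."""
    return {var: rule(list(children.values())) for var, rule in model.items()}


print(derive_mib(package_mib_model, resource_mibs))
# {'availability': 0.0, 'latency_ms': 11.0}
```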

A related point is lifecycle management, a responsibility of the VNFM in the ETSI spec.  There is simply no way to get a complicated system to progress in an orderly way from service order to operations without recognizing operating states by package and events to signal changes.  Principle number four is that OPNFV has to provide a means of describing lifecycle processes in terms of package state/event progressions.
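A tiny sketch of what a per-package state/event table might look like follows; the states, events, and process names are placeholders I made up, not the ETSI lifecycle.

```python
# Illustrative per-package state/event table; states and events are placeholders.
STATE_EVENT_TABLE = {
    ("ordered",    "deploy"):       ("deploying",  "start_deployment"),
    ("deploying",  "deploy_ok"):    ("active",     "bind_management"),
    ("deploying",  "deploy_fail"):  ("failed",     "notify_operations"),
    ("active",     "fault"):        ("degraded",   "attempt_recovery"),
    ("degraded",   "recovered"):    ("active",     "clear_alarm"),
    ("active",     "decommission"): ("terminated", "release_resources"),
}


def handle_event(state: str, event: str):
    """Return the package's next state and the lifecycle process to run."""
    next_state, process = STATE_EVENT_TABLE.get((state, event), (state, "log_and_ignore"))
    return next_state, process


print(handle_event("deploying", "deploy_fail"))   # ('failed', 'notify_operations')
```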

The final principle is simple.  Operators build services today in a variety of ways—they may start with opportunities and dive downward to realization, or they may build upward from capabilities to services.  The “service lifecycle” to an operator is more than the deployment-to-operations progression; it’s the concept-to-realization progression.  OPNFV has to demonstrate that any service lifecycle process from conception to delivery can be supported by their reference implementation.  That means we have to define not only the baseline model, for example, but also the tools that will be used to build and validate the models.

I think that all of this is possible, and some at least seems to be consistent with the direction that the ETSI ISG is taking for their second-phase activity.  I also think that all of this, in some form at least, is necessary.  A reference implementation that actually does what’s needed is enormously useful, but one that fails to support the goals of NFV will be far worse than no implementation at all.  It could freeze progress for so long that operators are forced to look elsewhere for solutions to their revenue/cost per bit problems.  We may not have the time for that.

Do We Watch Watch or Look Through Glass?

Apple announced the details on its Apple Watch, which some will call a revolution and others (including me) will yawn at.  It’s the first truly new product from Apple in about five years, the latest darling of the wearable technology niche, loved by Apple fans for sure.  The question is whether it will really amount to anything other than a status symbol.  It’s a valid question because Apple Watch isn’t the ideal wearable no matter what your Apple fan status might be.

For many, the form factor alone is going to be hard to accept.  There are two sizes, 38mm and 42mm, which equate to roughly an inch and a half or an inch and two-thirds.  A good-sized chronograph roughly equals the former, but the square face looks bigger.  It’s certainly something to be noticed, which to Apple fans may be a good thing.  Conspicuousness probably won’t sell a lot of watches, though.  There has to be utility, and that may be harder to come by, because obviously for many tasks a watch face presents a pretty minimalist GUI. Yes, you could wave a watch at a pay terminal and buy something (if Apple Pay gets cleaned up).  Yes, you could read the time and perhaps some SMS or an email notice.  The thing is, you can do that with your phone.  Some will pay a minimum of three hundred fifty bucks to wave a watch instead of a phone, but I don’t think that will start a revolution.

All wearable technology is essentially an extension of mobile broadband.  While it might work standalone, it’s really designed to work with a mobile phone (probably) or tablet (possibly), which means you have to value it based on what it can “input” into the mobile/behavioral ecosystem or what it can output from it.  The Watch can be a tickler to tell you to get your phone out, and it can let you do some basic things without taking out your phone.

Probably the most interesting application for Apple Watch is biometric monitoring, which could be used to track fitness goals and monitor yourself during exercise.  Even here, though, it’s possible to do that stuff in other ways.  Judging from what I see on the street, there aren’t too many people in gyms or exercising anyway.  I took a two-and-a-half mile walk yesterday and didn’t see anyone who wasn’t in a car.  More intense health care apps are for the future only, and then only if issues with FDA approval can be dodged.

Why then is Apple doing this?  It’s most likely a matter of niche marketing.  Apple fans value social interaction, cool-ness, leading-edge stuff.  You can sell them stuff that most of the population won’t buy.  There’s nothing wrong with that approach, except perhaps if you believe some of the heady numbers.  Five hundred million units by 2020 is one estimate I saw, and a Street guy who’s a bull on Apple thinks a couple billion in that timeframe is reasonable.  Milking your base with add-on products is Marketing 101, but you have to be wary of expectations.  Total smartwatch sales so far have been less than two million units, and many of the things you can do with Apple Watch can be done with earlier releases from other vendors, at lower cost.

Competition there is.  Questions on the value proposition are there too.  But Apple’s big risk isn’t competitors in the smartwatch space, or even lack of interest in smartwatches.  It’s the possibility that Google might still do something with Glass.

The most powerful of all our senses is sight, and in my view that means that the logical way to supplement any mobile device is through augmented-reality glasses.  Yes, I know that’s what Google Glass was/is supposed to be and yes, I know that the story is that Glass is gas, so to speak.  The truth, I think, is that Glass got away from Google and they simply were not ready to capitalize on what happened when it came out.  That doesn’t mean it’s not the best approach.

Google says that Glass is only exiting one phase and preparing for a new one.  A number of news stories have made that same point in more detail, claiming that the current Glass strategy was only a proof of concept, a kind of field trial.  It’s hard not to see this as an opportunity for Google, though.  What better way to kick sand in Apple’s face than to launch a really useful wearable, and I think even the hardest-core Apple aficionado would agree that the king of the wearable concept is still augmented reality glasses.

But will Google really push Glass?  Google has a habit of trying to launch a revolution on the cheap, hoping partners will do the heavy lifting and take the risks.  Remember Google Wave?  It was another skunk-works project that had great potential, but Google never really invested in it.  Some now believe that Android may belong in that same category, a concept that Google should have productized and pushed rather than simply launched and blown kisses at.  It’s hard to see how Glass could succeed without support from Android devices, so would that mean Google would have to get more serious about Android?

The Google MVNO rumor might be an interesting data point.  If Google wanted to stay with its advertising roots, though, it’s hard to see how a Glass/Android pairing could promote advertising effectively if the associated Android phone wasn’t on an MVNO service that could then be partially ad-sponsored.   The thing is, before the story took hold, Google seemed to be trying more to control expectations than to promote the concept.  That doesn’t sound like it’s prepping a new Glass.

They should, because augmented reality could be great for ads.  You can picture a Glass wearer walking down the street and viewing the ads of stores along the route.  It’s hard to get that same effect by looking at a watch.  Selling ad space on a billboard an inch and a half square is definitely an uphill battle.  Augmented reality is also great for travelers, for gamers, for a host of large markets that cut across current vendor/technology boundaries.  It seems to me those would be better places to go.

The potentially positive thing with Apple Watch is that it could extend the concept of personalization and it could help further integrate mobile broadband into day-to-day behavior.  The biggest impact of mobile broadband on traffic is video viewing, but the biggest impact overall is on behavior.  We are remaking ourselves through the agency of our gadgets, and Watch might boost that.  Here, Apple is surely hoping that developers will innovate around the basics to develop something useful.

Useful but not compelling.  The thing I’m not clear on is why Apple would do something like that, which I think would magnify the value of the cloud by creating in-cloud agents, when they seem unwilling to take a lead in the cloud itself.  Without Apple leadership, the cloud is unlikely to be a place where developers elect to go, and as long as Apple stays in the background, cloud-wise, they are at risk of being preempted by Google, Glass or no.  In the end, broadband devices are appliances to help us live, and the watch can be only a subordinate appliance.  The master intelligence has to be in the cloud, and if they want Watch to succeed, so does Apple.

Will TV Viewing Habits Change Metro Architecture?

According to a couple of recent surveys, TV viewing is dropping in the 18-34 year-old age group.  Some are already predicting that this will mean the end of broadcast TV, cable, and pretty much the media World as We Know It.  Certainly there are major changes coming, but the future is more complicated than the “New overtakes the Old” model.  It’s really dependent on what we could call lifestyle phases, and of course it’s really complicated.  To make things worse, video could impact metro infrastructure planning as much as NFV could, and it’s also perhaps the service most at risk of itself being impacted by regulatory policy.  It’s another of those industry complications, perhaps one of the most important.

Let’s start with video and viewing changes, particularly mobile broadband.  “Independence” is what most young people crave.  They start to grow up, become more socially aware, link with peer groups that eventually influence them more than their parents do.  When a parent says “Let’s watch TV” to their kids, the kids hear “Stay where I can watch you!”  That’s not an attractive option, and so they avoid TV because they’re avoiding supervision.  This was true fifty years ago and it’s still true.

Kids roaming the streets or hanging out in Starbucks don’t have a TV there to watch, and mobile broadband and even tablets and WiFi have given them an alternative entertainment model, which is streaming video.  So perhaps ten years ago, we started to see youth viewing behavior shift because technology opened a new viewing option that fit their supervision-avoidance goal.

Few people will watch a full hour-long TV show, much less a movie, on a mobile device.  The mobile experience has to fit into the life of people moving, so shorter clips like music videos or YouTube’s proverbial stupid pet tricks caught on.  When things like Facebook and Twitter came along, they reinforced the peer-group community sense, and they also provided a way of sharing viewing experiences through a link.

Given all this, it’s hardly surprising that youth has embraced streaming.  So what changes that?  The same thing that changes “youth”, which is “aging”.  Lifestyles march on with time.  The teen goes to school, gets a job and a place to live, enters a partner relationship, and perhaps has kids of his/her own.

Fast forward ten years.  Same “kid” now doesn’t have to leave “home” to avoid supervision, but they still hang out with friends and they still remember their streaming habits.  Stupid pet tricks seem a bit more stupid, and a lot of social-media chatter can interfere with keying down after a hard day at the office.  Sitting and “watching TV” seems more appealing.  My own research says that there’s a jump in TV viewing that aligns with independent living.

Another jump happens two or three years later when the “kid” enters a stable partner relationship.  Now that partner makes up a bigger part of life, the home is a better place to spend time together, and financial responsibilities are rising and creating more work and more keying down.  There’s another jump in TV viewing associated with this step.

And even more if you add children to the mix.  Kids don’t start being “independent” for the first twelve years or so on the average.  While they are at home, the partner “kids” now have to entertain them, to build a set of shared experiences that we would call “family life”.  Their TV viewing soars at this point, and while we don’t have full data on how mobile-video-exposed kids behave as senior citizens yet, it appears that it may stay high for the remainder of their lives.

These lifecycle changes drive viewing changes, and this is why Nielsen and others say that TV viewing overall is increasing even as it’s declining as a percentage of viewing by people between 18 and 34.  If you add to this mix the fact that in any stage of life you can find yourself sitting in a waiting room or on a plane and be bored to death (and who shows in-flight movies anymore?), you see that mobile viewing of video is here to stay…sort of.

The big problem that TV faces now isn’t “streaming” per se, it’s “on-demand” in its broadest sense—time-shifted viewing.  Across all age groups we’re seeing people get more and more of their “TV” in non-broadcast form.  Competition among the networks encourages them to pile into key slots with alternative shows while other slots are occupied by the TV equivalent of stupid pet tricks.  There are too many commercials and reruns.  Finally, we’re seeing streaming to TV become mainstream, which means that even stay-at-homes can stream video instead of watching “what’s on”.

I’ve been trying to model this whole media/video mess with uncertain results, largely because there are a huge number of variables.  Obviously network television creates most of the original content, so were we to dispense with it we’d have to fund content development some other way.  Obviously cable networks could dispense with “cable” and go directly to customers online, and more importantly directly to their TV.  The key for them would be monetizing this shift, and we’re only now getting some data from “on-demand” cable programming regarding advertising potential for that type of delivery.  I’m told that revenue realization from streaming or on-demand content per hundred views is less than a third of channelized real-time viewing.

I think all of this will get resolved, and be resolved in favor of streaming/on-demand in the long run.  It’s the nature of the current financial markets to value only the current quarter, which means that media companies will sacrifice the future to make a buck in the present.  My model suggests that about 14% of current video can sustain itself in scheduled-viewing broadcast form, but that ignores the really big question—delivery.

If I’m right that only 14% of video can sustain broadcast delivery then it would be crazy for the cable companies to allocate the capacity for all the stuff we have now, a view that most of the cable planners hold privately already.  However, the traffic implications of streaming delivery and the impact on content delivery networks and metro architecture would be profound.

My model suggests that you end up with what I’ll call simultaneity classes.  At the top of the heap are original content productions that are released on a schedule whether they’re time-shifted in viewing or not and that command a considerable audience.  This includes the 14% that could sustain broadcast delivery and just a bit more—say 18% of content.  These would likely be cached in edge locations because a lot of people would want them.  There’s another roughly 30% that would likely be metro-cached in any significant population center, which leaves about 52% that are more sparsely viewed and would probably be handled as content from Amazon or Netflix is handled today.

The top 14% of content would likely account for about two-thirds of views, and the next 30% for 24% of views, leaving 10% for all the rest.  Thus it would be this first category of viewing, widely seen by lots of people, that would have the biggest impact on network design.  Obviously all of these categories would require streaming or “personalized delivery”, which means that the total traffic volume to be handled could be significant even if everyone were watching substantially the same shows.

“Could” may well be the important qualifier here.  In theory you could multicast video over IP, and while that wouldn’t support traditional on-demand programming there’s no reason it couldn’t be used with prime-time material that’s going to be released at a particular time/date.  I suspect that as on-demand consumption increases, in fact, there will be more attention paid to classifying material according to whether it’s going to be multicast or not.  The most popular material might well be multicast at its release and perhaps even at a couple of additional times, just to control traffic loads.

The impact of on-demand on networking would focus on the serving/central office for wireline service, and on locations where you’d likely find SGWs today for mobile services (clusters of cells).  The goal of operators will be to push caches forward to these locations to avoid having to carry multiple copies of the same videos (time-shifted) to users over a lot of metro infrastructure.  So the on-demand trend will tend to encourage forward caching, which in turn would likely encourage at least mini-data-center deployments in larger numbers.

What makes this a bit harder to predict is the neutrality momentum.  The more “neutral” the Internet is, the less operators can hope to earn from investing in it.  It seems likely that the new order (announced but not yet released) will retain previous exemptions for “interior” elements like CDNs.  That would pose some interesting challenges because current streaming giants like Amazon and Netflix don’t forward-cache in most networks.  Do operators let them use forward caches, charge for the use of them, or what?

There’s even a broader question, which is whether operators take a path like that of AT&T (and in a sense Verizon) and deploy an IP-non-Internet video model.  For the FCC to say that AT&T had to share U-verse would be a major blow to customers and shareholders, but if they don’t say that then they are essentially sanctioning the bypassing of the Internet for content in some form.  The only question would be whether bypassing would be permitted for more than just content.

On-demand video is yet another trend acting to reshape networking, particularly in the metro sense.  Its complicated relationship with neutrality regulations means it’s hard to predict what would happen even if consumer video trends themselves were predictable.  Depending on how video shakes out, how NFV shakes out, and how cloud computing develops, we could see major changes in metro spending, which means major spending changes overall.  If video joins forces with NFV and the cloud, then changes could come very quickly indeed.

The Question of MWC: Can NFV Save us From Neutrality?

At MWC, US FCC Chairman Wheeler tried to clarify (some would say “defend”) the Commission’s Neutrality Order.  At almost the same time Ciena released its quarterly numbers, which were light on the revenue line.  I think the combination of these two events defines the issues that network operators face globally.  I just wish they defined the best way to address them.  Maybe the third leg of the news stool can help with that; HP and Nokia are partnering on NFV.  Or perhaps the new EMC NFV initiative might mean something good.  Let’s see.

Ciena missed on their revenue line by just short of $30 million, about 5% less than expectations and down about a percent y/y.  This is pretty clear evidence that operators are not investing as much in basic transport, which suggests that they are holding back on network capacity build-out as they search for a way to align investment and profit better.  It’s not that there isn’t more traffic, but that the primary source of traffic—the Internet—doesn’t pay incrementally for those extra bits.

Operators obviously have two paths toward widening the revenue/cost-per-bit separation to improve profits.  One is to raise revenue and the other to lower costs, and it’s fair to say that things like the cloud, SDN, and NFV have all been aimed to some degree at both.  Goals are not the same as results, though.  On the revenue side, the problem is that operators tend to think of “new services” as being changes to the service of information connection and transport.  I think that the FCC and Ciena are demonstrating that there is very little to hope for in terms of new connection/transport revenue.

The previous neutrality order, sponsored by VC-turned-FCC-chairman Genachowski, Wheeler’s predecessor, was a big step in favor of OTT players over ISPs.  It had a stated intention of preserving the Internet charging model, meaning bill-and-keep, no settlement, no paid QoS.  It didn’t actually impose those conditions for fear of exceeding the FCC’s legal standing, but even its limited steps went too far, and the DC Court of Appeals overturned it.  Wheeler had the opportunity to step toward “ISP sanity”, and in his early statements he seemed to favor a position where settlement and QoS might come along.  That hope was dashed, perhaps because of White House intervention.

We still don’t have the full text of the order, but it seems very clear from the press release that the FCC is going to use Title II to establish its standing to make changes, and then do everything that Genachowski wanted, and more.  The order will ban paid prioritization—as far as I can tell no matter who pays.  It will “regulate” interconnect, which seems likely to mean it will not only sustain bill-and-keep but also take a dim view of things like making Netflix pay for transport of video.  And the FCC also proposes to apply this to mobile.

The Internet is the baseline connectivity service worldwide.  More traffic flows through it than through everything else combined, and so you can’t hope to rebuild losses created with Internet services by subsidizing them from business IP or Ethernet.  If Wheeler’s position is what it appears to be, then profitable ISP operation isn’t possible for long, perhaps not even today.  Whether that will mean operators push more capex into independent content delivery mechanisms, which are so far exempt, remains to be seen, as does the technology that might be used.  Certainly there will be a near-term continued capex suppression impact while the whole thing is appealed.

To me, that’s the message of Ciena.  If operators knew that they could make pushing bits profitable they would sustain fiber transport investment first and foremost because that’s where bits are created.  They’ve instead focused on RAN, content delivery, and other things that are not only not bits but also not necessarily sustainable given neutrality regulatory trends.  Dodging low ROI may get harder and harder with mobile services subject to neutrality in the US and roaming premiums ending soon in the EU.

Does that leave us with cost reduction?  Is there revenue out there besides connection/transport?  Some sort of non-traditional connection/transport could help.  SDN might offer some options here, but the work isn’t moving in that direction—it’s all about white boxes and lowering costs.  The cloud is a pure new-revenue opportunity, but my contacts among operators suggest that they’re not all that good at exploiting cloud opportunity yet.  We’re left, I think, with NFV, which is why NFV has gotten so hot.

Up to now, NFV vendors have fallen into three categories.  One group has network expertise and functions and a collateral interest in sustaining the status quo.  Another has the resources to host stuff, but nothing much to host on it.  The third group has nothing at all but an appetite for PR, and this has sadly been the largest and most visible group.  Perhaps that’s now changing.

I’ve believed for some time that HP was one of the few vendors that actually had a full-spectrum NFV implementation that included operations integration and service lifecycle management.  They also have a partnership program, and now that program is expanding with the addition of Nokia.  Nokia has functionality and mobile expertise, but no respectable hosting capability and no MANO or OSS/BSS integration.

Nokia says IT and the telco world are merging more quickly than expected, which is true.  NFV is a big part of merging them, in fact.  Nokia wants to be the connected environment of the future, where virtualization can deliver the lower costs that operators need for profit and that users need to sustain their growing commitment to the Internet.  Nokia is strong in the mobile network, RAN and virtual IMS/EPC.  They’re essentially contributing VNFs to the picture, but VNFs in the mobile area which represents the last bastion of telco investment.  That could prove critical.

HP is strong in the infrastructure, cloud, and management areas, plus service-layer orchestration.  Their deal with Telefonica suggests that big operators see HP not as a kind of point-solution one-off NFV but as a credible platform partner.  That’s critical because wherever you start with NFV in VNF functional terms, you pretty much have to cover the waterfront in the end or you’ll fail to realize the benefits you need.

The two companies told a story to TelecomTV that made these basic points, though I think without making a compelling link to each company’s own contribution and importance.  Both were careful to blow kisses at open standards and to acknowledge that their pact, which includes integration and professional services to sell and support as a unit, isn’t exclusive.  This, I think, is attributable to the vast disorderly mass of NFV stuff going on.  Nobody wants to bet on a single approach, a single partnership.

That’s likely what gives EMC hope.  EMC has to be worried that almost everyone in the NFV world is treating OpenStack and NFV as synonymous.  That’s not true, and even the ETSI ISG now seems to accept the notion of orchestration both above the Virtual Infrastructure Manager (VIM), where OpenStack lives (what I’ve called “functional” orchestration), and within it (“structural” orchestration).  Still, the perception worries EMC’s VMware unit, which obviously EMC wants to fix.

How far they’ll go here is hard to say.  I doubt that EMC/VMware are interested in doing a complete MANO with OSS/BSS integration, so they could create a VIM of their own to operate underneath this critical functional layer.  The fact that they’ve included Cyan as an early partner suggests this, but IMHO Cyan doesn’t match other NFV players like Alcatel-Lucent, HP, and Overture in terms of MANO and operations integration.  EMC can’t drive the NFV bus without MANO but only ride along, and everyone who has a good MANO is already committed to OpenStack.  EMC is also colliding with full-solution player Oracle, who presented their own NFV approach at MWC and targeted some of the same applications the HP/Nokia alliance targets.

My guess here is that EMC will be looking to other telco network vendors (such as Alcatel-Lucent or Ericsson) for partnering in a way similar to that presented by HP/Nokia.  I’d also guess that EMC’s NFV initiatives will put more pressure on Cisco to tell a comprehensive NFV story.  Here their risk is that “partnership” after the HP/Nokia deal will almost have to include much tighter sales/support integration, and a perfect partner for EMC will be hard to find.

Perfection in networking is going to be hard to find, in fact, and we are on a track to search—if not for perfection then at least for satisfaction.  NFV and the cloud could provide new revenue for operators, but there’s no incentive for them to subsidize an under-performing Internet investment with those revenues.  A decade ago, responsible ISPs were calling for changes in the business model because they saw this all coming.  Well, it may now be here.

CIOs See a New Cloud Model Emerging

In some recent chats with enterprise CIOs, I noticed that there were some who were thinking a bit differently about the cloud.  Their emerging views were aligned with a Microsoft Azure commercial on sequencing genomes, though it wasn’t possible to tell whether Microsoft influenced their thinking or they set the tone for Microsoft.  Whatever the case, there is at least a chance that buyers and sellers of cloud services may be approaching the presumptive goal of personalization and empowerment, but in a slightly different way.  This way can be related to a total-effort or “man-hour” vision of computing.

It’s long been common, if politically incorrect, to describe human effort in terms of “man-hours” (or man-days, or man-years), meaning the number of people needed times the length of time required.  If something requires a hundred man-hours, you could theoretically accomplish it with a single person in a hundred hours, a hundred people for an hour’s effort, or any other combination.  Many compute tasks, including our genome example, could similarly be seen as a “total-effort” processing quantum that could be reached by using more resources for a shorter time or vice versa.

Traditional computing limits choices in total-effort planning, because if you decide you need a hundred computers for an hour you have to ask what will pay the freight for them the remainder of the time.  One of the potentially profound consequences of the cloud, the one that’s highlighted by Microsoft and accepted by more and more CIOs, is that the cloud could make any resource/time commitment that adds up to the same number equivalent in a cost sense.  A cloud, with a pool of resources of great size, could just as easily commit one hundred computers for an hour as one computer for a hundred hours.  That facilitates a new way of looking at computing.
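A trivial bit of code makes the equivalence point, assuming the simple per-instance-hour pricing that most IaaS clouds approximate; the rate here is made up purely for illustration.

```python
# Total-effort arithmetic: under simple per-instance-hour pricing (the rate is
# a made-up number), any resource/time split of the same total effort costs
# the same, which is what makes the tradeoff newly interesting in the cloud.
RATE_PER_INSTANCE_HOUR = 0.10   # hypothetical price


def cost(instances: int, hours: float) -> float:
    return instances * hours * RATE_PER_INSTANCE_HOUR


print(cost(1, 100))    # 10.0 -> one machine grinding for a hundred hours
print(cost(100, 1))    # 10.0 -> same cost, an answer in one hour instead of ~four days
```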

Most of you will probably recognize that we’re stepping over the threshold of what used to be called grid computing.  With the “grid” the idea was to make vast computational resources available for short periods, and what separated it from the “cloud” was that specific resource/time assumption.  When the cloud came along, it focused on hosting stuff that we’d already developed for discrete IT, which means that we accepted the traditional computing limitations on our total-effort tradeoffs.  One box rules them all, not a hundred—even when you have a hundred—because we built our applications for a limited number of boxes committed for long periods of time.

The reason why we abandoned the grid (besides the fact that it was old news to the media at some point) was that applications were not designed for the kind of parallelism that the grid demanded.  But early on, parallelism crept into the cloud.  Hadoop, which is a MapReduce concept, is an example of parallelism applied to data access.  My CIO friends suggest that we may be moving toward accepting parallelism in computing overall, which is good considering that some initiatives are actually trying to exploit it.

NFV is an example.  We hear all the time about “horizontal scaling”, which means that instances of a process can be spun up to share work when overall workloads increase, and be torn down when no longer needed.  Most companies that have worked to introduce it into NFV, notably Metaswitch, whose Project Clearwater is a scale-friendly implementation of IMS, understand that you can’t just take an application and make it (or its components) scalable.  You have to design for scaling.

Another related example is my personalization or point-of-activity empowerment thesis.  The idea is to convert “work” or “entertainment” from some linear sequence imposed by a software process into an event-driven series.  Each “event”, being more atomic in nature, could in theory be fielded by any given instance of a process.  While event-driven programming doesn’t mandate parallelism, it does tend to support it as a byproduct of the nature of an event and what has to be done with it.
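Here is a minimal sketch of what “designing for scaling” means in this event-driven sense: handlers keep no state inside the process, so any instance can field any event.  The shared store and the event shape are assumptions for illustration, not anyone’s actual product.

```python
# Minimal sketch of a scale-friendly, event-driven worker: no state is held in
# the process itself, so any number of identical instances can field any event.
# The shared store and event shape are assumptions for illustration only.

shared_store = {}   # stand-in for an external database/cache shared by all instances


def handle_event(event: dict) -> dict:
    """Field one atomic event; read/write state externally, never locally."""
    session = shared_store.get(event["user"], {"count": 0})
    session["count"] += 1
    shared_store[event["user"]] = session
    return {"user": event["user"], "events_seen": session["count"]}


# Because handle_event keeps nothing local, a load balancer could hand
# successive events from the same user to different instances without breaking anything.
print(handle_event({"user": "alice", "type": "location-update"}))
print(handle_event({"user": "alice", "type": "query"}))
```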

It seems to me that we have converging interest in a new application/service/feature model developing.  A combination of horizontal scaling and failover on one side and point-of-activity personalization on the other side is combining with greater CIO interest in adopting a grid model of parallel processing.  All of this is enriching the cloud.

Which, as I’ve said before, needs enriching.  The cloud broke away from the grid because it could support current application models more easily, but at the very least it’s risky to assume that current application models are where we’re heading in the long term.  I think the three points on parallelism I’ve made here are sufficient to justify the view that we are heading toward event-driven, highly parallel computing, and that the cloud’s real benefit in the long term comes from its inherent support for this capability.

I also think that this new model of the cloud, if it spreads to include “hosting” or “the Internet” (as it’s likely to do) is what generates a new model of networking.  Event-driven, personalized, parallel applications and services are highly episodic.  Yes, you can support them with IP-connectionless services, but the big question will be whether that’s the best way to do it.  If we assume that Internet use increasingly polarizes into mobile/behavioral activity on one side and long-term streaming on the other, then that alone could open the way to new models of connectivity.

I mentioned in a prior blog that mobile/behavioral services seem to favor the notion of a personal agent in the cloud operating on behalf of the user, fielding requests and offering responses.  This would internalize actual information-gathering and processing and mean that “access” was really only accessing your agent.  On the content side, it is very likely that today’s streaming model will evolve to something more on-demand-like, with a “guide” like many of the current streaming tools already provide to disintermediate users from “Internet” interfaces.  That would facilitate even radical changes in connectivity over time.

It’s really hard to say whether “facilitating” changes is the same as driving them.  What probably has to happen is that conventional switches and routers in either legacy device or software-hosted form would need to evolve their support for OpenFlow and through that support begin to integrate new forwarding features.  Over time, and in some areas like mobile/EPC replacement, it would be possible to build completely new device collections based on SDN forwarding.  If traditional traffic did indeed get subducted into the cloud core by personal agent relationships and CDN, then these could take over.
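To make “new forwarding features” slightly more concrete, here is a toy flow table in the OpenFlow spirit.  It isn’t any real controller’s API; it’s only meant to show why forwarding behavior becomes something you can program, and therefore evolve, rather than something baked into a device.

```python
# Toy flow table in the OpenFlow spirit (not any real controller's API):
# forwarding is an ordered list of match/action rules, so adding a new
# forwarding feature is just adding a new kind of rule.
import ipaddress

flow_table = [
    ({"dst": "10.1.1.0/24"},           {"action": "forward", "port": 2}),
    ({"dst": "0.0.0.0/0", "dscp": 46}, {"action": "forward", "port": 3}),  # expedited traffic
    ({},                               {"action": "send_to_controller"}),  # table miss
]


def matches(rule: dict, packet: dict) -> bool:
    for field, value in rule.items():
        if field == "dst":
            if ipaddress.ip_address(packet["dst"]) not in ipaddress.ip_network(value):
                return False
        elif packet.get(field) != value:
            return False
    return True


def forward(packet: dict) -> dict:
    for match, action in flow_table:
        if matches(match, packet):
            return action
    return {"action": "drop"}


print(forward({"dst": "10.1.1.9"}))              # {'action': 'forward', 'port': 2}
print(forward({"dst": "8.8.8.8", "dscp": 46}))   # {'action': 'forward', 'port': 3}
print(forward({"dst": "8.8.8.8"}))               # {'action': 'send_to_controller'}
```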

What remains to be seen is how “the cloud”, meaning cloud vendors, will respond to this.  Complicated marketing messages are always a risk because they confound the media’s desire for simple sound bites, but even buyers find it easier to sell cost reduction now than improved return on IT investment over time.  The best answers aren’t always the easiest, though, and the big question for the cloud IMHO is who will figure out how to make the cloud’s real story exciting enough to overcome a bit of complexity.

What’s “NFV” versus “Carrier Cloud?”

We had a number of “NFV announcements” at MWC and like many such announcements they illustrate the challenge of defining what “NFV” is.  Increasingly it seems to be the carrier cloud, and the questions that raises are “why?” and “will this contaminate NFV’s value proposition?”

NFV has always had three components.  One is the virtual network function pool (VNFs) that provide the hosted features that build up to become services.  Another is the resources (NFV infrastructure or NFVI) used to host and connect VNFs, and the last is the MANO or management/orchestration functions that deploy and sustain the VNFs and associated resources.  It’s not hard to see that VNFs are pretty much the same thing as application components, and that NFVI is pretty much like cloud infrastructure.  Thus, the difference between “cloud” and “NFV” is largely the MANO area.  If MANO is there, it’s NFV.  But that raises the first question, which is whether there are versions of MANO, and even whether it’s always useful.

Automated deployment and management is usually collectively referred to as “service automation”.  It is roughly aligned with what in the cloud would be called “application lifecycle management” or ALM.  This function starts with development and maintenance, moves through testing and validation, and ends with deployment.  In most applications, ALM is something that, while ongoing in one sense, is episodic.  You do it when there are new apps or changes, but generally ALM processes are seen as stopping short of the related process of sustaining the applications while they’re active.

MANO is a kind of bridge, something that binds the ALM processes of lifecycle management to the sustaining management processes.  This binding is where the value of MANO comes in, because if you assume that you don’t need it you’re tossing the primary differentiator of MANO.  So what makes the binding valuable?  The answer is “dynamism”.
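
Here’s a small, hypothetical Python sketch of that binding idea: deploying a service also wires up how its later lifecycle events will be handled, so sustaining management reuses what deployment decided rather than being configured separately afterward.  None of these names come from a real MANO product.

```python
# Sketch of ALM-to-management binding; names and events are assumed, not a real product.

from typing import Callable, Dict

class ServiceBinding:
    """Created at deployment time; consulted for the life of the service."""
    def __init__(self, service_id: str):
        self.service_id = service_id
        self.handlers: Dict[str, Callable[[str], None]] = {}

    def on(self, event: str, handler: Callable[[str], None]) -> None:
        self.handlers[event] = handler

    def handle(self, event: str, detail: str) -> None:
        # Unbound events fall through to a default report rather than being lost.
        self.handlers.get(event, lambda d: print(f"{self.service_id}: unhandled {event}: {d}"))(detail)

def deploy_service(service_id: str) -> ServiceBinding:
    binding = ServiceBinding(service_id)
    # The ALM side: deployment decisions become management policy in the same step.
    binding.on("vnf_failed", lambda d: print(f"{service_id}: redeploying failed VNF ({d})"))
    binding.on("sla_breach", lambda d: print(f"{service_id}: scaling out to restore SLA ({d})"))
    return binding

svc = deploy_service("business-vpn-42")
svc.handle("vnf_failed", "vFirewall on srv-2")  # sustaining management reuses the deployment's bindings
```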

If an application or service rarely changes over a period of months or years, then the value of automating the handling of changes is clearly lower.  Static services or applications could in fact be deployed manually, and the resource-to-component associations for management could be set manually as well.  This is actually what’s getting done in a lot of “NFV trials”, particularly where the focus is on long-lived multi-tenant services like mobile services.  It’s not that these can’t be NFV applications, but that they don’t exercise or prove out the primary MANO differentiator that is the primary NFV value proposition—that dynamic binding of ALM to sustaining management.

Applications/services become dynamic for one of two reasons.  First, they may be short-lived themselves.  You need service automation when something is going to live for days or hours, not months or years.  Second, they may involve highly variable component-to-resource relationships, particularly where those relationships have to be exposed to service buyers to validate SLA performance.
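
A back-of-envelope calculation, using entirely assumed numbers, shows why lifetime alone changes the economics of manual operations:

```python
# Purely illustrative arithmetic: why short-lived services make manual operations
# untenable while long-lived ones tolerate it.  The numbers below are assumptions.

def ops_touches_per_year(service_lifetime_days: float, instances: int) -> float:
    """Each deploy/teardown cycle is at least one operations 'touch'."""
    cycles_per_instance = 365.0 / service_lifetime_days
    return cycles_per_instance * instances

# A static contract renewed yearly vs. an on-demand feature that lives a day.
print(ops_touches_per_year(service_lifetime_days=365, instances=10_000))  # 10000.0 touches/year
print(ops_touches_per_year(service_lifetime_days=1,   instances=10_000))  # 3650000.0 touches/year
```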

I reported that NFV is making better progress toward proving itself out at the business level, which is both good and important.  However, some of that progress is of the low-apple variety.  We’re picking applications that are actually more cloud-like, ones that represent those long-lived multi-tenant services that don’t actually require much ALM-to-management bridging.  Full-spectrum service automation is less stressed in these services, and so the case they make for NFV is narrow.

You can absolutely prove out an “NFV” implementation of mobile networking, mostly because early operator work with these services demonstrates that there is a capital cost savings in the transition from appliances to servers, and that operations costs and practices aren’t much impacted by the shift because the hosted version of the service looks, in a management/complexity sense, much like the appliance version did.  You can also prove out “NFV” implementations of virtual CPE for business users for much the same reason.  The services are actually rather static over the contract period, which is rarely less than a year.  Where dynamism is present, it’s often in the service-layer feature set (firewalls, NAT, etc.), and NFV principles there can reduce both capex and opex because they eliminate truck rolls.
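
To show where that dynamism sits, here’s a trivial sketch (names are hypothetical) of a vCPE feature chain: adding a firewall becomes a change to the hosted chain that orchestration carries out, not a visit to the premises.

```python
# Hypothetical sketch of vCPE feature-chain dynamism; not any vendor's actual model.

from typing import List

class VirtualCPE:
    def __init__(self, customer: str, features: List[str]):
        self.customer = customer
        self.features = features  # ordered service chain, e.g. ["nat", "firewall"]

    def add_feature(self, feature: str) -> None:
        """In an NFV model this would trigger MANO to insert a VNF into the chain."""
        if feature not in self.features:
            self.features.append(feature)
            print(f"{self.customer}: chain is now {' -> '.join(self.features)}")

cpe = VirtualCPE("acme-branch-7", ["nat"])
cpe.add_feature("firewall")  # acme-branch-7: chain is now nat -> firewall
```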

There’s still a risk here, though, because we’ve provided only a limited validation of the full range of MANO “bindings”.  The fact that so many vendors present a “MANO” strategy that’s really OpenStack is a reflection of the fact that many multi-tenant NFV trials are really carrier cloud trials.  Is MANO more than that?  If it’s not then a lot of people have wasted years defining NFV.  If it is, then we have to be clear on what “more” means, and prove that an assertion of additional functionality is both true and presents value to justify its costs.

I think we have to get back to dynamism here.  If there is a value to “network services” that can not only be sustained in an evolving market but also grow over time, that value has to be derived from personalization or specialization.  Since bits are neither personal nor special, the value has to come from higher layers.  Some of these valuable services may be, like content or mobility, based on multi-tenant behaviors of facilities and components, but it’s hard to see how we can depend on new services of this type arising.  What else of this nature is out there?  On the other hand, it’s easy to see that applications and features built on top of both mobile and content services could be very valuable.

I’ve talked before about mobile/behavioral symbiosis for both consumer and worker support.  It’s also easy to conceptualize additional services in the area of content delivery.  Finding not only videos but portions of videos is very useful in some businesses.  Hospitals that video their grand rounds would love to be able to quickly locate a reference to a symptom or diagnosis, for example.  In the consumer space, sharing the viewing of a portion of a movie would be as valuable as sharing a YouTube stupid pet trick—maybe even more so depending on your pet-trick tolerance.

Building upward from multi-tenancy toward personalization/specialization is what MANO should and must provide for, and we have to ensure that we don’t get trapped in the orchard of low apples here.  If we do, then we risk having NFV run out of gas.

HP Grabs a Potential Lead in the Greatest IT Race

We’re obviously in a period of transformation for computing and networking, and it’s equally obvious that HP is intent on improving its position in the market through this transition.  They’ve made two recent announcements that illustrate what their strategy could be, and it could mean some interesting dynamics in both computing and networking over the next couple of years.

The first step was HP’s acquisition of Aruba Networks, a leading player in the enterprise WiFi space.  If you’ve been following my blog for any period of time you know that I believe that mobile worker empowerment (point-of-activity empowerment) is critical for enterprises to take the next step in productivity enhancement.  That, in turn, is critical in driving an uptick in technology spending.  We’ve had three periods in the past when IT spending has grown faster than GDP, and each was driven by a new productivity dynamic.  This could be the next one in my view.

WiFi is important to this developing empowerment thesis for two reasons.  First, all workers who are potential targets for mobile empowerment spend some of their time on company premises where WiFi is available, and over 75% spend nearly all their mobile time on-prem.  That means that you can hit a large population of workers with a WiFi mobile productivity approach.  Second, the cost of WiFi empowerment is lower than empowerment via 4G, and you can always spread a WiFi strategy over 4G if it proves useful to do so.

Mobile productivity obviously means getting to the worker, so there has to be a communications ingredient, and that’s likely the basis for the HP move with Aruba.  It also requires a new computing model, something that turns applications from driving the worker through fixed workflows to responding to work events as they occur.  This new computing model is very suitable for cloud implementation, public or private, because it’s highly dynamic, highly agile.  Given that HP wants (like everyone else) to be a kingpin of the cloud, it makes sense to be able to link a cloud story with a WiFi/empowerment story.
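
As a rough sketch of that contrast, with entirely assumed event names, the snippet below responds to a work event rather than stepping the worker through a fixed workflow:

```python
# Minimal sketch of event-driven worker empowerment; event names and fields are assumptions.

from dataclasses import dataclass

@dataclass
class WorkEvent:
    worker_id: str
    kind: str        # e.g. "arrived_on_site", "part_scanned"
    context: dict

def handle_event(event: WorkEvent) -> str:
    """The response is assembled per event, on demand, rather than following a fixed script."""
    if event.kind == "arrived_on_site":
        return f"Push job sheet and site history to {event.worker_id}"
    if event.kind == "part_scanned":
        return f"Check stock and compatibility for {event.context.get('part')}"
    return "No action"

print(handle_event(WorkEvent("tech-19", "arrived_on_site", {"site": "substation-4"})))
```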

The second announcement involves NFV, which may seem a strange bedfellow for worker WiFi and empowerment but isn’t.  I’ve commented before that my model says that optimum NFV deployment would create the largest single source of new data centers globally, and could create the largest source of new server deployments too (the model can’t be definitive on that point yet).  That’s certainly enough reason for a server giant like HP to want to have an NFV position.  Now, HP is announcing what’s perhaps the most critical win in the SDN/NFV space.

Telefonica has long been my candidate for the most innovative of the major network operators, and they’ve picked HP to build their UNICA infrastructure, the foundation for Telefonica’s future use of NFV and SDN.  I think Telefonica is the thought leader in this space, the operator who has the clearest vision of where NFV and SDN have to go in order for operators to justify that “optimum NFV deployment” that drives all that server opportunity.  They are very likely going to write the book on NFV.

And HP is now going to be taking dictation, and in fact perhaps being a contributing author.  HP is one of those NFV players who have a lot more substance than visibility (as opposed to most NFV vendors, who are all sizzle and no substance).  I’ve seen a lot of their material and it’s by far the most impressive documentation on NFV available from anyone.  That suggests HP has a lot more under the hood to write about, even if they haven’t learned to sing effectively yet.

There are dozens of NFV tests and trials underway, most of which are going to prove only that NFV can work, not that NFV can make a business case.  Operators are now realizing that and are working to build a better business case (as I’ve reported earlier) but many of the trials are simply not going to be effective in doing that.  The ones most likely to succeed are the ones sponsored by operators who understand the full potential of NFV and SDN and who are supporting tests of the business case and not just the technology.  Who, more than Telefonica as the NFV thought leader, can be expected to do that?

HP has just sat itself at the head of the NFV table because they’re linked with the operator initiative most likely to advance NFV’s business case.  And everyone in the operator community knows this in their heart; I told a big US Tier One two years ago that Telefonica was the most innovative operator and they didn’t even disagree.  So imagine the value of working with an operator of that stature on defining the best model to meet the NFV business case.  It just doesn’t get any better.

Well, maybe it does.  NFV is not a “new” technology; it’s an evolution of the cloud that adds a dimension of management and dynamism to cloud infrastructure.  The lessons of NFV can be applied to cloud computing and thus can be applied to mobile productivity.  For network operators, cloud computing services that are aimed at the mobile worker’s productivity would be far more profitable and credible than those aimed elsewhere.  WiFi and 4G integration with dynamic applications, created and managed using NFV tools, could be the rest of that next-wave business case that could drive the next tech growth cycle.  With proper exploitation of Aruba and an interweaving of NFV tools honed in the Telefonica deal, HP could build a new compute model.

The operative word as always is “could”.  HP has thrown down a very large gauntlet here, one that broad IT rivals like Cisco, IBM, and even Oracle can hardly ignore.  They’ve also put NFV players like Alcatel-Lucent on notice.  They’ve made a lot of enemies who will be eager to point out any deficiencies in the HP vision.  And deficiencies in vision are in the eyes of the beholder in a very literal sense.  HP has, like many companies, failed to promote itself to its own full potential.  That may not have mattered in a race where all the runners are still milling around the starting line.  If you make your own start, clean and early, you darn sure want to make sure everyone in the stands knows you’re running, and leading.  The question for HP now is whether they can get that singing voice I’ve mentioned in order, and make themselves as known as they are good.