Does the Oracle/Intel Demonstration Move the NFV Ball?

Oracle has started demoing their new NFV/orchestration offerings, and anything Oracle does in the space is important because the company represents a slightly different constituency in the NFV vendor spectrum.  They’re definitely not a network equipment player, so NFV isn’t a risk to their core business.  They do sell servers, but that’s not their primary focus.  They are a software player, and with their earlier NFV announcement they became the biggest “official” software company in NFV.

The big focus of the Oracle announcement was a partnership with Intel on the Open Network Platform initiative.  This is aimed at expanding what can be done with NFV by facilitating the hosting of functions on hardware with the right features.  The demo shows that you can create “sub-pools” within the NFV Infrastructure (NFVI) that have the memory, CPU, or other hardware features that certain types of VNF need.  Oracle’s orchestration software then assigns VNFs to the right pools to ensure that everything is optimally matched with hardware.
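
To make the mechanism concrete, here’s a minimal sketch of that kind of feature-based placement.  Every name in it is hypothetical (the sub-pool labels, the feature tags, the place_vnf function); it illustrates the matching logic the demo describes, not Oracle’s or Intel’s actual API.

    # Hypothetical sketch of feature-based VNF placement; not Oracle/Intel code.
    # Each NFVI sub-pool advertises the hardware features its hosts offer
    # (e.g., DPDK-capable NICs, SR-IOV, large-page memory).
    SUB_POOLS = {
        "general":     {"features": set(),                     "free_hosts": 40},
        "fast-packet": {"features": {"dpdk", "sr-iov"},        "free_hosts": 8},
        "big-memory":  {"features": {"hugepages", "high-ram"}, "free_hosts": 4},
    }

    def place_vnf(vnf_name, required):
        """Pick an eligible sub-pool whose features cover the VNF's needs."""
        eligible = [
            (name, pool) for name, pool in SUB_POOLS.items()
            if required <= pool["features"] and pool["free_hosts"] > 0
        ]
        if not eligible:
            raise RuntimeError(f"no sub-pool can host {vnf_name} (needs {required})")
        # Prefer the pool with the fewest surplus features, so specialized
        # capacity isn't consumed by VNFs that don't need it.
        name, pool = min(eligible, key=lambda e: len(e[1]["features"] - required))
        pool["free_hosts"] -= 1
        return name

    print(place_vnf("virtual-firewall", set()))             # -> general
    print(place_vnf("virtual-router", {"dpdk", "sr-iov"}))  # -> fast-packet

The interesting design question isn’t the matching itself, which is trivial, but the preference rule: something has to keep low-demand VNFs off the scarce, expensive sub-pools.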

There’s no question that you’d like to have as much flexibility as possible when running functions as VNFs instead of as physical appliances, but I’m not sure that the impact is as great as Oracle might like everyone to believe.  There are a number of reasons, ranging from tactical to strategic.

Reason one is that this is hardly an exclusive relationship between Oracle and Intel.  Intel’s ONP is available to any vendor, and Intel’s Wind River subsidiary supports it with its Titanium Server platform.  HP, a rival of Oracle’s for NFV traction, is an (or THE) Intel ONP partner, in fact.  I doubt that any Intel-server-based NFV implementation would fail to use ONP.

Reason two is that the NFV ISG has long called for steering VNFs to servers based on a combination of the VNFs’ needs and the servers’ capabilities.  It’s part of the ETSI spec, and that means any implementation of MANO that wants to conform to the spec has to provide that steering.

Reason three is that right now the big issue with NFV is getting started, and in early NFV deployments resource pools will not be large.  Subdividing them extensively enough to require that VNF hosting be steered to specialized sub-pools is likely to reduce resource efficiency.  Operators I’ve talked to suggest that early on they would probably elect to deploy servers that had all the features any significant VNF population needed, rather than specializing, just to ensure good resource pool efficiency.
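
A toy Monte Carlo run shows why small pools punish subdivision.  The numbers below are made up purely for illustration: two VNF classes with fluctuating demand, hosted either on one shared pool or on two specialized sub-pools of the same total size.

    import random

    HOSTS = 20        # total servers in the (small, early-deployment) pool
    TRIALS = 100_000  # Monte Carlo trials; all figures here are illustrative

    def avg_unplaced(split):
        """Average VNF instances per trial that find no server.

        split=None   -> one shared pool of HOSTS servers
        split=(a, b) -> two specialized sub-pools of a and b servers
        """
        total = 0
        for _ in range(TRIALS):
            # Random demand for two VNF classes, averaging 10 instances each.
            d1, d2 = random.randint(5, 15), random.randint(5, 15)
            if split is None:
                total += max(0, d1 + d2 - HOSTS)
            else:
                total += max(0, d1 - split[0]) + max(0, d2 - split[1])
        return total / TRIALS

    print("shared pool, unplaced per trial:", avg_unplaced(None))
    print("two sub-pools, unplaced per trial:", avg_unplaced((10, 10)))

The shared pool absorbs a peak in one class with slack from the other; the split pools can’t, so they strand capacity in exactly the way the operators I’ve talked to fear.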

Then we have the big strategic reason.  What kind of VNF is going to need specialized hardware for performance?  I’d contend that this would likely be things like big virtual routers and pieces of EPC, IMS, or CDN.  These functions are really not “VNFs” in the traditional sense because they are persistent.  I commented in an earlier blog that the more a software function was likely to require high-performance, higher-cost hardware, the less likely it was to be dynamic.  You don’t spin up a multi-gigabit virtual router for an hour’s excursion; you plant it somewhere and leave it there unless something breaks.  That makes this kind of application more like cloud computing than like NFV.

I asked an operator recently if they believed that they would host EPC, virtual edge routers, virtual core switches, etc. on generalized server pools and they said they would not.  The operator thought that these critical elements would be “placed” rather than orchestrated, which again suggests a more cloud-like than NFV-like approach.  Given that, it may not matter much whether you can “orchestrate” these elements.

Then there’s the opex efficiency point, which I think comes down to how many such situations arise.  Users don’t each get their own IMS/EPC/CDN; they share a common one, generally per metro.  Given that limited deployment, any operations efficiencies generated would be confined to a small number of functional components, so it’s not clear to me how much you could drive the NFV business case on ONP alone.
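
The arithmetic here is just Amdahl’s-law-style bounding.  The figures below are mine, purely illustrative, not operator data:

    # Illustrative bound: if ONP-steered elements touch only a small share of
    # operations activity, the overall opex gain is capped at that share.
    onp_share = 0.05         # assumed: 5% of ops effort involves these elements
    per_element_gain = 0.50  # assumed: steering halves that effort
    print(f"overall opex reduction: {onp_share * per_element_gain:.1%}")  # 2.5%

However impressive the per-element gain, a benefit confined to a handful of shared components per metro can’t move the total opex needle very far.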

And service agility?  Valuable services that operators want to deploy quickly are almost certain to be personalized services.  What exactly can we do as part of a truly dynamic service that is, first, personalized for a user and, second, so demanding of server resources that we have to specialize what we host it on?  Even for the business market I think this is a doubtful situation, and for the consumer market, which makes up most of where operators are now losing money, there is virtually no chance that supersized resources would be used, because they couldn’t be cost-justified.

Don’t get me wrong; ONP is important.  It’s just not transformative in an NFV sense.  I’ve shared my view of the network of the future with all of you who read my blog: an agile optical base, cloud data centers at the top, and a bunch of service- and user-specific hosted virtual networks in between.  These networks will have high-performance elements to be sure, elements that need ONP.  They’ll be multi-tenant, though, and not the sort of thing that NFV has to spin up and tear down.  They’ll probably move more than real routers do, but not often enough to make orchestration and pool selection a big factor.

I am watching Oracle’s NFV progress eagerly because I do think they could take a giant step forward with NFV and drive the market; they have such enormous credibility and potential.  I just don’t think that this is such a step.  “Ford announces automobiles with engines!” isn’t really all that dramatic, and IMHO ONP or ONP-like features are table stakes.  What I’m looking for from Oracle is something forward-looking, not retrospective.

In their recent NFV announcement, Oracle presented the most OSS/BSS-centric vision for NFV that any major vendor has articulated.  There is absolutely no question that every NFV mission or service must have, as its strongest underpinning, a way of achieving exceptionally good operations efficiency.  Virtualization increases complexity, and complexity normally increases management costs.  We need to reduce those costs in every case, or capex reductions and service agility benefits won’t matter because they’ll either be offset or impossible to achieve.  Oracle’s biggest contribution to NFV would be to articulate the details of OSS/BSS integration.  That would truly be a revolutionary change.

As an industry, I think we have a tendency to conflate everything that’s even related to a hot media topic into that topic.  Cloud computing is based on virtualization of servers, yet not every virtualized server is cloud computing.  Not every hosted function is NFV.  I think that NFV principles and even NFV software could play a role in all public cloud services and in carrier virtualization of even persistent functions, but I also think we have to understand that these kinds of things sit at one end of the requirements spectrum while things like service chaining sit at the other.  I’d like to see focus where it belongs, which is where it can nail down the unique NFV benefits.