Climbing the Benefit Ladder Above SDN, NFV, and the Cloud

Network Functions Virtualization (NFV) is one of several technologies that operators are hoping will improve their profit on infrastructure investment.  NFV itself was launched to reduce capex by substituting generic hosted functions for embedded-appliance-based functions.  NFV’s benefit expectations have evolved since to include, and even emphasize, operations efficiency and service agility.

The evolution of expectations doesn’t necessarily drive a collateral evolution of capability, as I’ve noted in the past.  Last year operators told me that none of their trials of NFV had proved a full business case for deployment.  Early this year they said that they were integrating more operations practices and processes into the trials, and most were hopeful this would resolve the benefit issues.  Even though it’s only the end of April, they’re still evolving their view of NFV and I think it’s interesting to see where it’s headed.

The most significant point I’ve learned is that about 80% of operators’ NFV trials are characterized by the operators themselves as “proof of technology, not benefits”.  This isn’t a return to last year’s 100% “my trials won’t prove a business case,” but it does seem pretty clear that hopes that adding scope to current PoCs and trials would justify deployment haven’t yet been realized.

A couple of operators were very candid in their comments.  The problem, they say, is that the trials aren’t really touching operations at all.  Vendors, who in fairness are probably influenced by the ETSI vision of management and operations integration, have promoted what can be called the “virtual device” model of management.  Virtual functions, under this model, are managed by adding management components that mimic the management interfaces and behaviors of the original devices.

This seems very logical on the surface.  If you want to validate NFV technology you need to contain the impact on surrounding aspects of your network and business or you end up with a “trial” that includes everything.  The challenge is that if you are mimicking current device management, then it’s hard to demonstrate much in the way of operations efficiency gains.  In fact, you’re likely to create additional operations issues.

Early trials of the virtual device model show that you can manage a virtual device through existing interfaces, with existing tools, only to a point.  There is a kind of border you’ll need to cross in this situation—the border where virtual functions are hosted on or connected through real resources.  The management of those resources tends, in early NFV trials, to be separate from the management of the virtual functions.  The challenge, according to operators, is that separation means that resource management in addition to function management is needed, and problem resolution across the border is more difficult than expected.

A few of the operators attribute all of this to a lack of service lifecycle management focus.  In order to assess NFV benefits, you’d have to be able to test NFV from the conception of a service to the realization as a product to be sold and paid for.  Three quarters of trials, according to operators, fail to provide any such scope and so it’s difficult to assess what the total cost and total agility-driven revenue benefit might be.

Most operators now seem to believe that the problem isn’t NFV per se, but the fact that NFV has to be fit into a larger service revolution.  “I’m not interested in building my business around NFV,” said one, “but I’m very interested in building NFV into my business.”  The challenge for operators is that while there is an NFV architecture (even if it’s operationally imperfect or at least not validated), there’s nothing above it for them to play with.

What I see now is something like the “transformation” age of operators eight or ten years ago.  At that time they were all looking at business model transformation aided by technology.  I looked back over the presentations made at that time and found striking similarities with the presentations on current operator goals for building that mysterious layer above NFV (and SDN).  Nothing much came of those old adventures, of course, and that has a lot of operators worried.  They need something complete and effective in play within two years on average, and they’re not only unsure where it will come from, they aren’t confident they can describe it fully to prospective sellers.

There are people who see this as a failure of NFV, even within the operator community.  About a quarter of Tier Ones seem to have scaled back considerably on their NFV expectations.  I’ve had my own doubts about the scope of the ETSI work; I’ve argued from the first that the limitations in scope risked isolating NFV from most of the benefit sources.  I still feel a broader model would have been better, but I have to admit that it would have taken longer to do and in the end might not have accomplished any more than the ETSI ISG has to date.

So what’s the problem?  I have a sense of inevitability here, I guess.  The constriction of profits between a falling revenue-per-bit line and a slower-falling cost-per-bit line is a systemic problem with roots that go way beyond network technology and operations or business practices.  It may not be possible to solve it completely, and even some operators now admit that.  Regulators may have to accept the very kind of consolidation they rejected in blocking the Comcast/Time Warner deal.  Users and OTT players may have to accept that there will not be continued improvement in speed and quality, and that in fact congestion online may become the rule.

That’s what these new high-level visions are hoping to avoid.  A bit less than half the operators seem to have at least skunk-works projects underway to advance a new service architecture at the highest level.  In a goal sense, most of these new architectures aren’t demanding NFV or SDN or the cloud, but they are all defining objectives it would be hard to meet without all three.  In fact, what these operators seem to be creating is a kind of Unified Field Theory for networking that harmonizes all three.

For vendors this poses an enormous risk and opportunity at the same time.  Much of the work involved in PoCs and NFV trials up to now isn’t going to pay off in direct deployment.  Much of the work needed to drive significant network transformation will have to take place outside the NFV, SDN, and cloud processes.  But remember that about 20% of trials are considered to be making useful progress.  We do have NFV vendors who are successfully (if, in operator views, too slowly) expanding their scope to grasp at the borders of NFV and whatever is above it.  This is where big vendors will have the advantage, because they’re going to have to take a big bite of complexity to get a big bite of benefits.  And only a big benefit case is going to transform networking.

Apple, iDevices, and the New Age of the Cloud

Apple “crushed estimates” according to the headline of a financial website, and they surely did.  In fact Apple turned in what was perhaps the first unabashedly great quarter any tech company has posted in the current earnings season.  iPad sales were slightly below estimates and some analysts thought the outlook was less positive than the current quarter, but other than that it was beer and roses.

The obvious question is whether they can keep it up.  This is important not only for Apple but for the industry, because if Apple is the face of success then we have to reexamine some of the cherished tech illusions we’ve been reading about.

The Driving Principle of Our Age is that of virtualization.  Computer power is cheap and getting cheaper, and its expansion into every aspect of our lives isn’t limited by capital cost but by support or operations.  We have to turn the world into as-a-service because the masses can’t be expected to be computer gurus.  In this world-view, we should be seeing a dumbing down of local intelligence, a shift toward devices being on-ramps to the cloud.

Which is hardly consistent with Apple’s vision.  Apple has three basic value propositions.  One is that their stuff is cool and their users enviable.  That’s a given.  The second is that “their stuff” is something you buy and hold.  It’s not virtual, it’s not hosted, it’s not something that is really created on some nameless server somewhere, because that anonymity would make everyone else Apple’s equal.  Their game is devices.  The third proposition is that sought-after experiences are atomic.  Users want something, and that something isn’t much related to other somethings the user wanted or will want.  We live in the moment.

Amazon tried to beat Apple at its own game, with the Kindle Fire and their phone, and it didn’t work.  It’s very hard to unseat a champion if you agree to abide by all their rules of engagement, after all.  Google bought a phone company, arguably, to try their own hand and that didn’t work either.  So what would work?  Something that eats away at those basic value propositions, and nothing would do that better than a shift to the cloud.

Amazon is the cloud giant, the king of virtualization.  They have the tools to make an experience virtual, not tied to cool devices.  Such a move would hit at one of Apple’s critical foundations.  Google, with Fi, is now taking a mobile service and building layers on it in the cloud and not in the device.  Their initial Nexus-6-only offering is weak.  Could they make their next supported handset an iPhone, perhaps?  Suppose they said that any iPhone, even an older model, could be used with Fi.  Would that undermine an Apple proposition?

Alcatel-Lucent may be aiming at the third proposition, through service-provider proxies.  Their new Rapport promises contextual services, or at least an early form thereof.  The more intelligence you draw into giving users what they want, the more costly it is to store all that stuff in the handset or deliver it there for analysis.  The network knows, remember?  Operators have wanted to break the hold phone providers have on mobile services.  Empowered operators, particularly operators inspired by Google’s Fi, might do that.

Context is also the logical solution to the first of Apple’s propositions, the cachet.  Every user may strive to display that Apple logo, but even more than that they’d like to display “Her”, the artificial-intelligence companion that sees all, knows all, and when she (or he) speaks draws admiring glances.  A companion shares experiences, shares context, which is why we automatically expect smart devices to know what we’re seeing or doing.  Can a phone do that sort of thing?  With network help, yes.  A network can do it with nothing more from a phone than a conduit to the user.

The industry, for a variety of reasons, is moving beyond traditional networking and IT and into a new age, an age where context and personal assistants are inevitable.  I think that the signs are already visible.  Yes, Amazon has fumbled its own ball.  Yes, Google’s Fi is tentative, a wisp of what it could be.  Alcatel-Lucent isn’t marketing Rapport to the stars either, and operators confronting a profit crisis and a technology (NFV) that promises to support agile services are instead trying to use it to create the same old crap they’ve sold forever.  There will always be under-realization of every new industry trend, but eventually mass pays off.  An industry-wide profit-starvation trend has a lot of mass behind it, and urgency besides.

The question for Apple is whether they see this or not.  It’s perfectly possible that Cook and his clan have already laid out the Cool Answers to the context-and-services future, that the Apple Cloud offering will eventually just awe everyone into submission.  They’re milking their current model while they can, and will spring when it’s necessary.  It’s also possible that Apple has stuck its head in the (silicon-based) sand and will hold onto the past too long, like so many others.

The question for competitors is the same, the “do-they-see” question. It’s possible to make a better phone or tablet than Apple, but nobody is going to make a phone or tablet superior enough to overcome Apple’s advantage.  To beat Apple you have to write new game rules, rules that favor new innovators.  Amazon and Google and even the network operators, perhaps through NFV or Alcatel-Lucent’s Rapport, have a chance.  They have a chance to undermine the handset.

And to exalt what, exactly?  That may be the problem.  The network isn’t the value proposition for the future; it’s getting commoditized just as fast as anything else.  If the cloud is the future, what exactly is the cloud?  Servers are commoditizing.  Is it software?  Can we earn massive profits from software alone?  What is the engine to create the Big Win that justifies the Big Risk?

If as-a-service is the future then services are the goal.  We will likely spend less on hardware and software to support centralized cloud services of any type than we’d spend if we hosted our gratification on local devices.  The next level of disintermediation may be aimed less at the network operators and more at the vendors.  It won’t kill the market for computers or devices, but it will surely help commoditize it.  Unless the vendors start thinking about how they can be as-a-service players too.  Does that sound a lot like the dilemma the Internet posed for operators decades ago?  It does to me.

Stepping Beyond the Cloud as We Know It

There are few who doubt that we are in the Internet Age.  Few doubt we’re entering the Cloud Age and maybe even the SDN/NFV Age, but I wonder whether there’s broad understanding that the cloud and related technologies like SDN and NFV are going to be as transformative as the Internet was.  When the Internet first developed, nobody saw what it would become.  We’re just now starting to see the signs of what might come next.

Our biggest news item last week was Amazon’s first break-out of cloud earnings.  The company reported about $5 billion in cloud service sales and a $6 billion run rate.  If you give Amazon about 28% of the IaaS/PaaS cloud market, that sizes that cloud market at $18 billion, which is about 1.8% of current IT spending.  More significant to financial analysts was that Amazon reported a profit of about $1 billion on the cloud.
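Just to make the arithmetic behind that framing explicit, here’s a minimal sketch using only the figures cited above; the IT-spending baseline is implied by those figures rather than independently sourced:

```python
# Back-of-the-envelope sizing from the approximate figures cited above ($B).
aws_annual_sales = 5.0      # roughly $5 billion in reported cloud service sales
aws_market_share = 0.28     # assumed ~28% share of the IaaS/PaaS market

iaas_paas_market = aws_annual_sales / aws_market_share
print(f"Implied IaaS/PaaS market: ~${iaas_paas_market:.0f}B")         # ~$18B

cloud_share_of_it = 0.018   # the ~1.8% of current IT spending noted above
it_spending = iaas_paas_market / cloud_share_of_it
print(f"Implied IT spending baseline: ~${it_spending / 1000:.1f}T")   # ~$1T
```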

I think the most interesting thing about the Amazon number is the way it frames total cloud service sales.  If you believe the cloud will largely displace private IT, it’s clear there’s a long road ahead.  If you don’t, which I don’t, then you have to examine cloud service opportunity more closely to see where we are now and where we’re heading.  It’s that examination that takes us into the future, into the transformation that just might change everything.

The first point is that SaaS is generally viewed as the larger cloud service segment, but it’s hard to size effectively because hosted services and SaaS services are hard to distinguish.  If you eliminate web hosting, my own estimate is that SaaS currently accounts for about $16 billion in spending, which would make it a titch smaller than the “platform” clouds.  Total cloud computing spending would then be about $34 billion.  Include all hosted services and the spending doubles, which shows that SaaS and the cloud are really extending trends that had been established before.

Online sales and similar adventures by enterprises didn’t displace current IT spending, they augmented it.  What that proves in my view is that we had two possible views of the cloud to choose from when it launched, a substitute for IT or a new opportunity, and we picked the more pedestrian.

The cloud can probably displace only about $240 billion of current IT spending.  Even with that low a target, it’s obvious that we’ve not even reached 14% of likely penetration, which means that public declarations of an Amazon victory are likely optimistic simply on statistics.  Other providers still have a good chance.  That means not only current providers of cloud services, but even new and credible cloud service market entrants.  But while a quarter-trillion isn’t chump change, it’s not transformative either.

What makes things interesting in my view is that right now about a third of the platform (IaaS/PaaS) cloud spending and 20% of the SaaS spending isn’t displacing current IT spending at all, but rather is accretive to it because the cloud is doing stuff that was never done traditionally.  Despite cost-driven targeting, we’ve been witnessing a quiet cloud transformation, a shift from that pedestrian and short-sighted focus to something exciting.  The future cloud opportunity lies more with this new stuff, which for the enterprise is about $800 billion according to my model.  If you go beyond the enterprise into new consumer mobile and NFV services, you add another $1.5 trillion, which gets you into the realm of real money.
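To keep the pieces of this opportunity model straight, here’s a small sketch that simply adds up the estimates from this and the preceding paragraphs; the numbers are my model’s rounded figures, not independent market data:

```python
# Rough aggregation of the cloud opportunity estimates discussed above ($B).
displaceable_it     = 240    # current IT spending the cloud can plausibly displace
new_enterprise_work = 800    # accretive enterprise opportunity (the "new stuff")
mobile_nfv_services = 1500   # new consumer mobile and NFV services

total_opportunity = displaceable_it + new_enterprise_work + mobile_nfv_services
print(f"Total opportunity: ~${total_opportunity / 1000:.1f}T per year")    # ~$2.5T

current_cloud_spend = 34     # IaaS/PaaS plus SaaS estimate from above
print(f"Displacement penetration: ~{100 * current_cloud_spend / displaceable_it:.0f}%")   # ~14%
print(f"Total penetration: ~{100 * current_cloud_spend / total_opportunity:.0f}%")        # ~1%
```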

Amazon has an impressive but not compelling cloud position in the “enterprise cloud” as most would see it today.  They have no real position in the extended enterprise, mobile, or NFV spaces.  That means that if the cloud fully develops and Amazon doesn’t push out of its current focus area or change market share, they’d end up with 28% of what is about a $1 trillion total opportunity.  That’s a lot of growth for them, and investors would have every reason to be happy.

The question is the rest of the cloud opportunity, the roughly $1.5 trillion in mobile/NFV services.  This is the space that the network operators (at least the savvy ones) hope to reap with the “service agility” NFV is supposed to provide.  It’s also the space that Google obviously hopes to capture with its Fi MVNO service.

Put into cloud terms, Fi could be a model to transfer network service value upward out of the network and into the cloud, and then to meld it with MVNO network services to create what the user would see as a new native mobile service.  Google is likely betting that the operators, who could create a tighter linkage between true mobile connection services and Fi-like cloud services through NFV, won’t be able to move far enough or fast enough.  In a way, Google is targeting the biggest disintermediation project since the Internet, where the cloud disintermediates operators from higher-layer service value.

As-a-service activity, virtualization, SDN/NFV, the cloud, whatever you call it, is a generator of “new opportunity” that aggregates to well over $2 trillion in annual revenues.  At least three-quarters of this could be viewed as “natural opportunities” for operators, and all of it would be an opportunity were operators to position their cloud assets properly.  How do we know that?  From Amazon.

Amazon’s profits on AWS are hard to validate because it’s difficult to know the formula the company uses to allocate costs on shared infrastructure.  But we do know that in the cloud overall, the highest profits will likely accrue to the guy with the lowest costs.  Amazon’s enormous scope has made it an economy-of-scale play.  The operators, with NFV, could in theory deploy even more infrastructure than Amazon and do so at a lower expected ROI because of their utility-like internal rates of return.  Financially they could win.
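The economic claim here is simple enough to sketch.  At comparable infrastructure cost, the price floor is set by the return the owner has to earn on that infrastructure; the return targets below are assumptions for illustration, not either party’s actual figures:

```python
# Illustrative price-floor comparison for a hosted service.
# Cost and return-target numbers are assumptions, not reported figures.
annual_cost_per_unit = 100.0       # same infrastructure/operations cost for both players

web_giant_target_return = 0.25     # assumed return a web-scale provider expects
operator_target_return  = 0.10     # assumed utility-like internal rate of return

web_giant_price_floor = annual_cost_per_unit * (1 + web_giant_target_return)   # 125.0
operator_price_floor  = annual_cost_per_unit * (1 + operator_target_return)    # 110.0

discount = 1 - operator_price_floor / web_giant_price_floor
print(f"Operator could price ~{discount:.0%} below the web-scale floor")       # ~12%
```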

We can also draw some insights from the regulatory opposition to the Comcast/TW acquisition that ultimately killed the deal.  Regulators were at least as afraid of the impact on OTT video as they were of the impact on other cable/broadband or telco video/broadband service providers.  That suggests that even in regulatory circles there’s a growing sense that services are above the bit.  If that’s true then Google and Amazon have a shot at the whole pie, which could be huge for them.

This also shows why network equipment is lumpy.  Mobile infrastructure needs a higher-layer boost, so Ericsson is seeing a slowdown.  Future services will be based mostly on software and servers, so F5 saw a boost.  Profitable traffic in the metro, to be supplemented by the cloud, still demands carriage of some sort and Juniper is aiming at that and hoping that somebody with a good specific cloud and NFV story won’t step on them.

I’ve tended to call future applications and services “contextual”, meaning that they exploit the sense of context that mobile users (and humans in general) base their behaviors on.  Call them what you like, but I think that these services, whose total revenue value is over $2.5 trillion per year, represent the pie that everyone has to be looking to slice at the provider level, and that every vendor wants to supply with equipment, software, and professional services.  The question seems how and when to start.

Inside every Tier One is a planner who understands the future.  That’s true for about half the Tier Twos and perhaps a quarter of Tier Threes.  Among the largest enterprises, about half see the future as it is, and the rate of insight drops radically as you move toward the SMBs.  The point is that future-speak is nice if you’re a reporter but it’s not necessarily the path to riches if you’re selling network equipment, software, or services.  There’s always a need to build the future on the present, not destroy the present to get to it.  That means that the status quo will hold a powerful appeal until there’s no way to avoid facing future reality.

We may be getting to that point.  Optical players like Infinera are speaking future-truth already and reaping the rewards.  NFV’s principles are becoming clear even if there is still an unknown amount of work to do on specs, and we’re gaining on the 2017 deadline when operators will need something from NFV to save profits, and may leapfrog remaining standards and issues to get there.

I still see this as a kind of face-off-by-proxy, with Google Fi on one side and Alcatel-Lucent’s Rapport on the other.  Can Google figure out how to build superior higher-layer services on top of an MVNO framework?  If so, then they relegate operators to MVNO hosts at even slimmer margins.  Or can operators use Rapport or NFV or both to build agile service layers, not new ways of doing connection services?

We may have other answers even sooner.  Amazon and Apple can’t let Google own this transformation.  If all Fi represented was an MVNO deal, competitors could sit on the sidelines because the risk is great and the upside isn’t that great.  If Fi is a step toward a multi-trillion-dollar opportunity, nobody dares ignore it.  Apple is particularly vulnerable, but also particularly well positioned with a loyal fan base and a legion of related products.  Once they’ve reported (today), can they then clear the decks and move more decisively into this new age?  That we’ll have to see.

The Tale of Three Vendors

There’s no denying that networking is changing, but different people or companies see the change differently.  For consumers, it’s mostly about replacing wireline phones and maybe cable TV with the Internet and wireless broadband.  For network operators it’s about sliding profits on basic connection/transport services and growing competition from traditional and non-traditional sources.  For network equipment vendors it’s (so they say) about “deferred spending”.

Well, you are what you invest in, and so networking is the sum of network infrastructure.  We’ve had three network vendor quarterly announcements this week, and it’s interesting to try to synthesize reality from their raw data.  That reality might then give us some indication of where the industry is moving, in the broadest sense.

Let’s start with the biggest.  Ericsson had a bad quarter that surprised nearly everyone.  Revenues were up y/y by nearly 13% but their EPS was off and the stock took an 8%-or-more hit as a result.  Ericsson blames sluggish sales to US carriers for their problem, and obviously if your biggest business source is slow you’ll be impacted.  But any seller can say “I’d do better if my customers bought more.”  It’s not a helpful analysis.  Why do sales matter if they’re up 13% y/y?  Because in adjusted currency form they were off by 9% for networks and 2% for services.

Move on now to Juniper.  Of the three companies that have reported so far this week, they turned in the best results from a Street perspective.  Juniper beat on EPS by a cent and on revenue by a similarly small amount.  Their guidance was mid-range, in contrast to the other vendors we’re discussing.  The issue for Juniper is that their results beat estimates but they’re still running behind relative to past quarters.  Year over year, they were off in all three of their product categories (routing, switching, security).  Regionally they were up in EMEA and off elsewhere.

F5 also reported, and while it beat on revenue and EPS, its guidance for revenue was light.  Interestingly, much of their revenue upside was attributable to the very North America/US market that Ericsson said was “sluggish”.  According to them, the problem is exchange issues, meaning foreign currency headwinds.  Their fundamental trends in ADC and security are strong.

When you look at the three together, it’s clear that there is no clear secular trend driving them all.  We’re not seeing economically or systemically driven demand suppression, but rather a shift in spending that in some areas probably represents a decline in perceived operator ROI potential and in others potential for a gain.  Operators are doing what’s profitable, and that’s changing.

A specific point here is Ericsson’s sluggish wireless spending lament.  Wireless has historically been the bright spot in capex, but for the last five to ten years we’ve seen increased pressure on wireless ARPU.  Couple that with the fact that most operators don’t have a large unpenetrated prospect base, and you have a formula for profit stagnation or even decline.  The operators, like vendors, respond by cutting costs (Ericsson plans that, for example).  An operator cutting cost equals an operator with lower capex.  Mobile has fallen from grace, at least relative to its glory days, because it’s not as profitable.

Roaming regulations in the EU and neutrality in the US conspire to increase future risk.  Reductions in roaming charges mean less mobile revenue and (worse to some operators) loss of a means of avoiding churn in a very competitive market.  An operator usually has the best coverage and performance in their home area, and if they have to share their network with competitors even at home and at minimal incremental cost, then they risk competition.  In the US, neutrality rules on mobile could stymie a lot of broadband usage plans, particularly if “content-pays” is an illegal model.

In contrast, pretty much all future service revenue gains are seen as coming from services whose features are hosted in data centers.  It follows that data center equipment and network equipment associated with hosting points would do well, F5’s ADC and security portfolio for example.  Add to that the fact that unlike Ericsson, F5 gets only about a quarter of its revenues from service providers and you see some good reasons why F5 is different.  The stock was off initially on light guidance but popped back with the announcement (expected) of the new CEO (replacing the retiring McAdam).  The pop is more justified in my view based on the fact that there will be a lot more data centers down the line.

Juniper is (as often is the case) a kind of interesting dilemma.  If you look at their trend line relative to the other vendors, their results are worse.  The Street has rewarded them for not being worse than expected.  But in fundamentals Juniper still has strong assets.  Their security stuff is in the top tier for CSP/NSP buyers.  They have good data center switching credentials.  They have less exposure to mobile than Ericsson, meaning that mobile’s slide from grace won’t impact them as much.  They are what CEO Rahim describes as “maniacally focused on IP networking.”  For all the changes in the industry, we still have to push bits.

Overall, I think we’re seeing an industry in transition, and I doubt many disagree.  The view I hold that vendors in particular might not like is that I think the transition is from connection/transport dependency to higher-layer dependency.  F5 won because it was more higher-layer than the others, less exposed to segments that are in decline.

If you know your current business model isn’t working and you know what the future holds, you’d shift on a dime to fund the new.  If you knew the former and not the latter, you’d withhold spending on the old and wait to see what develops.  That’s where I think we are.  Operators know that pushing bits won’t be rewarded, but they don’t know for sure what will.  They currently can see only that hosting and data centers will have a lot to do with it.  So they trim their spending on traditional products and watch for signs of a clear future direction.

For the network vendors, the question is whether that future direction intersects with any path they can hope to take.  Ericsson wants to bet on professional services, but you need a destination before you need a route-planner.  Juniper wants to bet on business as usual, a bet that I think is least likely to pay off in the long run.  F5 wants to bet on the cloud and data center, and that’s the only winning bet available.  Their risk is that NFV and SDN will combine to create a more definitive future path that will subsume their ADC/security mission.  F5 really doesn’t play a convincing role in either.

The situation with these three vendors illustrates the risk Alcatel-Lucent and Nokia face in combining.  If you’re consolidating based on current conditions or even current established trends, you’re shooting behind the duck.  The fundamental problem in networking is benefits to drive new spending.  For operators, that’s revenue from new services.  For enterprises, it’s new productivity gains.  As an industry we’ve come to see offering more bits for less money as a gain; it’s a path to commoditization.  We have to make bits more valuable, and that’s the simple truth that vendors and their customers must all face.

What Hath Google Fi Wrought?

Google has unveiled its long-awaited MVNO offering, Google Fi.  Right now, Fi is in what Google calls “Early Access” so you have to apply for an invite and wait to get it.  It might be worth the wait.  Working in partnership with carriers in over 120 countries (Sprint and T-Mobile in the US), Google has put together a pretty jazzy cellular/WiFi combination that’s integrated with Hangouts (Google Voice) and offers a novel and attractive pricing plan.  It might be a game-changer in the mobile broadband space.  It also might be another DOA concept like Google Wave.

Fi’s pricing is probably the most obvious differentiator and disruptor.  A month’s service starts from a base of $20 for talk/text (improbable as a standalone purchase), plus $10 per gig of data.  The Google plan seems to start with 3 gigs, making the price $50 per month.  You get a rebate for what you don’t use and you can buy additional gigs for ten bucks.  That puts the service price on par with or better than most other prepay plans, and much cheaper than traditional post-pay plans.  Fi is post-pay, so it’s probably a price leader in that space for many users.  With service in over 120 countries at reasonable rates, international travelers might find it especially compelling.
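Based on the pricing just described, here’s a quick sketch of what a monthly Fi bill would look like; the rebate handling follows the description above, and details like proration and taxes are simplified assumptions:

```python
# Sketch of the Fi pricing described above: $20 base for talk/text,
# $10 per gig of data purchased, with a credit back for data not used.
BASE_TALK_TEXT = 20.0   # dollars per month
PER_GB = 10.0           # dollars per gig purchased

def fi_monthly_bill(gb_purchased: float, gb_used: float) -> float:
    data_charge = PER_GB * gb_purchased
    unused_credit = PER_GB * max(gb_purchased - gb_used, 0.0)
    return BASE_TALK_TEXT + data_charge - unused_credit

print(fi_monthly_bill(3.0, 3.0))   # 50.0 -> the $50-per-month example in the text
print(fi_monthly_bill(3.0, 1.6))   # 36.0 -> rebate for the 1.4 gigs not used
```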

Seamless WiFi calling is another plus.  Fi selects the best/cheapest connection option for a given call, so you don’t have to do anything to make a WiFi call other than be somewhere where public WiFi is available.  That works in the US or internationally.  I have to note that there seems to be a conflict between Google’s blog and the Fi pages on how WiFi works.  The blog and broad marketing material suggest it works “…whether in your home, your favorite coffee shop or your Batcave”, which would imply that you can register it on secure WiFi networks, since most home networks at least are secure.  The Fi FAQs say that the WiFi network has to be an open public network without entered security.

Fi is tightly coupled to Google’s current communications framework, once Google Voice and now Hangouts.  When you sign up for Fi with a Google account, the Hangouts options associated with that account are updated to include the Fi handset (a Nexus 6 is all that’s supported initially).  You can make Fi calls using any other device that’s also linked to the account’s Hangouts profile, and receive calls made to the Fi number on any other device as well.

For a lot of users the Fi offering will be pretty significant, but it’s not for everyone.  Unless you happen to have a Nexus 6 you’ll have to wait until your device is supported or buy one; as I’ve noted, no other device is supported for now.  That’s a six-hundred-buck buy-in.  There are no family plans or unlimited data plans either, so people who save a lot with combination plans or who use a lot of data may end up paying more with Fi.  Fi doesn’t pay termination charges either, so switching could be costly even if you can salvage your phone.

The obvious question raised by Fi is whether Google is serious about it, and there’s obviously no answer to that one.  You have a better chance of being able to get Google Fi than Google Fiber, but it’s far from 100% and even if you get it, there’s a chance it might go away.  For “Early Access” read “field trial?”  I suspect that Google is reserving the right to pull the plug during the Early Access period, and even change terms.  I don’t think they’re likely to do either, but it’s possible.

The uncertainty over how serious Google is about Fi extends to competitive responses.  Sprint and T-Mobile are unlikely to jump out to undermine the Google offering since they’re hosting it in the US.  Verizon, AT&T, and the other current MVNOs may wait for a real national offering rather than respond to what’s obviously a trial.  In a pricing/offering sense, in fact, I think that may be likely.  In a feature sense I’m not so sure.

Integrated, seamless roaming between WiFi and cellular is long overdue as a service feature, and Fi will likely accelerate recognition that this is an important feature.  Roaming among operators may also be encouraged, just because Fi could otherwise make a big dent in the international traveler market.  Integration of multiple devices (the “virtual phone number”) is also, I think, a likely outcome of Fi even if Google eventually pulls the plug on it.

What if Fi takes off, though?  AT&T and Verizon will be looking hard at the subscriber stats once the service goes out of its Early Access phase, and at the first indication that there might be serious competition from Fi, I expect these two giants will step in.  Both are experiencing some ARPU erosion for wireless services, in AT&T’s case primarily due to cannibalization by its multi-party plans.  On one hand they don’t want to start a race to the bottom on pricing, but on the other hand they know that 1) they are network operators not MVNOs and so have all the pie rather than a piece, and 2) their low IRR means they could underprice Google if they had to.

What’s underneath Fi may be the important thing.  It’s a service platform, albeit a currently limited one, that rides on a federation of networks.  In many respects it’s a bit of what Alcatel-Lucent’s Rapport could be used to build.  The platform will have to realize whatever goals Google has for building and socializing a revenue ecosystem on top of Fi, and the fact that a conceptual platform competitor appeared a day before the Fi announcement means Google will have to work hard to make Fi even more than it is now.  And that at a time when financial caution may be holding them back.

“Contextual” was Alcatel-Lucent’s tagline and it should be Google’s, but both will have to build some proof points to validate the contextual potential they offer.  There’s limited presence built into Fi through Hangouts.  There’s great potential for building in other such features, and it’s this potential that should be driving Google and striking fear into competing giants like AT&T and Verizon.

Another risk posed by Fi is that mobile services over pure hotspots might emerge, which could create a major price competitor to traditional prepay and post-pay plans.  It’s possible to use smartphones with only WiFi service, but hot-spot-hopping could be limited and difficult.  With Fi you could get enough roaming capability to make WiFi-only a possibility.  Even Google could offer that down the line, and at the least WiFi roaming would likely cap the data pricing competitors would be able to charge.  That would almost guarantee lower ARPU as time passed.

I think the architecture challenge posed by Fi is the most compelling.  Operators have talked a lot about agile services and NFV agility, but few have really thought about creating a consumeristic competitive ecosystem.  My own experience with Verizon’s business voice and residential IP voice was negative enough to push me to another approach, one that has included Google.  You could argue that Google Voice/Hangouts would have made a significant impact had Google pushed legacy adapters for the service and had it been more directed to the mobile user.  Fi fixes the latter, and this may be the factor that forces operators to look at ways to finally build agile services above connectivity.

Alcatel-Lucent Takes a Contextual Route with Rapport

I’m a fan of the notion that the future of communications, in fact of applications, is contextual services.  I’ve used that term to describe applications/services delivered to users/workers in part or whole based on their geographic, social, or other context.  It’s not just a matter of answering a question, but a matter of understanding that question in context and providing a contextually reasonable response.

What’s good for services overall should be good for a given service, or for a framework to support multiple services, including the service of collaboration.  Alcatel-Lucent seems to believe that because they’ve announced a new cloud-based communications platform called Rapport.  They use the term “contextual” in describing it, and they’re right not only with respect to how Rapport works but also how it fits in an evolving network/IT industry.

At a high level, Rapport is a set of tools that integrate communications services into existing applications, documents, or experiences.  Rapport creates a kind of unified communications domain by linking PBX and IP network assets into one pool.  This is done with what Alcatel-Lucent calls “Global Routing”, a layer below “Session Control”.  Open Communications and Collaboration builds on this, and above that you’d have applications like Contact Center, which Alcatel-Lucent provides.

In implementation, it’s probably fair to simplify Rapport as being a tool set to create what’s effectively a UCC-platform-as-a-service framework that’s very extensible both in terms of what it covers and in terms of what it does or can do.  This toolkit can be run in a cloud platform by an enterprise or, I assume, a cloud provider who wants to build services based on it.  It could also be offered as an NFV service set to network operators, which is a nice slant on the way relationships between services and applications should be developed.

To make Rapport work, Alcatel-Lucent has re-architected IMS to be friendlier to web-style application development and more accommodating to application models other than the pure 3GPP vision.  IMS gives Rapport the ability to manage enterprise mobility and session continuity both for mobile devices (BYOD) and for more traditional ones, including handsets and computers.  It’s not the first time that someone has tried to make IMS into something bigger and better, but it may be the most relevant given overall trends in mobility both for workers and consumers.

The notion of creating a UCCPaaS that’s portable across virtually any cloud-suitable platform and can be used both by enterprises and service providers is the greatest strength of Rapport.  This is a good idea in today’s world, where it’s clear that buyers of all sizes want as-a-service offerings but may also want in-house hosting, either as an alternative or perhaps as an endgame with as-a-service as the on-ramp.

The IMS linkage may also be a good idea.  Mobility management is mobility management whether you depend on 3/4G or WiFi, and it’s logical to use what’s proven in the space, particularly when you’re expecting to support the same handsets for enterprise WiFi mobility and cellular mobility.  That’s even true for enterprises, but it’s most compelling for the operators.

The linkage with NFV is also very smart.  Ultimately NFV has to boost operator revenues to deploy optimally, and in many cases perhaps to deploy at all.  There are many different directions operators could take “new services” but they’d certainly be most comfortable with something that involved “communications” in a more traditional sense.  Such an offering would also likely be more credible to buyers.  Rapport is a platform to fulfill the revenue-side NFV benefit case, and if its own APIs are used to enhance service features and even build new offerings, it could be a complete near-term revenue driver.

The biggest upside for Alcatel-Lucent would be that operators started with a UCC-like service and built other service offerings outward from that.  This would create a kind of service ecosystem within NFV, and also perhaps establish the value of having a PaaS substrate to NFV that takes care of some of the messy business of adapting applications to the ETSI model.  I like a more generalized model-driven approach to NFV adaptation myself, but an expansion of Rapport could still be helpful in cutting down on development and also standardizing management practices.

Of course, there are downsides.  My qualifier on IMS (it “may also be a good idea”) is deliberate.  A lot of people will see the IMS dimension as an attempt to validate something Alcatel-Lucent already has and is good at.  Some may even see an IMS link as a chain of the very kind Alcatel-Lucent says Rapport is supposed to break, a tie to the past.  Even if Alcatel-Lucent’s motives were entirely unselfish here, they’ll have to address a skeptical crowd and prove their IMS inclusion is more than self-validation.

The other issue is that while you could do a lot with Rapport, somebody is still going to have to do something more than what’s provided in the initial suite.  Contact center is an important application but it’s not the only one.  I’d have suggested that Alcatel-Lucent bring out at least two applications for Rapport to show that it’s not a one-trick pony.  Three would be better, particularly if one was an open-source application that exploited Rapport’s APIs in the cloud.  That could serve as a model for others to develop even more stuff.

APIs are tricky things on which to base a product offering.  Alcatel-Lucent should know that given that it’s tried to build a service on APIs before with less than spectacular results.  Given that HP is a partner on the enterprise side of Rapport, Alcatel-Lucent should consider playing some ball with those guys to quickly build an inventory of Rapport applications.  That would make the platform more credible.

But such an HP initiative exposes a potential issue.  Rapport for operators is explicitly a cloud offering suitable for use with any NFV platform, but it’s also available for Alcatel-Lucent’s CloudBand.  HP’s OpenNFV is also an NFV platform, a competitor to CloudBand.  In fact, the two vendors have the two most credible large-vendor NFV approaches, but HP has servers and you need servers to have clouds.  With Nokia waiting in the wings, it will be interesting to see how the competition between these two NFV platforms plays out.

IBM: Deep Trouble Beneath Tactical Success

IBM’s earnings are always interesting, and right now they’re downright critical.  First, obviously, IBM needs to show it’s getting back on track or it risks a loss of customer credibility that would quickly become impossible to stem.  But second, IBM is likely a barometer for the pace of change in the IT market.  Big guys always suffer during fast shuffles.

At a high level, IBM was a tactical plus and at least a mild strategic minus.  The company beat slightly on EPS but missed on revenues, which I think is the most critical number.  The Street response was generally favorable given that the Street keys on EPS, but most financial analysts noted the hole in the boat as well.  IBM can succeed by cost management alone for a while, but unless it wants to be bought part and parcel by Lenovo it needs to do more than just stabilize sales, it needs to increase them.

Part of the revenue problem isn’t IBM’s to solve.  The company saw declines in Europe and Asia and in emerging markets where economic conditions were challenging, but they only managed to be flat in major markets.  But broadly their results were troubling because their big gains were in hardware; IBM lost ground in revenues in other segments of their business.  Nobody, even IBM, could possibly see that picture as positive.

What should worry IBM most is their dip in revenues for global business services and technology services, with the former off more sharply.  IBM has kept its place on top of the IT heap largely because they exercised more strategic influence on buyers.  Business services trends are a decent reflection of their ability to sustain that influence, and those trends are off.

Software was also weak, and here the concern is WebSphere, which had in the past shown double-digit gains.  All it could deliver for IBM was 1% growth, and branded software was off overall.  IBM’s development tools (Rational) were off sharply, which suggests IBM is losing the edge in controlling new software creation and enhancement.

What was the hardware gain?  Well, there’s not a lot left but System Z was delivering.  Mainframes are not a growth market, folks.  Buyers who suppressed investment there in doubtful economic conditions were loosening their purse strings but that wasn’t unexpected.  Power systems managed only a small gain even with x86 servers out of the product line.

In their prepared remarks, IBM set what should have been its own tone.  “Our strategy is focused on leading in the areas where we see the most value in enterprise IT.”  Well, is that mainframes?  IBM needed to drive the cloud, SaaS in particular, and carrier cloud most of all.  They did generate 60% growth in cloud revenue (to $7 billion).  They’re pushing Bluemix and Watson successfully in the Enterprise, but from what I can see from my own surveys their success is within the IBM base.  You can’t increase milk production by re-milking the same cow.

Mining the customer base has been a pattern with IBM, and even in the enterprise space their lack of forthright positioning has weakened their ability to influence buyers.  That’s particularly true given that the cloud engages broader constituencies within the enterprise, constituencies that IBM sales doesn’t influence much.  What IBM lost half a decade ago was evangelism.  They need to be able to drive new market opportunities.  In the SMB space that meant x86, which IBM sold, and more application software.  In the cloud space, the opportunity lies with cloud providers in general and with the network operators in particular.

The as-a-service trends that are behind both cloud-SaaS and NFV have enormous potential.  NFV alone, according to my most recent modeling, could produce over 100,000 new data centers (albeit many smaller ones, in central offices) worldwide.  SaaS could generate thousands of additional and larger ones.  These opportunities emerge from a fundamental shift, the kind of shift IBM has in the past embraced when necessary.  The kind they don’t seem to be willing to embrace now.

IBM’s cloud ambitions appear to be taking the form of cloud services to current customers, back to mining the old base.  Not only does that cement them further into their tunnel-vision problem of positioning to the broad market, it directs their cloud initiatives purely at cost savings.  Even the Street admits that if IBM were to transition buyers to the cloud the result would likely be dilutive.

You can’t succeed in IT if you can’t succeed in the biggest incremental data center opportunity in the world, perhaps the largest ever.  That’s NFV, and IBM has consistently underplayed its (actually considerable) assets there.  You could argue that IBM has failed to learn a lesson that HP is rumored to be learning, which is that they will lose more competing with cloud providers than they’ll gain in direct cloud revenue.  IBM may be so enthralled by their 60% growth in enterprise cloud services that they’re losing sight of the enormous pie of hardware/software sales that will accrue in the space.  IaaS is not ever going to be a revenue bonanza for IBM and they have undermined their marketing position to the SMB space most likely to drive SaaS.

In IBM’s prepared remarks, the phrase “service provider” never appears.  Neither does “SDN”, “NFV”, or even “network”.  That suggests that IBM doesn’t appreciate the magnitude of the changes virtualization is driving, or the fact that you can’t lead a buyer to the future by addressing just the steps you find convenient.  Rival Cisco is doing the right thing in the cloud space, engaging with network operators rather than competing with them.  Cisco is also viewing cloud data centers as an ecosystem, including switches and the x86 servers that people want.

It’s possible that IBM sees its own cloud efforts as a means of displacing the commodity x86 stuff that it’s now exited in hardware sales terms.  But even if that’s true, IBM still has to recognize that without software value-add as a revenue kicker, all it would be doing if its cloud plans succeeded would be entering a business with declining margins and selling to a small and static portion of the total opportunity space.  That’s an uncharacteristically short-sighted move.

I have a long history with IBM; I learned programming on an IBM computer 50 years ago and I cherished a notepad with their tagline of the time, “Think.”  I’ve seen them weather more storms than any other tech vendor, seen them prosper when virtually every other computer vendor flagged.  I have to confess confusion here.  IBM has seen the writing on the wall for at least three years and probably for more than five.  Once virtualization raised its head, commodity hardware was the platform, middleware the differentiator, and applications the revenue driver.  With all that time to invest, to develop, to position, what the heck was IBM thinking?

What IBM is going to have to do at this point is buy somebody, perhaps multiple somebodies.  They need core technology in the network, cloud, SDN, and NFV spaces to augment their current capabilities.  More than that, they need somebody who can take fresh and exciting stories to the broad market.  They need to make buyers do what that old notepad of mine challenged us all to do fifty years ago: think.

Can There be Secret Sauce in the Nokia/ALU Deal?

The marriage of Nokia and Alcatel-Lucent is clearly a consolidation.  The question is what the companies see as the end-game.  Consolidation is usually a market response to commoditization, to the loss of pricing power that comes when no meaningful feature differentiation is possible.  Consolidation can also be a step toward taking a leadership position in a new market phase, a way of cleaning up the financials and tidying product lines to align for a new future.  Which is it here?

As a pure consolidation play, the combination of Nokia and Alcatel-Lucent is a reflection of the mobile market, and in particular the trends in 4G RAN.  For a decade, wireless infrastructure capex has been able to sustain itself because wireless has been under less service price pressure than wireline.  That’s changing, and it may change dramatically if regulatory trends in the US and Europe continue.  Loss of roaming premiums and equal application of neutrality would likely be the last straw in making wireless and wireline equivalent in terms of return on infrastructure risk.  Economy of scale would help a vendor in this situation.

But not for long.  Nobody is going to out-price Huawei in the long term, and Ericsson (the other wireless infrastructure leader) is leveraging services and operations effectively to help sustain its position as well.  I don’t think that simply consolidating is going to make the New Nokia a success in the space, much less a leader.

The only defense against commoditization is feature differentiation, and there’s precious little that can be done to differentiate what I’ll call “basic wireless” which means the RAN, IMS, and EPC.  Standards and interoperability have narrowed the range of innovations that can be made in traditional infrastructure.  Which means you have to get un-traditional to differentiate.

There are some basic symbiotic elements in play here.  Nokia has a good agile RAN strategy and strong CSM elements to play with, and Alcatel-Lucent has WAN hardware for mobile networks as well as a good cloud-based IMS and EPC implementation.  The question is whether these will be enough, given that gaining any economies of scale from the merger will surely demand consolidation in the product lines, and any dropping or changing of technologies could put current customers up for grabs.  I think that Nokia will have to look beyond the obvious.

The most obvious opportunity for the New Nokia is to exploit NFV, SDN, and the cloud.  Alcatel-Lucent has the best position in these three spaces of any network equipment vendor, though the company has been (not uncharacteristically) weak in positioning what it can do.  If Nokia could leverage the Alcatel-Lucent assets in these three spaces it could be a player in the new mobile infrastructure revolution.

In a business-politics sense, that’s not going to be easy.  Any big M&A tends to make everyone cautious, both within the companies involved and among the prospects for the companies’ products and services.  This tendency to hunker down would be particularly destructive right now because operators are looking for decisive responses to their own return-on-infrastructure crisis.  Any approach that can’t be validated and initiated at scale within the next year is likely to be too late.

Another business-politics problem is that Nokia has been perhaps the only company to out-fumble Alcatel-Lucent in terms of marketing and positioning.  No matter what the companies say now about how they’ll divide responsibility in the future, the combined business doesn’t have a great pool of serious song-and-dance types to draw from.  And that at a time when singing and dancing are definitely going to be the order of the day, especially after a big M&A.  And especially when the merger that created Alcatel-Lucent in the first place hadn’t really gelled even at the time of the Nokia deal.

The final issue is that Alcatel-Lucent just named (in January) a new head of IP Platforms (Bhaskar Gorti), the unit that runs the company’s critical NFV/SDN/cloud activity.  You’d normally expect this sort of change to be accompanied by some substantive strategy/positioning shifts, and there’s been enough time for some of these to get going.  What happens now?  It’s hard to keep driving change without ever really getting it fully developed.

What does the New Nokia need to do?  I think the first part of the answer is clear; they have to fully position themselves to be an operations integration giant for the age of virtualized infrastructure and as-a-service composition of retail offerings for operators.  They cannot beat Huawei on equipment price anywhere that Huawei can sell, as I’ve said, and they have to compete with Ericsson who has the right tools but is also a vapid positioner of their own assets.  Ericsson’s claim to fame for the future is OSS/BSS, but it’s also their weakness.  Traditional operations isn’t enough for a virtual future.  Alcatel-Lucent’s Gorti knows this, I think, because Oracle (where he came from) realized the ops value in NFV and was positioning to exploit it.

Operations integration for NFV has never been handled optimally by ETSI; they’ve considered it out of scope.  The problem is that everything credible in terms of NFV benefits is derived in part (or totally) from operations efficiency gains.  You can’t even save on capex with NFV if the inevitable increase in complexity that NFV creates eats up your savings.

Within operators, the operations integration issue has also created a face-off between those who think that the old-line OSS/BSS systems are dinosaurs and need to be made extinct so the next wave of technology mammals can emerge, and those who think that OSS/BSS is the base of mammals (hidden for millennia under the cover of dinosaur equipment policies and technologies) that must now emerge and become supreme.  I’ve recounted in prior blogs that these two divergent OSS/BSS visions are often represented within the same operator, at the same meetings.

As it happens, it’s in expressing its operations integration strategy that Alcatel-Lucent has been the least successful in marketing/positioning its NFV story.  It’s not clear that there’s a good approach behind the lack of positioning, so the first order of business for the New Nokia should be to figure out what needs to be done and ensure it’s happening.  The second order is to do a lot of uncharacteristically strong singing and dancing around the story.

This new and good story has to be tied to mobile, of course, and it could be what unifies the troika of NFV, SDN, and cloud.  What do these guys have in common?  Virtualization of course, but more significantly they all have benefit cases that demand extraordinary operations efficiency.  Getting operations, virtualization, and mobile all rolled into a common story won’t be easy, but I don’t see how the New Nokia can avoid pushing to make it work.  Unless they want to watch commoditization continue to eat away at the combined company as quickly as it was eating at the two separately.

The possibility that the New Nokia might launch an effective campaign for SDN, NFV, and the cloud is a problem in itself for competitors, but perhaps a greater one is the fact that a mega M&A event in the industry would be driven largely by mobile considerations.  Neither Cisco nor Juniper has a RAN, nor do they have a strong mobile position.  They have to be wondering whether this M&A is a signal that you have to be in the mobile infrastructure game to be a contender even for M&A consolidation.  Cisco may believe it can ride enterprise IT and carrier evolution even without mobile infrastructure specialization, and they may be right.  Juniper?  I doubt it, so they have to be even more effective in their own NFV/SDN/Cloud positioning than the New Nokia, and that’s going to be hard.

Hard, and ultimately not enough.  A vendor, to have meaningful feature differentiation, has to be aligned with the features that drive the purchases.  What do service providers sell?  Services, obviously, and the most significant thing that’s changing here is the nature of services.  Bits, as I’ve said, will never be really profitable.  No matter what you do to make operations more efficient, you’re only band-aiding the wound that unlimited-usage pricing has already created and will almost certainly continue to create.  You can’t make money selling something with a zero marginal price.  So operators have to move upward, and so do vendors.

SDN and NFV are platforms to create the carrier cloud, and while Alcatel-Lucent has a cloud position (in CloudBand) it’s not ideal because they don’t make servers.  Nokia has to realize that without the automatic seat at the cloud table that servers offer, they have to earn a place.  SDN and NFV can create a fabric for applications and services, but both have to be extended to make that happen.  Interestingly, Nuage has done a lot to provide for the SDN extensions so only NFV remains.  The point is that the New Nokia may stand or fall on how well it exploits Nuage and addresses NFV, and it’s just getting those assets now.  The challenge is obvious.

How to Make Services Agile

Everyone in NFV is jumping on “service agility” as the key benefit, and I don’t disagree that the value NFV could bring to launching new services could be the best justification for deploying it.  Wishing won’t make it so, though, and I don’t think we’ve had enough dialog on how one makes a service “agile”.  So…I propose to start one here.

The first point about service agility is that it’s going to be a lot like software agility, and in particular what I’ll call “functional” or “app” programming.  Traditional software is written by programmers as explicit, line-by-line instructions.  Modular software, a trend that goes back over 40 years, introduced the concept of reusable “modules” that could be dropped into a program to perform a regularly used function.  This was enhanced about 20 years ago by the notion that a software function could be visualized as a “service” to be consumed, and that was the dawn of the Service-Oriented Architecture or SOA.  Today’s web-service and PaaS models (and many SaaS models) are another variant on this.

In all these approaches, we get back to the notion of abstraction.  A programmer consumes a service without knowing anything about it other than its API (application programming interface, meaning the inputs and outputs) and functionality.  The service is a black box, and the fact that all the details are hidden from the programmer means that these services make it easy to do very complicated things.

To me, this is a critical point because it exposes the biggest truth about service creation in an NFV sense.  That truth is that there are two different “creations” going on.  One is the creation of the services themselves, which if we follow software trends are built by assembling lower-level services.  The other is the generation of those lower-level services/abstractions from whatever primitives we have available.  I’ve categorized this in role terms as “service architect” and “resource architect”.

An agile service, I think, is created first by identifying or building those lower-level services/abstractions from the resources upward.  A VPN or a VLAN is an example of an abstraction, but so is “DNS” or “firewall”, or even “HSS”.  Once we have an inventory of this good stuff, we can let service architects assemble them into the cooperative functional systems that we call “services”.

There are a lot of possible slip-ups that can happen here, though.  I’ll illustrate one.  Suppose I need to deploy virtual CPE but I can’t do it everywhere I offer service, so I have “real” CPE as well.  I have two options.  One is to define a low-level service called “CPE” and let that service sort out the difference between virtual and real.  The other is to expose separate “virtualCPE” and “realCPE” services.  Let’s see how that plays out.

If I have a CPE service, then the decision of whether to use cloud principles to host and connect software elements is invisible to the service architect.  The service definition includes only CPE services, and the service architect doesn’t care which is used because the underlying service logic will sort out the provisioning.  On the other hand, if I have virtualCPE and realCPE, the service definition has to know which to use, which means that the details of infrastructure differences by geography are pushed upward to the service level.  That means a much more complicated process of service creation, which I contend means a much less agile one.
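
To make the difference concrete, here’s a minimal Python sketch of the single-abstraction approach.  Everything in it is illustrative rather than taken from any real product: the Site class, the feature list, and the helpers deploy_vnf and configure_appliance (stand-ins for a VIM call and a device-management call) are all hypothetical.  The point is simply that the virtual-versus-real decision lives below the “CPE” abstraction, so no service definition ever mentions it.

# Resource-layer resolution of the abstract "CPE" service.  The service
# architect only ever asks for "CPE"; the hosted-versus-physical choice
# is made here.  All names are invented for illustration.

class Site:
    def __init__(self, name, has_nfvi):
        self.name = name
        self.has_nfvi = has_nfvi   # True where virtual CPE hosting is available

def deploy_vnf(site, image, features):
    # Stand-in for a call to a VIM such as OpenStack.
    return "vCPE '%s' with %s deployed at %s" % (image, features, site)

def configure_appliance(site, model, features):
    # Stand-in for a call to an element/device management system.
    return "physical %s with %s configured at %s" % (model, features, site)

def provision_cpe(site, features):
    """Resolve the abstract 'CPE' request; the difference stays hidden above."""
    if site.has_nfvi:
        return deploy_vnf(site.name, "vcpe-image", features)
    return configure_appliance(site.name, "edge-cpe", features)

print(provision_cpe(Site("metro-east", True), ["firewall", "nat"]))
print(provision_cpe(Site("rural-west", False), ["firewall"]))

If provision_cpe later gains a third hosting option, nothing above it has to change, which is exactly the kind of insulation an agile service needs.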

But even my virtualCPE and realCPE abstractions have value over the alternative, which is to define the services all the way from top to bottom, down to the deployment level.  If I have a pair of abstractions I will have to reflect the decision on which to use into the service orchestration process, but the details of how each is done will stay hidden.  I can provision different CPE, or deploy on different virtual platforms, without changing the service.  That means that changes in real devices or virtual infrastructure are hidden from the service orchestration process.  If I don’t have those abstractions, then any change in what I need to do to deploy (other than simple parameter changes) has to be propagated up to the service definition, which means it ripples into all my service order templates.  No agility there, folks.

The point here is that an agile service has to be agile through the whole lifecycle or it’s not really agile at all.  I cannot achieve that universality without following the same principles that software architects have learned to follow in today’s service-driven world.

If you map this to the current ETSI work and to other NFV activities, you see that it means that things like OpenStack are not enough.  They can (and will) be used to decide how to deploy “virtualCPE”, but I still have to decompose my service into requests for realCPE and virtualCPE somewhere.  Further, if I decide to get smart and abstract two things that are functionally identical into “CPE”, I have created a level of decomposition that’s outside what OpenStack is designed to do.  Could or should I shoehorn or conflate the functionality?  I think not.
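
To illustrate what that extra level looks like, here’s a rough Python sketch of hierarchical decomposition sitting above the VIM.  The model format and names (SERVICE_MODEL, business-vpn, vpn-core) are invented for the example; real modeling approaches (TOSCA, YANG, vendor templates) are richer, but the principle is the same: the “CPE” choice point is resolved per site, and only the leaf that needs hosting ever becomes an OpenStack request.

# A hypothetical service model: each abstract element decomposes into
# lower-level elements, and "CPE" is a choice point resolved per site.
SERVICE_MODEL = {
    "business-vpn": ["vpn-core", "CPE"],   # the service architect's view
    "CPE": ["virtualCPE|realCPE"],         # resolved below the service layer
}

def decompose(element, site_has_nfvi):
    """Expand an abstract element into deployable leaf elements."""
    children = SERVICE_MODEL.get(element)
    if children is None:
        return [element]                   # already a leaf
    leaves = []
    for child in children:
        if "|" in child:                   # a policy decision hidden from the service layer
            child = "virtualCPE" if site_has_nfvi else "realCPE"
        leaves.extend(decompose(child, site_has_nfvi))
    return leaves

# Only "virtualCPE" would be handed to OpenStack; "vpn-core" and "realCPE"
# go to other resource-layer handlers.
print(decompose("business-vpn", True))     # ['vpn-core', 'virtualCPE']
print(decompose("business-vpn", False))    # ['vpn-core', 'realCPE']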

Network resources, technologies, and vendors create a pool of hardware and software—functionality-in-waiting, we might say.  An operator might elect to harness some of this functionality for use by services.  If they don’t, then service definitions will have to dive down to hardware detail, and that creates a service structure that will be a long way from agile, and will also be exceptionally “brittle”, meaning subject to change whenever the nitty-gritty implementation details below change.

Do we want to have every change in infrastructure obsolete service definitions that reference that infrastructure?  Do we want every service created to do direct provisioning of resources, probably in different ways with different consequences in terms of management properties?  Do we want horizontal scaling or failover to be mediated independently by every service that uses it?  Well, maybe some people do but if that’s the case they’ve kissed service agility goodbye.

And likely operations efficiency as well.  Abstraction of the type I’ve described here also creates consistency, which is the partner of efficiency.  If all “CPE” is deployed and managed based on a common definition, then it’s going to be a lot easier to manage the process, and a lot cheaper.

Next time you talk with a purported NFV provider, ask them to show you the service modeling process from top to bottom.  That exercise will tell you whether the vendor has really thought through NFV and can deliver on the benefits NFV promises.  If they can’t do it, or if their model doesn’t abstract enough, then they’re a science project and not an NFV story.

Does the Oracle/Intel Demonstration Move the NFV Ball?

Oracle has started demoing their new NFV/orchestration stuff, and anything Oracle does in the space is important because the company represents a slightly different constituency in the NFV vendor spectrum.  They’re definitely not a network equipment player so NFV isn’t a risk to their core business.  They do sell servers, but that’s not their primary focus.  They are a software player and with their NFV announcement earlier they became the biggest “official” software company in NFV.

The big focus of the Oracle announcement was a partnership with Intel on the Open Network Platform (ONP) initiative.  This is aimed at expanding what can be done with NFV by facilitating the hosting of functions on hardware with the right features.  The demo shows that you can create “sub-pools” within the NFV infrastructure (NFVI) that have the memory, CPU, or other hardware features that certain types of VNF would need.  Oracle’s orchestration software then assigns the VNFs to the right pools to ensure that everything is optimally matched with hardware.
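
Conceptually, the matching step is not complicated.  Here’s a minimal Python sketch of the idea, with made-up pool names and feature tags rather than anything from the actual demo; a real implementation would rely on VIM constructs such as OpenStack host aggregates and flavor extra specs rather than a hand-rolled matcher.

# Invented sub-pool descriptors: each pool advertises the hardware
# features its servers provide.
SUB_POOLS = {
    "general":      set(),
    "dpdk-hi-mem":  {"dpdk", "hugepages"},
    "crypto-accel": {"qat"},
}

def place_vnf(vnf_name, required_features):
    """Pick the least-specialized sub-pool that covers the VNF's needs."""
    candidates = [(len(features), name)
                  for name, features in SUB_POOLS.items()
                  if required_features <= features]
    if not candidates:
        raise ValueError("no sub-pool satisfies %s: %s" % (vnf_name, required_features))
    return min(candidates)[1]

print(place_vnf("virtual-firewall", set()))                  # -> general
print(place_vnf("virtual-router", {"dpdk", "hugepages"}))    # -> dpdk-hi-mem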

There’s no question that you’d like to have as much flexibility as possible running functions as VNFs instead of as physical appliances, but I’m not sure that the impact is as great as Oracle might like everyone to believe.  There are a number of reasons, ranging from tactical to strategic.

Reason one is that this is hardly an exclusive relationship between Oracle and Intel.  Intel’s ONP is available to any vendor, and Intel’s Wind River Titanium (built on open-source components) supports it.  HP, a rival of Oracle’s for NFV traction, is a (or THE) Intel partner with ONP, in fact.  I doubt that any Intel-server-based NFV implementation would not use ONP.

Reason two is that the NFV ISG has for ages called for steering VNFs to servers based on a combination of the VNFs’ needs and the servers’ capabilities.  It’s part of the ETSI spec, and that means that implementations of MANO that want to conform to the spec have to provide for that steering.

Reason three is that right now the big issue with NFV is likely to be getting started, and in early NFV deployment resource pools will not be large.  Subdividing them extensively enough to require that VNF hosting be steered to specialized sub-pools is likely to reduce resource efficiency.  Operators I’ve talked to suggest that early on they would probably elect to deploy servers that had all the features that any significant VNF population needed rather than specialize, just to ensure good resource pool efficiency.
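
A toy calculation illustrates why.  The numbers below are purely illustrative, not operator data: two workload types whose peaks rarely coincide fit within a single shared pool of twelve servers far more often than within two dedicated sub-pools of six, which is the statistical-multiplexing benefit you give up when you specialize a small pool.

import random

random.seed(1)
TRIALS = 100000
shared_ok = split_ok = 0
for _ in range(TRIALS):
    a = random.randint(2, 8)      # servers needed by workload type A at peak
    b = random.randint(2, 8)      # servers needed by workload type B at peak
    if a + b <= 12:               # one shared pool of 12 servers
        shared_ok += 1
    if a <= 6 and b <= 6:         # two specialized sub-pools of 6 servers each
        split_ok += 1
print("shared pool fits demand %.1f%% of the time" % (100.0 * shared_ok / TRIALS))
print("split sub-pools fit demand %.1f%% of the time" % (100.0 * split_ok / TRIALS))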

Then we have the big strategic reason.  What kind of VNF is going to need specialized hardware for performance?  I’d contend that this would likely be things like big virtual routers, or pieces of EPC or IMS or CDN.  These functions are really not “VNFs” in the traditional sense because they are persistent.  I commented in an earlier blog that the more a software function was likely to require high-performance, higher-cost hardware, the less likely it was to be dynamic.  You don’t spin up a multi-gigabit virtual router for an hour’s excursion; you plant it somewhere and leave it there unless something breaks.  That makes this kind of application more like cloud computing than like NFV.

I asked an operator recently if they believed that they would host EPC, virtual edge routers, virtual core switches, etc. on generalized server pools and they said they would not.  The operator thought that these critical elements would be “placed” rather than orchestrated, which again suggests a more cloud-like than NFV-like approach.  Given that, it may not matter much whether you can “orchestrate” these elements.

Then there’s the opex efficiency point, which I think is a question of how many such situations arise.  Every user doesn’t get their own IMS/EPC/CDN; they share a common one, generally per metro.  Given that limited deployment, any operations efficiencies generated would be confined to a small number of functional components, and it’s not clear to me how far you could drive the NFV business case on ONP alone.

And service agility?  Valuable services that operators want to deploy quickly are almost certain to be personalized services.  What exactly can we do as part of a truly dynamic service that is, first, personalized for a user and, second, so demanding of server resources that we have to specialize what we host it on?  Even for the business market I think this is a doubtful situation, and for the consumer market, which accounts for most of where operators are now losing money, there is virtually no chance that supersized resources would be used because they couldn’t be cost-justified.

Don’t get me wrong; ONP is important.  It’s just not transformative in an NFV sense.  I’ve shared my view of the network of the future with all of you who read my blog.  It’s an agile optical base, cloud data centers at the top, and a bunch of service- and user-specific hosted virtual networks in between.  These networks will have high-performance elements to be sure, elements that need ONP.  They’ll be multi-tenant, though, and not the sort of thing that NFV has to spin up and tear down.  They’ll probably move more than real routers do, but not often enough to make orchestration and pool selection a big factor.

I am watching Oracle’s NFV progress eagerly because I do think they could take a giant step forward with NFV and drive the market; they have enormous credibility and potential.  I just don’t think that this is such a step.  “Ford announces automobiles with engines!” isn’t really all that dramatic, and IMHO ONP or ONP-like features are table stakes.  What I’m looking for from Oracle is something forward-looking, not retrospective.

In their recent NFV announcement, Oracle presented the most OSS/BSS-centric vision for NFV that any major vendor has articulated.  There is absolutely no question that every single NFV mission or service must have, as its strongest underpinning, a way of achieving exceptionally good operations efficiency.  Virtualization increases complexity and complexity normally increases management costs.  We need to reduce them, in every case, or capex reductions and service agility benefits won’t matter because they’ll either be offset or impossible to achieve.  Oracle’s biggest contribution to NFV would be to articulate the details of OSS/BSS integration.  That would truly be a revolutionary change.

As an industry, I think we have a tendency to conflate everything that’s even related to a hot media topic into that topic.  Cloud computing is based on virtualization of servers yet every virtualized server isn’t cloud computing.  Every hosted function isn’t NFV.  I think that NFV principles and even NFV software could play a role in all public cloud services and carrier virtualization of even persistent functions, but I also think we have to understand that these kinds of things are on one side of the requirements spectrum and things like service chaining are on the other.  I’d like to see focus where it belongs, which is where it can nail down the unique NFV benefits.