The Tale of Three Vendors

There’s no denying that networking is changing, but different people or companies see the change differently.  For consumers, it’s mostly about replacing wireline phones and maybe cable TV with the Internet and wireless broadband.  For network operators it’s about sliding profits on basic connection/transport services and growing competition from traditional and non-traditional sources.  For network equipment vendors it’s (so they say) about “deferred spending”.

Well, you are what you invest in, and so networking is the sum of network infrastructure.  We’ve had three network vendor quarterly announcements this week, and it’s interesting to try to synthesize reality from their raw data.  That reality might then give us some indication of where the industry is moving, in the broadest sense.

Let’s start with the biggest.  Ericsson had a bad quarter that surprised nearly everyone.  Revenues were up nearly 13% y/y, but EPS was off and the stock took an 8%-or-more hit as a result.  Ericsson blames sluggish sales to US carriers for the problem, and obviously if your biggest business source is slow you’ll be impacted.  But any seller can say “I’d do better if my customers bought more”; that’s not a helpful analysis.  Why does sluggishness matter if sales are up 13% y/y?  Because in currency-adjusted terms they were off 9% for networks and 2% for services.

Move on now to Juniper.  Of the three companies that reported this week, they turned in the best results from a Street perspective.  Juniper beat EPS estimates by a cent and revenue estimates by a similarly small margin.  Their guidance was mid-range, in contrast to the other vendors we’re discussing.  The issue for Juniper is that while their results beat estimates, they’re still running behind past quarters.  Year over year, they were off in all three of their product categories (routing, switching, security).  Regionally they were up in EMEA and off elsewhere.

F5 also reported, and while it beat on both revenue and EPS, its revenue guidance was light.  Interestingly, much of its revenue upside was attributable to the very North America/US market that Ericsson called “sluggish”.  According to F5, the guidance problem is exchange issues (foreign-currency headwinds).  Its fundamental trends in ADC and security are strong.

When you look at the three together, it’s clear that there is no clear secular trend driving them all.  We’re not seeing economically or systemically driven demand suppression, but rather a shift in spending that in some areas probably represents a decline in perceived operator ROI potential and in others potential for a gain.  Operators are doing what’s profitable, and that’s changing.

A specific point here is Ericsson’s sluggish wireless spending lament.  Wireless has historically been the bright spot in capex, but for the last five to ten years we’ve seen increased pressure on wireless ARPU.  Couple that with the fact that most operators don’t have a large unpenetrated prospect base, and you have a formula for profit stagnation or even decline.  The operators, like vendors, respond by cutting costs (Ericsson plans that, for example).  An operator cutting cost equals an operator with lower capex.  Mobile has fallen from grace, at least relative to its glory days, because it’s not as profitable.

Roaming regulations in the EU and neutrality in the US conspire to increase future risk.  Reductions in roaming charges mean less mobile revenue and (worse to some operators) loss of a means of avoiding churn in a very competitive market.  An operator usually has the best coverage and performance in their home area, and if they have to share their network with competitors even at home and at minimal incremental cost, then they risk competition.  In the US, neutrality rules on mobile could stymie a lot of broadband usage plans, particularly if “content-pays” is an illegal model.

In contrast, pretty much all future service revenue gains are seen as coming from services whose features are hosted in data centers.  It follows that data center equipment and network equipment associated with hosting points would do well; F5’s ADC and security portfolio, for example.  Add the fact that, unlike Ericsson, F5 gets only about a quarter of its revenues from service providers, and you see some good reasons why F5 is different.  The stock was off initially on light guidance but popped back with the (expected) announcement of a new CEO, replacing the retiring McAdam.  In my view, the pop is justified more by the fact that there will be a lot more data centers down the line.

Juniper is, as is often the case, an interesting dilemma.  If you look at their trend line relative to the other vendors, their results are worse; the Street has rewarded them for not being worse than expected.  But in fundamentals Juniper still has strong assets.  Their security portfolio is in the top tier for CSP/NSP buyers.  They have good data center switching credentials.  They have less exposure to mobile than Ericsson, meaning that mobile’s slide from grace won’t impact them as much.  They are what CEO Rahim describes as “maniacally focused on IP networking.”  For all the changes in the industry, we still have to push bits.

Overall, I think we’re seeing an industry in transition, and I doubt many disagree.  The view I hold that vendors in particular might not like is that I think the transition is from connection/transport dependency to higher-layer dependency.  F5 won because it was higher-layer than the others, less exposed to segments that are in decline.

If you knew your current business model wasn’t working and you knew what the future held, you’d shift on a dime to fund the new.  If you knew the former but not the latter, you’d withhold spending on the old and wait to see what develops.  That’s where I think we are.  Operators know that pushing bits won’t be rewarded, but they don’t know for sure what will be.  All they can see at the moment is that hosting and data centers will have a lot to do with it.  So they trim their spending on traditional products and watch for signs of a clear future direction.

For the network vendors, the question is whether that future direction intersects with any path they can hope to take.  Ericsson wants to bet on professional services, but you need a destination before a route-planner is of any use.  Juniper wants to bet on business as usual, a bet that I think is least likely to pay off in the long run.  F5 wants to bet on the cloud and data center, and that’s the only winning bet available.  Their risk is that NFV and SDN will combine to create a more definitive future path that subsumes their ADC/security mission; F5 really doesn’t play a convincing role in either.

The situation with these three vendors illustrates the risk Alcatel-Lucent and Nokia face in combining.  If you’re consolidating based on current conditions or even current established trends, you’re shooting behind the duck.  The fundamental problem in networking is finding benefits to drive new spending.  For operators, that’s revenue from new services.  For enterprises, it’s new productivity gains.  As an industry we’ve come to see offering more bits for less money as a gain; it’s a path to commoditization.  We have to make bits more valuable, and that’s the simple truth that vendors and their customers must all face.

Posted in Uncategorized | Comments Off

What Hath Google Fi Wrought?

Google has unveiled its long-awaited MVNO offering, Google Fi.  Right now, Fi is in what Google calls “Early Access” so you have to apply for an invite and wait to get it.  It might be worth the wait.  Working in partnership with carriers in over 120 countries (Sprint and T-Mobile in the US), Google has put together a pretty jazzy cellular/WiFi combination that’s integrated with Hangouts (Google Voice) and offers a novel and attractive pricing plan.  It might be a game-changer in the mobile broadband space.  It also might be another DOA concept like Google Wave.

Fi’s pricing is probably the most obvious differentiator and disruptor.  A month’s service starts from an improbable-alone base of $20 for talk/text, plus $10 per gig of data.  The Google plan seems to start with 3 gigs, making the price $50 per month.  You get a rebate for data you don’t use, and you can buy additional gigs for ten bucks each.  That puts the service price on par with or better than most other prepaid plans, and much cheaper than traditional post-pay plans.  Fi is post-pay, so it’s probably a price leader in that space for many users.  With service in over 120 countries at reasonable rates, international travelers might find it especially compelling.
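To make the arithmetic concrete, here’s a small sketch of how a Fi-style bill might work out under the pricing described above.  The function name, the pro-rata rebate, and the flat overage rate are my own illustrative assumptions, not Google’s published billing logic:

```python
def fi_monthly_bill(data_budget_gb, data_used_gb, base=20.0, per_gb=10.0):
    """Sketch of Fi-style pricing: $20 base for talk/text plus $10 per
    gig of data budgeted.  Unused data is rebated; extra gigs are bought
    at the same $10 rate.  (Illustrative only; actual billing details
    are Google's.)"""
    plan_cost = base + per_gb * data_budget_gb
    if data_used_gb <= data_budget_gb:
        # Rebate the untouched portion of the data budget.
        return plan_cost - per_gb * (data_budget_gb - data_used_gb)
    # Additional gigs beyond the budget are billed at the same rate.
    return plan_cost + per_gb * (data_used_gb - data_budget_gb)

# A 3-gig plan is $50 up front; use only 2 gigs and the bill drops to $40.
print(fi_monthly_bill(3, 2))
```

Under these assumptions, a heavy month (say 4 gigs on a 3-gig plan) would land at $60, which is the same as having bought 4 gigs up front; the plan size mostly matters for cash flow, not total cost.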

Seamless WiFi calling is another plus.  Fi selects the best/cheapest connection option for a given call, so you don’t have to do anything to make a WiFi call other than be somewhere public WiFi is available.  That works in the US or internationally.  I have to note that there seems to be a conflict between Google’s blog and the Fi pages on how WiFi works.  The blog and broad marketing material suggest it works “…whether in your home, your favorite coffee shop or your Batcave”, which would imply that you can register it on secured WiFi networks, since most home networks at least are secured.  The Fi FAQs say that the WiFi network has to be an open public network with no security key.
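Google hasn’t published the selection logic, but as a rough mental model of “best/cheapest option” it amounts to something like the sketch below.  Every name, field, and threshold here is my own assumption for illustration; note how the FAQ’s open-network restriction would show up as a filter on which WiFi candidates are even considered:

```python
def pick_connection(networks):
    """Hypothetical best-connection choice for a call.  'networks' is a
    list of dicts with 'type' ('wifi' or 'cellular'), 'open' (True for
    an unsecured public WiFi network), and 'quality' (0.0 to 1.0).
    Prefer adequate open WiFi (cheapest); otherwise fall back to the
    strongest cellular signal.  Returns None if nothing is usable."""
    usable_wifi = [n for n in networks
                   if n["type"] == "wifi" and n.get("open")
                   and n["quality"] >= 0.5]
    if usable_wifi:
        return max(usable_wifi, key=lambda n: n["quality"])
    cellular = [n for n in networks if n["type"] == "cellular"]
    return max(cellular, key=lambda n: n["quality"]) if cellular else None
```

If the blog’s broader promise (home and coffee-shop WiFi) is the real behavior, the only change would be dropping the `open` filter, which is why the blog/FAQ discrepancy matters more than it might seem.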

Fi is tightly coupled to Google’s current communications frameworks, once Google Voice now Hangouts.  When you sign up for Fi with a Google account, the Hangout options associated with that account are updated to include the Fi handset (a Nexus 6 is all that’s supported initially).  You can make Fi calls using any other device that’s also linked to the account’s Hangouts profile, and receive calls made to the Fi number on any other device as well.

For a lot of users the Fi offering will be pretty significant, but it’s not for everyone.  Unless you happen to have a Nexus 6 (the only supported device at launch), you’ll have to wait for broader device support or buy one, and that’s a six-hundred-buck buy-in.  There are no family plans or unlimited-data plans either, so people who save a lot with combination plans or who use a lot of data may end up paying more with Fi.  Fi doesn’t pay termination charges either, so switching could be costly even if you can salvage your phone.

The obvious question raised by Fi is whether Google is serious about it, and there’s obviously no answer to that one.  You have a better chance of getting Google Fi than Google Fiber, but it’s far from 100%, and even if you get it, there’s a chance it might go away.  For “Early Access”, read “field trial”.  I suspect that Google is reserving the right to pull the plug during the Early Access period, and even to change terms.  I don’t think they’re likely to do either, but it’s possible.

The uncertainty over how serious Google is about Fi clouds the question of competitive responses.  Sprint and T-Mobile are unlikely to jump out to undermine the Google offering, since they’re hosting it in the US.  Verizon, AT&T, and the current MVNOs may wait for a real national offering rather than respond to what’s obviously a trial.  In a pricing/offering sense, in fact, I think that’s likely.  In a feature sense I’m not so sure.

Integrated, seamless roaming between WiFi and cellular is long overdue as a service feature, and Fi will likely accelerate recognition that it’s an important one.  Roaming among operators may also be encouraged, just because Fi could otherwise make a big dent in the international-traveler market.  Integration of multiple devices, the “virtual phone number”, is also, I think, a likely outcome of Fi even if Google eventually pulls the plug on it.

What if Fi takes off, though?  AT&T and Verizon will be looking hard at the subscriber stats once the service goes out of its Early Access phase, and at the first indication that there might be serious competition from Fi, I expect these two giants will step in.  Both are experiencing some ARPU erosion for wireless services, in AT&T’s case primarily due to cannibalization by its multi-party plans.  On one hand they don’t want to start a race to the bottom on pricing, but on the other hand they know that 1) they are network operators not MVNOs and so have all the pie rather than a piece, and 2) their low IRR means they could underprice Google if they had to.

What’s underneath Fi may be the important thing.  It’s a service platform, albeit a currently limited one, that rides on a federation of networks.  In many respects it’s a bit of what Alcatel-Lucent’s Rapport could be used to build.  That platform is what would have to realize any goal Google has of building and socializing a revenue ecosystem on top of Fi, and the fact that a conceptual platform competitor appeared just a day before the Fi announcement means Google will have to work hard to make Fi even more than it is now.  And that at a time when financial caution may be holding them back.

“Contextual” was Alcatel-Lucent’s tagline and it should be Google’s, but both will have to build some proof points to validate the contextual potential they offer.  There’s limited presence built into Fi through Hangouts.  There’s great potential for building in other such features, and it’s this potential that should be driving Google and striking fear into competing giants like AT&T and Verizon.

Another risk posed by Fi is that mobile services over pure hotspots might emerge, which could create a major price competitor to traditional prepay and post-pay plans.  It’s possible to use smartphones with only WiFi service, but hot-spot-hopping could be limited and difficult.  With Fi you could get enough roaming capability to make WiFi-only a possibility.  Even Google could offer that down the line, and at the least WiFi roaming would likely cap data rates competitors would be able to charge.  That would almost guarantee lower ARPU as time passed.

I think the architecture challenge posed by Fi is the most compelling.  Operators have talked a lot about agile services and NFV agility, but few have really thought about creating a consumeristic competitive ecosystem.  My own experience with Verizon’s business voice and residential IP voice was negative enough to push me to another approach, one that has included Google.  You could argue that Google Voice/Hangouts would have made a significant impact had Google pushed legacy adapters for the service and had it been more directed to the mobile user.  Fi fixes the latter, and this may be the factor that forces operators to look at ways to finally build agile services above connectivity.


Alcatel-Lucent Takes a Contextual Route with Rapport

I’m a fan of the notion that the future of communications, in fact of applications, is contextual services.  I’ve used that term to describe applications/services delivered to users/workers in part or whole based on their geographic, social, or other context.  It’s not just a matter of answering a question, but a matter of understanding that question in context and providing a contextually reasonable response.

What’s good for services overall should be good for a given service, or for a framework to support multiple services, including the service of collaboration.  Alcatel-Lucent seems to believe that because they’ve announced a new cloud-based communications platform called Rapport.  They use the term “contextual” in describing it, and they’re right not only with respect to how Rapport works but also how it fits in an evolving network/IT industry.

At a high level, Rapport is a set of tools that integrate communications services into existing applications, documents, or experiences.  Rapport creates a kind of unified communications domain by linking PBX and IP network assets into one pool.  This is done with what Alcatel-Lucent calls “Global Routing”, a layer below “Session Control”.  Open Communications and Collaboration builds on this, and above that you’d have applications like Contact Center, which Alcatel-Lucent provides.

In implementation, it’s probably fair to simplify Rapport as being a tool set to create what’s effectively a UCC-platform-as-a-service framework that’s very extensible both in terms of what it covers and in terms of what it does or can do.  This toolkit can be run in a cloud platform by an enterprise or, I assume, a cloud provider who wants to build services based on it.  It could also be offered as an NFV service set to network operators, which is a nice slant on the way relationships between services and applications should be developed.

To make Rapport work, Alcatel-Lucent has re-architected IMS to be friendlier to web-style application development and more accommodating to application models other than the pure 3GPP vision.  IMS gives Rapport the ability to manage enterprise mobility and session continuity both for mobile devices (BYOD) and for more traditional ones, including handsets and computers.  It’s not the first time that someone has tried to make IMS into something bigger and better, but it may be the most relevant given overall trends in mobility both for workers and consumers.

The notion of creating a UCCPaaS that’s portable across virtually any cloud-suitable platform and can be used both by enterprises and service providers is the greatest strength of Rapport.  This is a good idea in today’s world, where it’s clear that buyers of all sizes want as-a-service offerings but may also want in-house hosting, either as an alternative or perhaps as an endgame with as-a-service as the on-ramp.

The IMS linkage may also be a good idea.  Mobility management is mobility management, whether you depend on 3/4G or WiFi and it’s logical to use what’s proven in the space, particularly when you’re expecting to support the same handsets for enterprise WiFi mobility and cellular mobility.  That’s even true for enterprises, but it’s most compelling for the operators.

The linkage with NFV is also very smart.  Ultimately NFV has to boost operator revenues to deploy optimally, and in many cases perhaps to deploy at all.  There are many different directions operators could take “new services” but they’d certainly be most comfortable with something that involved “communications” in a more traditional sense.  Such an offering would also likely be more credible to buyers.  Rapport is a platform to fulfill the revenue-side NFV benefit case, and if its own APIs are used to enhance service features and even build new offerings, it could be a complete near-term revenue driver.

The biggest upside for Alcatel-Lucent would be that operators started with a UCC-like service and built other service offerings outward from that.  This would create a kind of service ecosystem within NFV, and also perhaps establish the value of having a PaaS substrate to NFV that takes care of some of the messy business of adapting applications to the ETSI model.  I like a more generalized model-driven approach to NFV adaptation myself, but an expansion of Rapport could still be helpful in cutting down on development and also standardizing management practices.

Of course, there are downsides.  My qualifier on IMS (it “may also be a good idea”) is deliberate.  A lot of people will see the IMS dimension as an attempt to validate something Alcatel-Lucent already has and is good at.  Some may even see an IMS link as a chain of the very kind Alcatel-Lucent says Rapport is supposed to break, a tie to the past.  Even if Alcatel-Lucent’s motives were entirely unselfish here, they’ll have to address a skeptical crowd and prove their IMS inclusion is more than self-validation.

The other issue is that while you could do a lot with Rapport, somebody is still going to have to do something more than what’s provided in the initial suite.  Contact center is an important application but it’s not the only one.  I’d have suggested that Alcatel-Lucent bring out at least two applications for Rapport to show that it’s not a one-trick pony.  Three would be better, particularly if one were an open-source application that exploited Rapport’s APIs in the cloud.  That could serve as a model for others to develop even more.

APIs are tricky things on which to base a product offering.  Alcatel-Lucent should know that given that it’s tried to build a service on APIs before with less than spectacular results.  Given that HP is a partner on the enterprise side of Rapport, Alcatel-Lucent should consider playing some ball with those guys to quickly build an inventory of Rapport applications.  That would make the platform more credible.

But such an HP initiative exposes a potential issue.  Rapport for operators is explicitly a cloud offering suitable for use with any NFV platform, but it’s also available for Alcatel-Lucent’s CloudBand.  HP’s OpenNFV is also an NFV platform, a competitor to CloudBand.  In fact, the two vendors have the two most credible large-vendor NFV approaches, but HP has servers and you need servers to have clouds.  With Nokia waiting in the wings, it will be interesting to see how the competition between these two NFV platforms plays out.


IBM: Deep Trouble Beneath Tactical Success

IBM’s earnings are always interesting, and right now they’re downright critical.  First, obviously, IBM needs to show it’s getting back on track or it risks a loss of customer credibility that would quickly become impossible to stem.  But second, IBM is likely a barometer for the pace of change in the IT market.  Big guys always suffer during fast shuffles.

At a high level, IBM was a tactical plus and at least a mild strategic minus.  The company beat slightly on EPS but missed on revenues, which I think is the most critical number.  The Street response was generally favorable, given its focus on EPS, but most financial analysts noted the hole in the boat as well.  IBM can succeed by cost management alone for a while, but unless it wants to be bought piecemeal by Lenovo it needs to do more than just stabilize sales; it needs to increase them.

Part of the revenue problem isn’t IBM’s to solve.  The company lost ground in Europe, Asia, and emerging markets where economic conditions were challenging, but it only managed to be flat in the major markets.  More broadly, the results were troubling because the big gains were in hardware; IBM lost revenue ground in the other segments of its business.  Nobody, even IBM, could possibly see that picture as positive.

What should worry IBM most is their dip in revenues for global business services and technology services; the former off most sharply.  IBM has kept its place on top of the IT heap largely because they exercised more strategic influence on buyers.  Business services trends are a decent reflection of their ability to sustain that influence, and those trends are off.

Software was also weak, and here the concern is WebSphere, which had in the past shown double-digit gains.  All it could deliver for IBM was 1% growth, and branded software was off overall.  IBM’s development tools (Rational) were off sharply, which suggests IBM is losing the edge in controlling new software creation and enhancement.

What was the hardware gain?  Well, there’s not a lot left but System Z was delivering.  Mainframes are not a growth market, folks.  Buyers who suppressed investment there in doubtful economic conditions were loosening their purse strings but that wasn’t unexpected.  Power systems managed only a small gain even with x86 servers out of the product line.

In their prepared remarks, IBM set what should have been its own tone: “Our strategy is focused on leading in the areas where we see the most value in enterprise IT.”  Well, is that mainframes?  IBM needed to drive the cloud, SaaS in particular, and carrier cloud most of all.  They did generate 60% growth in cloud revenue (to $7 billion).  They’re pushing Bluemix and Watson successfully in the enterprise, but from what I can see from my own surveys their success is within the IBM base.  You can’t increase milk production by re-milking the same cow.

Mining the customer base has been a pattern with IBM, and even in the enterprise space their lack of forthright positioning has weakened their ability to influence buyers.  That’s particularly true given that the cloud engages broader constituencies within the enterprise, constituencies that IBM sales doesn’t influence much.  What IBM lost half a decade ago was evangelism.  They need to be able to drive new market opportunities.  In the SMB space that meant x86, which IBM sold, and more application software.  In the cloud space, the opportunity lies with cloud providers in general and with the network operators in particular.

The as-a-service trends that are behind both cloud-SaaS and NFV have enormous potential.  NFV alone, according to my most recent modeling, could produce over 100,000 new data centers (albeit many smaller ones, in central offices) worldwide.  SaaS could generate thousands of additional and larger ones.  These opportunities emerge from a fundamental shift, the kind of shift IBM has in the past embraced when necessary.  The kind they don’t seem to be willing to embrace now.

IBM’s cloud ambitions appear to be taking the form of cloud services to current customers, back to mining the old base.  Not only does that cement them further into their tunnel-vision problem of positioning to the broad market, it directs their cloud initiatives purely at cost savings.  Even the Street admits that if IBM were to transition buyers to the cloud the result would likely be dilutive.

You can’t succeed in IT if you can’t succeed in the biggest incremental data center opportunity in the world, perhaps the largest ever.  That’s NFV, and IBM has consistently underplayed its (actually considerable) assets there.  You could argue that IBM has failed to learn a lesson that HP is rumored to be learning, which is that they will lose more competing with cloud providers than they’ll gain in direct cloud revenue.  IBM may be so enthralled by their 60% growth in enterprise cloud services that they’re losing sight of the enormous pie of hardware/software sales that will accrue in the space.  IaaS is not ever going to be a revenue bonanza for IBM and they have undermined their marketing position to the SMB space most likely to drive SaaS.

In IBM’s prepared remarks, the phrase “service provider” never appears.  Neither does “SDN”, “NFV”, or even “network”.  That suggests that IBM doesn’t appreciate the magnitude of the changes virtualization is driving, or the fact that you can’t lead a buyer to the future by addressing just the steps you find convenient.  Rival Cisco is doing the right thing in the cloud space, engaging with network operators rather than competing with them.  Cisco is also viewing cloud data centers as an ecosystem, including switches and the x86 servers that people want.

It’s possible that IBM sees its own cloud efforts as a means of displacing the commodity x86 stuff that it’s now exited in hardware sales terms.  But even if that’s true, IBM still has to recognize that without software value-add as a revenue kicker, all it would be doing if its cloud plans succeeded would be entering a business with declining margins and selling to a small and static portion of the total opportunity space.  That’s an uncharacteristically short-sighted move.

I have a long history with IBM; I learned programming on an IBM computer 50 years ago and I cherished a notepad with their tagline of the time, “Think.”  I’ve seen them weather more storms than any other tech vendor, seen them prosper when virtually every other computer vendor flagged.  I have to confess confusion here.  IBM has seen the writing on the wall for at least three years and probably for more than five.  Once virtualization raised its head, commodity hardware was the platform, middleware the differentiator, and applications the revenue driver.  With all that time to invest, to develop, to position, what the heck was IBM thinking?

What IBM is going to have to do at this point is buy somebody, perhaps multiple somebodies.  They need core technology in the network, cloud, SDN, and NFV spaces to augment their current capabilities.  More than that, they need somebody who can take fresh and exciting stories to the broad market.  They need to make buyers do what that old notepad of mine challenged us all to do fifty years ago: think.


Can There be Secret Sauce in the Nokia/ALU Deal?

The marriage of Nokia and Alcatel-Lucent is clearly a consolidation.  The question is what the companies see as the end-game.  Consolidation is usually a market response to commoditization, to the loss of pricing power that comes when no meaningful feature differentiation is possible.  Consolidation can also be a step toward taking a leadership position in a new market phase, a way of cleaning up the financials and tidying product lines to align for a new future.  Which is it here?

As a pure consolidation play, the combination of Nokia and Alcatel-Lucent is a reflection of the mobile market, and in particular the trends in 4G RAN.  For a decade, wireless infrastructure capex has been able to sustain itself because wireless has been under less service price pressure than wireline.  That’s changing, and it may change dramatically if regulatory trends in the US and Europe continue.  Loss of roaming premiums and equal application of neutrality would likely be the last straw in making wireless and wireline equivalent in terms of return on infrastructure risk.  Economy of scale would help a vendor in this situation.

But not for long.  Nobody is going to out-price Huawei in the long term, and Ericsson (the other wireless infrastructure leader) is leveraging services and operations effectively to help sustain its position as well.  I don’t think that simply consolidating is going to make the New Nokia a success in the space, much less a leader.

The only defense against commoditization is feature differentiation, and there’s precious little that can be done to differentiate what I’ll call “basic wireless” which means the RAN, IMS, and EPC.  Standards and interoperability have narrowed the range of innovations that can be made in traditional infrastructure.  Which means you have to get un-traditional to differentiate.

There are some basic symbiotic elements in play here.  Nokia has a good agile RAN strategy and strong CSM elements to play with, and Alcatel-Lucent has WAN hardware for mobile networks as well as a good cloud-based IMS and EPC implementation.  The question is whether these will be enough, given that gaining any economies of scale from the merger will surely demand consolidation in the product lines, and any dropping or changing of technologies could put current customers up for grabs.  I think that Nokia will have to look beyond the obvious.

The most obvious opportunity for the New Nokia is to exploit NFV, SDN, and the cloud.  Alcatel-Lucent has the best position in these three spaces of any network equipment vendor, though the company has been (not uncharacteristically) weak in positioning what it can do.  If Nokia could leverage the Alcatel-Lucent assets in these three spaces it could be a player in the new mobile infrastructure revolution.

In a business-politics sense, that’s not going to be easy.  Any big M&A tends to make everyone cautious, both within the companies involved and among the prospects for their products and services.  That caution would be particularly destructive right now, because operators are looking for decisive responses to their own return-on-infrastructure crisis.  Any approach that can’t be validated and initiated at scale within the next year is likely to be too late.

Another business-politics problem is that Nokia has been perhaps the only company to out-fumble Alcatel-Lucent in terms of marketing and positioning.  No matter what the companies say now about how they’ll divide responsibility in the future, the combined business doesn’t have a great pool of serious song-and-dance types to draw from.  And that at a time when singing and dancing are definitely going to be the order of the day, especially after a big M&A.  And especially when the merger that created Alcatel-Lucent in the first place hadn’t really gelled even at the time of the Nokia deal.

The final issue is that Alcatel-Lucent just named (in January) a new head of IP Platforms (Bhaskar Gorti), who would run the company’s critical NFV/SDN/cloud activity.  You’d normally expect this sort of change to be accompanied by some substantive strategy/positioning shifts, and there’s been enough time for some of these to get going.  What happens now?  It’s hard to keep driving change without ever really getting it fully developed.

What does the New Nokia need to do?  I think the first part of the answer is clear; they have to fully position themselves to be an operations integration giant for the age of virtualized infrastructure and as-a-service composition of retail offerings for operators.  They cannot beat Huawei on equipment price anywhere that Huawei can sell, as I’ve said, and they have to compete with Ericsson, which has the right tools but is also a vapid positioner of its own assets.  Ericsson’s claim to fame for the future is OSS/BSS, but it’s also their weakness.  Traditional operations isn’t enough for a virtual future.  Alcatel-Lucent’s Gorti knows this, I think, because Oracle (where he came from) realized the ops value in NFV and was positioning to exploit it.

Operations integration for NFV has never been handled optimally by ETSI; they’ve considered it out of scope.  The problem is that everything credible in terms of NFV benefits is derived in part (or totally) from operations efficiency gains.  You can’t even save on capex with NFV if the inevitable increase in complexity that NFV creates eats up your savings.

Within operators, the operations integration issue has also created a face-off between those who think that the old-line OSS/BSS systems are dinosaurs and need to be made extinct so the next wave of technology mammals can emerge, and those who think that OSS/BSS is the base of mammals (hidden for millennia under the cover of dinosaur equipment policies and technologies) that must now emerge and become supreme.  I’ve recounted in prior blogs that these two divergent OSS/BSS visions are often represented within the same operator, at the same meetings.

As it happens, it’s in expressing its operations integration strategy that Alcatel-Lucent has been the least successful in marketing/positioning its NFV story.  It’s not clear that there’s a good approach behind the lack of positioning, so the first order of business for the New Nokia should be to figure out what needs to be done and ensure it’s happening.  The second order is to do a lot of uncharacteristically strong singing and dancing around the story.

This new and good story has to be tied to mobile, of course, and it could be what unifies the troika of NFV, SDN, and cloud.  What do these guys have in common?  Virtualization of course, but more significantly they all have benefit cases that demand extraordinary operations efficiency.  Getting operations, virtualization, and mobile all rolled into a common story won’t be easy, but I don’t see how the New Nokia can avoid pushing to make it work.  Unless they want to watch commoditization continue to eat away at the combined company as quickly as it was eating at the two separately.

The possibility that the New Nokia might launch an effective campaign for SDN, NFV, and the cloud is a problem in itself for competitors, but perhaps a greater one is the fact that a mega M&A event in the industry would be driven largely by mobile considerations.  Neither Cisco nor Juniper has a RAN, nor do they have a strong mobile position.  They have to be wondering whether this M&A is a signal that you have to be in the mobile infrastructure game to be a contender even for M&A consolidation.  Cisco may believe it can ride enterprise IT and carrier evolution even without mobile infrastructure specialization, and they may be right.  Juniper?  I doubt it, so they have to be even more effective in their own NFV/SDN/Cloud positioning than the New Nokia, and that’s going to be hard.

Hard, and ultimately not enough.  A vendor, to have meaningful feature differentiation, has to be aligned with the features that drive the purchases.  What do service providers sell?  Services, obviously, and the most significant thing that’s changing here is the nature of services.  Bits, as I’ve said, will never be really profitable.  No matter what you do to make operations more efficient, you’re only band-aiding the wound that unlimited usage has already created and will almost certainly continue to create.  You can’t make money selling something with a zero marginal price.  So operators have to move upward, and so do vendors.

SDN and NFV are platforms to create the carrier cloud, and while Alcatel-Lucent has a cloud position (in CloudBand) it’s not ideal because they don’t make servers.  Nokia has to realize that without the automatic seat at the cloud table that servers offer, they have to earn a place.  SDN and NFV can create a fabric for applications and services, but both have to be extended to make that happen.  Interestingly, Nuage has done a lot to provide for the SDN extensions so only NFV remains.  The point is that the New Nokia may stand or fall on how well it exploits Nuage and addresses NFV, and it’s just getting those assets now.  The challenge is obvious.

Posted in Uncategorized | Comments Off

How to Make Services Agile

Everyone in NFV is jumping on “service agility” as the key benefit, and I don’t disagree that the value NFV could bring to launching new services could be the best justification for deploying it.  Wishing won’t make it so, though, and I don’t think we’ve had enough dialog on how one makes a service “agile”.  So…I propose to start one here.

The first point about service agility is that it’s going to be a lot like software agility, and in particular what I’ll call “functional” or “app” programming.  Traditional software is written by programmers who write specific instructions.  Modular software, a trend that goes back over 40 years, initiated the concept of reusable “modules” that were introduced into a program to perform a regularly used function.  This was enhanced about 20 years ago by the notion that a software function could be visualized as a “service” to be consumed, and that was the dawn of the Service-Oriented Architecture or SOA.  Today’s web-service and PaaS models (and many SaaS models) are another variant on this.

In all these approaches, we get back to the notion of abstraction.  A programmer consumes a service without knowing anything about it other than the API (application program interface, meaning the inputs and outputs) and functionality.  The service is a black box, and the fact that all the details are hidden from the programmer means that these services make it easy to do very complicated things.

To me, this is a critical point because it exposes the biggest truth about service creation in an NFV sense.  That truth is that there are two different “creations” going on.  One is the creation of the services themselves, which, if we follow software trends, are created by assembling lower-level services.  The other is the generation of those lower-level services/abstractions from whatever primitives we have available.  I’ve categorized this in role terms as “service architect” and “resource architect”.

An agile service, I think, is created first by identifying or building those lower-level services/abstractions from the resources upward.  A VPN or a VLAN is an example of an abstraction, but so is “DNS” or “firewall”, or even “HSS”.  Once we have an inventory of this good stuff, we can let service architects assemble them into the cooperative functional systems that we call “services”.

There are a lot of possible slip-ups that can happen here, though.  I’ll illustrate one.  Suppose I have a need to deploy virtual CPE but I can’t do it everywhere I offer service, so I have “real” CPE as well.  I have two options.  One is to define a low-level service called “CPE” and let that service sort out the difference between virtual and real.  The other is to expose separate “virtualCPE” and “realCPE” services.  Let’s see how that plays out.

If I have a CPE service, then the decision of whether to use cloud principles to host and connect software elements is invisible to the service architect.  The service definition includes only CPE services, and the service architect doesn’t care which is used because the underlying service logic will sort out the provisioning.  On the other hand, if I have virtualCPE and realCPE, the service definition has to know which to use, which means that the details of infrastructure differences by geography are pushed upward to the service level.  That means a much more complicated process of service creation, which I contend means a much less agile one.

But even my virtualCPE and realCPE abstractions have value over the alternative, which is to define the services all the way from top to bottom, to the deployment level.  If I have a pair of abstractions I will have to reflect the decision on which to use into the service orchestration process, but the details of how it’s done will stay hidden.  I can provision different CPE, deploy on different virtual platforms, without changing the service.  That means that changes in real devices or virtual infrastructure are hidden from the service orchestration process.  If I don’t have those abstractions then any change in what I need to do to deploy (other than simple parameter changes) would have to be propagated up to the service definition, which means the change would change all my service order templates.  No agility there, folks.
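To make the trade-off concrete, here’s a minimal Python sketch of the single “CPE” abstraction approach.  The class, region names, and deployment calls are all hypothetical illustrations for this blog, not part of any NFV specification or real product.

```python
# Hypothetical sketch of the "CPE" abstraction discussed above.
# Regions where virtual CPE can be hosted (an invented assumption; a
# real implementation would query infrastructure inventory).
VCPE_CAPABLE_REGIONS = {"metro-east", "metro-west"}

class CpeService:
    """A single 'CPE' black box the service architect composes with.

    The service definition references only this abstraction; whether the
    function is realized as a physical appliance or a hosted VNF is
    decided here, below the service layer.
    """
    def __init__(self, region):
        self.region = region

    def deploy(self):
        # The virtual-vs-real decision never reaches the service model.
        if self.region in VCPE_CAPABLE_REGIONS:
            return self._deploy_virtual()
        return self._deploy_real()

    def _deploy_virtual(self):
        # In reality: hand a deployment descriptor to OpenStack/MANO.
        return f"vCPE hosted in cloud pool for {self.region}"

    def _deploy_real(self):
        # In reality: trigger provisioning of a physical appliance.
        return f"physical CPE configured for {self.region}"

# A service order just asks for "CPE"; geographic infrastructure
# differences never propagate up into the service order template.
print(CpeService("metro-east").deploy())   # takes the virtual path
print(CpeService("rural-north").deploy())  # takes the real path
```

The design point is the one made above: a change in how vCPE is hosted alters only the private `_deploy_virtual` logic, not any service definition that references “CPE”.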

The point here is that an agile service has to be agile through the whole lifecycle or it’s not really agile at all.  I cannot achieve that universality without following the same principles that software architects have learned to follow in today’s service-driven world.

If you map this to the current ETSI work and to other NFV activities you see that it means that things like OpenStack are not enough.  They can (and will) be used to carry out the deployment of “virtualCPE”, but I still have to decompose my service into requests for realCPE and virtualCPE somewhere.  Further, if I decide to get smart and abstract two things that are functionally identical into “CPE”, I have created a level of decomposition that’s outside what OpenStack is designed to do.  Could or should I shoehorn or conflate the functionality?  I think not.

Network resources, technologies, and vendors create a pool of hardware and software—functionality-in-waiting we might say.  An operator might elect to harness some of this functionality for use by services.  If they don’t then service definitions will have to dive down to hardware detail, and that creates a service structure that will be a long way from agile, and will also be exceptionally “brittle”, meaning subject to changes based on nitty-gritty implementation details below.

Do we want to have every change in infrastructure obsolete service definitions that reference that infrastructure?  Do we want every service created to do direct provisioning of resources, probably in different ways with different consequences in terms of management properties?  Do we want horizontal scaling or failover to be mediated independently by every service that uses it?  Well, maybe some people do but if that’s the case they’ve kissed service agility goodbye.

And likely operations efficiency as well.  Abstraction of the type I’ve described here also creates consistency, which is the partner of efficiency.  If all “CPE” is deployed and managed based on a common definition, then it’s going to be a lot easier to manage the process, and a lot cheaper.

Next time you talk with a purported NFV provider, ask them to show you the service modeling process from top to bottom.  That exercise will tell you whether the vendor has really thought through NFV and can deliver on the benefits NFV promises.  If they can’t do it, or if their model doesn’t abstract enough, then they’re a science project and not an NFV story.


Does the Oracle/Intel Demonstration Move the NFV Ball?

Oracle has started demoing their new NFV/orchestration stuff, and anything Oracle does in the space is important because the company represents a slightly different constituency in the NFV vendor spectrum.  They’re definitely not a network equipment player so NFV isn’t a risk to their core business.  They do sell servers, but that’s not their primary focus.  They are a software player and with their NFV announcement earlier they became the biggest “official” software company in NFV.

The big focus of the Oracle announcement was a partnership with Intel on the Open Network Platform (ONP) initiative.  This is aimed at expanding what can be done with NFV by facilitating the hosting of functions on hardware with the right features.  The demo shows that you can create “sub-pools” within NFVI that have memory, CPU, or other hardware features that certain types of VNF would need.  Oracle’s orchestration software then assigns the VNFs to the right pools to ensure that everything is optimally matched with hardware.
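As a rough illustration (emphatically not Oracle’s actual orchestration logic), sub-pool steering might look something like this.  The pool names, hardware features, and sizes are invented for the example.

```python
# Invented NFVI sub-pools, each advertising hardware features and size.
pools = {
    "general":   {"features": set(),                     "hosts": 40},
    "dpdk-pool": {"features": {"dpdk", "sr-iov"},        "hosts": 8},
    "himem":     {"features": {"hugepages", "numa-pin"}, "hosts": 4},
}

def place_vnf(vnf_name, required_features):
    """Pick the least-specialized pool that covers the VNF's needs."""
    candidates = [
        (len(meta["features"]), name)
        for name, meta in pools.items()
        if required_features <= meta["features"]
    ]
    if not candidates:
        raise RuntimeError(f"no pool satisfies {required_features}")
    # Preferring the pool with the fewest features keeps scarce
    # specialized hardware free for the VNFs that actually need it.
    _, pool = min(candidates)
    return pool

print(place_vnf("virtual-router", {"dpdk", "sr-iov"}))  # dpdk-pool
print(place_vnf("dns-forwarder", set()))                # general
```

Note the tie to the resource-efficiency point below: a steering policy has to avoid burning specialized sub-pool capacity on undemanding VNFs, which is exactly why small early pools make extensive subdivision unattractive.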

There’s no question that you’d like to have as much flexibility as possible running functions as VNFs instead of as physical appliances, but I’m not sure that the impact is as great as Oracle might like everyone to believe.  There are a number of reasons, ranging from tactical to strategic.

Reason one is that this is hardly an exclusive relationship between Oracle and Intel.  Intel’s ONP is available to any vendor, and Intel’s Wind River open-source Titanium supports it.  HP, a rival of Oracle’s for NFV traction, is a (or THE) Intel ONP partner, in fact.  I doubt that any Intel-server-based NFV implementation would not use ONP.

Reason two is that the NFV ISG has called for VNF steering to servers based on a combination of the VNFs’ needs and servers’ capabilities for ages.  It’s part of the ETSI spec, and that means that implementations of MANO that want to conform to the spec have to provide for the steering.

Reason three is that right now the big issue with NFV is likely to be getting started, and in early NFV deployment resource pools will not be large.  Subdividing them extensively enough to require VNF hosting be steered to specialized sub-pools is likely to reduce resource efficiency.  Operators I’ve talked to suggest that early on they would probably elect to deploy servers that had all the features that any significant VNF population needed rather than specialize, just to ensure good resource pool efficiency.

Then we have the big strategic reason.  What kind of VNF is going to need specialized hardware for performance?  I’d contend that this would likely be things like big virtual routers, pieces of EPC or IMS or CDN.  These functions are really not “VNFs” in the traditional sense because they are persistent.  I commented in an earlier blog that the more a software function was likely to require high performance, higher-cost hardware, the less likely it was to be dynamic.  You don’t spin up a multi-gigabit virtual router for an hour’s excursion, you plant it somewhere and leave it there unless something breaks.  That makes this kind of application more like cloud computing than like NFV.

I asked an operator recently if they believed that they would host EPC, virtual edge routers, virtual core switches, etc. on generalized server pools and they said they would not.  The operator thought that these critical elements would be “placed” rather than orchestrated, which again suggests a more cloud-like than NFV-like approach.  Given that, it may not matter much whether you can “orchestrate” these elements.

Then there’s the opex efficiency point, which I think is a question of how many such situations arise.  Every user doesn’t get their own IMS/EPC/CDN, they share a common one, generally per metro.  It’s not clear to me given that limited deployment that any operations efficiencies generated would be confined to a small number of functional components, how much you could drive the NFV business case on OPN alone.

And service agility?  Valuable services that operators want to deploy quickly are almost certain to be personalized services.  What exactly can we do as part of a truly dynamic service that is first personalized for a user and second, so demanding of server resources that we have to specialize what we host it on?  Even for the business market I think this is a doubtful situation, and for the consumer market that makes up most of where operators are now losing money, there is virtually no chance that supersized resources would be used because they couldn’t be cost-justified.

Don’t get me wrong; ONP is important.  It’s just not transformative in an NFV sense.  I’ve shared my view of the network of the future with all of you who read my blog.  It’s an agile optical base, cloud data centers at the top, and a bunch of service- and user-specific hosted virtual networks in between.  These networks will have high-performance elements to be sure, elements that need ONP.  They’ll be multi-tenant, though, and not the sort of thing that NFV has to spin up and tear down.  They’ll probably move more than real routers do, but not often enough to make orchestration and pool selection a big factor.

I am watching Oracle’s NFV progress eagerly because I do think they could take a giant step forward with NFV and drive the market because they do have such enormous credibility and potential.  I just don’t think that this is such a step.  “Ford announces automobiles with engines!” isn’t really all that dramatic, and IMHO ONP or ONP-like features are table stakes.  What I’m looking for from Oracle is something forward-looking, not retrospective.

In their recent NFV announcement, Oracle presented the most OSS/BSS-centric vision for NFV that any major vendor has articulated.  There is absolutely no question that every single NFV mission or service must have, as its strongest underpinning, a way of achieving exceptionally good operations efficiency.  Virtualization increases complexity and complexity normally increases management costs.  We need to reduce them, in every case, or capex reductions and service agility benefits won’t matter because they’ll either be offset or impossible to achieve.  Oracle’s biggest contribution to NFV would be to articulate the details of OSS/BSS integration.  That would truly be a revolutionary change.

As an industry, I think we have a tendency to conflate everything that’s even related to a hot media topic into that topic.  Cloud computing is based on virtualization of servers yet every virtualized server isn’t cloud computing.  Every hosted function isn’t NFV.  I think that NFV principles and even NFV software could play a role in all public cloud services and carrier virtualization of even persistent functions, but I also think we have to understand that these kinds of things are on one side of the requirements spectrum and things like service chaining are on the other.  I’d like to see focus where it belongs, which is where it can nail down the unique NFV benefits.


Sub-Service Management as a Long-Term SDN/NFV Strategy

For my last topic in the exploration of operator lessons from early SDN/NFV activity, I want to pursue one of vendors’ favorite trite topics: “customer experience”.  I watched a Cisco video on the topic from the New IP conference, and while it didn’t IMHO demonstrate much insight, it does illustrate the truth that customer experience matters.  I just wish people did more than pay lip service to it.

Customer experience management in a service sense is a superset of what used to be called SLA management, and it reflects the fact that most information delivered these days isn’t subject to a formal SLA at all.  What we have instead is this fuzzy and elastic conception of quality of experience, which is the classic “I-know-it-when-I-see-it” concept.  Obviously you can’t manage for subjectivism, so we need to put some boundaries on the notion and also frame concepts to manage what we find.

QoE is different from SLAs not only in that it’s usually not based on an enforceable contract (which, if it were, would transition us to SLA management) but in that it’s more statistical.  People typically manage for SLA and engineer for QoE.  Most practical customer experience management approaches are based on analytics, and the goal is to sustain operation in a statistical zone where customers are unlikely to abandon their operator because they’re unhappy.  That’s a very soft concept, depending on a bunch of factors that include whether the customer was upset before the latest issue and whether the customer sees a practical alternative that can be easily realized.

Sprint and T-Mobile have launched campaigns that illustrate the QoE challenge.  If I believe that some significant percentage of my competitors’ customers (and likely my own as well) are dissatisfied with service but unwilling to go through the financial and procedural hassle of changing, then I’ll make it easy for competitors’ customers to change—even give them an incentive.  Competition is the goad behind customer experience management programs; if your competitor can induce churn then you have a problem no matter what absolute measurements say.

Operators recognize that services like Carrier Ethernet are usually based on recognizable resource commitments, which means that you can monitor the resources associated with the service and not just guess in a probabilistic sense what experience a user has based on gross resource behavior.  In consumer services there are no fixed commitments, and so you have to do things differently and manage the pool.

NFV, according to operators, has collided with both practice sets.  For business services, dynamic resource assignment and automated operations are great, but they introduce new variables into the picture.  With business services, NFV is mostly about deriving service state from virtual resource state.  That’s a problem that can be solved fairly easily if you look at it correctly.  The consumer problem is different because we have no specific virtual resource state to derive from.

What operators would like to avoid is “whack-a-mole” management where they diddle with resource pool behavior to achieve the smallest number of complaints.  That sort of thing might work if you could converge on your optimum answer quickly, and if resource state was then stable enough that you didn’t have to keep revisiting your numbers.  Neither is likely true.

One possible answer that operators are looking at, but have not yet been able to validate in a full trial, is correlating service and resource analytics.  If you have a quirky blip on your resource analytics dashboard, you could presume with fairly low risk of error that service issues at that time were correlated with the blip.  Thus, you could work to remedy the service problems by remediation of the resource blip, even if you didn’t understand full causal relationships.  The barrier to this mechanism is not only that it’s not easy to test the correlations today, it’s not even easy to gather the service-side analytics.  Measuring QoE, you’ll recall from earlier comments, is like measuring “windy”.  It’s in the eye of the beholder.
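The correlation heuristic could be sketched, under deliberately simplified assumptions, as a timestamp-window match between complaint events and resource anomalies.  The timestamps and window size below are invented for illustration.

```python
# Toy version of blip correlation: presume service complaints that
# cluster in time around a resource-analytics anomaly are related to it.
resource_blips = [100, 460]            # seconds when dashboards spiked
complaints     = [102, 105, 300, 462]  # seconds when users complained

WINDOW = 10  # presume correlation within +/- 10s of a blip (invented)

def correlated(complaint_times, blip_times, window=WINDOW):
    """Return the complaints that fall within a window of some blip."""
    return [
        t for t in complaint_times
        if any(abs(t - b) <= window for b in blip_times)
    ]

# Complaints at 102, 105, and 462 line up with blips; 300 does not, so
# it would need some other explanation (or better service-side data).
print(correlated(complaints, resource_blips))  # [102, 105, 462]
```

The sketch also exposes the barrier the operators cite: the `complaints` list is exactly the service-side analytics feed that’s hard to gather when QoE is subjective.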

Most of the operators I’ve talked with are now of the view that NFV management, SDN management, and probably management overall, is going to be driven by the same notions (QoE substitutes for SLA, multi-tenancy substitutes for dedicated, virtualized substitutes for real) into the same path and that they need a new approach.  A few of the “literati” are now looking at what I’ll call “sub-service management”.

Sub-service management says that a “service” is a collection of logical functions/behaviors that are individually set to at least a loose performance standard.  The responsibility of service automation is to get each functional element to conform to its expectations.  Each element is also responsible for contributing a “management view” in the direction of the user, perhaps in the simple form of a gauge whose red-to-green transitions reflect a range from non-conformance to beating the specifications.

If something goes wrong with a sub-service function we launch automated processes to remediate, and at the same time we look at the service through the user-side management viewer to see if something visible has gone bad.  If so, we treat this as a QoE issue.  We don’t try to associate user service processes with resource remediation processes.
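A toy sketch of that division of labor, with every name invented for this blog: each sub-service element owns a gauge, automation remediates element by element, and the user-side management view is simply the worst gauge across the service.

```python
# Illustrative sub-service management: resource-side remediation and the
# user-side QoE view are deliberately decoupled, as described above.
GAUGE_ORDER = {"green": 0, "yellow": 1, "red": 2}

class SubService:
    def __init__(self, name):
        self.name = name
        self.gauge = "green"

    def remediate(self):
        # Placeholder for automated remediation (rehost, rescale, reroute).
        self.gauge = "green"

class Service:
    def __init__(self, elements):
        self.elements = elements

    def user_view(self):
        # The customer-facing gauge is the worst element gauge.
        return max((e.gauge for e in self.elements), key=GAUGE_ORDER.get)

    def run_automation(self):
        # Fix each non-conforming element; no attempt is made to map a
        # user-visible fault back to a specific resource.
        for e in self.elements:
            if e.gauge != "green":
                e.remediate()

svc = Service([SubService("vCPE"), SubService("firewall"), SubService("DNS")])
svc.elements[1].gauge = "red"   # a functional element misbehaves
print(svc.user_view())          # "red": treat as a QoE issue user-side
svc.run_automation()
print(svc.user_view())          # "green" once remediation completes
```

The point of the structure is the one in the text: remediation runs against elements, the QoE view runs against the user, and neither process has to trace causality through the other.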

The insight of sub-service management is that if you aren’t going to have fixed, dedicated, resource-to-service connections with clear fault transmission from resource to service, then you can’t work backwards from service faults to find resource problems.  The correlation may be barely possible for business services but it’s not possible for consumer services because the costs won’t scale.

There are barriers to sub-service management, though.  One is that we don’t have a clear notion of a service as a combination of functional atoms.  ETSI conflates low- and high-level structuring of resources and so makes it difficult to take a service like “content delivery” and pick out functional pieces that are then composed to create services.  And because only functionality can ever be meaningful to a service user, that means it’s hard to present a user management view.  Another is that there is no real notion of “derived operations” or the generation of high-level management state through an expression-based set of lower-level resource states.
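“Derived operations” could be pictured, in a deliberately simplified sketch, as expressions that compute high-level management state from lower-level resource states.  The state names and thresholds here are assumptions, not anything from the ETSI work.

```python
# Invented lower-level resource telemetry.
resource_state = {
    "host-cpu-util": 0.91,
    "vswitch-drop-rate": 0.002,
    "vnf-restarts-5min": 0,
}

# Expression set: each high-level indicator is *derived* from resource
# states rather than being a direct 1:1 fault mapping.
derived = {
    "capacity-risk":  resource_state["host-cpu-util"] > 0.85,
    "delivery-risk":  resource_state["vswitch-drop-rate"] > 0.01,
    "stability-risk": resource_state["vnf-restarts-5min"] >= 3,
}

# The service-level management state is itself an expression over the
# derived indicators.
service_state = "degraded" if any(derived.values()) else "normal"
print(service_state)  # "degraded": only capacity-risk tripped
```

Even this trivial version shows why the notion matters: changing a threshold or adding a telemetry source changes an expression, not the service definitions that consume the derived state.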

I don’t think that it will be difficult to address any of these points, and I think the only reason why we’ve not done that so far is that we’ve focused on testing the mechanisms of NFV rather than testing the benefit realization.  As I’ve said in earlier blogs, the focus of PoCs and trials is now shifting and we’re looking at the right areas.  It’s just a matter of who will come up with an elegant solution first.


What Operators Think about Service-Event versus Infrastructure-Event Automation

I’m continuing to work through information I’ve been getting from operators worldwide on the lessons they’re learning from SDN and NFV trials and PoCs.  The focus of today is the relationship between OSS/BSS and these new technologies.  Despite the fact that operators say they are still not satisfied with the level of operations integration into early trials, they are getting some useful information.

One interesting point clear from the first is that operators see two different broad OSS/BSS-to-NFV (and SDN) relationships emerging.  In the first, the operations systems are primarily handling what we would call service-level activities.  The OSS/BSS has to accept orders, initiate deployment, and field changes in network state that would have an impact on service state.  In the second, we see OSS/BSS actually getting involved in lower-level provisioning and fault management.

There doesn’t seem to be a strong correlation between which model an operator thinks will win out and the size or location of the operator.  There’s even considerable debate in larger operators as to which is best, though everyone said they had currently adopted one approach and nearly everyone thought they’d stay with it for the next three years.  All this suggests to me that the current operations model evolved into existence based on tactical moves, rather than having been planned and deployed.

There is a loose correlation between which model an operator selects and the extent to which that operator sees seismic changes in operations as being good and necessary.  In particular, I find that operators who have pure service-level OSS/BSS models today are most likely to be concerned about making their systems more event-driven.  Three-quarters of all the operators in the service-based-operations area think that’s necessary.  Interestingly, those that do not think so seem to be following a “Cisco model” of SDN and NFV, where functional APIs and policy management regulate infrastructure.  That suggests that Cisco’s approach is working, both in terms of setting market expectations and in fulfilling early needs.

The issue of making operations event-driven seems to be the technical step that epitomizes the whole “virtual-infrastructure transition”.  Everyone accepts that future services will be supported with more automated tools.  The question seems to be how these tools relate to operations, which means how much orchestration is pulled into OSS/BSS versus put somewhere else (below the operations systems).  It also depends on what you think an “event” is.

Most operations systems today are workflow-based systems, meaning that they structure a linear process flow that roughly maps to the way “provisioning” of a service is done.  While nobody depends on manual processes any longer, they do still tend to see the process of creating and deploying a service as a series of interrupted steps, with the interruption representing some activity that has to signal its completion.  What you might call a “service-level event” represents a service-significant change of status, and since these happen rarely it hasn’t proved difficult to take care of them within the current OSS/BSS model.

The challenge, at least as far as the “event-driven” school of operations people is concerned, lies in the extension of software tools to automatic remediation of issues.  One operator was clear:  “I can demonstrate OSS/BSS integration at the high level of the service lifecycle, but I’m not sure how fault management is handled.  Maybe it isn’t.”  That reflects the core question; do you make operations event-driven and dynamic enough to envelop the new service automation tasks associated with things like NFV and SDN, or do you perform those tasks outside the OSS/BSS?

This is where I think the operators’ view of Cisco’s approach is interesting.  In Cisco’s ACI model, you set policies to represent what you want.  Those policies then guide how infrastructure is managed and traffic or problems are accommodated.  Analytics reports an objective policy failure, and that triggers an operations response more likely to look like trouble-ticket management or billing credits than like automatic remediation.  It’s not, the operators say, that Cisco doesn’t or can’t remediate, but that resource management is orthogonal to service management, and the “new” NFV or SDN events that have to be software-handled are all fielded in the resource domain.

Most operators think that this approach is contrary to the vision that NFV at least articulates, and in fact it’s NFV that poses the largest risk of change.  It’s clear that NFV envisions a future where software processes not only control connectivity and transport parameters to change routes or service behavior, the processes also install, move, and scale service functionality that’s hosted not embedded.  This means that to these operators, NFV doesn’t fit in either a “service-event” model or a “resource-based-event-handling” model.  You really do need something new in play, which raises the question of where to put it.

The service-event-driven OSS/BSS planners think the answer to that is easy; you build NFV MANO below the OSS/BSS and you field and dispatch service-layer events to coordinate operations processes and infrastructure events.  This does not demand a major change in operations.  The remainder of the planners think that somehow either operations has to field infrastructure events and host MANO functions, or that MANO has to orchestrate both operations and infrastructure-management tasks together, creating a single service model top to bottom.
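The service-event camp’s split might be sketched like this, with entirely invented event names: MANO-level handlers absorb infrastructure events below the OSS/BSS, and only service-significant changes ever reach the operations queue.

```python
# Illustrative "MANO below the OSS/BSS" event split; event names and
# handler logic are assumptions made up for this sketch.
INFRASTRUCTURE_EVENTS = {"vnf-failed", "host-overload", "scale-out-done"}
SERVICE_EVENTS = {"service-activated", "service-degraded", "service-ended"}

oss_queue = []  # what the operations systems actually see

def mano_handle(event):
    """Field an infrastructure event; escalate only if service-visible."""
    if event == "vnf-failed":
        redeployed = True  # assume automated remediation succeeded here
        if not redeployed:
            # Only now does anything cross into the service layer.
            dispatch("service-degraded")
    # Other infrastructure events are absorbed silently below the OSS.

def dispatch(event):
    if event in INFRASTRUCTURE_EVENTS:
        mano_handle(event)
    elif event in SERVICE_EVENTS:
        oss_queue.append(event)  # OSS/BSS sees service-layer events only

dispatch("service-activated")
dispatch("vnf-failed")   # remediated below; never reaches operations
print(oss_queue)         # ['service-activated']
```

The alternative camp described next would, in effect, collapse `mano_handle` and the OSS/BSS queue into one orchestrated model rather than keeping them on opposite sides of a boundary.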

I’ve always advocated that view, and so I’d love to tell you that there’s a groundswell of support arising for it.  That’s not the case.  Of all the operators I’ve talked with, only five seem to have any recognition of the value of this coordinated operations/infrastructure event orchestration, and only one seems to have grasped its benefits and how to achieve them.

What this means is that the PoCs and tests and trials underway now are just starting to dip a toe in the main issue pool, which is not how you make OSS/BSS launch NFV deployment or command NFV to tear down a service, but how you integrate all the other infrastructure-level automated management tasks with operations/service management.  This is what I think should be the focus of trials and tests for the second half of 2015.  We know that “NFV works” in that we know that you can deploy virtual functions and connect them to create services.  What we have to find out is whether we can fit those capabilities into the rest of the service lifecycle, which is partly supported by non-NFV elements and overlaid entirely by OSS/BSS processes that are not directly linked with MANO’s notion of a service lifecycle.

I think we may be close to this, and though “close” doesn’t mean “real close”, I think that the inertia of OSS/BSS is working in favor of keeping service events and infrastructure events separated and handling the latter outside OSS/BSS.  Since that’s what most are doing now, this might be a case where the status quo isn’t too bad a thing.  The only issue will be codifying how below-the-OSS orchestration and the OSS/BSS processes link with each other in a way broad and flexible enough to address all the service options we’re hoping to target with NFV.


Feature Balance, Power Balance, and Revolutionary Technologies

Networking and IT have always been “frenemies”.  They often compete for budget in the enterprise, and they have certainly competed for power in the CIO organization.  One of the interesting charts I used to draw in trade show presentations tracked how the two areas were competing for “feature opportunity”.  By 2010, my model showed, IT would convincingly own about 17% of the total feature opportunity, networking 28%, and 55% would still be up for grabs.  Since no market wants to differentiate on price alone, a feature-opportunity win would be a big boost for the associated technologies, vendors, and political constituencies in the enterprise.

That forecast largely came true in 2010, and networking did gain strength and relevance.  Since then, things have been changing.  In 2014, the model said that if you looked at the totality of feature opportunity, networking and IT had cemented about 19% each, and everything else was yet to be committed.  What changed things certainly included the combination of the Internet and the cloud, but these two forces don’t tell the whole story.

The Internet demonstrates that resources can be turned into network abstractions.  All forms of cloud computing tend to make things more network-like for the simple reason that they promote network access to abstract IT features.  You’d expect that much of the cloud trend would therefore promote networking over IT, but that flies in the face of the shift actually seen.  What made the difference goes back to abstraction, and the details might explain why John Chambers seems to be saying “white boxes win”, why IBM might (as reported on SDxCentral) be investing more in SDN, and why EMC might want to buy a network company.

Even before SDN came along, we were seeing a trend toward the abstraction of network behavior through “virtual” networks like VPNs and VLANs.  This trend has tended to reduce differentiation among network vendors by creating a user-level, functional definition for services at L2 and L3.  Sure, users building their own networks could appreciate the nuances of implementation, but functionality drives the benefit case and thus enables consumption.

SDN takes the virtualization of networks in a new direction.  By providing abstractions of devices and not just services, SDN makes it more difficult to differentiate even at the level of building networks.  If we assumed that SDN in its pure form went forward and dominated, then the “white box” is inevitable, at least in a functional sense.  Only what could be specified by OpenFlow could be used to build services.  That’s the ultimate in abstraction.

NFV takes another, perhaps more significant, step, along with cloud-management APIs like OpenStack’s Neutron.  If you have a means of creating applications and services that consume network abstractions, then anything that realizes those abstractions is as good as anything else.  That’s the explicit goal of NFV, after all.  Properly applied, NFV says “You can resolve our abstractions of network services using SDN, but also using anything else that’s handy”.  It embraces legacy elements, which limits how much network incumbents can do to stave off commoditization by bucking evolution to new models like SDN.
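The “anything that realizes the abstraction is as good as anything else” point can be made concrete with a toy registry.  Everything here is illustrative: the backend names and return strings are invented, and this is not how any NFV specification defines resolution, just the general pattern.

```python
# Hypothetical sketch of abstraction resolution: an abstract service
# ("firewall") can be realized by SDN, a hosted virtual function, or
# legacy gear, and the consumer of the abstraction cannot tell which.
# Backend names and strings are invented for illustration.

BACKENDS = {}

def register(abstraction, name, realize):
    """Register one way of realizing an abstract service."""
    BACKENDS.setdefault(abstraction, {})[name] = realize

def resolve(abstraction, preferred=None):
    """Realize the abstraction with the preferred backend if available,
    otherwise with whatever is handy (the first one registered)."""
    options = BACKENDS[abstraction]
    name = preferred if preferred in options else next(iter(options))
    return options[name]()

register("firewall", "sdn", lambda: "OpenFlow rules pushed to white-box switch")
register("firewall", "legacy", lambda: "ACLs configured on embedded router")
register("firewall", "vnf", lambda: "virtual firewall instance deployed")

print(resolve("firewall", preferred="legacy"))
print(resolve("firewall"))  # falls back to whatever is handy
```

The consumer asks only for “firewall”; whether the realization is a white box, a VNF, or an embedded router is invisible above the abstraction, which is exactly why differentiation below it erodes.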

The interesting thing here is that networking, despite having a lead before, lost ground between 2010 and 2015.  Not lost in terms of investment but lost in terms of feature-value leadership.  Perhaps even more interesting is that IT didn’t gain, it also lost.  The gainer was the “in-between”, and I think that’s the most important lesson to learn here.

Virtualization is the general trend at work here.  It’s a combination of abstraction and instantiation, intended in large part to promote resource independence.  Abstraction reduces everything to functionality.  Functionality is a slave to demand, not to supply, and abstraction’s very goal of resource independence shouts “Hardware doesn’t matter!”  The important thing my modeling shows is that both IT and networking are losing, and nobody grabs for a lifeboat like a drowning man.  Thus, it’s abstraction that I think is behind the news items I cited.

EMC, whose VMware unit acquired Nicira, is in a position to abstract everything in physical networking.  A virtual overlay doesn’t care what the underlayment is.  The problem they have is that even if the “undernet” is anonymous, it still has to be something.  So it makes sense for EMC to think about buying a network company to get some real gear.  If they don’t, then a vendor who offers real equipment might well offer virtual-overlay software too.  A vendor like Cisco.  Chambers knows that overlay wins, but he dares not say “overlay” because everyone will then think VMware/EMC.  So he says “white box”, and commits to his own version of abstraction.

For a vendor on the IT side like IBM, the smart play is to abstract the network stuff as completely as possible.  So IBM is an OpenDaylight champ, and it continues to develop OpenDaylight even though it seems to have no clear SDN story or mission of its own.  It doesn’t need one to win; it only has to make sure that a network abstraction wins.

Making network abstraction win means making sure software wins, because IBM is now ultimately a software company.  Hardware, network or IT, is more of a risk than anything else.  A giant hardware player can still hold its own against a giant software player because you can’t run software in thin air, or overlay nonexistent infrastructure.  So IBM has to fight not only Cisco but also EMC and HP, perhaps even more than it has to fight Oracle and Microsoft.  Why?  Because software plus hardware will beat software alone, mostly because the majority of spending will still be on the platform and not on the software.  The company that has both can sustain sales presence and control, at least in the near term.

Even in the long term, how commoditized can hardware be?  We’ve had standard-platform x86 “COTS” for decades, and while all the vendors would love to see better margins and less competition, there are still competitors, and we’re seeing commoditization and consolidation rather than the collapse of the hardware space.  Chambers’ view of a white-box future may be likened to a story about trolls told to keep kids out of the woods.  He may be afraid that he can’t dynamite Cisco into a more software-centric stance without making it clear that it can’t just hunker down on hardware forever.  Whatever he says, though, you still need vendors to make white boxes and to integrate and support the function/host combination.

The interesting thing is that while Cisco, EMC, and IBM have been in the stories, they’re not the players I think will decide the issues.  Those are HP and Oracle.  HP is perhaps the last real “server” vendor left, and Oracle is the real “software” player, if one focuses both categories on the abstraction, SDN, and NFV battleground.  Both HP and Oracle are looking for a strong NFV story.  Both have good middleware credentials, but Oracle has the advantage in middleware.  HP has the advantage in servers, networking, and SDN.  Neither of the two has fully leveraged its assets.  If and when they do, they may finally decide who gets to shape the future.
