Seven Questions to Navigate NFV Hype

The term “-washing” is applied all too often in our industry, where the dash is preceded by the name of some hot technology like “cloud”, “SDN”, or “NFV”.  Everyone loves publicity and if you can get it by simply tattooing the name of some hot concept on your forehead, so much the better.  Of course, at some point, buyers will have to separate the thin veneer from the full substance, because nobody ever built a network or an IT plan from a tattoo.

NFV is perhaps especially susceptible to washing.  The specs aren’t done, the concept spreads across both the IT and network space, and use cases proposed to expose issues rather than to define total solutions can be addressed without actually solving the overall problem that the NFV ISG has dedicated itself to solving.  Then there’s the fact that NFV will have an impact on things like operations and inter-provider federation that may be fully or partly out of scope for the work of the ISG.  Operators reported in our fall survey that vendors were not providing them complete NFV solutions in their PoCs or even in RFI responses, but those vendors are sure happy to stick an NFV bumper-sticker on their bandwagons.  What’s a buyer to do?

The emergence of multiple NFV Proof-of-Concepts in the ETSI ISG could be a help to buyers in the long term, but for now the submitted proposals don’t all aim at the same goals or even cover the same implementation ground.   Thus, even when reviewing PoCs, it’s important to apply some kind of structured process.  Well, one possibility is to look for specific things in an NFV claim, things that will separate the real from the wash, and this is what I want to talk about.  I propose a list of seven things to ask anyone who says they have NFV.

The first thing on the list is can you provide me with a list of the specific contributions your company has made to the NFV ISG, including applications for and approvals of Proof-of-Concept?  How can somebody be implementing NFV if they’re not a contributing member of the NFV ISG?  There is no significant financial commitment needed to join the ISG, so there is simply no excuse for non-member status.  Any member can contribute, and if you’ve done enough to say you have any or all of NFV implemented you should be able to point to your contributions to the process.

The second thing is where do your virtual functions come from?  There are three possible answers here:  from any source, from my own developer program, or from “standard” sources.  Obviously the second answer means virtual functions are proprietary; they’ll work only for the NFV implementation they’re developed for.  That’s not a very good match to open commercial servers for deployment, and yet the operators tell me that vendor responses to RFP/RFIs issued so far tilt overwhelmingly toward proprietary virtual functions.  If standard APIs are needed, the question is whose standard they follow.  The ISG hasn’t defined any yet.  The best answer here is “from anywhere”, which means that open-source code currently available and third-party development from almost any source would be suitable.  But those who claim this should be able to explain how the virtual functions’ interfaces (including management interfaces) are connected at deployment time.
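
To make the “from anywhere” answer concrete, here’s a minimal sketch in Python of what deployment-time binding could look like.  Everything here—the descriptor fields, the function name—is invented for illustration; the ISG has defined no standard to follow:

```python
# Hypothetical illustration only: the NFV ISG has defined no standard
# descriptor, so every field name here is invented for the sketch.

# A third-party VNF ships with a descriptor listing the interfaces it
# exposes; the deployer binds them to concrete addresses at deploy time.
firewall_vnf = {
    "name": "oss-firewall",          # open-source VNF from "anywhere"
    "image": "firewall:1.2",         # deployable image, source-agnostic
    "interfaces": {
        "data-in": {"type": "l2"},
        "data-out": {"type": "l2"},
        "mgmt": {"type": "rest", "port": 8080},
    },
}

def bind_interfaces(descriptor, network_map):
    """Resolve each declared interface (management included) to an
    address chosen at deployment time, so the VNF itself needs no
    proprietary hooks."""
    bindings = {}
    for if_name, spec in descriptor["interfaces"].items():
        if if_name not in network_map:
            raise ValueError(f"no deployment target for {if_name}")
        bindings[if_name] = {**spec, "address": network_map[if_name]}
    return bindings

# The orchestrator supplies the concrete attachment points:
print(bind_interfaces(firewall_vnf, {
    "data-in": "10.0.1.10",
    "data-out": "10.0.2.10",
    "mgmt": "192.168.0.5",
}))
```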

Point number three is how do you accommodate different hosting platforms?  Ideally, an NFV implementation should be able to deploy a virtual function in a VM, on bare metal, in the cloud, on a board in a network device, or even on a chip.  There would clearly have to be customization for the specific environment, but that should be handled via a modular plug-in architecture.  Demand openness here, because if you don’t you’ll end up with a single-vendor NFV solution, which is anachronistic at the very least.
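
Here’s a hedged sketch of what such a modular plug-in might look like in Python.  The driver interface and class names are my own invention, not any vendor’s API; the point is only that the orchestration core talks to an abstract driver, so a new hosting platform means a new plug-in rather than a new NFV implementation:

```python
from abc import ABC, abstractmethod

class HostingDriver(ABC):
    """One plug-in per hosting environment; the orchestrator is
    written against this interface only."""
    @abstractmethod
    def deploy(self, image: str) -> str: ...
    @abstractmethod
    def undeploy(self, handle: str) -> None: ...

class VmDriver(HostingDriver):
    def deploy(self, image):
        return f"vm-handle-for-{image}"     # stub: boot a VM here
    def undeploy(self, handle):
        print(f"tearing down {handle}")

class BareMetalDriver(HostingDriver):
    def deploy(self, image):
        return f"bm-handle-for-{image}"     # stub: image a server here
    def undeploy(self, handle):
        print(f"releasing {handle}")

# Drivers are registered, not hard-wired; a board-level or on-chip
# environment would register its own driver the same way.
DRIVERS = {"vm": VmDriver(), "bare-metal": BareMetalDriver()}

def place(image: str, platform: str) -> str:
    return DRIVERS[platform].deploy(image)

print(place("firewall:1.2", "vm"))
```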

Point number four is related; how do you drive connections, among VNFs and with legacy elements and endpoints?  The biggest single failing of NFV strategies is that they are intra-function in their connection support.  Do you think your service chains will live independently in the great beyond, chugging away and presumably getting billed to your customers without being connected to their VPN or access links?  Probably not, but if you don’t have an NFV networking approach that can connect all kinds of legacy and NFV resources together, you can’t support transitional environments or uniform deployment practices.
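
As an illustration of what inter-function connection support implies, here’s a toy Python sketch (node kinds and connector names invented for the purpose): a service graph has to treat VNFs, legacy elements, and customer endpoints as peers, dispatching to a different connection mechanism depending on what each edge joins:

```python
# Illustrative only: node kinds and connector names are invented.
def connect_virtual_link(a, b):
    print(f"overlay/virtual link: {a} <-> {b}")

def connect_legacy(a, b):
    print(f"provisioned legacy circuit: {a} <-> {b}")

# Keys are sorted pairs of node kinds; each pair gets the right
# connection mechanism, so chains and legacy links are one graph.
CONNECTORS = {
    ("vnf", "vnf"): connect_virtual_link,   # intra-chain links
    ("legacy", "vnf"): connect_legacy,      # chain to VPN/access edge
    ("legacy", "legacy"): connect_legacy,   # pure legacy segment
}

def connect(a, kind_a, b, kind_b):
    CONNECTORS[tuple(sorted((kind_a, kind_b)))](a, b)

# A transitional service: chained VNFs stitched to a legacy VPN edge.
connect("vFirewall", "vnf", "vNAT", "vnf")
connect("vNAT", "vnf", "customer-VPN-edge", "legacy")
```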

Point five is how does management of virtual functions work, both within the NFV part of a service and overall?  There are really only two choices here; what I’ll call “simple” management and what I’ll call “derived operations”.  Simple management says that a collection of virtual functions presents a MIB that loosely corresponds to what a real device providing that set of functionality would present.  This fits into current management systems, but the problem is that virtual functions depend on connection and hosting resources, and those resources are part of their management domain but not part of the MIB an equivalent real device would present.  How then do you manage them?  A link between two pieces of a virtual customer gateway, for example, is explicitly a manageable element in a service chain, yet there’d be no such thing in a real device MIB.  Then there’s the problem of shared resource collision.  A bunch of virtual functions all depending on a single resource could swamp it with management polling.  The functions could also change variables that would impact overall operation, destabilizing the service and presenting a major security risk.  The alternative is to provide a highly composable management strategy, but here again you need to understand exactly how it works.
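
To show roughly what “derived operations” could mean, here’s a minimal sketch (again in Python, with invented names; this is not a proposed standard).  A device-like MIB view is composed from the hosting and connection resources underneath, and a shared cached poller keeps a crowd of virtual functions from swamping a common resource with management polling:

```python
import time

class CachedPoller:
    """Shared, read-only access to a resource's state: one real poll
    per interval no matter how many virtual functions ask, which is
    one way to avoid shared-resource polling collisions."""
    def __init__(self, read_fn, interval=5.0):
        self.read_fn, self.interval = read_fn, interval
        self._value, self._stamp = None, 0.0

    def get(self):
        if time.monotonic() - self._stamp > self.interval:
            self._value = self.read_fn()
            self._stamp = time.monotonic()
        return self._value

def derived_mib(host_poll, link_poll):
    """Compose a 'virtual device' view from hosting and link state,
    including the inter-piece link no real-device MIB would expose."""
    host, link = host_poll.get(), link_poll.get()
    return {
        "deviceState": "up" if host["up"] and link["up"] else "degraded",
        "cpuLoad": host["cpu"],          # hosting resource, surfaced
        "chainLinkState": link["up"],    # the 'invisible' internal link
    }

# Stand-in readers; real ones would query the hosting/network layers.
host = CachedPoller(lambda: {"up": True, "cpu": 0.42})
link = CachedPoller(lambda: {"up": True})
print(derived_mib(host, link))
```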

The sixth point is how does this NFV implementation deal with services that are deployed across multiple resource pools, multiple cloud providers, or even across operator boundaries?  “Federation” of NFV services is critical because virtually no credible NFV consumer would ever be running without partnerships in the cloud or carrier sense.  If you can’t provision and manage across a mixed pool of resources or build services that are implemented by partner carriers, most of your customers are going to get unhappy fast.  And if vendors persist in having their own proprietary NFV strategy, it may be impossible to even support multi-vendor networks without federation capability.

The final point is who’s selling and guaranteeing this?  NFV is complex in that it mixes software systems, management systems, servers, networks, you name it.  All of this could become a harmonious machine working to support your profit/revenue goals, or a pile of disorderly junk.  Who do you go to if the latter is what happens?  Will you have to integrate all of this on your own, or is there some single source?  Some operators don’t mind being an integrator, but most tell me they would prefer someone bid that role.  Will your integrator bid it?  Will the NFV provider be a suitable integrator for NFV?  Are there things that will fall into the responsibility cracks—things outside “NFV” but inside the services that contain NFV elements?  You need to know how this will work.

NFV is the tip of a gigantic iceberg, the issue that is forcing us to reconsider how we build and manage services.  It could be the start of something big, something different.  It could be a reprise of the same old vendor-specific crap.  Most of the benefits claimed for NFV will never be realized unless buyers demand answers to these questions.  Reporters too.  If you care about the network of the future, then you need to care about these seven points.

Why We Need to Get to the Experience Layer

Some of my readers likely think I spend too much time on business issues in networking, preferring that I talk about revolutionary technology.  Well, if the Founding Fathers had been stranded on a desert island through the late 1700s, all their revolutionary precepts would have done is inspire birds, fish, and maybe turtles.  Revolutions depend on revolutionary ideals, but also on a real framework within which those ideas can operate.  Business, in networking, is that framework, and we have more proof of that today.  We also have proof we’re hiding in the basement of networking’s future instead of basking in the penthouse.

I probably get a dozen emails or comments a month from people who tell me that Google is the next networking giant, and they often get pretty upset when I blow off the viewpoint.  Google, they say, is branching out into selling handsets, selling broadband, bidding on mobile services.  Hold onto your hats, Google is taking us to the stars.  Baloney.  I said when Google bid on the mobile licenses that it was just pressuring the operators, and they were sure to cast their bid not to win.  I said that Google’s vaunted broadband was carefully dodging any major commitment, and no such commitment has been made.  Now we see, in the latest story, that Google is looking to get out of the handset business.  What can we learn here, besides perhaps taking everything we hear with a grain of salt?

That business drives things in the business world, is what.  Any good business-school student can analyze the handset players, the network operators, and Google side by side and they’ll tell you that Google has a very low financial tolerance for making investments that generate a small return or a high “first cost”, meaning the investment needed to get to break-even on something.  Google is in the ad business for the same reason that Facebook is in the ad business, which is that consumers like free stuff, the Internet is incrementally free to ride on, and return on investment is relatively high.  Network operators, even handset vendors, aren’t in that kind of business so it would be terminally stupid for Google to jump into such a space.  Wall Street would punish them when the truth was known.

Which it is now.  Look at IBM selling its x86 server business to Lenovo.  Look at the reports of “sales war” in Oracle driven by the need to build profit on a per-product basis.  What we are seeing in the business of tech is continued signs that companies are being driven to create near-term financial success even if it means courting longer-term collapse.  Business units would happily grow their own revenues and profits by 5% even if it cost the company 25% overall.  The tech ship is sinking and everyone’s stepping on the others’ heads to get into the lifeboats.

Look at the VC space.  We can raise hundreds of millions to fund a social network company despite the fact that every single one of the companies of this type has boomed and busted.  People used to ask who would be the next Cisco, but today nobody cares, because success in actually building network equipment would take too long.  We have a universally short horizon on expected returns and it’s hurting.

There’s a story here beyond business, though.  What is happening is that appliances are commoditizing.  Apple is under pressure, Google is under pressure, and so are Dell and HP and everyone else on the hardware side.  And it’s not going to stop, because at the end of the day hardware either runs software or pushes bits, and neither of those things is highly differentiable.  What’s happening is that we’re marginalizing all the low items on the tech food chain in favor of the higher items.

Which are?  Well, they’re experiences.  The Internet isn’t a network; it’s hosted experiences.  The cloud is the same thing.  NFV is about hosting experiences cheaper, SDN is about connecting hosted experience elements cheaper.  There is a distinct technical polarization taking place, between an experience-driven service layer and a commodity resource pool.  You’re on the top (business-wise) if you’re on the top (layer-wise).

Virtualization is the trend that’s driving the bottom layer, and it’s a trend that makes all the hardware in the world into a blank slate that something else writes on.  The “something else” is the question.  We have created experiences so far by exploiting connectivity—the OTT model.  We now have to think about experiences that exploit networking more broadly and that utilize not only connection resources but also IT resources more effectively.  What is this?  It’s not “the cloud” or “SDN” or “NFV” but something higher up, something that can translate goals into experiences and experiences into profits.

An “experience” in a tech sense is simply a cooperative set of resource relationships that create and sustain something valuable.  People are now calling the creating part of this “orchestration” and the sustaining part “management”, and so you can align experience creation with many of the atomic trends we see, including trends in SDN and NFV and the cloud.  But the notion that we build the future experiences by manipulating a bunch of specialized tools flies in the face of the need to make the experiences cheap enough to be useful on a mass-market scale.

We are in an age where operations drives everything, because we are in an age where cost drives profit.  The mass market doesn’t get bigger very fast.  We have to offer it stuff at lower cost, or offer it different good stuff at acceptable cost.  One way or the other, how we create and sustain those experiences is the key.

Google needed to dump handsets.  Even Apple needs more than handsets, as I said earlier this week.  Dell doesn’t need white-box network switches, it needs valuable stuff that sits on top of those switches and does that experience-creating.  We are trying to transform the tech world from the bottom up, and in doing so we’re exacerbating commoditizing forces and delaying ecosystem-building that’s essential to the real mission.

Oracle now, and Alcatel-Lucent before, and also Juniper, have been renowned for pushing products over ecosystemic solutions.  The reason that’s bad is that only ecosystemic solutions can create experiences, and only experiences can be the future of our industry and the vendors in it.

An Industry in Search of a Business

This is another one of those days when you have a bunch of news items that reflect a common theme.  What do Yahoo, OpenCompute, and AT&T have in common?  They’re all about the business of networking.

Yahoo reported another quarter of disappointment, which frankly should never have been a surprise given that there was nothing visibly different about their model for making money.  You can change management as much as you want in any business venture, but unless the change impacts your underlying P&L-generating practices you’re standing still no matter how much news the changes make.

The problem with Yahoo is simple; advertising is actually a bit less than a zero-sum game, particularly all forms of online advertising.  Think about it; do you remember display ads?  You probably do what I do, which is to avoid them completely.  I don’t look at them when they’re presented as an on-ramp to a site—I click through them.  If I’m watching a video I’ll switch to another window until the pre-roll is done.  But even Google has a problem here.  Unless you’re very young or very naïve you probably remember “honest” search.  Nobody cares about relevance these days; they want to show you ads or positioning they get paid for.  As a result, the value of search has been seriously eroded.  Online advertising is paying the price of excess, and it’s not going to get any better until all the companies, and all the consumers, realize that it’s not possible to have a free networking industry.

AT&T realizes that.  They turned a nice profit but they also showed disappointing growth in subscribers, meaning mobile subscribers.  Interesting.  What we’re saying is that the market for telecom services has already hit the wall ARPU-wise, and the only hope for growth comes from either stealing market share or growing more humans.  Obviously the latter isn’t a very practical business strategy—it takes a while for a new human to get a phone, after all, and we’re all focused only on the next quarter.  And the problem with higher market share is that there’s only so much market and everyone is sharing it.  We’re back to a zero-sum game here.

So what do you do about it?  Well, look at OpenCompute.  This is a movement that’s aimed at creating new data center platforms that are essentially commodities, to eliminate proprietary influence and the accompanying higher prices and margins.  Again, this is “sensible” at one level because if you can lower costs you can increase profits.  The problem is that it’s far from clear that initiatives like this will really do much to lower cost.  Even if we define a standard platform for things like rack-mount servers or network switches, how much cheaper do we make them?  IBM is selling off its x86 business, after all, and that means that some big players with good reputations don’t want to play in the COTS market even today.  Suppose we make them even more commoditized?  Are we left then with small players with no reputation?  Who will buy from them?

Interestingly, IBM is also interested in selling off its SDN business, according to reports yesterday.  Here’s this networking revolution, the thing that’s supposed to sweep everything we knew about IP and Ethernet aside.  Here’s this IT giant who has nothing but OEM business in network equipment today, so they’d be the perfect player to push such a revolution because they have literally nothing to lose.  So they rush to sell it off.  Doesn’t this suggest that we’re missing something?

We are.  The common thread here from Yahoo to IBM is that we’re trying to grow in a market that’s not growing.  Tech has for ages been the bastion of organic growth, and now it’s not organically growing any more.  Maybe it’s even doomed to grow at a slower rate.  Color TV was once leading-edge, then flat-screen, then 3D, and yet TV-makers aren’t exactly setting the profit records of the market.  We’re leaping from fad to fad, and why?  Because it’s too much work to figure out something truly useful.  Or maybe because we’re an industry that’s now structured itself for mediocrity.

Who runs these big forums, the groups that are “leading” us to innovation?  In most cases it’s big vendors who can pony up hundred-grand-plus annual dues.  And these big vendors are often the ones who are selling the current equipment, winning the current game.  Innovation, revolution, isn’t for the big guys, but the little guys can’t easily create a credible one.

Yahoo needs to look at selling something, not at more ad sponsorship.  That would be a revolution.  AT&T needs to be looking at something users are prepared to pay more for, not hope that new customers fall from the sky (perhaps delivered by storks?).  OpenCompute needs to think about making servers and switches more valuable and not just cheaper, and IBM needs to think hard about how surrendering the opportunity to revolutionize networking is a smart play for someone trying to rebuild their strategic credibility.  And we in the market need to stop believing what we want to believe, which is that advertising will pay for everything we need and that every new foundation or consortium that’s formed will advance our industry.

The cloud, SDN, and NFV are indeed the revolutions.  Our intuition about that is correct.  What we need, what we’re missing, isn’t revolutionary opportunity, it’s revolutionaries.

Will Amazon Eat the Low Apple?

Apple reported its numbers yesterday, and they showed what might have been a surprise for Apple fans, what was surely a disappointment for the Street, and things that I’d been concerned about for quite a while.  In short, Apple’s message is that “It isn’t easy to be cool, and coolness isn’t very durable either!”

Apple’s three pillars of profit are the iPhone, the iPad, and the Mac.  Everyone thinks that tablets and smartphones are eating the PC market, but Apple’s Mac sales beat estimates.  The sale of iPhones was below estimates, and Apple lost market share in the iPad though it did manage to sell more of them than expected.  Apple’s shares are off about 7% pre-market, which shows that this situation isn’t a happy one for Wall Street.  It should be a warning for Apple too, but I’m not sure it will be.

Apple’s strategy for a decade or more has been to target the “cool” consumer of technology and not the average consumer.  There’s nothing wrong with that approach in the sense that it does let you sustain higher prices and margins and gives you a kind of envy-driven brand recognition.  The problem is that the mass market is where the mass money is, and that coolness as I’ve said is not durable.  Much of the coolness of iPhones and iPads has come from the novelty of the devices.  From a distance it’s hard to tell just what a given smartphone or tablet user is using, and after there are enough iPads and iPhones floating around, it’s not a novelty to be seen with one.  If everyone drove a Corvette, a beat-up Ford would get the most attention.

For Apple, this means that the trajectory that the Street is pointing out for the iPhone and iPad, which is steady loss of market share, is inevitable.  It means that you either have to accept a mass-market positioning that will kill your margins, move on to some new product (my suggestion has been WiFi belly-button studs—why not just jump to the end-game?), or somehow rehabilitate your coolness.

A lot of people I know say that Steve Jobs would have known what the next gigantic opportunity was, that he might have even made my WiFi belly-button studs work.  I’m not so sure.  I wonder whether Steve or anyone could find a niche like smartphones or tablets at this point, because I wonder whether we’ve not reached the point where the device is really just a portal.  Underneath it all, Apple’s problem is that its device-driven strategy has hit the wall of the cloud.

Just yesterday I talked about the notion of a future where knowledge and power migrate in a contextual sense (including a geographic one) toward the current interest profile and location of a given user.  The user’s device is a necessary portal into all that knowledge and power, but ultimately a smartphone or a tablet is still an on-ramp to the Internet and the cloud.  In fact, it’s the Internet-on-ramp mission for these mobile devices that has undermined the PC market.  Why would they now suddenly be more than that on-ramp?  Yes, some people will buy thousand-dollar sunglasses to keep the sun out of their eyes, but not very many.  Apple’s own product strategy is its own greatest weakness.  It’s a window, not the world.

But where is Apple in the cloud game?  After long and apparently agonizing deliberation all they’ve managed to do is replicate what a dozen companies offer in the way of storage and synchronization.  They are not driving cool features that could be hosted in the cloud, they are stuck in those cool platforms.  Well, earth to Apple here; people want the movies and not the tickets.  The latter is just a way of getting to the former.  Apple’s failure to even control its own mapping framework until the failure was obvious (and then booting their response) is an indication that everyone who matters in Apple is sitting in the Industrial Design group and there’s a vast silence in service design.

Wearable tech is not the salvation of Apple; it’s an indication that there are no device salvations.  What wearable tech can do is very limited without significant from-somewhere-else service support.  You can slave your belly-button stud to your iPhone or iPad, but if you do that you’ve got an iStud that doesn’t do anything you couldn’t have done before, except perhaps to get a belly-button’s-eye view of the world.  Or maybe have your pulse taken there, or your hormones checked.  How about having your stud glow green when you see someone you like and red when you’re repulsed?  Yeah, somebody would probably buy it, but how many and for how long?  That’s the other problem with coolness.  It’s not easy to drive a trend like WiFi belly-button studs, and with every cool trend like it the barrier for the next one is only higher.  Eventually the barriers to entry and the short durability of opportunity combine to kill your business model.

This loss of appliance opportunity isn’t Cook’s fault; Jobs would likely not have been able to do better.  However, Jobs might have recognized the value of the cloud some time ago and gotten Apple solidly into the lead there while there was still time.  Since Cook took over, the old Apple rivals have gained share on fundamentals, but they’re not the problem.  There is a new kid in town.

Yes, Apple has a new competitor now: Amazon.  Amazon, perhaps with the same level of serendipity as drove Apple into iPhones, has taken a cloud’s-eye view of the future.  They recognize that contextual consumer services are the perfect cloud application, because there are a zillion different things a consumer might want, and the duty cycle, variability, and distributability of those needs make need-fulfillment a natural fit for the cloud.  For which Amazon thinks (with considerable justification) it has the perfect cloud.  Amazon has relentlessly pursued Kindle, but not just as a tablet—as a symbiote.  A Kindle has a foot in the cloud by design.  It’s what Apple should have done, what maybe Jobs would have done.  Now it may be too late to catch up.  Somebody is picking the low apples of the cloud, and eating them.

CloudNFV Transitions from “Project” to “Product”

In October of 2012, an insightful group of network operators published the “Call for Action” that launched Network Functions Virtualization (NFV).  I responded to that call with some suggestions, among which was a recommendation that a prototype be developed as soon as possible.  Operators encouraged me to do that, and a project group came together ad hoc in a parking lot in California at the April meeting of the ETSI NFV ISG.  From that, CloudNFV was born, an initiative whose simple mission was to prove that NFV could really be implemented.

That objective has been fulfilled; a group of companies transformed my architectural vision into something real.  On January 13 we demonstrated a functional, running, NFV implementation with features that go even beyond the scope of the ETSI activity.  The open design and tutorial material was posted on the CloudNFV website and on my own YouTube and SlideShare channels.  Insights on implementation have been fed back to the NFV ISG and to the TMF, whose GB922 and GB942 models formed the basis for my high-level architecture.  This works.  I’m proud of my role as the Chief Architect for CloudNFV and proud of the work that the other parking-lot founders (6WIND, Dell, EnterpriseWeb, Overture, and Qosmos) and the new integration partners (Metaswitch, Mellanox, and Shenick) have done.

Time marches on, and CloudNFV is now entering a new phase.  We’ve been encouraged by both the CloudNFV members and the network operators to productize CloudNFV.  This has generated a whole new level of activity, and a new objective for CloudNFV.  I’m sure everyone realizes how much of my time has gone into the “project” phase of CloudNFV, and I can’t sustain that, much less expand it, as CloudNFV productizes.  Such a role is inconsistent with my position as an independent consultant and industry analyst, and it’s a role I never considered playing.

I’ve accomplished what I set out to do here, proved that NFV can be implemented in a way that’s integrated with the cloud, with SDN, and with current infrastructure and operations.  It’s time for others to provide that next level of commitment and leadership.  So, I am stepping down from my role as Chief Architect for CloudNFV effective today and CIMI Corporation will no longer be directly involved in CloudNFV and its activities.  I may still, from time to time and subject to my normal business terms, undertake consulting work on NFV and CloudNFV with individual members of the team.

The remainder of the group has selected Wenjing Chu of Dell to lead the project forward.  Wenjing has been the sparkplug for all of the hosting, testing, and integration work in CloudNFV so far, with the skills and commitment needed to take it to the next level.  Wenjing will be assisted by Dave Duggal of EnterpriseWeb and Ramesh Nagarajan of Overture Networks, the members who have provided the central Active Virtualization implementation and the OpenStack Service Model Handler, respectively.

This is a strong transition.  Remember, the team that actually implemented my design and made this happen is the team that remains; I never wrote a line of code here or struggled with hardware connections and software versions.  This team can carry on, and develop CloudNFV to become what they believe it can be, must be.  I wish them the very best in this effort and I ask you all to continue to follow and support this initiative.

Everything and Everythought: Looking at the Future

I’ve criticized Cisco often for making statements more calculated to generate PR than to provide any insight about the conditions in the networking space.  Cisco’s CTO Padmasree Warrior has been at least as guilty as any other Cisco exec in this regard, but I recently saw a quote from her that makes some sense.  The context of the comment was thin, so the insight behind it might be as well, but there’s some real value in the vision, which is that instead of striving for work/life balance, we should be striving for work/life integration.  There may not be much more thought behind this than there is behind the “Internet of Everything” but there could be strong (if accidental) substance below the surface, enough to say that this is where Cisco should have focused all along.

Mobile broadband and social networking have made social interaction pervasive.  If there are companies who believe their people, particularly their key people, aren’t checking Facebook or Twitter regularly during working hours, they need to dispel their delusions.  What the broadband/social revolution has done is made socialization an independent context for all our lives and not just a distinct state of living.  That means that people could be expected to time-share between work and non-work, not only during working hours but at all times of day.  In many ways, it’s the ultimate notion of telework, except that the world, the day, is the workplace and not just the home.

I’m not going to propose that everything becomes broadband-enabled work, that eager workers with 3D printers will churn out pieces of cars assembled by robots driven around to pick up the parts using Google’s auto technology.  I’ll leave the heavy PR lifting to Cisco.  However, I am personally a very productive person by the measure of everyone who knows me, and a big part of my productivity is that I fit work into every niche that doesn’t impact my overall quality of life.  Thus, I avoid feeling stressed or trapped.  Fact is, I feel fairly comfortable with my lifestyle and its balance.  People who, like me, shuffle information and electronic communications could adopt the same model.

Which is where Warrior’s insight is insightful.  We do have a terrible tendency to focus revolutionary capabilities on pedestrian missions, or to immediately jump so radically into the future that we leave the Jetsons far behind.  If we can build our personal lives around mobile broadband we can darn sure build the professional lives of knowledge workers the same way.  That should be what we focus on doing.

This implies an extension in my notion of point-of-activity empowerment.  You can help a worker wrestle with interpreting a set of patch panels or valves in a kind of context-less way.  We don’t know where the worker was coming from, how they got the job, whatever, but the local concept still works.  That’s generally going to be true with event-driven models of productivity support.  What the integration of life/work implies is that we have to broaden the model of worker time management, supervisory interaction, and collaboration to deal with the idea that even the notion that there’s work to be done is an event to be driven.  Hey, I’m waiting my turn on the third tee; let me complete the review on a few mortgage applications.  The ability to do that exists now, but the mindset is probably uncommon and the supervisory model to accommodate that style of working is certainly uncommon.

I think that the life/work integration process is based first and foremost on a different collaborative model, because collaboration is the basis of supervision and supervision is how telework or its integrated life/work extension is separated from giving everyone a paid grant instead of a salary.  The hard question here is just what that different collaborative model might look like.

It’s facile to say it’s based on social networking.  It’s more accurate to say that it has to somehow combine “communication”, in an at least optionally real-time sense, with some mechanism of context—multiple contexts, in fact.  We could visualize this new thing as a bunch of Twitter-fed wikis, or perhaps even as something like Google Wave of old.  Concepts, ideas, needs, jobs, assignments—all represent a context.  Obviously this has to be more complicated than a simple context set; contexts are hierarchical and they’re linked by metadata, so you can traverse them according to any number of associative properties of the contents.
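
As a toy illustration of that last point—hierarchical contexts linked by metadata—consider this Python sketch, which is my own construction and nothing Warrior has proposed.  The traversal runs on associative properties of the metadata, not on the hierarchy:

```python
from dataclasses import dataclass, field

@dataclass
class Context:
    """A context node: hierarchical via parent, associative via
    metadata.  All names here are invented for illustration."""
    name: str
    parent: "Context | None" = None
    metadata: dict = field(default_factory=dict)

work = Context("work")
mortgage = Context("mortgage-review", parent=work,
                   metadata={"topic": "finance", "urgency": "high"})
golf = Context("third-tee", metadata={"topic": "leisure"})
budget = Context("budget-plan", parent=work,
                 metadata={"topic": "finance", "urgency": "low"})

def related(contexts, **props):
    """Traverse by associative metadata properties, not hierarchy."""
    return [c.name for c in contexts
            if all(c.metadata.get(k) == v for k, v in props.items())]

# Finds both finance contexts regardless of where they hang in the tree.
print(related([mortgage, golf, budget], topic="finance"))
```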

That’s the thing about a true social collaborative framework; it has to get complicated like social behavior does.  You can’t build an empowering framework that dumbs down the user.  The growth of context demands a collateral growth of organizing metadata, a compensating flow of information.  Could it be that Warrior has invented something that’s actually more important than the Internet of Everything?  It’s the Internet of Everythought.

Certainly the problem with Cisco’s IofE concept is presuming that adding machines to a network automatically generates N-squared traffic growth.  Obviously it doesn’t, because the refrigerator has little reason to talk to the toilet.  But if you posit a group of people being proxied into multicontextual existence, where they shift from one task to another by switching everything they see and manipulate, you can see a lot of traffic flowing, a lot of network changes.

Gamers might call this “immersive reality gaming” or “artificial reality”, but what is real in an electronic age?  Is social networking linking people and making friends, or is it linking and making electronic analogs?  To the people using it, there isn’t any difference.  We are our online egos.

Translate this to networks, for a moment.  Think of the cloud as a kind of context store, where waves of what you need to know wash around in response to what you’re doing, reaching toward you with the stuff that’s relevant to your context or sweeping back if you turn away.  It may sound outlandish, but how can something like this not develop once you say that human intelligence draws knowledge it needs?  Think about it and maybe some context will wash over you!

Three Tales of One Cloud

We had a lot of news yesterday, news that I’m going to assert all adds up to cloud-driven change.  To open this blog (since I’m a networking guy after all) I want to start with Juniper.  The company had a decent quarter, and while its guidance was cautious, the initial reaction of the market to the numbers was favorable.  Yeah, but Juniper is still an almost-lone high-flyer in terms of P/E multiple in the space, and that implied measure of future profitability growth has to be addressed somehow.

The company’s new CEO Shaygan Kheradpir spoke on the call and the message can be condensed into two statements.  First, “To achieve this, Juniper needs to be the leading provider of high-IQ networks and best-in-class cloud builder, where the network is the most sensitive piece of the puzzle and has to be the first mover. I intend to play an active role in guiding this transition.” Second, “…I believe I can leverage my operational experience as well as my deep engineering and technology background to drive effective and efficient business operations and lead Juniper through the changes required to reach its full potential.”

This juxtaposition fairly reflects the challenge and opportunity Juniper faces.  Based on the macro trends in the industry and its current cost structure, Juniper is radically overvalued.  You can fix that by either improving revenues radically or reducing costs radically, or perhaps something in between.  I think most in the industry would agree that a pure cost-driven approach to sustaining Juniper’s current P/E multiple would result in the company quickly attaining negative size.  Some fairly significant component of new market share or new TAM is needed.  Kheradpir’s notion of high-IQ networking (meaning, I’d hope, adding smarts to dumb bit-pushing) and best-in-class cloud-builder is at least in the right ball park.

Kheradpir has spent only about three weeks on the job at this point (as he pointed out on the call) and it’s obviously unfair to expect that he’d lay out a roadmap to achieving either operational efficiency or strategic ascendency, and he didn’t.  Nobody in the industry who’s not brain-dead thinks that low IQ networks would win, so the devil is going to be in the details.  I think personally that Juniper’s litmus test will be in NFV.  SDN is hopelessly mired in the wrong stuff at this stage, and so the only network initiative that can bind network intelligence to the cloud is NFV.  So, Juniper, you will stand or fall based on what you do there, and that is the simple truth.

Microsoft also illustrates some simple truths, but not new ones.  The rumors of the death of the PC, as I’ve been saying all along, are exaggerated.  There has always been a simple economic component and a market change component to the PC space, and Microsoft’s numbers illustrate that economic recovery will, if it doesn’t lift all boats, at least lift those that aren’t totally sunk.

Economic recovery will lift suppression but it won’t generate positive change.  Microsoft needs to deal with the fact that it likely will never achieve the old position it had because that old position doesn’t exist in the future market.  Like IBM, Microsoft needs to reinvent itself.  Like IBM, it has allowed itself to become positionally ossified, binding its brand to something that can’t work for it in the long run.

Like Juniper, Microsoft’s fate lies in the clouds, but not in the same way.  For Microsoft, the challenge is to recognize that its opportunity to make Azure and PaaS the way of the future has now been lost forever.  The cloud of the future will be based on an IaaS framework that runs the cheap unimportant application junk, augmented by dazzling platform service features that rise out of pedestrian development practices to exploit the cloud as it really can be.  Everything, and I mean everything, in the cloud now depends on that model and how and who evolves it.  Microsoft is going to be a player, but they booted their early chance to make PaaS the winner instead of platform services, and in doing that they’ve opened the competition to others.  Microsoft was the only credible PaaS player, and they are not even now a credible platform services player.  They have a lot of behind to come from.

The backdrop of this is the IBM decision to (finally) sell its x86 server business to Lenovo.  This has a potential impact on both Juniper and Microsoft, perhaps even a decisive one.

The cloud, including a platform-services IaaS model, is built on COTS and Linux.  Where in that does it say “on such-and-such’s server or Microsoft’s OS?”  There is no platform branding in the cloud—not hardware, not software—except in the form of visible platform services.  COTS is a commodity business now, suited for somebody like Lenovo.  For Microsoft, it’s that truth that means the company has to completely reinvent itself.

But the same truth applies to Juniper.  On one hand, the anonymity of the cloud is a profound liability.  How do you do a first-class job of building an invisible object (or at least how do people know you did it)?  On the other hand, it will be very difficult for IT vendors to exercise leadership in a strategic sense in a “cloud market” based on COTS.  And NFV is such a market.  The network operators would LOVE to have an IT giant step up and lead NFV, but while a year ago such a thing would have required nothing more than a little PR, today it would require substantial collateralization.  It would be easier, in fact, for a network vendor to lead at this point.

Juniper is one, obviously, but Juniper has the dual challenge of having no useful NFV position and no professional services group to harness the massive integration opportunity that NFV would create.  Can they build one?  Wouldn’t it be more likely that a competitor (like Cisco, who also has servers) would do that faster and better?  The cloud is a race.  Nobody has convincingly won or lost it yet, but the trends are toward creating more losers than winners.

Verizon and IBM Tell Different Stories

Earnings are probably a dull topic to many who read my blog, but they are vitally important in understanding tech revolutions and evolutions.  Nothing happens in tech unless someone with credibility sells a product to a buyer, and earnings are a good measure of how that’s going for the key players.  Today I want to talk about IBM and Verizon, to pull some market lessons from their results.

IBM reported its quarterly results, and they were below lackluster—pretty much as I’d expected they would be based on the trends I’ve cited in our survey showing IBM’s losses in strategic influence.  While IBM has been able to cut costs and sustain at least a modicum of profit stability here, even the bullish Street analysts think that IBM has challenges in fundamentals that will be difficult to meet.

There was really nothing good in IBM’s numbers.  Even software sales, which were at least up, were up less than consensus forecasts—about 2%.  This, from a company who at its strategic peak was driving some categories of software up 18%.  Services showed some improvement but still missed, and hardware was off across the board.  IBM’s guidance was light versus expectations, but many Street analysts are candidly saying that absent financial gimmickry IBM isn’t likely to make its numbers for 2014.

To me, IBM’s results shout out their problem of strategic positioning and strategy.  I think that if I were charged by the marketing gods to bet my future on positioning a single company portfolio, I’d rather do it with IBM’s than with a competitor in the IT space.  Even a modest effort at singing and dancing could produce buyer emotions that would quickly elevate IBM’s brand and visibility.  Why, then, does IBM seem incapable of doing that?  First, they might disagree with my assessment that it would be easy.  Second, they might simply not have the human collateral, and I think that’s the problem.

Any organization evolves to suit its market, and the “market” it’s evolving to is the one it’s addressing and not the overall ecosystem.  Zebras don’t respond to opportunities in the high canopy, they respond to grass.  That creates a problem when a company gets strategic myopia; they narrow down, their people narrow with their goals, and when fruit is falling on their head while they starve, all they can do is dodge.  IBM has seen its strategic influence decline in every survey I’ve done since the spring of 2011, and in two of every three surveys for the two-and-a-half years prior to that.  Their bump in 2011 is instructive too; they gained because it was perceived that they’d be a guidepost in cloud transformation—from the software to the network.  They declined in every single one of those categories in the fall survey of 2011 and in every one thereafter, because IBM did not deliver on buyer expectations.

This is dire for IBM, make no mistake.  It could be manna from heaven for Cisco, except that Cisco has some of the same myopia problems that IBM has.  And Cisco hit its strategic low in the fall of 2011 and has been bouncing back ever since.  They have a bully pulpit from which to launch a sexy story if they can figure out how to talk about something other than the “Internet of the Known Universe Space-Time Continuum of Things.”  IBM’s head is sinking below the pews.

Verizon’s numbers came in better than expected, but most of their goodness and light was attributable to wireless, where Verizon gained accounts and also boosted ARPU by about 7%.  This demonstrates that wireless is the bright spot for the network operators, and of course Verizon faces greater competition in the space as AT&T, T-Mobile, and Sprint all take aim at the top dog.  Truth be told, though, even without competition the current situation can’t go on.  All the operators tell us that they expect wireless profits and even revenues to plateau and even decline by late 2015 unless something radical happens.

Wireline was mixed, in my view.  FiOS ARPU was up almost 11%, boosted in no small part by an almost 50% take-up in Quantum Internet.  Verizon has also upped the penetration of FiOS into its supported market areas to nearly 40%.  However, recall that Verizon has essentially ended the build-out for FiOS, so the fastest growth in that area is likely now behind them.  Cable is being more aggressive in competing, too, and Verizon is undertaking changes to try to bring IP-based video viewing into its mainstream.  Given that’s not worked for much of anyone, it could be an expensive distraction.

But unlike IBM, Verizon has some good news.  The telcos are generally more about operationalization than the cable companies, and while that means that cable could jump out and do something with opex that would transform their business, they’ve not been able to do that up to now and don’t seem to be working much on it.  NFV and SDN would be more transformational to cable companies than to telcos, but the cablecos all seem to be letting their technical arm CableLabs run those races on their behalf.  Nobody thinks you’re taking a market seriously when you send a stand-in instead of your lead players, and they’re right in this case.

For Verizon, the good news this quarter means that the company has at least a calendar year in which to work through some important service-layer, cloud, and opex improvements that would cut costs and boost profits with new services and more agile market responsiveness.  That doesn’t guarantee they will do that, of course, but the option is on the table.  AT&T has much the same situation, but their Domain 2.0 initiative suggests they may be a bit more committed to an architectural transition to lead a business transformation.  Watch both these big telcos; they have the cash flow and the incentive to be leaders in the network of the future.

Getting to the Bottom Line for SDN and NFV

One of the imponderables about the future of networking is how it’s going to be paid for.  Nobody is going to invest in a network without ample return, which in financial terms means an ROI that’s at least equal to the investor’s current internal rate of return.  There’s a lot of pressure to “boost benefits” on a project, but that’s hard when you aren’t really sure how those benefits are derived in the first place.

If you look at the “cost” of a network today, you find that on the average about half of that cost represents capital spending and half represents operations expenses (capex and opex).  This ratio varies depending on geography because areas with higher labor costs or difficulties retaining skilled personnel will tip more into the opex direction, of course.  The point is that if you look at network cost alone, you have a rough capex/opex balance.
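
A trivial worked example (illustrative numbers only, built on the rough 50/50 split above) shows why that balance matters when a seller pitches capex-only savings:

```python
# Illustrative numbers only: assume the rough 50/50 capex/opex split.
capex, opex = 50.0, 50.0            # units of total network cost
tco = capex + opex

# A 25% capital saving -- a big claim -- moves total cost much less:
capex_saving = 0.25 * capex
print(f"TCO reduction: {capex_saving / tco:.1%}")   # -> 12.5%

# And if the new technology complicates operations by even 10%, half
# the gain is gone before it's booked:
net = capex_saving - 0.10 * opex
print(f"Net reduction: {net / tco:.1%}")            # -> 7.5%
```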

When something like SDN comes along, there’s a group that will tout the “savings”, but the fact is that on a large scale we have little data on which to base any presumption of savings.  Buyers in our fall survey (both businesses and service providers) told us that they were not aware of any mature SDN solution by a 3:1 margin.  Among enterprises, only about 18% could draw an SDN configuration with the elements labeled, and half of those could put the names of products/vendors in the boxes.  Slightly less than 8% had done any capital comparison of costs, and of that group less than half reported that any actual savings would accrue.  I think it’s pretty clear at this point that capital cost benefits for SDN are still speculative.

Where SDN has shown benefits is in cloud data center networking, or actually any application where a large number of servers and a lot of WAN connectivity have to mesh.  Much of the capex savings here comes from better utilization and resiliency. Enterprises think that they would see similar savings at the WAN level where they actually build their own WAN connections, but the trend in the enterprise is toward consumption of architected VPN services for the WAN, so SDN doesn’t impact their plans directly.

IMHO, you’ll rarely create an SDN justification based on capital cost, whether you’re a carrier or an enterprise.  That makes opex reduction the cost-savings target, and unfortunately SDN opex savings are totally mythical according to buyers.  The percentage of enterprises who could delineate where SDN could save operations costs is down in the statistical noise level, and among the service providers only about 21% could recognize savings categories.  Only 7% of operators thought they could realize the savings, and that low level of realization is due to the fact that they can’t readily assimilate SDN operations into network operations on a larger scale.

This is the very problem that plagues NFV, but with NFV the operators have a better handle on the market dynamic.  A large majority of operators believe that traditional network equipment from the major vendors (excluding Huawei) is over-priced, and that their best strategy for reducing capex is simply to put price pressure on vendors.  In part, this is because the NFV value proposition works best for middle-box elements and not the devices that actually handle the bit-pushing.  However, operators think that as price pressure on connection/transport equipment increases, vendors will jack up the prices and margins on their middlebox products.  That’s a concern for operators because the operators see higher-layer services (often created by appliances or middle-box gadgets) as their own profit salvation down the line.  NFV is as much a preemptive strike as a current remedy, capex-wise.

But operators know that you have to look hard at opex even for things that are supposed to be justified by capex.  They also know, better than enterprises, that network operations is complicated by the existence of multiple models of management, multiple technology choices per model, and a generally service-specific capital program that often creates a lot more disorder in the management layer to save a few bucks on capital costs.

One operator I talked with recently summed it up nicely, I think.  “We have traditionally designed networks to be cheap to buy, and now we have to learn to design them to be cheap to run.”  The notion that TCO is moving toward opex dominance is recognized by buyers and sellers, everyone tells me, but the sellers still think that’s an excuse to spend more on gear even when compensatory savings in operations costs can’t be proven.  And operators tell me that they think every single vendor TCO story is crap…no exceptions.  Frankly, they’re right.

What SDN and NFV need, both in my view and in the view of the “literati” who really get this stuff in the buyer space, is a real architecture that covers the entire network geography and that extends upward through all the service elements—including hosts and caches and software features—to the management layer.  Everyone agrees that management automation is the only path to optimum opex, but you can’t automate a process you can’t describe at the software level, and we still don’t have a description of what an SDN or NFV service looks like, or how to specify the functional and performance objectives of the pieces.
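
To make the “describe it before you automate it” point concrete, here’s a minimal sketch of what a machine-readable service description could carry—functional and performance objectives per piece—and the kind of check a management layer could then automate.  The schema is entirely invented; as the paragraph above says, no such standard description exists yet:

```python
# Invented schema: the point is that each piece declares objectives a
# management layer could automate against; no standard exists yet.
service = {
    "name": "business-vpn-with-firewall",
    "pieces": [
        {"element": "access-link", "kind": "legacy",
         "objectives": {"availability": 0.999}},
        {"element": "vFirewall", "kind": "vnf",
         "objectives": {"availability": 0.999, "max_latency_ms": 5}},
        {"element": "vpn-core", "kind": "sdn",
         "objectives": {"availability": 0.9999}},
    ],
}

def violations(description, observed):
    """Compare observed per-piece state to declared objectives --
    the kind of check a management layer could automate."""
    out = []
    for piece in description["pieces"]:
        seen = observed.get(piece["element"], {})
        for metric, target in piece["objectives"].items():
            if metric == "max_latency_ms":
                if seen.get("latency_ms", 0) > target:
                    out.append((piece["element"], metric))
            elif seen.get(metric, 1.0) < target:
                out.append((piece["element"], metric))
    return out

# A degraded firewall VNF trips both its objectives:
print(violations(service, {"vFirewall": {"availability": 0.99,
                                         "latency_ms": 7}}))
```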

There’s some hope here, I think.  Network operators, who know the network is their stock in trade and not just a pencil that gets pushed around in the course of their real business, are now looking for answers and not platitudes.  In the NFV activity I can see real maturity of thought on the management side, and the TMF is collecting all the NFV-related activities into a set of sessions in their Team Action Week to focus better on how evolved network models interact with management concepts.  Hope is good, but it’s also still true that a responsible business case for either SDN or NFV would be difficult to make today without some extensive pilot testing, and that may mean that little happens with live deployment of either until 2015.

CloudNFV TMF Catalyst Proposal Approved!

I’m very pleased to report that I’ve been notified by the TM Forum that the CloudNFV Catalyst proposal we submitted for the June Nice conference has been approved.  This Catalyst is aimed at demonstrating that TMF principles (GB922 and GB942) can be used to guide not only the deployment of virtual functions but also their ongoing management and their integration with non-NFV elements of services.

We will be meeting others involved in Catalysts at the Team Action Week meeting in Madrid in mid-February (just who all will be there from CloudNFV isn’t firm yet), and we’ll be providing a tutorial on our expanded management architecture, the foundation for the Catalyst, prior to that meeting.

All of you who read my blog know that CloudNFV is indebted to the TMF work for its high-level modeling, and to the ETSI NFV ISG for their work on functions virtualization and orchestration of virtualized elements.  CloudNFV is using the same prototype implementation to drive both the ETSI ISG PoC and the TMF Catalyst, and we hope this will demonstrate that there is a way to approach virtual functions and NFV evolution that is consistent with and conserving of OSS/BSS principles.  At the same time, we hope to demonstrate that our model can be used to selectively augment or even replace legacy operations practices in services and applications where major transformations are dictated either by service agility or cost considerations.

We are eager to work with management/operations vendors in later phases of our activity, and we remind those vendors that we have a process defined on the CloudNFV website to frame integration project proposals and move them along as fast as our resources permit.  We will be working to set up our Catalyst workspace and we’ll provide a link to that when it’s available.  We’ll also keep everyone posted on our activities on our own website.  This is an open project, and we intend to share our findings fully.

We thank the TMF for considering our efforts worthy of acceptance, and I thank the members of the CloudNFV team and in particular the sponsors of the PoC and Catalyst for their efforts and support.