“Virtual Networking” Means Networking the Virtual

It’s another of those recap Fridays, a chance to pick up news that was pushed aside by other developments and to drag some cohesion out of the general muddle.  I think the theme of the week’s earnings calls has been “an industry in transition” and so I’ll stay with that theme and fill in some interesting developments and rumors.

One rumor I particularly like is that Oracle is looking to develop a cloud-based voice service leveraging its Tekelec and Acme assets.  The service would be, says the rumor, offered by Oracle directly at a wholesale level to operators who didn’t want to run their own voice networks, offered as a product package for operator replacement of legacy voice, or both.

Voice services are an increasing burden to operators, particularly in the wireless space.  The problem is that you can do Google Voice or Skype or other free options over Internet dialtone, and in any event the younger generation doesn’t want to call at all—they text.  The net result is that you have a fairly expensive infrastructure in place to serve a market that is only going to become less profitable and less a source of competitive differentiation.  Most people in our surveys, speaking for themselves personally as voice users, say that if they didn’t have to worry about 911 service they’d probably not have paid voice service at all if the option existed.  No surprise, a 911 solution is part of Oracle’s rumored plans.

A second development that’s not a rumor is some interesting concepts contributed to OpenDaylight.  Recall that in that framework, and pretty much all OpenFlow controller architectures, there’s a northbound API set that connects what I’ll call “service applications” to OpenFlow.  A service application is one that creates a connection network to support application/user needs; IP and Ethernet are legacy models of service connection networks, but other models are possible.  Two are now being proposed by Plexxi and ConteXtream.

Plexxi is contributing its core Affinity concept, which is a dynamic way of visualizing connectivity needs at the virtual level, independent of actual network topology or physical infrastructure.  It might be interesting to consider a framework for SDN that started with base-level adaptive forwarding and then built virtual network overlays based on Affinities.  The key would be getting service properties pushed down as needed to create aggregate traffic handling rules.  ConteXtream is contributing an application of Cisco’s proposed Locator/Identifier Separation Protocol (LISP, another of those overloaded tech acronyms), which makes it possible to have assigned logical addresses and independent physical locators.  This is yet another example of an SDN overlay mechanism, but it has interesting potential for mobility.  Both demonstrate that SDN’s biggest value may lie in its ability to define “services” in a whole new way, unfettered by network hardware or legacy concepts.
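
To make the LISP point a bit more concrete, here’s a toy sketch (mine, not ConteXtream’s or Cisco’s code) of the basic idea: a logical endpoint identifier is resolved to a physical locator at forwarding time, so an endpoint can move without renumbering, which is exactly why the mobility angle is interesting.

```python
# Toy illustration of LISP's separation of identity from location.
# Names and values are invented for the sketch; a real deployment uses
# map-servers and tunnel routers, not a Python dict.

class MappingSystem:
    """Hypothetical EID-to-RLOC map, standing in for a LISP map-server."""
    def __init__(self):
        self._map = {}

    def register(self, eid, rloc):
        # The endpoint keeps its logical address (EID); only the locator changes.
        self._map[eid] = rloc

    def resolve(self, eid):
        return self._map.get(eid)

ms = MappingSystem()
ms.register("10.1.1.5", "xTR-metro-east")    # device attaches in one metro
print(ms.resolve("10.1.1.5"))                # forwarders tunnel toward this locator
ms.register("10.1.1.5", "xTR-metro-west")    # device roams; the EID never changes
print(ms.resolve("10.1.1.5"))
```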

Needless to say, none of these things are going to change the world overnight.  Network virtualization faces both a conceptual and a practical barrier, and it’s not yet clear how either will be overcome.  On the conceptual side, virtualization of networking opens a major can of worms regarding service assurance and management.  If nothing is real, how do you send a tech to fix a problem, or even decide where the problem might lie?  On the practical side, the services of the network will in today’s world be substantially constrained by the need to support current IP/Ethernet endpoints (or you have no users) and the need to support an orderly evolution from currently installed (and long-lived in depreciation terms) network assets.  There’s also a question that spans both issues: how do you define middlebox functions in a virtual service?  We depend on these functions for much of networking today.  NFV might offer some answers, but the standards process there is ongoing.

You can argue that Oracle’s rumored service is an example of a virtual PSTN, and obviously the real developments I’ve cited here are also linked to virtualization.  You can only pass real traffic on real links, so virtualization must inevitably map to the real world.  You can only generate and deliver traffic to real users and applications, so there’s another real-world mapping.  What is in between, in OSI terms, are the higher protocol layers.  In network equipment terms, it’s the transport infrastructure.  I think it’s clear that if we redefine the notion of “services” to take advantage of the agility of virtualization, we make the transport network a supporter of virtualization and not a supporter of users.  What does that do to network equipment?  My model says that in the next two years (through 2015) we’ll see an expansion in network capex driven primarily by metro build-out and cloud deployment but also by the opticalization of the core.  After that, we’ll see a world where virtual missions for network devices gradually shift value up to the stuff that does the virtualizing.

Amazon’s numbers may be a reflection of this same transition.  The company has a heady stock price, one that could not possibly be sustained by its online retail business.  The Street wants growth, which only the cloud and cloud-based services can bring.  Amazon is going to have to lead the service charge to justify its P/E multiple, and that means investing to prepare for the mission, which raises costs and lowers profits.  Their believers haven’t fled after unexpectedly bad profit news, likely because they realize that you have to bleed to heal, strategy-wise.  Amazon is redefining the relationship between devices and networks, and that is what everyone has to accommodate over time.  Along the way, they may end up redefining what we mean by SDN.

Spotlighting Carrier Capex and Profit Plans

Verizon’s comments about capex and generally better visibility from vendors have helped the telecom equipment space look a bit better, and of course that’s been our forecast since the spring.  My model shows general telecom spending will increase through 2015, with spending in all equipment areas showing some gain.  This represents the “last-gasp” modernization funded by mobile revenues and unfazed by SDN and NFV.  Beyond that point we’ll see gradual erosion in spending on network equipment, first in the core, where the long-term effects of a shift to optics will be felt.  The core, recall, is the least profitable part of the network for operators.

You can see in AT&T’s earnings that they are expecting to have to do some creative framing of new fees and services if they’re to sustain profit and revenue growth going forward.  The capex increases will likely put all of the major operators near zero profit growth if they don’t cut other costs, and many are concerned about further cuts in OAM&P for fear of creating service issues that would drive churn.

Operators I talk to have three touchstones for future profit growth.  First, they believe that initiatives like SDN and NFV can lower overall costs, though some now admit that they believe the early estimates of savings are generally too high.  Second, they believe that the prepay mobile market can, with the proper programs and handsets, translate either to postpay at a higher ARPU or to featurephones with a la carte feature pricing.  Finally, they believe that there is a service-layer framework for profitable cloud-based services and features out there somewhere.

Readers of my blog know that I’m not a big fan of technology changes driven by cost management goals.  Historically operators have under-realized the savings from this kind of investment, and in nearly all cases the problem is that they’ve been unable to create a reduction in operations cost corresponding to the capex reduction.  In fact, Tier Ones tell me that opex as a percentage of total cost has increased more than capex has been reduced by new technology.  Even operators who report higher profits on services like VoIP admit that part of the equation is limiting customer interaction.  They could have done that with the old stuff.  So while I think that SDN and NFV cost-savings won’t create a steady stream of profits, I do think they can help prime the pump.

The “a la carte” feature stuff is linked to operator views that the majority of wireless users will pay for some special features (like international roaming for mobile) on an episodic basis but not in monthly-subscription form.  Today we tend to see two classes of wireless service—prepay and postpay.  The latter, in the US in particular, trends toward unlimited usage and complete feature support and is aimed at customers who rely totally on mobile communications, especially for social reasons.  The former is for those who tend to use only basic services and are trying to control costs.  Operators tell me that their modeling shows that prepay revenues could be raised by as much as a third by introducing special-service packages on a shorter-term basis.  Some of this is already visible in calling packages for Mexico or data packages for prepay customers.

The “featurephone” notion is an offshoot of the a la carte model.  European operators and many in emerging markets are interested in the Firefox OS framework, which would allow hosting of features on-network and a cheaper phone.  AT&T’s profits were hit a bit by smartphone subsidies, so you can see why the latter point would be interesting.  But inside the whole featurephone notion is the fact that operators recognize that ad revenues will be an unattractive profit strategy for them.  For the most part the ads are linked to social or other portals already in existence, and competing with them is probably unrealistic.  In addition, total available online ad revenue isn’t enormous.  A pay-for market is better, and if people will pay for apps on a phone they’d pay for network-hosted apps.  Most users likely wouldn’t even know the difference.

The service-layer and management stuff is at the heart of this, of course.  Operators are already spending on cloud data centers (F5’s gains in the SP space were linked to that, IMHO) and they’re eager to leverage them (as the NFV stuff shows), but the model of an agile service layer is still elusive.  You need to define a “PaaS” platform for applications, you need a deployment model, and you need operations.  It’s not clear how long any of those will take to mature, but one factor that may help things along is the increased interest of network and other vendors in what could be called “VNF authoring”.  NFV is helpful to the extent that there are properly designed virtual network functions to deploy, and Alcatel-Lucent for example has just established a program, its “CloudBand Ecosystem”, to encourage virtual function development.

A VNF authoring framework is a bit of a fuzzy concept.  At the minimum, you need to have an application structure that can be deployed by NFV for hosting, and it’s not clear yet from the state of the ISG work just what the requirements for that would be.  It might be necessary, or helpful, to include some platform APIs for integration with deployment and management services, but this area is even fuzzier.  We don’t know yet whether things like horizontal scalability are expected to be application-controlled or controlled by a platform deployment or management system, and we don’t know much of anything about NFV management in an official sense.  I don’t have information on the program yet, and I don’t know whether these details have been revealed or can be made public.  I do think that most vendor NFV strategies are likely to include a program to support VNF authoring at some level.
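
To show why “authoring” is such a fuzzy concept, here’s a purely hypothetical sketch of the kind of descriptor a VNF program might ask a developer to supply.  None of these field names come from the ETSI ISG work; the open questions above (who controls scaling, what the management hooks look like) show up here as guesses.

```python
# Hypothetical VNF descriptor sketch.  Every field name is invented for
# illustration; the ISG had not defined these requirements at the time.

vnf_descriptor = {
    "name": "virtual-firewall",
    "image": "vfw-1.0.qcow2",                      # what NFV actually hosts
    "resources": {"vcpus": 2, "ram_mb": 4096, "disk_gb": 20},
    "interfaces": ["mgmt", "ingress", "egress"],
    "scaling": {
        "controlled_by": "platform",               # or "application"; an open question
        "min_instances": 1,
        "max_instances": 8,
        "scale_metric": "sessions_per_second",
    },
    "management_hooks": {                          # the fuzziest area of all
        "health_check": "/status",
        "metrics": "/metrics",
    },
}

print(vnf_descriptor["scaling"]["controlled_by"])
```

The point isn’t the syntax; it’s that until questions like the scaling one are answered, a developer can’t know what to build against.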

VNF authoring is a natural step toward service-layer deployment because unless you do a very bad job at defining the hosting framework for VNFs, you should be able to use that same framework to deploy SaaS elements or components of cloud computing services.  That would make a VNF authoring strategy a natural step toward service-layer development programs and the de facto definition of a service-layer architecture.  I do want to stress, though, that it’s the management of virtual functions or cloud service elements that’s hard, not the deployment.  Otherwise we’re back to my early point; operations costs overwhelm capital benefits and the service price is too high to sustain market interest.

 

Juniper Will Get a New CEO

Juniper reported its numbers, which showed better profits and a slight improvement in revenue, and then issued a pretty nice 3Q outlook to boot.  The initial reaction of the Street was mixed; some hoped for better performance given Juniper’s multiple, and others were happy.  But earnings may not have been the big news.  Kevin Johnson, Juniper’s CEO, announced he would be retiring.

Rumors of Johnson’s departure have been circulating this year.  He arrived from Microsoft to replace Scott Kriens, the original CEO and now Chairman, and many thought he might push Juniper out of the box-fixation mindset that has been its legacy.  He didn’t, and in my personal opinion he didn’t really grasp the difference between “software”, “embedded software”, and “network software” in an SDN and NFV age.  Juniper may have embraced software in an organizational sense, but not in the sense that it needed to.

What should have been done?  Clearly, Juniper like other vendors was facing pressure from operators to support the new operator monetization goals.  Logically, that meant providing service-layer software that would allow operators to build new services that were competitive with those of OTTs, but also to recast current services in a more modern, cost-effective, profitable, and flexible way.  Juniper had an initiative, “Junos Space”, that could easily have done that, and when I reviewed the concept at launch almost three years ago I believed they would take the steps it made possible.  They did not.  Space became a very simple “operations” tool, a slave to cost management and TCO and not even a factor in monetization.

When SDN and NFV came along, Juniper embraced the former and at least in a positioning sense ignored the latter.  Service chaining is an NFV use case, but Juniper presented it as an SDN application.  Yes, you can chain services with SDN, but unless you frame service chaining in the operations and deployment context of NFV you don’t have the savings that made it interesting in the first place.  I called Juniper out on their tendency to sing SDN songs about NFV concepts, but they’ve really not changed that theme at all.

I don’t know what Kevin Johnson thought Juniper software would look like.  Like a Windows ecosystem?  Some inside Juniper have told me that’s exactly what he thought.  Like operations glue to link Juniper to OSS/BSS?  Some say that too.  The problem is that the ideal Juniper software story isn’t either of those, or perhaps it’s both but at another level.  Network software is about the virtual world that networking lives in, and in particular about the elastic and critical boundary between SDN and the network, and NFV and the cloud.  NFV, which to be fair came about long after Johnson joined Juniper, defines a framework that is aimed at costs but can be applied to revenue.  The critical error Juniper made under Johnson’s command was to ignore NFV because it seemed to be about servers, and embrace SDN because it seemed to be about software and networks.  Semantics are a bad way to engage the customer.

Cisco is the ranking player in networking, in SDN, and even in NFV, even though its positioning is as vacuous as that of Juniper.  Why?  Because they’re the incumbent, and all they have to do is kiss the right futuristic babies and they can hold on.  Juniper has to come from behind.  Its earnings are not a reflection of its strategic success—it’s losing ground steadily in strategic influence.  The earnings reflect the inertia of the industry, an industry that buys stuff on long depreciation cycles.  It will take years for operators to wean themselves off Juniper gear even if they try, and in that time Juniper needed to be darn sure they didn’t try.  That’s what Kevin Johnson was likely hired to do, and he didn’t do it successfully.  Juniper used to be a box company that couldn’t position strategically.  Now they’re part box company, part traditional software company, and still grappling with the real problem of defining “network software” and their role in it.

The Cisco acquisition of Sourcefire is even more logical in the light of Johnson’s departure.  If Cisco can kill Juniper in enterprise security and cloud security while Juniper fumbles for CEO candidates, it won’t matter much who they end up with.  And security is only one of three or four cloud-related killer areas where Juniper needs a cogent strategy to develop a lead.  If they miss any of them, they’re at risk of losing market share, and if their P/E drops to the industry average they’re a seven-dollar stock.  Think M&A, but think of it under decidedly buyer’s-market terms.

Watch the CEO choice, and watch what they do in the first hundred days of their tenure.  This is do-or-die time for Juniper.

Can Cisco Ride Sourcefire to Cloud Supremacy?

Cisco today announced one of their bigger acquisitions—security specialist firm Sourcefire.  The move is likely linked to the trends in security that I’ve seen in our surveys—most recently the spring survey published in Netwatcher just a few days ago.  It’s also likely to be another Cisco shot at Juniper, whose enterprise strategy is heavily linked to security.

Enterprises have generally had a bit of trouble accepting the idea that security was a problem the network should solve.  For years, they rated it as a software issue even as publicized security breaches illustrated that hacking was a big problem for everyone.  Why?  Because they saw this as an access security problem, thus a software problem.  This view was held by about three-quarters of businesses through the whole last decade.  What changed things was the cloud.

As cloud computing became more of a strategic issue, businesses started thinking about security differently.  That started with a dramatic increase in the number who recognized multiple security models—network, access, software.  In just two years the number of businesses who saw security as multi-modal increased sharply.  The number who said cloud security was a software issue fell by 10% in just the last year, and there was a significant increase in the number who saw cloud security as a network issue.

For somebody like Cisco, this is important stuff.  If network-based security is linked to cloud adoption, then Cisco clearly needs to be on top of network-based security if it hopes to achieve and sustain cloud differentiation.  Given that Cisco’s main cloud rivals are not network companies, Cisco’s best offensive play would be a holistic network strategy that included security.

That’s particularly true given rival Juniper’s reliance on security for enterprise engagement.  Juniper hit its peak of strategic engagement in security just at the time when network security was about to go on a tear, and surprisingly they lost ground steadily as security-in-the-network was gaining.  Cisco, who dipped a bit in influence in response to stronger Juniper positioning a couple years back, suddenly gained.  I think that can be attributed to Cisco’s taking a more holistic approach to “network” and “security”, something that the Sourcefire acquisition could easily enhance.

There’s also a strategic shift to be considered here.  With operators pushing for virtual appliances, security is an obvious target, and hosted security is also an element in rival Juniper’s plans for SDN and NFV.  Cisco wants to focus both the SDN and NFV debates on expanding higher-layer network services and capabilities, in the former case through APIs like onePK and in the latter by introducing more hostable stuff.  Sourcefire could offer both those options.

What isn’t clear at this point is whether Cisco would create or endorse a “structural” connection between security and SDN.  If you do application-level partitioning of cloud data centers—as opposed to purely tenant-driven partitioning—you have the potential for creating access control by creating application-to-worker delivery conduits at the SDN level, meaning that only workers or groups of workers with explicit rights could even “see” a given cloud app.  This is a logical path of evolution for SDN security, but it might be seen to undermine Sourcefire’s model of more traditional IDS/IPS.
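
Here’s a rough sketch of what that “structural” approach could look like, with invented addresses and rule formats.  The point is that connectivity itself becomes the access control: a worker group without a forwarding path simply can’t see the application.

```python
# Sketch of application-to-worker "conduits": whitelist forwarding rules are
# generated only for authorized worker subnets; everyone else has no path.
# Addresses and the rule format are invented for illustration.

APP_ADDRESS = "10.20.0.10"                            # hypothetical cloud app address
AUTHORIZED_GROUPS = ["10.8.1.0/24", "10.8.7.0/24"]    # worker subnets with rights

def conduit_rules(app_ip, groups):
    """Build whitelist-style rules; the default is 'no path', not an explicit deny."""
    rules = [{"match": {"ipv4_src": prefix, "ipv4_dst": app_ip}, "action": "forward"}
             for prefix in groups]
    rules.append({"match": {"ipv4_dst": app_ip}, "action": "drop"})  # catch-all last
    return rules

for rule in conduit_rules(APP_ADDRESS, AUTHORIZED_GROUPS):
    print(rule)
```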

One thing for sure; Cisco is viewing IT and networking ecosystemically, a luxury that UCS gives it.  For all of Cisco’s enterprise rivals, there will be a significant challenge in matching that vision.  HP has both servers and networking, but its presence is more in the data center than in the WAN and it’s not been successful in getting traction on its SDN approach.  IBM OEMs its network gear and has been losing strategic influence in all things network.  Juniper needs a superstrong security and data center story, but security has lost ground over the last two years and their data center strategy has been muddled by poor QFabric positioning.

Cisco beats HP and Juniper in security influence even not considering Sourcefire.  IBM and Microsoft still lead Cisco in security influence, but obviously a shift in focus toward network-based security would benefit Cisco and hurt both its higher-rated rivals.  Even now, Microsoft leads Cisco by less than 10% and IBM leads by about 25%.  We could see Cisco take the number two slot by next spring, I think, and threaten IBM a year later.

Security is a big budget hook, the thing that has gotten more investment each year despite economic conditions.  If it can be made to pull through a larger network portfolio, which I think is possible, then it could cement Cisco as undisputed network leader in the enterprise network, and go a long way toward establishing Cisco as the player to beat in private clouds too.

I think the only solution for rivals is to get way out in front of Cisco on the SDN and NFV aspects of security.  Cisco will likely tread softly in creating revolutions in either space because of the impact it could have on their broader product lines.  Since all of Cisco’s enterprise rivals have a much smaller market share in network equipment, they could afford to poison the well overall just a bit in order to gain share at the leader’s expense.  Will they do that?  It’s possible, but remember that none of Cisco’s enterprise rivals have been able to position their way out of a paper bag so far.  Cisco has already gained more in security influence than any competitor.  They could do more, still.

Setting Boundaries in a Virtual World

Everyone knows you have to set boundaries in the real world, to ensure that friction where interests overlap is contained and that reasonable interactions are defined.  One of the things that’s becoming clear about virtualization—not a la VMware but in the most general sense—is that even defining boundaries is difficult.  With nothing “real”, where does anything start or end?

One area where it’s easy to see this dilemma in progress is the area of SDN.  If you go top-down on SDN, you find that you’re starting with an abstract service and translating that to something real by creating cooperative behavior from systems of devices.  OpenFlow is an example of how that translation can be done; dissect service behavior into a set of coordinated forwarding-table entries.  Routing or Ethernet switching did the same thing, turning service abstraction into reality, except they did it with special-purpose devices instead of software control of traffic-handling.
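
As a toy illustration of that translation, imagine a controller that takes an abstract “connect A to B” service and emits one forwarding entry per device along a chosen path.  The switch names, port map, and entry format below are invented; a real controller would push the equivalent via OpenFlow.

```python
# Dissecting one abstract connection into coordinated per-switch forwarding entries.

service = {"endpoints": ("hostA", "hostB"), "path": ["sw1", "sw2", "sw3"]}
ports = {("sw1", "sw2"): 2, ("sw2", "sw3"): 3, ("sw3", "hostB"): 1}   # egress ports

def compile_service(svc):
    """Turn the service abstraction into a forwarding-table entry for each switch."""
    hops = svc["path"] + [svc["endpoints"][1]]
    entries = []
    for this_hop, next_hop in zip(hops, hops[1:]):
        entries.append({
            "switch": this_hop,
            "match": {"eth_dst": svc["endpoints"][1]},
            "action": {"output_port": ports[(this_hop, next_hop)]},
        })
    return entries

for entry in compile_service(service):
    print(entry)
```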

But who’s to say that all services are made up of forwarding behaviors?  If we look at “Internet service” we find that it includes a bunch of things like DNS, DHCP, CDNs, firewalls, and maybe even mobility management and roaming features.  So a “service” is more than just connection even if we don’t consider SDN or virtualization trends at all.

The cloud people generally recognize this.  OpenStack’s Neutron (formerly Quantum) network-as-a-service implementation is based on a set of abstractions (“Models”) that can be used to create services, and that are turned into specific cooperative behavior in a community of devices or functional elements by a “plugin” that translates model to reality.  You could argue, I think, that this would be a logical way to view OpenFlow applications that lived north of those infamous northbound APIs.  But OpenFlow is still stuck in connection mode.  As you move toward the top end of any service, your view must necessarily become more top-down.  That means that SDN should be looking not at simple connectivity but at “service” as an experience.  It doesn’t have to be able to create the system elements of a service—DNS, DHCP, and even CDN—but it does have to be able to relate its own top-end components (“northern applications”) with the other stuff that lives up there and that has to be cooperated with to create the service overall.
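
For a sense of what “model to reality” looks like from the consumer’s side, here’s a rough sketch using the python-neutronclient of that era; the endpoint, credentials, and names are placeholders.  The caller only ever touches the model (network and subnet), and whatever the plugin does with it is invisible, which is the point.

```python
# Sketch of consuming Neutron's abstractions; endpoint, credentials, and names
# are placeholders, and this assumes the Neutron v2.0 client of the time.

from neutronclient.v2_0 import client

neutron = client.Client(username="demo", password="secret",
                        tenant_name="demo",
                        auth_url="http://controller:5000/v2.0")

net = neutron.create_network({"network": {"name": "service-net"}})
net_id = net["network"]["id"]

neutron.create_subnet({"subnet": {"network_id": net_id,
                                  "ip_version": 4,
                                  "cidr": "192.168.10.0/24"}})
# The backend plugin turns this model into real behavior; nothing about VLANs,
# tunnels, or flow rules appears at this level.
```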

Even the Neutron approach doesn’t do that, though OpenStack does provide through Nova a way of introducing hosted functionality.  The Neutron people seem to be moving toward a model where you could actually instantiate and parameterize a component like DHCP using Nova and Neutron in synchrony, to create it as a part of a service.  But Neutron may have escaped the connection doldrums by getting stuck in model starvation.  The process of creating models in Neutron is (for the moment at least) hardly dynamic.  An example is that we don’t model a CDN or a multicast tree or even a “line”.

The management implications of SDN have been increasingly in the news (even though network management is a beat that reporters have traditionally believed was where you went if you didn’t believe in hell).  It’s true that SDN management is different, but the fact is that the difference comes less from SDN than from the question of what elements actually control the forwarding.  When we had routers and switches, we had device MIBs that we went to for information on operating state.  If we have virtual routers, running as tenants on a multi-tenant cloud and perhaps even componentized into a couple of functional pieces connected by their own private network resources, what would a MIB say if we had one to go to?  This translation of real boxes into virtualized functions is really the province of NFV and not of SDN.

But SDN has its own issues in management.  The whole notion of centralized control of traffic and connectivity came along to drive more orderly failure-mode behavior and manage utilization of resources better.  In effect, the OpenFlow model of SDN postulates the creation of a single virtual device whose internal behavior is designed to respond to issues automatically.  Apart from the question of what a MIB of a virtual device would look like, we have the question of whether we really “manage” a virtual god-box like that in the traditional sense.  It is “up” as long as there are resources that can collectively meet its SLA goals, after all.  Those goals are implemented as autonomic behaviors inside our box and manipulating that behavior from the outside simply defeats the central-control mandate that got us there in the first place.
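
A small sketch of the dilemma: whether the virtual god-box is “up” is a computed judgment against its SLA goals, not a light on a device.  The numbers and structure below are invented.

```python
# Status of a "virtual device" as a judgment: it is up as long as the healthy
# resources that remain can still meet its SLA goals.  All values are invented.

sla = {"min_gbps": 40, "max_latency_ms": 20}

resources = [
    {"name": "path-a", "gbps": 25, "latency_ms": 12, "healthy": True},
    {"name": "path-b", "gbps": 20, "latency_ms": 15, "healthy": False},  # failed
    {"name": "path-c", "gbps": 30, "latency_ms": 18, "healthy": True},
]

def virtual_device_up(goals, pool):
    usable = [r for r in pool
              if r["healthy"] and r["latency_ms"] <= goals["max_latency_ms"]]
    return sum(r["gbps"] for r in usable) >= goals["min_gbps"]

print("virtual device up:", virtual_device_up(sla, resources))
# Prints True despite a failed path, which is exactly the management question:
# what would a traditional MIB report here, and where would you send a tech?
```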

In any case, what is a DNS server?  Is it a network function (in which case the NFV people define its creation and control), is it a cloud application (OpenStack’s evolution may define it), is it a physical appliance as it is in most small-site networks?  Maybe it’s an application of those northbound APIs in SDN!  All of the above, which is the beauty of virtualization—and its curse.  The challenge is that multiplicity of functional deployment options creates multiplicity of a bunch of deployment and management processes, and multiplicity doesn’t scale well in an operations sense.

I think we’re going about this whole network revolution in the wrong way.  I’ve said before that we have to have only one revolution, but we also have to recognize that the common element in all our revolutions is the true notion of virtualization—the translation of abstraction into reality in a flexible way.  If we look hard at our IT processes, our network processes, SDN, NFV, the cloud, management and operations, even sales and lifecycle processes related to service changes, we find that we’re still dealing with boundary-based assumptions as we dive into the virtual future, a future where there are no boundaries at all.  This isn’t the time for a few lighthearted bells and whistles stuck on current practices or processes; it’s time to accept that when you start virtualizing, you end up virtualizing everything.

Google and Microsoft: More than Mobile Problems

The earnings reports from Microsoft and Google followed the pattern of other tech reports from this quarter—a revenue miss offset at least in part by cost reduction.  There’s been a tendency for the Street to look at these two misses and declare a common cause—that both Google and Microsoft have failed to come to terms with mobile.  Wrong.  There is an element of common-cause here, but it’s related to my point yesterday about a tech industry focused on cutting costs rather than adding value.  You go after lower costs, and you succeed but bleed yourself out in the process.

Mobile isn’t a fad, but mobile change is an issue only for so long as it’s “changing”.  Advertising is surely impacted by mobile, but underneath all the hype the fact is that the biggest force driving mobile advertising differences is the difference between mobile and sedentary behavior.  If I’m out-and-about my use of online resources tends to be tactical, reflecting what my current behavior and goals require.  If I’m sitting at my desk or on a sofa, I’m grazing.  Mobile also presents less real estate on a screen to display something, and my own research says that users who do a search spend less time looking at the results on mobile devices.  You can see how this would impact Google, but it’s not clear what Google could do about it.

Microsoft is the same way.  Yes, Microsoft missed the boat with phones and tablets, and yes they probably lost quite a bit of money in potential phone/tablet sales.  But had Microsoft jumped on tablets and smartphones day one, would that not have reduced Microsoft’s sale of Windows for PCs even faster, hastened the shift to appliances?  Might that not have hurt more over time than waiting and losing that market?  Maybe, maybe not, but it shows that you can’t look at any given issue in isolation.

The proximate cause of Microsoft’s problems is the same as it was for Intel.  As computing technology improves, we can’t absorb the additional horsepower in the same application of the chips (and OSs) that we had.  Twice the performance of a laptop, these days, won’t generate instant refresh.  The improved price/performance has to be offset by increased volume, but if there’s no need to refresh then volume reduces.  Yes, tablets and smartphones are also hurting, but the shift to appliances and the need to increase unit-volume deployment is what’s driving those new gadgets.  And at some point, you fill that niche too.  So we look to smart watches, smart glasses, smart piercings for our navels, something we can swallow to convey our biologics automatically to our Facebook status…where does it end?  In commoditization.  You can never hope to automate everything.

For Google the problem is freeness.  Global ad spending is never going to be more than a percent or so of global GDP.  We cannot fund an industry, an economy on ads.  That Google may struggle with mobile advertising isn’t as significant in the long run as the fact that any ad-sponsored business will hit the wall eventually.  For years I’ve said that Amazon is the king of the hill in online companies, because it actually sells stuff.  Google’s advertising blitz is in some ways lining Amazon’s pockets, because the consumer who relies on Google to find a product is very likely to go to Amazon to buy it.  Google gets a few pennies in ad revenue and Amazon gets the whole retail margin.  Even if that margin is fairly low, you all know in your hearts that you are not going to spend more on advertising than on actually tendering the product to the buyer.

For Microsoft, there is neither a way of making PCs sell better nor a way to at this point capture the phone/tablet market.  For Google, there is neither a way to get significantly more share of online advertising without kicking off regulatory intervention, nor a way of growing that total market fast enough for its current market share to fuel its growth expectations.  Google needs to get people to pay for things.  Microsoft needs to be thinking about how the collection of technology that’s being linked to each user can be made into a cloud-facilitated and cooperative behavioral support ecosystem.

So why aren’t they?  I think there are three reasons.  First, the Street doesn’t want to hear long-term, they want to hear this-quarter.  Sell like hell today and let tomorrow take care of itself.  Well, it has.  Second, buyers are now conditioned to think in terms of getting free services and seeing reduced cost as the “benefit” of technology.  It’s going to be hard to wean them away from that.  Third, the online nature of news these days contributes to an instant-gratification cycle.  I get all kinds of requests to describe the workings of IMS and evolved packet core in 500 words.  I doubt you could even name the components in that space, so how do we introduce these wonderful new technology options that involve a collection of personal devices and a vast new cloud ecosystem?  Easier to say the new phone is “cool” or the service is free.  This quarter, we’re seeing that these indulgences aren’t free; the price is pretty high.

Earnings Show We Need Revolution Conservation!

We had some important tech earnings reports yesterday, and so we need to review them in a systematic way.  IBM and Intel are perhaps the two prototypical tech giants, and what they do in concert is a measure of the health of the industry.

Let’s start with IBM, who was a poster child for the view that software is where it’s at.  Their software revenue was up by about 4% y/y, which doesn’t sound great but is good in comparison with hardware numbers, which were off 17% by the same measure.  Mainframes were again the only systems bright spot for IBM, which is bad because clearly new customers are unlikely to come into the IBM fold by jumping into a mainframe.

In our spring survey, published in July’s Netwatcher, we noted that IBM has been on a two-or-more-year slide in strategic influence, and that the slide was almost certainly attributable to a lack of effective marketing/positioning.  IBM remains strong where its account teams are almost employees for all the time they spend on customer premises (read, mainframes).  They’re weak where channel sales diminish the ability of the sales process to overcome what are obviously major marketing weaknesses.

Intel had a similar quarter—disappointing revenue but better profits.  In Intel’s case it’s pretty obvious what the problem is, and in fact IBM’s problems in hardware also impact Intel.  Hardware price/performance has steadily increased over time, and while that’s been great for things like server consolidation via virtualization, it eventually becomes difficult to justify more muscle per system, which slows refresh.  In the PC space, people can stay with their old PC and buy a new tablet instead.  I think tablet sales are only part of the system problem; the other part is slower upgrades because we’re hitting the plateau in terms of how additional power can be usefully applied.

Revenue shortfalls are nothing unusual this quarter.  SAP also missed on revenues, as did Nokia and Ericsson.  What’s happening here in my view is that both Street analysts and the companies themselves have been relying on that mystical “refresh” driver or presuming (as Cisco has always done) that the demands of Internet users for bandwidth to watch pet videos will be met regardless of profits.  Clearly what’s happening is that the business buyer and the carriers are both demanding better return on incremental investment than current technology is offering, so they’re not refreshing things.

Verizon, who also reported, seems to validate some of this theme.  While the company is expected to boost capex modestly, it’s clear that the expansion is going to wireless and likely that wireline will actually take a hit, meaning that wireless spending will grow more than capex will.  Investment follows return, just like in your 401K.

On the network side, which is of course my focus, I think this demonstrates two key points.  First, there is going to have to be some creative cost management in network infrastructure for both enterprises and network operators.  The kind of revenue trend we’ve seen isn’t going to reverse itself in a year, so cost savings will be demanded to help sustain profits while revenues continue to tail off.  Second, eventually you have to raise the benefit side—revenue for operators, and productivity gains for enterprises.  This isn’t easy, and it’s a culture shift for vendors to support.  They don’t want to make that shift, but there is absolutely no choice in the long run.

Applying this to our current technology revolution trio of cloud, SDN, and NFV, I think there are also two key points.  First, we are not going to fund three technology revolutions independently.  Somehow all this stuff has to be combined into one revolution, a revolution that can manage the costs of the trio through combined benefits, and one that can also aggregate the benefits into one use case.  We’re anemic in this unanimity-of-revolutions department.  Second, the unified revolution has to be aimed at revenue in the long run and cost control as a short-term benefit.  That’s the polar opposite of how all three of our revolutions are seen today.  In my surveys, users were unable to articulate strategic benefits for any of the three technologies, only cost-reduction benefits.  Reduce costs and you reduce TAM; does this quarter really suggest we should be seeking lower revenue over time?  Vendors are crazy here, and they will have to get smart very quickly.

Cyan’s Metro Win: Shape of Things to Come

Cyan’s Telesystem win in packet-optical for its Z-series and SDN technology is an interesting indicator of some major metro trends.  While its victory over TDM is hardly newsworthy, it does show that packet advantages over TDM can justify a technology change—and any time you can justify technology change the barrier to doing something revolutionary is lower.  That’s one reason why metro is so important in today’s market.

Metro technology has historically been based on SONET/TDM, which offers high reliability and availability but is not particularly efficient in carrying traffic that’s bursty, like most data traffic.  Packet technology was designed (back in 1966 for those history buffs) to fill in the gaps in one user’s traffic with traffic from another, allowing “oversubscription” of trunks that gave as many as four or five users what appeared to be the full use of the path.  Obviously that saves money.

The challenge with packet has always been providing reasonable quality of service.  If bursty traffic bursts too much, the result is more peak load than a trunk can carry, which will cause at first delay (as devices queue traffic) and then packet loss (as queues overflow).  Sometimes packets could be rerouted along alternate paths, but such a move gives rise to a risk of out-of-sequence arrivals (which some protocols tolerate and others don’t).  The whole process of getting good utilization from packet trunks, creating economic benefits while keeping operations burdens low enough to build rather than erode those early savings, has become a science.  We usually call it “traffic engineering”.
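
A back-of-the-envelope sketch, with deliberately crude traffic assumptions and invented numbers, shows both sides of that trade: the trunk is sold several times over, and some fraction of the time coincident bursts exceed capacity and you queue or drop.

```python
# Crude oversubscription sketch: five bursty users share one 100 Mbps trunk,
# each bursting at full rate 20% of the time.  Numbers are invented.

import random

TRUNK_MBPS = 100
USERS = 5              # each "sees" a 100 Mbps service
BURST_PROB = 0.2       # fraction of time a user is actually bursting
SAMPLES = 100_000

overloaded = 0
for _ in range(SAMPLES):
    demand = sum(100 for _ in range(USERS) if random.random() < BURST_PROB)
    if demand > TRUNK_MBPS:
        overloaded += 1

print(f"trunk overloaded in {100 * overloaded / SAMPLES:.1f}% of samples")
# With these assumptions, roughly a quarter of the samples need queuing or loss;
# managing that fraction is what traffic engineering is about.
```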

Metro networks are often very sensitive to traffic engineering issues because they often don’t present a high ratio of trunk to port speed, and that means a given port can influence trunk loading more.  There are fewer alternate paths available, and metro protocols often can’t reroute quickly and efficiently.  Furthermore, since most metro networks are aggregation networks rather than networks providing open connectivity among users, there are fewer service features to justify the multiplicity of protocol layers in a stack.  Since every layer adds to cost, the goal is to moosh things down into as few as possible, which is where packet-optical comes in.

Cyan has done a pretty good job of creating a packet-optical ecosystem around SDN concepts, though many of its competitors would argue whether what they do is really SDN.  Given the loose state of definition for SDN today I think that’s a meaningless point, but I do think that Cyan may be stopping SDN notions perhaps a layer too short.

Traffic engineering is something that should be done at the aggregate level.  We’ve always known that in core networks you don’t want to have application awareness because it’s simply too expensive to provide it when traffic is highly aggregated.  As metro networks get fatter, particularly deeper inside, you have the same issue.  To me, that means that traffic management should really be applied more at the service policy level than at the forwarding level.  For the management of connectivity, the relationships between users and resources at a broad level, you are better off using an overlay virtual network strategy—software networks based on SDN principles rather than forwarding-table control at the hardware level.  OpenFlow, as I’ve pointed out many times, isn’t particularly suited to managing optical flows anyway; it demands visibility into packet headers, and optical flows are opaque to normal OpenFlow rule-processing.

I think this double-layer SDN model is the emergent reality of SDN.  Make the top very agile and very “software-controlled” because software is interested in connectivity not traffic.  Make the traffic part as efficient as possible in handling flows with various QoS needs, but don’t couple software down to the hardware layer because you can’t have applications directly influencing forwarding or you’ll have total anarchy.  I think that what Cyan has done so far is consistent with the bottom of my dual-model SDN but I’m not convinced that they have played software-overlay SDN to its full potential.  Perhaps their Blue Planet ecosystem could support it, but their promotion of the platform talks about devices and not about software tunnels and overlays.

I also think, as I noted yesterday in comments about NSN’s CDN partnership, that we need to realize that SDN will never work at any level if we expect applications to manipulate network forwarding and policy.  We have to create services that applications consume, services that abstract sets of forwarding policies upward to applications, and then press the policies down to the lower SDN layer to make traffic flow in an orderly way.  That process is ideal for software overlays because it has to be very malleable.  We can define a service that looks like a Neutron (what they now call OpenStack’s network interface, formerly Quantum) for the cloud users, we can define it as Ethernet or IP, we can define it as a tunnel, a multicast tree…whatever we like.  That’s because software platforms for SDN have low financial cost and inertia so we can change them, or even switch them quickly.  Every good SDN strategy that works at the device level needs a smart positioning of a software overlay.
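
Here’s a rough sketch of that “press the policies down” idea, with invented names throughout: applications ask the software overlay for an abstract service, and only aggregate policies ever reach the device layer.

```python
# Two-layer sketch: an overlay service catalog consumed by applications, and a
# lower layer that sees only aggregate traffic policy.  Everything here is invented.

SERVICE_CATALOG = {
    "neutron-like-net":     {"scope": "tenant",         "qos_class": "best-effort"},
    "video-multicast-tree": {"scope": "metro",          "qos_class": "low-latency"},
    "private-line":         {"scope": "point-to-point", "qos_class": "guaranteed"},
}

def request_service(kind, endpoints):
    """Overlay layer: applications see only this abstraction."""
    return {"kind": kind, "endpoints": endpoints, **SERVICE_CATALOG[kind]}

def push_down(service):
    """Device layer: receives an aggregate policy, never per-application forwarding."""
    return {"aggregate_class": service["qos_class"], "endpoints": service["endpoints"]}

svc = request_service("video-multicast-tree", ["hub", "leaf-1", "leaf-2"])
print(push_down(svc))
```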

Operations can also be facilitated with this mechanism.  As long as we create networks from devices—real or virtual, software or hardware—we will have to manage their cooperation.  A service network doesn’t have to be thought of as a device network at all.  Its behavior can be abstracted to a black box, and its management can be similarly abstracted.  Internal features drive black-box services to perform according to their external mission, and those same features can drive black-box management.  We can even create abstract management entities whose boundaries and properties are set to minimize operations costs, even if they don’t correspond to functional boundaries.  Virtualization is a powerful thing.

Packet wins over TDM have been ordained for a decade or more, it’s just a matter of waiting until you can write down the old stuff.  Packet wins over other packet technology will likely depend on the effective use of SDN-layer principles—and that’s particularly true in metro.

How NSN Undershot the CDN Opportunity

So near, and yet so far!  How many times have we heard that comment?  In the case of NSN with their recent deal with CDNetworks, we have a current example.  NSN is seeing beyond the present, but they’re not yet seeing the future of content delivery.  That might mean they’re not seeing, or optimizing, their own future.  Let’s look at why that is.

First, you have to look at the deal in context.  NSN previously had a relationship with Verivue, who had a really state-of-the-art cloud-CDN strategy that sadly they never positioned well.  We gave the combination our top spot in a Netwatcher review of CDN strategies last year.  But Verivue, likely in large part because of their positioning issues, ended up selling out to Akamai.  NSN doesn’t really want to partner with a global CDN player to get software for operator CDN deployment—cutting out the middle-man of NSN is too obvious a strategy.  They need something richer, more differentiating.

Second you have to look at the concept.  NSN is trying, with its “Liquid” initiatives, to marry CDN and mobile.  Great idea, but this isn’t the way to do it.  Mobile services are defined by a set of standards most collectively call “IMS”, but which includes the media resource function and evolved packet core (MRF and EPC, respectively).  What we need to do is to frame both these functions, and everything else that happens in a metro network, as a set of “infrastructure services” that are exposed through Network Functions Virtualization, implemented using SDN, and used by everything that runs in metro, from mobile to IPTV.  If you propose to solve the problem of mobile content by creating CDN-specific caching near towers, you fly in the face of the move toward creating generalized pools of resources.  CDNs are, after all, one of the NFV use cases.

NSN’s big move has been the shedding of non-mobile assets to focus on mobile, but their focus isn’t the focus of the buyer.  Any redundant parallel set of networks is just a formula for cost multiplication these days.  NSN may want to speak mobile, but buyers want to speak metro, and that means that NSN should be thinking about how “Liquid” talks to these infrastructure services.  And, of course, how those infrastructure services are created, operationalized, and (yes, because what mobile operator is an island?) federated across multiple providers.  CDNetworks is not going to do all of that in a way consistent with other compute/storage applications.  Maybe not at all.

And even if they could, we go back to the middleman positioning issues.  NSN can’t be a leader by just gluing other pieces of technology together.  Professional services of that sort demand some product exposure across all the areas being integrated, so you don’t establish your credibility by shedding those areas and focusing only on mobile.  NSN needs to be a CDN giant, a cloud giant, and a mobile giant—and an SDN and NFV giant too—because the current metro market demands a holistic solution or costs will be too high to support continued operator investment.  And, folks, if you can’t make a profitable investment in the metro network, there’s no place on earth left for you to make a buck as a carrier.

So what the heck are “infrastructure services”?  They’re a combination of connectivity and hosted functionality.  IMS is one; so are EPC and MRF.  So is CDN, and cloud connectivity.  So are a bunch of things we’ve probably not thought about—the higher-layer glue that makes lower-level connectivity valuable, just like call forwarding and call waiting and voicemail make voice services valuable.  You create infrastructure services by hosting network functions via NFV, combining them with SDN connectivity, and offering them to applications as useful network tools.  Which is what NSN needed to do, and still needs to do.  Except that they didn’t.

The good news for NSN is that others aren’t doing it either.  Look at SDN today, in a service sense, and you see a technology collection striving for invisibility.  We want…to do what IP or Ethernet did before!  So why migrate at all, except for cost savings?  And cost savings to a vendor means “cutting me out of more of the pie”, so why would you want to go that route?  Look at NFV and you get the same thing—host virtual functions on servers instead of running them on purpose-built hardware that vendors are making me pay through the nose to get.  If you’re a vendor, you’re again on the “cut” side of the picture.  This reality has pushed all the big network vendors into the blow-kisses-at-something-so-I-don’t-look-bad position.  So they’ve blown for all they’re worth.  NSN, who now has less to defend, could make their portfolio poverty into an asset by threatening the network equipment space with a stronger SDN and NFV story.

So could Brocade or Ericsson or Extreme or Juniper.  All these vendors have the risk of narrow product footprints.  Why not turn that into an asset by proposing a revolutionary change, a change that threatens most those who offer most today?  NSN can’t bank on being joined by all its competitors in the stick-your-head-in-the-sand game.  You can see how a metro revolution as a strategy would benefit NSN, but perhaps they can’t see it themselves.  That’s a shame, because the new era of networking, where connectivity and the cloud fuse into a single structure, is on them.  They could lead, and they could have made CDN a showcase for their leadership.  Now they’re following, and the wolves are at their heels.

One for the Merger, Two for the SMB, Three for the Cloud

What does AT&T’s decision to buy Leap Wireless, Cisco’s decision to do a cloud partnership with Microsoft, and Amazon at three hundred bucks a share have in common?  They’re symbols of a market in transition and a call for action to start gathering your troops for some coherent planning.

Traditional communications services, which are services of connection rather than experiences to be delivered, have been commoditizing for some time.  The current model, where all-you-can-eat Internet is the service dialtone, compromises network operators’ ability to gather profits from their massive investments in infrastructure.  Wireless, which has been less a victim of the shift than wireline, is now demonstrating that it’s not immune, just perhaps resistant.  Leap, a low-end player, is a way for AT&T to get more subscribers and that’s your only option in a market where all the major wireless operators believe that ARPU either has plateaued or will do so by year’s end.

If you’re facing ARPU pressure and lack of additional customers to grow into, your most obvious option is cost control.  A couple of analysts and reporters have remarked on the astonishing level of support operators have given the whole Network Functions Virtualization thing.  Surprise, surprise!  It is an initiative aimed at gutting the cost of the network by translating more functionality into software (preferably open-source software) to be hosted on cheap commercial servers.  That this will gut the network vendors along the way should be clear to all, but hey if you’re the buyer your own life is paramount.  If you die off, vendors get nothing.

The second option for the operators, of course, is the one they should have taken from the start and wanted to take from the start—get into the experience-based services game.  One of my big surprises about the NFV process is that it’s so focused on cost that it almost ignores opportunity.  That’s something that the vendors involved should be very worried about, but they don’t seem to be.  Perhaps that’s because operators have for five years tried to get vendors to help them with service-layer monetization and vendors have simply ignored the requests.  The operators didn’t stop buying then, so why believe something different will happen now?

One of the darlings of the operators, everywhere in a geographic sense, wireline and wireless, business or residential, local or national, is the cloud.  Operators read the rags (or whatever an electronic rag is called) and they’ve been infatuated with the cloud hype too.  Yes, cloud hype.  The whole of the cloud market wouldn’t keep the lights on for a big Tier One for a month at this point.  Which brings us to Cisco and Microsoft.

The big issue with the cloud, which is a positive for the prospects of the operators, is that the best cloud value proposition exists for the SMBs, and nobody much can sell to SMBs directly.  Most can’t sell at all.  Microsoft and Cisco would love to get the SMB cloud socialized for both their benefits, so they’re banding together to push back the boundaries of darkness and ignorance, the big problem with the cloud for SMBs.  If you look at SMB literacy in the cloud space you find that it’s below the percentage who can sing the Star Spangled Banner.  Teaching them to sing might be easier, but hardly as profitable, so Cisco and Microsoft forget their traditional enmity and hope to find common programs to advance cloud adoption.  Ultimately that has to lead to SaaS, because what else could a non-tech-literate player consume?  Cisco and Microsoft have the same challenge, so cooperation is logical.

Which isn’t the case with respect to either Microsoft or Cisco and Amazon, our new stock-market darling.  One giant cloud seller is hardly in Cisco’s interest, and Amazon doesn’t sell Azure so Microsoft doesn’t care much for them either.  Plus Amazon is in a core business whose profit margins (online retail) are in the noise level.  They’ve been smart by opening up electronic media (Kindle tablets) and the cloud, but they will have to struggle to justify that kind of stock price in the real world.  Because the Street trades on momentum, the sky’s the limit in the near term, but a high P/E multiple demands some “E” to justify the “P”.  From IaaS?  I don’t think so.

In many ways, Amazon is like a telco.  Their core business is a cash cow but hardly a generator of big margins.  The IaaS market, which is the low-margin king of cloud services, is something that can be as profitable to them as online retail is.  Telcos, as former public utilities, have similarly low ROI on their core business and so they can also be profitable in the IaaS conception of the cloud.

But remember Cisco and Microsoft?  If SaaS is really what SMBs want to buy and what vendors would really like to sell them, then SaaS has to be where the cloud is going.  So the question for Amazon, and Cisco/Microsoft, and even AT&T and other operators is how to get there.  You have to have software to deploy.  You have to have effective deployment/hosting processes, and you have to be able to manage what you do so that the quality of experience is good enough that users will pay for it.  And, of course, you have to do this in such a way as to make a profit.

So that’s what all these companies, all these news items, have in common.  We are trying to elevate services, and the logical place to do the elevating is in the cloud space where the transition from basic (IaaS) to advanced (SaaS) is fairly well defined both in terms of business model and technology model.  But what we’ve now got to do is fit all this into a framework that can be profitable.  That’s something that the NFV people could undertake, if they want, but they’re going to have to start down that track soon if they want to make useful progress.