What Kills the Video Star?

We all know the old song, “Video killed the radio star.”  The question now is whether something is killing, or at least wounding, the video star.  The latest research on video shows, not to my personal surprise, that the impact of OTT video is primarily to increase viewing hours rather than to displace traditional channelized viewing.  What my own models have been saying is that if you let somebody watch video on a smartphone or tablet, the portability of the platform opens video-viewing entertainment options that did not previously exist, and guess what?  People take advantage of them.

Many would have us believe that traditional television is dead, that Apple and Google are taking over the world, video-wise.  The numbers don’t bear that out; in fact, no respectable research I’ve ever read shows anything but consistent or even increasing channelized viewing, even in the face of OTT.  This proves, obviously, that exciting news is a lot more interesting than the truth, but it also obscures some real questions for people who want to make money on OTT video.

People have always told the same story when asked why they’re watching OTT video.  First reason: they’re somewhere they can’t view a normal TV show.  Second reason: nothing is on their available channels.  Third reason: a friend sent them a link.  To understand the future of OTT video you have to address these three points.

I think every generation has at some point leaned back in disgust during a session of channel-flipping and said “TV sucks anymore!”  The fact is that TV is always going to suck for some portion of the population, and in a world where youth has more manipulable spending than their elders do you can expect programming to shift in their direction.  Personally, I hate “Twitter-like” or even actual Twitter feeds superimposed onto shows.  I also hate action shows that spend half their air time developing their characters’ lives.  I have a life; I’m interested in the action.  The point is that all these likes and dislikes are personal and subjective, and producers try to walk a line of keeping everyone just happy enough not to switch channels.

Where OTT could help is with the cost side.  If we could produce content at a lower cost we could offer content that was more tuned to the personal whims of more classes of viewers.  That would mean OTT might be better than channelized TV in a content sense.  But even companies like Netflix are spending more acquiring content rights than developing fresh content, and so we’re not playing this quality-and-personalization card effectively.

The issue of “can’t watch where I am” is relevant because my data at least suggests that what people watch when they’re out and about tends to be either short clips suitable for viewing on the run, or something they don’t want to miss.  YouTube satisfies the first of these, but the second means getting what’s essentially live TV feeds into OTT.  Clearly that could be done even today in a technical sense, so why aren’t we doing more of it?  Because nobody wants to kill the advertising revenue stream.  If a major sporting event supported live feeds to portable devices, its broadcasters might lose a ton of money because online ads earn less than TV ads per viewer-minute.  Why?  Because they’re less effective at manipulating purchasing, and that’s because mobile users are more distracted (hopefully) by their surroundings and tune out the ads more effectively.  Not only that, we’re all being conditioned to tune out online ads by being bombarded with them.  Clearly we need more research on how to engage somebody with in-content mobile advertising.

That leaves our last point, the issue of link-sending or “viral video”.  People send others YouTube clips almost a hundred times as often as any other video link, according to my numbers.  Why?  Because you probably don’t have to watch for an hour to see what’s interesting.  When you share a link with others you are injecting your thoughts into their context, and unless they can quickly sync with your mindset they’ll not be able to appreciate what you send.  The obvious conclusion is that we need accurate metadata indexing of material, and also a way to send a link that includes positioning data on videos, if we want to take advantage of viral properties for normal TV or movie video content.  Again, this shouldn’t be rocket science at a technical level, but neither of these is common, and we’re making particularly poor progress with metadata scene coding, even where it would be easy to use the script to develop it.
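As a rough illustration of the positioning-data idea, here’s a minimal Python sketch that appends a start-time offset (analogous to YouTube’s existing `t=` parameter) and a hypothetical `note` field to a shared link.  Everything beyond the start-time concept is an assumption for illustration, not any existing standard:

```python
from urllib.parse import urlencode

def share_link(video_url: str, start_seconds: int, note: str = "") -> str:
    """Build a shareable link carrying positioning data.

    The 't' (start time) parameter mirrors what YouTube already supports;
    'note' is a hypothetical field a richer metadata scheme might add so
    the recipient can sync with the sender's context.
    """
    params = {"t": str(start_seconds)}
    if note:
        params["note"] = note
    # Append with '&' if the URL already has a query string, else '?'
    sep = "&" if "?" in video_url else "?"
    return video_url + sep + urlencode(params)

link = share_link("https://video.example.com/watch?v=abc123", 90, "watch the catch")
print(link)
```

The point of the sketch is that the sender’s context travels with the link, so the recipient lands at the interesting moment instead of the opening credits.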

So why are we doing badly at promoting a video model we all seem to think is the next big thing?  I think there are a number of reasons.  First and foremost, you don’t dig up all your wheat to plant corn if you’re depending on a crop in the near term.  The transition from a channelized to an OTT-driven video model could well trash the economics of content production, and we have to figure out a way to produce shows profitably or it won’t matter what viewing technology we can support.  Second, you have to develop a video deployment model that’s very agile in terms of viewing support, ad insertion, social-media integration, and even customization of the “overlay” concepts like Tweets.  Right now, we tend to always go for the lowest-common-denominator approach, and that’s encouraged everyone to sell the platforms short.  We have Internet-integrated TVs now, and most STBs could likely overlay Tweets on screens.  Why, then, are we sticking content many think is distracting on every screen?

Content is the least-realized of all the operator monetization goals.  Its projects are delayed, its technology is muddled, its proponents at the planning level are losing political credibility.  If we can’t fix the framework of OTT, if we can’t fit it into all the entertainment niches in a profitable way, we’re going to be either watching new reality shows or “I Love Lucy” reruns forever.

Do Alcatel-Lucent and Sprint Need to Speak Cloud?

Alcatel-Lucent’s quarter was far from happy, but there were still happy signs in a small increase in revenues and continued improvement in margins.  The sales of IP products were also good, and all of this seems to indicate that the “Shift” plan the company outlined (and whose restructuring costs killed the bottom line this quarter) has a chance of success.  The question, of course, is whether that indication is real.

One thing that the Shift recognized is that there are too many low-margin things hanging around the bottom line.  About two-thirds of the company’s revenues come from what would be considered commodity or commoditizing products, and even the IP and mobile spaces are under margin pressure.  Some of this stuff is likely never going to see the high-margin light of day again, and the fact is that Alcatel-Lucent needs to shed that part of its business ASAP.  They will lose on price to Huawei every time, and where features can’t be made meaningful, price competition is inevitable.  And even cutting stuff that needs to be cut won’t make the rest a safe haven.

It would seem logical that the service layer of the network is where Alcatel-Lucent would look for the value pot of gold, and in fact no company has done as much to build a coherent technical strategy in services.  Their strength in mobile ecosystems and content delivery gives them two of the three monetization pillars operators are looking for, and that’s more than many of their competitors have.  Arch-rival Juniper, for example, has no convincing strength in any of the three.  I think you can argue that Alcatel-Lucent’s technical coherence has been totally dissipated by an almost complete lack of effective articulation, but there’s also the matter of that missing monetization pillar—which happens to be the cloud.

Of all the monetization project categories operators generally recognize, cloud computing is the run-away winner.  Cloud monetization projects have advanced to trial-or-later in about 50% more cases than any other monetization project type, and content or mobile monetization projects that involve the cloud move faster and further than those that don’t.  In the cloud, Alcatel-Lucent has a problem because it has no servers, which market-leader Cisco has and is winning deals with.  What Alcatel-Lucent needs to do is to create a software-driven vision of the cloud and make it compelling.

CloudBand is obviously an attempt to do that.  Their Nuage SDN position is a good one (one of the best, perhaps even the best), and a marriage of the two would be a helpful thing for carrier-flavored cloud computing.  Alcatel-Lucent has just launched an NFV ecosystem inside CloudBand that could have great promise, but there isn’t much information available on it and that’s not the foundation for a compelling position.

I have to wonder whether there’s not a little of Bell Labs paranoia creeping in here.  Remember that Bell Labs is an innovation giant, even if sometimes it seems a lot of the scientists are studying where their laps go when they stand up.  With what is arguably their most politically powerful internal constituency aimed at creating intellectual property, could it be that Alcatel-Lucent is so afraid of having their thunder stolen by others (perhaps Huawei?) that they keep everything under wraps?  It’s hard to get buyers to purchase things you won’t describe.  At any rate, I’ve got a discussion scheduled on CloudBand, and it will be interesting to see how much gets revealed and what its impact might be.

Meanwhile, there is an important truth that Alcatel-Lucent needs to consider, cloud-wise.  I’ll call it the “commutative property of cloud”.  If I host virtual functions as cloud components, and if I host SaaS application components as cloud components, and if I can deploy and manage them with the same tools, then are they not the same thing?  And if SaaS and NFV are really the same under the skin, could it be that CloudBand’s NFV processes are actually steps toward a rational position for Alcatel-Lucent in cloud computing?  Remember that my CloudNFV initiative has always had the goal of supporting NFV and SaaS equally in both deployment and management/operations.  It’s possible to do that with the right approach.
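To make the “commutative property” concrete, here’s a hedged Python sketch in which a single descriptor and a single deployment routine serve both a virtual network function and a SaaS component.  All of the names and fields are illustrative assumptions, not CloudBand’s or CloudNFV’s actual model:

```python
from dataclasses import dataclass, field

@dataclass
class CloudComponent:
    """A unified descriptor: the deployer doesn't care whether the
    component is a virtual network function or a SaaS element."""
    name: str
    image: str                      # VM/container image to instantiate
    role: str = "vnf"               # "vnf" or "saas" -- informational only
    vcpu: int = 1
    mem_gb: int = 2
    connects_to: list = field(default_factory=list)

def deploy(component: CloudComponent) -> dict:
    # The same placement and linkage logic applies regardless of role.
    return {
        "host": f"node-for-{component.name}",
        "image": component.image,
        "links": list(component.connects_to),
    }

firewall = CloudComponent("vFirewall", "fw:1.0", role="vnf", connects_to=["vRouter"])
crm = CloudComponent("crm-frontend", "crm:2.3", role="saas", connects_to=["crm-db"])

# Both kinds of component flow through identical deployment machinery.
assert deploy(firewall).keys() == deploy(crm).keys()
```

If the deployment and management tooling genuinely never has to branch on `role`, then NFV hosting and SaaS hosting really are the same thing under the skin, which is the point of the commutative argument.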

Sprint also released numbers, and their subscriber losses are truly appalling.  Mobile services are a game for giants, and as soon as you lose the advantage of scale you fall into that murky category of being too big to be specialized and too small to be interesting to the masses.  Sprint, clearly, has a lot to gain from a stronger cost-and-services balance, and just as clearly something like a complete carrier cloud story would be helpful to them.  Inside, though, they seem to have some of the same challenges that Alcatel-Lucent has.  They need a big, bold vision of the future and not just a mechanism for slowing the bleeding.  Cloud-based mobile/behavioral symbiosis would be the best, even the ideal, answer, but mobile monetization has the lowest rate of advance to trial-or-later of any of the three monetization classes.  This, despite the fact that it now has the highest priority and the highest level of “potential” revenue gain according to operators.

Sprint is considering the future.  Alcatel-Lucent is considering the future.  Both have to realize that the future is the cloud, for a bunch of reasons.  Huawei isn’t strong there.  Feature differentiation is good there.  Cisco is threatening to seize it, so everyone needs to get their turf staked out.  Operators are deploying cloud even where cloud computing isn’t the focus.  It’s a concept that cuts across the operator Tiers and the market geographies.  So the message for both Alcatel-Lucent and Sprint?  Think cloud before it’s too late.


Getting SDN and NFV Up to Speed

My research has been showing a relatively slow uptake rate on SDN and NFV, slower than many would like to believe, and I thought it would be helpful to explain why the model I use is predicting a slow ramp rather than the classic “analyst report” hockey-stick adoption.  The truth is often less interesting than fiction, but no less true for that fact!  Surveys of both enterprises and network operators for more than two decades have shown me that there are three factors that influence the rate of adoption of a new technology.

One is the functional availability of the technology—can you actually buy it in a form where it delivers the promised benefits?  In most cases, our “revolutionary” technologies take years to achieve reasonably full functional availability, and yet we expect to see buyers jumping in when product changes year over year would alone jeopardize a buyer decision.

The second factor is buyer literacy.  We don’t expect everyone who buys a car to be an automotive engineer, but we should expect them to be able to frame the basic value proposition for purchase successfully.  A network technology buyer has to understand the cost/benefit equation sufficiently to make the business case or they can’t get anything approved.

The third factor is management validation: will senior management accept that a new technology is in fact ready for prime time and take what is always an objective risk by committing to deploy it?  In most cases, I’ve found, management validation comes about because of effective coverage of something in the media.
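The three factors act as gates: uptake stalls until functional availability, buyer literacy, and management validation have each reached critical mass.  A toy sketch of that model, with purely illustrative thresholds and scores:

```python
def adoption_ready(functional: float, literacy: float, validation: float,
                   threshold: float = 0.7) -> bool:
    """Adoption proceeds only when ALL three factors clear critical mass.

    Scores are on a 0-1 scale; the 0.7 threshold is an illustrative
    assumption, not a figure from the surveys.
    """
    return all(score >= threshold for score in (functional, literacy, validation))

# Products exist and management is sold, but buyers can't frame the
# business case yet -- so no ramp, no matter how good the technology is.
assert not adoption_ready(functional=0.9, literacy=0.5, validation=0.8)

# Once literacy catches up, the model lets adoption proceed.
assert adoption_ready(functional=0.9, literacy=0.75, validation=0.8)
```

The multiplicative, weakest-link character of this gate is why the ramp is slow rather than a hockey stick: excellence in one factor cannot compensate for a deficit in another.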

The challenge with both SDN and NFV is that none of these areas has reached critical mass.  Only in the “distributed” model of SDN, where adaptive network behavior is sustained but harnessed to conform to software goals, do we have the required level of functionality available before 2016.  That means that buyers with typical breadths of application needs won’t be able to meet them with the state of the products at the time of sale; they’d be betting on futures.  My model says that the big jumps in functional availability will come between 2016 and 2017.  In SDN buyer literacy, critical mass has been reached for all operators but won’t be reached for enterprises until 2018.  In management validation, operators can expect to have the necessary socialization of technology for SDN in 2014, but enterprises won’t see it until 2016.  With NFV, only Tier One and Two operators will achieve necessary levels of literacy by 2018, and functional availability and management validation will come in 2017.

You might wonder why this takes so long, and the primary reason comes from what my survey has always called buyer influences.  What influences a buyer the most?  For the whole lifetime of my surveys, the number one factor has been the experience of a trusted peer.  At the beginning of the surveys, the second-most-cited influence was the material published in a primary technology publication—Datamation, BCR, Network Magazine.  Two of the three no longer exist, and today media rates seventh on the list of influential factors even counting online material.  What’s worse is that the gap between the top and second influences has widened.  Today, the primary network vendor has less than half the influence of a trusted peer, where 20 years ago they had 85% of the influence.  Every influence factor except our trusted peers has dipped by at least a third in 20 years, which says that buyers have virtually nobody to trust these days.

In some ways, this should make things ripe for startups, but the fact is that it doesn’t.  Small vendors won’t spend the money to engage correctly through marketing channels.  When we ask buyers about the quality of website material and marketing material, nobody gets great scores, but the major vendors do twice as well as the smaller ones.  When there are trade groups or forums created to mass up the little guys, the bigger guys join them and seize control, or buck them with their own marketing.

We have, according to my spring survey, no consensus buyer definition of what an SDN is, and no specific understanding of what technology choices would be ideal to implement NFV.  We have in the case of SDN many different products, but the products don’t aim at the same issues, don’t address the same opportunities, don’t afford the same benefits.  How then do we justify them?  In the case of NFV we might be 18 months away from a firm standard; what happens between now and then, and how does a buyer make a pre-standard decision (if they can find one) with confidence it won’t be obsolete a year later?

I am confident that there are sustainable SDN value propositions that could justify spending today at twice or more the levels that we’ll see.  I am confident that it will be possible to actually deploy NFV next year and gain even more benefits than the NFV ISG targeted in its white paper.  I’m just not confident the buyers will know.

We Announce a Set of CloudNFV Proof-of-Concept Proposals for the ETSI NFV ISG

We have posted the following on the proof-of-concept mailing list for the ETSI NFV ISG, offering our participation in the NFV body’s work in this area and offering integration with other NFV members.  We invite parties interested in integrating with us to review our preliminary guidelines on our website, http://www.cloudnfv.com/ and also to apply to join the CloudNFV Group on LinkedIn.  We will approve the Join requests when the group opens up, which we now expect will be in August.

CIMI Corporation will be submitting three POC proposals for consideration, based on the CloudNFV platform created by seven member companies in the NFV ISG.  CloudNFV is an NFV implementation built for OpenStack.  A formal POC proposal document will be contributed in August, but we will propose a three-phase POC because of the scope of the issues we hope to explore.

The first proposal is aimed at validating the data modeling of NFV processes from the retail service level down to the virtual function level.  We want to validate the information needed to describe a virtual function, deploy it, connect it with other functions, and connect the combination with live network equipment and user endpoints.  We will, in this phase, establish the requirements to deploy, run, and manage VNFs and aggregations of VNFs deployed as services or service components.  We expect that this POC proposal will be available for demonstration and review in September, and we will contribute the results.
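For flavor, here is a hypothetical sketch of the sort of hierarchical model that first POC would validate—a retail service decomposed into VNFs, their interconnections, and attachments to real endpoints.  The field names and structure are ours for illustration only, not the ISG’s or CloudNFV’s actual schema:

```python
# A toy service model: retail service -> VNFs -> connections -> endpoints.
service = {
    "name": "BusinessInternet-Gold",
    "vnfs": [
        {"name": "vFirewall", "image": "fw:1.2",  "resources": {"vcpu": 2, "mem_gb": 4}},
        {"name": "vRouter",   "image": "rtr:3.0", "resources": {"vcpu": 4, "mem_gb": 8}},
    ],
    "connections": [
        ["vFirewall", "vRouter"],          # internal function-to-function link
    ],
    "endpoints": [
        {"attach": "vRouter", "to": "customer-UNI-17"},  # live network attachment
    ],
}

def validate(svc: dict) -> bool:
    """Check that every connection and endpoint names a declared VNF --
    the kind of referential consistency a real data model must enforce."""
    names = {v["name"] for v in svc["vnfs"]}
    conns_ok = all(a in names and b in names for a, b in svc["connections"])
    endpoints_ok = all(e["attach"] in names for e in svc["endpoints"])
    return conns_ok and endpoints_ok

assert validate(service)
```

Even this toy shows why the modeling question comes first: deployment and management tools can only be generic if the descriptions they consume are complete and internally consistent.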

The second proposal targets the actual deployment and management of VNFs on conforming infrastructure.  In this POC we seek to validate the links between NFV and OSS/BSS/NMS frameworks, based on our TMF GB922-compatible service model.  We also seek to validate our architecture for federating NFV at the cloud infrastructure level upward to the retail service level, and to confirm the structure needed for cross-management and settlement.

The third proposal targets deployment of retail end-to-end services and traffic management in a completely federated, global, NFV-compliant network.  In this POC we seek to identify the operational challenges of global-scale federated NFV infrastructure and to establish the parameters of the management model needed for things like horizontal scaling, availability and QoS guarantees, transitioning from legacy devices to virtual functions, and integration of virtual-function services with cloud SaaS.

We will be publishing guidelines for integrating with CloudNFV in August, and at that point we will consider proposals from NFV member companies to integrate with us by providing virtual functions to host, network and server/system technology on which to run both our platforms and functions, and management and deployment tools to explore further optimizations in this area. 


“Virtual Networking” Means Networking the Virtual

It’s another of those recap Fridays, to pick up news that was pushed aside by other developments and to try to drag some cohesion out of the general muddle of news.  I think the theme of earnings calls for the week has been “an industry in transition” and so I’ll stay with that theme and fill in some interesting developments and rumors.

One rumor I particularly like is that Oracle is looking to develop a cloud-based voice service leveraging its Tekelec and Acme assets.  The service would be, says the rumor, offered by Oracle directly at a wholesale level to operators who didn’t want to run their own voice networks, offered as a product package for operator replacement of legacy voice, or both.

Voice services are an increasing burden to operators, particularly in the wireless space.  The problem is that you can do Google Voice or Skype or other free options over Internet dialtone, and in any event the younger generation doesn’t want to call at all—they text.  The net result is that you have a fairly expensive infrastructure in place to serve a market that is only going to become less profitable and less a source of competitive differentiation.  Most people in our surveys, speaking for themselves personally as voice users, say that if they didn’t have to worry about 911 service they’d probably not have paid voice service at all if the option existed.  No surprise, a 911 solution is part of Oracle’s rumored plans.

A second development that’s not a rumor is some interesting concepts contributed to OpenDaylight.  Recall that in that framework, and pretty much all OpenFlow controller architectures, there’s a northbound API set that connects what I’ll call “service applications” to OpenFlow.  A service application is one that creates a connection network to support application/user needs; IP and Ethernet are legacy models of service connection networks, but other models are possible.  Two are now being proposed by Plexxi and ConteXstream.

Plexxi is contributing its core Affinity concept, which is a dynamic way of visualizing connectivity needs at the virtual level, independent of actual network topology or physical infrastructure.  It might be interesting to consider a framework for SDN that started with base-level adaptive forwarding and then built virtual network overlays based on Affinities.  The key would be getting service properties pushed down as needed to create aggregate traffic handling rules.  ConteXstream is contributing an application of Cisco’s proposed Location/Identifier Separation Protocol (LISP, another of those overloaded tech acronyms), which makes it possible to have assigned logical addresses and independent physical locators.  This is yet another example of an SDN overlay mechanism, but it has interesting potential for mobility.  Both demonstrate that SDN’s biggest value may lie in its ability to define “services” in a whole new way, unfettered by network hardware or legacy concepts.
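The LISP idea is easy to sketch: a fixed endpoint identifier (EID) resolves through a mapping system to a routing locator (RLOC) that can change as the endpoint moves, so applications keep one logical address while the physical attachment point wanders.  A minimal Python illustration, with invented names and a plain dictionary standing in for the mapping system:

```python
# Toy LISP-style mapping system: logical identifier (EID) -> locator (RLOC).
# All identifiers here are invented for illustration.
mapping = {"eid:10.1.0.5": "rloc:gw-paris"}

def resolve(eid: str) -> str:
    """What an ingress router would do before encapsulating traffic."""
    return mapping[eid]

def move(eid: str, new_rloc: str) -> None:
    """Mobility: only the mapping changes; applications keep using the EID."""
    mapping[eid] = new_rloc

assert resolve("eid:10.1.0.5") == "rloc:gw-paris"
move("eid:10.1.0.5", "rloc:gw-nyc")       # the endpoint roams
assert resolve("eid:10.1.0.5") == "rloc:gw-nyc"
```

The interesting property for mobile SDN is exactly what the assertions show: the identity of the endpoint survives the move, and only the mapping layer has to know anything happened.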

Needless to say, none of these things are going to change the world overnight.  Network virtualization faces both a conceptual and a practical barrier, and it’s not yet clear how either will be overcome.  On the conceptual side, virtualization of networking opens a major can of worms regarding service assurance and management.  If nothing is real, how do you send a tech to fix a problem, or even decide where the problem might lie?  On the practical side, the services of the network will in today’s world be substantially constrained by the need to support current IP/Ethernet endpoints (or you have no users) and the need to support an orderly evolution from currently installed (and long-lived in depreciation terms) network assets.  There’s also a both-issues question; how do you define middlebox functions in a virtual service?  We depend on these functions for much of networking today.  NFV might offer some answers, but the standards process there is ongoing.

You can argue that Oracle’s rumored service is an example of a virtual PSTN, and obviously the real developments I’ve cited here are also linked to virtualization.  You can only pass real traffic on real links, so virtualization must inevitably map to the real world.  You can only generate and deliver traffic to real users and applications, so there’s another real-world mapping.  What is in between, in OSI terms, are the higher protocol layers.  In network equipment terms, it’s the transport infrastructure.  I think it’s clear that if we redefine the notion of “services” to take advantage of the agility of virtualization, we make the transport network a supporter of virtualization and not a supporter of users.  What does that do to network equipment?  My model says that in the next two years (through 2015) we’ll see an expansion in network capex driven primarily by metro build-out and cloud deployment but also by the opticalization of the core.  After that, we’ll see a world where virtual missions for network devices gradually shift value up to the stuff that does the virtualizing.

Amazon’s numbers may be a reflection of this same transition.  The company has a heady stock price, one that could not possibly be sustained by its online retail business.  The Street wants growth, which only the cloud and cloud-based services can bring.  Amazon is going to have to lead the service charge to justify its P/E multiple, and that means investing to prepare for the mission, which raises costs and lowers profits.  Their believers haven’t fled after unexpectedly bad profit news, likely because they realize that you have to bleed to heal, strategy-wise.  Amazon is redefining the relationship between devices and networks, and that is what everyone has to accommodate over time.  Along the way, they may end up redefining what we mean by SDN.

Spotlighting Carrier Capex and Profit Plans

Verizon’s comments about capex and generally better visibility from vendors have helped the telecom equipment space look a bit better, and of course that’s been our forecast since the spring.  My model shows general telecom spending will increase through 2015, with spending in all equipment areas showing some gain.  This represents the “last-gasp” modernization funded by mobile revenues and unfazed by SDN and NFV.  Beyond that point we’ll see gradual erosion in spending on network equipment, first in the core, where the long-term effects of a shift to optics will be felt.  The core, recall, is the least profitable part of the network for operators.

You can see in AT&T’s earnings that they are expecting to have to do some creative framing of new fees and services if they’re to sustain profit and revenue growth going forward.  The capex increases will likely put all of the major operators near zero profit growth if they don’t cut other costs, and many are concerned about further cuts in OAM&P for fear of creating service issues that would drive churn.

Operators I talk to have three touchstones for future profit growth.  First, they believe that initiatives like SDN and NFV can lower overall costs, though some now admit that they believe the early estimates of savings are generally too high.  Second, they believe that the prepay mobile market can, with the proper programs and handsets, translate either to postpay at a higher ARPU or translate to featurephones with a la carte feature pricing.  Finally, they believe that there is a service-layer framework for profitable cloud-based services and features out there somewhere.

Readers of my blog know that I’m not a big fan of technology changes driven by cost management goals.  Historically operators have underrealized in this kind of investment, and in nearly all cases the problem is that they’ve been unable to create a reduction in operations cost corresponding to the capex reduction.  In fact, Tier Ones tell me that opex as a percentage of total cost has increased more than capex has been reduced by new technology.  Even operators who report higher profits on services like VoIP admit that part of the equation is limiting customer interaction.  They could have done that with the old stuff.  So while I think that SDN and NFV cost-savings won’t create a steady stream of profits, I do think they can help prime the pump.

The “a la carte” feature stuff is linked to operator views that the majority of wireless users will pay for some special features (like international roaming for mobile) on an episodic basis but not in monthly-subscription form.  Today we tend to see two classes of wireless service—prepay and postpay.  The latter, in the US in particular, trends toward unlimited usage and complete feature support and is aimed at customers who rely totally on mobile communications, especially for social reasons.  The former is for those who tend to use only basic services and are trying to control costs.  Operators tell me that their modeling shows that prepay revenues could be raised by as much as a third by introducing special-service packages on a shorter-term basis.  Some of this is already visible in calling packages for Mexico or data packages for prepay customers.
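A back-of-the-envelope check shows how episodic packages could plausibly lift prepay revenue by about a third.  Every number below is an illustrative assumption, not a figure from operator modeling or my surveys:

```python
# Illustrative arithmetic only -- none of these figures are survey data.
base_prepay_arpu = 25.00      # assumed monthly prepay ARPU, in dollars
episodic_pack_price = 5.00    # hypothetical short-term special-service package
packs_per_month = 1.7         # assumed average attach rate per subscriber

uplift = episodic_pack_price * packs_per_month / base_prepay_arpu
print(f"ARPU uplift: {uplift:.0%}")   # prints "ARPU uplift: 34%"
```

With those assumptions the uplift lands right around the “as much as a third” that operators’ own modeling suggests; the sensitivity, obviously, is in the attach rate, which is exactly what the Mexico calling packages and prepay data packages are starting to test.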

The “featurephone” notion is an offshoot of the a la carte model.  European operators and many in emerging markets are interested in the Firefox OS framework which would allow hosting of features on-network and a cheaper phone.  AT&T’s profits were hit a bit by smartphone subsidies, so you can see why the latter point would be interesting.  But inside the whole featurephone notion is the fact that operators recognize that ad revenues will be an unattractive profit strategy for them.  For the most part the ads are linked to social or other portals already in existence, and competing with them is probably unrealistic.  In addition, total available online ad revenue isn’t enormous.  A pay-for market is better, and if people will pay for apps on a phone they’d pay for network-hosted apps.  Most likely wouldn’t even know the difference.

The service-layer and management stuff is at the heart of this, of course.  Operators are already spending on cloud data centers (F5’s gains in the SP space were linked to that, IMHO) and they’re eager to leverage them (as the NFV stuff shows), but the model of an agile service layer is still elusive.  You need to define a “PaaS” platform for applications, you need a deployment model, and you need operations.  It’s not clear how long any of those will take to mature, but one factor that may help things along is the increased interest of network and other vendors in what could be called “VNF authoring”.  NFV is helpful to the extent that there are properly designed virtual network functions to deploy, and Alcatel-Lucent for example has just established a program, its “CloudBand Ecosystem”, to encourage virtual function development.

A VNF authoring framework is a bit of a fuzzy concept.  At the minimum, you need to have an application structure that can be deployed by NFV for hosting, and it’s not clear yet from the state of the ISG work just what the requirements for that would be.  It might be necessary, or helpful, to include some platform APIs for integration with deployment and management services, but this area is even fuzzier.  We don’t know yet whether things like horizontal scalability are expected to be application-controlled or controlled by a platform deployment or management system, and we don’t know much of anything about NFV management in an official sense.  I don’t have information on the program yet, and I don’t know whether these details have been revealed and can be made public.  I do think that most vendor NFV strategies are likely to include a program to support VNF authoring at some level.
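As a purely speculative sketch of what a VNF authoring contract might look like (the ISG has defined no such interface, so every name here is invented), consider a minimal lifecycle base class with an optional scaling hook—the hook is where the “application-controlled versus platform-controlled” question would surface:

```python
from abc import ABC, abstractmethod

class VNFLifecycle(ABC):
    """Hypothetical authoring contract a platform might impose on a VNF.
    Entirely illustrative -- no standard defines these hooks today."""

    @abstractmethod
    def start(self, config: dict) -> None:
        """Bring the function up with platform-supplied configuration."""

    @abstractmethod
    def health(self) -> bool:
        """Report liveness so the platform can manage the function."""

    def scale_out(self) -> None:
        """Default: scaling is platform-managed; a VNF that wants
        application-controlled scaling would override this."""

class EchoVNF(VNFLifecycle):
    """A trivial function used only to exercise the contract."""
    def start(self, config: dict) -> None:
        self.port = config.get("port", 7)
    def health(self) -> bool:
        return True

vnf = EchoVNF()
vnf.start({"port": 7070})
assert vnf.health()
```

The design question the sketch exposes is the real one: whether `scale_out` belongs to the application or to the platform determines how much every VNF author has to know about the management framework underneath.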

VNF authoring is a natural step toward service-layer deployment because unless you do a very bad job at defining the hosting framework for VNFs, you should be able to use that same framework to deploy SaaS elements or components of cloud computing services.  That would make a VNF authoring strategy a natural step toward service-layer development programs and the de facto definition of a service-layer architecture.  I do want to stress, though, that it’s the management of virtual functions or cloud service elements that’s hard, not the deployment.  Otherwise we’re back to my early point; operations costs overwhelm capital benefits and the service price is too high to sustain market interest.


Juniper Will Get a New CEO

Juniper reported its numbers, which showed better profits and a slight improvement in revenue, and then issued a pretty nice 3Q outlook to boot.  The initial reaction of the Street was mixed; some hoped for better performance given Juniper’s multiple, and others were happy.  But earnings may not have been the big news.  Kevin Johnson, Juniper’s CEO, announced he would be retiring.

Rumors of Johnson’s departure have been circulating this year.  He arrived from Microsoft to replace Scott Kriens, the original CEO and now Chairman, and many thought he might push Juniper out of the box-fixation mindset that has been its legacy.  He didn’t, and in my personal opinion he didn’t really grasp the difference between “software”, “embedded software”, and “network software” in an SDN and NFV age.  Juniper may have embraced software in an organizational sense, but not in the sense that it needed to.

What should have been done?  Clearly, Juniper, like other vendors, was facing pressure from operators to support the new operator monetization goals.  Logically, that meant providing service-layer software that would allow operators to build new services that were competitive with those of the OTTs, but also to recast current services in a more modern, cost-effective, profitable, and flexible way.  Juniper had an initiative, “Junos Space”, that could easily have done that, and when I reviewed their concept at the launch almost three years ago I believed they would take the steps it opened up.  They did not.  Space became a very simple “operations” tool, a slave to cost management and TCO and not even a factor in monetization.

When SDN and NFV came along, Juniper embraced the former and at least in a positioning sense ignored the latter.  Service chaining is an NFV use case, but Juniper presented it as an SDN application.  Yes, you can chain services with SDN, but unless you frame service chaining in the operations and deployment context of NFV you don’t have the savings that made it interesting in the first place.  I called Juniper out on their tendency to sing SDN songs about NFV concepts, but they’ve really not changed that theme at all.
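The operations point can be made with arithmetic.  Here’s a toy model in Python—the dollar figures are invented purely for illustration, not drawn from any survey—showing why a hosted service chain only beats dedicated appliances when operations are automated along NFV lines:

```python
# Illustrative per-site, per-year costs; the numbers are made up for the example
APPLIANCE_TCO = {"firewall": 1000, "nat": 600, "dpi": 1400}

def chain_tco(chain, hosting=250, ops_manual=600, ops_automated=100):
    """Compare a hosted service chain under manual vs. automated operations."""
    manual = len(chain) * (hosting + ops_manual)
    automated = len(chain) * (hosting + ops_automated)
    return manual, automated

chain = ["firewall", "nat", "dpi"]
appliances = sum(APPLIANCE_TCO[f] for f in chain)   # dedicated-box baseline: 3000
manual, automated = chain_tco(chain)                # 2550 vs. 1050
```

With manual operations the hosted chain saves only 15% over appliances; with operations automated through NFV-style deployment and management, the savings approach two-thirds.  Chain functions through SDN alone and you’re stuck with the left-hand number.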

I don’t know what Kevin Johnson thought Juniper software would look like.  Like a Windows ecosystem?  Some inside Juniper have told me that’s exactly what he thought.  Like operations glue to link Juniper to OSS/BSS?  Some say that too.  The problem is that the ideal Juniper software story isn’t either of those, or perhaps it’s both but at another level.  Network software is about the virtual world that networking lives in, and in particular about the elastic and critical boundary between SDN and the network, and between NFV and the cloud.  NFV, which to be fair came about long after Johnson joined Juniper, defines a framework that is aimed at costs but can be applied to revenue.  The critical error Juniper made under Johnson’s command was to ignore NFV because it seemed to be about servers, and to embrace SDN because it seemed to be about software and networks.  Semantics are a bad way to engage the customer.

Cisco is the ranking player in networking, in SDN, and even in NFV, even though its positioning is as vacuous as Juniper’s.  Why?  Because they’re the incumbent, and all they have to do is kiss the right futuristic babies and they can hold on.  Juniper has to come from behind.  Its earnings are not a reflection of its strategic success; it’s losing ground steadily in strategic influence.  The earnings reflect the inertia of the industry, an industry that buys stuff on long depreciation cycles.  It will take years for operators to wean themselves off Juniper gear even if they try, and in that time Juniper needed to be darn sure they didn’t try.  That’s what Kevin Johnson was likely hired to do, and he didn’t do it successfully.  Juniper used to be a box company that couldn’t position strategically.  Now they’re part box company, part traditional software company, and still grappling with the real problem of defining “network software” and their role in it.

The Cisco acquisition of Sourcefire is even more logical in light of Johnson’s departure.  If Cisco can kill Juniper in enterprise security and cloud security while Juniper fumbles for CEO candidates, it won’t matter much who they end up with.  And security is only one of three or four cloud-related killer areas where Juniper needs a cogent strategy to develop a lead.  If they miss any of them, they’re at risk of losing market share, and if their P/E drops to the industry average they’re a seven-dollar stock.  Think M&A, but think of it under decidedly buyer’s-market terms.

Watch the CEO choice, and watch what they do in the first hundred days of their tenure.  This is do-or-die time for Juniper.

Can Cisco Ride Sourcefire to Cloud Supremacy?

Cisco today announced one of their bigger acquisitions—security specialist firm Sourcefire.  The move is likely linked to the trends in security that I’ve seen in our surveys—most recently the spring survey published in Netwatcher just a few days ago.  It’s also likely to be another Cisco shot at Juniper, whose enterprise strategy is heavily linked to security.

Enterprises have generally had a bit of trouble accepting the idea that security was a problem the network should solve.  For years, they rated it as a software issue even as publicized security breaches illustrated that hacking was a big problem for everyone.  Why?  Because they saw this as an access security problem, and thus a software problem.  This view was held by about three-quarters of businesses through the whole last decade.  What changed things was the cloud.

As cloud computing became a more strategic issue, businesses started thinking about security differently.  That started with a dramatic increase in the number who recognized multiple security models: network, access, software.  In just two years the number of businesses who saw security as multi-modal increased sharply.  The number who said that cloud security is a software issue fell by 10% in just the last year, and there was a significant increase in the number who saw cloud security as a network issue.

For somebody like Cisco, this is important stuff.  If network-based security is linked to cloud adoption, then Cisco clearly needs to be on top of network-based security if it hopes to achieve and sustain cloud differentiation.  Given that Cisco’s main cloud rivals are not network companies, Cisco’s best offensive play would be a holistic network strategy that included security.

That’s particularly true given rival Juniper’s reliance on security for enterprise engagement.  Juniper hit its peak of strategic engagement in security just at the time when network security was about to go on a tear, and surprisingly they lost ground steadily as security-in-the-network was gaining.  Cisco, who dipped a bit in influence in response to stronger Juniper positioning a couple years back, suddenly gained.  I think that can be attributed to Cisco’s taking a more holistic approach to “network” and “security”, something that the Sourcefire acquisition could easily enhance.

There’s also a strategic shift to be considered here.  With operators pushing for virtual appliances, security is an obvious target, and hosted security is also an element in rival Juniper’s plans for SDN and NFV.  Cisco wants to focus both the SDN and NFV debates on expanding higher-layer network services and capabilities, in the former case through APIs like onePK and in the latter by introducing more hostable stuff.  Sourcefire could offer both those options.

What isn’t clear at this point is whether Cisco would create or endorse a “structural” connection between security and SDN.  If you do application-level partitioning of cloud data centers—as opposed to purely tenant-driven partitioning—you have the potential for creating access control by creating application-to-worker delivery conduits at the SDN level, meaning that only workers or groups of workers with explicit rights could even “see” a given cloud app.  This is a logical path of evolution for SDN security, but it might be seen to undermine Sourcefire’s model of more traditional IDS/IPS.
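A sketch of what that structural connection could look like, in Python—the rights table and rule format are invented for illustration.  The key idea is that visibility itself is the access control: groups without explicit rights simply get no forwarding path to the application at all.

```python
# Hypothetical application-to-group rights table for an application-partitioned
# cloud data center
APP_RIGHTS = {"payroll": {"finance"}, "crm": {"sales", "finance"}}

def conduit_rules(app, worker_groups):
    """Emit SDN forwarding rules only for groups with explicit rights to the app.
    Groups that get no rule can't even 'see' the application at the network level."""
    allowed = APP_RIGHTS.get(app, set())
    return [{"match": {"src_group": g, "dst_app": app}, "action": "forward"}
            for g in worker_groups if g in allowed]
```

There is no “deny” rule anywhere; the conduit either exists or it doesn’t, which is what distinguishes this model from a traditional IDS/IPS sitting in the data path.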

One thing is for sure: Cisco is viewing IT and networking ecosystemically, a luxury that UCS gives it.  For all of Cisco’s enterprise rivals, there will be a significant challenge in matching that vision.  HP has both servers and networking, but its presence is more in the data center than in the WAN, and it’s not been successful in getting traction on its SDN approach.  IBM OEMs its network gear and has been losing strategic influence in all things network.  Juniper needs a super-strong security and data center story, but its security influence has lost ground over the last two years and its data center strategy has been muddled by poor QFabric positioning.

Cisco beats HP and Juniper in security influence even not considering Sourcefire.  IBM and Microsoft still lead Cisco in security influence, but obviously a shift in focus toward network-based security would benefit Cisco and hurt both its higher-rated rivals.  Even now, Microsoft leads Cisco by less than 10% and IBM leads by about 25%.  We could see Cisco take the number two slot by next spring, I think, and threaten IBM a year later.

Security is a big budget hook, the thing that has gotten more investment each year despite economic conditions.  If it can be made to pull through a larger network portfolio, which I think is possible, then it could cement Cisco as undisputed network leader in the enterprise network, and go a long way toward establishing Cisco as the player to beat in private clouds too.

I think the only solution for rivals is to get way out in front of Cisco on the SDN and NFV aspects of security.  Cisco will likely tread softly in creating revolutions in either space because of the impact they could have on its broader product lines.  Since all of Cisco’s enterprise rivals have a much smaller market share in network equipment, they could afford to poison the overall well just a bit in order to take share from the leader.  Will they do that?  It’s possible, but remember that none of Cisco’s enterprise rivals have been able to position their way out of a paper bag so far.  Cisco has already gained more in security influence than any competitor, and it could do more still.

Setting Boundaries in a Virtual World

Everyone knows you have to set boundaries in the real world, to ensure that friction where interests overlap is contained and that reasonable interactions are defined.  One of the things that’s becoming clear about virtualization (not à la VMware but in the most general sense) is that even defining boundaries is difficult.  With nothing “real”, where does anything start or end?

One area where it’s easy to see this dilemma in progress is SDN.  If you go top-down on SDN, you find that you’re starting with an abstract service and translating it into something real by creating cooperative behavior from systems of devices.  OpenFlow is an example of how that translation can be done: dissect service behavior into a set of coordinated forwarding-table entries.  Routing and Ethernet switching did the same thing, turning service abstraction into reality, except they did it with special-purpose devices instead of software control of traffic-handling.
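That translation step can be sketched very simply.  Here’s a toy compiler in Python—the rule format is a simplification of what an OpenFlow controller would actually install—that turns an abstract point-to-point service, described as a path of hops, into per-switch forwarding-table entries:

```python
def compile_service(path, flow_id):
    """Turn an abstract service path into per-switch forwarding entries.
    path is an ordered list of (switch, in_port, out_port) hops."""
    tables = {}
    for switch, in_port, out_port in path:
        tables.setdefault(switch, []).append(
            {"match": {"flow": flow_id, "in_port": in_port},
             "action": {"output": out_port}})
    return tables
```

The service abstraction lives only at the top; each switch sees nothing but its own match/action entries, which is exactly why the abstraction has to be held somewhere above the devices.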

But who’s to say that all services are made up of forwarding behaviors?  If we look at “Internet service” we find that it includes a bunch of things like DNS, DHCP, CDNs, firewalls, and maybe even mobility management and roaming features.  So a “service” is more than just connection even if we don’t consider SDN or virtualization trends at all.

The cloud people generally recognize this.  OpenStack’s Neutron (formerly Quantum) network-as-a-service implementation is based on a set of abstractions (“models”) that can be used to create services, and that are turned into specific cooperative behavior in a community of devices or functional elements by a “plugin” that translates model into reality.  You could argue, I think, that this would be a logical way to view OpenFlow applications that live north of those infamous northbound APIs.  But OpenFlow is still stuck in connection mode.  As you move toward the top end of any service, your view must necessarily become more top-down.  That means that SDN should be looking not at simple connectivity but at “service” as an experience.  It doesn’t have to be able to create the system elements of a service (DNS, DHCP, and even CDN), but it does have to be able to relate its own top-end components (“northern applications”) to the other stuff that lives up there and that it has to cooperate with to create the service overall.
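The Neutron pattern itself is easy to illustrate with a minimal Python sketch.  The class names and command strings here are invented; this shows the shape of the model/plugin pattern, not the actual Neutron API:

```python
class NetworkModel:
    """An abstract 'model' of a service: what the user asks for."""
    def __init__(self, name, subnets):
        self.name = name
        self.subnets = subnets

class VendorPlugin:
    """Translates the abstract model into concrete device behavior."""
    def realize(self, model):
        # A real plugin would drive vendor devices or OpenFlow switches;
        # here we just emit the command sequence it would generate
        commands = [f"create-bridge {model.name}"]
        commands += [f"attach-subnet {model.name} {s}" for s in model.subnets]
        return commands
```

Swap the plugin and the same model drives entirely different infrastructure; the catch is how few models there are to work from in the first place.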

Even the Neutron approach doesn’t do that, though OpenStack does provide through Nova a way of introducing hosted functionality.  The Neutron people seem to be moving toward a model where you could actually instantiate and parameterize a component like DHCP using Nova and Neutron in synchrony, to create it as a part of a service.  But Neutron may have escaped the connection doldrums by getting stuck in model starvation.  The process of creating models in Neutron is (for the moment at least) hardly dynamic.  An example is that we don’t model a CDN or a multicast tree or even a “line”.

The management implications of SDN have been increasingly in the news (even though network management is a beat that reporters have traditionally believed was where you went if you didn’t believe in hell).  It’s true that SDN management is different, but the fact is that the difference comes less from SDN than from the question of what elements actually control the forwarding.  When we had routers and switches, we had device MIBs that we went to for information on operating state.  If we have virtual routers, running as tenants on a multi-tenant cloud and perhaps even componentized into a couple of functional pieces connected by their own private network resources, what would a MIB say if we had one to go to?  This translation of real boxes into virtualized functions is really the province of NFV, not of SDN.

But SDN has its own issues in management.  The whole notion of centralized control of traffic and connectivity came along to drive more orderly failure-mode behavior and manage utilization of resources better.  In effect, the OpenFlow model of SDN postulates the creation of a single virtual device whose internal behavior is designed to respond to issues automatically.  Apart from the question of what a MIB of a virtual device would look like, we have the question of whether we really “manage” a virtual god-box like that in the traditional sense.  It is “up” as long as there are resources that can collectively meet its SLA goals, after all.  Those goals are implemented as autonomic behaviors inside our box and manipulating that behavior from the outside simply defeats the central-control mandate that got us there in the first place.
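In that view, “status” is a computed property of the resource pool, not a device state.  A tiny Python sketch of the idea—the structure and threshold are invented for illustration:

```python
def virtual_device_status(resource_capacities, sla_capacity):
    """A virtual 'god-box' is up as long as its surviving resources can
    collectively meet the SLA, regardless of individual failures."""
    usable = sum(c for c in resource_capacities if c > 0)  # failed units report 0
    return "up" if usable >= sla_capacity else "sla-violation"
```

A resource can fail outright and the box still reports “up” if the survivors cover the SLA, which is why poking at the internals from a traditional management console adds nothing and may actively fight the central controller.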

In any case, what is a DNS server?  Is it a network function (in which case the NFV people define its creation and control)?  Is it a cloud application (OpenStack’s evolution may define it)?  Is it a physical appliance, as it is in most small-site networks?  Maybe it’s an application of those northbound APIs in SDN!  It’s all of the above, which is both the beauty of virtualization and its curse.  The challenge is that a multiplicity of functional deployment options creates a multiplicity of deployment and management processes, and multiplicity doesn’t scale well in an operations sense.

I think we’re going about this whole network revolution in the wrong way.  I’ve said before that we have to have only one revolution, but we also have to recognize that the common element in all our revolutions is the true notion of virtualization—the translation of abstraction into reality in a flexible way.  If we look hard at our IT processes, our network processes, SDN, NFV, the cloud, management and operations, even sales and lifecycle processes related to service changes, we find that we’re still dealing with boundary-based assumptions as we dive into the virtual future, a future where there are no boundaries at all.  This isn’t the time for a few lighthearted bells and whistles stuck on current practices or processes, it’s time to accept that when you start virtualizing, you end up virtualizing everything.

Google and Microsoft: More than Mobile Problems

The earnings reports from Microsoft and Google followed the pattern of other tech reports from this quarter—a revenue miss offset at least in part by cost reduction.  There’s been a tendency for the Street to look at these two misses and declare a common cause—that both Google and Microsoft have failed to come to terms with mobile.  Wrong.  There is an element of common-cause here, but it’s related to my point yesterday about a tech industry focused on cutting costs rather than adding value.  You go after lower costs, and you succeed but bleed yourself out in the process.

Mobile isn’t a fad, but mobile change is an issue only for so long as it’s “changing”.  Advertising is surely impacted by mobile, but underneath all the hype the fact is that the biggest force driving mobile advertising differences is the difference between mobile and sedentary behavior.  If I’m out-and-about my use of online resources tends to be tactical, reflecting what my current behavior and goals require.  If I’m sitting at my desk or on a sofa, I’m grazing.  Mobile also presents less real estate on a screen to display something, and my own research says that users who do a search spend less time looking at the results on mobile devices.  You can see how this would impact Google, but it’s not clear what Google could do about it.

Microsoft is the same way.  Yes, Microsoft missed the boat with phones and tablets, and yes, they probably lost quite a bit of money in potential phone/tablet sales.  But had Microsoft jumped on tablets and smartphones from day one, would that not have reduced Microsoft’s sales of Windows for PCs even faster and hastened the shift to appliances?  Might that not have hurt more over time than waiting and losing that market?  Maybe, maybe not, but it shows that you can’t look at any given issue in isolation.

The proximate cause of Microsoft’s problems is the same as it was for Intel.  As computing technology improves, we can’t absorb the additional horsepower in the same application of the chips (and OSs) that we had.  Twice the performance of a laptop, these days, won’t generate instant refresh.  The improved price/performance has to be offset by increased volume, but if there’s no need to refresh then volume reduces.  Yes, tablets and smartphones are also hurting, but the shift to appliances and the need to increase unit-volume deployment is what’s driving those new gadgets.  And at some point, you fill that niche too.  So we look to smart watches, smart glasses, smart piercings for our navels, something we can swallow to convey our biologics automatically to our Facebook status…where does it end?  In commoditization.  You can never hope to automate everything.

For Google the problem is freeness.  Global ad spending is never going to be more than a percent or so of global GDP.  We cannot fund an industry, an economy on ads.  That Google may struggle with mobile advertising isn’t as significant in the long run as the fact that any ad-sponsored business will hit the wall eventually.  For years I’ve said that Amazon is the king of the hill in online companies, because it actually sells stuff.  Google’s advertising blitz is in some ways lining Amazon’s pockets, because the consumer who relies on Google to find a product is very likely to go to Amazon to buy it.  Google gets a few pennies in ad revenue and Amazon gets the whole retail margin.  Even if that margin is fairly low, you all know in your hearts that you are not going to spend more on advertising than on actually tendering the product to the buyer.

For Microsoft, there is neither a way of making PCs sell better nor a way to at this point capture the phone/tablet market.  For Google, there is neither a way to get significantly more share of online advertising without kicking off regulatory intervention, nor a way of growing that total market fast enough for its current market share to fuel its growth expectations.  Google needs to get people to pay for things.  Microsoft needs to be thinking about how the collection of technology that’s being linked to each user can be made into a cloud-facilitated and cooperative behavioral support ecosystem.

So why aren’t they?  I think there are three reasons.  First, the Street doesn’t want to hear long-term, they want to hear this-quarter.  Sell like hell today and let tomorrow take care of itself.  Well, it has.  Second, buyers are now conditioned to think in terms of getting free services and seeing reduced cost as the “benefit” of technology.  It’s going to be hard to wean them away from that.  Third, the online nature of news these days contributes to an instant-gratification cycle.  I get all kinds of requests to describe the workings of IMS and the evolved packet core in 500 words.  You could hardly even name the components in that space in 500 words, so how do we introduce wonderful new technology options that involve a collection of personal devices and a vast new cloud ecosystem?  Easier to say the new phone is “cool” or the service is free.  This quarter, we’re seeing that these indulgences aren’t free; the price is pretty high.