Analyzing SDN’s Potential Links to Security and Management

The whole topic of SDNs has been fuzzy almost from the outset.  Hopefully by now you know my own view: SDN is a transformation of networks whose value is largely created by the transition of IT to a cloud model.  In short, SDN is the network side of the cloud revolution.

Because “cloud” means “resource pool” to many, it’s not surprising that there’s been a major SDN focus on the data center virtualization missions that Nicira, recently purchased by VMware, also addresses.  But “SDN” isn’t “data center software defined networking”, so we need to think about its application in the wide world.  That raises the issues of security and management, issues that aren’t typically on the table for SDN discussions.

Since nobody thinks that software will directly control networks (imagine every software application grabbing an API and setting up connections; it chills the heart of any professional), the presumption is that some software process will act on behalf of applications at large to do that.  On the resource side of the SDN picture it’s clear that things like the DevOps stuff I’ve talked about and the Quantum network-as-a-service model of OpenStack can manage the aspects of network definition that relate to application provisioning.  The question is who manages the user stuff, and the answer can only be the security system, because for user-to-software connections it’s about rights.

An easy model for an SDN-based security framework is the “branch on-ramp” model.  In this model the DevOps processes that provision applications in the cloud also extend a set of application-specific virtual networks (OpenFlow pipes, if you like) to each worker location.  The workers, when they sign on to the network, are authenticated by the security system and assigned application rights.  This assignment opens forwarding rules that link them with the networks for the applications they’re allowed to use.  Non-authenticated workers have no rights, and workers have no access to the application virtual networks their credentials don’t validate them for.
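
To make the branch on-ramp concrete, here’s a minimal sketch of what that sign-on flow might look like.  The controller API (install_flow, remove_flows) and every name here are my own illustration, not any real product’s interface.

```python
# Map application rights to the virtual networks (vnet IDs are illustrative)
# that DevOps provisioning extended to this branch.
APP_NETWORKS = {
    "crm":     {"vnet_id": 101},
    "billing": {"vnet_id": 102},
}

def on_worker_authenticated(controller, branch_switch, worker_mac, rights):
    """Called by the security system after a successful sign-on."""
    for app in rights:
        vnet = APP_NETWORKS.get(app)
        if vnet is None:
            continue  # no such application network at this branch
        # Open a forwarding rule linking this worker to the app's network.
        controller.install_flow(
            switch=branch_switch,
            match={"eth_src": worker_mac, "vnet_id": vnet["vnet_id"]},
            action="forward",
        )
    # No rule means no access: traffic from workers whose credentials
    # don't validate them matches nothing and is dropped by default.

def on_worker_signoff(controller, branch_switch, worker_mac):
    # Revoking rights is just removing the worker's forwarding rules.
    controller.remove_flows(switch=branch_switch, match={"eth_src": worker_mac})
```

The point of the sketch is that “provisioning” the access network is nothing more than the security system translating credentials into forwarding state.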

At a deeper level this process might look like worker-class networks meeting application networks.  You can envision a series of virtual networks in each branch location, each network representing a class of worker there and linked to the appropriate applications.  A worker is credentialed into the appropriate class network and thus receives application access.  There are variations possible here, but the basic idea is that security processes control the “provisioning” of the access network in the same way that DevOps processes control the provisioning of the resource network.

The notion of two different virtualization models (branch and data center plus application) meeting through policies is clearly the end-game if you follow this approach.  Workplace virtual networks are separated by worker classification and data center networks by application.  The security systems then provide what’s essentially a firewall linkage between the two.  If the mobile worker is strongly authenticated, this model would offer a high level of security.

What about management?  In many cloud applications, the resource network will be contained within a data center and be subject to direct control.  It’s still likely that there will have to be some management link to the SDN processes to set up and restore paths, of course, and this would be a link to what I’ve always called the “Topologizer”, the element of the higher-layer SDN processes that understands the mapping between virtual network services and real networks.  That doesn’t answer the question of how things like fault correlation work, though.  A network problem in our example earlier would likely be seen by workers as an application access failure, and to fix virtual networks you have to push through the abstraction to the real stuff.

One way to mitigate management issues is to presume that the DevOps processes that provision SDNs also record their assignment of resources so that the dependency of the application virtual network structure on a specific set of devices or services from the network is recognized.  This could be used to address failures both as reported from above and from below.  In the former case, the user’s application-specific problem is sent to a DevOps-driven task that finds the network dependencies and determines their current state.  In the latter case a change in network state is sent to all DevOps-created virtual networks to inform them that anything that depends on the indicated resource is in a fault state.
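
Here’s a sketch of that dependency-recording idea, assuming (my assumption, not an established design) that provisioning logs which real devices each application virtual network was built on.

```python
from collections import defaultdict

class DependencyMap:
    """Recorded at provisioning time by the DevOps processes."""
    def __init__(self):
        self.vnet_to_devices = defaultdict(set)   # for top-down diagnosis
        self.device_to_vnets = defaultdict(set)   # for bottom-up notification

    def record(self, vnet, device):
        # Called as each real resource is assigned to a virtual network.
        self.vnet_to_devices[vnet].add(device)
        self.device_to_vnets[device].add(vnet)

    def diagnose(self, vnet, get_device_state):
        # Top-down: a user reports an application access failure, and we
        # check the current state of every device the vnet depends on.
        return {d: get_device_state(d) for d in self.vnet_to_devices[vnet]}

    def propagate_fault(self, device):
        # Bottom-up: a device fault flags every dependent virtual network.
        return self.device_to_vnets[device]
```

The same map serves both directions, which is the whole argument for recording resource assignments in the first place.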

Another strategy that could be used independently or in concert with the one above is to create “services” from the virtual network layer not above the network but in it, by managing network assets directly (using traditional virtual service models or OpenFlow-based models, or both).  If management processes in the network create these services then the virtual networks can be managed by those processes, since the relationship between the virtual and real must be known by the things that create the relationship in the first place.

Yet a third option is to forget traditional network management completely and focus on service management.  Assure the outcome, in other words.  If something breaks you’d presumably get a direct hardware report to act on for repair, but for analysis of trends and congestion or other “subjective” failure issues you rely on either telemetry at the service level or a report by the user.  Then you go to the DevOps notion of fault correlation to determine what you need to do.

I’m not arguing that these approaches are the only way to link security to SDNs or to manage SDNs, but I do think that unless the kind of issues I’ve raised here are addressed, we’ll be under-shooting SDN benefits and risking down-the-line operationalization issues as well.  And all of this should demonstrate that the key to SDN isn’t in OpenFlow or the controller, which only implement SDN policies that are created above, but in the stuff that IS above, the stuff that we’re not paying enough attention to at this critical period in SDN evolution.

Looking Beyond the Xsigo Hype

According to the running commentary after the announcement, Oracle’s purchase of Xsigo is a step toward SDNs, or maybe it’s to go after VMware in network virtualization, or to boost its cloud credentials.  And I thought coverage of the Nicira deal was vacuous!  Of course, the Xsigo deal probably doesn’t do any of these things, and it’s probably not aimed at doing them either.

The root of the problem here, I think, is that Xsigo says it has “virtual I/O”, and these days anything that’s “virtual” is “cloud” and “SDN”.  Not so, guys.  Virtual I/O is valuable in any data center that has a storage pool, and probably the largest market for it is in virtualization-based data centers.  And in any case, Xsigo grows its virtual I/O out of its real business—fabric.

Yep, underneath the hype, Xsigo is a fabric switch vendor, one of two (Mellanox being the other) that have made names for themselves in a quiet way by exploiting the InfiniBand switching technology that Intel helped introduce.  If you looked for the leading technology in data center fabric today, at least if you defined “fabric” in an objective way, InfiniBand would be it.  Oracle has used InfiniBand before, as have other IT giants, and so the move is more evolutionary than radical.  But what drives it?  I think it’s simple: own the data center.

If you believe in cloud revolution for enterprises, it starts in the data center.  If you believe in new network architectures, the data center will be at the heart.  If SDN means anything its meaning has to start in the data center, and if you’re a salesperson for network equipment or IT equipment you’d darn straight better be calling on the people who run data centers.  The bucks start and stop here.  And that’s what Oracle wants, the bucks.

Oracle makes servers, servers they want to make a fixture in data centers.  The cloud and virtualization are two drivers of data center technology change and thus two insertion opportunities.  Both the cloud and virtualization demand some rethinking of the relationship between storage and servers because a fixed or even traditionally provisioned relationship is too static for fast changes in application-to-resource mapping.  That means too slow for the cloud or even for dynamic applications of virtualization.  Xsigo offers virtualization of the I/O, so Xsigo solves a problem for Oracle.

So doesn’t this mean Oracle is after Cisco, VMware, and the cloud?  No.  Going after somebody is helpful only if they have the market you need to get.  Cisco is certainly a server competitor, but if you want to steal market share, steal it from IBM or HP or Dell.  VMware doesn’t make servers, so why chase after them?  The cloud is in its infancy, so if all you can do is sell fabrics to cloud providers you’re in for a long cold winter.  No, this is about positioning and issue ownership.  What drives change is new issues, because planners want to make sure that their long-lived capital investments actually DO live long.  Oracle is grabbing two hot things: fabric and virtual I/O.  They’ll follow the trail of the cloud to the bank, but they’d follow other trails too.  The cloud is only a means to an end.  Same with a collision with Cisco or VMware; if they’re in the way, you collide, but they’re just collateral damage.

Rounding out tech, we had our first Google+ hangout yesterday, a format we’re calling “Techout Live”™, and it featured a lively discussion of the Nicira deal and its impact on the network vendors.  If you missed it you can access it at http://youtu.be/BgnnHqtn94k.  Feedback is welcome because I want the concept to be as useful as possible.  Our conclusion was that while Nicira is no direct threat to networking at all, it does present the longer-term risk that IT players will suck up all the features and capabilities that users find useful, particularly for the cloud.  This could starve the network for differentiation.  The question, of course, is whether network vendors will embrace the SDN goals and make the network the essential place where they’re fulfilled, or watch as the money train departs.  That’s a question only time—and the vendors—can answer.

 

 

Why Google Won’t be Lighting Up Your Ground

Google’s 1Gbps FTTH Internet service is getting ready to roll out in Kansas City (MO and KS), and of course the hype is high on this one.  The idea that people could be getting Internet “twenty times” or “a hundred times” faster has obviously captivated journalists, but the truth of the deal is a LOT more complicated.  Anyone who’s experienced Internet service at widely different speeds probably knows that changes in subjective performance rarely match the changes in access speed.

The big question here isn’t whether 1 Gbps Internet is better, though.  The question is whether FTTH is practical, and interestingly Google is answering the question in the same way the market already has.  Look at their Kansas City plans and you see a kind of populist model for red-lining, with the rate of pre-subscription determining the priority for service deployment.  Google is admitting that you can’t offer this stuff everywhere; it has to be in a high-density location (city and suburbs) and it has to have a high enough socio-economic level to generate a respectable sign-up rate.  That’s exactly how FiOS works, and also why AT&T doesn’t deploy FTTH and Verizon does.  Google seems to be admitting that our demand density model of FTTH feasibility is spot on.

That doesn’t make this unimportant, though.  Google’s service will include TV, and as I’ve pointed out before it’s linear channelized delivery that makes access of any sort profitable these days.  So Google might be answering an important question: are there areas where demand density is high enough that an OTT giant like Google (or Apple, or Amazon) might make a reasonable profit on FTTH?  I’ve tried to run the model on that one, without success.  There are just too many variables in the assumptions, and I hope that Google removes some of the uncertainty one way or the other.

The problem is that Google’s motives here are almost certainly NOT to become an FTTH provider.  In the past (with mobile spectrum bids, for example) Google has used the threat of entering the access market to beat on carriers, as leverage in net neutrality fights and to reduce the chances the operators would impose usage pricing on residential wireline broadband.  The idea that Google can somehow do what a telco can’t because it’s an “Internet company” is the opposite of the truth.  Google’s ROI requirements are about half-again higher than a telco’s would be, so if FTTH doesn’t pay off for AT&T it would be less likely to do so for Google.  That means that even if Google finds pockets where high-speed FTTH can be deployed at a marginal return, they’re more likely to prove that there’s no total-market business case for it in the US than the other way around.

Australia may also be proving that point.  There, the national carrier (Telstra) has ceded its access infrastructure to a not-for-profit “public” corporation that will build out access and lease it to all comers.  The theory is that this would enable competition, but of course the only competition occurs higher than the access layer and that doesn’t promote building out FTTH or anything else.  Telstra recently increased pricing to offset losses from the transaction, and the Australian model demonstrates what I think is a crippling problem—it kills the possibility of cross-service subsidization.

If an operator can push a fat glass pipe to a home and sell good stuff over it, there’s an incentive to accept marginal profits or even a loss on the pipe itself.  If the pipe-pushing occurs in one company and the profit in another, there’s no mechanism for natural cross-subsidies.  To promote the fiber you’d have to increase the payments from the overlay carriers to the fiber provider, and now you’re starting to do settlement on the Internet.  That would be good if it were done overall, but to put it into place in one country and one part of the network isn’t likely to be workable.  Net?  Google is probably going to show us what we already know, which is that somebody willing to lose money can push glass pretty much anywhere, but those looking for a profit on investment are unlikely to make the ground glow under you.

 

Facebook’s Cloudy, Amazon’s Clear, Juniper’s Clearing (SDN-wise)

It’s still earnings season and so we’re getting fewer product announcements and more financial ones.  You can still learn a lot from the numbers, obviously, so we’ll look today at the latest revelations and what they might mean.

Facebook surprised the Street with a loss, due to employee stock compensation costs.  Revenue and profit ex that issue were slightly better than expected.  The problem was that the company offered no guidance.  I said about all I think is useful on Facebook yesterday; social networking is a double-barreled not-proven as a business model for a public company.  I don’t think the Street necessarily gets the details here, but they’re right to be uneasy.

Amazon suffered a significant drop in profits as higher costs eroded its overall margin.  The move initially sent its stock lower, but the shares have recovered.  Some still think that Amazon is trying too hard to be more than it is—the premier online retailer.  Some think that some of the businesses it’s pushing now, including the cloud, are just massive capital-eating cesspools.  Some think that Bezos is “the new Steve Jobs”.  Take your pick.

My pick is that Amazon is treading a thin wire over a deep abyss.  The online retail business is a thin-profit play that has its own set of risks, and competition from suppliers themselves isn’t the least of it.  If you have thin margins already but you generate enough cash, you can probably fund building cloud infrastructure as well as a service provider can.  That means that the cloud business isn’t necessarily a bad one for Amazon.  You can argue, I think, that Amazon is being aggressive there, but that’s what got them to their premier status in online retail.

That doesn’t mean that Amazon is doing everything right.  I think their major problem is that they’ve let themselves be caught up in the cloud myth.  OK, I understand that you don’t want to rain on any parade that’s taking your stock where you want it to be.  The problem is that if everyone assumes the cloud model means all IT migrates to Amazon, they’re doubly wrong: the cloud model doesn’t mean all IT migrates anywhere, and Amazon isn’t the likely recipient of most of the migration that does occur.  At some point, the market reality will collide with the myth and Amazon will suffer.  Needlessly, because it would probably be better able to face the future if it fessed up.

Amazon has taken a giant step by creating a kind of distributable browser function for Fire.  They’re a short step away from creating a tablet supercloud.  But wouldn’t that have been a compelling story to tell?  It would take them to the front of the line in collecting cloud dollars.  If they then said that they would be offering a mobile-empowered distributable-intelligence service ecosystem, they’d be in line all by themselves.  And they could say this now, with complete truth to back them up.  Nobody else can say it.  Steve Jobs would have sung this to the stars, and that’s why Bezos isn’t Steve Jobs.  You explode on the market scene in a burst of dazzling light, not come in via the stage door.

In a bit of recapitulation, I had a chance to talk with Juniper on their SDN strategy, this time to get some on-the-record positioning instead of an NDA briefing.  I was very interested in their responses to three questions.

My first question was whether Juniper’s OpenFlow implementation would allow devices that aren’t currently connected via “standard” interfaces (for control-protocol compatibility reasons) to be connected by OpenFlow pipes, with proper forwarding table control on both sides of a compatible interface.  The answer was “Yes”; Juniper believes an OpenFlow implementation should avoid proprietary extensions that might compromise this sort of connectivity.  This is important to me because it means that Juniper’s PTX fiber line could be connected to QFabric to create a distributed cloud data center.  I’d sure like to see them say that outright, but OK, we know now that it can be done.
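
In rough terms, an “OpenFlow pipe” between two such devices is just coordinated forwarding entries on both sides of the interconnect.  A hedged sketch, with a hypothetical controller API standing in for the real thing:

```python
def build_pipe(controller, side_a, port_a, side_b, port_b, vlan_id):
    """Install matching forwarding rules on both ends of an interconnect,
    so traffic tagged for the pipe flows across the compatible interface."""
    controller.flow_mod(side_a, match={"vlan": vlan_id},
                        actions=[("output", port_a)])
    controller.flow_mod(side_b, match={"vlan": vlan_id},
                        actions=[("output", port_b)])
```

Nothing proprietary is needed on either side of the interface, which is exactly the point of an extension-free implementation.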

My second question was whether Juniper intended to play or partner in the layers of the process that correspond to my “Cloudifier”, “SDN Central” and “Topologizer”.  Juniper does recognize this loose framework for an SDN stack, and it intends to play extensively in the Topologizer and at least somewhat in SDN Central; the details of the latter aren’t public yet.  That means that Juniper will frame service features from the network upward for creation of SDN services, expose control and status interfaces, and likely create operational abstractions (like the classic “Line”, “LAN”, and “Tree” connection topologies) at the Topologizer level to make it easier to map network services to cloud applications.  Will they orchestrate stuff?  They’re not saying yet.  Will they play in my “Cloudifier” layer?  Unlikely; that’s pretty high on the stack for somebody who doesn’t field servers, OSs, or middleware.
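
To show what I mean by operational abstractions, here’s a sketch of how a Topologizer layer might expand the classic connection models into point-to-point links.  The function and its names are mine, for illustration only, not Juniper’s.

```python
def endpoints_to_links(model, endpoints):
    """Expand an abstract connection topology into point-to-point links."""
    if model == "Line":                # exactly two endpoints
        a, b = endpoints
        return [(a, b)]
    if model == "LAN":                 # any-to-any: a full mesh
        return [(a, b) for i, a in enumerate(endpoints)
                for b in endpoints[i + 1:]]
    if model == "Tree":                # one root fanning out to leaves
        root, *leaves = endpoints
        return [(root, leaf) for leaf in leaves]
    raise ValueError("unknown topology model: " + model)

# endpoints_to_links("Tree", ["hq", "branch1", "branch2"])
# -> [("hq", "branch1"), ("hq", "branch2")]
```

Layers above ask for a “LAN” or a “Tree”; the Topologizer worries about what links that implies on the real network.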

My third question was whether Juniper saw the same three models of OpenFlow application that enterprises in the know now see.  They are the “on-top-of-network” tunnel and virtualization application, the “under-the-network” virtual fiber application, and the “alongside/inside” network zone model.  Juniper said that they saw these same models and were prepared to support them, in fact having that support in place for all of their OpenFlow-compatible devices.

Juniper is the first of the big vendors to go “on-the-record” on this topic, and their capabilities demonstrate that they do have SDN credentials.  I’m not sure that they’ve fully committed to a path of SDN promotion, meaning that they don’t play their cards with as much breadth and finesse as I think their technology justifies.  I do think that they are fully committed to SDN, though, and with some work I think they could support the complete SDN model that I’ve talked about in Netwatcher already and that I’ll be describing in tutorial videos early next month.

 

Earnings from A to Z

Alcatel-Lucent reported its numbers, and as was expected based on their earlier profit warning, the results were disappointing, with the company posting a revenue decline and swinging to a loss on the bottom line.  “…we must embark on a more aggressive transformation,” according to CEO Ben Verwaayen, referring to a restructuring plan that would cut costs and jobs.  I agree that an aggressive transformation is needed, but not that cutting jobs is the answer.

Remember the songs of the ’60s and “Blowin’ in the Wind”?  How many times must we hear “macro-economic” and “competitive pricing” before we understand that spending on networking is below par because we’re not creating enough value in the network?  Whether we’re talking about enterprise or service provider spending, the “I” part of ROI is dependent on generating the “R” part of it.  What’s frustrating to me is that Alcatel-Lucent has a very strong portfolio of very good stuff, things that could boost operator revenue.  The problem is that they just don’t seem to be able to make anyone understand it.

This is why I get frustrated by things like the SDN and “network virtualization” stories.  Tuning network behavior to applications fits under a broad category buyers would like to call “application networking”, and it includes performance management, security, virtualization, etc.  All of this can be managed from the network side upward, and I’d argue that the management of that stuff is the province of the SDN movement.  Some of it, and here we need to read that qualifying “some” very emphatically, can be managed from the network middleware side downward, using stuff like Nicira makes and VMware now owns.  So why have network equipment vendors not been leading the charge here?  At least two vendors I’ve had detailed briefings with (Ericsson and Juniper) have pretty decent SDN strategies.  We need to hear more from them about how these strategies actually advance application networking.

The “competitive pricing” problem that Alcatel-Lucent talks about is a problem vendors like them have created, a problem that arises from a steadfast refusal to develop or promote meaningful features that can become meaningful differentiators.  Bits, guys, are either one or zero; not much differentiation there.  And all of this at a time when buyers tell me in my surveys that they’re crying out for value-based justifications to build their network business case.  I can explain SDN to them in a half-hour.  I can explain why networking and SDN are a better way to get to application networking than a top-down approach.  I’ll be releasing an SDN video series within the next ten days to prove that.  So why can’t vendors with thousands of employees and billions of dollars in revenue do that?  It’s not lack of skill, it’s lack of will.  Fix this problem, vendors, or you’re all going to restructure yourselves into a vanishing point.

Another tale of quarterly woe came from Zynga, whose costs exploded far beyond its ability to cover them with revenue growth.  The blow-out was so significant it impacted Facebook’s stock, and the big question is whether Zynga might be a harbinger of future Facebook woe.  There are two reasons it might be.

Reason number one is that Zynga, like all social-linked activities, is fad-driven.  I’ve drawn the analogy to one song in this post; why not two?  Remember “New Kid in Town”?  Youth likes to experiment with life, and so they’re drawn to novelty.  Eventually, another “new kid” shows up and, to paraphrase the Eagles, “he’s all they want and you’re still around”.  That kind of instant up and down makes it hard to capitalize on something, particularly to capitalize on an IPO.  Facebook use seems to be plateauing, and many who have used it are now just checking in occasionally.  So it’s fair to ask whether people are bored with social networks.  Yes, perhaps, because they’re probably so overexposed to the updates about acquaintances that they’re bored with THEM.  Social networks stand or fall on socializing.

Which brings us to reason number two.  There has always been a question of just when an online user would see an ad.  Most of us (I sure have) learned to tune out display ads of all types long ago, and so I can honestly say I don’t recall a single ad that’s been presented to me in a year, at least.  Given this, how much real commercial engagement can come from social networking?  It has to come out of the “like” and “friend” and “follow” type of processes, and even there the problem is that the more you “like” the more you’re inundated with junk, and the more likely you are either to “unlike/unfriend” or simply to ignore everything.

Don’t get me wrong.  I think social processes are vital to online experience.  The question is what kind of process, and we probably should value platforms to support flexible socially framed services more than the services themselves.  Most of them will simply dissipate with time.

Two Whiffs and a Bunt in Earnings

Earnings reports always give you something, but they don’t always give you conclusive answers.  So it was yesterday with three tech companies, Apple, Netflix, and Juniper.  These three players epitomize the new network ecosystem and its confusion.

Apple missed, no question about it, but there is a significant question on whether the miss really means much. The market wisdom says it’s not Apple’s fault.  After AT&T’s report, which showed that customers might be holding off on iPhones until the new model came out, many had expected Apple to miss on its revenue line.  Their regular schedule of new models is now overhanging the sale of their current ones.  This, with no promise there even will be a new model!

The problem is that none of this should have surprised anyone, and there’s a potentially bigger problem with the Mac sales.  Pundits say that Mac sales were hurt by the late delivery of the new models in the quarter, but if everyone was waiting for a speculative iPhone model and the same mindset held for the Mac’s very real new models, why didn’t they buy when the products came out?  Pent-up demand is as logical as overhang because it arises from the same notion of unfulfilled expectation.

There may be an element of overhang, and maybe some buyers were too busy at the beach to get to their Apple Store in time for their purchase to count this quarter.  There may also be an element of saturation here.  How many times will even the yuppiest yuppie do an upgrade to be cool?  How long will having the latest iPhone be enough to make you cool?  And there are other questions.  Tablets are overhanging PCs, so clearly they’d overhang Macs too.  Might iPads be cool enough that you don’t need a Mac, so iPads might overhang the Mac even more?  And might Apple’s cloud activity still be half-hearted because they’re afraid that a cloud framework would open their historically closed ecosystem?  We need to watch Apple’s signals carefully here because there are some signs that things aren’t on that hockey-stick trajectory now, and may not return to that trajectory quickly.

Then we have Netflix.  The company missed profits and revenues as costs for programming went up and subscriber growth slowed.  This is the firm that some believed would sweep traditional linear TV out of the market just a year or so ago, and now it’s pretty obvious that Netflix is the one at risk of being swept away.  The problem is simple; you cannot make fresh content and promote it heavily when your business model is based on the presumption that you’re a cost-based alternative.  If you’re in a commodity business you’re a commodity player, and the fundamental reality is that except for very old content, the video space isn’t a commodity business.  People want to watch specific things, not just any old thing.  That means those who have rights to that stuff because they’ve produced it will have an advantage, and their return on investment depends on advertising—commercials.  You can’t make enough in streaming ads to fund the material you’re streaming.

Juniper has the distinction of being the only player in our trio who didn’t miss their numbers, but their guidance was again a bit tepid and it seems pretty clear that Juniper is relying on market changes more than their own changes to boost their sales in the future.  My problem with that is that market changes for everyone in the router/switch business are taking things the other way.  Operators have the same squeeze Netflix has; it costs them more to build capacity but they don’t gain compensatory revenue.  So you have to either help them with the revenue side or expect that they’ll demand steeper discounts on capacity-building and also slow-roll where possible.

Juniper has a decent SDN capability and they’ve articulated it at least in private briefings with some effect.  However, capability and product are not the same thing.  It’s clear from listening to the Street that the future of Juniper in their eyes depends on the PTX and QFabric and security.  OK, SDN and “network virtualization” both impact those spaces in some way.  If Juniper can grab onto SDN principles and leverage them in the areas of PTX, QFabric, and security then they can strengthen all their weak points.  That’s what they need to do, and soon, before more and more of the need for change is pushed up into network virtualization or down into agile optics.  PTX in particular is an assertion that electro-optical coupling in a network core is better than pure ROADMs or OTN.  That can be true if you insert SDN principles, IMHO, but not if you don’t.  Get a move on, Juniper, because others are moving on you.

 

New Cloud Video Tutorial Series

We’re happy to announce that we have released three new video tutorials, a series on cloud computing.  These are available on YouTube as follows:

The Cloud Revolution Part One – http://youtu.be/cD75sF9z2FU

The Cloud Revolution Part Two – http://youtu.be/VYq0s5wZdWY

The Cloud Revolution Part Three – http://youtu.be/0uEK2GdvLAg

You can also find the links for these videos on our new video information services website, http://www.ohnayitshay.com.

 

What’s Not Nice about Nicira?

Cisco says that they’re laying off another 1300 workers or about 2% of their employee base.  VMware says it’s buying Nicira.  Are these two things related?  I think they are.

Nicira is a “network virtualization” player, a company that has built a connectivity layer on top of traditional networking and used that layer to offer communications services to applications, particularly those running on virtualization platforms or in the cloud.  Unlike network-based VPNs or VPLS, there’s no real limit to the number of virtual networks Nicira can support.  And they can run above anybody’s network, so they are “platform independent”.  They can also provide connectivity features to applications, acting as a kind of shim between the real network interfaces and the applications’ southbound service APIs.  Want TCP/IP?  You may get it via Nicira and not directly, and so VMware becomes a network vendor, which they probably like given that their current quarter was light.  One analyst even said it was the future of networking.
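
For readers who want a feel for how an overlay like this dodges the limits of network-based VPNs, here’s a hedged sketch of the general encapsulation idea.  The field names and the 24-bit ID are illustrative of overlay designs generally, not Nicira’s actual implementation.

```python
VNI_SPACE = 2 ** 24   # a 24-bit virtual network ID vs. 4,096 VLANs

class OverlayEdge:
    """Virtual networks exist only as edge mappings plus encapsulation,
    so their number isn't limited by the physical network underneath."""
    def __init__(self):
        self.locations = {}   # (vni, vm_mac) -> physical host IP

    def attach(self, vni, vm_mac, host_ip):
        self.locations[(vni, vm_mac)] = host_ip

    def encapsulate(self, vni, frame, dst_mac):
        # Wrap a VM's frame for transport over anybody's IP network.
        host = self.locations[(vni, dst_mac)]
        return {"outer_dst": host, "vni": vni, "payload": frame}
```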

Of course it’s not that simple.  Network virtualization is at best like any other kind of abstraction, meaning that you still need a network to virtualize.  While network virtualization can help with aligning network connectivity and segmentation to computing virtualization, it’s hardly a total solution.  Furthermore, network virtualization is an application for software defined networking—if you believe in SDNs—and SDNs are a hot button with the network vendors.  Collision?  You bet, and be patient because that’s where Cisco jobs come in.

Applications link to networks via two distinct layers: what could be called the “logical” interfaces of the software APIs in the communication middleware, and the physical interfaces to the network—Ethernet, IP.  Neither Ethernet nor IP is a complete service protocol, which is why we have things like TCP to augment them.  We know that both have security issues too, so the point is that network service needs are likely to evolve.  You could argue that SDN principles represent a network-centric vision of that evolution, a bottom-up transformation.  You could argue that network virtualization represents a top-down software-centric vision.

So what does this have to do with Cisco jobs?  Every dollar spent on network virtualization is a dollar lost to Cisco, a dollar that could fund additional work and jobs.  Every feature that’s absorbed into network middleware is gone from the network, and with it the differentiation and margin protection that features bring.  More job risk.  Cisco at its Live event said it was getting architecture religion, but the question now is whether they can get it quickly enough.  There’s real work involved in making SDNs into the capability set that network virtualization already is.  Has Cisco done that work, or have any of its competitors?  We’ve reviewed some of the vendor SDN strategies, but there’s little we can say about them explicitly because the briefings were wrapped in NDA.  So yes, there’s progress, but how many of these vendors could have sold their SDN solution for a billion dollars like Nicira did?

SDN and network virtualization are two faces of the same coin, one seen from below and the other from above, and since neither concept is particularly well understood, most don’t recognize that.  They’re outriders in the battle for differentiation between IT and networking, and IT is winning because whatever might be said about the “architecture” of an SDN solution here versus a Nicira-style software solution, you can buy the latter and not the former.  This isn’t about boxes, it’s about capabilities.  Those who produce them win.

 

Why Should the Industry Fear My Getting a Roku?

We’re ending the summer soon, and with it what many TV viewers dread: the summer rerun period.  The interesting thing is that there are more options these days for summer viewing, and more viewers are exercising them.  As a result, there are potential changes in TV viewing that could have significant long-term market impact.

I bought a Roku 2 last week to deal with my personal frustration over summer programming.  Since my interest runs more to documentaries, there are fewer channels available, and the networks have taken up the habit of switching shows among themselves and then running those shows as “new”.  This, in my impromptu survey of consumers, was number two on the list of irks, following the protracted loss of new programming.

The combination of Roku and Amazon’s Prime videos gives me a lot of material, and that’s the problem for the networks and the linear TV providers.  In TV or movie format I can find literally hundreds of series, probably thousands of hours of viewing, for documentaries alone.  The quality of the viewing is virtually indistinguishable from HDTV programming, without skips or pixelization.  So here’s the question.  Will I, armed with so easy an alternative to traditional TV, be willing to put up with the same amount of drivel in programming this fall, or more, or less?  You know darn well what the answer is.

Anything that breaks linear viewing habits is bad news for the networks, the providers, and even the network equipment vendors.  For a cable company, it means paying to deliver content that not only isn’t what they get paid for, it competes with what they get paid for.  Can they then continue to build out capacity?  The decline in viewing translates to a decline in ad revenues for networks, which means either less programming or more junk programming.  How many reality TV shows can we have, after all?  Storage Wars, Shipping Wars…what’s next?  Lawn Wars?

The NCTA is rolling out its TV Everywhere platform, and that’s a threat in its own right.  Yes, it helps tie streaming to linear delivery which mitigates some of the negative impacts.  But it encourages people to use streaming, to view on alternative devices, and to exercise personal choice rather than viewing “what’s on”.  All bad for linear, all bad for access.

All of this comes as the FCC’s report shows that while ISPs are more honest about bandwidth claims, only cable and FTTH can deliver broadband in quantity.  In the US there are no major providers of either that don’t rely on linear TV for profit.  So why bother saying that the US needs to encourage FTTH deployment when, at the same time, we’re adopting behavior that undermines the last profits in the fiber loop?

My point is that this is an ecosystem, friends.  We are all going to live in a future composed from the sum of our present decisions.  Networks’ quest for quick quarterly returns is creating a future that undermines their own position.  Consumers’ quest for something for nothing is going to undermine their own choices.  So do we have a network future with no affirmative choices at all?  The operators and the enterprises in our surveys are pushing investment back, deferring to the second half what they would have spent in the first, then perhaps pushing it into next year.  We are seeing the effects of short-term thinking and planning, an effect that first makes it nearly impossible to avoid major pitfalls in the evolution of the market and second nearly impossible to address major opportunities.  There is a future out there that we could as an industry still grasp, but we can’t diddle forever and expect the option to stay on the table.

 

Microsoft and Google: Still Winning

As you all know, I like to read the earnings reports to spot critical trends, both with the companies themselves and for the markets they play in.  Microsoft and Google both reported earnings yesterday, and both gave the markets an upside surprise.  So what does that mean?

Media coverage of Microsoft has missed the point.  Though Microsoft posted its first-ever loss, the problem was the writedown of its ill-fated ad company aQuantive, not sales, and the company’s financial performance ex the exceptional items was better than expected, which is why its stock was up.  Microsoft seems to be benefitting from the business space more than the consumer space; PC sales were up in the former group and down in the latter.  This is almost certainly due to a shift of consumer demand toward the tablet, something that’s not yet happened (and may not, at least in comparable numbers) for business.

This trend isn’t wholly bad or good for Microsoft.  I think it validates the thesis that Microsoft sees its tablet and even phone opportunity as coming more from the business side, not so much because businesses will demand Microsoft stuff but because their inertia is higher and their demands a bit different.  Microsoft thus has more time and a slightly different target of opportunity to take aim at.

What will mature things on the business side is a mature conception of the cloud.  Right now less than a fifth of enterprises are fully cloud-literate, and before we can hope for a rational market we need literacy levels of a third or more, which we’re not likely to get until late 2013 at best.  But that means Microsoft has a year for Windows 8 to impact things, for better or worse, and likely just a bit less for Office 2013 and Surface.  It’s not going to be easy, and we can’t really read how Microsoft plans to proceed beyond what I’ve already said in prior blogs, but at least they have a shot.  Reports of the death of the PC, and of Microsoft, are greatly exaggerated.

The same can be said for Google.  The fact that they are “behind” in social networking generated a lot of hype that Facebook would swab the deck with them, performance-wise.  We’ve not heard from Facebook in the current quarter but the Street clearly doesn’t believe that it’s going to be swabbing many decks, and Google’s numbers were impressive.  Google is making display ads work, it’s making YouTube ads work, and it’s still making money aggressively on search ads even with per-click rates down.

The reason is that search is the thing most readily connected with buying.  Yes, some people do pure research on products or other stuff, but if you set aside geeks like me who entertain themselves by knowing things, the search engine market base is probably for the most part looking for stuff to do or to buy.  What better place to advertise?  Yes, it’s a market that’s going to hit the wall eventually.  Yes, SEO practices are making it harder to get good stuff from search.  But it’s not Facebook that will steal prospective buyers away, it’s Amazon.  If you can’t find good research by searching, you go to Amazon and just read reviews on products.

Motorola, which did OK, is probably Google’s hedge against change.  Mobility is where the future lies simply because mobile broadband can be—and is being—integrated into our lives in different ways.  The future of advertising, the future of broadband services, and even the future of the cloud (business and consumer) are tied up in mobility.  Google is going to take some flak for Motorola, as it did for YouTube, but it will probably make a go of the venture down the line, which is what’s making it a powerful player in the space.

Verizon also released its numbers, and they show that mobile is king already.  Financial analysts have summed up the earnings by saying “Mobile’s up, wireline’s down” and if you exclude FiOS that’s true.  Some look at the profit line for Verizon and think it shows the company is gouging the consumer, but the fact is that their return on infrastructure even in the mobile space is below that of Google (measuring return on capital overall).  It’s also getting lower, and that’s why operators are so hot on managing network costs and expanding their service profile via the cloud.

I think Verizon (and to a slightly lesser extent, AT&T) have a fairly realistic vision of how the cloud comes about and helps them make more money.  They see a new developer/partner ecosystem built on top of a bunch of as-a-service URLs exposed from underlying network and platform capabilities, and used to create new offerings at the retail level.  This process is interesting because it will create the largest carrier IT investment in history, and one that is entirely outside the traditional OSS/BSS stuff.  IMS isn’t part of OSS/BSS, and IMS is a voice service layer.  IPTV logic similarly is kept separate where it’s deployed; it’s under Operations and not under the CIO.  The cloud will likewise be separate, and for the OSS/BSS types the question now is whether they can sustain relevance, not whether they can take over.  Inertial thinking kept them from commanding the evolution and it could push them out of the game now.  Google, my friends, doesn’t have an OSS/BSS.