Earnings Show We Need Revolution Conservation!

We had some important tech earnings reports yesterday, and so we need to review them in a systematic way.  IBM and Intel are perhaps the two prototypical tech giants, and what they do in concert is a measure of the health of the industry.

Let’s start with IBM, who was a poster child for the view that software is where it’s at.  Their software revenue was up by about 4% y/y, which doesn’t sound great but is good in comparison with hardware numbers, which were off 17% by the same measure.  Mainframes were again the only systems bright spot for IBM, which is bad because clearly new customers are unlikely to come into the IBM fold by jumping into a mainframe.

In our spring survey, published in July’s Netwatcher, we noted that IBM has been on a two-or-more-year slide in strategic influence, and that the slide was almost certainly attributable to a lack of effective marketing/positioning.  IBM remains strong where its account teams are almost employees for all the time they spend on customer premises (read, mainframes).  They’re weak where channel sales diminish the ability of the sales process to overcome what are obviously major marketing weaknesses.

Intel had a similar quarter—disappointing revenue but better profits.  In Intel’s case it’s pretty obvious what the problem is, and in fact IBM’s problems in hardware also impact Intel.  Hardware price/performance has steadily increased over time, and while that’s been great for things like server consolidation via virtualization, it eventually becomes difficult to justify more muscle per system, which slows refresh.  In the PC space, people can stay with their old PC and buy a new tablet instead.  I think tablet sales are only part of the system problem; the other part is slower upgrades because we’re hitting the plateau in terms of how additional power can be usefully applied.

Revenue shortfalls are nothing unusual this quarter.  SAP also missed on revenues, as did Nokia and Ericsson.  What’s happening here in my view is that both Street analysts and the companies themselves have been relying on that mystical “refresh” driver or presuming (as Cisco has always done) that the demands of Internet users for bandwidth to watch pet videos will be met regardless of profits.  Clearly what’s happening is that the business buyer and the carriers are both demanding better return on incremental investment than current technology is offering, so they’re not refreshing things.

Verizon, who also reported, seems to validate some of this theme.  While the company is expected to boost capex modestly, it’s clear that the expansion is going to wireless and likely that wireline will actually take a hit, meaning that wireless spending will grow more than capex will.  Investment follows return, just like in your 401K.

On the network side, which is of course my focus, I think this demonstrates two key points.  First, there is going to have to be some creative cost management in network infrastructure for both enterprises and network operators.  The kind of revenue trend we’ve seen isn’t going to reverse itself in a year, so cost savings will be demanded to help sustain profits while revenues continue to tail off.  Second, eventually you have to raise the benefit side—revenue for operators, and productivity gains for enterprises.  This isn’t easy, and it’s a culture shift for vendors to support.  They don’t want to make that shift, but there is absolutely no choice in the long run.

Applying this to our current technology revolution trio of cloud, SDN, and NFV, I think there are also two key points.  First, we are not going to fund three technology revolutions independently.  Somehow all this stuff has to be combined into one revolution, a revolution that can manage the costs of the trio by combining their benefits and aggregating them into a single use case.  We’re anemic in this revolution-unification department.  Second, the unified revolution has to be aimed at revenue in the long run, with cost control as a short-term benefit.  That’s the polar opposite of how all three of our revolutions are seen today.  In my surveys, users were unable to articulate strategic benefits for any of the three technologies, only cost-reduction benefits.  Reducing costs reduces TAM, and doesn’t this quarter show where seeking lower revenue over time leads?  Vendors are crazy here, and they will have to get smart very quickly.

Cyan’s Metro Win: Shape of Things to Come

Cyan’s Telesystem win in packet-optical for its Z-series and SDN technology is an interesting indicator of some major metro trends.  While its victory over TDM is hardly newsworthy, it does show that packet advantages over TDM can justify a technology change—and any time you can justify technology change the barrier to doing something revolutionary is lower.  That’s one reason why metro is so important in today’s market.

Metro technology has historically been based on SONET/TDM, which offers high reliability and availability but is not particularly efficient in carrying traffic that’s bursty, like most data traffic.  Packet technology was designed (back in 1966 for those history buffs) to fill in the gaps in one user’s traffic with traffic from another, allowing “oversubscription” of trunks that gave as many as four or five users what appeared to be the full use of the path.  Obviously that saves money.

The challenge with packet has always been providing reasonable quality of service.  If bursty traffic bursts too much, the result is more peak load than a trunk can carry, which causes first delay (as devices queue traffic) and then packet loss (as queues overflow).  Sometimes packets can be rerouted along alternate paths, but such a move gives rise to a risk of out-of-sequence arrivals (which some protocols tolerate and others don’t).  The whole process of getting good utilization from packet trunks, and doing it with an operations burden low enough to build on rather than erode those early savings, has become a science.  We usually call it “traffic engineering”.
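
To put some toy numbers on that oversubscription-versus-congestion trade, here’s a minimal sketch in Python (all figures invented, not drawn from any real network) of bursty users sharing a trunk sized well below their combined peak: most of the time the trunk absorbs the bursts, but when too many users burst at once the backlog builds (delay) and then spills (loss).

import random

# Illustrative numbers only: 5 bursty users oversubscribing a trunk
# sized at roughly twice one user's peak rate.
N_USERS, PEAK_RATE, ACTIVE_PROB = 5, 10.0, 0.25   # Mbps per user, duty cycle
TRUNK_CAPACITY, QUEUE_LIMIT = 20.0, 50.0          # Mbps, Mb of buffer

queue, dropped = 0.0, 0.0
for slot in range(10_000):                        # one-second time slots
    offered = sum(PEAK_RATE for _ in range(N_USERS) if random.random() < ACTIVE_PROB)
    queue += offered - TRUNK_CAPACITY             # backlog grows when bursts collide
    queue = max(queue, 0.0)
    if queue > QUEUE_LIMIT:                       # buffer overflow means packet loss
        dropped += queue - QUEUE_LIMIT
        queue = QUEUE_LIMIT

print(f"avg offered load ~{N_USERS * PEAK_RATE * ACTIVE_PROB:.1f} Mbps "
      f"on a {TRUNK_CAPACITY:.0f} Mbps trunk, dropped ~{dropped:.0f} Mb")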

Metro networks are often very sensitive to traffic engineering issues because they often don’t present a high ratio of trunk to port speed, and that means a given port can influence trunk loading more.  There are fewer alternate paths available, and metro protocols often can’t reroute quickly and efficiently.  Furthermore, since most metro networks are aggregation networks rather than networks providing open connectivity among users, there are fewer service features to justify the multiplicity of protocol layers in a stack.  Since every layer adds to cost, the goal is to moosh things down into as few as possible, which is where packet-optical comes in.

Cyan has done a pretty good job of creating a packet-optical ecosystem around SDN concepts, though many of its competitors would dispute whether what they do is really SDN.  Given the loose state of definition for SDN today I think that’s a meaningless point, but I do think that Cyan may be stopping its SDN notions a layer too short.

Traffic engineering is something that should be done at the aggregate level.  We’ve always known that in core networks you don’t want to have application awareness because it’s simply too expensive to provide it when traffic is highly aggregated.  As metro networks get fatter, particularly deeper inside, you have the same issue.  To me, that means that traffic management should really be applied more at the service policy level than at the forwarding level.  For the management of connectivity, the relationships between users and resources at a broad level, you are better off using an overlay virtual network strategy—software networks based on SDN principles rather than forwarding-table control at the hardware level.  OpenFlow, as I’ve pointed out many times, isn’t particularly suited to managing optical flows anyway—it demands visibility down to packet headers, and optical flows are opaque to normal OpenFlow rule-processing.
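
To illustrate why OpenFlow and optical flows don’t mix well, here’s a sketch in plain Python data structures (a simplified rendering, not any controller’s actual API): an OpenFlow rule is built from packet-header match fields, while a wavelength-level flow exposes nothing those fields can see.

# A classic OpenFlow-style rule: match on packet headers, then act.
# (Simplified sketch; field names follow OpenFlow match conventions.)
flow_rule = {
    "match": {
        "in_port": 3,
        "eth_type": 0x0800,        # IPv4
        "ipv4_dst": "10.1.2.0/24",
        "tcp_dst": 443,
    },
    "actions": [{"output": 7}],
    "priority": 100,
}

# An optical flow at the packet-optical layer carries no inspectable headers;
# all a controller can reasonably steer is the envelope itself.
optical_flow = {
    "endpoint_a": "roadm-nyc-1:och-5",
    "endpoint_z": "roadm-bos-2:och-5",
    "wavelength_nm": 1550.12,
    "payload": "opaque",           # nothing here for match fields to see
}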

I think this double-layer SDN model is the emergent reality of SDN.  Make the top very agile and very “software-controlled” because software is interested in connectivity not traffic.  Make the traffic part as efficient as possible in handling flows with various QoS needs, but don’t couple software down to the hardware layer because you can’t have applications directly influencing forwarding or you’ll have total anarchy.  I think that what Cyan has done so far is consistent with the bottom of my dual-model SDN but I’m not convinced that they have played software-overlay SDN to its full potential.  Perhaps their Blue Planet ecosystem could support it, but their promotion of the platform talks about devices and not about software tunnels and overlays.

I also think, as I noted yesterday in comments about NSN’s CDN partnership, that we need to realize that SDN will never work at any level if we expect applications to manipulate network forwarding and policy.  We have to create services that applications consume, services that abstract sets of forwarding policies upward to applications, and then press the policies down to the lower SDN layer to make traffic flow in an orderly way.  That process is ideal for software overlays because it has to be very malleable.  We can define a service that looks like Neutron (the current name for OpenStack’s network interface, formerly Quantum) for cloud users, we can define it as Ethernet or IP, we can define it as a tunnel, a multicast tree…whatever we like.  That’s because software platforms for SDN have low financial cost and low inertia, so we can change them, or even swap them out, quickly.  Every good SDN strategy that works at the device level needs a smart positioning of a software overlay.
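
Here’s a rough sketch of the shape I have in mind, with all names invented rather than taken from any real product or standard: the application asks a service catalog for a named connectivity service, and the overlay layer quietly presses the corresponding forwarding policies down to the traffic-engineered layer.

# Hypothetical sketch: applications consume named services; the overlay
# layer maps them to lower-layer forwarding policy.  All names are invented.
class Underlay:
    # Stand-in for the traffic-engineered lower SDN layer.
    def apply(self, policies):
        for p in policies:
            print("forwarding policy:", p)

class ServiceCatalog:
    def __init__(self, underlay):
        self.underlay = underlay

    def create_service(self, kind, members, qos="best-effort"):
        # "kind" could be a Neutron-style tenant network, an Ethernet E-LAN,
        # an IP VPN, a point-to-point tunnel, or a multicast tree.
        policies = self._policies_for(kind, members, qos)
        self.underlay.apply(policies)      # pressed down, never exposed upward
        return {"kind": kind, "members": members, "qos": qos}

    def _policies_for(self, kind, members, qos):
        # Trivially chain consecutive members; a real mapping would differ
        # per service kind, but the application never sees this level anyway.
        return [{"connect": pair, "qos": qos} for pair in zip(members, members[1:])]

catalog = ServiceCatalog(Underlay())
catalog.create_service("elan", ["site-a", "site-b", "site-c"], qos="low-latency")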

Operations can also be facilitated with this mechanism.  As long as we create networks from devices—real or virtual, software or hardware—we will have to manage their cooperation.  A service network doesn’t have to be thought of as a device network at all.  Its behavior can be abstracted to a black box, and its management can be similarly abstracted.  Internal features drive black-box services to perform according to their external mission, and those same features can drive black-box management.  We can even create abstract management entities whose boundaries and properties are set to minimize operations costs, even if they don’t correspond to functional boundaries.  Virtualization is a powerful thing.

Packet wins over TDM have been ordained for a decade or more; it’s just a matter of waiting until you can write down the old stuff.  Packet wins over other packet technology will likely depend on the effective use of SDN-layer principles—and that’s particularly true in metro.

How NSN Undershot the CDN Opportunity

So near, and yet so far!  How many times have we heard that comment?  In the case of NSN with their recent deal with CDNetworks, we have a current example.  NSN is seeing beyond the present, but they’re not yet seeing the future of content delivery.  That might mean they’re not seeing, or optimizing, their own future.  Let’s look at why that is.

First, you have to look at the deal in context.  NSN previously had a relationship with Verivue, who had a really state-of-the-art cloud-CDN strategy that sadly they never positioned well.  We gave the combination our top spot in a Netwatcher review of CDN strategies last year.  But Verivue, likely in large part because of their positioning issues, ended up selling out to Akamai.  NSN doesn’t really want to partner with a global CDN player to get software for operator CDN deployment—cutting out the middle-man of NSN is too obvious a strategy.  They need something richer, more differentiating.

Second, you have to look at the concept.  NSN is trying, with its “Liquid” initiatives, to marry CDN and mobile.  Great idea, but this isn’t the way to do it.  Mobile services are defined by a set of standards most collectively call “IMS”, but which also includes the media resource function and evolved packet core (MRF and EPC, respectively).  What we need to do is to frame both these functions, and everything else that happens in a metro network, as a set of “infrastructure services” that are exposed through Network Functions Virtualization, implemented using SDN, and used by everything that runs in metro, from mobile to IPTV.  If you propose to solve the problem of mobile content by creating CDN-specific caching near towers, you fly in the face of the move toward creating generalized pools of resources.  CDNs are, after all, one of the NFV use cases.

NSN’s big move has been the shedding of non-mobile assets to focus on mobile, but their focus isn’t the focus of the buyer.  Any redundant parallel set of networks is just a formula for cost multiplication these days.  NSN may want to speak mobile, but buyers want to speak metro, and that means that NSN should be thinking about how “Liquid” talks to these infrastructure services.  And, of course, how those infrastructure services are created, operationalized, and (yes, because what mobile operator is an island?) federated across multiple providers.  CDNetworks is not going to do all of that in a way consistent with other compute/storage applications.  Maybe not at all.

And even if they could, we go back to the middleman positioning issues.  NSN can’t be a leader by just gluing other pieces of technology together.  Professional services of that sort demand some product exposure across all the areas being integrated, so you don’t establish your credibility by shedding those areas and focusing only on mobile.  NSN needs to be a CDN giant, a cloud giant, and a mobile giant—and an SDN and NFV giant too—because the current metro market demands a holistic solution or costs will be too high to support continued operator investment.  And, folks, if you can’t make a profitable investment in the metro network, there’s no place on earth left for you to make a buck as a carrier.

So what the heck are “infrastructure services”?  They’re a combination of connectivity and hosted functionality.  IMS is one; so are EPC and MRF.  So is CDN, and so is cloud connectivity.  So are a bunch of things we’ve probably not thought about—the higher-layer glue that makes lower-level connectivity valuable, just like call forwarding and call waiting and voicemail make voice services valuable.  You create infrastructure services by hosting network functions via NFV, combining them with SDN connectivity, and offering them to applications as useful network tools.  Which is what NSN needed, and still needs, to do.  Except that they didn’t.
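
To make that a bit more concrete, here’s a purely illustrative descriptor for a couple of infrastructure services; none of the field names come from any standard or product, they just show the pairing of hosted functions (NFV) with connectivity (SDN) behind a single exposed interface.

# Purely illustrative descriptors: an "infrastructure service" pairs hosted
# functions with the connectivity that binds them and exposes one interface.
cdn_infrastructure_service = {
    "name": "metro-cdn",
    "hosted_functions": [                 # deployed as virtual functions (NFV)
        {"function": "cache-node", "scale": "per-metro-pool"},
        {"function": "request-router", "scale": "regional"},
    ],
    "connectivity": [                     # built as SDN-controlled paths
        {"link": "cache-to-aggregation", "qos": "video-grade"},
        {"link": "router-to-origin", "qos": "best-effort"},
    ],
    "exposed_interface": "content-delivery",  # what IPTV, mobile, etc. consume
}

mobile_infrastructure_service = {
    "name": "mobile-core",
    "hosted_functions": [{"function": "IMS"}, {"function": "EPC"}, {"function": "MRF"}],
    "connectivity": [{"link": "signaling-mesh", "qos": "control-plane"}],
    "exposed_interface": "session-management",
}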

The good news for NSN is that others aren’t doing it either.  Look at SDN today, in a service sense, and you see a technology collection striving for invisibility.  We want…to do what IP or Ethernet did before!  So why migrate at all, except for cost savings?  And cost savings to a vendor means “cutting me out of more of the pie”, so why would you want to go that route?  Look at NFV and you get the same thing—host virtual functions on servers instead of running them on purpose-built hardware that vendors are making me pay through the nose to get.  If you’re a vendor, you’re again on the “cut” side of the picture.  This reality has pushed all the big network vendors into the blow-kisses-at-something-so-I-don’t-look-bad position.  So they’ve blown kisses for all they’re worth.  NSN, who now has less to defend, could make their portfolio poverty into an asset by threatening the network equipment space with a stronger SDN and NFV story.

So could Brocade or Ericsson or Extreme or Juniper.  All these vendors have the risk of narrow product footprints.  Why not turn that into an asset by proposing a revolutionary change, a change that threatens most those who offer the most today?  NSN can’t bank on being joined by all its competitors in the stick-your-head-in-the-sand game.  You can see how a metro revolution as a strategy would benefit NSN, but perhaps they can’t see it themselves.  That’s a shame, because the new era of networking, where connectivity and the cloud fuse into a single structure, is upon them.  They could lead, and they could have made CDN a showcase for their leadership.  Now they’re following, and the wolves are at their heels.

One for the Merger, Two for the SMB, Three for the Cloud

What do AT&T’s decision to buy Leap Wireless, Cisco’s decision to do a cloud partnership with Microsoft, and Amazon at three hundred bucks a share have in common?  They’re symbols of a market in transition and a call for action to start gathering your troops for some coherent planning.

Traditional communications services, which are services of connection rather than experiences to be delivered, have been commoditizing for some time.  The current model, where all-you-can-eat Internet is the service dialtone, compromises network operators’ ability to gather profits from their massive investments in infrastructure.  Wireless, which has been less a victim of the shift than wireline, is now demonstrating that it’s not immune, just perhaps resistant.  Leap, a low-end player, is a way for AT&T to get more subscribers, and that’s your only option in a market where all the major wireless operators believe that ARPU either has plateaued or will do so by year’s end.

If you’re facing ARPU pressure and lack of additional customers to grow into, your most obvious option is cost control.  A couple of analysts and reporters have remarked on the astonishing level of support operators have given the whole Network Functions Virtualization thing.  Surprise, surprise!  It is an initiative aimed at gutting the cost of the network by translating more functionality into software (preferably open-source software) to be hosted on cheap commercial servers.  That this will gut the network vendors along the way should be clear to all, but hey, if you’re the buyer, your own life is paramount.  If you die off, vendors get nothing.

The second option for the operators, of course, is the one they should have taken from the start and wanted to take from the start—get into the experience-based services game.  One of my big surprises about the NFV process is that it’s so focused on cost that it almost ignores opportunity.  That’s something that the vendors involved should be very worried about, but they don’t seem to be.  Perhaps that’s because operators have for five years tried to get vendors to help them with service-layer monetization and vendors have simply ignored the requests.  The operators didn’t stop buying then, so why believe something different will happen now?

One of the darlings of the operators, everywhere in a geographic sense, wireline and wireless, business or residential, local or national, is the cloud.  Operators read the rags (or whatever an electronic rag is called) and they’ve been infatuated with the cloud hype too.  Yes, cloud hype.  The whole of the cloud market wouldn’t keep the lights on for a big Tier One for a month at this point.  Which brings us to Cisco and Microsoft.

The big issue with the cloud, which is a positive for the prospects of the operators, is that the best cloud value proposition exists for the SMBs, and nobody much can sell to SMBs directly.  Most can’t sell to them at all.  Microsoft and Cisco would love to get the SMB cloud socialized for both their benefits, so they’re banding together to push back the boundaries of darkness and ignorance, the big problem with the cloud for SMBs.  If you look at SMB literacy in the cloud space you find that it’s below the percentage who can sing the Star Spangled Banner.  Teaching them to sing might be easier, but hardly as profitable, so Cisco and Microsoft forget their traditional enmity and hope to find common programs to advance cloud adoption.  Ultimately that has to lead to SaaS, because what else could a non-tech-literate player consume?  Cisco and Microsoft have the same challenge, so cooperation is logical.

Which isn’t the case for either Microsoft or Cisco with respect to Amazon, our new stock-market darling.  One giant cloud seller is hardly in Cisco’s interest, and Amazon doesn’t sell Azure so Microsoft doesn’t care much for them either.  Plus Amazon is in a core business (online retail) whose profit margins are at the noise level.  They’ve been smart by opening up electronic media (Kindle tablets) and the cloud, but they will have to struggle to justify that kind of stock price in the real world.  Because the Street trades on momentum, the sky’s the limit in the near term, but a high P/E multiple demands some “E” to justify the “P”.  From IaaS?  I don’t think so.

In many ways, Amazon is like a telco.  Their core business is a cash cow but hardly a generator of big margins.  The IaaS market, the low-margin king of cloud services, can still be roughly as profitable to them as online retail is.  Telcos, as former public utilities, have similarly low ROI on their core business, and so they can also be profitable in the IaaS conception of the cloud.

But remember Cisco and Microsoft?  If SaaS is really what SMBs want to buy and what vendors would really like to sell them, then SaaS has to be where the cloud is going.  So the question for Amazon, and Cisco/Microsoft, and even AT&T and other operators is how to get there.  You have to have software to deploy.  You have to have effective deployment/hosting processes, and you have to be able to manage what you do so that the quality of experience is good enough that users will pay for it.  And, of course, you have to do this in such a way as to make a profit.

So that’s what all these companies, all these news items, have in common.  We are trying to elevate services, and the logical place to do the elevating is in the cloud space where the transition from basic (IaaS) to advanced (SaaS) is fairly well defined both in terms of business model and technology model.  But what we’ve now got to do is fit all this into a framework that can be profitable.  That’s something that the NFV people could undertake, if they want, but they’re going to have to start down that track soon if they want to make useful progress.

Can Networking Learn from Microsoft?

Yet another set of Street reports and research reports have stopped just short of describing the PC market as “dead”, though a close analysis of the data seems to suggest that the declines are hitting a plateau.  My view has always been that we’re seeing the “browser bunch”, those who see technology as a pathway to the Internet, migrating to smartphones and tablets.  The people who really need to compute are much less likely to abandon the PC, but that will still create market conditions that are downright scary for those whose wagons have long been hitched to the PC star.  Windows 8, we hear, hasn’t succeeded in driving a PC refresh cycle.  Who thought it would, other than perhaps Microsoft, is the mystery to me.

Microsoft is obviously at risk here because their incumbency is almost totally PC-based.  The company is now reorganizing to avoid the classic problem of product silos, but if you look at the details they’re really only creating a smaller number of bigger silos.  No matter whether you have two or ten product groups, you still have to define and enforce a unifying vision of your market or you’ll never tie them together.  If you can’t link your products, you’re not going to enjoy any symbiosis or pull-through.

Microsoft is pulling all its system software under one group, all its “devices” into a second, its Office and Skype products into a third, and the final group will be its cloud and enterprise stuff.  While going from eight silos to four does reduce the issues of “taking a dependency” that have crippled Microsoft’s cross-silo coordination, it’s a small step.  Clearly this new structure doesn’t make the product groups independent of one another; it may in fact make hardware/software coordination harder, and it seems likely to hamper cloud evolution.  Why separate the cloud when all your other stuff depends on it?

It’s not that Microsoft couldn’t fix the cloud-integration problem, but that if they knew how to do that they’d never have needed a reorg in the first place.  What I think is lacking with Microsoft’s vision is that it’s not a vision but a personnel exercise.  Like every tech company on the planet, Microsoft’s biggest problem is coping with the “cloud revolution”.  PC problems are due to the cloud, even if we don’t think of “the Internet” as “the cloud”.  Opportunities in content, applications, services, devices, games…you name it…are all being generated by the cloud.  No matter where you put the cloud organizationally, you have to put it on a pedestal by making it your positioning centerpiece.  Azure could be considered the technical leading edge of the cloud platforms, but you’d never know it from Microsoft’s positioning.

Microsoft’s dilemma can teach some lessons for networking vendors too.  If you look at the networking giants of today, you can argue that only Cisco has a notion of charismatic and cloud-centric marketing.  While Cisco has product groups, they don’t seem to be political fortresses in the same way that they’ve become in other companies.  Alcatel-Lucent has yet to get beyond hyphenation in integrating its culture, NSN has avoided product collisions by shedding product lines, Ericsson wants buyers to pay for them to integrate products by buying professional services, and Juniper wants its product groups to make their own way in the world without any collective vision.

The cloud is the future, and maybe the problem vendors are having in accepting that is their narrow vision of the cloud.  Elastic, fully distributable, resources in compute, storage, and knowledge are game-changers in how we frame consumer and worker experiences alike.  Something like Google Glass and augmented reality could be a totally revolutionary element in our work and lives, and yet we’re not really even thinking about just where it could go.  People write more about “privacy risks” to Glass than about Glass applications.  Show me how wearable tech risks privacy more than “carryable tech” like a smartphone!

The cloud vision of what-you-need-when-and-where-you-need-it revolutionizes networking, computing, application design, appliance design—pretty much everything.  Microsoft and all my network vendor friends can see this revolution from either side of the barricades, pushing to lead the charge or hiding in a woodpile.  Every company in the tech space needs to embrace some vision of the cloud future and promote their products and evolve their strategies and refine their positioning and establish internal lines of communication to support that vision.  If your company hasn’t done that, you are behind, period.

The specific things we need to be thinking about?  Three.  First, how do you build componentized software that can be assembled to create the new unified cloudware that is part application and part service?  Second, how do you deploy and operationalize an experience that doesn’t have a single firm element, either in terms of resources or in terms of mission?  Finally, what does a network that hosts experiences look like?  And don’t be tempted to pick a single element from my list that you’re comfortable with and hunker down on it!  You’re all-in or all-out in this new game.  Even if you don’t make software or don’t make networks, you have to make something that accommodates all these changes and so you have to promote a holistic vision of change.

I’m learning a lot about the cloud future by trying to solve some of these problems in a realistic way, and the thing I’ve learned most so far has been that we still don’t think holistically about “experiences” at the consumer or worker level, and holistic experiences are all people ever see.  The whole of the Internet, after all, is compressed into a social portal or search engine or a Siri voice recognition process.  The whole of technology, for most, is compressed into the Internet.  This is an ecosystem, and if you don’t know where you sit on the food chain naturally you have little chance of improving your life.

We Announce CloudNFV

Those of you who follow me on LinkedIn may have caught my creation of a LinkedIn Group called “CloudNFV”.  Even though the group is currently invitation-only I’ve received many requests for membership, and a select few found the CloudNFV website and figured out a bit of what was going on.  Craig Matsumoto of SDNCentral was one of that group, and he called me recently for comment.  We haven’t opened up yet, but I did tell Craig some things (see his article at http://www.sdncentral.com/news/nfv-stealth-group-working-on-a-prototype/2013/07/) and I want to introduce my activity to those who follow my blog.

This all started because, by background, I’m a software architect and director of software development.  I’m far from being a modern, functioning programmer, but I’ve still kept my hand in the process and I understand successful network projects because I’ve run them.  I’ve also been a member of multiple network standards activities, and one thing I learned from the combination is that traditional international standards activities don’t do software.

When Network Functions Virtualization kicked off last fall, I emailed all the operators on the white paper list and offered a recommendation—prototype this thing as quickly as possible to minimize the risk that the standard will turn out not to be a useful guide for an implementation.  I also suggested that the body forget the notion of designing NFV to be capable of running on everything from bare metal to the cloud, and adopt the presumption of cloud implementation.  That would simplify the process of specifying how functions could be deployed and operationalized, and take advantage of the enormous body of work cloud computing has produced.

In the early spring of this year, I had an email and phone exchange with the CEO of a highly innovative software company, and I saw their stuff as a logical element in implementing an NFV prototype.  The CEO agreed and attended the April NFV meeting.  At that meeting I made a number of suggestions about implementation, and my comments generated quite a bit of discussion in the parking lot during the breaks.  From the dozens who stopped by and chatted, there were a small number who stayed.  These were the very kind of companies whose contributions would complete an NFV implementation, companies who represented the best technology available in each of their areas.  They agreed to work together as a group, and CloudNFV was born.

There are three foundation beliefs that have shaped our activity since that parking-lot meeting.  One was that we needed an NFV prototype to create a software framework that would pull NFV out of a standard and into networks.  Another was that the cloud was the right way to do NFV because it’s the right way to do computing.  The last one was that as cloud computing converged more on SaaS, cloud providers would converge on the deployment needs and operations needs that were shaping NFV requirements.  There is, there can be, only one cloud, and NFV can define how both applications and services deploy on it.

So we’re doing that “supercloud” deployment and operationalization, following the model of NFV work but framing it in terms of the three guiding beliefs.  We’ve combined the elements needed for optimized hosting of network functions on computers with tools to gather network intelligence, a highly innovative virtual-networking model that recognizes the multidimensional nature of SDN, and a powerful unified knowledge framework that manages everything about functions, applications, resources, services, and policies as a single glorious whole.  And none of this is from giant network vendors; none of the group fall into that category.  We’re not a startup trying to be flipped, we’re a community of like-minded people.  Some of us are small in size and some are IT giants, but all are thought leaders, and all are working toward a strategy that will be effective, open, and extensible.

And we will be open, too.  When we’re ready to announce public demonstrations we’ll also announce when we’ll publish our data models and interfaces, and we’ll describe how we’ll be able to expand the scope of our prototype to include other software and hardware.  We’re not going to offer to integrate the world at our own expense, but we will cooperate with others willing to expend some of their own resources to link to our process.  We’re hosting in one of our member’s labs now, but our approach can support federation naturally so we’re happy to install in operator labs anywhere in the world and run as a global ecosystem.  Nobody is going to be shut out who really wants to participate openly and fairly.  We’re not sanctioned by the NFV ISG, but we do insist that those who want to join also join that activity.  It’s not expensive, and you can’t be committed to NFV without being committed to the international process that’s defining it.  We are, and we’re committed to making it work for the network of today and for the cloud of tomorrow.

If you want to participate in CloudNFV when it opens up, I’ll ask you to send a Join request to the CloudNFV LinkedIn group if possible, and if not to send me an email directly.  You can review all the information we’ve made public (including a link to this blog and Craig’s article) on the CloudNFV website (http://www.cloudnfv.com/).  I’ll update the site as we add information, and this site will be the repository for our public documents as they’re released.

When?  We will be scheduling selected carrier demonstrations in early September and we’ll likely be doing public demonstrations by mid-October.  Somewhere between we’ll begin to publish more information, and we expect to be able to show an NFV deployment of IMS with a basic TMF Frameworx integration and full operational support including performance/availability scale-up and scale-down, by the end of this year.  You’ll see then why I say that CloudNFV is as much “cloud” as it is “NFV”.  Well before then we’ll be starting to work with others, and I hope you’ll let me know if you’re interested in being one of them.

Are We Selling Virtualization Short?

We clearly live in a virtual world, in terms of the evolution of technology at least, but I really wonder sometimes whether everyone got the memo on this particular point.  It seems like there’s a tendency to look at the future of virtualization as one focused on creating virtual images of the same tired old real things we started with.  If that’s the case, we’re in for a sadly disappointing future in all of our tech revolutions—cloud, SDN, and NFV.

I’ve commented before that cloud computing today tends to be VM hosting as a service, which confines its benefits to fairly simple applications of server consolidation.  What the cloud should really be is an abstraction of compute services, a virtual host with all sorts of elastic properties.  We should define the properties of our virtual computer in terms of the normal compute features, of course, but we should also define “platform services” that let applications take advantage of the inherent distributability and elasticity of the cloud.  By creating cloud-specific features for our virtual computer, we support applications that can run only in the cloud, and that’s the only way we’ll take full advantage of cloud capabilities.
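
As a sketch of the difference, using invented names throughout: bare VM hosting just runs the same application on rented iron, while a virtual computer with platform services lets the application ask for cloud-only behaviors like elastic scaling and distributed state.

# Invented names; a sketch of "platform services" versus bare VM hosting.
class BareVM:
    def run(self, app):
        app()                               # same old server, now rented

class CloudPlatform:
    def run(self, app, min_instances=1):
        self.scale_to(min_instances)
        app(platform=self)                  # the app is written *to* the platform

    def scale_to(self, n):
        print(f"elastically running {n} instance(s)")

    def shared_state(self, key, value=None):
        print(f"distributed state: {key} = {value}")   # location-independent store

def cloud_native_app(platform):
    platform.shared_state("session:42", {"user": "abc"})
    platform.scale_to(3)                    # a request that only makes sense in the cloud

CloudPlatform().run(cloud_native_app)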

Why do we get fixated on the low apples of the cloud?  Partly because vendors are so fixated on current-quarter numbers that they don’t even think about the end of the year.  Partly because socializing a complete shift in IT technology is beyond our marketing/positioning/media processes.  Partly because any populist revolution (which all of tech is these days) has to dumb down to be populist.  But we can all drive without being automotive engineers, so it’s possible to make complex technology changes consumable, right?

Well, you’d never guess that was the case in the SDN space.  Software-defined networking today offers us two totally different visions of networking.  One vision, promoted by the software-overlay players, says that “connectivity” is an artifact of the network edge, an overlay on featureless transport facilities.  Another, the OpenFlow hardware vision, says that networking is a totally malleable abstraction created by manipulating the forwarding rules of individual devices.  And yet what do we do with both these wonderfully flexible abstractions?  Recreate the current networks!

Then there’s NFV.  The original concept of NFV was to take network functions out of purpose-built appliances and host them on commercial off-the-shelf servers—if you’re a modernist you’d say “on the cloud”.  How much real value could we obtain by simply moving a firewall, or even an IMS component, from a custom device to a COTS server?  Whatever operations issues existed for the original system of real devices would be compounded when we mix and match virtual functions to create each “virtual device”.  We have to manage at two levels instead of one, and the management state of any network element that’s virtualized from multiple components has to be derived from the state of the components and the state of the connectivity that binds them into a cooperating unit.
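
Here’s a minimal sketch of what derived management state might look like, with invented status values: the “virtual device” never reports its own health; its state is computed from the component functions and the connectivity that binds them.

# Invented example: the state of a "virtual device" is derived from the
# states of its component virtual functions plus their interconnections.
def derive_virtual_device_state(components, links):
    states = [c["state"] for c in components] + [l["state"] for l in links]
    if any(s == "failed" for s in states):
        return "failed"        # any broken piece breaks the virtual device
    if any(s == "degraded" for s in states):
        return "degraded"      # partial trouble surfaces as degraded service
    return "up"

virtual_firewall_state = derive_virtual_device_state(
    components=[{"name": "fw-vnf", "state": "up"},
                {"name": "dpi-vnf", "state": "degraded"}],
    links=[{"name": "fw-to-dpi", "state": "up"}],
)
print(virtual_firewall_state)   # -> "degraded"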

The real answer for both SDN and NFV is to go back to the foundations of virtualization, which is the concept of abstraction.  Take a look at Metaswitch’s Project Clearwater architecture (the link is:  http://www.projectclearwater.org/technical/clearwater-architecture/) and you’ll see that we have a cloud-ready IMS at one level.  Look deeper and you’ll see that Clearwater abstracts IMS into a black box and then defines a component set that roughly maps to 3GPP elements but that cooperates to create the expected external interfaces.  That, my friends, is how abstraction should work.  Which is how the cloud, and SDN, and NFV should all work.

Flexibility that expands our options does us little good if we exercise it by affirming and entrenching the same choices we made before we got flexible.  The most important dialogs we could have on the topic of cloud, SDN, and NFV aren’t how we do what we do now, but how we might, in our flexible future, do what we can’t do now.  As we look at new announcements in our three key technology spaces, we should keep this point in mind, and take vendors to task if they don’t address it.  I’m going to do that, and you’ll get my unbiased views right here.

Looking at NGN Through SDN/NFV-colored Glasses

We all think that networking is changing, and most probably agree that 1) the cloud is the primary driver, and 2) SDN is the term we use to describe the new “connection architecture”.  I think that point three here is that NFV is the cloud-platform architecture that will describe how the hosted network elements are deployed and managed.  The big question is just how all this will impact the end-game: what “networking” really means in the future.

Scott Shenker, who I’ve quoted before in blogs for his thought leadership in SDN, postulates that the network of the future may push all the traditional networking functions into software and out to the edge, leaving “the core” as a mechanism for pure transport.  He also talks about the “middle-box” problem, meaning the issues created in the network by the multiplicity of devices that don’t switch or route but rather provide some other network service—things like load-balancers and firewalls.  He cites statistics that say that middle-box devices are as common as routers in today’s networks.  That’s at least not inconsistent with my survey results, which cite them as an operational issue as large as that of routers.  So you could argue that Shenker’s vision of “SDN futures” is a combination of an NFV-like function-hosting of network capabilities relating to service and focused near the edge, and a highly commoditized bit-pushing process that makes up the core.

The only problem I have with this is that the model is really not a model of networking but of the Internet, which is almost a virtual network in itself.  The real networks we build today, as I’ve noted in prior blogs, are really metro enclaves in which the great majority of service needs are satisfied, linked by “core” networks to connect the small number of services that can’t live exclusively in a metro.  And yes, you can argue that the metro in my model might have its own “core” and “edge”, and that Scott’s model would apply there, but there’s a difference: metro distances mean that the economics of networking don’t match those of the Internet, which spans the world.  In particular, metro presents relatively low unit cost of transport and a low cost gradient over metro distances.  That means that you can put stuff in more places—“close to the edge” is achieved by hosting something anywhere in the metro zone.  NFV, then, could theoretically host virtual functions in a pretty wide range of places and achieve at least a pragmatic level of conformance to Scott’s model.

To figure out what the real economics are for this modified model of the future, though, you have to ask the question “What’s the functional flow among these smart service elements at Scott’s edge?”  Service chaining, which both the SDN and NFV guys talk about, is typically represented by a serial data path relationship across multiple functions, so you end up with something like NAT and DNS and DHCP and firewall and load-balancing strung out like beads.  Clearly it would be less consumptive of network bandwidth if you had all these functions close to the edge, because the connection string between them would span fewer network resources.  However, if we wanted to minimize network resource consumption it would be even better to host them all in one place, in VMs on the same servers.  And if we do that, why not make them all part of a common software load so the communications links between the functions never leave the programming level?
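
A toy sketch of that point, with invented function names: the same service chain can be drawn as a string of network hops, but once the functions are co-resident it collapses into ordinary in-process calls that consume no trunk bandwidth at all.

# Invented names; the chain collapsed into plain function composition when
# the virtual functions are co-hosted on the same server.
def nat(pkt):           return {**pkt, "src": "public-ip"}
def firewall(pkt):      return pkt if pkt.get("port") != 23 else None
def load_balance(pkt):  return {**pkt, "dst": "server-2"} if pkt else None

CHAIN = [nat, firewall, load_balance]

def run_chain(pkt):
    for fn in CHAIN:          # co-located: each "hop" is an in-process call,
        if pkt is None:       # not a trip across the network between functions
            break
        pkt = fn(pkt)
    return pkt

print(run_chain({"src": "10.0.0.5", "dst": "vpn-gw", "port": 443}))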

If virtual functions are transliterated from current network software, they expect to have Ethernet or IP connectivity with each other, and you see the “edge functions” as a software-hosted, VLAN-connected virtual community that probably looks to the user like a default gateway or carrier edge router.  And you also see that if this is all done to connect the user to the VPN that provides enterprise services, for example, it really doesn’t change the connection economics whether you put the services close to the user or close to the VPN on-ramp.  That’s particularly true in the metro where, as I’ve noted, bandwidth-cost-distance gradients are modest.  The whole metro is the edge of Scott’s model.

For content services, it’s obviously true that having caching near the user is the best approach, but the determinant factor in content caching strategies isn’t the connection cost inside the metro, it’s the cost of storing the content.  Because metro networks are more often aggregation networks than connection networks, there is at any point in the aggregation hierarchy a specific number of users downstream.  Those users have a specific profile of what they’re likely to watch and there’s a specific chance of having many users served from a single content element cached there, because some elements are popular in that group.  Go closer to the user to cache and you get closer to giving every user private content storage, which clearly isn’t going to work.  So you have to adapt to the reality of the metro–again.
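
Here’s a toy calculation of why cache position matters, assuming a Zipf-style popularity curve purely for illustration: the more users sit downstream of a cache point, the more requests pile onto the same popular titles, so pushing the cache toward the user shrinks that sharing even as it shortens the delivery path.

import random

# Toy model: a Zipf-ish catalog of titles; each downstream user requests one.
# The "sharing factor" (requests per distinct title) is what makes a cache
# at that aggregation point economic.  All numbers are invented.
CATALOG = 10_000
weights = [1.0 / rank for rank in range(1, CATALOG + 1)]   # Zipf(1) popularity

def sharing_factor(downstream_users):
    requests = random.choices(range(CATALOG), weights=weights, k=downstream_users)
    return len(requests) / len(set(requests))

for users in (20, 200, 2_000, 20_000):
    print(f"{users:>6} users downstream -> "
          f"~{sharing_factor(users):.1f} requests per cached title")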

My point here is that the evolution of the network under the pressure of NFV and SDN isn’t something that we can deduce from structural analysis of the Internet model of networking.  It’s a consequence of the cost/benefit of business and consumer services in the real world of the metro-enclave model of service fulfillment.  We still have a lot of thinking to do on where SDN and NFV will take us, because we still haven’t addressed this basic reality of network evolution.

Netsocket Sings SDN in Perfect Harmony

I’ve noted in past blogs that the world of SDN is evolving, and perhaps the most significant element of this evolution is the emergence of a distinct two-layer model of SDN.  The top layer of SDN, based on “software overlay” virtualization, focuses on agile connection management to adapt to the dynamic notion of the cloud.  The lower layer that represents actual infrastructure is aimed at traffic management and network operations efficiency.

While this “bicameral” model of SDN is helpful, I think, it does have the effect of layering two virtual things on top of each other, which is hardly the formula for creating a hardened, operationally effective process.  In fact, operations, which I’ve called the “dirty secret” of SDN, has been a growing problem.  Which is why I’m very interested in the Netsocket SDN approach.  They designed their whole concept around operations, because cloud operations is where the company got started, and they’re bringing a new notion of harmony to SDN.

The Netsocket Virtual Network, as the product is called, has the right three pillars of design—end-to-end application so it can mimic a real network, a management model that recognizes the inherent difference between virtual and physical networks, and the ability to seamlessly integrate with physical networks, both in an overlay sense and at a boundary point to extend or federate services.  But perhaps the biggest insight Netsocket has brought to the SDN space is their focus on northbound applications, the very area where most SDN players have hidden behind loosely defined APIs.  In fact, the Netsocket model is to give away the virtual network layer and sell the applications.

The infrastructure layer of NVN is made up of their own vFlow switches, which are interoperable with OpenFlow hardware switches (but a lot more agile, not to mention cheaper).  The vFlow Controller layer is analogous to the traditional SDN controller of OpenFlow, but it includes the ability to interact with legacy IP networks at the edge, sniffing the routing protocols to provide the ability to extend network services between vFlow and legacy devices.  This is a commercialization of the private implementation Google did to create an SDN core, but with broader application (and easier integration if you’re not a gazillion-dollar-revenue search giant).

The “northbound API” of NVN is the vSocket API, which is a web service that couples applications to the controller.  This is an open interface in that Netsocket has made the specs freely available.  The applications that run through vSocket provide the network service smarts, including optimization, policy management, and of course connection management.  One of the vSocket-connected applications is their management console; another is the centerpiece of their operationalization story.

vNetOptimizer is a service operationalization application that can correlate between virtual-network services and physical network conditions, including linking physical network events with the service flow conditions they cause.  It’s this linkage that gives Netsocket the direct operational support capability that nearly all overlay virtual network technologies lack.  It will take some time, and more detail on how vNetOptimizer evolves, to understand just how complete it can make the operations link between the layers, and also to understand how it might present “virtual management” options that are more attuned to network services than to devices, but since Netsocket has cloud-service roots, I’m of the view that they have good credentials here already.
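
The general mechanism, sketched here with invented names and no claim that this is how Netsocket actually builds it, is simple enough: keep a map from physical resources to the service flows that ride on them, and translate each physical event into the list of services whose condition it affects.

# Invented sketch of event-to-service correlation; not Netsocket's design.
FLOW_PATHS = {
    "crm-vpn":    ["switch-3", "link-3-7", "switch-7"],
    "video-feed": ["switch-3", "link-3-9", "switch-9"],
}

def services_affected_by(physical_event):
    resource = physical_event["resource"]
    return [flow for flow, path in FLOW_PATHS.items() if resource in path]

event = {"resource": "link-3-7", "kind": "loss-of-signal"}
print(services_affected_by(event))   # -> ['crm-vpn']: the service view of a
                                     #    physical fault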

One of the interesting things the legacy/virtual linkage Netsocket offers can enable is the control and automation of legacy networks.  Their operations scripting/policy control can be extended through plugins to standard devices (Cisco and Juniper are the two mentioned in the press release, but in theory you could build a plugin for pretty much anyone).  This deals with an issue that’s already come up in the Network Functions Virtualization (NFV) discussions—how you use NFV concepts to virtualize some elements of a network while there are still legacy devices in place.  By accelerating operations benefits, the Netsocket strategy can reduce first-cost risks to operators or cloud providers, or even to enterprises.

For end-to-end support, Netsocket is drawing on an NFV-ish concept, which is using a COTS server to host virtual functionality in branch/edge locations.  This is actually congruent with the general goals of the NFV ISG in converting complex edge devices (“branch routers” or “integrated routers”) to applications running on a standard server.  Pretty much any virtual-overlay SDN solution could in theory be run this way, but those providers (as you know) don’t tout that approach; they stay focused on the data center.  Netsocket will likely change that, forcing overlay virtual network providers to explain how they can be used across a complete service network…and of course how that virtual creation can be operationalized.

I noted in an earlier blog that I believed that new-generation software overlay SDN solutions were emerging and would likely put pressure on the “traditional” OpenFlow purists to be more articulate about how their stuff actually provides applications north of those anonymizing northbound APIs.  I think that Netsocket is going to ratchet up that pressure considerably.  Their stuff is cloud-friendly, carrier-grade, and it’s the first SDN story I’ve heard that had the specific cloud-service-level operationalization focus as well.  That could be a very powerful combination.

G.fast: Is It Enough?

One of the challenges that wireline has faced (and it doesn’t need all that many challenges for gosh’s sakes!) is the “capacity gap”.  If anyone thinks broadband Internet is profitable enough, you need to read somebody else’s blog.  You need to deliver video, HD video, to make wireline work, and that’s a problem because traditional cable-TV linear RF won’t work over local loops.  You have to do broadband (IP) video, and that’s been a problem too.  Conventional copper loop has been good for perhaps 40 Mbps at best, and while FTTH offers almost unlimited capacity, it has a very high “pass cost”, the cost of just getting a service to the point of customer connection so the customer could order it.

Alcatel-Lucent has been leading the charge to come up with strategies that would expand the capacity of the copper loop, and their recent G.fast trial promises to drive a gigabit per second over copper.  While the loop length that can be supported is short and loop quality has to be decent, the approach offers hope of supporting fiber-to-the-curb (FTTC) that would use the high-speed “vectored DSL” copper for the home connections.  That could result in a reduction in pass cost, and also mean that IPTV in the sense that Alcatel-Lucent has always promoted it (U-verse-style TV) would be feasible in more situations.  That might give wireline broadband a new lease on life and provide a big boost to operator profits.  Obviously it wouldn’t hurt Alcatel-Lucent one little bit either.  But can Alcatel-Lucent rehabilitate the copper loop with technology alone?  That’s far from certain.

We can see from the US market that it’s a lot better to be a provider of channelized television services than not.  The internal rate of return for cable companies is a LOT higher than for telcos, and the large US telcos (AT&T and Verizon) have both moved into channelized TV.  But you can also see in the current push for consolidation in the cable market that even channelized TV isn’t a magic touchstone.

You can also see, based on broadband adoption patterns, that faster broadband by itself isn’t a consumer mandate.  Users tend to cluster at the low end of service offerings, where the service is cheapest, not at the high end.  In Seattle, a competitor commented that at 50 Mbps you don’t get much real interest, and that means that any operator who wants to provide that kind of speed and get any significant customer base for it will have to price it down considerably.  That reduces margins.

The final issue in all of this is the whole OTT video angle.  My surveys have suggested that the number of households who have a largely fixed-schedule viewing pattern has fallen by over 50% in the last 20 years.  It’s not that people don’t watch TV (most reports say they actually watch just a bit more) but that they don’t watch it at regular times, watch the same shows regularly, as much as they did.  This isn’t being caused by OTT video IMHO, as much as by the fact that there are few shows today that tap into a broad market pool of interest to create loyalty.  OTT has just given voice to a level of frustration with “what’s on” that has been building for decades.  But whatever the cause, the fact is that we are gradually being weaned by our own lifestyles and by the availability of on-demand or recorded TV into a nation of unscheduled viewers.  Which means, ultimately, that less and less value is placed on channelized TV.

This is important to players like Alcatel-Lucent and to network operators, because while a big telco can reasonably expect to command some respect in the channelized TV market because the capex barriers to entry are high, they’re just another competitor when it comes to OTT video.  Take away the video franchises derived from channelized delivery and you gut TV Everywhere, because you don’t have the material under favorable terms.  Apple’s likely TV offering and Google’s probable competitive response would both present a more interest-based virtual channel lineup that could erode loyalty to traditional viewing fairly quickly, except where the networks are committed to their current time-slot models.

That’s the big rub here.  Fresh content, we know for sure, is not going to get produced by OTT players to fill the bulk of their lineups.  They rely on retreads of channelized material from network sources, and those networks are not going to kill their channelized ad flow for OTT ad flows when currently a minute of advertising is worth about 2.5% as much on streamed material versus channelized material.  Will people still watch Apple or Google or Netflix or Amazon?  Sure, when nothing is on the channels or when they can’t view what they want when they want it.  As long as fresh material is what really attracts viewers (and who wants to watch the same stuff every night?) the networks will have the final say in where TV goes, and TV will have the final say in what technologies are meaningful for wireline broadband.