Can Networking Learn from Microsoft?

Yet another set of Street reports and research reports has stopped just short of describing the PC market as “dead”, though a close analysis of the data suggests that the declines are hitting a plateau.  My view has always been that we’re seeing the “browser bunch”, those who see technology as a pathway to the Internet, migrating to smartphones and tablets.  The people who really need to compute are much less likely to abandon the PC, but that will still create market conditions that are downright scary for those whose wagons have long been hitched to the PC star.  Windows 8, we hear, hasn’t succeeded in driving a PC refresh cycle.  Who thought it would, other than perhaps Microsoft, is a mystery to me.

Microsoft is obviously at risk here because their incumbency is almost totally PC-based.  The company is now reorganizing to avoid the classic problem of product silos, but if you look at the details they’re really only creating a smaller number of bigger silos.  No matter whether you have two or ten product groups, you still have to define and enforce a unifying vision of your market or you’ll never tie them together.  If you can’t link your products, you’re not going to enjoy any symbiosis or pull-through.

Microsoft is pulling all its system software under one group, all its “devices” into a second, its Office and Skype products into a third, and the final group will be its cloud and enterprise stuff.  While going from eight silos to four does reduce the issues of “taking a dependency” that have crippled Microsoft’s cross-silo coordination, it’s a small step.  Clearly this new structure doesn’t make the products independent; it may in fact make hardware/software coordination harder, and it seems likely to hamper cloud evolution.  Why separate the cloud when all your other stuff depends on it?

It’s not that Microsoft couldn’t fix the cloud-integration problem, but that if they knew how to do that they’d never have needed a reorg in the first place.  What I think is lacking in Microsoft’s plan is vision; this isn’t a vision, it’s a personnel exercise.  Like every tech company on the planet, Microsoft’s biggest problem is coping with the “cloud revolution”.  PC problems are due to the cloud, even if we don’t think of “the Internet” as “the cloud”.  Opportunities in content, applications, services, devices, games…you name it…are all being generated by the cloud.  No matter where you put the cloud organizationally, you have to put it on a pedestal by making it your positioning centerpiece.  Azure could be considered the technical leading edge of the cloud platforms, but you’d never know it from Microsoft’s positioning.

Microsoft’s dilemma can teach some lessons for networking vendors too.  If you look at the networking giants of today, you can argue that only Cisco has a notion of charismatic and cloud-centric marketing.  While Cisco has product groups, they don’t seem to have become the political fortresses they are in other companies.  Alcatel-Lucent has yet to get beyond hyphenation in integrating its culture, NSN has avoided product collisions by shedding product lines, Ericsson wants buyers to pay for product integration by buying professional services, and Juniper wants its product groups to make their own way in the world without any collective vision.

The cloud is the future, and maybe the problem vendors are having in accepting that is their narrow vision of the cloud.  Elastic, fully distributable resources in compute, storage, and knowledge are game-changers in how we frame consumer and worker experiences alike.  Something like Google Glass and augmented reality could be a totally revolutionary element in our work and lives, and yet we’re not really even thinking about where it could go.  People write more about the “privacy risks” of Glass than about Glass applications.  Show me how wearable tech risks privacy more than “carryable tech” like a smartphone!

The cloud vision of what-you-need-when-and-where-you-need-it revolutionizes networking, computing, application design, appliance design—pretty much everything.  Microsoft and all my network vendor friends can see this revolution from either side of the barricades, pushing to lead the charge or hiding in a woodpile.  Every company in the tech space needs to embrace some vision of the cloud future, then promote its products, evolve its strategies, refine its positioning, and establish internal lines of communication to support that vision.  If your company hasn’t done that, you are behind, period.

The specific things we need to be thinking about?  Three.  First, how do you build componentized software that can be assembled to create the new unified cloudware that is part application and part service?  Second, how do you deploy and operationalize an experience that doesn’t have a single firm element, either in terms of resources or in terms of mission?  Finally, what does a network that hosts experiences look like?  And don’t be tempted to pick a single element from my list that you’re comfortable with and hunker down on it!  You’re all-in or all-out in this new game.  Even if you don’t make software or don’t make networks, you have to make something that accommodates all these changes and so you have to promote a holistic vision of change.

I’m learning a lot about the cloud future by trying to solve some of these problems in a realistic way, and the thing I’ve learned most so far has been that we still don’t think holistically about “experiences” at the consumer or worker level, and holistic experiences are all people ever see.  The whole of the Internet, after all, is compressed into a social portal or search engine or a Siri voice recognition process.  The whole of technology, for most, is compressed into the Internet.  This is an ecosystem, and if you don’t know where you naturally sit on the food chain, you have little chance of improving your life.

We Announce CloudNFV

Those of you who follow me on LinkedIn may have caught my creation of a LinkedIn Group called “CloudNFV”.  Even though the group is currently invitation-only I’ve received many requests for membership, and a select few found the CloudNFV website and figured out a bit of what was going on.  Craig Matsumoto of SDNCentral was one of that group, and he called me recently for comment.  We haven’t opened up yet, but I did tell Craig some things (see his article at http://www.sdncentral.com/news/nfv-stealth-group-working-on-a-prototype/2013/07/) and I want to introduce my activity to those who follow my blog.

This all started because, by background, I’m a software architect and director of software development.  I’m far from being a modern, functioning programmer, but I’ve still kept my hand in the process and I understand successful network projects because I’ve run them.  I’ve also been a member of multiple network standards activities, and one thing I learned from the combination is that traditional international standards activities don’t do software.

When Network Functions Virtualization kicked off last fall, I emailed all the operators on the white paper list and offered a recommendation—prototype this thing as quickly as possible to minimize the risk that the standard will turn out not to be a useful guide for an implementation.  I also suggested that the body forget the notion of designing NFV to be capable of running on everything from bare metal to the cloud, and adopt the presumption of cloud implementation.  That would simplify the process of specifying how functions could be deployed and operationalized, and take advantage of the enormous body of work cloud computing has produced.

In the early spring of this year, I had an email and phone exchange with the CEO of a highly innovative software company, and I saw their stuff as a logical element in implementing an NFV prototype.  The CEO agreed and attended the April NFV meeting.  At that meeting I made a number of suggestions about implementation, and my comments generated quite a bit of discussion in the parking lot during the breaks.  Of the dozens who stopped by and chatted, a small number stayed.  These were the very kinds of companies whose contributions would complete an NFV implementation, companies that represented the best technology available in each of their areas.  They agreed to work together as a group, and CloudNFV was born.

There are three foundation beliefs that have shaped our activity since that parking-lot meeting.  One was that we needed an NFV prototype to create a software framework that would pull NFV out of a standard and into networks.  Another was that the cloud was the right way to do NFV because it’s the right way to do computing.  The last one was that as cloud computing converged more on SaaS, cloud providers would converge on the deployment needs and operations needs that were shaping NFV requirements.  There is, there can be, only one cloud, and NFV can define how both applications and services deploy on it.

So we’re doing that “supercloud” deployment and operationalization, following the model of NFV work but framing it in terms of the three guiding beliefs.  We’ve combined the elements needed for optimized hosting of network functions on computers with tools to gather network intelligence, a highly innovative virtual-networking model that recognizes the multidimensional nature of SDN, and a powerful unified knowledge framework that manages everything about functions, applications, resources, services, and policies as a single glorious whole.  And none of this is from giant network vendors; none of the group falls into that category.  We’re not a startup trying to be flipped; we’re a community of like-minded people.  Some of us are small in size and some are IT giants, but all are thought leaders, and all are working toward a strategy that will be effective, open, and extensible.
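To make that “single whole” a bit more concrete, here’s a toy sketch of what a unified model looks like when the same objects that drive deployment also answer management questions.  To be clear, this is my own illustration, not CloudNFV’s actual data model or interfaces; the class names and the placement report are invented for the example.

```python
# Illustrative only: a toy "unified model" in which the same objects that drive
# deployment also answer management queries. My own sketch, not CloudNFV's
# actual data model or interfaces.
from dataclasses import dataclass, field
from typing import Dict, List, Optional

@dataclass
class Resource:                           # a host, a data path, a storage pool...
    name: str
    properties: Dict[str, float]          # e.g. {"cpu_free": 8, "latency_ms": 2.0}

@dataclass
class Policy:
    name: str
    rule: str                             # e.g. "latency_ms <= 5"

@dataclass
class VirtualFunction:
    name: str
    hosted_on: Optional[Resource] = None  # bound at deployment time

@dataclass
class Service:
    name: str
    functions: List[VirtualFunction] = field(default_factory=list)
    policies: List[Policy] = field(default_factory=list)

    def placement_report(self):
        """A management view derived from the same objects used to deploy."""
        return {f.name: (f.hosted_on.name if f.hosted_on else "unplaced")
                for f in self.functions}

# Deployment and management read one structure instead of two disjoint silos.
edge = Resource("metro-edge-1", {"cpu_free": 8, "latency_ms": 2.0})
svc = Service("business-vpn-with-firewall",
              functions=[VirtualFunction("vFirewall", hosted_on=edge),
                         VirtualFunction("vLoadBalancer")],
              policies=[Policy("low-latency", "latency_ms <= 5")])
print(svc.placement_report())  # {'vFirewall': 'metro-edge-1', 'vLoadBalancer': 'unplaced'}
```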

And we will be open, too.  When we’re ready to announce public demonstrations we’ll also announce when we’ll publish our data models and interfaces, and we’ll describe how we’ll be able to expand the scope of our prototype to include other software and hardware.  We’re not going to offer to integrate the world at our own expense, but we will cooperate with others willing to expend some of their own resources to link to our process.  We’re hosting in one of our member’s labs now, but our approach can support federation naturally, so we’re happy to install in operator labs anywhere in the world and run as a global ecosystem.  Nobody who really wants to participate openly and fairly is going to be shut out.  We’re not sanctioned by the NFV ISG, but we do insist that those who want to join also join that activity.  It’s not expensive, and you can’t be committed to NFV without being committed to the international process that’s defining it.  We are, and we’re committed to making it work for the network of today and for the cloud of tomorrow.

If you want to participate in CloudNFV when it opens up, I’ll ask you to send a Join request to the CloudNFV LinkedIn group if possible, and if not to send me an email directly.  You can review all the information we’ve made public (including a link to this blog and Craig’s article) on the CloudNFV website (http://www.cloudnfv.com/).  I’ll update the site as we add information, and this site will be the repository for our public documents as they’re released.

When?  We will be scheduling selected carrier demonstrations in early September and we’ll likely be doing public demonstrations by mid-October.  Somewhere in between we’ll begin to publish more information, and we expect to be able to show an NFV deployment of IMS with basic TMF Frameworx integration and full operational support, including performance/availability scale-up and scale-down, by the end of this year.  You’ll see then why I say that CloudNFV is as much “cloud” as it is “NFV”.  Well before then we’ll be starting to work with others, and I hope you’ll let me know if you’re interested in being one of them.

Are We Selling Virtualization Short?

We clearly live in a virtual world, in terms of the evolution of technology at least, but I really wonder sometimes whether everyone got the memo on this particular point.  It seems like there’s a tendency to look at the future of virtualization as one focused on creating virtual images of the same tired old real things we started with.  If that’s the case, we’re in for a sadly disappointing future in all of our tech revolutions—cloud, SDN, and NFV.

I’ve commented before that cloud computing today tends to be VM hosting as a service, which confines its benefits to fairly simple applications of server consolidation.  What the cloud should really be is an abstraction of compute services, a virtual host with all sorts of elastic properties.  We should define the properties of our virtual computer in terms of the normal compute features, of course, but we should also define “platform services” that let applications take advantage of the inherent distributability and elasticity of the cloud.  By creating cloud-specific features for our virtual computer, we support applications that can run only in the cloud, and that’s the only way we’ll take full advantage of cloud capabilities.
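Here’s a hedged example of what I mean by a “platform service”: a toy elastic work queue whose capacity is a property of the abstraction rather than of any server the application can see.  The names and the scaling rule are my own invention, not any real cloud provider’s API; the point is simply that an application written against this abstraction depends on elasticity the platform provides, which is to say it only makes sense in the cloud.

```python
# A toy "platform service": an elastic work queue whose capacity is a property
# of the abstraction, not of any server the application can see. Names and
# behavior are my own illustration, not a real cloud provider's API.
import queue
import threading
import time

class ElasticQueue:
    def __init__(self, work_fn, max_workers=8):
        self.q = queue.Queue()
        self.work_fn = work_fn
        self.max_workers = max_workers
        self.workers = []
        self._add_worker()                     # start with a single instance

    def _add_worker(self):
        t = threading.Thread(target=self._run, daemon=True)
        t.start()
        self.workers.append(t)

    def _run(self):
        while True:
            item = self.q.get()
            self.work_fn(item)
            self.q.task_done()

    def submit(self, item):
        # Elasticity lives inside the platform service: scale out on backlog.
        if self.q.qsize() > 2 * len(self.workers) and len(self.workers) < self.max_workers:
            self._add_worker()
        self.q.put(item)

# The application just submits work; the "virtual computer" grows underneath it.
eq = ElasticQueue(lambda n: time.sleep(0.01))
for n in range(100):
    eq.submit(n)
eq.q.join()
print(f"processed 100 items with {len(eq.workers)} worker instance(s)")
```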

Why do we get fixated on the low apples of the cloud?  Partly because vendors are so fixated on current-quarter numbers that they don’t even think about the end of the year.  Partly because socializing a complete shift in IT technology is beyond our marketing/positioning/media processes.  Partly because any populist revolution (which all of tech is these days) has to dumb down to be populist.  But we can all drive without being automotive engineers, so it’s possible to make complex technology changes consumable, right?

Well, you’d never guess that was the case in the SDN space.  Software-defined networking today offers us two totally different visions of networking.  One vision, promoted by the software-overlay players, says that “connectivity” is an artifact of the network edge, an overlay on featureless transport facilities.  Another, the OpenFlow hardware vision, says that networking is a totally malleable abstraction created by manipulating the forwarding rules of individual devices.  And yet what do we do with both these wonderfully flexible abstractions?  Recreate the current networks!
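To put the two abstractions side by side, here’s a minimal sketch, purely illustrative and not any vendor’s actual API, of the same “connect A to B” intent expressed first as an overlay edge mapping over featureless transport and then as per-device forwarding rules of the OpenFlow sort.

```python
# Two ways to express the same "connect host A to host B" intent.
# Purely illustrative; neither is a real controller's API.

# 1. Overlay vision: connectivity is an edge artifact; the core just transports.
overlay_map = {
    ("tenant-1", "10.0.0.1"): "vtep-A",   # virtual endpoint -> encapsulation point
    ("tenant-1", "10.0.0.2"): "vtep-B",
}
def overlay_path(tenant, src, dst):
    return (overlay_map[(tenant, src)], "featureless transport", overlay_map[(tenant, dst)])

# 2. OpenFlow-style vision: connectivity is built by programming each device's
#    forwarding rules along the path.
flow_rules = [
    {"switch": "sw1", "match": {"dst": "10.0.0.2"}, "action": "output:port3"},
    {"switch": "sw2", "match": {"dst": "10.0.0.2"}, "action": "output:port1"},
]

print(overlay_path("tenant-1", "10.0.0.1", "10.0.0.2"))
for rule in flow_rules:
    print(f"{rule['switch']}: if dst={rule['match']['dst']} then {rule['action']}")
```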

Then there’s NFV.  The original concept of NFV was to take network functions out of purpose-built appliances and host them on commercial off-the-shelf servers—if you’re a modernist you’d say “on the cloud”.  How much real value could we obtain by simply moving a firewall, or even an IMS component, from a custom device to a COTS server?  Whatever operations issues existed for the original system of real devices would be compounded when we mix and match virtual functions to create each “virtual device”.  We have to manage at two levels instead of one, and the management state of any network element that’s virtualized from multiple components has to be derived from the state of the components and the state of the connectivity that binds them into a cooperating unit.
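Here’s a minimal sketch of what “derived” management means, assuming a simple worst-of rollup rule that is my own illustration and not anything from the NFV ISG specs: the virtual device’s state is computed from its component functions and the connections that bind them, rather than read from any single MIB.

```python
# Derive a "virtual device" state from its components and their connections.
# The worst-of rollup rule here is my own illustration, not an NFV ISG spec.
STATE_RANK = {"up": 0, "degraded": 1, "down": 2}

def derived_state(component_states, link_states):
    """Return the composite state of a virtual device built from hosted
    functions (components) plus the connectivity that binds them together."""
    everything = list(component_states.values()) + list(link_states.values())
    return max(everything, key=lambda s: STATE_RANK[s])

virtual_firewall = derived_state(
    component_states={"vFW-instance-1": "up", "vFW-instance-2": "up"},
    link_states={"vFW-1<->vFW-2": "degraded"},   # the binding itself can fail
)
print(virtual_firewall)   # "degraded": no single element reports this state
```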

The real answer for both SDN and NFV is to go back to the foundations of virtualization, which is the concept of abstraction.  Take a look at Metaswitch’s Project Clearwater architecture (the link is:  http://www.projectclearwater.org/technical/clearwater-architecture/) and you’ll see that we have a cloud-ready IMS at one level.  Look deeper and you’ll see that Clearwater abstracts IMS into a black box and then defines a component set that roughly maps to 3GPP elements but that cooperates to create the expected external interfaces.  That, my friends, is how abstraction should work, and it’s how the cloud, SDN, and NFV should all work.

Flexibility that expands our options does us little good if we exercise it by affirming and entrenching the same choices we made before we got flexible.  The most important dialogs we could have on the topic of cloud, SDN, and NFV aren’t how we do what we do now, but how we might, in our flexible future, do what we can’t do now.  As we look at new announcements in our three key technology spaces, we should keep this point in mind, and take vendors to task if they don’t address it.  I’m going to do that, and you’ll get my unbiased views right here.

Looking at NGN Through SDN/NFV-colored Glasses

We all think that networking is changing, and most probably agree that 1) the cloud is the primary driver, and 2) SDN is the term we use to describe the new “connection architecture”.  I think the third point is that NFV is the cloud-platform architecture that will describe how the hosted network elements are deployed and managed.  The big question is just how this is going to impact the end-game: what “networking” will really mean in the future.

Scott Shenker, whom I’ve quoted before in blogs for his thought leadership in SDN, postulates that the network of the future may push all the traditional networking functions into software and out to the edge, leaving “the core” as a mechanism for pure transport.  He also talks about the “middle-box” problem, meaning the issues created in the network by the multiplicity of devices that don’t switch or route but rather provide some other network service—things like load-balancers and firewalls.  He cites statistics that say middle-box devices are as common as routers in today’s networks.  That’s at least not inconsistent with my survey results, which cite them as an operational issue as large as that of routers.  So you could argue that Shenker’s vision of “SDN futures” is a combination of NFV-like function-hosting of service-related network capabilities focused near the edge, and a highly commoditized bit-pushing process that makes up the core.

The only problem I have with this is that the model is really not a model of networking but of the Internet, which is almost a virtual network in itself.  The real networks we build today, as I’ve noted in prior blogs, are really metro enclaves in which the great majority of service needs are satisfied, linked by “core” networks to carry the small number of services that can’t live exclusively in a metro.  And yes, you can argue that the metro in my model might have its own “core” and “edge” and that Scott’s model would apply there, but there’s a difference: metro distances mean that the economics of networking don’t match those of the Internet, which spans the world.  In particular, metro presents a relatively low unit cost of transport and a low cost gradient over metro distances.  That means you can put stuff in more places—“close to the edge” is achieved by hosting something anywhere in the metro zone.  NFV, then, could theoretically host virtual functions in a pretty wide range of places and achieve at least a pragmatic level of conformance to Scott’s model.

To figure out what the real economics are for this modified model of the future, though, you have to ask the question “What’s the functional flow among these smart service elements at Scott’s edge?”  Service chaining, which both SDN and NFV guys talk about, is typically represented by a serial data path relationship across multiple functions, so you end up with something like NAT and DNS and DHCP and firewall and load-balancing strung out like beads.  Clearly it would be less consumptive of network bandwidth if you had all these functions close to the edge, because the connection string between them would span fewer network resources.  However, if we wanted to minimize network resource consumption it would be even better to host them all in one place, in VMs on the same servers.  And if we do that, why not make them all part of a common software load, so the communications links between the functions never leave the software level at all?
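A back-of-the-envelope sketch makes the point; the hop counts and traffic figure below are invented for illustration.  The network resources a chain consumes scale with the traffic carried times the hops between successive functions, and that product collapses to zero when every function shares a host.

```python
# Compare network resource consumption for a service chain hosted at scattered
# points versus co-located on one server. Hop counts and traffic are invented
# purely for illustration.
chain = ["NAT", "DNS", "DHCP", "firewall", "load-balancer"]

def chain_cost(hops_between_functions, traffic_mbps=100):
    """Bandwidth-distance product: traffic carried times hops traversed
    between each successive pair of functions in the chain."""
    assert len(hops_between_functions) == len(chain) - 1
    return sum(hops * traffic_mbps for hops in hops_between_functions)

scattered = chain_cost([3, 2, 4, 2])   # functions spread across the metro
colocated = chain_cost([0, 0, 0, 0])   # same server: links never leave the host

print(f"scattered hosting: {scattered} Mbps-hops")   # 1100
print(f"co-located hosting: {colocated} Mbps-hops")  # 0
```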

If virtual functions are transliterated from current network software, they’ll expect Ethernet or IP connectivity with each other, and the “edge functions” become a software-hosted, VLAN-connected virtual community that probably looks to the user like a default gateway or carrier edge router.  You also see that if this is all done to connect the user to the VPN that provides enterprise services, for example, it really doesn’t change the connection economics whether you put the services close to the user or close to the VPN on-ramp.  That’s particularly true in the metro, where, as I’ve noted, bandwidth-cost-distance gradients are modest.  The whole metro is the edge of Scott’s model.

For content services, it’s obviously true that having caching near the user is the best approach, but the determinant factor in content caching strategies isn’t the connection cost inside the metro, it’s the cost of storing the content.  Because metro networks are more often aggregation networks than connection networks, there is at any point in the aggregation hierarchy a specific number of users downstream.  Those users have a specific profile of what they’re likely to watch, and there’s a specific chance of having many users served from a single content element cached there, because some elements are popular in that group.  Go closer to the user to cache and you get closer to giving every user private content storage, which clearly isn’t going to work.  So you have to adapt to the reality of the metro, again.
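An illustrative calculation shows why pushing caches toward the user runs out of steam; the Zipf popularity curve and the user counts here are assumptions for the sake of the example, not survey data.  The fewer users sitting downstream of a cache, the fewer requests each stored copy can serve.

```python
# How cache efficiency changes with the number of downstream users at an
# aggregation point. The Zipf popularity model and the numbers are assumptions
# for illustration, not survey data.
def zipf_probs(n_titles=10000, s=0.8):
    weights = [1.0 / (rank ** s) for rank in range(1, n_titles + 1)]
    total = sum(weights)
    return [w / total for w in weights]

def hits_per_cached_title(users, cache_titles, probs):
    """Expected requests served per stored copy, assuming each user requests
    one title and the cache holds the most popular titles."""
    hit_fraction = sum(probs[:cache_titles])
    return users * hit_fraction / cache_titles

probs = zipf_probs()
for users in (100000, 10000, 1000, 100):   # moving deeper into the aggregation tree
    print(users, round(hits_per_cached_title(users, 500, probs), 2))
# The closer to the user you cache, the closer each stored copy gets to serving
# a single viewer, which is effectively private content storage.
```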

My point here is that the evolution of the network under the pressure of NFV and SDN isn’t something that we can deduce from structural analysis of the Internet model of networking.  It’s a consequence of the cost/benefit of business and consumer services in the real world of the metro-enclave model of service fulfillment.  We still have a lot of thinking to do on where SDN and NFV will take us, because we still haven’t addressed this basic reality of network evolution.

Netsocket Sings SDN in Perfect Harmony

I’ve noted in past blogs that the world of SDN is evolving, and perhaps the most significant element of this evolution is the emergence of a distinct two-layer model of SDN.  The top layer of SDN, based on “software overlay” virtualization, focuses on agile connection management to adapt to the dynamic nature of the cloud.  The lower layer that represents actual infrastructure is aimed at traffic management and network operations efficiency.

While this “bicameral” model of SDN is helpful, I think, it does have the effect of layering two virtual things on top of each other, which is hardly the formula for creating a hardened, operationally effective process.  In fact, operations, which I’ve called the “dirty secret” of SDN, has been a growing problem.  Which is why I’m very interested in the Netsocket SDN approach.  They designed their whole concept around operations, because cloud operations is where the company got started, and they’re bringing a new notion of harmony to SDN.

The Netsocket Virtual Network, as the product is called, has the right three pillars of design—end-to-end application to mimic a real network, a management model that recognizes the inherent difference between virtual and physical networks, and the ability to seamlessly integrate with physical networks, both in an overlay sense and at a boundary point to extend or federate services.  But perhaps the biggest insight Netsocket has brought to the SDN space is their focus on northbound applications, the very area where most SDN players have hidden behind loosely defined APIs.  In fact, the Netsocket model is to give away the virtual network layer and sell the applications.

The infrastructure layer of NVN is made up of their own vFlow switches, which are interoperable with OpenFlow hardware switches (but a lot more agile, not to mention cheaper).  The vFlow Controller layer is analogous to the traditional SDN controller of OpenFlow, but it includes the ability to interact with legacy IP networks at the edge, sniffing the routing protocols to provide the ability to extend network services between vFlow and legacy devices.  This is a commercialization of the private implementation Google did to create an SDN core, but with broader application (and easier integration if you’re not a gazillion-dollar-revenue search giant).

The “northbound API” of NVN is the vSocket API, which is a web service that couples applications to the controller.  This is an open interface in that Netsocket has made the specs freely available.  The applications that run through vSocket provide the network service smarts, including optimization, policy management, and of course connection management.  One of the vSocket-connected applications is their management console; another is the centerpiece of their operationalization.

vNetOptimizer is a service operationalization application that can correlate between virtual-network services and physical network conditions, including the linking of physical network events with the service flow conditions that they cause.  It’s this linkage that gives Netsocket the direct operational support capability that nearly all overlay virtual network technologies lack.  It will take some time, and more detail on how vNetOptimizer evolves, to understand just how complete it can make the operations link between layers, and also to understand how it might present “virtual management” options that are more attuned to network services than to devices, but since Netsocket has cloud-service roots, I’m of the view that they have good credentials here already.

One of the interesting things the legacy/virtual linkage Netsocket offers is control and automation of legacy networks.  Their operations scripting/policy control can be extended through plugins to standard devices (Cisco and Juniper are the two mentioned in the press release, but in theory you could build a plugin for pretty much anyone).  This deals with an issue that’s already come up in the Network Functions Virtualization (NFV) discussions—how you use NFV concepts to virtualize some elements of a network while there are still legacy devices in place.  By accelerating operations benefits, the Netsocket strategy can reduce first-cost risks to operators or cloud providers, or even to enterprises.

For end-to-end support, Netsocket is drawing on an NFV-ish concept, which is using a COTS server to host virtual functionality in branch/edge locations.  This is actually congruent with the general goals of the NFV ISG in converting complex edge devices (“branch routers” or “integrated routers”) to applications running on a standard server.  Pretty much any virtual-overlay SDN solution could in theory be run this way, but the providers of those solutions (as you know) don’t tout that approach; they stay focused on the data center.  Netsocket will likely change that, forcing overlay virtual network providers to explain how they can be used across a complete service network…and of course how that virtual creation can be operationalized.

I noted in an earlier blog that I believed that new-generation software overlay SDN solutions were emerging and would likely put pressure on the “traditional” OpenFlow purists to be more articulate about how their stuff actually provides applications north of those anonymizing northbound APIs.  I think that Netsocket is going to ratchet up that pressure considerably.  Their stuff is cloud-friendly, carrier-grade, and it’s the first SDN story I’ve heard that had the specific cloud-service-level operationalization focus as well.  That could be a very powerful combination.

G.fast: Is It Enough?

One of the challenges that wireline has faced (and it doesn’t need all that many challenges for gosh’s sakes!) is the “capacity gap”.  If you think broadband Internet is profitable enough, you need to read somebody else’s blog.  You need to deliver video, HD video, to make wireline work, and that’s a problem because traditional cable-TV linear RF won’t work over local loops.  You have to do broadband (IP) video, and that’s been a problem too.  Conventional copper loop has been good for perhaps 40 Mbps at best, and while FTTH offers almost unlimited capacity, it has a very high “pass cost”, the cost of just getting a service to the point of customer connection so the customer can order it.

Alcatel-Lucent has been leading the charge to come up with strategies that would expand the capacity of copper loop, and their recent G.fast trial promises to drive a gigabit per second over copper.  While the loop length that can be supported is short and loop quality has to be decent, the approach offers hope in supporting fiber-to-the-curb (FTTC) that would use the high-speed “vectored DSL” copper for the home connections.  That could result in a reduction in pass cost, and also mean that IPTV in the sense that Alcatel has always promoted it (U-verse-style TV) would be feasible in more situations.  That might give wireline broadband a new lease on life and provide a big boost to operator profits.  Obviously it wouldn’t hurt Alcatel-Lucent one little bit either.  But can Alcatel-Lucent rehabilitate copper loop with technology alone?  That’s far from certain.

We can see from the US market that it’s a lot better to be a provider of channelized television services than not.  The internal rate of return for cable companies is a LOT higher than for telcos, and the large US telcos (AT&T and Verizon) have both moved into channelized TV.  But you can also see in the current push for consolidation in the cable market that even channelized TV isn’t a magic touchstone.

You can also see, based on broadband adoption patterns, that faster broadband by itself isn’t a consumer mandate.  Users tend to cluster at the low end of service offerings, where the service is cheapest, not at the high end.  In Seattle, a competitor commented that at 50 Mbps you don’t get much real interest, and that means that any operator who wants to provide that kind of speed and get any significant customer base for it will have to price it down considerably.  That reduces margins.

The final issue in all of this is the whole OTT video angle.  My surveys have suggested that the number of households who have a largely fixed-schedule viewing pattern has fallen by over 50% in the last 20 years.  It’s not that people don’t watch TV (most reports say they actually watch just a bit more), but that they don’t watch it at regular times or watch the same shows as regularly as they did.  This isn’t being caused by OTT video IMHO, as much as by the fact that there are few shows today that tap into a broad market pool of interest to create loyalty.  OTT has just given voice to a level of frustration with “what’s on” that has been building for decades.  But whatever the cause, the fact is that we are gradually being turned, by our own lifestyles and by the availability of on-demand and recorded TV, into a nation of unscheduled viewers.  Which means, ultimately, that less and less value is placed on channelized TV.

This is important to players like Alcatel-Lucent and to network operators, because while a big telco can reasonably expect to command some respect in the channelized TV market, where the capex barriers to entry are high, it’s just another competitor when it comes to OTT video.  Take away video franchises derived from channelized delivery and you gut TV Everywhere, because you don’t have the material under favorable terms.  Apple’s likely TV offering and Google’s likely competitive response are both likely to present a more interest-based virtual channel lineup that would eradicate loyalty to traditional viewing fairly quickly, except where the networks are committed to their current time-slot models.

That’s the big rub here.  Fresh content, we know for sure, is not going to get produced by OTT players to fill the bulk of their lineups.  They rely on retreads of channelized material from network sources, and those networks are not going to kill their channelized ad flow for OTT ad flows when currently a minute of advertising is worth about 2.5% as much on streamed material versus channelized material.  Will people still watch Apple or Google or Netflix or Amazon?  Sure, when nothing is on the channels or when they can’t view what they want when they want it.  As long as fresh material is what really attracts viewers (and who wants to watch the same stuff every night?) the networks will have the final say in where TV goes, and TV will have the final say in what technologies are meaningful for wireline broadband.

Microsoft: From Behind the Duck to The Wrong Duck?

The departure of Microsoft’s top Xbox guy, Don Mattrick, for Zynga has raised again the profile of Microsoft’s still-secret-in-detail restructuring.  Ballmer announced that the company would be working to be more focused on services and devices and less on software.  Clearly there are going to be a lot of changes, but rather than speculating on the changes or on whether Mattrick departed because he thought he might be asked to leave later, let’s look at the fundamentals.  Does Microsoft have a shot at their goal of becoming a device/services company, and why would they want to do that anyway?

On the surface, the last question is easily answered.  PC software is what Microsoft does most, and PC software is sinking as PCs sink.  Windows and Office sales are both tied strongly to new PC sales, which are being lambasted by tablet sales.  So it makes sense to jump out of software and into services and devices, right?  Not that simple.

Devices aren’t replacements for software, they’re carriers for software.  Apple’s iPhone and iPad are at least as much software as hardware, and Android’s success as a software platform is linked to Samsung’s success as a hardware provider.  The more something looks like a gadget, the less users are willing to think about hardware and software as separate elements.  Apple knew that and created the first platform ecosystem.  But it wasn’t a slam dunk.  Had Microsoft responded to Apple aggressively, before both Apple and Google had a chance to establish commanding leads, the Microsoft model of “Wintel” would likely have been just as effective in phones and tablets as it was for PCs, and the current restructuring buzz would have never happened.

Now, with Apple and Google established, and in particular with Google already owning the “software-partners-with-hardware” model of an ecosystem, Microsoft has little choice but to create their own total platform.  They can’t compete with the incumbents through partnerships because they’ll never get the big hardware players to bet convincingly on Microsoft at this point.  No choice, in short, more than good choice.

How about “services”?  It seems to me that every company that’s been slipping in its core business, whether hardware or software, discovers the “services opportunity”.  The problem is that if you’re creating a platform ecosystem of consumer devices, it’s hard to see how services are going to come out of the process.  Does Microsoft think they’re going to be selling professional services to Windows RT users?  If Microsoft wants to be a professional services player they need to be that in the cloud computing evolution in general, and that’s going to be a challenge given their insipid positioning of Azure and the cloud overall.  The cloud as it is today is a cost-based substitution, which means that it loses money for Microsoft by reducing sales of premises software licenses while increasing the sale of hosting.  So even if there’s professional services opportunity, Microsoft has to first pay for the cloud losses with the professional services gains, then get some net benefit.

The solution to all of this, if there is one, is to create a “device cloud”, the kind of symbiosis between devices and cloud-hosted services elements that I’ve been touting for operators to create.  Microsoft cannot win the game that it’s already lost, the head-to-head war with Apple or Google.  It could win the “supercloud-device” war because both Apple and Google would face the problem of kissing off their current incumbency to address future gains.  That’s the problem that got Microsoft behind in phones and tablets to begin with, so why not force the problem down the other guy’s throat?

I think Microsoft could create this device/supercloud model easily at a technical level, probably more easily than it could create a completely new structure aimed at becoming both a device success and a services success.  As somebody in finance said, culture will always trump strategy.  Microsoft’s culture is too far from what Ballmer wants, particularly when there are better choices that are closer to Microsoft’s traditional model.  Could it be harder for Microsoft to see something that’s at its own feet than something on the distant mountain-top?  Or could it be that Microsoft has become such a bureaucracy that it can’t lead anymore, so it has to follow its competitors and accept perpetual marginalization?

Huawei is also a part of the equation.  Because the US Congress doesn’t trust them, they’re barred from being a big player in the US carrier space and also from enterprise equipment sales to government agencies.  That means that they’ll have to play in the US through consumer devices—like phones and tablets.  They picked up their own new executive, from Nokia, in a possible move to buttress this play.  If Huawei is really going to get into the device space, then for Microsoft to make a device-centric move now would be potentially suicidal.  The cloud is Huawei’s only weak spot, Mr. Ballmer.  Go for that, or face what might be a fatal level of risk.

Nokia Wins NSN Custody, but Is It a Win?

Well, we seem to have finally gotten beyond the rumor phase with NSN.  Nokia has offered to buy Siemens out of the joint venture, one of several possible outcomes I’d heard rumors about over the last several months.  The big question is whether this will matter to NSN or to Nokia, and that depends on just what Nokia decides to do with the asset.

NSN is one of three telecom equipment giants who have suffered significantly as network operators’ revenue per bit has fallen and differentiation of equipment based on features has become more difficult.  Huawei has acted as a price catalyst for the market, particularly in the lower network layers, and NSN’s reaction to the commoditization was to shed the lines of business that were most impacted.  NSN dropped WiMax, broadband access, BSS, and IPTV and significantly cut back on its partnerships to sell transport/switching gear, so what was left was pretty much LTE.  Given that wireless broadband in general and LTE in particular has been a capex bright spot, that may have seemed a sensible approach.

Two problems emerged, though.  First, NSN jumped with both feet into a market that was itself just beginning to fall into the commoditization death spiral.  Wireless ARPU is peaking for about half the world now, and will peak and decline for pretty much all of it by the end of 2014.  By jumping into a single boat (one sinking, though more slowly than some others) NSN gave itself less latitude to move on to another area to compensate for LTE’s inevitable margin decline.  Second, NSN disconnected itself from most of the SDN and NFV opportunity because they don’t provide the equipment any longer.

Wireless transport is still transport, even if one considers that the current operator focus is more on the wireless space.  You can’t really differentiate much; it’s nice to talk about flexibility in the RAN and self-defining network elements, but the problem is that wireless is really about software first and metro second, and if all you have on the table is RAN and IMS you’re not in a strong position to make a case for yourself.

IMS opens perhaps the key question for wireless and for NSN, which is how to accommodate revolutionary technology in network capabilities that were designed for the old model.  The Evolved Packet Core manages tunnels to cell sites to allow users to roam between sites without losing their connections.  The same thing could be accomplished with the aid of SDN, and perhaps in a more effective and simpler way.  The new IMS Multimedia Resource Function should probably be viewed as a combination of SDN and an NFV implementation of content delivery networks, all integrated with NFV-compatible session management.  My point is that while we would likely agree that these changes are needed, who’s out there talking about making them?  Logically, NSN needed to lead this initiative because NSN was doubling down on wireless bets, and they haven’t done it.
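As a thought experiment, and emphatically my own simplification rather than a 3GPP or ONF specification, mobility anchoring in an SDN world could be nothing more than the controller rewriting a forwarding rule when a user moves: the user’s address stays constant and the “tunnel” becomes a flow entry pointing at the new cell site.

```python
# A toy illustration of mobility handled as SDN forwarding updates instead of
# EPC tunnel management. My own simplification, not a 3GPP or ONF spec.
class MetroController:
    def __init__(self):
        self.flows = {}                      # user IP -> current cell-site attachment

    def attach(self, user_ip, cell_site):
        self.flows[user_ip] = cell_site      # install the downstream forwarding rule

    def handover(self, user_ip, new_cell_site):
        # The "tunnel move" is just a rule rewrite; the user's IP never changes.
        old = self.flows.get(user_ip)
        self.flows[user_ip] = new_cell_site
        print(f"{user_ip}: reroute {old} -> {new_cell_site}")

ctl = MetroController()
ctl.attach("10.20.0.7", "cell-site-12")
ctl.handover("10.20.0.7", "cell-site-13")    # user roams; the session survives
```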

But perhaps the bigger question here is how Nokia makes a whole that’s greater than the sum of its parts.  Is it possible to leverage symbiosis between a handset business and an IMS/RAN wireless broadband business?  Sure, but why couldn’t Nokia and NSN have leveraged that when NSN was a partnership?  I don’t think Siemens would have stubbornly refused to deal with a mobile ecosystem opportunity.  The challenge was, IMHO, that NSN, like most Eurogiant telecom players, is just not particularly agile, and the current network market is full of real and imagined revolutions that demand an agile response.

Nokia is almost a poster-child for inertia.  Like Ericsson, they’re incredibly conservative in positioning and strategizing, and all of the dizzying movement in NFV and SDN has probably frightened a lot of managers weaned on Class 5 switching or something.  It’s pushed them further into a consolidative model of their own evolution, and you can’t consolidate without commoditizing unless you can innovate like crazy in the space you’ve carved out for yourself.  Nokia has just not been that innovative before, and frankly I don’t think it’s likely to be that innovative now.

Can Nokia do something compelling?  Yes.  All they have to do is to look at metro, SDN, and NFV and think outside the box to devise a new metro/mobile architecture that will jump over all the old albatross concepts of the past and into the future.  Do we need mobility management, and thus EPC, in mobile networks?  We need the former, but EPC is an implementation and not a baseline requirement.  We could have “virtual EPC” where one big virtual PGW/SGW covered the whole metro.  We could have an MRF that was one big virtual add-on to a big virtual CDN.  We could have voice supported by a virtual SBC, and of course IMS itself can be virtualized (as Metaswitch has shown).  Nokia can jump into this, make it happen, and create a win.  But NSN could have done that already, and since they didn’t you have to assume that they don’t have the thought leadership internally to cut the mustard.  If that’s true, then where would Nokia get it?

Three Network Sign-Posts

It’s been an interesting week for the markets, in all of the dimensions that drive us forward.  There are glimmers of technology change in SDN, there are signs of vendor shifts, and there are macro indicators that might tell us a bit about demand.  So, given that it’s Friday and a good day to take a broader look at things, I’ll talk about all three.

NTT’s strategy linking SDN technology to cloud service delivery for hybrid cloud and even cloudshifting applications is potentially interesting.  If you look at the infrastructure dimension of SDN, you see it focused primarily on narrow data center switching applications or (in Google’s case) a fairly specialized IP core.  The big opportunity for infrastructure SDN is metro, and we’ve been light on examples of things that you could do with it in the metro space.  NTT proposes to have data center connections made via an SDN-controlled gateway, which could not only create a framework for metro cloud services to enterprises, it could be an element in NFV implementation.

The data center is the foundation of NFV, but the biggest (and perhaps hardest) step in NFV infrastructure evolution is creating a web of resource connections throughout the metro area.  Applications like IMS/EPC and CDN, both NFV use cases, demand interconnection of metro resources.  Most of the advanced mobile/behavioral opportunities that operators have identified—both in the enterprise space and for consumers—demand a highly elastic resource pool with good connectivity that lives underneath the orchestration and operationalization layers.  Thus NTT may be driving some activity in a very critical area.

On the vendor side, we can see that the spin coming out of Cisco Live around Insieme does seem to be focusing on something more “operationalizing” in terms of benefits, which I think is both good for the market and smart for Cisco.  The goal of SDN must be the support of the evolution of that grand fuzzy artifact we call “the cloud”.  If SDN doesn’t do that, it’s just tweaking current processes and won’t matter much in the long run.  The challenge in supporting the cloud isn’t just a matter of speed-matching agile application-layer evolution with seven-year-depreciation infrastructure, it includes the question of how you can make all that dynamism work without creating an explosion in operating costs.  That problem is so critical to carriers that it swamps the question of how you optimize deployment or whether you even use SDN technology.

We need a new operations model for the cloud, and in fact bodies like the TMF have recognized that for at least four or five years.  The problem is that we’ve been unable to create one using the traditional standards processes, largely because those processes get so tied up in evolution they forget the notion of destination.  If Insieme can be used to take a bold leap into the operational future and jump over all the dinosaurs of past processes, then it gives us the destination of cloud operations.  Getting to that goal can then be addressed, and I know for sure that there are models of operations evolution that can manage both the goal and the route.  I’ve presented them to carriers, in fact.  What has to happen now is that they get productized.

The first of our broad-market points is the Accenture quarterly report, which was a revenue miss due to shortfalls not so much in outsourcing as in consulting activities.  Professional services have been a bright spot for a lot of companies, not the least being IBM.  In networking, Alcatel-Lucent, Ericsson, and NSN are increasingly dependent on it.  So the question at this point is “Are tech buyers being asked to spend more on professional services to offset competitively pressured equipment price reductions?”  It appears to me based on the market data and my spring survey results (now almost all in from the responders) that buyers think so.  Enterprises are starting to adopt that suspicion-of-my-vendor mindset that evolved over the last five years in the carrier space and has substantially poisoned the credibility of the vendors.

The challenge this poses for vendors is multi-faceted.  On the surface, having your buyers distrust you has to be a bad thing, and that’s especially true when they’re trying to make a major technology transition.  In fact, the poisoning of the credibility well is so serious a problem for vendors that the only reason it’s not hurting them is that nearly all of them have it.  Cisco seems to be the vendor who has escaped the credibility slump the best, and not surprisingly it’s the network vendor doing the best in the marketplace.  But there’s a deeper point too, and that’s the fact that buyers really do need professional services to help them along, and if they don’t get them because the services are overpriced or the sources aren’t trusted, then major new applications of networking and IT can’t be deployed.  That would disconnect all our wonderful new technology options—cloud, SDN, and NFV—from the improve-the-benefit-case side of the process, focusing them entirely on cost reduction.  That, as I’ve said for years now, will take networking and tech to a place few of us in the industry will want to see.

The good news is that I think we’re starting to see, in the “SDN wars”, the beginning of a synthesis of a value proposition.  Yes, it would have been better had someone simply stood up and announced the right answer, meaning positioned their assets to address the real market value of SDN.  However, we’re finding the right spot by filling in a million blank spaces, some substantially useless, and then looking at the impact of each new solution.  Eventually, we’ll see the elephant behind the forest of boulders, snakes, and trees.

“Insieme” Spells “Operationalization” for Cisco

Cisco’s comments on its new “Application-Centric Infrastructure” vision are yet another proof point for my argument that Cisco has successfully played the weaknesses of SDN players against them.  In military tactics, seizing the high ground is a maxim.  So it is in marketing, and Cisco has done that with finesse.

You can harp on about the value of SDN in terms of efficiency or better hardware/software innovation or whatever, but to actually defend SDN’s value you have to get back to the place where software starts defining networking.  That means you have to face those northbound APIs and the applications that control connectivity and manage traffic.  In the SDN space, startups have tended to either jump into the overlay virtual networking space or the SDN controller space.  The former disconnects the application from the network through an intermediary virtual abstraction and the latter is too low on the totem pole to understand services and support software control.  Cisco, I suspect, knew all along that the OpenFlow/SDN community would take too small a bite to be a threat, and they went for the APIs instead.

What Cisco now seems to be doing is preparing a spot for its own SDN-spin-in story, Insieme Networks.  That’s actually going to be the tricky part for Cisco, because up to now their SDN approach was the “quacks-like-an-SDN” model; if something exhibits expected SDN properties at the API level, then it’s an SDN.  That works as long as you don’t look inside the SDN black box Cisco has built, but Insieme forces Cisco to open that box a bit, to give their picture some internal structure.  That will then let competitors take shots not at the philosophy of the Cisco approach but at the technology.  So Cisco has to defend.

If you look at Cisco commentary on Application-Centric Infrastructure, you see that there’s a lot of integration and operationalization inside it.  It’s tempting to see Insieme as something that would address that, particularly since Cisco is aiming it at the data center.  That would make Insieme very Contrail-like or Nuage-like, perhaps, a means of linking virtual and real networks.  But virtualization and abstraction work against integration and operationalization, and Cisco will have to address how those two are resolved or face the risk of a competitor who has tangibly better answers.

Network operators have already told me that they’re more worried about how cloud applications (including NFV) are operationalized than about how they’re optimized and deployed.  The challenges of operations in a virtual, integrated world are formidable because there are no real devices that present MIBs, and there are elements of application/service performance that don’t belong to any component that you’ve deployed, but to the connections between components.  In the world of the supercloud, the future, you have to be able to derive operations rather than apply them, because in the virtual world there’s nothing real to manage.  As we evolve more to virtual elements, we’ll have to face the transition from real device management.

To what?  One thing that’s clear is that you’ll need to rely more on automated processes and less on human practices in the cloud of the future.  You’ll also have to take a more service-centric or user-centric view of resources and behavior rather than a device-centric view, because you don’t have real devices any more but you’ll always have real users (or you starve, and your problems become irrelevant in a market sense).  As an industry, though, we have never really come to terms with a service-user-centric vision of network or IT management; everything ends up coming down to operations centers drilling down through layers of devices.

Whether Insieme contributes anything directly to this process isn’t the relevant point; the challenge is that when you solve problems in a new way you have to operationalize for that new solution.  Cisco is dragged into the operations side of the cloud whether they like it or not, and of course so are their competitors.  Every SDN strategy should be judged in part based on its operations context, but we’ve been unable to compare SDN operationalization because the competitive focus (and thus the product design and articulation) has been on network features.  And Cisco, by abstracting the network, has been able to stay out of the fray completely.  Now, with Insieme coming along, it has to dive in, and that makes the full picture from APIs to technology fair game for competitive byplay.  Including operations.  Especially operations, in fact, because if you can’t carefully operationalize our new and agile virtual world you’ve only invented a new way of getting into deep cost trouble down the line.  The complexity of a virtual system is inherently higher because of its additional flexibility and the multiplication in the number of elements.

So watch Cisco’s Insieme stuff for operational clues, and start looking at SDN stories for their operationalization story.  What you can’t build and deploy and sustain, you can’t bill for and profit from.