Oracle’s Quarter: We Need New Benefits!

Oracle, certainly no shrinking violet in marketing/sales practices and clearly a software leader, reported a quarter with light earnings and qualified guidance.  The news hasn’t broken through the euphoria of a no-taper-yet stance from the Fed, but it should be a warning that we really do have issues with IT spending.

What makes Oracle’s story particularly important is that the company is primarily a software player, and software is the on-ramp between productivity benefits and IT spending.  Nearly anything that’s going to provide fodder for new projects in IT is going to involve software, according to my surveys.  I noted in the spring that the balance between project spending and sustaining (“budget”) spending in IT, which has historically been biased toward projects, has shifted since 2009 toward the sustaining side.  You can’t advance IT without new projects, so all of this is a signal.

Enterprises believe by an almost 3:1 margin that IT can provide them incremental benefits in enhancing worker productivity, sales, and overall company performance.  When we ask them to quantify how much more they believe could be obtained from optimized IT, they suggest an average 21% improvement.  That’s really critical because if you raised benefits by 21% you could raise spending by that same amount and still meet corporate ROI targets.
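
To make that arithmetic concrete, here’s a minimal sketch (the numbers are illustrative, not from my surveys) of why a proportional lift in benefits supports a proportional lift in spending at a constant ROI target:

```python
# Illustrative only: if benefits and spending scale by the same factor,
# the ROI ratio is unchanged, so the corporate hurdle rate is still met.

def roi(benefit, cost):
    """Simple ROI as net benefit over cost."""
    return (benefit - cost) / cost

baseline_cost = 100.0          # arbitrary units of IT spending
baseline_benefit = 130.0       # benefits that just clear a 30% ROI target
uplift = 0.21                  # the 21% benefit improvement buyers cite

print(roi(baseline_benefit, baseline_cost))                  # 0.30
print(roi(baseline_benefit * (1 + uplift),
          baseline_cost * (1 + uplift)))                     # still 0.30
```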

What’s more interesting (to me, at least) is that my model says that the actual gains that could be obtained from something like point-of-activity empowerment that fuses cloud computing and mobility to give workers better information at their point of work would be even higher.  The net investment needed for point-of-activity empowerment is actually lower than the average cost associated with securing a current and more traditional IT benefit set.  Thus, the model says that PofA empowerment could actually justify an IT spending increase of nearly 26% over the next five years.  So why don’t we have that?  I think Oracle is a poster child for the market’s issues.

First and foremost, the operations managers of major enterprise buyers are simply not educated in the notion of point-of-activity empowerment.  They are stuck in a fragmented model of IT that recognizes “mobility” as a project, “the cloud” as a project, and nothing that combines the two.  Companies that are highly sales-driven (which means most companies, but Oracle in particular) tend to respond to buyer requirements rather than to propose new and broader ones.  Every salesperson knows that if you try to expand the scope of something you involve other people who have to approve it, which increases project delays and slows the sale.

The second problem is that Oracle isn’t a mobile company in the first place.  Oracle has no specific mobility assets to play with; they don’t do carrier LTE or WiFi networking.  While I believe that there is no real need to be a mobile/wireless provider to address a corporate point-of-activity empowerment opportunity, Oracle likely thinks that bringing the cloud and mobility into any story would be tantamount to stamping “Cisco” on the cover sheet.  Overall, companies have tried to focus on what sells for them, and that means that a revolution in benefits may require stuff they never thought they’d need…and don’t currently have.

The third problem is inertia.  We have been stuck in project neutral since 2002, according to real data from government spending and GDP sources.  I’ve noted a number of times that our model of the rate of growth in IT spending versus growth in GDP since WWII shows not a steady ramp but a sine wave.  Each peak in the wave corresponds to a period when project spending as a percent of total IT spending grew significantly, and when new benefits justified as much as 40% faster growth in spending.  We had the last peak in the late ‘90s and we’ve been stuck near a historically low point in the IT/GDP ratio since.  What all this means is that we’ve trained a whole generation of sales and sales management people on the notion that IT is nothing but a cost center and that the only path to success is to help the customer spend less on the stuff you sell.  And, of course, hope that you get a bigger piece of the smaller pie than your competitors.

When we don’t have new projects, IT spending is at the mercy of the so-called “refresh” cycle, which is the regular modernization of equipment that happens because performance and reliability improvements eventually justify tossing the old in favor of the new.  The problem is that since 2005 users have reported keeping IT technology longer, so this refresh is not driving as much spending as before.  Users also tend to cut IT refresh spending early in any pencil-sharpening savings-finding mission.  Why buy a new version of something to do the same job that you’re getting done using something that’s already paid for?

Oracle, as I’ve said, isn’t the only company facing this sort of issue.  All of the network equipment space and all of the IT space that relies on enterprise purchasing has the same problem.  Even in the consumer space, we’re seeing smartphones and tablets plateau because the next big thing just isn’t big enough.  Operators don’t put “early upgrade” plans in place if users are willing to jump to new models without them.  We’re running out of low apples here.

The answer to this is to grow up and realize that we need some more “R” to drive the desired “I”.  But that’s been the answer for a decade now and we’ve successfully ignored it.  I don’t know whether the industry can learn its lesson now without a lot of serious pain, which is bad news.  We may find out.

Is “Distributed NFV” Teaching Us Something?

The world of SDN and NFV is always interesting, like a developing news story filled with little side channels and some major scoops.  One of the latter just came from RAD, whose “distributed NFV” raises what I think are some very interesting points about the evolution of the network.

The traditional picture of NFV is that it would replace existing network appliances/devices with hosted functions.  Most presume that this hosting will be done on general-purpose servers, and in fact that’s what the White Paper that launched NFV actually says.  Most also believe that the hosting will take place in operator data centers, and most of it likely will.  But RAD’s announcement demonstrates that there’s more to it.

The details of RAD’s story aren’t out yet; they’re scheduled to be released in New York at the Ethernet and SDN expo in early October.  What RAD is saying now is that their vision is based on the simple principle that “Ideally, virtualized functions should be located where they will be both most effective and least expensive, which means NFV should have the option to return to the customer site.”  That means hosting at least some virtual functions on-premises or at least at the carrier edge, and that in turn might well mean taking a broader look at NFV.

There’s always been a tension between the operationally optimum place to put a network feature and the place that’s optimal for traffic and equipment.  Operationally, you’d like everything to be stacked up in a common space where techs can float about getting things fixed.  However, some functions in the network—application acceleration is an example—rely on the presumption that you’re improving performance by conserving access capacity.  To do that, you have to be upstream of the access path, meaning on the customer premises.  You can always exempt this kind of technology from NFV, but it makes more sense to ask whether an edge device might have some local hosting capability, and might thus serve as a platform for part of an NFV deployment.  That seems to be the sense behind RAD’s “distributed NFV” concept, and I agree with the notion.

That’s not the end of the NFV extension issue, though.  I’ve said many times that there are a number of highly credible situations where shifting from custom appliances/devices to servers and hosting doesn’t make sense.  As you move downward through the OSI layers you leave features behind and encounter only aggregated multi-user streams.  What’s the value of software agility there, at say the optical layer?  Further, can we expect server technology to spin optical bits around as effectively as a custom device?  I don’t think so.  Even deeper electrical products like switches and routers may demand higher availability because they’re still aggregation devices, not access devices.  You can’t make a data path resilient through redundancy unless you want to stripe data across multiple routes, which I think raises costs more than NFV principles could cut them.  Thus, we’re going to have things that are outside NFV but that stay inside the network.

Deploying a virtual network function inside “the cloud” is a function of cloud management APIs.  You could extend Nova implementations in OpenStack to handle device-hosting of virtual functions, or even simply use Linux cards to create what looks like a server inside a device.  You can also create an alternative “Service Model Handler” (to use our CloudNFV term) to deploy edge functionality using whatever mechanisms are required.  What changes is the question of deployment complexity.  The more we distribute virtual functions, the more likely it is that there are places where one could be put that would be, for the service situation being presented, seriously sub-optimal.  In some cases, in fact, it’s likely that the virtual function being deployed would be specialized for the edge equipment and couldn’t be hosted elsewhere.  That fact has to be communicated to the NFV implementation, or there will be multiple parallel strategies for deploying functional components of service logic.  All we need to ruin a cost-savings and agility-building strategy is multiple incompatible and parallel approaches.
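
To illustrate that point about communicating placement constraints, here’s a minimal sketch in Python.  The names (VirtualFunction, HostingSite, choose_site) are hypothetical and don’t represent CloudNFV or any actual NFV API; the point is that a single deployment process has to be able to see where a function is allowed to live:

```python
# A minimal sketch of the placement issue described above. The names here
# are hypothetical, not any actual NFV or CloudNFV API; what matters is
# that a function's placement constraint is visible to one orchestrator.

from dataclasses import dataclass

@dataclass
class HostingSite:
    name: str
    kind: str            # "data_center", "carrier_edge", or "cpe_device"
    cost_index: float    # relative cost of hosting here

@dataclass
class VirtualFunction:
    name: str
    allowed_kinds: set   # where this function may legally be hosted

def choose_site(vf: VirtualFunction, sites: list) -> HostingSite:
    """Pick the cheapest site that satisfies the function's constraint."""
    candidates = [s for s in sites if s.kind in vf.allowed_kinds]
    if not candidates:
        raise ValueError(f"No legal hosting point for {vf.name}")
    return min(candidates, key=lambda s: s.cost_index)

sites = [HostingSite("metro-dc-1", "data_center", 1.0),
         HostingSite("edge-pop-7", "carrier_edge", 1.4),
         HostingSite("cust-nid-42", "cpe_device", 2.0)]

# A generic firewall can go anywhere; an acceleration function tied to
# the access line is edge/CPE-only, and the orchestrator must know that.
firewall = VirtualFunction("vFirewall", {"data_center", "carrier_edge", "cpe_device"})
accel    = VirtualFunction("vWanAccel", {"carrier_edge", "cpe_device"})

print(choose_site(firewall, sites).name)   # metro-dc-1
print(choose_site(accel, sites).name)      # edge-pop-7
```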

A similar issue arises with non-virtualized functions, meaning native or persisting devices.  If we presumed universal transfer of network functionality from devices to hosting, we could presume cloud principles and software logic would define management practices.  Virtual service components could be managed as application components.  But if we have real devices, do we then force virtual components to be managed as virtual devices, or do we create two tracks of management—one mostly conventional EMS/NMS and the other software-hosted management—and hope to somehow reconcile them?  How do we even define a service that embodies both virtual components and real devices?

The point here is that it’s going to be hard to contain NFV.  Some of its impacts are going to spread out to involve things that the original mandate never included, simply because the original mandate of commercial servers won’t optimally support the full range of network service models.  I don’t think that would surprise the founders of NFV, but it might surprise the vendors who are looking to build NFV support.  In particular, it might surprise those who are looking at the area of “NFV management”.

Why?  Because there can’t be any such thing.  You manage services, you manage resources.  That’s what network management is, and must be.  There can be only one management model or management isn’t effective.  I believe that the management evolution of NFV, an evolution likely to emerge from the liaison between the NFV ISG and the TMF, is the most important element in the success of NFV in the long term.  It’s also the most important element in the continuing relevance of the TMF and OSS/BSS concepts.  This is a test, folks.  Virtualization, fully implemented, would let us define virtual function management any way we like.  It cannot be fully implemented, so we have to harmonize in some way—without losing the cost efficiencies and agility that NFV promised from the first.  Can that happen?  I think it can, but I don’t think that every NFV implementation will support this vision.  That means it’s going to be important to look for that support as implementations emerge.

Sprucing Up Juniper’s Contrail Ecosystem

Juniper continued to develop its Contrail/SDN strategy, with an announcement it was open-sourcing a version of its Contrail SDN controller, which is also available via a traditional commercial-support license.  The move says a lot about the SDN space, and it also may give a hint as to how Juniper might (emphasis is important here) exploit and differentiate SDN.

In a real sense, SDN is an ecosystem building from the bottom up.  We started with OpenFlow, a research-driven protocol to control forwarding table entries, and we’re working up toward some hopefully useful applications.  One critical element in that climb is the SDN controller that translates notions of “routes” and “services” into forwarding commands.  A controller is a necessary condition for OpenFlow but not a sufficient one; you need applications that convey service/route intelligence to the controller.  That’s where all the SDN strategizing comes in.

The APIs that link what I call “service models” to OpenFlow controllers are those infamous “northbound APIs”, infamous because everybody talks about them but there’s relatively little substance provided.  We count nearly 30 controllers on the market or in open source today, and as far as I can tell none of them share northbound APIs.  That’s where this ecosystem stuff comes in, because if you need services for SDN you need service models and translation of those, via northbound APIs, to OpenFlow (or something).  If you make it too hard to get your controller you won’t get anybody to author those northbound apps and you have no service models and no real SDN utility.  So you open-source your controller to encourage development.
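
To show the shape of the problem, here’s a deliberately abstract sketch of what a northbound “service model” translation might look like.  None of this is any real controller’s API, and that is exactly the issue: every controller frames this translation differently.

```python
# Abstract sketch of the "northbound" problem: a service model (here, a
# point-to-point connection request) gets translated into forwarding
# entries that a controller could push southbound. Purely illustrative.

def connection_service_model(endpoint_a, endpoint_b, path):
    """Turn an abstract 'connect A to B over this path' request into
    per-switch match/action entries, one pair per direction."""
    rules = []
    hops = [endpoint_a] + path + [endpoint_b]
    for i, switch in enumerate(path, start=1):
        rules.append({"switch": switch, "match": {"dst": endpoint_b},
                      "action": {"forward_to": hops[i + 1]}})
        rules.append({"switch": switch, "match": {"dst": endpoint_a},
                      "action": {"forward_to": hops[i - 1]}})
    return rules

for rule in connection_service_model("hostA", "hostB", ["sw1", "sw2"]):
    print(rule)
```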

Juniper will obviously get some takers for this approach, and in fact some were announced.  The challenge Juniper faces is that the Cisco-backed OpenDaylight project has much broader support and much wider name recognition.  Given that the average user doesn’t know anything about those northbound APIs, it’s fair to say that “it’s all in a name” applies here more than usual.  For our CloudNFV initiative, we have very strong interest from the open source community in a Service Model Handler for OpenDaylight, but none so far for Contrail.

So what does Juniper do about this?  I think there are two tracks they can take.  First, they can shine a light on the whole question of service models and northbound APIs to make prospective users/buyers understand what’s going on there.  Second, they can start applying their API principles to real network issues in a conspicuous way so that their platform is perceived to have immediate value.  NFV might be a useful place to start.

Along those two possible paths to visibility, Juniper has two barriers to confront.  First, any organized appreciation for service models in SDN will have the effect of making the path to SDN adoption a lot clearer, and you can implement SDN using software switches/routers.  That could cut into the sale of real switches and routers, and so Juniper, like other network equipment vendors, has been a bit reluctant to trot out a strong service model story.  Second, this is a software ecosystem and Juniper is fundamentally a hardware company.  If you look at things like the cloud, you see Juniper building not a developer program but a channel program.  They want people out there pushing Juniper boxes.  That’s a problem when you’re trying to build an ecosystem north of a vendor-independent standard.

Juniper has some interesting assets with Contrail.  I’m of the view that SDN is a two-layer model, with an agile software-based layer controlling connectivity and a deeper hardware layer managing traffic.  This model would normally require some form of cooperative coupling between the layers, and Contrail extends that coupling to the exchange of address information.  I think it could easily be made to provide service-model communication from the application down through transport and across to adjacent branch or data center sites.  In short, it would be a decent framework for a cloud-like ecosystem and a nice way to frame service models in a general way.  It would be, providing Juniper articulated that view—and believed in it.

And of course there’s the time factor.  A move to establish a Contrail SDN ecosystem would likely provoke a response from Cisco and OpenDaylight, and there’s little question who would win if the battle became a long slog.  What Juniper needs is a quick strike, a strong and sensible program to create and publish a vision of future networking that is compelling because it’s sensible.  I don’t think that’s all that difficult to do at this point; you can see what networking-as-a-service has to look like already, and how it has to be created.  The cloud, SDN, and NFV have combined to give us all the pieces we need to bake a nice cake.  We just need a baker.

Microsoft Missed the Key Nokia Piece

I know there’s already been a lot said about the Microsoft deal for Nokia, but I think that the deal may have some angles that still need more exploring!  That includes the “NSN dimension” of the deal, of course.

The biggest implication of the Microsoft/Nokia deal is that it doubles down on smartphones for Microsoft.  With the exception of a few die-hards, I doubt many think that Microsoft has a winning smartphone strategy at this point.  The challenge with the “Windows” model for the phone market is that it depends on a host of willing device partners, which has been hard for Microsoft to come up with in both the phone and tablet spaces.  By getting into the phone business directly, Microsoft controls the whole ecosystem.  Yes, it’s going to discourage anyone else from providing Windows phones, but it won’t prevent it.

The reason this is big isn’t that it makes Microsoft a phone vendor, it’s that it commits Microsoft to being one and thus puts a lot of company prestige and money behind what is very possibly a lost battle.  If you look at the Apple iPhone launch and its effect on Apple’s stock, it’s pretty obvious that the market recognizes that smartphones are becoming commodity items.  Even the market leader is now resorting to launching new models based on pretty colors, after all.  Can Microsoft, whose reportedly limited ability to be innovative and cool was the big factor in losing the phone space in the first place, now succeed when Apple could not?  I doubt it, but if they don’t then do they try to come up with new and different colors?  Heliotrope, burnt sienna, or umber, perhaps?  Bet there’s a run on art supply stores in Redmond already.

So what’s the real miss here?  It’s that Microsoft didn’t buy NSN.  In my view, the smartest thing the company could have done was to pick up NSN and not the phones, frankly.  That would have signaled that Microsoft was aiming to be a player in point-of-activity empowerment, the fusion of mobility, the cloud, and appliances that I’ve been talking about for some time.  Microsoft might have leapfrogged their competition had they picked up NSN, and even rehabilitated the decision to buy the darn phones by giving themselves a credible direction for differentiation.  The future of appliances doesn’t lie in finding new colors for phones or creating wearable tech, it lies in creating an experience ecosystem built not around devices but around the cloud.  NSN’s position with 4G and IMS could have been added to Microsoft’s own cloud to create the centerpiece of this whole point-of-activity thing.  Without NSN’s tools, Microsoft is still at the end of the day the same company it was before…which isn’t enough unless Microsoft leadership can truly transform their own thinking.  And no, Elop isn’t going to do that.

Which leaves us with the question of what happens to NSN.  Once the small piece of the Nokia tech pie, NSN is now arguably the flagship element and certainly the only place where there’s a major hope of growth.  The question is whether NSN, which had been focused on cutting back and cutting costs and cutting staff, can now focus on actually doing something revolutionary in a market that’s doomed to commoditization if somebody doesn’t figure out how to make features valuable again.  Mobile is the bastion of communications standards, and the inertia of the standards and the standards process has made it exceptionally difficult for anyone to drive real change in the space.  Can NSN now break out of that cycle?  Remember that the company wasn’t noted for aggressive marketing in the past.

The whole notion of “liquid” could encapsulate the NSN/Nokia challenge.  The idea is good in that it expresses the need for a fluidity of resources in an age where user demands for empowerment could mean as little as dialing a call or as much as researching a multi-dimensional social/location decision based on a dozen different factors and vectors.  The problem is that it’s easy to get the notion of “fluid” trapped in the network, particularly if that’s where you’ve been focusing it all along.  You need a fluid IT model for the future of mobile devices, not just a fluid networking model.  Can NSN provide that, or even conceptualize it?  It’s really hard to say at this point because NSN is still recovering from a change of ownership following a series of downsizings.  They may have cut the very people they needed.  But now, as the senior piece of what remains of Nokia, they may have some latitude in getting the right thing done.

SDN is an area where the media suggests Nokia may have an opportunity, but NSN hasn’t been a player in network-level technology for a decade, so there’s a question of just what NSN could apply SDN technology to.  Do they climb up the SDN stack, so to speak, and develop some smart “northbound applications” to OpenFlow controllers?  Do they build on an NFV-modeled vision of cloud/network symbiosis?  The problem with either of these approaches is that Nokia has nothing firm to anchor them to in equipment terms.  Most operators expect that there will be either an “open” strategy north of OpenFlow, or very vendor-specific vertically integrated strategies.  You can’t make money on the former and if you don’t have a complete network product suite you can’t make it on the latter either.

Microsoft, ironically, is in a better position to build those northbound apps than NSN or Nokia is.  Because they have the software credentials and a cloud framework in place, they could easily frame an architecture for software control of networking.  That gets me back to the point that maybe they needed to buy NSN too.  Think of the potential of that deal for SDN and NFV and you might agree.

What Winds are Blowing the Cloud?

The Light Reading story on Verizon comments regarding cloud adoption by enterprises is yet another example of the issues that face a revolutionary development in an age of instant knowledge gratification.  It’s a decent story, and it frames the cloud dilemma of our time.  There is no question that the cloud is revolutionary, but there’s a big question on our readiness to project its value effectively.

Almost 20 years ago, CIMI started modeling buyer behavior based on a decision model, and that process obviously required us to survey buyers to determine how they made decisions and what factors influenced them.  There are obviously differences in how revolutionary stuff is justified versus evolutionary stuff, and this survey process made it clear that for revolutionary developments buyers pass through some distinct phases.

The first phase is the buyer literacy development phase.  You can’t make a business case for something if you can’t begin to assess the costs and benefits.  When something like the cloud bursts on the scene, the first thing that happens is that businesses start grappling with the question “what good is it?”  In the last fifteen years, this grappling process has been hampered by the fact that objective information on value propositions is increasingly difficult to obtain.  In the spring survey, buyers told us that there were no truly credible sources for business-level validation of a revolutionary technology.  Vendors over-promote, the media trivializes, and analysts are seen as either agents for the vendors or simple-minded regurgitators of the standard market party line—everything new is good.

When buyer literacy reaches about 33%, meaning when about a third of a market can at least frame the business value proposition, the market can sustain itself naturally.  When literacy is below that, the problem is that buyers can’t readily assess the technology, and can’t validate it through conversations with their peers.  We have yet to reach anything like that threshold with the cloud, and so the market is struggling, and Verizon’s original comment about slower-than-expected adoption is right on.

Obviously, a sub-par market literacy rate isn’t the same as a totally illiterate market.  Within the mainstream enterprise market are pockets (verticals) where literacy exceeds the required threshold, and in those sectors it’s possible to move things along.  We are in fact seeing some of that movement today, and so when Verizon says that the cloud’s adoption rate is picking up that statement is also correct.

The real problem isn’t the inconsistency of Verizon’s view of the cloud, it’s the question of just what “value proposition” buyers are becoming literate with.  Stripped of all the hyperbole, the value proposition of the cloud is that hosted resources can be more efficient in addressing IT needs than dedicated resources, because the latter will likely be under-utilized and present lower economies of scale for both capital/facilities costs and operations costs.  That statement is true for the SMB space because smaller IT users never achieve good IT economies.  It’s also true for some enterprises, where distribution of IT resources has already created pressure for “server consolidation”.  It is not true for mainstream IT functions, because large enterprises achieve nearly as good an economy of scale as a cloud provider would.  Furthermore, cloud pricing strategies and the inherent dependence of cloud applications on network connectivity create issues in price/performance, and most companies will be reluctant to cede mission-critical apps to a cloud player (look at Amazon’s recent outage as a reason for this).  So where Verizon is not correct is in inferring that we are seeing a gradual but uniform acceptance of the cloud.  What we’re seeing is increasingly literate buyers recognizing what makes a good cloud application and what makes a lousy one.

If your app runs on a rack-mount server in a big data center with backup power and a staff of techs keeping it going, you are not likely to gain much by taking this app to the cloud.  If the app requires maintaining and accessing a couple terabytes of data per day, you’re not likely to get anything out of the cloud either.  More resource utilization, less cloud value.  More effective in-house operations support, less cloud value.  Higher data and access volumes, less cloud value.

Of course, all of these truths (and truths they are, regardless of vendor or media hype) are predicated on an important assumption that might not be true itself.  The assumption is that the value of the cloud is in its ability to lower costs.  That value proposition, in my surveys, has consistently led to the conclusion that about 24% of current IT spending could be cloudsourced.  Less than a quarter, and not including the prime business core apps.  But suppose we went deeper.  We could gain nearly four times this level of cloud adoption by fully realizing the potential of cloud computing to support point-of-activity empowerment, the framing of IT power at the service of a worker wherever that worker is and whatever that worker is doing.  But to do this we have to rethink the way we empower workers, we have to rethink the way we build applications, how we organize and store data, how we couple processes to business activities…everything.

Which is why you don’t hear about this from cloud players.  Who wants to sit back and live off the in-laws while you wait for your prospects to restructure their whole IT plan?  Why not just go out there with a simple “get the cloud to save money” story, and hope that somehow enlisting media and analysts to support the cloud as being a kind of divinely mandated future will pay off for you?  Earth to cloud providers—you will never achieve your goals if you don’t help your prospects set and achieve reasonable goals for themselves.

Two Winds Blow Changes

Out with the old, in with the new.  Sounds like New Year’s Day, and so clearly inappropriate to the season in a calendar sense.  It works with networking too, though, and we have a couple of OWTOIWTN items in the news today.

First, financial sources are saying today that Juniper is dropping MobileNext, its architecture for mobile metro infrastructure and connectivity.  One of the themes of Juniper’s future success, at least as touted by analysts and on some earnings calls, was that MobileNext would take off and increase Juniper’s share of the mobile capex pie, the only pie getting bigger in all of the carrier world.  So what now?

Actually, Juniper hasn’t been setting the world on fire with MobileNext, so if the story proves true it’s probably not losing much in a tangible sense.  Operators have tended to look to vendors with a bigger piece of the total mobile infrastructure picture in their configuration diagrams, particularly RAN and IMS.  Juniper didn’t have that and so was always a bit on the outs, and it was late to market to boot.  What Juniper is losing is whatever intangible upside a bigger chunk of mobile spending might bring, but that’s where things get complicated.

Mobile metro infrastructure, notably the Evolved Packet Core (EPC) stuff, is high on operators’ list of things they want network-function-virtualized.  Nearly everyone is offering, or close to offering, a virtual MME, the signaling-plane part of the process, but operators are looking for hosted versions of the critical PGW and SGW elements that handle traffic.  Given that, it really doesn’t make a lot of sense to be expecting big windfalls from an EPC-focused product line.

But then there’s the flip side.  If you are a vendor who wants mobile capex bucks in your own pocket, and if you know you lack the full mobile ecosystem, why not jump out with your own totally virtual solution to mobile metro infrastructure?  Particularly when operators tell me that they believe the big players in EPC will not surrender their lucrative business by exposing their assets in virtual form.  The operators want virtual EPC: not just cloud-hosted discrete pieces of 3GPP diagrams, but a single metro black box that exposes external EPC-ish interfaces and does everything under the covers using new technology—like SDN, which Juniper has.

Poison the well here, Juniper.  If you can’t make EPC money, you should create a virtual EPC, a whole virtual metro core, and make sure nobody else makes that money either.  That would twist competitors’ noses, make your customers happy, and make this whole MobileNext thing look less like a gigantic case of burying your head in the sand as the industry moved on past.

The other news item is the suggestion that Australia could walk away from their ambitious plan to push fiber broadband to over 90% of Australian homes and businesses.  The new proposal, which some say would cut the cost of NBN by more than half, would cover just a fifth of homes with FTTH and the remainder via FTTN, with a DSL hybrid making the final connection.  While this sounds like a typical political bait-and-switch game, the real news is that NBN has so far reached only a small fraction of the target deployment.  That’s what I think raises the big question.

Australia, like some areas of the US, suffers from low “demand density,” the revenue per square mile that can justify broadband deployment by generating ROI.  The solution there was to bypass the incumbent (Telstra) and drive a publicly funded not-for-profit access business.  Run by?  Well, a former vendor CEO, for one.  When this came about, I was highly skeptical, given that every politician loves to talk about giving the public free or nearly free Internet at extravagant levels of performance, at least till the actual deployment has to start and the actual bills have to be paid.  Creating what was just short of outright nationalization of a public company’s assets, to protect a scheme with no credible foundation to prove it could ever succeed, wasn’t a good move.

It probably was an inevitable one, though.  The lesson of Telstra is that you have to let access carriers and ISPs make money.  If you don’t, they don’t invest.  If you have an unfavorable geography or customer base, such that the ROI needed to offer great services isn’t likely available, you can do some taxing or subsidy tricks to make things better, but you’d darn well better think carefully about giving the whole problem to the government to solve.  Incumbent carriers know how to run on low ROI.  If we did any credible measurement of “government ROI,” where do you suppose the numbers would fall?  In the toilet, in most cases.

Telecom regulation has to be an enlightened balance between consumer protection and “the health of the industry”.  We have tipped toward a decidedly consumeristic balance in Europe, and under Genachowski the US has done some of the same.  Australia simply extended this bias to its logical conclusion.  If you can’t get companies to do what you want for the price you want because it’s not going to earn respectable ROI, you let the government take over in some form.  Maybe you force consolidation, maybe you quasi-nationalize.  In all cases, you’ve taken something you had regulated and made work as a public utility, and turned it into a business experiment.  Australia shows that the experiment can have a bad outcome.  The government shouldn’t be running networks, and it can’t micromanage how they’re run if it wants anyone else to run them either.

Shrinking Big Data and the Internet of Things

If you like hype, you love the cloud, and SDN, and now NFV.  You also love big data and the Internet of things.  I’m not saying that any of these things are total frauds, or even that some of them aren’t true revolutions, but the problem is that we’ve so papered over the value propositions with media-driven nonsense and oversimplification that it’s unlikely we’ll ever get to the heart of the matter.  And in some cases, “the heart of the matter” demands some convergence or mutual support among these technologies.

The “Internet of things” is a good example of a semi-truth hiding some real opportunity.  Most people envision the Internet of things to mean a network where cars, traffic sensors, automatic doors, environmental control, toll gates, smart watches, smart glasses, and maybe eventually smart shoes (what better to navigate with?) will reside and be exploited.  How long do you suppose it would take for hackers to destroy a city if we really opened up all the traffic control on the Internet?

The truth is that the Internet of things is really about creating larger local subnetworks where devices that cooperate with each other but are largely insulated from the rest of the world would live.  Think of your Bluetooth devices, linked to your phone or tablet.  The “machines” in M2M might use wireless and even cellular, but it’s very unlikely that they would be independently accessible, and in most cases won’t have a major impact on network traffic or usage.

These “local subnetworks” are the key issue for M2M.  Nearly all homes use the private 192.168.x.x range, which offers about 65 thousand addresses in total but only 256 per Class C subnet.  There are surely cities that would need more, but even staying in IPv4 the next private range (172.16.x.x) offers over a million addresses, and there’s a single Class A private range (10.x.x.x) with over 16 million.  Even though these addresses would be duplicated in adjacent networks, there’s no collision, because the device networks would be designed to contain all of the related devices, or would use a controller connected to a normal IP address range to link the subnets of devices.
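
For the record, here’s the address math for the three RFC 1918 private ranges, computed with Python’s standard ipaddress module rather than quoted from memory:

```python
# Address counts for the three RFC 1918 private ranges referenced above.
import ipaddress

for block in ("192.168.0.0/16", "172.16.0.0/12", "10.0.0.0/8"):
    net = ipaddress.ip_network(block)
    print(f"{block}: {net.num_addresses:,} addresses, private={net.is_private}")

# 192.168.0.0/16: 65,536 addresses, private=True
# 172.16.0.0/12: 1,048,576 addresses, private=True
# 10.0.0.0/8: 16,777,216 addresses, private=True
```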

What this is arguing for is a control architecture, and that’s the real issue with M2M or the Internet of things.  If we have local devices like wearable tech, the logical step would be to have these devices use a local technology like WiFi or Bluetooth to contact a controlling device (a phone or tablet).  The role of this controlling device is clear in a personal-space M2M configuration; it’s linking subordinate devices to the primary device.  In sensor applications of M2M, this controller would provide the central mediation and access control, the stuff that lets secure access to the network happen or that provides for control coordination across a series of sensor subnets.

To me, what this is really calling for is something I’ve already seen as a requirement in carrier and even enterprise networks—“monitoring as a service”.  The fact that you could monitor every sensor in a city from a central point doesn’t mean that you have to, or even want to, do it all at the same time.  In a network, every trunk and port is carrying traffic and potentially generating telemetry.  You could even think of every such trunk/port as the attachment point for a virtual tap to provide packet stream inspection and display.  But you couldn’t make any money on a network that carried all that back to a monitoring center 24×7, or even generated it all the time.  What you’d want to do is to establish a bunch of virtual tap points (inside an SDN switch would be a good place) that could be enabled on command, then carry the flow from an enabled tap to a monitor.  Moreover, if you were looking for something special, you’d want to carry the flow to a local DPI element where it could be filtered to either look for what you want to see or at least filter out the chaff that would otherwise clutter the network with traffic and swamp NOCs with inspection missions.
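
Here’s a rough sketch of what that “monitoring as a service” flow might look like.  The controller and orchestrator clients and their methods are hypothetical, purely to illustrate the enable-on-demand, filter-locally pattern:

```python
# Pseudocode-level sketch: virtual tap points exist everywhere but stay
# dormant until enabled, and enabled flows are steered to a local DPI
# filter rather than hauled back to a NOC 24x7. The sdn_controller and
# nfv_orchestrator clients here are hypothetical, for illustration only.

class MonitoringService:
    def __init__(self, sdn_controller, nfv_orchestrator):
        self.sdn = sdn_controller
        self.nfv = nfv_orchestrator

    def start_inspection(self, tap_point, filter_expr):
        """Enable one dormant tap and filter locally before anything
        crosses the network toward the monitoring center."""
        dpi = self.nfv.deploy("dpi-filter", near=tap_point,
                              parameters={"filter": filter_expr})
        self.sdn.mirror_port(tap_point, to=dpi.ingress)   # copy, don't divert
        return dpi

    def stop_inspection(self, tap_point, dpi):
        self.sdn.remove_mirror(tap_point)
        self.nfv.release(dpi)

# Usage (illustrative): svc.start_inspection("metro-sw3:port12", "find-red-car")
```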

This to me is a good example of what we should be thinking about with virtual networking.  If networking is virtual, then network services should be defined flexibly.  Who says that IP or Ethernet forwarding are the only “services”?  Why not “monitoring as a service” or even “find red car in traffic?”  NFV, in particular, defines network functions as hosted elements.  Cloud components and all manner of traffic or sensor-related functionality are all hosted elements too, so why not envision composite services that offer both traffic handling and device control (as most networks do) but also offer functional services like monitoring-as-a-service or “find-red-car?”

In at least the monitoring applications of M2M, “big data” may be more an artifact of our lack of imagination in implementation than an actual requirement.  OK, some people might be disappointed at that conclusion, but let me remind everyone that the more complicated we make the Internet of things, the more expensive it is and the less likely it is to ever evolve because nobody will pay for it.  We have to rein in our desire to make everything into an enormous tech spending windfall because there is absolutely no appetite for enormous tech spending.

SDN and NFV cooperate through network services, too.  Simplistic NFV design might use SDN only to create component subnetworks where virtual functions live.  But why stop there?  Why not think about all the services that completely flexible packet forwarding could create?  And then why not combine SDN connection control with NFV function control to produce a whole new set of services, services truly “new” and not just hype?  Could we perhaps find a truth more exciting than the exaggerations?  Well, stranger things have happened.

Two Tales, One Cloud

If you’re a cloud fan, which I am in at least the sense that I believe there’s a cloud in everyone’s future, it’s been a mixed week for news.  VMware has announced its Nicira-based NaaS platform, aimed I think at cloud data centers, and the move has gained a lot of traction among the “anybody-but-Cisco” crowd.  Meanwhile, a major Amazon outage has made more people wonder how cloud reliability can be better when cloud outages seem a regular occurrence.

On the VMware side, I think that there’s an important move afoot, but not one as revolutionary as VMware might like to portray.  If you look at software-overlay SDN, the space Nicira launched, it’s evolving in two dimensions.  First, it’s spreading toward a complete network architecture by embracing more end-to-end capability.  Second, it’s becoming more a formal “network-as-a-service” framework, focusing on what it delivers more than how it’s delivered.

The challenge for VMware is that anything that’s designed to be a piggyback on virtualization is going to be inhibited with respect to both these evolutions and for the same reason—users.  Making NaaS work inside a data center or even at the heart of a cloud isn’t all that difficult, but the challenge is that you’re either focusing on NaaS services that are horizontal inter-process connections or you’re doing one half of a network—the half where the application or servers reside—and not the end where users connect.  With limited geographic scope you can’t be a total solution.

I think it’s very possible to construct a model for enterprise network services wherein each application runs in a subnet, each category of worker runs in a branch subnet, and application access control manages what connections are made between the branches and the applications.  VMware could do this, though I admit it would force them to create branch-level software SDN tools that would necessarily rely on software agents in end-system devices.  But would VMware’s “excited” new partners jump on a strategy that threatened network equipment?  “Anybody but Cisco” has more partner appeal than “Anything but routers!”
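
A minimal sketch of that model, with made-up subnet names, is below.  The point is that “who can reach what” becomes an explicit policy table an SDN controller can act on, rather than something buried in box-by-box router configuration:

```python
# A minimal sketch of the subnet-per-application model described above.
# Names and addresses are illustrative only.

application_subnets = {"order-entry": "10.1.1.0/24", "payroll": "10.1.2.0/24"}
branch_subnets      = {"retail-branch": "10.2.1.0/24", "hq-finance": "10.2.2.0/24"}

# Which worker communities are allowed to reach which applications.
access_policy = {
    ("retail-branch", "order-entry"): True,
    ("hq-finance", "payroll"): True,
    ("retail-branch", "payroll"): False,
}

def connections_to_build(policy):
    """Yield the (branch subnet, application subnet) pairs to connect."""
    for (branch, app), allowed in policy.items():
        if allowed:
            yield branch_subnets[branch], application_subnets[app]

for pair in connections_to_build(access_policy):
    print("connect", *pair)
```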

The thing is, all of this protective thinking is inhibiting realization of SDN opportunity by limiting the business case.  SDN isn’t one of those things that you can cut back on and still achieve full benefits.  The less there is of it, the less value it presents, the less revolution it creates.  For VMware and its partners, the big question is whether SDN in their new vision is really “new” enough, big enough, to make any difference.  What it might do is set Cisco up to turn the tables on them, because nobody will like little SDN in the long run.  Go big or go home.

With respect to Amazon, I think we’re seeing the inevitable collision of unrealistic expectations and market experiences.  Let me make a simple point.  Have twenty servers spread around your company with an MTBF of ten thousand hours each, and you can expect each server to fail on average about once every fourteen months, but there’s a pretty good chance that at least one of them will fail in any given month, so something will be down often.  Put the same 20 servers in a cloud data center behind a single cloud network with perhaps 20 devices in it and you have a whole new thing.  We can assume the same server MTBF, but if the network works only when a half-dozen devices all work, the MTBF of the network is a lot lower than that of the servers, and when the network fails all the applications fail, something that would have been outlandishly improbable with a bunch of servers spread around.
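
The arithmetic behind that comparison, under the usual simplifying assumption of a constant failure rate (exponential lifetimes), looks like this:

```python
# Failure arithmetic for the server-versus-cloud comparison, assuming a
# constant failure rate (exponential lifetimes) as a simplification.
import math

MTBF_HOURS = 10_000
HOURS_PER_MONTH = 730

# One server: mean time between failures in months (a bit over a year).
print(MTBF_HOURS / HOURS_PER_MONTH)                # ~13.7 months

# Twenty independent servers: chance that at least one fails in a month.
p_one = 1 - math.exp(-HOURS_PER_MONTH / MTBF_HOURS)
print(1 - (1 - p_one) ** 20)                       # ~0.77

# A cloud whose services depend on six network devices all working:
# the series-system MTBF is the device MTBF divided by six.
print(MTBF_HOURS / 6)                              # ~1,667 hours (~10 weeks)
```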

My point here is that the cloud is not intrinsically more reliable than discrete servers, it simply offers more and better mechanisms to control mean time to restore, or MTTR.  You may fail more often in the cloud but you’ll be able to recover faster.  If one of our 20 servers breaks it might take hours or days to get it back—you send a tech and replace it, then reload all the software.  Amazon was out less than an hour.  Could Amazon have engineered its cloud better, so the outage was shorter?  Perhaps, but it would then be less profitable and we have to get away from our childish notion that everything online or in the cloud is a divine right we get at zero cost.  Companies either make money on services or they don’t offer them.

The fault here isn’t Amazon, it’s ours.  We want to believe in the cloud, and we don’t want to fuss over complicated details like the formulas for calculating MTBF and MTTR for complex systems of devices.  The details of cloud computing are complicated, the suppliers of the services and the vendors involved have little to gain by exposing them, and so if buyers and the media don’t demand that exposure we’ll never get to those complexities, and never really have an idea for how reliable a given cloud service is, or could be.

The other point I think is interesting about the Amazon cloud outage is that we’ve had several of these now, and the big news is the number of social-network sites that are out, not the companies that have lost all their compute capabilities.  It’s not that company outages aren’t important; it’s that the big customers of Amazon’s cloud are likely mostly social network startups.  That’s not a bad thing, but it does mean that those who use Amazon’s cloud growth as a measure of cloud adoption by enterprises may be mixing apples and oranges.

Two tales, and both suggest that we’re not getting the most from the cloud because we’re not trying hard enough.

Microsoft’s Problems: More than One Man Deep

Probably a lot of people, both now and in the future, are going to say that Steve Ballmer’s departure from Microsoft was “the end of an era”.  Certainly in a management continuity sense that was true; Ballmer was Gates’ heir apparent after all, so he was a continuation of the original Microsoft.  What’s not certain is whether the original Microsoft has ended.  If it hasn’t, then Microsoft is in big trouble.

Another thing a lot of people are going to say is that Microsoft wasn’t “innovative”, and that’s not true IMHO.  Microsoft had all the pieces of the mobile-device pie in place, had an early insight into the cloud, had street cred in all of the critical innovations of our time, at least as early as some who are now known for “developing” those spaces.  The thing that hurt Microsoft was that classic need to trade off some of the present to seize the future.  Behind every Microsoft innovation was the shadow of Windows, and Microsoft could never get out of it.

I don’t see much value in reprising how Microsoft got to where they are, except in the narrow sense of weaving the tale of how they might get out of it.  If you’ve spent too long looking out the Windows (so to speak), then you’ve got to turn away and do something else, and that raises two questions.  First, what else?  Second, is it too late for any steps to save Microsoft?

If you polled the masses for where Microsoft needed to go, you’d likely get “the cloud” and “wearable tech” as your answers.  I think that underneath it all, these are two faces of the same coin.  As technology becomes more portable, it’s not surprising that you’d start to take it with you.  If you do, then you’re likely to weave it into your life more intimately than you’d weave something that was nailed to your desk somewhere.  If you do that, it becomes an information conduit to some broader infrastructure—the cloud—and it’s also helpful to have those tech elements integrated in some way with what you normally have and wear while romping about.

The point here is that smart appliances create a new behavioral revolution, tech-wise, and it’s that revolution that Microsoft has to play.  What happens to how we live, entertain ourselves, work, play, think, decide, when we have an awesome agent clipped to our belt or in our pocket or on our wrist or nose?  This is the stuff Microsoft needed to be thinking about, and still needs to plan for in some way.  The PC was a more distributable form of computing than the mini, which was a more distributable form of the mainframe.  We still have mainframes and minis, but as we move to smaller and more portable devices we shift our behavior to accommodate a more intimate interaction with technology.  Microsoft wanted to see the future as the PC versus these things, and had IBM done that when PCs first came along they’d likely have been bought by somebody else by now.  Which could happen to Microsoft, in the extreme.

So what does Microsoft need to do?  Start with the behavior and not with the device.  How exactly will people change their lives based on portable technology?  We know that whatever it is, it will present itself as device agents for cloud knowledge and power.  That means a new software architecture, a new network architecture, new devices.  If I have a phone and a tablet and a watch and glasses that are all empowered, do I have to contort myself into a knot to look at them all in quick sequence?  Imagine walking down the street in a crowd where everybody’s doing that; it boggles the mind what Times Square might look like.  So you likely have to see wearable tech as a dynamic ecosystem.  That’s a space where the Apples and Googles of the world haven’t got all the answers yet, so Microsoft could do something there too.  All of these behavioral impacts create opportunities, but the opportunities don’t endure forever.  It’s too late to have a tablet success, or a phone success, Microsoft.  You need to have a behavior success.

All of this is true for the rest of the IT and network industry as well.  For Intel and other chip makers, we’re moving into a time when the big-money items will be on the polar extremes of the tech space—little chips that run at lower power and can be integrated into devices, and big behemoth knowledge-crunching technology suitable for a cloud data center.  The new model makes a lot of middle ground go away and there’s nothing that can be done to save it.

In networking we know that the most critical development will be a “subduction” of the Internet into an agent-cloud model.  That was already happening with icons and apps and widgets and so forth.  Nobody can effectively use behaviorally empowering technology if they have to spend ten minutes entering search terms.  They have to have shortcuts that link whim to fulfillment.  That’s interesting because it reshapes the most basic notion of the Internet itself—a link between an address and a desired outcome.  You go to a site for something, but if you can’t really “go” in a direct sense, what happens network-wise?  And how is it paid for, given that ads displayed on watches don’t seem to offer much potential?

The world is changing because of tech, which is no surprise (or shouldn’t be) because it’s been changing since the computer burst on the commercial scene in the 1950s.  Microsoft’s success in the future, and the success of every network operator, network vendor, and IT vendor, will depend on its ability to jump ahead of the change not try to replicate the steps that have driven it along.  The past already happened; it won’t happen again.

NFV Savings and Impacts: From Both Sides

I’ve been reading a lot of commentary on network functions virtualization (NFV) and I’m sure that you all have been too.  Most of it comes from sources who are not actively involved with NFV in any way, and since the NFV ISG’s work isn’t yet public it’s a bit hard for me to see how the comments are grounded in real insight.  It’s largely speculation, and that’s always a risk, particularly at the high level when the question of “what could NFV do to infrastructure” is considered.  Sometimes the best way to face reality is to look at the extremely possible versus the extremely unlikely and work inward from both ends till you reach a logical balance.

If you think that we’re going to run fiber to general-purpose servers and do optical cross-connect or even opto-electrical grooming, be prepared to be disappointed.  General-purpose servers aren’t the right platform for this sort of thing for a couple of reasons.  First, these applications are likely highly aggregated, meaning that breaking one breaks services for a bunch of users.  That means very high availability, the kind that’s better engineered into devices than added on through any form of redundancy or fail-over.  Second, the hardware cost of transport devices, amortized across the range of users and services, isn’t that high to begin with.  Bit movement applications aren’t likely to be directly impacted by NFV.

On the other hand, if you are selling any kind of control-plane device for any kind of service and you think that your appliance business is booming, think again.  There is absolutely no reason why these kinds of applications can’t be turned into virtual functions.  All of IMS signaling and control is going to be virtualized.  All of CDN is going to be virtualized.  The savings here, and the agility benefits that could accrue, are profound.

Let’s move inward a bit toward our convergence.  If we look at middle-box functionality, the load-balancing and firewalls and application delivery controllers, we see that these functions are not typically handling the traffic loads needed to stress out server interfaces.  Most middle-box deployment is associated with branch offices in business services and service edge functions for general consumer services.  The question for these, in my view, is how much could our virtual hosting displace?

If we presumed that corporate middle-boxes were the target, I think that the average operator might well prefer to host the functions at their network’s edge and present a dumbed-down simple interface to the premises. Customer-located equipment can be expensive to buy and maintain.  Since most service “touch” is easily applied at the customer attachment and much harder to apply deeper, it’s likely that virtual hosting could add services like security and application delivery control very easily.  Based on this, there would be a strong pressure to replace service-edge devices with hosted functions.

On the contrary side, though, look at a consumer gateway.  We have this box sitting on the customer premises that terminates their broadband and offers them DHCP services, possibly DNS, and almost always NAT and firewall.  Sure we can host these functions, but these boxes cost the operator perhaps forty bucks max and they’ll probably be installed for five to seven years, giving us a rough amortized cost of six dollars and change per year.  To host these functions in a CO could require a lot of space, and the return on the investment would be limited.
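
The back-of-the-envelope numbers, with the hosting side left as an assumption rather than operator data:

```python
# Consumer-gateway economics: a cheap box amortized over its installed
# life versus a per-subscriber share of hosting cost. The hosting target
# in the closing comment is an assumption, not operator data.

BOX_COST = 40.0
for years in (5, 6, 7):
    print(f"{years} years: ${BOX_COST / years:.2f} per year")  # $8.00 / $6.67 / $5.71

# For hosted functions to win here on cost alone, the per-subscriber share
# of CO space, power, servers, and operations would have to come in well
# under that handful of dollars a year, which is a hard target.
```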

This edge stuff is the current “NFV battleground state”.  You can already see box vendors addressing the risks by introducing “hosting” capability into their boxes.  A modern conception of an edge device is one that combines basic packet-pushing with service feature hosting, which essentially makes a piece of the box into an extension of NFV infrastructure (provided that NFV deployment can actually put something there and manage it).  You can also see software vendors looking at how they could create better economies of scale for middle-box functions that support consumer or SMB sites and thus have relatively low CPE costs.

If we move up from our “unlikely side” the next thing we encounter is the large switch/router products.  These products, like transport optics, are likely doing a lot of aggregating and thus have availability requirements to consider, and high data rates create a major risk of swamping general-purpose technology with traffic, even using acceleration features.  If we were to presume that the network of the future was structurally 1:1 with that of the present, having all the layers and devices in either virtual or real form, I think we could declare this second level to be off-limits too.

But can we?  First, where aggregation devices are close to the network edge, in the metro for example, we probably don’t have the mass traffic demand—certainly nothing hopelessly beyond server capability.  Second, even if we presume that a device might be needed for traffic-handling or availability management, it’s possible that NFV could get an assist from SDN.  SDN could take the functions of switching or routing and split them into a control-plane and data-plane behavior set; the former could be NFV-hosted and the latter handled by commodity hardware.  That would make any victory for legacy device technology at this first level of aggregation a Pyrrhic one indeed.  All that needs to happen is that we frame the notion of aggregation services in the metro in a nice service-model-abstraction way, so that we can set up network paths as easily as OpenStack Neutron sets up subnets to host application components.

This is the key point to our middle ground, the key factor in deciding how far from the “good side” of potential NFV applications we can really expect NFV to go.  If you look at the technology in isolation, as middle-box hosting, then the impact is limited.  If you look at NFV as edge hosting then there are a number of very logical steps that could make NFV much more broadly successful.  And the more successful it is, the more of metro networking (which is where edges and aggregation are located, after all) gets translated into NFV applications.  And NFV applications are cloud applications where traffic is pre-aggregated by data center switching.  That means you could consume optics directly, and you’d end up with a metro network consisting of NFV data centers linked with lambdas, fed by a thin low-cost access network.  If you believe in NFV revolution, this is what you have to believe in, and the big step in getting there is a service-model partnership between SDN and NFV.