Google Enters the Cloud IoT Space–Tentatively

Google has now followed Amazon and Microsoft (Azure) in deploying cloud tools for IoT.  In many ways, the Google announcement is a disappointment to me, because it doubles down on the fundamental mistake of thinking “IoT” is just about getting “things” on the “Internet.”  But if you look at the trend in what I call “foundation services” from the various cloud providers, we might be sneaking up on a useful solution.

IoT is at the intersection of two waves.  One, the obvious one, is the hype wave around the notion that the future belongs to hosts of sensors and controllers placed directly on the Internet and driving a whole new OTT industry to come up with exploitable value.  The other, the more important one, is the trend to add web services to IaaS cloud computing to build what’s in effect a composable PaaS that can let developers build cloud-specific applications.  These are what I’ve called “foundation services”.

Cloud providers like Amazon, Microsoft, and (now) Google have bought into both waves.  You can get a couple dozen foundation services from each of the big IaaS players, and these include the same kind of pedestrian device-management solutions for IoT.  Network operators like Verizon who have IoT developer programs have focused on that same point.  The reason I’m so scornful about this approach is that you don’t need to manage vast hordes of Internet-connected public sensors unless you can convince somebody to deploy them.  That would demand a pretty significant revenue stream, which is difficult to harmonize with the view that all these sensors are free for anyone to exploit.

The interesting thing is that for the cloud providers, a device-centric IoT story could be combined with other foundation services to build a really sensible cloud IoT model.  The providers don’t talk about this, but the capability is expanding literally every year, and at some point it could reach a critical mass that could drive an effective IoT story.

If you look at IoT applications, they fall into two broad categories—process control and analytic.  Process control IoT is intended to use sensor data to guide real-time activity, and analytic IoT drives applications that don’t require real-time data.  You can see a simple vehicular example of the difference in self-drive cars (real-time) versus best-route-finding (analytic) as applications of IoT.

What’s nice about this example is that the same sensors (traffic sensors) might be used to support both types of applications.  In a simplistic view of IoT, you might imagine the two applications each hitting sensors for data, but remember that there could be millions of vehicles and thus millions of hits per second.  It would never work.  What you need to assume is that sensor data would be “incoming” at some short interval and fuel both applications in an aggregate way, and each app would then trigger all the user processes that needed the information.

This kind of model is supported by cloud providers, not in the form of what they’d call IoT, but through services like Amazon’s Kinesis, which can be used to pass sensor information through complex event processing and analysis, or to spawn other streams that represent individual applications or needs.  You can then combine this with something like Amazon’s Data Pipeline to create complex work/storage/process flows.  The same sort of thing is available in Azure.
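To make the aggregate-and-fan-out idea concrete, here is a minimal Python sketch of the pattern (this is purely illustrative, not actual Kinesis code; the class and field names are invented).  Sensor readings arrive as periodic batches into one stream, and each batch is delivered once to every subscribed application, so the applications never hit the sensors directly:

```python
from collections import defaultdict

class SensorStream:
    """Toy stand-in for a managed event stream (e.g., Kinesis); not a real API."""
    def __init__(self):
        self.subscribers = []  # consumer callbacks

    def subscribe(self, callback):
        self.subscribers.append(callback)

    def publish_batch(self, readings):
        # Sensors report on a short interval; each batch fans out once,
        # so millions of user processes never query the sensors themselves.
        for cb in self.subscribers:
            cb(readings)

# Two consumer types from the vehicular example: real-time and analytic.
realtime_alerts = []
def realtime_consumer(readings):
    # Process-control side: react immediately to congested segments.
    for r in readings:
        if r["speed_mph"] < 10:
            realtime_alerts.append(r["segment"])

history = defaultdict(list)
def analytic_consumer(readings):
    # Analytic side: accumulate data for later route optimization.
    for r in readings:
        history[r["segment"]].append(r["speed_mph"])

stream = SensorStream()
stream.subscribe(realtime_consumer)
stream.subscribe(analytic_consumer)

stream.publish_batch([
    {"segment": "I-95N-mile42", "speed_mph": 7},
    {"segment": "I-95N-mile43", "speed_mph": 55},
])
```

The point of the sketch is that one incoming batch fuels both application classes in aggregate, which is exactly what direct per-app sensor access can’t do at scale.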

You could call the foundation services here “first-level” foundation services in that they are basic functions, not specific to an application or even application model.  You can also easily imagine that Microsoft and Amazon could take these first-level services and build them into a second-level set.  For example, they could define a set of collector processes that would be linked to registering devices, and then link the flows of these collectors with both real-time correlation and analytic storage and big data.  There would be API “hooks” here to allow users to introduce the processing they want to invoke in each of the areas.

These second-level services could also be made into third-level services.  Traffic analysis for route optimization is an example; a GPS app could go to such a service to get traffic conditions and travel times for a very large area, and self-drive controllers could get local real-time information for what could be visualized as a “heads-up” display/analysis of nearby things and how they’re moving.

The emergence of an OTT IoT business actually depends more on these services than on sensor management.  As I’ve already noted, you can’t have individual developers all building applications that would go out and hit public sensors; there’s no sensor technology short of a supercomputer that could handle the processing, and you’d need a gigabit trunk to the sensor to carry the traffic.  The reality is that we have to digest information from sensors in different ways to make the application practical and control sensor costs.

Why are we not seeing something logical here, then?  Why would Google be doing something that falls short of the mark, utility-wise?  The likely answer lies in how technology markets evolve.  We hear about something new, and we want to read or hear more.  That creates a media market that is far ahead of any realization—how far depends on the cost of adoption and the level to which an early credible business case can be defined.  During the media-market period, what’s important is whether an announcement gets press attention, and that relies most often on the announcement tracking the most popular trends, whether they’re likely to be realized or not.  We’ve seen this with NFV, with the cloud, and with most everything else.

Eventually, though, reality is what’s real.  You can only hype something till it’s clear that nothing useful will ever happen, or until the course the technology will really take becomes clear and shouts down the hype.  We’re already getting to that point in NFV and the cloud, and we’ll get there with IoT as well.

Speaking of IoT, and of real-time processing and workflows, all of this stuff is going to end up shaping NFV as well.  IMHO, there is no arguing with the point that NFV success has to come in the form of NFV as an application of carrier cloud.  Carrier cloud is a subset of cloud.  Right now we have an NFV standardization process that’s not really facing that particular truth.  IoT and real-time control are also applications of “carrier cloud” in the sense that they probably demand distributed cloud processing and mass sensor deployment that operators would likely have to play a big role in.  If a real-time application set drives distributed cloud feature evolution, then that could build a framework for software deployment and lifecycle management that would be more useful than NFV-specific stuff would be.

I also believe that operator architectures like AT&T’s or Verizon’s are moving toward a carrier-cloud deployment more than a specific deployment of NFV.  If these architectures succeed quickly, then they’ll outpace the evolution of the formal NFV specifications (which in any event are much narrower) and will then drive the market.  Operators have an opportunity, with carrier cloud, to gain edge-cloud or “fog computing” supremacy, since it’s unlikely Amazon, Google, or Microsoft would deploy as far as a central office.  If, of course, the operators take some action.

They might.  If Amazon and Microsoft and Google are really starting to assemble the pieces of a realistic IoT cloud framework, it’s the biggest news in all the cloud market—and in network transformation as well.  Operators who don’t want to be disintermediated yet again will have to think seriously about how to respond, and they already admit that OTTs are faster to respond to market opportunities than they are.  It would be ironic if the operators were beaten by the OTTs in deploying modernized examples of the very technologies that are designed to make operators more responsive to markets!  IoT could be their last chance to get on top (literally!) of a new technology wave.

A Read on Operator Priorities for the Fall Planning Cycle

The network operator technology planning cycle that typically happens annually between mid-September and mid-November is just getting underway, and I want to share some of the “talking points” operators have told me about.  None of these positions are firm this early, but in past years a decent number of the early positions were solidified into policy.  That, by the way, is the reason most vendors try to get announcements they want considered out before mid-October.

The most significant point I got from operators was that nobody thinks they have the budget for a major infrastructure transformation, by any technology means, for 2017.  Capital budgets are therefore going to continue to be under pressure as operators push on vendors for discounts.  There will be no onrush of SDN or NFV displacing IP or Ethernet.  In fact, nobody thought those new technologies would hit even 2% of their capex.

Nearly all of the operators have lab trials or limited field trials of next-gen network technology underway, but they say that the trials are still “early stage” and “limited scope”, meaning that it’s going to take longer to get to a potential deployment and the early deployments are very likely to have a narrow customer/service target.  I asked one whether they thought they could reach a thousand SDN or NFV customers in 2017, and he said “Dream on!”

Obviously the reason for this lack of infrastructure-revolution enthusiasm is the lack of a convincing business case, but another interesting talking point emerges here.  Operators say that they are not looking for a business case to justify SDN or NFV.  What they are trying to do is to tie SDN or NFV to a business case, which is a very different thing.  What they’re trying to tie things to is the general goal of transformation, which has been a goal for eight years now.  It means, simply put, wringing out cost and augmenting revenue by technology means.

Holistic visions of change don’t match the narrow product visions.  SDN and NFV are technology standards that offer a different way of doing connectivity or offering network features.  In both cases, the responsible bodies set very strict limits on scope of activity, and this isn’t uncommon in technology standards.  There are a lot of standards bodies and you don’t want to overlap, and the bigger your mission the longer it takes and the harder it is to sustain momentum.  But a narrow scope of activity means that neither SDN nor NFV intersects much of the total process of end-to-end, top-to-bottom service-building.  That limits how many benefits you can claim, which means a business case would likely have to extend beyond the scope of the technology.

A lot of vendors see this, and that’s why we still hear a capex-driven approach to SDN or NFV.  In the case of NFV, we’re even seeing the dominant early model (agile CPE hosting of VNFs, deployed almost one-off per customer) as a rather insignificant subset of the originally conceptualized mission (which was cloud-hosted VNFs on a large scale).  The biggest virtue of this approach is its ability to control first cost, to deploy a subset of (or even just the concept of) hosted virtual functions on a per-customer basis.

In some ways this is not only unsurprising, it’s old news.  Both AT&T’s and Verizon’s architectures may have been characterized as being NFV architectures or SDN/NFV, but they are in fact service transformation architectures that had a role for both legacy and new network technologies.  The people I talk to do not say that the goal of the transformation is the adoption of SDN or NFV, but that its goal is to reduce costs overall and improve network/service agility.  There are a lot of pieces that have to be addressed to realize those goals.

Another interesting talking point related to the first two is the priority areas operators have identified.  Not surprisingly, mobile infrastructure is the top priority in terms of transformation exploration.  Operators, as I’ve already noted in earlier blogs, are committed to 5G modernization, though they don’t think that will start in 2017.  What they look for in the near term is a way of ensuring that any improvements in metro connectivity can be leveraged for 5G.  They’ve said they’d love to see a good 5G architecture based on a framework that could also be applied in the near term to improve 4G.  In fact, IMS/EPC signal and data plane modernization (not necessary for 5G because it’s not supposed to use either in current forms) that would be based on a common virtual-network-and-resource model with 5G is a high priority.  Interestingly, operators say that vendors aren’t pitching that earnestly, perhaps because they don’t want their deals held up till 5G planning can be driven forward confidently.

The number one priority on the business service side was to create effective customer service and care portals, which is a task that’s hardly hand-in-glove with transforming network technology.  In fact, even operators say that it’s more closely tied to OSS/BSS processes than to network infrastructure.  No facility-based carrier believed they would rebuild network technology to create these portals, in fact.  They do say that they believe that both SDN and NFV could, over time, increase the effectiveness of portal-based customer service and care by improving their infrastructure’s response to service changes.

Does “more closely tied to OSS/BSS processes” mean that operators think they’ll get transformation support from OSS/BSS vendors more than network vendors?  That’s not clear.  One priority talking point is to assess the merit of an operations-centric approach to service automation versus an orchestration-driven approach outside OSS/BSS.  One reason why there’s an open question here is that operators really identify only two OSS/BSS vendors (Amdocs and Netcracker, with the latter leading) as having a strong integrated-orchestration service automation approach.  However, even though I believe that there are seven or eight vendors in total who could make a full service-automation business case, the great majority of operators didn’t recognize any of the options other than the OSS/BSS vendors.

This doesn’t mean that it’s over for the “SDN” or “NFV” vendors.  Well over two-thirds of operators I’ve checked with tell me that they do have “plans” to try to expand their current SDN or NFV lab trials vertically toward their service automation transformation goal.  I was struck by the fact that there didn’t seem to be a huge interest in a specific PoC or trial, even mobile.  This suggests that current trials are driven substantially by the vendors’ interests rather than by operator transformation goals.

And that, I think, is the key point.  We have no shortage of PoCs and trials, but we seem to have a shortage of connections between them and transformation goals.  It seems to be the latter, and not business cases for SDN/NFV, that everyone is really looking for.  This represents a bit of a shift from what I was hearing even a couple months ago, and that could mean that the fall planning cycle is already focusing people on the CFO’s goals, which are broader and more business/financial.  The link between technology and these goals still has to be proven.

We can expect to see some success for both SDN and NFV in operator networks in 2017, but it’s less clear that we’ll see it in the form of a model that expands into a carrier-cloud commitment that would result in the kind of server deployments and virtual-function opportunities that people have expected.  To get that, the alignment between SDN and NFV on the technology side, and transformation on the business-case side, will have to be clearer than it is now.

Which, of course, it could still become.  The underlying truth about the current planning cycle is that it can’t plan something that will succeed convincingly in 2017, and so even those wheels that might be set in motion could still be turned in another direction.  But I think that 2017 is the end of the period when creating an SDN/NFV-centric vision of the future could be central to transformation.  Time, as always, marches on.

Should We Be Thinking of Network Service Evolution in Overlay Terms?

For the decades where IP dominance was a given, we have lived in an age where service features were network features.  When Nicira came along, driven by the need to scale cloud tenancy more than physical devices tended to support, we learned about another model.  Overlay networks, networks built from nodes that are connected by tunnels over traditional networks, could frame a very different kind of network future, and it’s worth looking at that future in more detail.

One of the challenges for this space is fuzzy terminology (surprise!).  The term “overlay network” is perhaps the most descriptive and least overloaded, but it’s also the least used.  Another term that’s fairly descriptive is “software-defined WAN” or SD-WAN, but many associate SD-WAN with not only technical overlays but also business overlays.  You can build your own SD-WAN on top of a real network using only CPE, but you can use overlay networks either independent of or in partnership with internal nodes and the underlying physical network.  SDN is the worst term applied to this because practically anything with an API is called “SDN” these days.  I’m going to use the term “overlay networks” for neutrality’s sake.

In an overlay network you have two essential pieces—tunnels and edge elements.  Tunnels are the overlay part—they represent virtual wires that are driven between network service access points on the physical network that underlays the overlay.  Edge elements terminate tunnels and provide a traditional interface to the user, one that standard equipment and software recognizes.  In an IP overlay network, these edge elements would “look” like an IP device—an edge router or gateway router.  Some vendors offer a combination of the two pieces, while others promote a kind of implicit overlay model by offering hosted switch/router instances and supporting standard tunnel technology.

Some overlay networks have a third element (most often, those offered as a tunnel-and-element package), which is a header that’s attached to the data packets to carry private addresses and other information.  Others simply use tunnels to elevate the routing process and isolate it from the network-layer devices, but retain the same IP addresses or use “private” IP addresses.  You can make an argument for either approach, and to me the distinction isn’t critical enough to include in this particular discussion.

Simple overlays are built by meshing the edge elements with tunnels, using any convenient tunneling protocol that suits the underlayment.  In the edge element, a basic forwarding table then associates IP addresses (usually subnets) with a tunnel, and thus gets traffic onto the right tunnel to terminate in the appropriate edge device.  You can apply policy control to the forwarding tables to either limit the access of specific users/subnets to specific destinations, or to steer traffic onto different tunnels that go the same place, based on things like availability or class of service.
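A minimal Python sketch can illustrate the forwarding mechanics described above (this is purely illustrative; the tunnel names and class-of-service labels are invented, and real edge elements are far more elaborate).  The edge element maps destination subnets to tunnels, steers by class of service, and denies traffic with no matching route:

```python
import ipaddress

class EdgeElement:
    """Toy overlay edge element: associates destination subnets with tunnels."""
    def __init__(self):
        self.routes = []  # list of (subnet, {class_of_service: tunnel_name})

    def add_route(self, subnet, tunnels_by_cos):
        self.routes.append((ipaddress.ip_network(subnet), tunnels_by_cos))

    def select_tunnel(self, dest_ip, cos="best-effort"):
        addr = ipaddress.ip_address(dest_ip)
        for subnet, tunnels in self.routes:
            if addr in subnet:
                # Policy steering: the same destination can ride different
                # tunnels depending on the requested class of service.
                return tunnels.get(cos)
        # No forwarding entry means no connectivity: access is policy-controlled.
        return None

edge = EdgeElement()
edge.add_route("10.1.0.0/16", {"best-effort": "tunnel-internet",
                               "low-latency": "tunnel-mpls"})
edge.add_route("10.2.0.0/16", {"best-effort": "tunnel-internet"})
```

The key design point is that connectivity here is whatever the forwarding table says it is; the underlay only has to deliver the tunnels.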

The tunnel-steering thing is one benefit of the architecture.  If you have a set of sites that have multiple service options available at the underlayment level, you can tunnel over them all and pick the tunnel you want, either for failover reasons or to do application-based QoS management.  This is how many SD-WAN offerings work.  But multi-tunneling can also be used to bridge different networks; an edge element functioning as a gateway might allow tunnel-to-tunnel routing, so it might then bridge users on one network with users on a different one.  This mission is the other common SD-WAN application; you link MPLS VPN sites with Internet VPN sites on a common overlay-based network.

In theory, any overlay network-builder could deploy gateway devices even if they didn’t have different underlay networks to harmonize.  An intermediate gateway point could let you create natural concentration points for traffic, creating “nodes” rather than edge points in the overlay.  This could be done to apply connection policies in one place, but it could be combined with multi-underlay features to allow overlay-builders to aggregate traffic on various underlay networks to a place where a different tunnel/underlay technology connected them all.

All of these overlay applications work as business-overlay networks; you can set them up even if you’re not the underlay provider.  However, the real benefit of overlay networks may be their ability to totally separate the connectivity part of networking from the transport part, which requires their use by the network operator.

As I noted earlier, it’s perfectly possible to build an overlay (SD-WAN-like) network without technical participation on the part of the underlay.  It’s also possible to have a network operator build an overlay VPN, and if that’s done there could be some interesting impacts, but the difference depends on just how far the operator takes the concept.  An operator offering an overlay VPN based on the same technical model as a third party wouldn’t move the ball.  To do more than that, the operator would have to go wide or go deep.

If an operator built all their IP services on an overlay model, then the services would be true ships in the night, meaning that it would be impossible for users of one service to address users of another, or even to attack them through the underlying public addresses.  Overlay routing policies would control both connectivity and access, and movement in a physical sense (geographic or topological) would not impact the addressing of the endpoints or the efficiency of transport.

The most significant impact, though, would be that if all services were overlay, then the task of packet forwarding/routing would almost certainly be within the capabilities of hosted nodes, not reserved for custom routers.  Since you don’t need real L2/L3 when you’re creating connectivity above it in an overlay, you could dumb down current L2/L3 layers to be simple tunnel hosts.  This approach, then, is one of the pathways to the substitution of hosted resources for devices.  This is not necessarily NFV because you could treat the hosted nodes as virtual devices that were deployed almost conventionally rather than dynamically, but NFV could support the model.

A tunnel-focused infrastructure model would also deal with class-of-service differently.  Each tunnel could be traffic-and-route-engineered to provide a specific SLA, and “services” at the overlay level would be assigned a tunnel set to give them the QoS they needed.  You could implement any one of several options to link service changes and traffic to the infrastructure level, which means that you could implement vertically integrated dynamism.  That’s essential if you’re actually going to sell users elasticity in available connection capacity.  Best of all, you could do the matching on a per-route-pair basis if needed, which means you’re not paying for any-to-any capacity you don’t use in a logical hub-and-spoke configuration of applications and users.

All of these positives could be achieved quickly because of the nature of overlay technology—by definition you can do whatever you like at the service layer without impacting the real-network underlayment.  You could thus transition from IP/Ethernet to SDN, or from IP to Ethernet, or to anything you like from anywhere you already are.  The overlay structure creates unified services from discontinuous infrastructure policies (as long as you have some gateways to connect the different underlayments).

To me, an overlay model of services is the only logical way to move forward with network transformation at the infrastructure level, because it lets you stabilize the service model throughout.  We should be paying a lot more attention to this, for SDN, for NFV, and even for the cloud.

How Far Should NFV Componentization Be Taken?

If software is driving networks and infrastructure, perhaps we need to look more at software architecture in relation to things like the cloud, SDN, and especially NFV.  We’re seeing some indication that real thinking is happening, but also some suggestions that further thought is required.

Many of you know that I write regularly for Tech Target on software development and architecture issues.  One of the persistent software architecture issues of the last two decades has been componentization, the dividing of applications into components or modules.  In that period, the focus has changed from programming languages to facilitate componentized software to the concept of modules linked through network connections, or services.  SOA (Service Oriented Architecture) came along about 15 years ago, and today it’s all about “microservices”, meaning even smaller components.

Componentization has also been an issue in NFV from the first.  The ISG meeting I attended in April 2013 included a rather testy discussion of “decomposition”, meaning the division of features now provided in monolithic vendor appliances to functional components that could then be “recomposed” into virtual functions.  Operators seemed to favor this approach, while most vendors were opposed—quite vocally in some cases.

This political-or-business-level reaction is predictable.  The goal of any buyer is to commoditize everything they buy, and the goal of every seller is to differentiate their offerings.  The notion that a virtual function set created from decomposed vendor physical functions would allow best-of-breed or open-source substitutions to be made is attractive to operators and repellent to network vendors at this level.

Not all NFV vendors feel that way.  The 2013 discussion was led by a vendor whose product was designed to facilitate threading work through network-connected chains or paths.  The incumbent network vendors, knowing that anything that opens up their own device logic hurts them, opposed the measure.  It’s recently been gaining support again, boosted in part by non-VNF incumbent vendors and in part by NFV ISG participants who see composing microservice-based functions as something that increases NFV’s role and value.

OK, so if I put on my software-architect hat, just how far can microservices, or decomposition, or whatever you’d like to call it, actually take us?

If we visualize componentized software, we can draw a picture of how an application processes a unit of work (provides a response to a request, transfers something from one port to another) as being a bunch of arrows between component blocks.  Each arrow means that one element passes something to another, and this has two impacts that are important.

First, there is no such thing as a zero-time exchange.  If, in software, I invoke an external function rather than simply code the implementation in-line, I’ll always introduce a delay that represents the time needed to pass control—the overhead associated with that external function.  For functions that are part of the same software package or image, this is a “local” call and the delay is usually very small, typically less than a millisecond.  If the function is network-connected, though, then the propagation delay of the network has to be added in.

How much is that?  It depends.  Across the US, a single-fiber link would introduce about 25 milliseconds of delay.  A multi-hop router connection of the same distance would introduce somewhere between 100 and 300 ms, depending on what kind of network (VPN, the Internet) you’re running.  The important point is that every arrow in our little component diagram is an exchange that, if network-connected, adds the network delay to the end-to-end delay, the time between input and output.

Suppose we have a service chain of three elements.  There are two arrows connecting them, and if both are connected via a network we’d add, conservatively, 70 ms per arrow, or a total of 140 ms of introduced delay.  That’s not extravagant given typical Internet delays.  But suppose now that we micro-componentize and generate 20 components and 19 arrows.  We’re now talking about a delay of about one and a third seconds.

The second issue is that microservices typically mean framing applications not as a linear flow but as a master application that links successively to services provided by components.  We now have not only the presumption of accumulated delay, we have the possibility that each master-to-component interaction will generate a round-trip delay because it has to get a response before it moves on.  Our 70ms delay then becomes 140 ms per interaction, and we have almost three seconds of total delay.
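The arithmetic behind these figures is easy to check.  Here’s a few lines of Python using the same assumed 70 ms one-way figure from the example (the figure itself is the text’s conservative estimate, not a measured value):

```python
ONE_WAY_MS = 70  # assumed one-way network delay per arrow (from the example)

def chain_delay(components):
    """Linear service chain: one one-way hop per arrow between components."""
    arrows = components - 1
    return arrows * ONE_WAY_MS

def master_worker_delay(components):
    """Master invokes each component and waits for its response: round trips."""
    interactions = components - 1
    return interactions * 2 * ONE_WAY_MS

print(chain_delay(3))           # 140  (three-element chain, two arrows)
print(chain_delay(20))          # 1330 (20 micro-components, ~1.33 s)
print(master_worker_delay(20))  # 2660 (request/response pattern, ~2.66 s)
```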

Some data-plane exchanges are very sensitive to delay.  Imagine, for example, a database application whose record-level I/O is separated via a network from the main application.  If it does a million I/O operations, the delay accumulated is a million times the round-trip delay.
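The scale of that accumulation is worth working out.  Assuming the same 140 ms round trip used earlier (again, an illustrative figure, not a measurement):

```python
ROUND_TRIP_MS = 140        # assumed network round trip per I/O operation
io_operations = 1_000_000  # record-level reads/writes in the example

total_ms = io_operations * ROUND_TRIP_MS
total_hours = total_ms / 1000 / 3600
print(round(total_hours, 1))  # roughly 38.9 hours of pure network waiting
```

A job that might run in minutes with local I/O becomes a multi-day exercise, which is why network-separating record-level I/O is a non-starter.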

The point here is that it’s not good software design to segment things so much that introduced delay becomes a major issue.  You could combine the components into a single image to make the data paths internal, but that’s outside the scope of function-hosting as NFV defines it.  In fact, to do it would require not only that you segment functions down to a low level, you’d have to enforce a common component structure across all functions or you wouldn’t be able to interchange the pieces.  Every deployment is then a software development task, in a sense.

I’m a big fan of microservices, but not in data plane roles.  I also have a kind of philosophical problem with the notion of promoting micro-segmentation of VNFs when the NFV software itself wasn’t designed that way and isn’t developing that way.  Nor, at present, are the cloud pieces (like OpenStack) that are expected to be used.  NFV and cloud software could and should be designed in a microservice-friendly way, so why not start with that?

The notion that current technology can be split into a bunch of interchangeable parts is attractive in the sense that it could reduce the influence of key VNF-centric vendors.  If that’s the goal, though, then a sensible project on open source would make a lot more sense.  I’ve advocated what I called a “VNFPaaS” model, a model where a library of open functions supports the development, deployment, and management of VNFs.  Would this model be backed by VNF providers?  Probably not, but they’re not going to back a microsegmentation model either.  The fact is that microsegmentation isn’t going to solve any problems because the network-distributed form generates too much delay and the software-integrated form would require my VNFPaaS or every VNF construction would become a development project.

I think we’re losing sight of the ball here.  Micro-segmenting VNFs except in specific control and management applications dodges reality.  The goal of NFV isn’t to justify itself by tweaking the market to fit its parameters.  It’s the other way around; we’re designing NFV here, not designing the market.  What will make NFV successful is now, as it has always been, proving out a compelling business case.  Using microservices in NFV and SDN control and management processes, and even in service control plane processes, is logical.  In the data plane, I think we’d have to be very careful of our delay budget or we risk major problems instead of major victories.

How Dell’s and HPE’s Transformations Impact Them May Depend on Their Carrier Cloud Positioning

Just as we enter a critical period for the role of servers in networking, we’re seeing some of the major server players enter a period of transformation.  Dell has finalized its EMC/VMware acquisition, and HPE has announced they’re selling off their enterprise software business.  These two decisions seem to cement both vendors into a position of a “platform” company, and in an age of cloud, SDN, and NFV it’s not easy to say what that means.

Solution sales used to be the watchword in tech.  You saunter into an account, listen to their problems and opportunities, and whip out a ready-made offering that addresses them.  This model was sensible because it kept other players out of the account, where they could muddle up or even steal the deal, by ensuring you had everything the buyer needed.

Obviously this approach fails when the buyer already has some of what they need.  As technology commitment matures for any buyer, they’re fitting pieces into what’s already in place to address their needs rather than building out one-off solutions.  If you sell “solutions” that are made up of a combination of hardware, system software, applications, networks, and so forth, your solution will likely overlap or even collide with what’s there already.  The modern tech world has moved toward selling parts, not cars.

Solution selling poses a second risk, which is that the time you spend listening and suggesting a “solution” isn’t likely to benefit only you when other vendors are already in place.  In fact, nobody wants to educate the buyer anymore because it’s expensive and because everyone else will then step in to undercut your price because they don’t have to cover the cost of all that problem-solving.

This is what Dell and HPE seem to have been thinking.  We sell servers.  To use servers, you need system software to create a “platform”.  You don’t need applications, so we won’t even offer them.  Applications make the connection between business benefits and IT platform costs, and that means that they are where most of the problem-solving costs come in.  And virtualization, the cloud, and NFV are essentially extensions of system software, a part of the platform, right?

Yes, and no.  There is no question that the plumbing of all these modern tech revolutions is at the platform level, but the new technologies have something in common with the solution-buyer needs of old.  There has to be a business case.  People didn’t buy servers and software and wait around for something to do with them, nor will they deploy the cloud, SDN, or NFV without some compelling benefit to play off.  That’s what poses the challenge for Dell and HPE.

Every vendor in any new technology space has to ask the question “What’s in it for me?”  If you’ve committed to being a server/platform vendor, then the only credible answer is “Servers and platforms!”  If you’ve also already bought into the notion that solution selling is for suckers who want others to piggyback on their efforts and insights, then you’ll probably want to package servers and platforms to fit into cloud, SDN, and NFV deals.  The question is how.

If we look at cloud computing as a pure enterprise-data-center-offload mission, then it will reduce the total number of servers sold, for the simple reason that something that’s cost-driven can’t end up costing more overall than what it’s replacing.  My own modeling says that in any event no more than a third of current IT spending could be displaced by the cloud.  This would make pure application-hosting cloud computing pretty much a non-event in terms of potential revenue.

That means the winning cloud can’t be enterprise data center offload.  You either have to target SMBs to broaden your TAM, or you have to do stuff in the cloud that wasn’t being done before.  Or both.  And whatever course you take, you have to decide whether you’re ready to lead the charge into the new market opportunity or wait for it to be developed by others.

It’s my view that both Dell and HPE have decided not to do solution selling with NFV or SDN, but rather to assume that the two technologies have momentum that they can ride.  They see the evolution to these technologies as being one where their differentiation is important.  While I don’t think the story is as clear and explicit with the cloud as with SDN or NFV, I think it’s true in the cloud as well, particularly if you focus on the two key new-TAM points of stuff not being done or the SMB space.

Who promotes the new stuff that could drive up cloud usage?  If it’s not vendors like Dell or HPE, then is it the cloud providers alone?  If that’s the case, then I think we’ve made a great case for an Open Compute Project, because if the providers are going to carry the water they’re not going to pay fat margins to vendors to supply the technology.  But if Dell or HPE are going to make a case for a supercloud mission, they need to fulfill the mission with web services that support it.  Microsoft and Amazon offer web services for a couple dozen specialized cloud-specific capabilities.  Add to that inventory and enrich the capabilities of the basic tools already there, and you could build cloud-specific apps.  Why wouldn’t you want to offer that if you were trying to provide cloud platforms?

The SMB space is even clearer.  Nobody can really sell to an SMB except a local reseller, VAR, or integrator.  Every major vendor has a channel program to drive this kind of business.  If those same players are going to adopt cloud computing, then they’ll need specialized tools to lower the bar, delivered through those same local resellers.  Who, we just heard, already have relationships with vendors like Dell and HPE.  So if Dell and HPE did specialized platform software to help channel partners drive the cloud SMB business, they’d have something they could bring to the table with cloud providers.

The telco space is the clearest of all, issue-wise, and the big opportunity, for both Dell and HPE.  While my model says that you could have a massive cloud transformation without the participation of telcos, it would take three times as long and penetrate a smaller piece of IT spending.  Some telco applications, like mobility and CDN, could add over 30 thousand data centers and millions of servers, and nobody but the telcos can invest in this.  Other applications, like IoT, could add even more servers and are difficult to credibly launch except with strong telco participation because telcos have a history of investment in technologies that require a massive capital build-out.  As I noted earlier, I think it’s clear that neither Dell nor HPE have been prepared to spend resources to develop a credible SDN or NFV market—no “solution selling”.

I think Dell’s acquisition of VMware puts them in a position where driving the market is more important, because the natural course of both SDN and NFV would be based on standards that VMware has no particular play with.  On the other hand, VMware has a super-credible, enterprise-proven solution applicable to both SDN and NFV.  They can promote it if they can wire the deal early, but not if it’s already gone to OpenStack and OpenDaylight.  SDxCentral says that VMware is “getting serious” about the telcos and NFV.  If serious means wanting seriously to get the check, I agree.  With respect to deeper meaning, I’m still looking for the full solution to a business case.

HPE has always been a very strong player in NFV technology and they have decent if underplayed SDN assets.  Unlike Dell, who would have to build or partner to get a full-spectrum NFV approach that could make a broad business case, HPE has all the pieces.  However, they seem to believe that they can simply follow the progression of PoCs into deployments, and unfortunately none of the PoCs really make a broad business case, particularly for carrier cloud.  But this could change if Dell takes a serious run at the NFV and carrier cloud space, because HPE cannot afford to play second fiddle to Dell here.  The Street is mixed on whether HPE would be worth more as a bunch of split-off parts or as a whole, and carrier cloud might be the deciding factor.

If you model out all the possible ways in which the cloud could deploy, what stands out is that it deploys fastest if it has broad support from a class of provider whose rate-of-return expectations are historically low and who are not judged by the Street on the size of their capital budgets.  The network operators and public utilities are the only players who fit that model.  Dell and HPE, then, would be the most likely to succeed if they promoted carrier cloud.  Neither is doing that effectively right now; they’re simply admitting it exists, and that may be the stroke that leads to commoditized Open Compute competition being the real winner in the cloud.

How Revolutionary is Huawei’s New SDN Controller?

Huawei’s announcement of its new “Agile Controller 3.0” for SDN raises some interesting issues in the transformation game.  Huawei’s press release offers comments like “provides the capabilities of on-demand network resource pool reservation, automatic deployment, intelligent optimization, and bandwidth adjustment on-demand for enterprise customers across campuses, carriers and data center networks” and “The AC 3.0 is the ‘Super Brain’ of networks which enables service innovation in cloud era”.  A lot of that seems to overlap the concept of “orchestration”.  Does it?  It’s complicated.

Orchestration isn’t firmly defined, but the sense of the market at the moment is that it means the process of sequencing complex steps toward a common goal.  In most NFV terminology, it has a slightly narrower meaning, which is the decomposition of a model element within a service model, or of the model overall, into deployment/connection steps.  Orchestration operates across multiple classes of resources, and in most cases it would include provisions for lifecycle management, meaning the response to ongoing events/conditions.

If you can apply “orchestration” to a “model element” then it would be reasonable to say that the element you’re modeling could have two different makeups, one a functional or logical makeup (it decomposes into other models) and the other a physical or deployment makeup.  This relates to the idea of decomposing a general “service” down to the committed “resources”.  At the top, things are functional, and toward the bottom they’re physical.
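To make the functional-versus-physical distinction concrete, here’s a minimal Python sketch of successive decomposition; every element name here is invented for illustration, not drawn from any spec.  Functional elements decompose into the child models below them, and physical elements at the bottom emit concrete deployment steps.

```python
# Hypothetical sketch: successive decomposition of a service model.
# Functional elements decompose into child models; physical elements
# at the bottom emit deployment/connection steps.

def decompose(element, steps):
    """Walk a model element down to concrete deployment steps."""
    if element["kind"] == "physical":
        # Bottom of the tree: commit a resource.
        steps.append(f"deploy {element['name']} on {element['target']}")
    else:
        # Functional element: decompose into the models below it.
        for child in element["children"]:
            decompose(child, steps)
    return steps

service = {
    "kind": "functional", "name": "vpn-service",
    "children": [
        {"kind": "functional", "name": "access",
         "children": [{"kind": "physical", "name": "vcpe-firewall",
                       "target": "edge-dc-1"}]},
        {"kind": "physical", "name": "vpn-core", "target": "metro-fabric"},
    ],
}

print(decompose(service, []))
```

The point of the sketch is only that the same recursive walk handles both makeups: “what is this made of?” at the top, “where does it go?” at the bottom.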

What Huawei is announcing is an SDN controller, which would ordinarily put the controller toward the bottom of a classic orchestration/decomposition process.  When you draw pictures of a traditional cloud or NFV implementation, you’d find the controller south of OpenStack Neutron, meaning way down the structure of deployment.  However, Huawei is offering something that is very much needed and not much discussed, federation of SDN domains.  Or at least that’s what I think; I can’t get a detailed picture quite yet.

SDN controllers are normally associated with OpenFlow and white-box switching.  This model of SDN has tended to focus on cloud multi-tenancy, which means it’s been applied mostly within a data center.  In that mission, it is logical to think of it as an adjunct to committing a VM or container—the connection part.  However, networks are bigger than data centers, and no single SDN controller could be expected to handle the work associated with a large scope.  That’s where federation comes in.

“Federation” is another term of surpassing fuzziness.  I use it in the context I first heard it, which was to describe a relationship between network-owners that allowed for sharing of resources to create services.  In the SDN context, federation is the thing that lets you combine SDN domains so that one mega-controller doesn’t have to run the whole world.  Instead, you have a hierarchy not unlike the one we find in the DNS world today.  If you add federation to SDN, you get the potential to define and build mega-networks.  That necessarily elevates the controller from being down in the dust under OpenStack up to somewhere higher.
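A DNS-like federation hierarchy can be sketched in a few lines of Python.  The controller names and the domain-matching rule here are purely hypothetical, but they show the mechanism: a parent controller hands off path setup for any domain owned by a federated child, and handles the rest itself.

```python
# Illustrative sketch of SDN controller federation as a DNS-like
# hierarchy. All controller and domain names are invented.

class Controller:
    def __init__(self, name, domains=None):
        self.name = name
        self.children = dict(domains or {})  # domain -> child controller

    def set_path(self, dst):
        # If a child controller is federated for this domain, delegate;
        # otherwise program the flow locally.
        domain = dst.split(".", 1)[-1]
        if domain in self.children:
            return self.children[domain].set_path(dst)
        return f"{self.name} programs flow to {dst}"

east = Controller("east-metro")
west = Controller("west-metro")
root = Controller("national", {"east.net": east, "west.net": west})

print(root.set_path("siteA.east.net"))    # delegated to east-metro
print(root.set_path("hub.national.net"))  # handled by the root itself
```

No single controller has to see the whole network; each only has to know which child owns which domain, which is exactly the DNS division of labor.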

Here, obviously, there’s the question of the relationship between SDN federated service-building and orchestration.  If we look at SDN in a pure connection-service mission, you could argue that a federated controller could do anything orchestration could do, as long as the entire infrastructure was in-scope to some controller or another and the controllers were federated.  For cloud-building or NFV, that presumption of mission could be practical under some conditions.

There are a lot of ways that you could conceptualize deployment of a service consisting of components of software and connectivity.  Even OpenStack has recognized this, with a kind of hazy option for Nova-hosting-centric or Neutron-network-centric deployment.  An application or an NFV service could be set up by assigning components/VNFs to data center complexes and networking them, then deploying the components to a particular server in the complex and making the final stub connections.  You could also deploy all the components and network them, or you could do things by data center or metro area.  You get the drift.  The point is that if you had a deployment strategy that did network-building in a burst, you could treat that process as setup of a massive federated SDN service.  You could pull much, perhaps all, of the network part out of an OpenStack Nova or container deployment.

Obviously this would have to be reflected in the higher-level orchestration model, but so does any other approach.  Orchestration and modeling drive a decomposition process that has to proceed in some logical way, meaning that you have to deploy software, connect elements, and report status so everything is coordinated.  What the SDN-centric vision that Huawei’s federation approach could offer is a deployment model where the hosting and connection sides are separated.  That could simplify some of the issues of decomposing models to little islands of functionality that are then connected into the glorious whole.
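As a rough illustration of that separation (all names invented, not drawn from Huawei’s materials), a single parse of the service model could emit the hosting plan and a one-burst network plan as parallel, independent work streams:

```python
# Hypothetical sketch: parse the service components once, then emit a
# hosting plan and a network plan that can proceed in parallel.

def split_plans(components):
    hosting, network = [], []
    for c in components:
        hosting.append(("host", c["vnf"], c["datacenter"]))
    # Network-building in one burst: a single federated SDN "service"
    # connecting every component, rather than per-island stitching.
    endpoints = [c["vnf"] for c in components]
    network.append(("connect", tuple(endpoints)))
    return hosting, network

components = [
    {"vnf": "vFW", "datacenter": "edge-1"},
    {"vnf": "vRouter", "datacenter": "metro-2"},
]
hosting_plan, network_plan = split_plans(components)
print(hosting_plan)
print(network_plan)
```

The interesting design choice is that the network plan is one object, not a per-component fragment list, which is what a federated controller hierarchy would make practical.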

I’d be a lot more confident about all of this if I had all the details on AC 3.0, but I can’t find anything released on it other than the press release.  I’ve gathered insight from the specs of the earlier SDN products.  I’d also like to see how Huawei would propose to tie all of this into DevOps for the cloud and orchestration for NFV.  Huawei has some good insight in NFV, but I can only speculate how AC 3.0 might be tied to it.

If that tie-in could be provided it could be very interesting.  I blogged earlier this week about the notion that we needed to take a network-centric perspective on NFV, not a hosting-centric perspective.  If operators agree, and if Huawei’s initiative can promote such a shift in mindset, then it might be a major step forward in NFV.

The change won’t be easy, though.  I’ve worked on a number of standards and technologies that were based on modeling services and decomposing the models to build something that works.  The easiest way to decompose a model, to orchestrate something complicated end-to-end, is to do it successively.  You decompose the current level into the level below, and keep that up until you hit bottom.  This creates that islands-of-functionality situation I described above, islands that have to be connected some way.  The key point is that this approach tends to create a network as a series of fragments.  If you want to separate connectivity and hosting to be parallel functions, you’d have to parse the model and build separate instructions for the two areas, then execute them.  Even then, you’d still need to build WAN and data center connections separately if you want hosting and connectivity to be separated, because you can’t do data center connectivity without functions to connect.

It’s also not going to be easy for Huawei to reap the full benefit of AC 3.0, even if I’m totally correct in my view of its capabilities.  In my first consulting job, I used to talk about the “feature-function-benefit” trajectory in marketing.  We seem, as an industry, to be leaving out the “benefit” piece.  A capability is meaningful only in the context that establishes buyer value.  Huawei’s marketing/positioning of AC 3.0 is weak, but so are the positions of its competitors.  That means that the notion of holistic SDN, or cloud SDN, or SDN-centric NFV, is still fruit hanging on the tree, waiting to be picked.

How Three Pure-Play Optical Vendors are Coping with Network Transformation

Last month I laid out a number of points on the network implications of “fog computing”, and now is a good time to take those implications and mix them with vendor positioning and opportunity to judge how well vendors will be able to address the new issues.  There are four classes of vendors to look at, so this will be spread out over a fair interval.  Since I have to start somewhere, I propose to start with the vendors who have the greatest opportunity and risk, which is the optical vendors, and to focus on three “pure-play” vendors, ADVA, Ciena, and Infinera.  While these guys all get classified as “optical” players, they’ve actually taken a very different path toward the future, so let’s first summarize their position and move forward from that.

ADVA is an on-ramp-centric or data/control-plane player with respect to my fog-distributed metro model.  They have a cloud-DCI product that’s targeted not so much at enterprises but at OTT players, cloud providers, and potentially carrier cloud players.  They purchased Overture Networks, and through that deal acquired a strong NFV and orchestration product set as well as carrier Ethernet and vCPE.

Ciena has a long history in the optical space, going all the way back to the SONET era.  Like many of the old-line vendors, they’ve faced transitions before and not always with complete aplomb.  Most recently, they’ve entered the NFV and orchestration space with the acquisition of Cyan and their Blue Planet product and ecosystem.

Infinera was known as a fast-bits player from its inception, and the company enjoyed a boom when the market seemed to be focusing on optical capacity (which has since busted).  They’re the most traditional of all the optical players, having made no specific moves to enter the broader SDN space or to include NFV and orchestration.

As far as stock price goes, the Street has liked ADVA most over the last year, and Infinera the least.  Ciena’s share price has held steady for the last two years, a kind of baseline.  ADVA jumped up about a year ago, and has remained high, and Infinera started a run up early in 2015 but has fallen significantly to below its 2-year-ago price level.  On a scale where 1 is a strong buy and 5 a strong sell (3 a hold), ADVA is rated 2, Ciena 1.9, and Infinera 2.7 (as of the week ending September 2nd).

I’ve rated vendors in terms of strategic influence with their operator buyers, and over the same 2-year period, Ciena has increased its influence by about 30%, ADVA by about 13%, and Infinera has lost 15%.  Much of Ciena’s gains came from the Cyan purchase, though they’d been trending up slowly before that move.  ADVA’s influence has been growing steadily and the gains don’t correlate to any specific event, and Infinera’s influence loss tracked their stock price, which may indicate that a negative stock trend is something a company has to overcome if it’s going to exercise influence on its market’s future.

The strategic influence score is a good place to start my discussion of the three companies.  None of the three come close to matching the traditional equipment vendors like Cisco, Ericsson, Huawei, Juniper, or Nokia.  Even Ciena can muster only about 60% of Juniper’s score, for example, and Juniper is the lowest of the traditional vendor group in terms of strategic influence.  What this means is that no optical vendor really drives strategy for network infrastructure, and that has a profound impact on optical vendor prospects as operators put pressure on their capital budget.

If optical vendors do nothing, then the electrical giants will set the strategic agenda, and will frame both SDN and NFV more in higher-layer terms.  That means optical missions don’t change, SDN and NFV features aren’t critical in optical equipment, and differentiation diminishes over time.  To win, a pure-play optical company has to make sure that they have a seat at the strategy table and can position their assets as a part of an SDN/NFV transformation.

Ciena, the most engaged of the group, has obviously seen the meaning of the SDN and NFV waves, and has concluded that they needed to have a play in both spaces.  Their acquisition of Cyan, whose Blue Planet technology had promise but was under-funded, and their subsequent focus on making Blue Planet real has made them the most SDN/NFV-centric of the three.  However, their positioning is still very conservative relative to the mainstream vendors in networking, and in SDN/NFV.  They could do a lot more to sell their unique value proposition.

ADVA has a more service-focused positioning than the others.  I’ve run into them at Tier Two/Three players where they’re favored for connecting business sites directly with fiber, and their acquisition of Overture Networks gave them a strong carrier Ethernet offering, not to mention an NFV story.  Overture’s NFV orchestration capability was always among the strongest in the industry, but Overture themselves tended to play it as an adjunct to Ethernet services, focusing on vCPE.  ADVA seems to be following that same path, and if that continues then it might face a challenge in developing a strategic optical mission and promoting it to customers.  Like Ciena, they could do more.

Infinera is the hardest of the three players to assess.  They recently announced Xceed Software Suite, which is an open-source SDN controller and two custom applications, elastic bandwidth and optical virtual networks.  The problem is that it’s proved incredibly difficult to add features to the transport layer, for the simple reason that services are supported higher up.  Infinera, in my view, doesn’t move the ball with its offerings.  The fact that they just did their Xceed announcement and exposed SDN features without creating a compelling link to services makes it harder for them to now refine their SDN role and make it better.  The media hates a re-launch.

So it’s fair to say that none of the optical vendors have done a stellar job connecting their SDN/transport features to service evolution.  Ciena arguably has the best higher-level framework to link with services, but ADVA’s Ensemble NFV stuff is very strong and won an award at a mobile show for application to 5G evolution.  Infinera, as I’ve said, has provided an SDN model in transport, but they don’t connect it to higher-layer service processes either with a uniform orchestration approach of their own (ADVA and Ciena have such a capability) or with policy links to service-layer control processes.

The lack of a clear tie between optical deployment and credible service and infrastructure trends worries the Street, even though it’s earnings and not revenue or future technology trends that tend to move stocks.  Ciena, after a decent report, was downgraded by an analyst firm on the basis of “valuation”, which simply means the share price is too high to be justified given credible trends.

There’s no question in my mind that the smartest move an optical vendor could make would be to tie their strategy to metro evolution as driven by 5G rollout.  All of the vendors have at least blown kisses at 5G through comments on things like the network slicing that’s included in the 5G vision, but none have been really effective at showing how 5G service-layer and control-plane technology would connect to their fiber transport offerings.  That would involve framing optical SDN as a partner to service SDN and to NFV.

That’s a lot to ask of optical vendors, to be fair, but if you want relevance in a dynamic market you have to expect to carry your own burdens.  In any event, both ADVA and Ciena have taken a leap upward to service orchestration, which provides the easiest way of linking optical transport with higher-layer evolution.  Infinera is behind in this respect; their SDN controller and applications approach isn’t tied tightly enough to metro changes driven by content delivery and 5G, and that’s what they need to be doing.  That doesn’t mean ADVA and Ciena are sitting pretty, though.  Both these companies need to do a better job of capitalizing on their assets, and that involves sterling marketing/positioning.  Getting to that has been difficult for all optical vendors, and for NFV aspirants as well.

You can’t make optical transport strategic by postulating that you’ll provision a bunch of users with pure optical service.  Even optical access to Carrier Ethernet isn’t enough.  You have to make agile optics a player in agile metro infrastructure, not just a set of pipes over which agile tunnels are built at the electrical layer.  But direct service coupling to optics isn’t the solution.  You need two symbiotic orchestration processes, one for service-layer and one for transport.  That vision is apparently easier to understand from the top down.

And there are players at the top, with their own designs.  Lurking in the wings here are the network-layer vendors who, like Juniper, have been pushing hard on packet optics from above.  These vendors haven’t made the perfect connection between the two layers either, but they don’t need to justify an independent optical layer so the burden on them is less.  If none of the optical players manages to sing the song right, then packet-from-on-high will win.  If only one gets it right, then that one is going to take a lot of market share.

Three Specific Steps Needed for Vendors To Sell their NFV Approach

I had an interesting exchange with a vendor recently, talking about the future of and leadership in NFV.  What made it so interesting was that the vendor echoed a sentiment I’d heard from some operators.  His point in essence was that “modernization” was the driver for NFV, that the operators now needed only to prevent falling into the proprietary traps of the past as they embraced the inevitable NFV future.

I’m sure that nobody is surprised to hear I don’t agree with that point.  I’ve heard it before and even fallen prey to the view myself.  The telco market of old was essentially a public utility, with technology changes coming as the players believed they should.  In such a market you could argue that NFV is inevitable.  You could also argue that ATM (asynchronous transfer mode, for those who don’t remember) was inevitable.  It wasn’t, as history has shown.

So what could vendors do to win?  I’ve tried to combine the views of vendors and network operators to create a kind of action list, targeted in particular to the executives who set product, marketing, and sales policies.

First, NFV has to be made into a win-win.  There is nothing inevitable about it, nor will NFV success necessarily make a vendor money.  The key is going to be to navigate some very complicated points so that a real value proposition can be made, and so that vendors will have the incentive to make it.

Making the business case for NFV isn’t difficult in one sense; we already know that the only thing that can really drive NFV from its current point to early success is operations cost reduction through service automation.  The problem is that only seven or eight vendors can actually provide the essential tools.  You have to automate services from top to bottom and end to end, which means you have to start with OSS/BSS processes and work your way down to management processes.  Operators know now that this is going to take two or three levels of orchestration, created either by layering interdependent technologies or through a single unified model.  If that can be done, then opex savings could easily reach ten cents per revenue dollar, which would be equivalent to more than half the operators’ capital budget.
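The back-of-the-envelope arithmetic behind that claim, assuming (my assumption, as a round figure) that capex runs somewhere around 18% of operator revenue:

```python
# Illustrative arithmetic only; the 18% capex-to-revenue ratio is an
# assumed round figure, not a reported number.

revenue = 100.0            # normalize to $100 of service revenue
capex_ratio = 0.18         # assumption: capex ~18% of revenue
opex_savings = 0.10 * revenue   # "ten cents per revenue dollar"

capex = capex_ratio * revenue
print(f"Savings as a share of capex: {opex_savings / capex:.0%}")
```

On those assumptions the savings come out to roughly 56% of the capital budget, which is why “more than half” is the right order of magnitude.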

The problem here is that operations savings through service automation doesn’t really sell new equipment.  In fact, you could get a better ROI from not transforming hardware at all, at least for the first four years.  There are few vendors who would be willing to do the heavy lifting to drive NFV success when “success” would involve little incremental hardware spending.  Server vendors gain little, and network equipment vendors might lose less, but nothing dramatic happens.  And worse yet, most of the “NFV success” in virtual CPE is really success that doesn’t drive or even much involve “NFV” in a strict sense.  It’s an agile CPE device strategy.  You can’t create big vendor wins without major infrastructure transformation.

The only way out of this is to attack a green-field opportunity that has clear credibility, and the best of these is mobile infrastructure.  If you want to make a case for your product set in an NFV future, you need to make a case for it in mobile infrastructure.  We could easily build 30 thousand edge data centers with 5G rollouts, and that would be enough to get NFV going.  However, you can’t do this with servers alone; you need a really strong 5G metro strategy.

Which leads us to the second point, which is that NFV, like the cloud, is first a virtual network problem and only second a hosting problem.  If there’s anything transcendentally frustrating about NFV progress, it’s the second-rate consideration given to virtual networking.  NFV, to succeed, builds a carrier cloud.  We know from current cloud-provider successes that you need a very strong virtual network model in the cloud.  So why is it that we’re doing essentially nothing to build one?  Yes, I know that OpenStack includes Neutron as its NaaS tool.  Yes, I know that OpenFlow and especially OpenDaylight offer the promise of network control at the flow level.  But what are we building there?  We still have to develop a network architecture.  Google’s Andromeda is arguably the centerpiece of Google’s cloud.

Most of the vendors who have full-service NFV support also have an SDN capability.  All of them should understand open networking and virtual networks.  This understanding has to step from behind the scenes to a prominent position, if for no other reason than that 5G metro deployment will depend on a strong virtual network model, and NFV hosting and carrier cloud are just as dependent.  If you have an SDN story, sing it proud.  If you don’t, then you’d better get one.

5G presents a special challenge in terms of virtual networking because it’s clear that it won’t be like 4G with IMS and EPC, but not clear just what it will be.  We’ll also have a 4G-to-5G transition problem to address, which means that mobility (the big challenge in all mobile networks) has to support an EPC model and some new approach.  This should induce vendors to look at metro virtual networking as an open, generalized, problem not a specific one, and to present tools that combine to create solutions, not solutions in isolation.

The concept of “open” is our last point.  Vendors must provide an open model of virtual-network and virtual-function architecture.  The NFV ISG is slowly recognizing the essential features, but they’re still a long way from a practical structure.  The open-source activities we see are all working on the assumption that open-source equals open, and that’s not true.  Open-source software can be just as restrictive as proprietary software if the componentization and licensing aren’t done right.  Operators are already learning that with VNFs, which suffer from both proprietary elements and licensing issues.

Modeling is the key to making something open, and we know that from the evolution of DevOps tools for the cloud.  There is a superior modeling technology out there in my view—TOSCA—but at the very minimum you’d have to support the notion of “intent models” for each layer of service structure and provide a means to adopt any modeling and decomposition architecture at any point, by simply making a given opaque intent model decompose through another tool.  That capability, which I’ve been calling “federation” of model elements, is absolutely critical and yet we almost never hear of it.
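A minimal sketch of what federation of model elements could look like, assuming a simple registry of decomposition tools.  Every name here is hypothetical; the point is only that each opaque intent model declares which tool decomposes it, and orchestration dispatches accordingly.

```python
# Hypothetical sketch: an opaque intent model can decompose through any
# registered tool; orchestration doesn't care which. All names invented.

DECOMPOSERS = {}

def decomposer(model_type):
    def register(fn):
        DECOMPOSERS[model_type] = fn
        return fn
    return register

@decomposer("tosca")
def tosca_decompose(node):
    # Stand-in for a TOSCA-driven decomposition engine.
    return [f"tosca:{step}" for step in node["steps"]]

@decomposer("legacy-oss")
def oss_decompose(node):
    # Stand-in for a legacy OSS order-handling path.
    return [f"oss-order:{step}" for step in node["steps"]]

def expand(node):
    # Each node declares its own decomposer; dispatch through the registry.
    return DECOMPOSERS[node["tool"]](node)

service = [
    {"tool": "tosca", "steps": ["deploy-vFW"]},
    {"tool": "legacy-oss", "steps": ["provision-access-line"]},
]
print([step for node in service for step in expand(node)])
```

Because each element is opaque above its own decomposition, a TOSCA-modeled element and a legacy-provisioned element can coexist in one service model, which is the practical meaning of “federation” here.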

If you as a vendor can provide a truly federated approach, you can be as open or more open in a practical sense than an open-source implementation.  That means you can at the least provide some differentiation in composable feature/model element terms.  If you don’t do that, then you’ll be competing with the variety of operator-supported open-source models, even if they can’t do everything needed.

So there we have it.  There’s a pathway to creating a winning NFV story.  I think virtually every operator would agree with it.  I think virtually every vendor will ignore it, because fiction is always more appealing than the truth.  I suspect that a year from now, I’ll be having the very same conversations with vendors about how NFV is justified by “modernization”.  Well, at least with those vendors who survive in the NFV space a year from now, and that may be far fewer than we have today.

Can NFV Rise Above vCPE to Reach For the Carrier Cloud?

Many vendors have found hope in the NFV opportunity, including network vendors, software vendors, server vendors, and chip vendors.  At VMworld the CEOs of Dell and VMware held a kind of NFV love-fest, and Intel has long been promoting NFV for the obvious reason that hosting anything consumes hosts, which consume chips.  While all of this is going on, though, we hear that operator projects in NFV have largely focused on premises-hosted vCPE applications.  Are these going to evolve to “real” NFV, or are all these vendors dreaming?  Do they even have to?  Could NFV be a “success” if it never goes beyond virtual CPE?

Let’s deal with that last question first.  Operators’ consensus on defining NFV success is that it would have to improve their overall profit per bit by 10% or more.  Except for business-service-only players, that clearly cannot be achieved with the business-directed vCPE services that are the current priority.  We have to get beyond those, and the question is how we do that.

NFV is about virtualizing network functions, meaning extracting features from dedicated appliances and making them available in software form so that they can be hosted on something.  The original NFV model was focused on hosting in the cloud, or at least on virtualization-equipped data centers of some sort.  “COTS”, meaning commercial off-the-shelf servers, as the hosting point could credibly lead to carrier cloud deployments that my models have forecast could add over one hundred thousand new data centers worldwide.  That’s the kind of opportunity that would engage Dell or Intel or HPE or anyone with a server business.

The challenge up to now has been the limited number of business prospects suitable for vCPE.  While you can credibly host any VNFs in a cloud data center, the vCPE initiatives to date have largely focused on business buyers with Ethernet access.  There aren’t enough of these opportunities to create a big demand for hosting or drive deployment of a hundred thousand data centers.  In fact, one of the reasons vCPE has been popular as an application of NFV is that it doesn’t require data centers at all.  Operators like the idea of putting a general-purpose device on the premises, a kind of mini-server-CPE box, and then deploying the software in it.  The benefit is less economy of scale than it is agility in adding features for managed services.  But that agility value is hard to put a number on, and it doesn’t build masses of data centers that consume servers and chips.

Operators who love the idea of vCPE will generally admit (or at least have admitted to me) that these applications don’t seem to lead to a carrier cloud.  A very few hope that the repertoire of VNFs that could be hosted will expand, but they don’t have convincing candidates for the expansion or market data to validate the opportunity.  Some think that consumer-level vCPE might get them there, but the benefit of deploying cloud-hosted access features to consumers when a typical broadband hub costs less than fifty bucks is limited.  Particularly when you still need home termination of the broadband connection and WiFi.

If vCPE is going to build carrier cloud, it would have to extend to the consumer, and that extension would obviously depend on having a large number of new service opportunities that would justify cloud hosting rather than edge hosting.  Most operators say that things like home monitoring could help, but marketing these against established incumbents is a challenge, and if you need hosting you’d have to deal with the first-cost question, which is getting the hosting out there to fulfill the opportunities marketing creates.

This is why most operators believe that it will take something other than vCPE to drive carrier cloud.  What that could be divides them somewhat, but mostly in terms of the priority given to each option.  The most obvious and most credible is mobile infrastructure, particularly the way that infrastructure would change leading up to 5G.  IoT ranks second, and the generic hosting of application components ranks third.  Let’s look at them in reverse.

Application hosting (meaning offering cloud services) has been attractive to operators for the carrier cloud for almost as long as cloud computing has existed.  Verizon tried it, and by most accounts failed, and nothing much has really changed.  The cloud is about marketing, positioning, and brand.  Verizon shot behind the duck, or ahead of it, depending on what you thought the opportunity was.  Most operators still yearn for cloud revenue, but they still seem uncertain as to how to get it.  That makes the cloud computing driver the least dependable in terms of building infrastructure.  However, it is a logical way to extend vCPE services if you could get that started, which is why it makes the list of drivers in the first place.  My model says you can’t drive an initial deployment with cloud services.

IoT is the most interesting and probably compelling of all the drivers.  My model says that IoT alone could drive a deployment of a hundred thousand or more data centers, making it the only driver of carrier cloud that could stand on its own.  If it could stand at all, that is.  The problem with IoT is that it’s a poster child for what’s called a nascent opportunity.  IoT is to carrier cloud what a whiff of perfume is to a walk down the aisle.  It’s a major commitment in the long term, but in the short term it’s just a vague promise.  There are so many things that would have to come together to make IoT a driver for carrier cloud that the combination looks unlikely in the near term.  In the long term, it rules.

My current model says that mobile infrastructure would likely add no more than about 30,000 data centers worldwide if taken alone.  That doesn’t get us to nirvana, but it would be enough base deployment to facilitate other applications’ use of the data centers, which could then promote those applications, and that would bootstrap us to the necessary level of deployment.  If you believe in 5G, which I do for “arms race” reasons, then it’s going to happen in some form.  The trick would be making sure that the form that happens is a driver for carrier cloud and not just an abstract change in the RAN.

One other thing to consider is the combinatory value of the carrier cloud computing services and IoT drivers, particularly if they’re combined with a consumer target and maybe even consumer vCPE.  Home control, after all, could be framed as an IoT application as long as we don’t get religious about demanding all the sensors be directly on the Internet.  It’s not a major step from home control to home financial management, home photo management, and so forth.  Thus, those applications kind of have a foot in multiple doors.  Could you target them collectively?

You could.  Verizon rolled out FiOS in a cherry-picking way, focusing on the areas where they had the highest probability of earning early return on infrastructure.  You could do carrier cloud the same way, providing that you had credible services with provable opportunities and that you had a very strong marketing plan to promote them.

All my modeling and conversations with operators converge on the point that to make NFV successful, to make it into something other than a niche approach to business services, you have to build out carrier cloud at enough scale to enable a cascade of other applications that can exploit but not justify the cloud.  So doing that is critical, and it’s going to either mean betting on 5G and mobile infrastructure or constructing a more complicated service set around a combination of cloud hosting and IoT.  That’s why I’ve been saying that I think 5G may be the critical NFV driver.

That would seem to spell success for players who are incumbent in the mobile space, but operators tell me that none of these mobile incumbents are really swinging for the carrier-cloud bleachers.  Instead, they’re bunting by aiming at very limited mobile missions.  The opportunity is still there for any of the full-spectrum NFV players to step up and claim the space.  Which might mean claim the market.

Can Rackspace Reinvent Itself as a Private Company?

Rackspace knows a lot about the cloud.  Maybe they know more than the pundits do, and very possibly more than the consortium of investors (led by Apollo Global Management) that are taking them private.  The question now is whether they know more than those who think that the managed cloud services space is a great, latent, independent opportunity.

For at least two decades, the world of technology has been driven far more by hype than by reality.  Arguably this started with the Frame Relay Forum, which was essentially a marketing promotion activity hiding under the label of “standards group”.  In most cases, including that of frame relay, there was at least some real substance underneath the covers, but the hype tended to blur things so much it was hard to ferret it out.  The cloud is like that today.  There is no question that the cloud has substance (no pun intended) and in fact it could develop into the IT advance of the age.  There’s so much hype, though, that the market often responds to the hype more than to the opportunity itself.

Rackspace was a hosting company when the two-decade hype-dominated period I just referenced began.  It transformed itself into a cloud company as the cloud wave exploded, but here I think the hype started to catch up with it, and the going-private step is just the latest reaction.

Being a cloud provider of any sort is putting yourself in a vise.  On the one hand, cloud services are attractive to the extent that they’re cheap—cheaper than traditional purchased IT devices.  On the other hand, you have to make a profit to stay in business, so you have to sell the cloud for more than it costs you.  If your clients are big companies, you can expect their own economies of scale (which rise along an Erlang C curve: slowly at first, then more quickly, then plateauing) to nearly match yours.  How do you respond?
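The Erlang-style economy-of-scale argument can be made concrete.  This is a hedged illustration using the standard Erlang C formula (computed via the usual Erlang B recursion); the server counts and the 5% wait-probability target are arbitrary choices for the sketch, not figures from the text.

```python
def erlang_c(servers, load):
    """Probability an arrival must wait (Erlang C), computed through the
    numerically stable Erlang B recursion; load is offered traffic in
    Erlangs and must be below the server count."""
    b = 1.0
    for k in range(1, servers + 1):
        b = load * b / (k + load * b)  # Erlang B recursion
    return servers * b / (servers - load * (1.0 - b))

def max_utilization(servers, wait_target=0.05):
    """Highest per-server utilization that keeps P(wait) under the target,
    found by a simple scan over offered loads.  This is the economy-of-scale
    effect: bigger pools sustain higher utilization at the same service level."""
    best = 0.0
    steps = 1000
    for i in range(1, steps):
        load = servers * i / steps
        if erlang_c(servers, load) <= wait_target:
            best = load / servers
    return best

for n in (2, 10, 50, 200):
    print(n, round(max_utilization(n), 2))
```

Running the scan shows sustainable utilization climbing steeply with pool size and then flattening out at the high end, which is exactly why a big enterprise’s in-house economies of scale can come close to matching a cloud provider’s.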

Managed cloud services, meaning professional services offered to facilitate cloud adoption.  These can admit a company to the sacred field of cloud computing without requiring it to invest in massive infrastructure in competition with others.

Professional services and managed cloud could work, but they face an early and serious challenge—marketing.  You can’t sell managed cloud door-to-door as if it were lawn service.  That’s particularly true with SMBs, because the SMB market is a very large number of prospects with a small sales value each.  Imagine knocking on a couple million doors.  No, the customer has to come to you because of marketing, and where Rackspace fell down was in its ability to stay in the public eye.  Amazon got all the good ink, and companies like IBM were able to leverage their brand and account presence.

Well, that explains why the Rackspace strategy didn’t take off.  Their stock peaked in early 2012 and has generally trended downward as “real” cloud momentum built.  But taking a company private is a bet that they’re seriously undervalued and you can turn things around.  Is that true?  Actually, it might be.

Like most tech companies, Rackspace has undervalued marketing.  The hype around things like the cloud makes it seem to vendors that there’s some great ecosystemic tidal wave sweeping buyers toward inevitable adoption, and nothing need be done but toss a windmill in the stream and use it to drive a press to print money.  “Build it and they will come,” as the saying goes.  Had Rackspace been smart in marketing they could probably have built a business to rival Amazon.

That was then.  What about today?  Well, the truth is that the cloud is kind of old news.  Every day, reporters have to write stories people will click on.  “The Cloud Saves You Money” might have worked 20 years ago, but it’s been said too many times now to generate much interest.  If the only thing that Rackspace and its new investors do is better marketing against the traditional cloud messages, I don’t think they’ll succeed.

Obviously, then, there would have to be other things that might make this deal sensible, and there might be two of them.  One is the systemic change to cloud usage that we’re now starting to see, and the other is the potential entry of a bunch of new cloud aspirants, some of whom might be happy to do some M&A.

IaaS stinks as a business model, and even Amazon knows and is proving that.  There’s minimal differentiation possible, and because all you’re doing is hosting a VM you’re not displacing much in the way of opex.  The problem is that PaaS in the true model (Azure, for example) isn’t as versatile in addressing opportunity.  What’s emerging as the alternative is creating an ad hoc PaaS by adding a set of web services to IaaS.  These web services, accessible to applications that are built to link to their APIs, can provide horizontal or vertical tools to build cloud-centric or cloud-specific applications.  Because customers pay for their use, they add to provider revenue and that’s a direct benefit.
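As a sketch of that pattern, here is a hypothetical composition of “foundation services” behind one application step.  The service names, operations, and the stub transport are all invented for illustration and correspond to no real provider API; real providers expose their own SDKs for the same kind of per-use, metered services.

```python
class FoundationServices:
    """Thin client for hypothetical provider 'foundation services'.
    `transport` is any callable (service, operation, payload) -> dict, so the
    real HTTP/SDK layer—or a test stub—can be plugged in."""
    def __init__(self, transport):
        self.transport = transport

    def call(self, service, operation, payload):
        # Each call is metered by the provider: usage becomes revenue.
        return self.transport(service, operation, payload)

def handle_sensor_reading(svc, reading):
    """One application step composed entirely from provider web services:
    device-registry lookup, rule evaluation, time-series storage."""
    device = svc.call("iot-registry", "lookup", {"id": reading["device_id"]})
    verdict = svc.call("rules", "evaluate",
                       {"device": device["id"], "value": reading["value"]})
    svc.call("timeseries", "append",
             {"device": device["id"], "value": reading["value"]})
    return verdict

# A stub transport standing in for the provider's hosted API:
def stub_transport(service, operation, payload):
    if service == "iot-registry":
        return {"id": payload["id"], "type": "thermostat"}
    if service == "rules":
        return {"alert": payload["value"] > 30}
    return {"ok": True}

svc = FoundationServices(stub_transport)
print(handle_sensor_reading(svc, {"device_id": "t-17", "value": 35}))
# prints {'alert': True}
```

The design point is that the application logic itself is trivial; the value lives in the composed services, which is why each call adds to provider revenue and why the pattern amounts to an ad hoc PaaS rather than bare IaaS.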

The indirect benefit for a managed-cloud provider is that the web services can facilitate application-building for customers, for specialized VARs and resellers, or even for the cloud-MSP itself.  For example, Amazon and Microsoft, both of whom offer these services in their clouds, include tools to build IoT applications.  These kinds of tools could be expanded to provide a much bigger slice of application functionality, which would make them even more attractive to VARs or SMB users.  From these tools, Rackspace could even build a kind of IoT shell application that could be customized as needed.

The challenge for Rackspace, as a “cloud services provider” rather than a cloud hosting company, is that they really don’t have a great place to put such services.  They do have servers for hosting, but not the kind of structure optimal for hosting web services.  They resell other providers’ public cloud services.  That reduces their chances of profiting directly from hosting web-service enhancements or specializations the way Amazon or Google or Microsoft can.  Thus, the goal has to be developing superior skills that buyers will value highly.  Those skills are credible only if we step beyond IaaS to web services.

However, leveraging skills doesn’t always provide a fast path to profit.  The challenge of marketing remains, and so does the challenge your own cloud-host partners could pose if they decided to get into the same space.  Which, if it’s a good one, they should.  That’s what raises the second issue, of selling yourself off.  To exit going private, you can either go public again at a profit, or sell out.

The telcos are the only players in all the world who actually have a path to building massive distributed clouds that could fully open the opportunity for new-style cloud applications.  They’d be crazy not to want to exploit those kinds of applications to build value on their clouds, which is why I think “carrier cloud” is the real goal for telcos and not “SDN” or “NFV”.  Exiting to a telco could be a very smart play for Rackspace, providing that Rackspace builds the essential skills to validate the carrier cloud—then sings the song beautifully and builds brand recognition there.

Here, the problem is that telcos already have favored integrators—the integration arms of their big equipment providers and particularly Ericsson, who has been almost a professional services company with limited equipment sidelines.  Why would telcos want to buy a telco professional services company when so far they’ve been content to hire one?  Rackspace would have to answer that question the only place it can be answered—in the media via marketing and positioning and singing pretty.

That brings us full circle, doesn’t it?  Rackspace didn’t push itself effectively, and still doesn’t.  They now need to develop a new asset to push, and expertise in “fog-cloud” applications would surely qualify.  They can then position that new asset and make themselves attractive as a target.  All of this is possible, but it’s going to take a lot of enlightenment to drive it.  Do they have that?  Look at Dell, who went private.  With carrier cloud the largest single server opportunity out there, has Dell been revolutionary in their offerings or positioning?  Not by my standards for sure.  Can Rackspace do better?  We’ll have to wait and see.