What Are We Missing About “Multi-Cloud?”

What’s the thing (well, one of the things) I’m sick of hearing?  It is that, as Michael Dell and others have said recently, “It’s definitely a multi-cloud world.”  Why am I sick of it?  Two reasons.  First, because it’s never been anything else, and the fact that this is only now being recognized is a sad commentary on the industry.  Second, because the statement has no value to planners, only to publications that want to sell ads based on story clicks.  We’ve missed the technical point by focusing on glitz…as usual.

It’s always nice to put things into a financial perspective.  Global total IT spending is about a trillion dollars a year.  I ran the cloud through my modeling process five years ago, and it churned out a prediction that the migration of current applications to the cloud could never exceed 23% of that total.  Today, total cloud spending is about a tenth of that 23%, and more than half of it isn’t business applications at all; it comes from web companies serving the consumer market.

What this says is that unless you believe that enterprises have scrapped over 95% of their current IT and gone back to high stools and green eyeshades, the cloud has not displaced the data center.  Nor will it, ever.  Given that, the data center will always be a “private cloud” supplemented by a “public cloud”, which makes it both a hybrid cloud and a multi-cloud.

The application view of the same picture creates a similar result.  What are the top reasons for enterprise use of the cloud?  Resiliency and scalability.  If I want to use the cloud as a backup resource, to replace something that’s broken or to scale something that’s overloaded, where does the original application live?  In the data center, by long odds.  Thus, users expect the public cloud to look like an extension of the data center, which is a multi-cloud environment.

Even if you want to say that “multi-cloud” is multiple public cloud providers, that kind of vision is the explicit goal of almost three-quarters of all enterprises I’ve talked with.  Most feel that way because they don’t want to be “locked in” to a single provider, but the second-place answer is that they believe that they would find the “optimum” provider for different geographies or different applications to be…well…different.

These are all “why?” reasons to say that multi-cloud is the de facto approach.  There’s also a “why not?” reason, meaning that there is a set of technology requirements and trends that would tend to erase any distinction among multiple clouds, which raises the question of why you wouldn’t want to adopt that model.  We met one already—users want to be able to move applications and their components freely to wherever they need to be hosted.  There are more, and in particular one giant one.

The largest use of public cloud services for enterprises today is as a front-end for business applications.  The public cloud hosts web and mobile elements of applications, and it can spin up another instance to replace or supplement what’s there.  Public cloud providers know this and have offered a lot of support for these applications in the form of web services.  They are now offering a set of web services aimed at what’s being called “serverless computing”.  The right kind of component (a functional process or “lambda” or a microservice, depending on the cloud provider) can be run on demand anywhere, with no reserved resources at all.  Wouldn’t “anywhere” logically mean “in any cloud or in the data center?”  You can’t believe in serverless without believing in multi-cloud.
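
To make that concrete, here’s a minimal sketch of what such a stateless component might look like.  The function name, the pricing logic, and the AWS-style wrapper signature are purely illustrative; nothing here is any provider’s actual offering.

```python
# A minimal sketch of a "serverless-friendly" component: a pure, stateless
# function.  It reads nothing from and writes nothing to local state, so any
# cloud (or the data center) can run it on demand and discard it afterward.
# All names and the pricing logic are hypothetical.

def price_quote(event: dict) -> dict:
    """Compute a quote from the request alone; same input, same output."""
    items = event.get("items", [])
    subtotal = sum(i["unit_price"] * i["quantity"] for i in items)
    tax = round(subtotal * event.get("tax_rate", 0.0), 2)
    return {"subtotal": subtotal, "tax": tax, "total": round(subtotal + tax, 2)}

# Each provider wraps the same logic in its own handler convention; the
# business logic itself stays portable.
def lambda_handler(event, context):          # AWS-style signature
    return price_quote(event)

print(price_quote({"items": [{"unit_price": 4.0, "quantity": 2}], "tax_rate": 0.05}))
```

Because the component carries no local state, where it runs is purely a placement decision, which is exactly the property that makes “anywhere” plausible.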

OK, hopefully this all demonstrates that anyone who looked at the cloud seriously and logically would have concluded from the first that multi-cloud was where things had to go.  What about my second reason?

If you dip into multi-cloud drivers and requirements, what you see is a vision of the cloud as a kind of seamless compute fabric.  You want to run something?  You make some policy decisions that include QoE, price, security, and so forth, and you deploy.  None of your options can have its own unique deployment and lifecycle requirements, because the differences would make operationalizing the picture impossible.  What do you do, then?  Answer: You rely on the principles of abstraction and virtualization.

In IaaS services, a “host” is a virtual machine.  The services from different public cloud providers or different cloud stacks for the private cloud differ from each other in their management, but they’re all supposed to run things the same way.  That property should be even more apparent in serverless computing.  In effect, what cloud users want is a kind of “virtual cloud” layer that’s above the providers, describing component hosting and connectivity in a uniform, universal way.  This is what we should have realized we needed from the first, and might have realized had everyone recognized that multi-cloud was where we’d end up (which they should have).
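
As a thought experiment, such a layer might accept a uniform descriptor like the sketch below and map it onto whatever provider or data-center stack you choose.  Every field, name, and URL here is hypothetical; the point is only the shape of the abstraction.

```python
# A hypothetical, uniform description of hosting and connectivity that a
# "virtual cloud" layer might accept and then map onto AWS, Azure, Google,
# OpenStack, or the data center.  All names and values are illustrative.

deployment = {
    "component": "order-frontend",
    "image": "registry.example.com/order-frontend:1.4",
    "hosting": {
        "model": "serverless",                 # or "vm", "container"
        "placement_policy": ["lowest-latency-to:us-east", "cost-ceiling:0.02/hr"],
    },
    "connectivity": {
        "exposes": [{"port": 443, "protocol": "https"}],
        "connects_to": ["order-backend"],      # resolved per-cloud by the layer
    },
    "lifecycle": {"scale": {"min": 0, "max": 20}, "health_check": "/status"},
}

def deploy(descriptor, provider_adapter):
    """The virtual-cloud layer hands the same descriptor to any adapter."""
    return provider_adapter.realize(descriptor)
```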

We also need to be thinking about how “serverless” computing is represented at the functional level, as well as how various cloud provider web services are represented.  If you want something to be portable, you’d also like for it to be able to take advantage of service features where they’re available, or to limit hosting options to where you can get them.  That suggests a middleware-like tool that’s integrated with the virtualization layer to allow developers to build code that dynamically adapts to different cloud frameworks.  If we had all of that, then multi-cloud would be a giggle, as they say.

The frustrating thing about this one-two combination of insightless cloud promotion is how much it’s probably cost us.  We still don’t have a realistic picture of what a true multi-cloud architecture would look like.  We don’t have a software development framework that lets enterprises or software houses serving enterprises build the optimum software.  Who was the innovator that launched functional/lambda/microservice serverless computing?  Twitter.  Even today, more than two years after a Twitter blog described their model, most enterprises don’t know about it, what it could mean, or how they should plan to use it.

This has infected areas beyond the enterprise.  NFV kicked off in a real sense in 2013, so the Twitter blog came along a couple years after the start.  Have we fit the model, which supports what’s approaching ten billion sessions per day, into NFV to make it scalable?  Nope.  Probably most people involved, in both the vendor and operator communities, don’t even know about the concept.

Nor do they know that Amazon, Google, IBM, and Microsoft now all have serverless options based on that original Twitter concept.  The efforts by network operators and network vendors to push networking into the cloud era are falling further behind the players who have defined the cloud era.  This may be the last point in market evolution where network operators can avoid total, final, irreversible, disintermediation.  NFV will not help them now.  They have to look to Twitter’s and Google’s model instead.

Can Service Providers Really Win With an API Strategy?

Everyone loves to talk about APIs, so much so that you could be forgiven for thinking that they were the solution to all of the problems of the tech world.  I did a straw poll, and wasn’t too surprised to find that only about 40% of network professionals had a good grasp of what APIs were and could do, and that almost 20% couldn’t even decode the acronym properly.  All this, and yet there are many who say that APIs are the key to monetizing service provider networks, or the key to new services, or both.  Are we leaping to API conclusions here, based on widespread misunderstanding?  To answer that we have to dig into that “what-are-they-and-what-do-they-do” question.

API stands for “Application Program Interface”, and the term has been used for decades to describe the way that one program or program component passed a request to another.  The concept is even older; many programming languages of the 1970s supported “procedures” or “processes” or “functions” that represented semi-independent and regularly used functionality.  And even in the 1960s and the days of assembler-language programs, old-timers (including me) were taught to structure their logic as a “main routine” and “subroutines”.

All this shows that first and foremost, APIs are about connecting modular components of something.  Back in the older times, those components were locally assembled, meaning that an API was a call between components of the same program or “machine image” in today’s cloud terms.  What happened in the ‘70s is that we started to see components distributed across systems, which meant that the APIs had to represent a call to a remote process.  The first example of this was the “remote procedure call” or RPC, which just provided a middleware tool to let what looked like a local API reference connect instead to a remote component.  Web services and Service Oriented Architecture (SOA) evolved from this.

The Internet introduced a different kind of remote access with HTTP (Hypertext Transfer Protocol) and HTML/XML.  With this kind of access, a user process (a browser) accessed a resource (a web page) through a simple “get” and updated it (if it was a form that could be updated) with a “post”.  This kind of thing was called “Representational State Transfer” or REST.  Most procedure calls are “stateful” in that they are designed to transfer control and wait for a response; RESTful procedures are stateless and the same server can thus support many parallel conversations.
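
To make the contrast concrete, here’s a small sketch of the RESTful pattern using Python’s standard library against a placeholder URL: the client “gets” a representation of a resource and “posts” an update, and the server holds no conversation state between the two calls.

```python
# A sketch of stateless REST access; example.com and the order fields are
# placeholders, not a real service.

import json
import urllib.request

BASE = "https://example.com/api"

def get_order(order_id: int) -> dict:
    """GET: retrieve the current representation of the resource."""
    with urllib.request.urlopen(f"{BASE}/orders/{order_id}") as resp:
        return json.loads(resp.read())

def update_order(order_id: int, fields: dict) -> int:
    """POST: send an updated representation; each request stands alone."""
    req = urllib.request.Request(
        f"{BASE}/orders/{order_id}",
        data=json.dumps(fields).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status
```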

So we have APIs that can represent remote processing functions or resources.  Why is this so hot?  The reason is that businesses that have processing functions or resources (cloud providers, web providers, and network operators) could sell access to these functions/resources.  For network operators in particular, there’s been a theory that selling access to network services/features through APIs instead of selling traditional connection services could be a new business model.  Some pundits think that exposing all the network and management/operations features through APIs might be a significant revenue source.  Could it?

Well, it depends.  You can sell a service feature profitably if there are buyers and if the price the buyers are willing to pay generates a profit for you even after you factor in any loss of revenue created by having the features either create competition for or displace higher-level services you sell.  In other words, would others leverage the stuff you sell through APIs to either compete with you broadly, or replace a composite offering you make with a cheaper set of piece-parts?  We can see this in two examples.

In our first case, Operator A exposes operations services via an API.  These provide for robust service ordering, billing, and customer care.  A startup operator might never be able to establish these services on their own, but could they add them to a bare network and create a credible competitive offering?  Yes, they could.  Thus, the cost of the services delivered through the API would have to factor in this risk, and that might end up pricing them out of the market.

In our second case, Operator A exposes a simple message service among sites via an API.  A customer who purchases connectivity services could take this message service and use it to carry transactions, which might allow them to replace the connectivity services.  Unless the message service was priced high enough to offset the lost connectivity revenue, the result would be a net loss to the operator.

The point here is that the most likely way for APIs to pay off is if they represent new capabilities rather than exposing old ones.  In the latter case, there will always be some risk that the exposure will in some way threaten the services that contributed the capabilities in the first place.

APIs that represent new services open a question, not of just what the APIs should look like but what the new services really should be.  An example is IoT.  Should an operator build a complete IoT service, or provide a set of low-level features for sale, enabling third parties to turn those features into a complete set of retail offerings?  In short, should the operator use APIs to create retail-model or wholesale-model services?

The “classic wisdom” (which, as regular readers of my blog will know, I contend isn’t wisdom at all) has been that operators should fall into a wholesale API model and expose their current service components.  In other words, get a fillet knife and cut off pieces of yourself for OTTs to eat.  The smart money says that operators have to get quickly to new features to expose, new service components, and then make a retail/wholesale decision based on the nature of the element.

IoT represents the best source of examples for that smart-money choice.  Operators could look at the entire IoT event-to-experience food chain, and formulate an architecture to host key processes.  They could then see how much work it would take to turn that into a retail service, what the revenue potential might be for that service, and whether there would be a risk that others might pick a better retail service choice to fund their own deployment of the basic processes.

IMHO, operators should look at something like IoT to frame a vision of an event-driven, context-enhanced, service future.  That would give them a retail outlet, one that might have enough profit potential to significantly reduce the infrastructure investment operators would have to make before they saw enough revenue to break even and then show a profit; that up-front investment is “first cost” in carrier parlance.  They could then, with retail value established there, expand at the retail level where they had market opportunity to exploit, and at the wholesale level where others could do it better.

The value of this approach is clear; you have a specific service target and revenue opportunity with which to justify the deployment of servers and software.  The problem is nearly as clear; you need to convince the operators of a linkage, and that’s something I think vendors would ordinarily be expected to do.  They’ve not done it yet, particularly in NFV, and while vendors fiddled, the operators were focusing on an open-source solution.  Today, five times as many operators think NFV will emerge from open-source projects like ONAP as think it will come from vendors.  That’s bad because it would be very difficult to get the right architecture out of an open-source project.

It’s not that open-source isn’t a player.  Most of the technology that will shape the kind of service-centric software infrastructure I’ve described comes from open-source.  What doesn’t is the glue, the organized middleware tools and application notes, and that will require a unique marriage of software and network expertise.  I don’t doubt that most vendors have the right individual skills somewhere, and that it exists in the open-source community, but the combining of the skills is going to be a challenge, particularly in development activity that has to start from scratch.

So, are APIs over-hyped?  Surely; they are not a source of opportunity but rather a step in realizing a revenue model from new service features.  In that role, though, they are very important, and it’s worth taking the time to plan an API strategy carefully—once you have planned the underlying services even more carefully!  A gateway into a useless, profitless service isn’t progress.

Who’s the Biggest Force for Network Technology Change and Why?

It’s always popular to talk about who’s going to lead the next big step in something.  In networking these days, you might look to Cisco’s Robbins, for example.  I have my own candidate, one many of you may never have heard of.  It’s Ajit Pai.

Pai is the new Chairman of the FCC, and like all Federal commissions, the FCC is changing leadership and tone with the change of the party controlling the Presidency.  Under the previous Democratic Chairman, Wheeler, the FCC took dramatic steps to impose net neutrality.  Pai is already dedicated to relaxing those rules.  On May 23rd, the FCC took the mandatory first step of issuing a Notice of Proposed Rulemaking (NPRM) that outlines what the FCC is looking to do, which is essentially to reverse the Wheeler FCC decision to declare the Internet a telecommunications service subject to full FCC regulation.  This could make a radical change in the business of the Internet, and a similarly radical change in infrastructure.  We’ll see what that might look like, and what’s driving my “could” qualifier, below.

There’s not much point in doing a deep analysis of the regulations at this point because there are still steps to be taken, and the final order probably won’t come along until well into 2018.  However, one of the key differences between the positions of the two FCC party factions is the issue of paid prioritization and settlement on the Internet.  So, let’s not try to handicap the outcome of the FCC’s current action.  Let’s also forego the question of whether this is “good” or “bad” for an open Internet.  Instead let’s look at what specific technology impacts we might see were the FCC to reverse the policy on prioritization and settlement.

Way back in the ‘90s, I was involved with the then-CTO of Savvis, Mike Gaddis, on an RFC to introduce settlement to ISPs.  This was obviously in an earlier and less polarized time, and many in the Internet community believed that for the Internet to prosper as a network you had to introduce some QoS, which can’t happen if each ISP bills its own connection customers and keeps all the revenue they gain.  Why prioritize when you’re not paid?  I still think this principle is a good one, and in any event, it opens a great avenue to discuss the technology implications of changes in neutrality policy.

The thing we call “the Internet” here, of course, is virtual.  In the real world, the Internet is a federation of operators, and it’s this fact that makes the whole QoS thing important, and difficult.  Remember that we started this discussion with a “suppose…”  Just because the US creates QoS and settlement within this community of operators doesn’t mean everyone does.  We still might have places where there is neither settlement nor QoS, and the more such places exist, the harder it would be to totally eclipse private networks globally.  But let’s carry on with our supposition to see what else could happen.

Suppose that you could ask for specific QoS from the Internet and get it?  There would be two impacts, one a leveling of business services into an Internet model, and the other the popularization of QoS by its extension into the consumer market.  Both could be significant, but the second could be seismic.  In fact, the changes that Chairman Pai may be contemplating would change the business structure of the Internet, perhaps taking such a long step toward establishing a rational business framework that it would reverse the pressure on capex.

With full QoS on the Internet, the notion of VPNs now separates from the network and focuses instead on edge devices and the SD-WAN model.  SD-WANs can manage the prioritized services and request priority when needed, balancing traffic between best-efforts services and various levels of priority.  SD-WANs can also add security to the picture, creating what is much closer to being a true “virtual private network” than just using a subset of Internet addresses would.
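
To illustrate the kind of decision logic that implies, here’s a toy sketch of how an SD-WAN edge might choose between best-efforts and a purchasable priority class.  The application names, handling classes, and prices are all invented for the example.

```python
# A toy per-flow handling decision for an SD-WAN edge, assuming Internet QoS
# could be bought per class.  Policy entries and prices are invented.

POLICY = {
    "voip":       {"class": "low-latency", "max_price_per_gb": 0.05},
    "video-conf": {"class": "low-latency", "max_price_per_gb": 0.03},
    "backup":     {"class": "best-efforts"},
}

def select_handling(app: str, offered_prices: dict) -> str:
    """Return the handling class to request for a new flow."""
    rule = POLICY.get(app, {"class": "best-efforts"})
    wanted = rule["class"]
    if wanted == "best-efforts":
        return wanted
    # Fall back to best-efforts if the priority class is too expensive.
    if offered_prices.get(wanted, float("inf")) <= rule["max_price_per_gb"]:
        return wanted
    return "best-efforts"

print(select_handling("voip", {"low-latency": 0.04}))   # -> low-latency
print(select_handling("voip", {"low-latency": 0.09}))   # -> best-efforts
```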

Consumer QoS, in either the subscriber-initiated form (premium handling subscriptions or the “turbo button”) or provider-paid (Netflix or Amazon paying for premium delivery) would mean that QoS would have to be a much broader capability, touching many more users and impacting much more traffic.  It’s likely that this would drive operators to seek the most effective way of offering QoS, especially since consumer price tolerance would be lower.  Thus, prioritization and settlement could boost things like fiber, agile optics, and SDN virtual wires.

Prioritization doesn’t mean that you don’t still have the current model, but I’m sure many would argue that ISPs would all collude to make best-efforts services unavailable or so bad that they were useless.  Well, we have best-efforts now and that’s not the case.  Just being able to charge for special handling doesn’t eliminate all other handling options.

QoS and settlement would tend to favor larger operators with either a lot of reach or with market power to enter into agreements with other operators.  Regulations aimed at preventing that would end up looking much like common carrier regulations, and if the FCC is getting us settlement and QoS by declaring that Internet services are not common carrier services, those additional regulations to prevent large-operator dominance might be hard to impose.  However, the experiences I had myself, and those of others still involved in brokered peering, suggest that the ISPs overall would be happy to adopt an open brokered peering strategy, where everyone could do QoS peering at designated points.

All of this, of course, depends on there actually being a demand for Internet QoS.  If there were no consumer demand, then operators would obviously have no incentive to offer it even if regulators allowed for it, because operators would lose money on a business switch from MPLS VPNs or Ethernet VLANs to SD-WAN Internet VPNs.  Research suggests that in order for consumer QoS to pay, it’s essential that a “third-party payment” mechanism be validated.  If Netflix can charge customers for premium delivery, then settle with the ISPs for the QoS, there’s a very strong chance that this can all work.  The “turbo button” approach has much less appeal.

The Wheeler FCC took the position that no paid prioritization was acceptable.  The Genachowski FCC (before Wheeler) said that it was OK if consumers paid.  What Pai may end up with is that any form of prioritization is OK as long as it’s non-discriminatory, meaning everyone can pay for it if they want, and pay based on the same pricing structure.  That’s because the absence of Title II common carrier status for the Internet means that the FCC has no jurisdiction to regulate pricing or pricing policies there.  That’s what the Federal Appeals Courts told the FCC, which is why we ended up with Title II status to begin with.  Thus, we are probably heading for a prioritization and settlement decision that would lift all barriers, creating the largest business impact on operators and vendors, and potentially the largest technical impact as well.

In the near term, the prioritization-and-settlement policy would, as I noted above, reduce the pressure of price/cost crossover for operators.  That would likely open up capital budgets, raising revenues for network equipment vendors.  The increased spending would also be directed mostly at currently validated infrastructure and devices, meaning that it wouldn’t immediately result in a flood of SDN or NFV spending.

NFV would benefit from the SD-WAN process, but only in the limited premises-hosted vCPE model that we already see dominating.  Operators now realize that even if you had multiple features to deploy (firewall, SD-WAN, etc.) you would almost certainly elect to use a composite image for all the current features rather than service-chain multiple separately hosted features.  The latter approach would cost more in hosting, and generate more delay.  It would also raise operational complexity considerably; a two-host chain is twice as complex as a single-host image to deploy and sustain operationally.

If you believe the operators, though, the relaxation in profit pressure that prioritization and settlement would create would further both SDN and NFV innovation.  The operators recognize that anything that’s Internet-related and consumer-driven is going to be subject to price pressure, which means that it will have to be cost-managed carefully.  Operators tell me that both SDN and NFV innovation would be accelerated by the regulatory shift, but that these would not be the first or primary focus points.

What would be?  Number one is service lifecycle automation.  The nice thing about the prioritization and settlement shift is that it would allow operators to undertake a change in their service management practices without the pressure of creating an immediate return in terms of cost reduction.  Operators know, of course, that Internet prioritization is not constant so much as on-demand and episodic, driven by content viewing.  That means it has to be invoked and removed quickly and cheaply.

The problem with this area is that operators really don’t have a solid strategy.  Most of their automation vision comes from pieces of SDN and NFV, and neither were designed as full-range lifecycle automation projects or based on advanced cloud principles.  Not all operators (in fact, less than half) accept the need to frame automation on advanced cloud principles, but nearly all know that they have to cover the whole of the service lifecycle and the full range of operations tasks.

The second focus area is carrier cloud service-layer deployment.  Operators are coming to realize that their best long-term strategy is to mimic the OTTs in framing higher-level (meaning non-connection) services, but they have struggled with how to get started, both in targeting terms and in infrastructure terms.  I think it’s likely that the second problem needs to be solved in a way that delivers an agile, generally capable, infrastructure model that they can then trial-target as they build up confidence.

The problem in this area is obvious; they don’t have that model of infrastructure, they don’t know how to get it, and no vendor seems to be offering it.  NFV and SDN make sense in a context of an increasingly cloud-centric infrastructure model, but neither can really drive operators there.  They can only exploit.

The third focus is SDN and NFV, which operators have not abandoned but rather simply re-prioritized.  Even that comment may be, on my part, reading motive into what they’ve expressed.  I think that operators know that both SDN and NFV will play a big role in their future, but they’re coming to realize that, as I noted above, both will matter mainly as ways to exploit the cloud to do more with legacy services as their infrastructure planning becomes more cloud-centric.  In short, though I don’t think any operator planner would say this, they see themselves migrating to a more Google-like service-centric infrastructure model that they’ll simply run some legacy stuff on for continuity.

How many operators really see this?  I can’t say, of course, but I have fairly good contact with 57 of them at the moment, and three or perhaps four would see things as I’ve just described.  But that’s not really the question.  The question is how many would buy in were they to be offered a pathway to that sort of future.  I think all of them would.

There are some missing pieces in all this happy realization, not the least being that while operators may be willing to step into the future, there’s still no pathway to be had.  The problem isn’t a hardware problem but a software problem, and it’s not strictly operations software or even service lifecycle management or MANO or SDN controllers.  What’s really needed is what in the software world is called middleware.  The future has to be built on software that’s designed to be infinitely agile and scalable.  Google, Amazon, and Microsoft all know that now, and Google in particular has been framing their infrastructure to support the agile model we’re talking about.  So, can operators follow?

No.  Operators don’t have the kind of software people to do it, because they’ve not recognized they need them.  Even vendors don’t have a lot of the right stuff, but they do have enough to make something happen here.  It’s critical for vendors that they do that, because open source projects aren’t going to get us to the right place quickly enough.  Chairman Pai is going to give the industry a gift, a gift of time.  But it’s not going to last forever.

Does Microsoft’s CycleComputing Deal Have Another Dimension?

They say that Microsoft is acquiring CycleComputing for its high-performance computing (HPC) capabilities, combating Amazon and Google.  They’re only half-right.  It is about combating Amazon and Google, but not so much about HPC.  It’s mostly about coordinating workflows in an event-driven world.

Traditional computing is serial in nature; you run a program from start to finish.  Even if you componentize the program and run pieces of it in the cloud, and even if you make some of the pieces scalable, you’re still talking about a flow.  That is far less true in functional computing and even less in pure event-driven computing, and if you don’t have a natural sequence to follow for a program, how do you decide what to run next?

Functional computing uses “lambda” processes that return the same results for the same inputs; nothing is stored within that can alter the way a process works from iteration to iteration.  This is “stateless” processing.  What this means is that as soon as you have the input to a lambda, you could run it.  The normal sequencing of things isn’t as stringent; it’s a “data demands service” approach.  You could almost view a pure functional program as a set of data slots.  When a process is run or something comes in from the outside, the data elements fit into the slots, and any lambda functions that have what they need can then run.  These could fill other slots, and so the process continues till you’re done.
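
Here’s a tiny sketch of that “data slots” idea.  The slot names and lambdas are invented; the mechanism (run whatever has its inputs, let its output fill more slots) is the point.

```python
# Lambdas declare the slots they need; each runs as soon as those slots are
# filled, possibly filling further slots and triggering more lambdas.

slots = {}
lambdas = [
    {"needs": ["order", "price_list"], "fills": "priced_order",
     "fn": lambda order, price_list: {**order, "total": price_list[order["sku"]]}},
    {"needs": ["priced_order"], "fills": "invoice",
     "fn": lambda priced_order: f"Invoice total: {priced_order['total']}"},
]

def fill(name, value):
    """Put a value in a slot, then run every lambda whose inputs are ready."""
    slots[name] = value
    for lam in lambdas:
        if lam["fills"] not in slots and all(n in slots for n in lam["needs"]):
            fill(lam["fills"], lam["fn"](*[slots[n] for n in lam["needs"]]))

fill("price_list", {"widget": 10})
fill("order", {"sku": "widget", "qty": 1})
print(slots["invoice"])   # -> Invoice total: 10
```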

This may sound to a lot of people who have been around the block, software-wise, like “parallel computing”.  In scientific or mathematical applications, it’s often true that pieces of the problem can be separated, run independently, and then combined.  The Google MapReduce query processing from which the Hadoop model for big data emerged is an example of parallelizing query functions for data analysis.

Event-driven applications are hardly massive database queries, but they do have some interesting parallelism connections.  If you have an event generated, say by an IoT sensor, there’s a good chance that the event is significant to multiple processes.  A realistic event-driven system would trigger all the applications/components that were “registered” for the event, and when those completed they could be said to generate other events that would be similarly processed.

In a true event-driven system you can’t sequence events as much as contextualize them.  Events generate other events, fill data fields, and trigger processes.  The process triggers, like the processes in our functional example, are a matter of setting conditions associated with what the processes need before they run.  You don’t ask for five fields in sequence; you generate an event as each one arrives, and when they’re all in you do what you wanted to do with the data.

This is very much like parallel computing.  You have this massive math formula, a simple example of which might be:

A = f(x(f(y)))/f(z)

This breaks down into three separate processes.  You need f(z), f(y), and f(x(f(y))).  You can start on your f(z) and f(y) when convenient, and when you get f(y) and the value of x you can run that last term and solve for A.  The coordination of what runs in parallel with what, and when, is very much like deciding what processes can be triggered in an event-driven system.  Or to put it another way, you can take some of the HPC elements and apply them to events.
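
Here’s a sketch of that coordination using Python futures, with placeholder functions standing in for f and x: f(y) and f(z) start in parallel, and the final term runs as soon as its inputs exist.

```python
# Parallel evaluation of A = f(x(f(y)))/f(z); f and x are stand-ins.

from concurrent.futures import ThreadPoolExecutor

def f(v): return v * v          # placeholder for an expensive function
def x(v): return v + 1          # placeholder for another function

y, z = 3, 4
with ThreadPoolExecutor() as pool:
    fy = pool.submit(f, y)                               # start f(y)
    fz = pool.submit(f, z)                               # start f(z) in parallel
    last_term = pool.submit(lambda: f(x(fy.result())))   # needs only f(y) and x
    A = last_term.result() / fz.result()

print(A)   # f(x(f(y)))/f(z) = f(10)/16 = 100/16 = 6.25
```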

If you go to CycleComputing’s website and then on to “Key Features,” you find that besides the mandatory administrative features, the only other feature category is workflow.  That’s what’s hard about event processing.

I’m not saying that big data or HPC is something Microsoft could just kiss off, but let’s face it, Windows and Microsoft are not the names that come unbidden to the lips of HPC planners.  Might Microsoft want to change that?  Perhaps, but is it possible that such an attempt would be just a massive diversion of resources?  Would it make more sense to do the deal if there was something that could help Microsoft in the cloud market overall?  I think so.

Even if we neglect the potential of IoT to generate massive numbers of events, I think that it’s clear from all the event-related features being added to the services of the big public cloud providers (Amazon, Google, and Microsoft) that these people think that events are going to be huge in the cloud of the future.  I think, as I’ve said in other blogs, that events are going to be the cloud of the future, meaning that all the growth in revenue and applications will be from event-driven applications.  I also think that over the next decade we’ll be transforming most of our current applications into event-driven form, making events the hottest thing in IT overall.  Given that, would Microsoft buy somebody to get some special workflow skills applicable to all parallel applications?

In fact, any cloud application that is scalable at the component level could benefit from HPC-like workflow management.  If I’ve got five copies of Component A because I have a lot of work for it, and ten of Component B for the same reason, how do I get work from an Instance of A to an Instance of B?  How do I know when to spawn another instance of either?  If I have a workflow that passes through a dozen components, all of which are potentially scalable, is the best way to divide work to do load-balancing for each component, or should I do “path-flow” selection that picks once up front?  Do I really need to run the components in series anyway?  You get the picture.
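
The difference between those last two routing choices is easy to show in miniature; the components, instances, and load figures below are invented, and a real system would also have to account for loads shifting while work is in flight.

```python
# Two ways to route a workflow across scaled-out components: balance at each
# hop, or pick a whole path once up front.  Loads are invented numbers.
import itertools

instances = {
    "A": {"A1": 0.2, "A2": 0.7},   # component -> instance -> current load
    "B": {"B1": 0.9, "B2": 0.3},
}

def per_hop(workflow):
    """Load-balance independently at each component."""
    return [min(instances[c], key=instances[c].get) for c in workflow]

def path_flow(workflow):
    """Pick the end-to-end path with the lowest total load, once."""
    candidates = itertools.product(*(instances[c].items() for c in workflow))
    best = min(candidates, key=lambda path: sum(load for _, load in path))
    return [name for name, _ in best]

print(per_hop(["A", "B"]))    # ['A1', 'B2']
print(path_flow(["A", "B"]))  # ['A1', 'B2'] here; the answers diverge as loads shift
```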

We’ve had many examples of parallel computing in the past, and they’ve proven collectively that you can harness distributed resources to solve complex problems if you can “parallelize” the problems.  I think that the cloud providers have now found a way to parallelize at least the front-end part of most applications, and that many new applications like IoT are largely parallel already.  If that’s true, we may see a lot of M&A driven by this same trend.

Does AT&T’s Digital Life Prove There’s No Life in Digital?

The Street report that AT&T is considering the sale of its Digital Life division should have a lot of telco transformation people on edge.  This is the division that handles consumer offerings like home security, long seen as the basis for any shift of a network operator into non-connection services.  Is it not working?  Worse, AT&T is a poster-child for SDN/NFV transformation at the infrastructure level.  Is that transformation then not producing what’s needed to support a shift to services beyond connection?  If so, then this could be very bad news.  The question is just what the “news” really is.

AT&T has been spending a lot on M&A, most notably and recently the still-pending-approval deal with Time Warner, but earlier the DirecTV deal.  The media deals make AT&T the largest pay-TV provider.  In contrast, the Digital Life stuff is about six-tenths of a percent of AT&T revenues, and AT&T decided to sell off DirecTV’s home security business when it did the acquisition.  On the surface, it looks like the home security and even home services space doesn’t merit operator attention.  Verizon dropped its own offering several years ago, remember.

It’s always difficult to get an official reason why a given service idea seems to be heading for the trash.  In the case of home security, some of my operator friends have been willing to comment off the record.  They say that there are three reasons for the problems with home security as a service.  First, incumbent competition.  Second, low ARPU.  Finally, unfavorable cross-contamination of other services.  Let’s take them one at a time.

Most of the people who read this blog probably have a home security system.  Most upscale developments include them at least as an option, and many communities have 100% penetration.  The homes are wired when built, or by the first security firm that comes in.  The homeowner will then go back to that company for changes in the system, including the inevitable repairs for the sensor pieces.  Increasingly, the aftermarket for systems is supported by wireless models that involve self-installation by the homeowner.  Data suggests that this is a down market segment, with less revenue potential overall.

The problem here is that unless you want to try to be a pure wireless-self-install player, you need to have installation services.  Operators generally contract these out, which means there is effectively no profit for them in the installation.  Since the operators’ names aren’t household words in security services, they have to advertise heavily to even get a play, and that means that, given the zero-profit installation, the initial sale probably won’t even pay back marketing costs for several years.

The revenue side is a big issue in other ways.  Most of the money in security systems comes from the monitoring process.  Operators obviously have call centers, so they in theory should be able to monitor the home sensors and act, but their costs for this have typically run well above the costs of independent security firms.  Some of my contacts told me that if they matched monitoring prices with incumbent firms, their profits on monitoring would be about half what they’d like, and well below the profits of those incumbents for the same services.

Perhaps the biggest issue is the downward price pressure coming into the market.  The operator contacts I’ve listened to on this tell me that their customers are not the high-end users in most cases, but perhaps a bit below mid-market.  This space is already under price pressure from increased competition, and if strike prices for services continue to fall, operators are in another market where profit declines seem baked in.  Your customer gets worse every day.

In more ways than one, perhaps.  People are way more likely to get rabid over a problem with a security system provider than even one with their Internet or TV.  There are inevitably callbacks with home security, often decades after the sale.  Many of them don’t result in incremental revenue, and if the operator has contracted for installation they’d likely have to contract for some of this stuff.  The rest would end up going to that call center where operators already have higher base costs.  In short, it’s going to be hard to provide quality support.

What happens if they don’t?  Sure they could drop the security service, but how many customers do you suppose won’t threaten their provider with loss of the whole relationship?  So, for a service you might make a minimal profit on, you could be risking the whole bundle.  Let me see…little ARPU upside, big customer loss downside…why did I think this was a good idea?

Probably because you thought that “moving up the food chain” from connections to OTT services would be easier.  Perhaps it looked like a technical problem, or (if you read the tech news) a political one within your own organization.  Apparently, it’s not that easy.  The truth is that what makes Google or Facebook or Amazon winners isn’t just that they offered something over the top.  It was because they offered something unique in the market.  You don’t find those niches by going out to look for services others now sell that you could also sell.

The reason this stuff is relevant is that the concept of NFV is almost totally dependent on virtual CPE, which in turn can’t be a broad-based service if you can only sell it to businesses.  You could argue in favor of consumer vCPE provided you could add some service kickers for it.  The services of security (firewall) and DNS/DHCP are already present in under-fifty-buck home gateways.  At best, operators would have to give them away, and that assumes they could even justify cloud-hosting features that can be purchased that cheaply.  What services would be credible to consumers beyond those gateway services?  Obviously, home monitoring and security would be on top of the list, which is why the hint that those services can’t be profitable enough is critical.

However, it’s not NFV that’s the problem here, only vCPE, and that’s a problem for the same reason home security as an OTT service is a problem.  It aims at stuff already being done, and all of that stuff is very likely to pose the very same challenges as home security does.  NFV is only threatened to the extent that it relies on “basic” vCPE, which unfortunately it probably does way too much.  If NFV wants to ride the vCPE train, they’d need something that is unique.

SD-WAN, in a form that links the edge elements (usually boxes today, but often cloud components, and easily translated into virtual network functions for NFV) to internal service features for added capability and differentiation, is an easy answer.  If operators linked SD-WAN with vCPE they could create an offering that had real sticking power.  They’d also reduce the risk that they’ll lose customers to Internet VPNs, a likely outcome of their current (non-) strategy.  Versa follows this general model in their relationships with CenturyLink, Comcast, and Verizon, but I think it could be tied better with infrastructure-level services.  And, in any event, SD-WAN is still a connection service with a very limited (business) appeal.  The Internet took us out of the age where business services dominated.  SD-WAN can ease operators out of traditional connection services, but they have to know what they’re easing into.

You could take a similar view of home security and monitoring.  Why would operators elect to jump in and go head-to-head with incumbent providers in a market that’s facing declining prices already?  Don’t offer customers the same thing they already have or can get elsewhere at a bargain price.  Offer something unique.  Tie in external sensors and analytics to predict security risks as they develop.  Correlate multiple sensor inputs to help define what’s likely happening.  Correlate alerts in nearby homes, and IoT sensor information.  Think about what advanced technology, applied by operators at massive scale, could do for home monitoring.  It beats scrabbling in the market dust for a few tenths of a point of profit margin.

Agility is what this means, pure and simple.  You have to be able to frame new services to meet market opportunities, not to try to catch up with the competition.  The whole value proposition for things like SDN and NFV and even service automation is tied to agile response to market opportunity, because even cost control is just a short-term way of getting a payback for an agility and automation investment.  That means that firms need to be looking at a reasonable platform for delivery of OTT services, one that can be reused and exploited.  SDN and NFV can be part of that platform, but they’re not the whole story.

What we’ve learned in the last two decades is that what users want from broadband isn’t connectivity, it’s information and experiences.  “Climbing the OSI stack” to add connection functionality isn’t a long-term answer.  In fact, these kinds of services are really best as means of translating current services to exploit carrier cloud.  If you don’t have carrier cloud to exploit, then you don’t have the best growth medium for things like SDN and NFV.

Google built its network to deliver services.  They’re totally open about its structure.  Maybe the network operators should take a look at it.

What Should We Expect from Controllers and Infrastructure Managers?

One of the key pieces of network functions virtualization (NFV) is the “virtual infrastructure manager” or VIM.  In the E2E ETSI model for NFV, the VIM takes instructions from the management and orchestration element (MANO) and translates them to infrastructure management and control processes.  One of the challenges for NFV implementation is just what shape these instructions take and just how much “orchestration” is actually done in MANO versus in the VIM.  To understand the challenges, we have to look at the broader issue of how services as abstractions are translated to infrastructure.

A service, in a lifecycle sense, is a cooperative behavior set impressed on infrastructure through some management interface or interfaces.  Thus, a service is itself an abstraction, but the tendency for decades has been to view services as a layer of abstractions, the higher being more general than the lower.  Almost everything we see today in service lifecycle management or service automation is based on an abstraction model.

The original concept probably came from the OSI management standards, which established a hierarchy of element, network, and service management.  It’s pretty clear that the structure was intended to abstract the notion of “service” and define it as being a set of behaviors that were first decomposed to network/administrative subsets, and finally down to devices.  This was the approach used by almost all router and Ethernet vendors from the ‘90s onward.

If we presume that there’s a service like “VPN” it’s not hard to see how that service could be first decomposed by the administrative (management) domains that were needed to cover the scope of the service, and then down to the elements/devices involved.  Thus, we could even say that “decomposition” is an old concept (even if it might have gotten forgotten along the way to new developments).

The Telemanagement Forum (TMF) largely followed this model, which became known as “MTNM” for Multi-Technology Network Management.  An implicit assumption in both the old service/network/element hierarchy and the MTNM concept was that the service was a native behavior of the underlying networks/devices.  You just had to coerce cooperation.  What changed the game considerably was the almost-parallel developments of SDN and NFV.

SDN networks don’t really have an intrinsic service behavior that can be amalgamated upward to create retail offerings.  A white-box switch without forwarding policy control sits there and eats packets.  NFV networks require that features be created by deploying and connecting software pieces.  Thus, the “service behaviors” needed can’t be coerced from devices, they have to be explicitly created/deployed.  This is the step that leads to the abstract concept of an “infrastructure manager”.

Which is what we should really call an NFV VIM.  All infrastructure isn’t virtual; obviously today most is legacy devices that could still be managed and service-coordinated the old way.  Even in the future it’s likely that a big piece of networks will have inherent behavior that’s managed by the old models.  So an “IM” is a VIM that doesn’t expect everything to be virtual, meaning that on activation it might either simply command something through a legacy management interface or deploy and connect a bunch of features.  In SDN, an IM is the OpenFlow controller, and in particular those infamous northbound interfaces (NBIs).

It’s comforting, perhaps, to be able to place the pieces of modern network deployment and management into a model that can also be reconciled to the past.  However, we can’t get cocky here.  We still have issues.

I can abstract a single management interface, at a low level.  I can abstract a high-level interface.  The difference is that if I do abstraction at a low level, then I have to be able to compose the service myself, and issue low-level commands as needed to fulfill what I’ve composed.  If I can abstract at a high level, I have the classic “do-job” command—I can simply tell a complex system to do what I want.  In that case, though, I leave the complexity of composition, optimization, orchestration, or whatever you’d like to call it, to that system.

This is a natural approach to take in the relationship between modern services and OSS/BSS systems.  Generally, service billing and operations management at the CRM level depend on functional elements, meaning services and meaningful, billable, components.  Since billable elements are also a convenient administrative breakdown, this approach maps to the legacy model of network management fairly well.  However, as noted, this supposes that there’s a sophisticated service modeling and lifecycle management process that lives below the OSS/BSS.

That’s not necessarily a bad thing, because we’ve had a pretty hard separation between network management and operations and service management and operations for decades.  However, having two ships-in-the-night operations processes running in parallel can create major issues of coordination in a highly agile environment.  I’m not saying that the approach can’t work, because I think it can.  I am saying that you have to co-evolve OSS/BSS and NMS to make it work through a virtualization transition in infrastructure.

The thing that seems essential is the notion of a service plane separate from the resource plane.  This separation acknowledges the way operators have organized themselves (CIOs run the former, and COOs the latter), and it also acknowledges the fact that services are compositions built from resource behaviors.  The infrastructure has a set of domains, a geographic distribution, and a set of technical capabilities.  These are framed into resource-level offerings (which I’ve called “behaviors” to separate them from the “service” elements), and the behaviors are composed in an upward hierarchy to the services that are sold.

Infrastructure managers, then, should be exporters of those “behaviors”.  You should, in your approach to service modeling, be able to ask an IM for a behavior, and have it divide the request across multiple management domains.  You should also be able to call for a specific management domain in the request.  In short, we need to generalize the IM concept even more than we’re working to generalize it today, to allow for everything from “do-job” requests for global services to “do-this-specifically” requests for an abstract feature from a single domain.
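
Here’s a hedged sketch of what that generalized IM might look like as an interface.  The class, behavior, and domain names are purely illustrative and aren’t drawn from any standard; the point is that one request form covers both the “do-job” and the “do-this-specifically” cases.

```python
# An infrastructure manager that exports named "behaviors" and accepts either
# a broad request (it decomposes across its domains) or a domain-pinned one.

class Domain:
    def __init__(self, name):
        self.name = name
    def realize(self, behavior, params):
        # In real life this would drive OpenFlow, OpenStack, a legacy EMS, etc.
        return f"{behavior} realized in {self.name} with {params}"

class InfrastructureManager:
    def __init__(self, domains):
        self.domains = {d.name: d for d in domains}
    def behaviors(self):
        """Advertise what this IM can do, not how it does it."""
        return {"vpn-segment", "host-component"}
    def request(self, behavior, params, domain=None, scope=None):
        if domain:                                   # "do-this-specifically"
            return [self.domains[domain].realize(behavior, params)]
        targets = scope or self.domains.keys()       # "do-job": IM decomposes
        return [self.domains[d].realize(behavior, params) for d in targets]

im = InfrastructureManager([Domain("us-east"), Domain("eu-west")])
print(im.request("vpn-segment", {"bw": "10M"}))                         # whole job
print(im.request("host-component", {"image": "fw"}, domain="eu-west"))  # pinned
```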

But we can’t dive below that.  The basic notion of intent modeling demands that we always keep a functional face on our service components.  Behaviors are functional.  Service components are functional.  In the resource domain, they are decomposed into implementations.

I do think that the modeling approach to both service and resource domains should be the same.  Everything should be event-driven because that is clearly where the cloud is going, and if service providers are going to build services based on compute-hosted features, they’re darn sure not going to invent their own architecture to do the hosting and succeed.  The cloud revolution is happening and operators first and foremost need to tap it.  Infrastructure management and controller concepts have to be part of that tapping.

How an Event-Centric Cloud Model Might Influence the Edge Devices

If we assume that the notion of an event-driven cloud is correct, we have to ask ourselves what that cloud model would do to the way edge devices get information and content.  If the cloud is a new computing paradigm, does the paradigm extend to the edge?  How does it then impact the way we build software or deliver things?  The answers are highly speculative at this point, but interesting.

Right now, consumers and workers both tend to interact with information/content resources through a URL click.  This invokes a “connection”, a session, between the user and the resource, and that is sustained through the relationship.  In an event model, things would have to work differently, but to see why (and how they would then have to work) we’ll have to use an example.

Let’s say we have a smartphone user walking down a city street.  In a traditional model of service, the user would “pull” information from the phone, looking for a location or perhaps a retail store.  In an event-driven model the user might have information pushed to the device instead, perhaps based on “shopping habits” or “recent searches”.  Indeed, this sort of push relationship is the most plausible driver for wearables, since hauling the phone out to look at messages would be intrusive to many.

Making this sort of thing work, then, is at least a reasonable goal.  Let’s start with the notion of “push”, which would mean having events cast to the user, representing things that might warrant attention.  It’s easy to envision a stream of events going to the user’s phone, but is that really logical, optimal?  Probably not.

A city street might represent a source for hundreds or thousands of IoT “events” per block.  Retail stores might generate more than that, and then we have notifications from other users that they’re in the area, alerts on traffic or security ahead, and so forth.  Imagining tens of thousands of events in a single walk is hardly out of line, but it’s probably out of the question in terms of smartphone processing.  At the least, looking all that stuff up just to decide if it’s important would take considerable phone power.  Then you have the problem of the network traffic that sending those events to every user nearby would create.

Logically speaking, it would seem that event-based applications would accelerate the trend toward a personal agent resident in the cloud, a trend that’s already in play with voice agents like Apple’s Siri or Amazon’s Alexa or “Hey, Google”.  It’s not a major step from today’s capabilities to imagine a partner process to such an agent in the cloud, or even cloud-hosting of the entire agent process.  You tell your agent what you want and the agent does the work.  That’s the framework we’d probably end up with even without events.

What events do is create a value for in-cloud correlation.  If there’s a cloud agent representing the user then there’s a way of correlating events to create useful context, not just flood users with information like an out-of-control visual experience.  We can do, in the cloud, what is impractical in the smartphone.  Best of all, we can do it in a pan-user way, a way that recognizes that “context” isn’t totally unique to users.

Say our smartphone user is at a concert.  There’s little doubt that the thing that defines the user’s focus and context at that moment is the concert, and that’s just what is defining those things for every user who attends.  News stories also create context; everyone who’s viewing an Amber Alert or watching breaking news is first and foremost a part of the story those channels convey.

If there are “group contexts” then it makes sense to think of context and event management as a series of processes linked in a hierarchy.  For example, you might have “concert” as a collective context, and then perhaps divide the attendees by where they are in the venue, by age, etc.  In our walk-on-the-street example, you might have a “city” context, a “neighborhood” and a “block”.  These contexts would be fed into a user-specific personal-agent process.

I say “hierarchy” here not just to describe the way that contexts are physically related.  It would make sense for a city context to be passed to neighborhood contexts, and then on down.  The purpose of this is to ensure that we don’t overload personal-agent processes with stuff that’s not helpful or necessary.
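
One way to picture that hierarchy is as a chain of filters.  The sketch below uses invented context names and a trivially simple relevance rule: an event enters at the city level and only reaches a personal agent if every level along the way considers it relevant.

```python
# Hierarchical context filtering: drop events high in the tree so the
# personal-agent processes below never see what doesn't matter to them.

class Context:
    def __init__(self, name, children=None, agents=None):
        self.name = name
        self.children = children or []
        self.agents = agents or []
    def relevant(self, event):
        return self.name in event.get("scope", [])
    def handle(self, event):
        if not self.relevant(event):
            return                      # filtered here; nothing below sees it
        for agent in self.agents:
            agent(self.name, event)
        for child in self.children:
            child.handle(event)

alerts = []
personal_agent = lambda ctx, ev: alerts.append((ctx, ev["type"]))

block = Context("elm-street-100", agents=[personal_agent])
hood = Context("riverside", children=[block])
city = Context("metropolis", children=[hood])

city.handle({"type": "traffic-jam", "scope": ["metropolis", "riverside", "elm-street-100"]})
city.handle({"type": "store-promo", "scope": ["metropolis", "downtown"]})
print(alerts)   # only the traffic alert reached the block-level agent
```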

In this sort of world, a smartphone or PC user doesn’t need to access “the web” nearly as much; they are interacting with personal agent and context agents, which are cloud processes.  It’s pretty easy to provide a completely secure link to a single cloud process.  It’s pretty easy to secure cloud processes’ connections with each other, and to authenticate services these processes offer to other processes (if you’re interested in a “consensus” model, the original ExperiaSphere project documented its approach back in 2007).  Thus, a lot of the security issues that arise with the Internet today can’t really happen; all the identities and relationships are secured by the architecture.

This approach doesn’t define an architecture for context creation or personal agency, or the specific method for interconnection; those issues can be addressed when someone wants to implement the architecture.  The approach does define, in effect, the relationship between personal agent and user appliance.  It’s what the name suggests: agency.  In some cases, the agent might provide a voice or visual response, and in others it might do something specific.  Whatever happens, though, the agent is acting for the user.  We see that now with Amazon’s Alexa in particular; some people tell me that they talk to it almost as they would a person.

Which I think is obviously where we’re headed with all of this.  The more sophisticated our processing and information resources are, and the more tightly they’re bound to our lives, the harder it is to surmount artificial barriers created by explicit man-machine interactions like clicking a URL.  We want our little elves to be anthropomorphic, and our devices likewise.

The biggest trend driving networking today is the personalization of our interaction with our devices and the information resources those devices link us with.  The second-biggest trend is the growth in contextual information that could be used to support that personalization, in the form of events representing conditions or changes in conditions.  The biggest trend in the cloud is the shift in focus of cloud tools toward processing and exploiting these contextual and event resources.  The second trend, clearly, is driven by the first.

As contextual interpretation of events becomes more valuable, it follows that devices will become more contextual/event-aware themselves.  The goal won’t be to displace the role of the cloud agent but to supplement that agent when it’s available and substitute for it when the user is out of contact.  The supplementation will obviously be the most significant driver because most people will never be out of touch.

Devices are valuable sources of context for three reasons.  First, since they’re with the user, they can be made aware of location and local conditions.  Second, because the device can be the focus of several parallel but independent connections, it may be the best or only place where events from all of them can be captured.  Texting, calling, and social-media connections all necessarily involve the device itself.  Third, the device may be providing the user a service that doesn’t involve communications per se.  Taking a picture is an example for smartphones, as are movement of the device or changes in its orientation.  An example for laptops is running a local application, including writing an email.

The clearest impact of event-centric cloud processing is event-centric thinking in the smartphone.  Everything a user does is a potential event, something to be contextualized in the handset, in the cloud, or both.  Since I think that contextualization is hierarchical, as I’ve noted above, handset events would likely be correlated on the handset first.  The easy example is a regular change in GPS position coupled with the orientation shifts associated with walking or driving.  This combination lets the device “know” whether the user is on foot or in a vehicle.  You could also correlate the position with the locations of public transport vehicles to tell whether the user is driving or riding transit.  You can learn a lot, and that learning means you can provide the user with more relevant information, which increases your value as a service provider.
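As a toy illustration of that on-handset correlation, here’s a heuristic that combines the speed implied by successive GPS fixes with the orientation “bounce” of a carried phone.  The thresholds, the data shapes, and the function name are invented for illustration; a real handset would lean on its own sensor-fusion APIs and far better models.

```python
# Toy heuristic for on-handset correlation: speed implied by successive GPS fixes,
# plus the orientation "bounce" of a carried phone, suggests walking vs. riding.
# Thresholds and data shapes are invented for illustration only.

import math


def infer_mobility(gps_fixes, orientation_variance):
    """gps_fixes: list of (lat, lon, epoch_seconds); orientation_variance: unitless."""
    if len(gps_fixes) < 2:
        return "unknown"
    lat1, lon1, t1 = gps_fixes[0]
    lat2, lon2, t2 = gps_fixes[-1]
    # Rough flat-earth distance in meters; fine over the short intervals involved.
    meters = math.hypot(lat2 - lat1, (lon2 - lon1) * math.cos(math.radians(lat1))) * 111_000
    speed = meters / max(t2 - t1, 1)            # meters per second
    if speed < 0.3:
        return "stationary"
    if speed < 2.5 and orientation_variance > 0.5:
        return "walking"                        # slow, with the bounce of a carried phone
    return "in-vehicle"                         # fast, or smooth sustained motion


# Example: two fixes about 600 meters apart over 60 seconds, very little orientation change.
print(infer_mobility([(40.7128, -74.0060, 0), (40.7182, -74.0060, 60)], 0.1))  # "in-vehicle"
```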

The net of this is that devices, particularly smartphones, are going to transform to exploit cloud agency and contextual processing of events.  But even laptops will be affected, becoming more event-centric with respect to application context and social awareness.  We can already see this in search engines, and every step in its expansion offers users, workers, and businesses more value from IT.  It’s this value increase that will drive any increases in spending, so it’s important for us all.

Cisco’s Earnings and the Need for Aggressive Thinking on the Cloud

Cisco reported roughly in-line numbers for the quarter but the stock was down over 2% because it is still reporting a sequential decline in revenues.  Guidance was also at the low end of Street expectations, which further suggests a “no-improvement” scenario, and there was special Street concern for the fact that security products didn’t do as well as expected.  I’d guess you’d not be surprised if I said this wasn’t a good sign for networking.

Why does equipment revenue decline?  Because operator and enterprise capex is at least not growing, and overall is declining.  Why is that happening?  Because it takes new benefits to justify new spending, and buyers are focused on reducing their own costs.  Cisco and other vendors are cutting costs so their profits aren’t dropping with revenues.  Buyers are supposed to do something different?  Get real.

If you’re not surprised that I don’t think this is good for the industry, you won’t be surprised if I say that it’s hardly news.  Operators have told the Street that they’re cutting capex.  Enterprises don’t have that kind of long-term capital-project planning, but CIOs are telling me that every round of new network spending is focused on lowering costs overall, and if that can’t be done, then at least raising them as little as possible.  All of this is because ROI for network projects isn’t meeting internal guidelines for approval unless the cost is lower, not higher.  Lower cost, lower spending, lower vendor revenue.  QED (which, for those not blessed as I was with two years of high-school Latin, stands for “quod erat demonstrandum”, or in English “which was to be demonstrated”.  Or, if you like, “res ipsa loquitur”, “the thing speaks for itself”.  Don’t you love a practical education?)

I really feel like I’m in “Groundhog Day” here, as long as we’re doing quotes.  Yes, I have been saying that absent new benefits there cannot be new spending.  No, people don’t seem to be paying attention.  So I’m saying it again now, and offering some observations based on Cisco’s earnings call.

Vendors love to explain shortfalls as being due to market conditions rather than their own missteps.  Well, gosh, is it a surprise to them that they are in the market?  What conditions did they expect, and did they have anything other than blind hope that those conditions would come along?  If you dig past the usual trash in vendor comments, they are all saying, in effect, that they thought more traffic would drive more earnings for them, in the operator space and the enterprise space alike.  More bits means more bucks, and that’s true for vendors.  Not so for buyers.

The future is never a linear extension of the past.  Any technology, any business idea, has a logical lifespan beyond which its benefits no longer grow, and so no longer justify increased investment.  We are today depending on a notion of networking that goes back about 40 years.  What in tech has survived that long?  In 1974, the year TCP/IP arguably was born, a small computer was one that would fit in a 19-inch rack and cost thousands of dollars.  Today a computer a hundred times more powerful can be carried in your pocket.  So how could computing change so much and networking change so little?  It’s not logical.

To be fair, though, computing’s change was more quantitative than qualitative in networking terms.  Today’s systems are a lot faster, but they are still discrete devices that have fairly static relationships with networks.  A network that connects a multi-core smartphone and a network that connected a DEC PDP-8 still address endpoints the same way, and expect connectivity between those endpoints to be the essence of any service.  Cisco and others, perhaps, might be forgiven if they fall today into the same service-mission trap that was set four decades ago.

Can ignoring four years of static or declining revenue be forgiven, though?  Certainly the trend can’t be ignored any longer.  I saw the classic “profit-per-bit” compression and crossover slides in 2013, and so did a lot of other people.  Cisco now, perhaps more than other network equipment vendors, is ready to face the truth and push harder for “software and subscriptions” as a revenue source.  The question is whether this shift can really accomplish what Cisco needs.

All my modeling, and all the logic in the industry, suggests that networking and computing had a kind of push-pull relationship initially.  Computing created an information/content pool that, for a time, was bottled up by network infrastructure designed for low-speed voice and terminal traffic.  Cisco took advantage of that sudden excess of content waiting to be delivered and the insufficiency of delivery options.  Now the problem is that we need more from the compute side: processing resources to enhance the value of delivery again.  And it’s not obvious how that happens.

Corporate IT needs to reframe its network to support point-of-activity worker empowerment, creating what is effectively an event relationship with workers.  Consumer services need to be able to contextualize every consumer interaction to make them more valuable.  I know that Cisco knew there were at least some who said this a decade ago, because I told them.  They, like most of the industry, elected to stay the course of traditional networking.

Security is another example of short-term thinking.  Do we really think that network operators and enterprises will pay nearly as much to secure networks as they paid to build them?  Is the fact that, as Cisco said on its earnings call, IoT device security attacks are up 90% an indication that we need to spend gazillions of dollars on IoT security?  We need to be spending more on making networking intrinsically secure, not gluing remedies onto imperfection.

Cisco’s call says, in essence, that the data center is growing and the WAN is flat, and it correctly names the cloud as the reason.  However, where in the call does Cisco say it knows why the cloud is growing in importance?  It’s not because the cloud is cheaper for current applications, but because it’s the right platform for future applications.  There are a lot of computing changes between us and where we need to be in order to support those future applications at the server/software level.  That’s where Cisco, and other network equipment vendors, need to be.  Don’t expect the consequences of cloud expansion to win the game for you; expect to win it by driving that expansion directly.

Why a Model for Network-Computing Fusion is Important

After my blog on a model-driven service lifecycle management technique, I got a bunch of emails from operator and vendor contacts (and from some who’d never contacted me).  Part of the interest was driven by a Light Reading article that noted my skepticism about the way that ETSI NFV was being implemented.   Most was driven by interest in the specifics of the model that I think would work, and it’s that group that I’m addressing here.

My ExperiaSphere approach to NFV has been extensively documented HERE in a series of tutorials, and I’ve now added another tutorial to the list, this one cutting horizontally across the service-lifecycle-stage approach taken by the earlier tutorials.  I’ll be adding it to the ExperiaSphere tutorial list, but in the meantime the new presentation can be found HERE.

My goal in this extra tutorial is to link the event-centric evolution in public cloud services I’ve blogged about with the needs of network service lifecycle automation.  This was a principle of ExperiaSphere from the very first (which some of you may remember was in 2007), but the specific features of Amazon, Microsoft, and Google can now be used to explain things in terms that are relevant today.

The key point here is the same one I’ve made in the earlier blog I referenced, and that I made to Carol Wilson in the interview for the Light Reading piece.  We have to do NFV using the most modern technology available, which means “event-centric” cloud and data-model-driven portability of features.  We know that now because of the direction that all the major cloud providers are taking.
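For readers who want something concrete, here’s a minimal sketch of what “data-model-driven” lifecycle handling could look like: each element of a service model carries a state/event table that maps a state-and-event pair to a handler process, so lifecycle behavior is defined by the model rather than by monolithic management code.  To be clear, this isn’t ExperiaSphere code, and every name in it is hypothetical.

```python
# Sketch of a data-model-driven lifecycle element (not ExperiaSphere code): each
# model element carries a state/event table mapping (state, event-type) to a
# handler, so lifecycle behavior is defined by the model, not by monolithic code.

from typing import Callable, Dict, Tuple

Handler = Callable[["ModelElement", dict], str]   # a handler returns the next state


class ModelElement:
    def __init__(self, name: str, table: Dict[Tuple[str, str], Handler]):
        self.name = name
        self.state = "ordered"
        self.table = table

    def on_event(self, event: dict) -> None:
        handler = self.table.get((self.state, event["type"]))
        if handler:                               # unmatched events are simply ignored
            self.state = handler(self, event)


def deploy(elem: ModelElement, event: dict) -> str:
    print(f"{elem.name}: deploying on {event['target']}")
    return "active"


def heal(elem: ModelElement, event: dict) -> str:
    print(f"{elem.name}: redeploying after a reported fault")
    return "active"


# A hypothetical "vpn-edge" element whose lifecycle is entirely table-driven.
vpn_edge = ModelElement("vpn-edge", {
    ("ordered", "activate"): deploy,
    ("active", "fault"): heal,
})
vpn_edge.on_event({"type": "activate", "target": "cloud-zone-1"})
vpn_edge.on_event({"type": "fault"})
```

The point of the table is portability: if the handlers are stateless processes and the table travels with the model, any event-centric cloud platform can run them.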

We’ve built our concept of networking on the notion that a network connects addresses, which represent real and at least somewhat persistent things.  We’re entering an age where the concept of addresses, persistence, or even “things” is restrictive.  In the cloud, there’s no reason why features can’t migrate around to find work, rather than the other way around.  There’s no value to a specific resource, only to resources as a collection.  Users are transient things, and so are the services they consume.  This is the future, both of the cloud and of networking.

All of this was pretty clear a decade ago, and so were the challenges to promoting the vision.  I was a part of a group called the “IPsphere Forum” or IPSF, and it was probably the first initiative that saw services as things you composed on demand.  The founding force behind it was a vendor, which prompted the vendor’s arch-rival to try to torpedo the whole notion.  Operators jumped on it and worked hard to bring it to fruition, but they were defeated in part by regulations that forbade their “collusion” in operator-dominated standards work.  Above all of this was the fact that, at the time, socializing a concept this broad and different was difficult because most in the industry had never even thought about it.

They think about it now.  We’re now seeing the future of networking differently, in no small (and sad) part because we’re really seeing it in the cloud and not in the network.  In networking, everyone has focused on protecting their turf as market changes and competition threaten their profits.  There was no massive upside to connection services, no big cloud of new benefits to fight for.  Cloud provider competition, rather than trying to protect the status quo, is trying to advance the cloud faster into new areas.  Regulations don’t impair cloud providers.  Most of all, for the cloud we’re sitting on a trillion-dollar upside.

That upside could have gone to network providers, both services and equipment.  Networking was, and still is, the broadest of all tech industries in geography and “touch”.  It can tolerate low ROIs, and has plenty of cash to invest.  As an industry, networking has squandered a host of advantages it had in defining the fusion of network and computing that we call “the cloud”.  As an industry, networking has even largely squandered itself, because its future is now out of its own hands.

ExperiaSphere is my attempt to frame the future in at least a straightforward way.  Maybe it’s not something everyone will understand, but I think every network, IT, and cloud professional would.  I want to emphasize here that I’m not “selling” ExperiaSphere, or in fact selling anything related to it.  The material on the website is all open and public, available to anyone without attribution or fees.  I’m selling an idea, a vision.  Better yet for the bargain-conscious, I’m giving it away in these tutorials and my blogs.

This blog is posted on LinkedIn, and anyone who has a view on the issues I’ve raised can comment and question there.  There’s an ExperiaSphere Google+ community too and I’d be happy to take part in discussions there as well.  Do you share the vision I’ve cited, or do you have reasoned objections?  Let’s hear your views either way.

How the Cloud That’s Emerging Will Shape the Network of the Future

Have you noticed that in the last six months, we’ve been having more stories about “cloud” networking and fewer about SDN or NFV?  Sure, it’s easy to say (and also true, as it happens) that the media jumps off a technology once it becomes too complicated to cover or is discredited in terms of impact versus hype.  In the case of the cloud, which is older conceptually than both SDN and NFV, that can’t explain the shift.  What’s going on here?

One fairly obvious truth is that a lot of what has been said about the impact of SDN and NFV is really about the impact of the cloud.  SDN is highly valuable in cloud data centers, and SDN software is therefore a critical adjunct to cloud computing, but it’s the cloud computing part that’s pulling SDN through.  Without the cloud, we’d be having relatively little SDN success.  NFV, somewhat in contrast, has been assigned a bunch of missions that were in truth never particularly “NFV” at all.  Many were cloud missions, and that’s now becoming clear.

A truth less obvious is that underneath its own formidable burden of hype, the cloud is maturing.  There was never any future in the notion that cloud services would be driven by the movement of legacy apps from data center to cloud, but it wasn’t clear what would be driving them.  Now we know that the cloud is really about event-handling, and that most of the applications that will deliver cloud revenues to providers in the future aren’t even written yet, or are just now being started.

All of this demands that we rethink what “the cloud” is.  It’s not a pool of resources designed to deliver superior capital economies of scale.  It’s a pool of resources that are widely distributed, pushed out to within ten miles of almost every financially credible user and within forty miles of well over 99% of all users.  It’s about features, not applications, being hosted.  It’s about things that are cheaper because they’re rarely done and widely distributed when they are, not about centralized, traditional OLTP.

SDN and NFV are consequences of the “true cloud”, applications of it, and elephant-behind-the-curtain glimpses of the final truths of the cloud.  If we have what is almost a continuous global grid of computing power, we obviously need to think about connecting things differently, and we also have to start thinking about how to use that grid to simplify other distributed applications, of which networking is clearly one.  But if both SDN and NFV only glimpsed the truth of the cloud future, what would give us a better look?

Let’s start with SDN.  The notion behind SDN is that adaptive networks, in which the network’s own control protocols and its service protocols are one and the same, are restrictive.  Sure, they work if your goal is to provide connection services alone, but if we have this enormous fabric of computing out there, most of our connectivity is within the fabric and not between users.  SDN’s most popular mission today is creating extemporaneous private LANs and WANs for cloud hosting.  But SDN still focuses on connections; it just makes them less “adaptive” and more centrally controlled.  Is that the real solution?

Mobile networks kind of prove it isn’t.  We have this smartphone that’s hauled about on errands and business trips, and we have to adapt the networks mightily (via the Evolved Packet Core, or EPC) to let users sustain services while roaming around.  Even recent work on what we could call “location-independent routing” falls short of what’s needed.  Most cloud networking will depend on what we could call “functional routing”, where a packet doesn’t specify a destination in an address sense at all, but rather asks for some form of service or service feature.

A generated event may have a destination, but that’s more an artifact of how we’ve built event processing than of the needs of the event.  Current trends toward serverless (meaning functional, lambda, or microservice) computing demonstrate that we don’t have fixed hosting points for things, in which case we really don’t have fixed addresses for them either.  That’s what we need to be looking at for the cloud-centric future.
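A minimal sketch of what functional routing might look like at the software level appears below: the request names a function, not a destination address, and a resolver binds it at run time to whichever live instance is currently registered.  The registry, its methods, and the selection policy are all invented for this example; a real system would weigh locality, load, and cost rather than choosing at random.

```python
# Minimal sketch of "functional routing": a request names a function, not a
# destination address, and a resolver binds it at run time to a live instance.
# The registry and its selection policy are invented for this example.

import random
from typing import Dict, List


class FunctionRegistry:
    def __init__(self) -> None:
        self.instances: Dict[str, List[str]] = {}   # function name -> live endpoints

    def announce(self, function: str, endpoint: str) -> None:
        """A serverless instance (or the platform hosting it) advertises availability."""
        self.instances.setdefault(function, []).append(endpoint)

    def route(self, function: str) -> str:
        """Pick an instance; a real resolver would weigh locality, load, and cost."""
        candidates = self.instances.get(function, [])
        if not candidates:
            raise LookupError(f"no live instance of {function}")
        return random.choice(candidates)


registry = FunctionRegistry()
registry.announce("resize-image", "edge-pop-07:8443")
registry.announce("resize-image", "metro-dc-02:8443")

# The "packet" asks for a capability, not an address; binding happens at request time.
print(registry.route("resize-image"))
```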

Then we have NFV.  We build networks by connecting trunks through nodes.  Nodes are traditionally purpose-built devices like routers, and NFV was aimed at making a node into a software instance that could be hosted somewhere.  Where?  Today, the notion would be in general-purpose virtual CPE boxes on premises, or in a fairly limited number of operator data centers.  But if we have a global compute fabric in the cloud, does that make sense?

A network built from hosted software instances of routing functionality doesn’t differ all that much from one built using appliances.  Same trunks, same locations, since most operators would host their virtual functions in the same places they now house network devices.  The specific target for NFV was the non-connective appliances like firewalls and encryption elements, or embedded functions like IMS and EPC.  These features would almost surely be radically changed if we shifted from a user-connecting to a cloud-connecting mission.  Many of the things these appliances do wouldn’t be as relevant, or perhaps wouldn’t be relevant at all, because the focus would have shifted away from traditional “connection” services.

We are not, or should not be, trying to build today’s networks in a somewhat different way.  The cloud is already demonstrating that we’ll be composing services more than delivering them, and that the process of composition will render the communications needs of the future in a totally different way.  I’d bet you that engineers at Google have already started to work on the models of addressing and networking that the future will require.  I think it’s likely that Amazon and Microsoft are doing the same.  I’d bet that most network operators have done nothing in that space, and few network equipment vendors have either.

SDN and NFV were never transformative technologies, because technologies are not really transformers as much as they are enablers of transformation.  The cloud is much more fundamental.  Consider that software we ran decades ago would still run and be useful today; the model of computing has not changed, and that may be a big piece of why the model of networking has also been static.  Computing is now changing, and changing radically, and those changes are already unlocking new service models, because software processes are what the cloud changes fundamentally, and software processes are what create the services of the future.