What Tech Companies and Tech Alliances Tell Us About Tech’s Future

In the last couple of weeks, we’ve heard from three tech companies that could define how IT is done.  Ericsson represents the service-centric view of transformation, though in the network space.  HPE represents the agile side of the IT incumbent picture, and IBM the entrenched (some would say, ossified) side.  It’s worth looking at the picture they paint as a group, particularly in light of recent news that Berkshire Partners was buying MSP Masergy and that AT&T was partnering with both IBM’s SoftLayer and Amazon’s AWS.

If you look at the recent discourse from all of these companies, and in fact from most players in the cloud, IT, and networking space, you’d see a story that’s strong on the notion of professional services.  This is due in no small part to the fact that buyers of technology generally feel they’re less equipped to make decisions, and to implement them, than ever before.

In 1982, when we started to survey enterprises, about 89% of IT spending was controlled by organizations who relied on internal skills for their planning and deployment.  Nearly all (98%) of this group said that they were prepared for the demands of their IT evolution, and their top source of information was the experience of a trusted peer followed by key trade press publications.  Vendor influence was third, and third-party professional services were fourth.

In 2016, the numbers were very different.  This year only 61% of IT spending is controlled by companies who rely on their own staff, and of that group only 66% said they were prepared for the tasks before them.  Their top information resource was still the experience of a trusted peer, but the media had dropped to fourth, with second now being vendors and third being third-party services.

You can see from this picture that enterprises seem to be depending on others much more, which should be a boon to professional services and those who offer them.  In fact, professional services revenues in IT are up about 18% over the last decade.  Why, then, would Ericsson seem to be missing on its services bet?  Yes, part of it is that the network operators are less committed to professional services than enterprises, but is that all?

No.  The big factor is that technology consumers are generally less likely to want professional services from third parties.  They are happy to pay vendors for them, or to contract for software development or integration from a software company, but much less likely to consume anything from a third party.  The desire for an impartial player is strong, but stronger yet is the feeling that your service vendor will be better for you if they’re also a primary provider of hardware or software.

IBM and HPE are both primary providers of hardware and software, and they both have big expectations for professional services, but the industry-average growth rate per year would hardly send chills up the spine of the CFO or the Street.  What the enterprise surveys I’ve done show is that the movement of vendors from number three in influence to number two is driven by a change in the nature of IT spending.

IT spending has always included two components—one the budget to sustain current operations and the other the budget to fund projects to transform some of those operations.  The transformation can in theory be cost- or benefit-driven.  For the period from 1982 to 2004, the spending was balanced between these two almost equally—certainly never more than a 10% shift.  In 2005, though, we saw two changes.  First, sustaining-budget spending grew to account for about 64% of total spending and gradually increased from there to 69% in 2015.  Second, the target of the “project spending”, which until 2002 had been largely improved productivity, shifted that year to controlling costs.  “I want to do what I do now, but cheaper,” was the mantra.

It’s only logical that if that much focus is on just doing what you do today at lower cost, your need for professional services is subordinate to your need for cheaper components, and you probably favor the idea of vendors making their own analysis of your situation and recommending an alternative in the form of a bid.

The same logic drives users to adopt virtualization technology to reduce hardware costs, to look to automate their own design and application deployment processes, and to transfer to the cloud any work that’s not efficiently run in-house.  In some cases, where internal IT is expensive or seen as unresponsive, that could mean almost everything.  Since server/hardware spending is a big target, this creates systemic headwinds for all the IT players who sell it, including both HPE and IBM.

The challenge for businesses who embark on this kind of transformation, as opposed to simply looking for cheaper gear and software, is the lack of strong internal skills to support the process itself.  All forms of virtualization are likely to succeed, based on my survey data, to the extent to which they are no-brainers to adopt.  That can be because they’re a high-level service, or because the service provider/vendor does the heavy lifting.  Which is why “the cloud” is problematic as an opportunity source, in its current form.

What users want is essentially SaaS, or at least something evolved well away from bare-metal-like IaaS.  HPE’s approach to fulfilling that need seems to be focused on enabling partners to develop wide-ranging cloud solutions that would then be deployed on HPE hardware.  IBM seems to want to control the software architecture for hybrid clouds, by extending them out from the data centers they already control.  Both these approaches have risks that have already impacted vendors’ success.

HPE’s problem is the classic problem of partner-driven approaches, which is getting your partners to line up with your own business needs.  That’s particularly challenging if you look at the enterprise space as a vast pool of diffuse application demand.  What do you need to focus on?  Traditionally, the senior partner in these tech partnerships drives the strategy through marketing, but HPE doesn’t have inspiring positioning for its strategic offerings.  They need to, because as you’ll recall, buyers are now looking more to vendors for exactly this kind of guidance.

IBM’s problem is that the cloud isn’t really dominated by IT stakeholders, but by line stakeholders.  Those people aren’t part of IBM’s account team call-on list, and IBM doesn’t market effectively anymore so it can’t reach cloud influencers that way either.  Yet again, we see a company that is not reacting effectively to a major shift in buyer behavior.  Not only that, IBM has to establish relationships with stakeholders, and at company sizes, that it doesn’t cover with an account team.  They need marketing more than anyone.

How does this relate to Masergy?  Well, what is an MSP anyway?  It’s a kind of retail-value-add communications service provider that acquires some or all of the basic low-level bit-pushing service from a set of third parties, and adds in management features and functional enhancements that make the service consumable by the enterprise at a lower net cost.  Like SaaS, which can save more because it does more, managed network services can save users in areas related to the basic connection service, enough to be profitable for the seller and cheaper for the buyer overall.

What better way to deal with the declining technical confidence of buyers?  Masergy isn’t boiling the ocean, service-target-wise.  They find one service, appealing to one class of buyer (enterprises) and typically consumed in a quantity large enough to be interesting on an ARPU basis.  They then address, in framing that one service, the very internal technical skill challenges that make strategic advances in tech hard for the buyer.  That’s an approach that makes sense, at least to the point where you’ve penetrated your total addressable market.  Enterprises don’t grow on trees.

The AT&T/Amazon-and-IBM deal could be more interesting, if more challenging.  One similarity with Masergy is obvious; the market target is largely the same.  Yes, you can have SMBs on either IBM’s cloud or Amazon’s, but the real value of the deal to both the cloud guys and to AT&T is to leverage the enterprise customer, who already consumes a lot of AT&T network services and who could consume a lot of cloud too.  AT&T could even add security and other features using NFV technology.   In many respects, these deals would add another higher-layer service set, not unlike the MSP approach.

Not in all respects, of course.  AT&T seems to be the out-front retail player in most of these deals, and AT&T’s NetBond cloud-integration strategy covers pretty much the whole cloud waterfront, which means neither Amazon nor IBM has an exclusive benefit and AT&T has the potential for classical channel conflict—who do they partner with on a preferred basis for things like IoT, a focus of the deals?

The big difference between the Masergy model and the AT&T model is that point about easing the burden of meeting technical challenges with your own staff.  Masergy simplifies the problem through bundling, but it’s difficult to bundle business services with networks because business services are delivered through applications that vary almost as much as the businesses do.  The cloud is not a solution, it’s a place to host one, which makes AT&T’s NetBond partnership paths harder to tread.  We’ll have to see if they can make it work.

Will the AT&T/TW Deal Offer Operators Another Pathway to Profit?

The deal between AT&T and Time Warner has yet to be approved but it seems pretty likely to happen.  In any event, just the attempt is big news for the industry.  Many see this as a new age for media companies, but there are many questions about whether a conglomerate like this is the answer to telco woes or just a slowing of the downward spiral.

The theory behind the “this is the age of conglomerates” position is that 1) owning content and distribution makes you powerful and 2) Comcast has already done this with NBCUniversal.  The second proves the first, I guess.  Let’s leave that aside for a moment and examine the question of whether the parts of the deal are solid, and whether the whole is greater than the sum of those parts.  We’ll then move on to “what it means for networking.”

Content is indeed king these days, and TV or movie content in particular, but it’s a king facing a revolution.  The big factor is mobility and the behavioral changes that mobility has generated.  People are now expecting to be connected with their friends in a social grid almost anytime and anywhere.  We all see this behavior daily; people can’t seem to put their phones down.  Yes, this does create some interest in viewing content on that same mobile device, but that’s not the big factor driving change.

The first real driver is that if your day is a bunch of social-media episodes punctuated by occasional glimpses of something else, then you’re probably less interested in 30- or 60-minute TV shows, and theaters frown on using phones during movies.  It’s also harder to socialize a long TV program because not everyone can get it and because you really only want to share a specific moment of the show anyway.  Mobile users live different lives, and thus need different content paradigms, perhaps even different content.  YouTube or SnapChat or Facebook video is a lot closer to their right answer than broadcast TV or going to a movie.

The second driver is a negative-feedback response to the first.  The most coveted consumers are the young adults, and as more of them move to being focused online the advertising dollars follow them.  Television offers only limited ad targeting, and as a result of the mobile wave TV has been losing ad dollars.  To increase profits, the networks have to sell more ads and reduce production costs, which is why a very large segment of the viewing population (and nearly all that coveted young-adult segment) think that there are too many commercials on TV and that the quality of the shows is declining.

What this means is that buying a network or a studio isn’t buying something rising like a rocket, but something gliding slowly downward.

That may not be totally bad if your own core business is gliding downward faster.  The online age has, perhaps by accident, killed the basic paradigm of the network provider—more bits cost more money and so earn more revenue.  With the Internet in wireline form, there’s no incremental payment for incremental traffic generated, and the revenue generated by a connection isn’t proportional to speed of connection.  Operators have been putting massive pressure on vendors to reduce their cost of operations since they don’t have a huge revenue upside.

Developments like SDN and NFV were hoped-for paths to address cost management beyond just beating vendors up on price, but operators don’t seem to be able to move the ball without positive vendor support, and even the vendors who’d stand to gain from a shift from traditional network models to SDN or NFV don’t seem to be able to get their act together.  The vendors who disappoint operator buyers the most are the computer vendors, who should be all over transformation and who seem mired in the same quicksand as the other vendors.

Why are vendors resisting?  Because they want to increase their profits too, or want an easy path to success and not one that puts them at risk.  That attitude is what’s behind the AT&T/TW deal, and it was also the driver for Comcast’s acquisition of NBCU.  New revenue is a good thing, and buying it is safer than trying to build it, even if what you’re buying also has a questionable future.

Both Comcast and now AT&T have two decisions to make, more complicated ones than they had before their media buy.  The first question is whether they try to make up their network-profit problems with media profits, and the second is whether they try to fix their acquired media risk or their current network risk.

If the future profit source is media, then the goal on the network side is just to prevent losses.  You minimize risk, and you also minimize incremental spending and first cost.  You answer the second question decisively as “I bet on media!”  The network is a necessary evil, the delivery system and not the product.

If the future profit source is the network and media is intended to provide transitional coverage on the profit side, then you have a big question on the priority front.  Do you push hard for network profit improvement now, when you still have air cover from your recent media deals, or do you work on media to extend the period when media can profit you as you need it to?

This all comes home to roost in the issue of AT&T’s ECOMP.  It’s one of the two dominant carrier architectures for next-gen networking, and since AT&T has promised to open-source ECOMP and Orange is trialing it in Poland, it’s clearly gaining influence.  When AT&T’s future depended completely on network profitability, ECOMP was probably the darling of senior management.  The question now is whether it will remain so, not only for AT&T but for other operators.

Could the TW deal be viewed as an indication that AT&T isn’t confident that ECOMP will fix the contracting profit margins created by declining revenue per bit?  It could be, though I don’t think that’s the dominant vision.  TW represents a plum that AT&T would hardly want to go to a rival, right?  (Think Verizon, rival-wise).  But if network profits could rise quickly, would AT&T care about that as much as they’d care whether other operators were leveraging the same architecture as they are?  If they cared about that, would they open-source their solution?

I don’t think AT&T will intentionally shift its focus to advancing media and forget ECOMP.  I’d estimate that TW would represent only about 15% of the revenue of the combined company and just a bit more of the profit, so TW can’t fill the revenue-per-bit hole for very long.  However, the TW deal could spark a run of M&A by telcos, and everyone I know on the Street tells me that you don’t do a big acquisition then let it run by itself.  Management focus follows M&A, particularly big M&A.

Could this deal slow SDN and in particular NFV?  Yes, absolutely, though candidly NFV has been more than capable of stalling its own progress without outside help.  It could also help focus proponents of a new architecture model for networks, focus them on a realistic business case and a realistic pathway to achieving it.  It could even accelerate interest in a higher-level service model as the driver of that architecture, because content and content delivery is a higher layer.

A final note: there aren’t enough big media companies out there to allow every telco or cableco to buy one.  Those who can’t will have an even bigger problem than before, because other network providers with media-boosted bottom lines will be available for investors.  When any M&A frenzy subsides, those who lost at musical chairs may be even more dedicated to improving infrastructure efficiency.

Applying Google’s Service Orchestration Lessons to NFV

When I blogged about the Google federated layers model I had a request from a LinkedIn contact to blog about applying the approach to NFV.  I said I’d blog more on the topic next week, but it’s important enough to slot in for today.  It is indeed possible to view NFV’s coordinated orchestration or decomposition as a federation problem, and there are several options that could be applied to the approach, all of which have benefits.  And perhaps issues, of course.  Let me warn you now that we’ll have to ease into this to be sure everyone is up to speed.

For purposes of this discussion we need to broaden the traditional definition of “federation” to mean a relationship between network domains that supports the creation of services or service elements using contributed resources or services.  This would include the traditional operator mission of federation, which is focused on sharing/cooperating across administrative domains or business units.

OK, let’s now move to the orchestration coordination point.  “Orchestration” is the term that’s arisen in NFV to describe the coordinated combining of service elements to create and sustain services.  Generally, orchestration has a lot in common with DevOps, the process of deploying and managing an application lifecycle in the cloud or data center.  DevOps for ages has recognized two basic models, “declarative” and “imperative”.  The former defines deployment in terms of the goal-state—what you’re trying to get as your end-game.  The latter defines a set of steps that (presumably) lead to that goal-state.  One defines intent, in modern terms, and the other defines process.
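To make the distinction concrete, here’s a minimal sketch (in Python, with invented service names and placeholder steps) of the two styles side by side.  Nothing here comes from any spec; it’s just an illustration of intent versus process.

```python
# A minimal sketch of the two DevOps-style models discussed above.
# Everything here is hypothetical: service names, fields, and steps.

# Declarative: describe the goal-state and let an orchestrator converge on it.
declarative_model = {
    "service": "vpn-small",
    "elements": [
        {"type": "firewall", "instances": 2, "sla": {"availability": "99.99%"}},
        {"type": "vpn-gateway", "instances": 1, "sla": {"latency_ms": 20}},
    ],
}

# Imperative: spell out the ordered steps that (presumably) reach that state.
def allocate_hosts(count):
    print(f"allocating {count} hosts")

def deploy_image(name, host):
    print(f"deploying {name} on host {host}")

def connect(a, b):
    print(f"connecting {a} to {b}")

def imperative_deploy():
    allocate_hosts(3)
    deploy_image("firewall", 0)
    deploy_image("firewall", 1)
    deploy_image("vpn-gateway", 2)
    connect("firewall", "vpn-gateway")

imperative_deploy()
```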

A further complication in the area of NFV is the jurisdictional division that’s been created.  NFV explicitly targets the process of combining virtual functions and deploying them.  A service would rarely consist entirely of virtual functions, and so NFV implies that there exists an end-to-end orchestration process that would do the stuff needed beyond what the NFV ISG defines.  This approach is explicit in the AT&T ECOMP and Verizon architectures for NFV.

If you have end-to-end orchestration of something that itself is an orchestrated process (as NFV function-deployment is) then you have an orchestration hierarchy and layers.  That raises the question of how many layers there are and how the layered structure is decomposed to drive deployment.  Let’s then start with the two overall models—declarative and imperative—inherited from DevOps and see if we can make any useful judgements.

I think it’s clear that an imperative approach—a deployment script—works only if you assume that you have a fair number of layers or abstractions.  Imagine having to write a recipe for service deployment that had no interior structure.  You’d end up with one script for virtually every customer order of every service, or certainly a very large number of configuration-dependent and order-parameter-dependent steps.  This is what software people call a “brittle” structure because a little change somewhere breaks everything.

If we do have an imperative approach with a hierarchy of layers, then it’s obvious that the layers at the “bottom” would be really deploying stuff and those at the top organizing how the bottom layers were organized and connected.  Thus, IMHO, even the imperative approach would depend on higher-layer descriptions that would look more and more like declarations.

This is why I don’t like the YANG-driven thinking on orchestration.  YANG has limitations even for describing connectivity, and it has more serious limitations describing virtual function deployment.  It is, IMHO, unsuitable for describing open abstractions—declarations.  TOSCA is far better for that.  But for this discussion let’s forget the details and focus on the structure.

Right now, we have a kind of fuzzy view that there are two or three specific levels of orchestration.  There’s service-level, end-to-end stuff.  There’s functional orchestration, and then there’s the device diddling at the bottom in some models.  Is that the right number?  What, if anything, could actually set the number of layers, and if something did would the setting be useful or arbitrary?

Here we get back to the notion of a model—declarative or imperative.  A model has to be processed, decomposed if you will, to some substructure.  While it’s true that we could develop software to decompose multiple model structures, it’s unlikely that would be a logical approach.  So, we can say that the number of different decomposition approaches equals the number of different models we use.  But what is the optimum number?  I think Google’s approach, which has five layers of decomposition, illustrates that there probably isn’t an optimum number.

Does this then argue for an infinitely layered and infinitely complicated orchestration approach?  The answer is that it depends.  In theory, a hierarchical model could be decoded by any number of orchestrators as long as each understood the model at the level that it was applied.  If every model represents a series of intent-or-declarative elements, then you could have any element decoded by anything, and it wouldn’t even have to decode into the same deployment as long as the SLA offered to the higher layer was met.
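Here’s a rough sketch of that idea in Python, with made-up element names and a single latency figure standing in for a full SLA.  The only point is that any layer can decompose or realize an element however it likes, as long as the SLA it offers upward is met.

```python
# Hypothetical sketch: each node in a service model is an "intent" element
# that either decomposes into child elements or is realized directly by
# whatever orchestrator owns that layer.  The parent checks only the SLA.

class IntentElement:
    def __init__(self, name, sla, children=None, realize=None):
        self.name = name
        self.sla = sla                  # e.g. {"latency_ms": 25}
        self.children = children or []  # lower-level intent elements
        self.realize = realize          # leaf-level deployment callback

    def deploy(self):
        if self.children:
            offered = [child.deploy() for child in self.children]
            # Aggregate however this layer chooses; here, additive latency.
            achieved = {"latency_ms": sum(o["latency_ms"] for o in offered)}
        else:
            achieved = self.realize()   # any orchestrator, any implementation
        if achieved["latency_ms"] > self.sla["latency_ms"]:
            raise RuntimeError(f"{self.name}: offered SLA not met")
        return achieved

# Leaves could be legacy devices, VNFs, or a partner's contributed service.
access = IntentElement("access", {"latency_ms": 10}, realize=lambda: {"latency_ms": 8})
core = IntentElement("core", {"latency_ms": 15}, realize=lambda: {"latency_ms": 12})
service = IntentElement("vpn", {"latency_ms": 25}, children=[access, core])
print(service.deploy())    # {'latency_ms': 20}
```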

This is the lesson Google teaches.  You can structure a layer of the network in any way that optimizes what you value, and if you present its capabilities in an abstract way (one that cedes details of implementation for firmness of SLA) then it doesn’t matter.  You build layers and models to represent elements in layers as you will.

My personal view has always been that the best approach would be to have uniform modeling down to the point where resources are committed (roughly to what ETSI calls the Virtual Infrastructure Manager).  You compose upward to services by aggregating whatever is useful in whatever way you like; layers would be unimportant because there’d be no functional subdivision to create them.  TOSCA could support this approach all the way to the bottom, which is one reason I like it.

The relationship between “layers” of orchestration and different models or different orchestration software tools is pretty clear.  A fixed layer exists where there’s a boundary, a different model or a different orchestrator.  I think boundaries of any sort are a risk to an emerging technology, so I think we either have to demand a single model/orchestrator from top to bottom, or we have to mandate an open model and orchestration strategy where any element in a service structure could be decomposed in any convenient way.

This puts me in gentle opposition to the approach that’s embodied in both AT&T’s and Verizon’s NFV models, because they do have fixed layers with implicitly different models and explicitly different orchestrators.  I think both operators have an interest in advancing NFV, which I share, but I think that since service modeling and orchestration is the cornerstone of software automation, we can’t afford to get this wrong.  I’d hope that somebody revisits this issue, quickly.  We could get locked by inertia into a limited approach with limited benefits.

Google’s Cloud Network as a Model of Federation

I have already blogged on aspects of Google Andromeda and a network-centric vision of the cloud.  SDxCentral did an article on Google’s overall vision, based on a presentation Google gave at SDN World Congress, and I think the vision merits some additional review.

One of the key points in the Google approach is that it is really five SDNs in one application, or perhaps a better way to put it is that Google applies five different SDN frameworks in a cooperative way.  At the SDN level, at least, this is an endorsement of the notion of infrastructure as a loosely coupled hierarchy of structures that are independently controlled within.  That’s a form of federation, though the fact that Google is a single company means that it doesn’t have to worry about the “horizontal” form, cutting across multiple administrative domains.

There has never been any realistic chance that SDN would deploy in a large-scale application using a monolithic controller.  However, Google seems to be illustrating a slightly different vision of vertical federation, and that might be helpful in NFV.  First, though, we should look at federation overall.

“Federation” is a widely used but not universal term for a cooperative relationship between independent service management domains, aimed at presenting a single set of “services” to a higher-level user.  That user might be a retail or wholesale customer, so federation in this sense is a kind of fractal process, meaning that a given service might be a federation of other, lower-level services.

Taken horizontally, federation has proved a requirement in even conventional services because many buyers have operating scopes broader than a single operator can support.  In the old post-divestiture days, a simple phone call could involve an originating local exchange carrier (LEC), an interexchange carrier (IXC) and a terminating LEC.  In this model of horizontal federation, there’s a common service conception within all the players, and an agreed gateway process through which they are linked (and settled).

Vertical federation isn’t as common, but it’s still used by operators who acquire transport (lower-layer) services from a partner to use in building a higher-level service infrastructure.  Mobile services often involve a form of horizontal federation (roaming) and a form of vertical federation (acquisition of remote metro trunks to support cell sites, or tower-sharing federations).

Even in modern networks we could see these two models migrating over, largely unchanged.  In fact, since current services depend on these early federation models, they’ll likely remain available for some time.  The question is what other models might arise, and this is a question Google may be helping to answer, but I want to talk about emerging horizontal federation first.

When you create a service across multiple infrastructure domains, you have three basic options.  First, you can follow the legacy model of a service set in each domain, linked through a service gateway.  Second, you can cede service creation to one domain and have it compose a unified service by combining lower-level services published by other domains.  Finally, you can let subordinate domains (for this particular service) cede resource control to the owning domain and let that domain deal with what it now sees as a unified set of resources.  All these options have merit, and risk.

The gateway approach is essential if you have legacy services built to use it, and legacy infrastructure that offers no other option.  The problem is that you’re either building something service-dependent (PSTN intercalling) or you’re concatenating lower-level services (notably IP) and then adding a super-layer to create the service you’re selling.  The former lacks agility and the latter poses questions on exploitation of the model by passive (non-paying) or OTT players.

The resource-ceding approach is an evolution of the current vertically integrated substructure-leasing model, like fiber trunks or backhaul paths.  It would give one operator control over the resources of another, and that’s something operators don’t like unless the resources involved are totally static.  However, the cloud’s model of multi-tenancy offers an opportunity to cede resources that include dynamic hosting and connectivity.  Groups like the ETSI NFV ISG have looked at this kind of cloud-like federation but it’s not really matured at this point.

The final model is the “component service” model.  The subordinate operators publish a service set from which the owning operator composes retail (or, in theory, other wholesale) services.  These subordinate services are inherently resource-independent and in modern terms are increasingly “virtual” in form, like a VPN or VLAN, and thus known by their SLAs and properties and not by their implementations.

Even a cursory review of these models demonstrates that the last one is really capable of becoming the only model.  If operators or administrative/technical domains publish a set of low-level services in virtual form, then those services could be incorporated in a vertical or horizontal way, or in combination, to create a higher-level service.
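A simple Python sketch of what such a published-service catalog might look like, with invented domains, service names, and SLA fields; the implementations are deliberately absent, because the whole point of the abstraction is to hide them.

```python
# Hypothetical sketch of the "component service" federation model: each domain
# publishes resource-independent services known only by their properties and
# SLAs, and the owning operator composes them into a retail offer.

from dataclasses import dataclass

@dataclass
class PublishedService:
    domain: str   # administrative/technical domain that offers the service
    name: str     # e.g. "ip-vpn", "hosting"
    sla: dict     # what the consumer can hold the provider to
    # Implementation details are deliberately absent; the abstraction hides them.

catalog = [
    PublishedService("operator-A-metro", "ethernet-access", {"availability": 0.9999}),
    PublishedService("operator-B-core", "ip-vpn", {"latency_ms": 30}),
    PublishedService("operator-B-cloud", "hosting", {"availability": 0.999}),
]

def compose(retail_name, wanted):
    """Pick one published component per wanted role, horizontal or vertical."""
    picks = [next(s for s in catalog if s.name == w) for w in wanted]
    return {"retail_service": retail_name,
            "components": [(s.domain, s.name, s.sla) for s in picks]}

print(compose("managed-vpn", ["ethernet-access", "ip-vpn", "hosting"]))
```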

It’s into this mix that we toss the Google five-controller approach.  At the highest level, Google is building “Cloud 3.0” which is a virtual computer that’s created by connecting a bunch of discrete systems using a multi-layer network structure.  Google has two controllers that manage data center networks, the first to connect things at the server level and the second to abstract this into something more service-like.  Andromeda is this second controller set.

We then move to the WAN and higher.  B4, which is Google’s base-level WAN controller, provides for link optimization and their TE controller manages overall DCI connectivity.  Above all of that is the BwE controller that does service routing and enforcement at the total-complex level, and that’s responsible for ensuring that you can meet application/service SLAs without creating lower-level issues (like the trade-off between latency and packet loss).

Google is doing vertical federation, but they don’t need to horizontally federate in their own network.  Their model, though, would be capable of federating horizontally because it’s based on a modeled service abstraction at all levels.  I think that Google is illustrating that the notion of an abstraction-based federation model is applicable to federation in any form, and that it would be the best way to approach the problem.

The abstraction approach would also map as a superset of policy-based systems.  A policy-managed federation presumes a common service framework (IP or Ethernet) and communicates the SLA requirements as policy parameters that are enforced by the receiving federation partner.  You can see the limitation of this easily; a common service framework doesn’t cover all possible services unless you have multiple levels of policies, and it also doesn’t open all the federation opportunities you might want to use because the partners never really exchange specific handling parameters, only service goals.
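To illustrate what actually crosses the boundary in each case, here’s a tiny, invented example; the field names are mine, not anyone’s standard.

```python
# Invented example of what crosses the federation boundary in each case.

# Policy-managed: both partners presume a common service framework (IP here)
# and only the handling goals are passed across.
policy_handoff = {
    "assumed_service": "ip",
    "policy": {"dscp": "EF", "max_latency_ms": 20},
}

# Abstraction-based: the service itself is defined along with its SLA, so no
# common framework has to be presumed and any service type can be federated.
abstraction_handoff = {
    "service_model": {
        "type": "l2-multipoint",
        "endpoints": ["siteA", "siteB", "siteC"],
    },
    "sla": {"max_latency_ms": 20, "availability": 0.9999},
}
```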

I also want to remind everyone that I’ve long advocated that NFV define a specific network and federation model, and I think the fact that Google defines its cloud in network terms shows how important the network is.  In enterprise cloud computing, too, Google is demonstrating that a private cloud has to be built around a specific network model, and that IMHO means adopting an SDN architecture for private clouds.

Are We Entering a New Phase of the “Cloud Wars”?

The truth will out, they say, and that includes truth about the cloud.  According to a report in Business Insider, both Google and Amazon are now looking at their own premises cloud strategies, an acknowledgement that Microsoft and IBM may be onto something in just how far public cloud services can go.  And, perhaps most important, how they might get there.

There’s a perception, driven no doubt by publicity, that public cloud will eat all of IT.  That’s hardly realistic in my view.  Most of you know that I’ve been modeling public cloud adoption for years, and through that time my numbers have consistently shown that only about 24% of current applications can be transitioned to public cloud, and that even postulating adoption of cloud-specific application architectures the public cloud share struggles to hit 50%.  That means that virtually every enterprise ends up with a hybrid cloud because much of their stuff stays in the data center.

If the data center is an active partner in the future cloud, then it follows that data center IT giants could have, or could develop, a commanding cloud position by exploiting that truth.  In fact, since mission-critical or core applications are the ones most likely to be hybridized (and obviously are the most important), these IT giants could end up framing the way the cloud is used and the way that public cloud services have to be presented.

Microsoft’s Azure services have played on this truth from the first.  Windows Server and Azure share a programming model, and since Azure is PaaS it’s easy to migrate applications written to that model to an Azure cloud, and easier to integrate multi-component and microservice applications using Microsoft’s model.  All of this means that Microsoft could gain public cloud traction because it has a foot in the data center too.

Azure was in many respects a bold decision by Microsoft, because Microsoft has a product line (Windows Server and associated applications) that could be displaced by the cloud.  However, Microsoft is a PaaS player, which means that it gets incremental revenue from the cloud licensing of software.  Since Microsoft is in virtually every enterprise, even in Windows Server form not to mention desktop Windows, it’s been gradually building a following among enterprises, even in line departments.

IBM also seems to be playing a hybrid card, from a slightly different angle.  Their presumption appears to be that hybrid mission-critical stuff will indeed drive the cloud, but that this stuff will drive the cloud from the data center out.  IBM’s cloud strategy is carefully positioned to be presentable by IBM account teams to CIO-level buyers.  To prevent themselves from being left out of the phantom-IT-line-department deals, they split their public cloud (SoftLayer) off and have tended to focus on channel partners to move it.

The IBM approach seems to shift innovation focus to traditional IT and to partners, and as a result perhaps IBM’s SoftLayer web service repertoire falls short of the offerings of both Amazon and Microsoft.  Where the latter two companies have a rich set of both horizontal and vertical services aimed at developers of cloud-specific applications, IBM seems to prefer taking data center middleware elements into the cloud to preserve their ability to drive things from the IT professional side.

The challenge that both Microsoft and IBM pose for Amazon and Google is that very relationship between IT professionalism and cloud deployment.  If “Cloud One-Point-Zero” is the simple IaaS model, then it’s never going to move the ball much and probably never really alter the IT landscape either, media hype notwithstanding.  Thus, the question is how “Cloud Two-Point-Zero” comes about, and both IBM and Microsoft are doing a lot to empower traditional data center people in driving the cloud bus within the enterprise.  That would leave both Amazon and Google on the outside, since neither company has a position in the data center.

The question is how to counter this, and I wonder if trying to get into the data center platform business is the right answer, for three reasons.

The first reason is that they risk exacerbating their risk rather than controlling it.  The threat to Amazon and Google is less that someone else will drive a data-center-centric vision of cloud evolution than that such a vision will develop at all.  What both Amazon and Google need is for computing to become more populist, tactical, framed on hosted offerings rather than on self-development.  That’s not what data center and development people would be predisposed to do, no matter who’s selling cloud to them.

The second reason is that to the extent that either Microsoft or IBM would pose a specific threat, open-source premises computing is a better response, and one that’s both accepted by buyers and supported by several highly credible sellers.  Microsoft and IBM already struggle to position against open-source alternatives, and it’s very possible that an organized drive by Amazon or Google to penetrate the data center would help the very IT giants they fear most.

The final reason is that Amazon and Google would fail in an attempt to field a real data center platform.  You need massive account presence, field marketing support, technical support, positioning and marketing—stuff neither company has in place for data center sales situations.  Further, deciding to compete in the data center would likely result in Microsoft and IBM using their account influence against Amazon and Google cloud solutions.

So what should Amazon and Google do?  The answer is simple: create more and better cloud web services that can be used in data center and hybrid applications.  And target IBM specifically, not Microsoft.

Distributed, cloud-hosted, application components are the best way to advance cloud adoption and shift dollars from data center to cloud.  The further this trend goes, the more spending will be impacted and so cloud providers would benefit more.  Obviously, the converse is true and if IBM and/or Microsoft got more enterprises to drive their cloud plans in a data-center-centric way, then probably more spending would stay in the data center.

IBM is the greater threat in this regard because IBM has a very specific need to protect its remaining enterprise data center business.  IBM has suffered a revenue decline for almost 20 quarters, and it seems very unlikely that the growth in its cloud revenues will cover future declines in data center spending any better than it has in the past—which is not well at all.  Also, IBM is struggling with an effective cloud strategy and positioning, far more so than Microsoft who seems on track.  Finally, IBM doesn’t have the broad horizontal and vertical web services in its cloud that Amazon, Google, and even Microsoft have.  Any campaign to promote growth of this type of service would thus hurt IBM.

We are not in a battle to see if the cloud will replace private IT; it won’t.  We are definitely in a battle for how the applications of the future will be developed, and that battle could alter the way that public cloud, private cloud, and traditional application models divide up the IT budgets.  In fact, it’s that issue that will be the determinant, and I think we’re seeing signs that the big players now realize that basic truth.  Expect a lot of development-driven cloud changes in the next couple years.

Divergence of Operator Visions of NFV Shows Inadequacies in Our Approach

Transformation of a trillion-dollar infrastructure base isn’t going to be easy, and that’s what network operators are facing.  Some don’t think it’s even possible, and a Light Reading story outlines Telecom Italia’s thinking on the matter.  We seem to be hearing other viewpoints from other operators, so what’s really going on here?

There’s always been an uncertainty about the way that virtualization (SDN, NFV, SD-WAN) is accommodated in operations systems.  If we were to cast the debate in terms of the Founding Fathers, we could say that there are “loose constructionists” and “strict constructionists” regarding the role of the OSS/BSS in virtualization.

In the “loose construction” school, the thinking is that orchestration technology is applied almost seamlessly across virtualization processes, management processes, and operations processes.  A service is made up of truly abstract elements, each of which represents a collection of features that might be supplied by a device, a software function, or a management system representing a collection of devices.  Each of the elements has its own states and events, and operations is integrated on a per-element basis.  It’s all very flexible.
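Here’s a minimal Python sketch of that per-element integration, with hypothetical states, events, and handler processes.  The point is only that operations processes bind to each abstract element’s own state/event table, whatever that element represents.

```python
# Hypothetical sketch of the "loose construction" view: each abstract service
# element carries its own state/event table, and operations processes are
# bound per element, whether it maps to a device, a VNF, or a whole subsystem.

class ServiceElement:
    def __init__(self, name, handlers):
        self.name = name
        self.state = "ordered"
        self.handlers = handlers        # {(state, event): process}

    def on_event(self, event):
        process = self.handlers.get((self.state, event))
        if process:
            self.state = process(self)  # each process returns the next state

def activate(elem):
    print(f"{elem.name}: deploying")
    return "active"

def heal(elem):
    print(f"{elem.name}: redeploying after fault")
    return "active"

vpn_core = ServiceElement("vpn-core", {
    ("ordered", "activate"): activate,
    ("active", "fault"): heal,
})
vpn_core.on_event("activate")
vpn_core.on_event("fault")
```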

In the “strict construction” view, all this is complicated and disruptive.  This group believes that since the general practice of the past was to manage devices with operations systems, you used virtualization to build a specific set of “virtual devices”, most of which would probably be software-hosted equivalents of real things like firewalls or routers.  These would then be managed in pretty much the same way as the old stuff was, which means that operations wouldn’t really be seriously impacted by virtualization at all.

Both these approaches still need “orchestration” to handle the automation of service lifecycle tasks, or you can’t achieve the benefits of SDN, NFV, or SD-WAN.  Arguably, the big difference is in the extent to which orchestration, abstraction, and virtualization are integrated across the boundary between “services” and “networks” and between “networks” and “devices”.

With the strict-construction virtual-device approach, virtualization has to live inside the virtual device, which becomes a kind of black box or intent-model that abstracts the implementation of device features to hide the specifics from the operations layer.  You don’t need to change the OSS/BSS other than (possibly) to recognize the set of virtual devices you’ve created.  However, the OSS plane is isolated from the new tools and processes.  This point is what I think gives rise to the different operator views on the impact of things like NFV on OSS/BSS.

If you have an NFV mission that targets virtual CPE (vCPE), you have a model that easily translates to a virtual-device model.  The boxes you displaced were chained on premises, and you service-chain them inside a premises device or in the cloud.  The feature looks to the buyer like a box, so it makes sense to manage it like a box, and if you do adopt a premises-hosted model there’s no real shared resources used so there’s no need for fault correlation across a resource pool.
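A quick sketch of why that translation is so easy: the chain is just an ordered list presented to operations as a single box.  The function names and the (documentation-range) address below are invented.

```python
# Hypothetical sketch: a vCPE service chain presented to operations as one box.

service_chain = ["firewall", "nat", "sd-wan-client"]   # order matters

virtual_cpe = {
    "device_type": "vCPE",              # what the OSS/BSS sees: one "device"
    "hosted_functions": service_chain,  # hidden inside the black box
    "management_ip": "198.51.100.10",   # documentation-range example address
}

def packet_path(chain):
    """Traffic traverses the hosted functions in order, like the old boxes."""
    return " -> ".join(["wan"] + chain + ["lan"])

print(packet_path(virtual_cpe["hosted_functions"]))
```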

If you have a broader vision of NFV, one that imagines services created by dynamically linking cloud-hosted features in both the data plane and control plane, then it’s difficult to see how this dynamism could be represented as a static set of virtual devices.  There are also concerns that, to prevent the virtual device from becoming a hard barrier to extending operations automation, these virtual devices would all have to be redefined to more generally model network functions—a true intent model.  That would then require that traditional devices somehow be made to support the new features.

Both existing infrastructure and existing operations tools and practices act as an inertial brake on transformation, and that’s especially true when nothing much has been done to address how the legacy elements fit into a transformation plan.  We don’t really understand the evolution to NFV, and in particular the way that we can achieve operations and agility savings with services that still (necessarily) include a lot of legacy pieces.  We also don’t understand exactly how the future benefits will be derived, or even what areas they might come from.

When you have uncertainty in execution, you either have to expand your knowledge to fit your target or you have to contract your target to fit your knowledge.  Some operators, like AT&T, Verizon, Telefonica, and (now, with their trials of AT&T’s ECOMP) Orange, seem to have committed to attempting the former course, and Telecom Italia may believe that we’re just not ready to support that evolution at this point.

The underlying problem is going to be difficult.  A service provider network and the associated craft/operational processes is a complex and interdependent ecosystem.  Yet every technology change we support is (necessarily) specific, and we tend to focus it on a very limited area of advance in order to build consensus and promote progress.  That means that, taken alone, none of these technology changes are likely to do more than create a new boundary point where we have interactions that we don’t fully understand, and that don’t fully support the goals of technology transformation.  Certainly we have that in SDN and NFV.

The end result of this is that we ask network operators to do something that the equipment vendors doing the asking would never do themselves.  We ask them to bet on a technology without the means of fully justifying it, or even understanding what credible paths toward justification exist.  They’re not going to do that, and it’s way past time that we stop criticizing them for being backward or culturally deficient, and face business reality.

NFV has been around since October 2012 when the Call for Action paper was first published.  I’ve been involved in it since then; I responded to that paper when there was no ISG at all.  In my view, every useful insight we’ve had in NFV was exposed by the summer of 2013.  Most of them have now been accepted, but it’s been a very long road even to that limited goal, and it’s going to be even more time before we have a workable framework to implement the insights.  We need to nudge this along.

I’d like to see two things happen.  First, I’d like to see the ISG take the combination of the Verizon and AT&T frameworks and look at them with the goal of harmonizing them and drawing from them a conception of end-to-end, top-to-bottom, NFV without making any attempt to validate the work already done.  We need to use formal frameworks that address the whole problem to develop a whole-problem architecture.  Second, I’d like to see the TMF stop diddling on their own operations modernization tasks (ZOOM) and come up with a useful model, again without trying to justify their current approach or their own business model.

If we do these things, I think we can get all the operators onto the same—right—page.

Getting the Range on 5G Evolution

I am just as excited about the potential of 5G as anyone, but I’m also a little worried that we might fall prey to some of the same issues that have hurt SDN, NFV, and IoT.  We have a lot of time to get this done right, but we do have to be sure to take some essential steps in the time we have.

Stripped of all the hype and nonsense, 5G is a set of specifications that are designed to bring mobile/cellular services into the mainstream.  Up to now, we’ve based mobile services on a special set of standards that called out special infrastructure, custom devices, and so forth.  The hope is that 5G will fix that.  Up to now, we’ve had to build non-connection services above and independent of the mobile network; the hope is that 5G will fix that too.  Segmentation of services and operators (MVNOs, roaming) was barely accommodated before, and should be fundamental to 5G.

If we had no mobile services today, all of these capabilities would be somewhat daunting.  With mobile services being the profit focus of operators and the consumer’s target of desire, it’s way more than that.  Given that we’re also trying to work in “revolutions” like SDN and NFV, you can start to appreciate the industry challenge this represents.  To meet it we have to do something we’ve typically not done—start at the top.

Logically, the top of 5G is the user.  Every user is represented by an appliance, and that appliance provides the user with a set of service connections.  Since we don’t want to have an architecture that isolates 5G users from other users, we should visualize the user population as being connected to a series of “service planes”, like voice calling, SMS, and Internet.

So far, this model would describe not only current wireless services but also wireline.  It’s a matter of how the service-plane implementation is handled.  That’s a good thing, because the first test of an architecture is whether it can represent all the stages of what you’re trying to do.

The next thing, then, is to look at the service-plane connection and implementation.  As I’ve just suggested, this is a two-stage process, meaning that we have a “connection” to services and an “implementation” of services.  Let’s look at that with an illustration, HERE, and you can follow this along with the text below.


Today, in theory, we have a consumer and business service plane that can support three classes of users—wireline, wireless, and satellite.  Today, each of these user types is supported by independent implementations within the service plane (though there is some trunk-sharing), connected into a cohesive service through a gateway process set.  Voice and data services are converging on IP but still have a substantial service-specific set of elements.  All of this is due to the way that services and infrastructure have evolved.

This now opens the discussion of 5G, because one of the goals of 5G is to converge mobile service implementation on a common model, probably one based on IP.  So what we could say the future would look like is an expanded model for an IP service plane that could support not only mobility (for new mobile services) but also wireline IP and satellite services.  This means that access-technology or device-specific service connectivity would be harmonized with generic service-plane behavior in the access connection.

That single service-plane implementation, as defined here, is based on IP connectivity.  But in reality what we want is to be able to support service tenancy, meaning that we can segment the service plane by service, by operator (MVNO), etc.  Each segment would be potentially a ship in the night, though any of them could still be integrated in some way with other segments.  This is a mission that’s usually assigned to SDN in some form, including possibly the overlay-SD-WAN model.

Any of the service planes would be made up of trunks and nodes, and these trunks and nodes could be either physical devices/pathways or virtual (function) elements, meaning that they could be hosted on something.  To me, this suggests that under the service plane and its tenants, we have an infrastructure plane that is made up of multiple layers.  First, we have contributed services, meaning the fundamental services that are consumed by the service plane to create retail offerings.  Second, we have a resource layer that provides the raw materials.

In the resource area, we have a combination of three things—real devices, virtual devices/functions, and hosting facilities.  My view here is that we have function classes, meaning abstract things that can build up to services, and that these function classes can be fulfilled either with real appliances or hosted virtual functions.  There should be a common model for all the necessary function classes; this wasn’t done with NFV and it’s essential if you’re going to harmonize service management and make operations more efficient.

Orchestration, as a function, is parallel with these layers, but you don’t necessarily have to use a single model or tool across them all.  I do believe that a single orchestration model and software set would be beneficial, but I’m trying to be realistic here and avoid demanding uniformity when it’s not clear whether quick adoption and facile evolution might be better served with multiple models.

Management, in my view, focuses on managing function classes using class-specific APIs that would likely build from a base function-management set common to all function classes.  The function-class model could assure that we have consistent management no matter how the specific component services are implemented at that moment, which makes it evolution-friendly.
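Here’s a rough Python sketch of what such a function-class model might look like, with invented class and method names; the same management call works whether the firewall is a real appliance or a hosted virtual function.

```python
# Hypothetical sketch of the function-class idea: a base management set shared
# by every class, class-specific extensions on top, and the same calls working
# whether the function is a physical appliance or a hosted virtual function.

class FunctionClass:
    """Base management API common to all function classes."""
    def status(self): ...
    def configure(self, params): ...
    def sla_report(self): ...

class FirewallClass(FunctionClass):
    """Class-specific API, independent of implementation."""
    def set_rules(self, rules): ...

class ApplianceFirewall(FirewallClass):
    def set_rules(self, rules):
        print(f"pushing {len(rules)} rules to a physical appliance")

class HostedFirewallVNF(FirewallClass):
    def set_rules(self, rules):
        print(f"pushing {len(rules)} rules to a hosted virtual function")

# Management only ever talks to the class API, so evolution is transparent.
for firewall in (ApplianceFirewall(), HostedFirewallVNF()):
    firewall.set_rules([{"allow": "tcp/443"}, {"deny": "any"}])
```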

Some people will argue that this approach doesn’t have much specific to do with 5G, but I disagree.  5G’s goals can be divided into the radio-network goal, which is wireless-specific, and the “convergence and segmentation” goal, which is generalized.  We don’t need as much industry debate on the former as on the latter, and you can’t achieve a general goal while considering only one technology from the list you’re trying to converge.

The general, but largely unrecognized, trend in networking has been toward a specific overlay/underlay model, a model where infrastructure is harmonized in capabilities across a range of service options, and where service specificity is created like any other feature.  Increasingly by hosting, obviously.  Virtualization is the current way we see this model evolving, but we can’t lose sight of the goals by diddling with pieces of the solution and small segments of the overall opportunity.

Getting Beyond NFV Problem Recognition

The state of technologies like SDN and NFV is important, but it seems we can get to it only in little snippets or sound bites.  A couple of recent ones spoken at conferences come to mind.  First, AT&T commented that they wanted VNFs to be like “Legos” and not “snowflakes”, and then we had a comment from DT that you don’t want to “solve the biggest and most complex problems first.”  Like most statements, there are positives and negatives with both of these, and something to learn as well.

The AT&T comment reflects frustration with the fact that NFV’s virtual functions all seem to be one-offs, requiring almost customized integration to work.  That’s true, of course, but it should hardly be unexpected given that not only did NFV specifications not try to create a Lego model, they almost explicitly required snowflakes.  I bring this up not to rant on past problems, but to show that a course change of some consequence would be required to fix things.

What we need for VNFs (and should have had all along) is what I’ve called a “VNFPaaS”, meaning a set of APIs that represent the connection between VNFs and the NFV management and operations processes.  Yes, this set of APIs (being new) wouldn’t likely be supported by current VNF providers, but they’d provide a specific target for integration and would standardize the way VNFs are handled.  Over time, I think that vendors would be induced to self-integrate to the model.
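To be clearer about what I mean, here’s a rough sketch of what such a boundary could look like in Python; the API names are invented for illustration and appear in no ISG document.

```python
# Rough sketch of a "VNFPaaS" boundary: one fixed API set between any VNF and
# the NFV management/operations side.  The method names are invented here.

from abc import ABC, abstractmethod

class VNFPaaS(ABC):
    """APIs the platform exposes to every VNF, replacing per-VNF managers."""
    @abstractmethod
    def register(self, vnf_id: str, capabilities: dict): ...
    @abstractmethod
    def report_state(self, vnf_id: str, state: str, metrics: dict): ...
    @abstractmethod
    def request_scale(self, vnf_id: str, direction: str): ...
    # Note what is *not* here: no access to NFV core elements or resources.

class LoggingVNFPaaS(VNFPaaS):
    """Trivial stand-in implementation so the sketch runs end to end."""
    def register(self, vnf_id, capabilities):
        print(f"{vnf_id} registered with {capabilities}")
    def report_state(self, vnf_id, state, metrics):
        print(f"{vnf_id} is {state}: {metrics}")
    def request_scale(self, vnf_id, direction):
        print(f"{vnf_id} asks to scale {direction}")

platform = LoggingVNFPaaS()
platform.register("vFirewall-17", {"class": "firewall"})
platform.report_state("vFirewall-17", "active", {"sessions": 1200})
platform.request_scale("vFirewall-17", "out")
```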

What we have instead is the notion of a VNF Manager that’s integrated with a virtual function and provides lifecycle services.  This model is IMHO not only difficult to turn into a Lego, it’s positively risky from a security and stability perspective.  If lifecycle management lives in the VNF itself, then the VNF has to be able to access NFV core elements and resources, which should never happen.  The approach ties details of NFV core implementation to VNF implementation, which in my view is why everything ends up being a snowflake.

An open, agile, architecture for NFV always had three components—VNFs, infrastructure, and the control software.  The first of the three needed a very explicit definition of its relationship with the other two, and we didn’t get it.  We need to fix that now.

Snowflakes are also why the notion of not solving the biggest and most complex problems first is a real issue.  Yes, you don’t want to “boil the ocean” in a project, but you can’t ignore basic requirements because they’re hard to address without putting the whole solution at risk.  The architecture of a software system should reflect, as completely as possible, the overall mission.  If you don’t know what that mission is because you’ve deferred the big, complex, problems then you can end up with snowflakes, or worse.

What exactly is an NFV low apple?  You’d have to say based on current market attitude that it’s vCPE and the hosting of the associated functions on premises devices designed for that purpose.  There are a lot of benefits to this test-the-waters-of-NFV approach, not the least of which is the fact that the model avoids an enormous first-cost burden to the operator.  The problem is that, as it’s being implemented, the model really isn’t NFV at all.

There is no resource pool when you’re siting the VNFs in a CPE device.  The lifecycle management issues are minimal because you have no alternatives in terms of locating the function and no place to put it in the event of a failure.  You can’t scale in or out without violating the whole point of the premises-hosted vCPE model, because scaling would mean putting multiple devices in parallel by sending out a new one.  Management issues are totally different because you have a real box that can become the management broker for the functions that are being hosted.

It’s also fair to say that the VNF snowflake problem is glossed over here, perhaps even caused here.  Nearly all the vendors who offer the CPE boxes have their own programs that integrate VNF partners.  That’s logical because the VNFs are really just applications running in a box.  Do these boxes have to provide a virtual infrastructure manager (VIM)?  Is it compatible with a cloud-hosted VIM?  Leaving aside the fact that we really don’t have a hard spec for the VIM overall, you can see that if vCPE hosting isn’t really even a hard-and-fast VIM-based approach, there’s little hope that we could avoid the flakes of falling snow.

The other early NFV application, mobile infrastructure (IMS, EPC), has in a way the same problem from a different direction.  Some of the operators testing virtualized IMS/EPC admit that the implementations really look like a static configuration of hosted functions, without the dynamism that NFV presupposes.  If you think of a network of virtual routers, you can see that you could go two ways.  Way One is that you have computers in place to host software router instances.  Way Two is that you have a cloud resource pool in which the instances are dynamically allocated.  There’s a lot more potential in Way Two, but is the early applications’ attempt to avoid difficulty and complexity going to favor Way One?
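A toy Python sketch may help illustrate the difference between the two ways; the host names, pool structure, and selection logic below are purely hypothetical.

```python
# Illustrative contrast between static (Way One) and pooled (Way Two) placement
# of virtual router instances.  All names and structures are hypothetical.

# Way One: each router instance is permanently assigned to a host at design time.
STATIC_PLACEMENT = {
    "edge-router-1": "server-nyc-01",
    "edge-router-2": "server-chi-01",
    "core-router-1": "server-dal-01",
}


def place_static(instance: str) -> str:
    """Way One: look up the pre-decided host; nothing to optimize, nowhere to move."""
    return STATIC_PLACEMENT[instance]


def place_dynamic(instance: str, pool: list[dict]) -> str:
    """Way Two: pick a host from a shared cloud pool at deployment time, based on
    current load; a failure or scaling event simply reruns this choice."""
    candidates = [h for h in pool if h["free_vcpus"] >= 2]
    return min(candidates, key=lambda h: h["utilization"])["name"]


pool = [
    {"name": "cloud-host-a", "free_vcpus": 8, "utilization": 0.40},
    {"name": "cloud-host-b", "free_vcpus": 4, "utilization": 0.25},
]
print(place_static("edge-router-1"))         # always server-nyc-01
print(place_dynamic("edge-router-3", pool))  # cloud-host-b today, maybe not tomorrow
```

The dynamic version is where the NFV benefits live, but it’s also where the deferred complexity lives.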

For both the snowflake-avoiders and the difficulty/complexity-avoiders, we also have the specter of operations cost issues.  It’s hard to imagine how you could do efficient software automation of snowflake-based NFV; lifecycle tasks are embedded in VNFMs and their host VNFs, after all.  Does this then mean that all of the operations integration would also have to be customized to the resident VNFMs?  And surely operations automation is a major goal and a major complexity.  Can we continue to ignore it by assuming that dynamic virtual processes can be represented to OSS/BSS as static virtual devices?

I think we’re on the verge of doing with NFV what we have done with a host of other technical “revolutions”.  We start with a grandiose scope of goals and expectations.  We are stymied by the difficulty of defining and then justifying the models.  We accept a minimal step to test the waters, then we redefine “success” as the achievement of that step and forget our overall goals.  If that happens, then NFV can never touch enough services, customers, and infrastructure to have the impact those ten operators who launched it expected it to deliver.  Recognition of the problem is the first step in solving it, as they say.  It’s not the only step.

Ericsson’s Challenge: How They Got it Wrong, and How They’ll Need to Fix It

Everyone by now knows that Ericsson has issued a profit warning, and that many analysts and reporters are wondering whether Ericsson can survive in the long term.  I think it’s premature to call Ericsson a member of the walking dead, or even seriously wounded, but I also think that it might be helpful to look at the primary causes of Ericsson’s problems.  They’re not what you’ve read about.

It’s true that networking is commoditizing, and that operators are more and more concerned about keeping costs under control because revenue per bit seems to be stuck in perpetual free-fall.  It’s also true that this puts pressure on sales and margins, and that it favors vendors who are known to be price leaders, like Huawei and ZTE.  But transformation favors those who can drive it, and there’s plenty of indication that operators are open to fairly radical changes in networking.  Commoditization, then, isn’t inevitable.

Ericsson, concerned about the commoditization of networking, made an early move to address the trend by focusing less on selling hardware and more on professional services.  In the near term, this focus would compel buyers to pay explicitly for things that other vendors might include with their hardware/software, but if hardware was heading for commoditization and software for open-source, it could be a darn smart move to shift to a service stance.  That could also play to the transformation interest among operators.

Which leads us to the first of those causal issues.  Ericsson anticipated a shift in the market that hasn’t happened as fast as expected.  SDN and NFV should have been poster children for a shift from traditional proprietary networking to commodity boxes and professional services, but both have been much more a media success than a real driver of change.  The truth is that Ericsson’s primary customers build networks much the same way now as they did five years ago.

Which tends to favor vendors with more equipment skin in the game.  Ericsson’s professional services numbers were decent; the drop came in their Networking unit, and was blamed on underperformance of wireless deals and (in a margin sense) too aggressive bidding for emerging-market deals.  The point is that Ericsson isn’t known as a broad equipment vendor, and that hurts you when buyers believe that those who offer more gear will also offer services at a better price.

It was incumbent on Ericsson to demonstrate they brought something to the table in SDN, NFV, and transformation.  They have participated insightfully in the standards process for both areas, but transformation is more than standards; it’s using technology to solve profound business problems.  That should have been a big opportunity for Ericsson, and it hasn’t been.

Because of the oldest, biggest bugaboo for network vendors—marketing.  For most of network operator history, sales have been the ultimate example of “marketinglessness”.  Forget ads, web sites, branding.  You send your geeks to talk to the buyer’s geeks, you respond to RFPs that you’ve worked hard to wire in your favor, and you don’t expect to have to do a lot of creative singing and dancing.  The problem is that operators now recognize their challenges as being systemic rather than point-of-purchase in nature.  They don’t need a new box, they need a new approach.  That should have been perfect for Ericsson’s professional services slant, but like other Nordic networking giants, they just don’t know how to engage a broad (systemic) constituency.  It’s not sales there, it’s marketing.

Systemic positioning is especially critical for companies who don’t have highly visible product families whose names are household words.  It unifies what can otherwise be silos, and most important, it provides visibility for things that on their own don’t seem all that visible.  The fact that such positioning can then tie in easily with buyer goals is another bonus, but that kind of success takes some serious effort and major insight.

One of my Ericsson friends told me years ago that to Ericsson, “marketing” meant “education”.  You told the buyer what they needed to know, which in this day and age is never going to work.  You have to inspire, not educate.  I think Ericsson figured that out this year, which is what led to their partnership with Cisco, the ultimate marketing machine.  The problem is that Cisco may be able to sing, but they aren’t necessarily going to sing Ericsson’s song.

Cisco as an incumbent equipment vendor isn’t particularly interested in either systemic revolutions in approach or technologies that obsolete current equipment.  Ericsson needs both those things to develop a strong professional services commitment.  I don’t think a Cisco deal is going to do either party a whole lot of good.

What should Ericsson have done, or more important what should they do now?  The answer, I think, is simple.  The operators have a network transformation problem.  They need to forget the classic business of selling bits and sell something more directly useful, something that ties to buyer needs more explicitly.  Yes, some of that means going up the service stack, but it’s not the simplistic virtual-CPE junk that NFV has generated or the elastic-bandwidth model SDN advocates hype.  If you look at the architectures of operators like AT&T and Verizon, you see an attempt to model a new approach to services by framing a new model for infrastructure and service lifecycle management.  That’s what Ericsson should have come up with on its own, and should still work to support.

We have, in 5G, a transformation coming down the line in the very area where most capital dollars are going to be spent, and where change will be easier because you’re really adding new stuff and not just tweaking the old.  I’d argue that what 5G really represents is a model for mobile infrastructure transformation.  We need a similar model for wireline, and then we need technology elements that can support one or both of the transformations we’ve defined, elements like those of SDN and NFV.  I do not believe that either AT&T or Verizon have fully developed such a vision, much less defined an architecture to support it.  Could Ericsson?  Sure; they’re smart people.

Will they, though?  We spend a lot of time in this industry bemoaning the operators’ adherence to a Bell-Head culture, but what about the vendors?  Ossified buyers begat ossified sellers.  Can Ericsson recognize that if operators are going to do a different business, they’ll do business differently?  Forget education, concentrate on inspiring buyers to believe you have the answer and that you’re prepared to make it work.  It’s still not too late, but the signals that time is passing are now clearly visible.

Google Enters the Cloud IoT Space–Tentatively

Google has now followed Amazon and Microsoft (Azure) in deploying cloud tools for IoT.  In many ways, the Google announcement is a disappointment to me, because it doubles down on the fundamental mistake of thinking “IoT” is just about getting “things” on the “Internet.”  But if you look at the trend in what I call “foundation services” from the various cloud providers, we might be sneaking up on a useful solution.

IoT is at the intersection of two waves.  One, the obvious one, is the hype wave around the notion that the future belongs to hosts of sensors and controllers placed directly on the Internet and driving a whole new OTT industry to come up with exploitable value.  The other, the more important one, is the trend to add web services to IaaS cloud computing to build what’s in effect a composable PaaS that can let developers build cloud-specific applications.  These are what I’ve called “foundation services”.

Cloud providers like Amazon, Microsoft, and (now) Google have bought into both waves.  You can get a couple dozen foundation services from each of the big IaaS players, and these include the same kind of pedestrian device-management solutions for IoT.  Network operators like Verizon who have IoT developer programs have focused on that same point.  The reason I’m so scornful about this approach is that you don’t need to manage vast hordes of Internet-connected public sensors unless you can convince somebody to deploy them.  That would demand a pretty significant revenue stream, which is difficult to harmonize with the view that all these sensors are free for anyone to exploit.

The interesting thing is that for the cloud providers, a device-centric IoT story could be combined with other foundation services to build a really sensible cloud IoT model.  The providers don’t talk about this, but the capability is expanding literally every year, and at some point it could reach a critical mass that could drive an effective IoT story.

If you look at IoT applications, they fall into two broad categories—process control and analytic.  Process control IoT is intended to use sensor data to guide real-time activity, and analytic IoT drives applications that don’t require real-time data.  You can see a simple vehicular example of the difference in self-drive cars (real-time) versus best-route-finding (analytic) as applications of IoT.

What’s nice about this example is that the same sensors (traffic sensors) might be used to support both types of applications.  In a simplistic view of IoT, you might imagine the two applications each hitting sensors for data, but remember that there could be millions of vehicles and thus millions of hits per second.  It would never work.  What you need to assume is that sensor data would be collected at some short interval and would feed both applications in aggregate, with each application then triggering all the user processes that need the information.
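Here’s a small Python sketch of that aggregate-and-fan-out idea; the digest fields and consumer functions are illustrative assumptions, not any provider’s API.

```python
# Toy sketch of the aggregation model: raw readings from one interval are
# reduced to a digest, and the digest is fanned out to both the real-time
# (process control) and analytic application paths.  Names are illustrative.
import statistics
import time


def collect_interval(sensor_readings: list[float]) -> dict:
    """Reduce one interval's raw readings to a compact digest."""
    return {
        "timestamp": time.time(),
        "count": len(sensor_readings),
        "mean_speed": statistics.mean(sensor_readings),
        "min_speed": min(sensor_readings),
    }


def realtime_consumer(digest: dict) -> None:
    """Process-control side: react immediately, e.g. warn nearby vehicles."""
    if digest["min_speed"] < 10:
        print("real-time alert: traffic nearly stopped")


def analytic_consumer(digest: dict, history: list[dict]) -> None:
    """Analytic side: accumulate digests for later route-optimization queries."""
    history.append(digest)


history: list[dict] = []
digest = collect_interval([55.0, 48.2, 8.5, 60.1])  # one interval of sensor data
realtime_consumer(digest)
analytic_consumer(digest, history)
```

Millions of vehicles then hit the digests, or services built on them, rather than the sensors themselves.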

Cloud providers already support this kind of model, not under the IoT label but through services like Amazon’s Kinesis, which can be used to pass sensor information through complex event processing and analysis, or to spawn other streams that represent individual applications or needs.  You can then combine this with something like Amazon’s Data Pipeline to create complex work/storage/process flows.  The same sort of thing is available in Azure.
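For the Kinesis piece specifically, a producer might look like the minimal boto3 sketch below.  It assumes AWS credentials and a default region are configured and that a stream named “traffic-sensors” (my own placeholder) already exists; the payload shape is also an assumption.

```python
# Minimal sketch: push per-interval sensor digests into an Amazon Kinesis
# stream with boto3.  Stream name and payload fields are placeholders.
import json

import boto3

kinesis = boto3.client("kinesis")


def publish_digest(digest: dict) -> None:
    """Send one digest; downstream consumers (CEP, analytics, derived streams)
    read from the stream rather than hitting the sensors directly."""
    kinesis.put_record(
        StreamName="traffic-sensors",
        Data=json.dumps(digest).encode("utf-8"),
        PartitionKey=str(digest.get("segment_id", "unknown")),
    )


publish_digest({"segment_id": 42, "mean_speed": 37.5, "count": 118})
```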

You could call the foundation services here “first-level” foundation services in that they are basic functions, not specific to an application or even application model.  You can also easily imagine that Microsoft and Amazon could take these first-level services and build them into a second-level set.  For example, they could define a set of collector processes that would be linked to registering devices, and then link the flows of these collectors with both real-time correlation and analytic storage and big data.  There would be API “hooks” here to allow users to introduce the processing they want to invoke in each of the areas.
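If a provider did package such a second-level service, the developer-facing surface might resemble this hypothetical Python sketch; none of the class or method names correspond to an actual cloud API.

```python
# Hypothetical "second-level" collector service: devices register, and users
# attach hooks for the real-time correlation path and the analytic/batch path.
from typing import Callable, List

RealTimeHook = Callable[[dict], None]
AnalyticHook = Callable[[List[dict]], None]


class CollectorService:
    def __init__(self, batch_size: int = 100) -> None:
        self.devices: List[str] = []
        self.realtime_hooks: List[RealTimeHook] = []
        self.analytic_hooks: List[AnalyticHook] = []
        self.buffer: List[dict] = []
        self.batch_size = batch_size

    def register_device(self, device_id: str) -> None:
        self.devices.append(device_id)

    def on_event(self, hook: RealTimeHook) -> None:
        """User-supplied processing for the real-time correlation path."""
        self.realtime_hooks.append(hook)

    def on_batch(self, hook: AnalyticHook) -> None:
        """User-supplied processing for the analytic/big-data path."""
        self.analytic_hooks.append(hook)

    def ingest(self, reading: dict) -> None:
        """Called per sensor reading; fans out to hooks, batches for analytics."""
        self.buffer.append(reading)
        for hook in self.realtime_hooks:
            hook(reading)
        if len(self.buffer) >= self.batch_size:
            for hook in self.analytic_hooks:
                hook(self.buffer)
            self.buffer = []
```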

These second-level services could also be made into third-level services.  Traffic analysis for route optimization is an example; a GPS app could go to such a service to get traffic conditions and travel times for a very large area, and self-drive controllers could get local real-time information for what could be visualized as a “heads-up” display/analysis of nearby things and how they’re moving.

The emergence of an OTT IoT business actually depends more on these services than on sensor management.  As I’ve already noted, you can’t have individual developers all building applications that would go out and hit public sensors; there’s no sensor technology short of a supercomputer that could handle the processing, and you’d need a gigabit trunk to the sensor to carry the traffic.  The reality is that we have to digest information from sensors in different ways to make the application practical and control sensor costs.

Why are we not seeing something logical here, then?  Why would Google be doing something that falls short of the mark, utility-wise?  The likely answer lies in how technology markets evolve.  We hear about something new, and we want to read or hear more.  That creates a media market that is far ahead of any realization—how far depends on the cost of adoption and the level to which an early credible business case can be defined.  During the media-market period, what’s important is whether an announcement gets press attention, and that relies most often on the announcement tracking the most popular trends, whether they’re likely to be realized or not.  We’ve seen this with NFV, with the cloud, and with most everything else.

Eventually, though, reality is what’s real.  You can only hype something until it’s clear that nothing useful will ever happen, or until the course the technology will really take becomes clear and drowns out the hype.  We’re already getting to that point in NFV and the cloud, and we’ll get there with IoT as well.

Speaking of IoT, and of real-time processing and workflows, all of this stuff is going to end up shaping NFV as well.  IMHO, there is no arguing with the point that NFV success has to come in the form of NFV as an application of carrier cloud.  Carrier cloud is a subset of cloud.  Right now we have an NFV standardization process that’s not really facing that particular truth.  IoT and real-time control are also applications of “carrier cloud” in the sense that they probably demand distributed cloud processing and mass sensor deployment that operators would likely have to play a big role in.  If a real-time application set drives distributed cloud feature evolution, then that could build a framework for software deployment and lifecycle management that would be more useful than NFV-specific stuff would be.

I also believe that operator architectures like AT&T’s or Verizon’s are moving toward a carrier-cloud deployment more than a specific deployment of NFV.  If these architectures succeed quickly, then they’ll outpace the evolution of the formal NFV specifications (which are in any event much narrower) and will then drive the market.  Operators have an opportunity, with carrier cloud, to gain edge-cloud or “fog computing” supremacy, since it’s unlikely Amazon, Google, or Microsoft would deploy as far as a central office.  If, of course, the operators take some action.

They might.  If Amazon and Microsoft and Google are really starting to assemble the pieces of a realistic IoT cloud framework, it’s the biggest news in all the cloud market—and in network transformation as well.  Operators who don’t want to be disintermediated yet again will have to think seriously about how to respond, and they already admit that OTTs are faster to respond to market opportunities than they are.  It would be ironic if the operators were beaten by the OTTs in deploying modernized examples of the very technologies that are designed to make operators more responsive to markets!  IoT could be their last chance to get on top (literally!) of a new technology wave.