Three Steps to Maximize 5G (That Nobody Will Like!)

I think there are three steps to 5G success.  Some of the steps are intuitive, some are counter-intuitive, and one may be downright offensive to some of the 5G proponents.  I’m sorry about that, but I have to call things as I see (or, in this case, model) them.

The first step is to recognize that supply-side thinking in telecommunications is dead forever.  Networks once were fulfillments of natural, unequivocal demand.  If your goal is to support people talking and listening, nearly everyone is equipped to play their role.  If your goal is to provide them with information and entertainment, we have a series of objective steps that can identify the service characteristics and total addressable market.  When we try to promote network technology with value propositions beyond the things we can quantify, we’re in a different world, a world that I can’t visit.

5G standards, like every other standards process I’ve either participated in or been aware of for the last 15 years, were flawed by supply-side bias, the classic “Field of Dreams” approach.  I understand why that is; it worked for decades and it eliminates the need for network technologists to deal with uncomfortable questions and issues.  But whatever else happens now, it should be clear that 5G standards haven’t served the industry well.  We can’t go back and fix that at this point, but what we can do is ensure that we don’t throw other technologies under the standards-organization bus.

Rakuten is demonstrating what’s arguably another model already, focusing on delivering service features more than rigid standards compliance.  It illustrates a key point, which is that a cellular network’s standards must define the interface between network users and the networks, and the way services cross provider boundaries.  The other role of standards, which is to ensure interoperability among devices within the network, is likely to be less valuable as those devices are implemented as cloud-hosted functions.

But don’t take this as my recommending that 5G is itself immune from standards-created risks.  A number of my telco friends tell me that the 5G vendors are pushing for a “complete commitment” to 5G, meaning adoption of all the 5G standards, including 5G Core.  Even when operators say they’re concerned about the business case and would prefer to ease in via 5G New Radio and NSA (Non-Stand-Alone, meaning 5G NR with 4G core), vendors push for more commitment.

Vendors have been reluctant to abandon that Field of Dreams, the old notion that infrastructure is always built in anticipation of demand.  That’s not been true for at least fifteen years, people.  Get over it.  The choice today is to face the need to do something meaningful at the application/service-feature level, or fail.  The problem is that for vendors focused on making their numbers in the current quarter, neither of these choices seems viable.  So instead, just ignore the facts and hope buyers will do the same.

The second step is to embrace open-model technology for 5G.  There are very few 5G vendors who will like that idea, of course, but the fact is that all technologies that depend on broad adoption for their success are going to be under cost pressure.  The more evolutionary they are in terms of their application, the more cost pressure is likely.  5G may have the potential to do new things, but if we picked a random smartphone user and gave them 5G, they’d likely never know they had it.  We need cheaper 5G gear, as Huawei’s success in the space shows.  We’re not going to get gear cheap enough without an open approach.

We have open options already.  Fierce Telecom described the latest marriage of an open-source device operating system (ONF Stratum) and a Telecom Infrastructure Project (TIP) open-model device (the Cassini optical transport gear).  The two organizations have been increasing their level of cooperation, and operators have been showing broader interest in both groups, and in particular in activities arising out of the cooperation.

The overall approach taken by the ONF with Stratum is not dissimilar to the approach taken by traditional operating systems to hardware variability; you have a “driver” that represents a class of hardware interfaces, and this driver then presents a single interface upward to the operating system and any applications.  Stratum embraces the P4 flow programming language this way, and that means that enhancements to network forwarding created at the chip level can be accommodated in an open way.
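To make that driver analogy concrete, here’s a minimal Python sketch of the pattern I’m describing: each chip family implements the same northbound interface, so the layers above never see the hardware differences.  The class and method names here are my own illustrations, not Stratum or P4 APIs.

```python
from abc import ABC, abstractmethod

# Illustrative driver-abstraction sketch: names are hypothetical, not Stratum APIs.

class SwitchChipDriver(ABC):
    """One 'driver' per class of forwarding hardware; a single upward interface."""
    @abstractmethod
    def install_flow(self, match: dict, action: str) -> None: ...
    @abstractmethod
    def chip_family(self) -> str: ...

class VendorAChipDriver(SwitchChipDriver):
    def __init__(self):
        self.flows = []
    def install_flow(self, match, action):
        # Here is where a generic flow rule would be translated into
        # chip-specific table entries; we just record it for illustration.
        self.flows.append((match, action))
    def chip_family(self):
        return "vendor-a"

class ForwardingPlane:
    """The layer above sees one interface, whatever silicon sits below."""
    def __init__(self, driver: SwitchChipDriver):
        self.driver = driver
    def add_route(self, prefix: str, next_hop: str):
        self.driver.install_flow({"ipv4_dst": prefix}, f"forward:{next_hop}")

plane = ForwardingPlane(VendorAChipDriver())
plane.add_route("10.0.0.0/8", "port7")
```

Swap in a driver for different silicon and the `ForwardingPlane` layer is untouched, which is the open-accommodation point: chip-level forwarding enhancements surface through a common interface rather than a proprietary one.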

This hardware/software symbiosis is critical if an open-model approach to 5G is to be successful.  There are many missions associated with 5G, many approaches to fulfilling them.  Hardware/software innovation is not only good for missions, but for vendors.  Specialization of open hardware with chips, or open software via higher-level (above Stratum) elements, represents the best response network vendors could have to the inevitable price pressure of 5G, price pressure that’s intrinsic to the space’s dependence on the mass market, not due just to open-model competitors.

Stratum isn’t the only option, of course; DANOS is another network operating system that’s been open-sourced.  I’m less concerned about having a single focus here than on having an optimum hardware-software marriage, and with two initiatives in play we may improve our chances of getting one.  We may also improve the chances that what we end up with can be built up to support features above the network layer.

That brings up the last step, which is to build above the connection layer.  Look for a moment at the Internet.  As a “network”, it’s pretty much the same connectionless IP that’s been around for decades, and in that form, it would have very limited utility, and likely zero mass-market appeal.  Imagine socially minded teens saying “Hey, let’s go push some connectionless packets!”  What made the Internet, made it a cultural revolution, was not IP, but things like DNS, HTML, search engines, SIP servers, and other stuff that represented a higher layer of functionality.  The higher layers build mission support, and missions are what have business cases.

For years now, I’ve been harping on the notion that personalization and contextualization were the critical elements of whatever comes next for both business productivity and consumer entertainment and empowerment.  I’ve used the concept of “information fields” to illustrate how workers or consumers would move through a virtual world made up of what we know about them, their goals, their surroundings.  Obviously, I like the approach, but feel free to suggest a different model.  The point is that by picking the “information fields” model, I can then postulate the specific technical elements of that higher service layer.  From that, it would be possible to architect the layer, defining not only the features that it, in turn, would publish to applications above, but the features it would expect from below.  That’s how we get 5G into the picture.

There are a lot of things needed to create a virtual-world-to-real-world parallelism and exploit it in applications.  Location of the worker/consumer is a big piece, and relative location (the worker to the thing that’s to be worked on, the consumer to the shop or restaurant, the car to the intersection) the biggest piece of all.  I went with the “information fields” analogy because it’s convenient to think of empowering data as a “field” that extends our senses.  We have a visual field, so why not an information field?  Creating that field involves issues like where the intersection of fields is processed (the edge, the device?), how a user communicates sensitivity to a field (do they “look” for it, does their behavior imply they want it?), and how sensitivity translates into having the field information actually made available.  These are the application model issues I’m hoping we’ll start thinking about.
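Here’s a deliberately simple Python sketch of what field intersection might look like: a field is data anchored in the real world, and a user “senses” it when they’re inside its radius and have expressed (or behaviorally implied) sensitivity to its topic.  The matching rule, the flat geometry, and every name here are my own assumptions, not a defined architecture.

```python
import math

# Illustrative-only model of "information fields"; all names are hypothetical.

class InformationField:
    def __init__(self, topic, x, y, radius):
        self.topic, self.x, self.y, self.radius = topic, x, y, radius
    def covers(self, x, y):
        # A field "extends our senses" out to some radius around its anchor.
        return math.hypot(self.x - x, self.y - y) <= self.radius

def visible_fields(fields, user_x, user_y, sensitivities):
    """Intersect the user's position and declared interests with nearby fields."""
    return [f.topic for f in fields
            if f.covers(user_x, user_y) and f.topic in sensitivities]

fields = [InformationField("restaurant-menu", 0, 0, 50),
          InformationField("transit-alert", 200, 200, 30)]
nearby = visible_fields(fields, 10, 10, {"restaurant-menu", "transit-alert"})
```

Even this toy version surfaces the real questions: the intersection could run on the device or at the edge, and `sensitivities` could be explicit (the user “looks”) or inferred from behavior.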

The presumption that 5G applications are things that can be done only with 5G is destructive, not helpful.  Again, I understand the lure of the “killer app”, but it’s been a long time since we’ve had a single unrealized opportunity that could pull through a mass technology change in a single revolutionary stroke.  What we need to have is not a 5G app, but a 4G mission whose early fulfillment would inevitably lead to the need for specialized 5G support.  If we get started on this now, we can still boost 5G, IoT, edge computing, and AI to near-optimum levels.

My frustration in all of this is that my modeling suggests that every feature of 5G and every anticipated mission for things like IoT and edge computing could be justified.  There are credible applications and services that would generate the business cases needed, and in most cases these applications and services could be realized with less effort overall than we’ve spent on things like NFV.  The difference is that rather than trying to promote a massive early buildout based on the hope of a business case, these applications/services would address the business cases immediately, then pull through technology changes down the line.  This is the logical approach, and I guess I’ll never understand why vendors and operators alike are refusing to consider it.

The Best Job and Industry Targets for Productivity Empowerment

If business services are as critical as they seem (to the cloud, 5G, IoT, AI, and more), then we need to understand as much as possible about them, and how they’re candidates for driving technology change.  As I noted in an earlier blog, there’s a trend today to jump on business services as the logical driver of practically everything.  I’ve also said that what we need in 5G or IoT is not a killer app, but a killer trend.

I’ve been studying and modeling business services for decades, and so I have a lot of material to cite on the topic, and I promised to get into more detail in a later blog.  This is it.  What I hope to do is identify the targets that show the greatest promise in business services.  It’s not a question of picking an application, but picking a strategy of the sort that has driven past spurts of IT growth and innovation.

To start with, it’s impossible to talk about business services without talking about business benefits.  There are a lot of things we could say about how a service might improve revenues or reduce costs, but over time (since the 1950s, in fact) the one provable truth was that companies invest in IT to improve worker productivity overall.  That means working faster, working with fewer errors, getting the correct or optimum result faster, etc.

Generally, IT spending (budgets) grows roughly in proportion to GDP, for the obvious reason that businesses overall spend to realize the overall opportunity.  When a new paradigm develops to improve productivity, it’s followed by a period of growth in IT spending relative to GDP growth, meaning IT spending grows faster than the economy does.  When that paradigm has been fully realized, the pace of spending growth drops below the pace of GDP growth.
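A toy calculation (with invented growth rates, not numbers from my model) shows the cycle mechanics: while a paradigm is being exploited, IT spending outgrows GDP and the IT/GDP ratio climbs; when the paradigm is exhausted, growth lags GDP and the ratio falls back.

```python
# Invented-numbers illustration of the IT-spending-versus-GDP cycle.
gdp_growth = 0.03
it_growth_by_year = [0.06] * 4 + [0.01] * 4   # paradigm years, then exhaustion

gdp, it = 100.0, 100.0
ratios = []
for g in it_growth_by_year:
    gdp *= 1 + gdp_growth
    it *= 1 + g
    ratios.append(it / gdp)   # IT spending relative to the economy

# The ratio peaks at the end of the paradigm phase, then declines,
# which is the rise-and-fall ("sine wave") shape of past cycles.
peak = max(ratios)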

The drivers of past growth/decline cycles (they actually plot as a kind of sine wave) have all deepened the integration of IT with workers’ jobs.  I coined the term “jobspace” over a decade ago to describe the total information content, the sum of the IT activity, associated with a worker’s job.  Big jobspaces mean a lot of information coupling, and conversely a small jobspace means little information is actually required or useful.  When a worker has a big jobspace, integrating the information with the job is more likely to improve productivity.  That’s been true in the past, and we should expect future cycles to have this same dependency on information’s ability to empower workers.

The business benefit of a worker productivity improvement is proportional to the worker’s unit value of labor, which is the sum of wages, benefits, and spending on services associated with the worker.  A big unit value of labor means a productivity improvement creates a significant benefit, which means it’s more likely to get project approval.  Based on current project approval (internal rate of return versus project ROI) requirements, about a third of workers in the US are “empowerable”, meaning they have a combination of jobspace and unit value of labor that justifies productivity improvement projects.  Industries with a higher percentage of empowerable workers are better targets for sale of products/services than ones with lower levels.  Jobs with higher empowerability have more potential for project support than those with lower potential.
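The arithmetic here is simple enough to show directly.  In this sketch the dollar figures, productivity gain, and hurdle rate are all invented for illustration; only the structure (benefit proportional to unit value of labor, compared against an approval threshold) comes from the discussion above.

```python
# Toy worked example of the "unit value of labor" approval arithmetic.
# All specific numbers are invented; the structure is the point.

def unit_value_of_labor(wages, benefits, support_services):
    return wages + benefits + support_services

def project_justified(uvl, productivity_gain, project_cost, hurdle=0.25):
    """Annual benefit is the labor value unlocked; compare simple ROI to a hurdle."""
    annual_benefit = uvl * productivity_gain
    return (annual_benefit / project_cost) >= hurdle

uvl = unit_value_of_labor(wages=80_000, benefits=24_000, support_services=16_000)
justified = project_justified(uvl, productivity_gain=0.05, project_cost=20_000)
```

With these numbers, a 5% productivity gain on a $120,000 unit value of labor clears a 25% ROI hurdle on a $20,000 project; halve the unit value of labor and the same project fails, which is why high unit-value workers are the better targets.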

Almost a third of all workers are empowerable, meaning that their unit value of labor and jobspace combine to validate at least some productivity enhancement projects.  If we were to optimally empower them all, we would initiate a new IT investment cycle that, over an eight-year period, would boost IT spending by an average of 12% over the baseline rate of GDP growth for each year, meaning we’d essentially double IT spending for the period, versus normal growth.

The challenge, of course, is empowering everyone, and in the real world of enterprise IT budgets you’d never get approval for something as aggressive as a mass empowerment touching a third of all workers (in some companies with uniformly high unit values of labor, you’d touch every worker).  The logical approach would be to ease into the empowerment transformation by focusing on workers who were not just “empowerable” but highly empowerable.  Who are they?

If you recall my PRIOR BLOG on the phases of productivity empowerment, you know that I’m postulating the next step in empowerment is to push it to the hands of the worker, at the point where they’re doing the work.  In past blogs I’ve called this “point-of-activity empowerment” because you’re touching the work as much as the worker.  It’s point-of-activity empowerment that offers the potential to touch a third of all workers, but obviously workers who are not only empowerable but highly mobile and highly valuable constitute the brass ring in terms of ROI.  Those workers make up about 8% of the total workforce in the US (and a similar percentage in other developed economies).

These workers are even more focused on specific verticals.  The “best” sector, healthcare, has almost 20% of its workers as primo targets for point-of-activity empowerment projects, and the worst (agriculture, forestry, fisheries) has only 0.62% of workers as prime targets.  Of the top 20 major industrial/industry classes, 8 reach the threshold of practical near-term empowerment.  That means that these sectors should lead the charge.  They’re Health Care and Social Assistance, Management of Companies and Enterprises, Federal-State-Local Government, Professional-Scientific-Technical Services, Educational Services, Accommodation and Food Services, Finance and Insurance, and Information.  In the US, the total workers in the sectors identified as high-value are about 13 million, about 8% of the labor force overall.

The most easily targeted sectors are where productivity improvements could reduce the need to add headcount.  About half the total employees in the high-value target sectors are in job categories that are underfilled, the most being in health care and professional/scientific/technical services, but government employees at all levels are also an opportunity where population growth or other factors create unusual demand for workers.

There are two reasons why vertical market sectors could be important in promoting empowerment.  The first is that many software providers will target specific verticals, even with software that could be used more generally.  Reference-based selling is more powerful within an industry than between industries, as nearly all sales research (including mine) has shown.  The second reason is that industry targeting fits a solution better to members of that industry, requiring less of prospects in the way of customization or unique development.

The risk of vertical segmentation of early productivity empowerment is that solutions would be less portable to other sectors.  The overall model for all four phases of productivity empowerment software evolution (covered HERE) seems to show that many of the most time-consuming software development tasks would create a framework that would apply equally across all sectors.  It also seems that the customization or personalization needed to apply most of them to industries would be modest.  On the other hand, it’s very likely that looking at a single industry for inspiration could lead to excessive specialization of the base platform, requiring more work to make it useful elsewhere.

Modeling, surveys, and my own experience in software design and development all suggest that the elements of the four-phase evolution of empowerment could be built into a generalized software product with a modest level of work, perhaps six to ten developer-years total effort, and even less time could be needed if some open-source tools could be leveraged.  The question is who’d be willing to expend the effort or fund (through contributions of time or money) an open-source development?  Who has the credentials?

I don’t think network operators have a shot at this task; they’re not software-centric enough.  Proprietary software vendors seem unlikely to build a general toolkit; they’d want a proprietary software product.  That leaves IT hardware vendors, network vendors, open-source software players, and cloud providers.

Of this group, I’d expect that the public cloud providers would logically lead the charge.  Point-of-activity empowerment seems to require the elasticity that the cloud can provide, and much of the empowerment process will be related to merging current IT data with other information drawn from outside, including location information.  Much will involve mashups that integrate multiple information and communications channels into one easy-to-use panel or screen.

I’d like to see open-source take a greater role in the process than even the cloud providers, although some cloud providers (like Google) have a reputation for open-source support of this sort of initiative.  Google’s recent alliance with AT&T for edge business service support (in 5G applications) could be a sign they’re working on the subject.  The reason for my preference is that a good open-source solution, in the form of a set of middleware elements to be used in application-building, would reduce the risk of balkanization of the space, something likely if cloud providers all roll their own solutions.

The most important thing is to focus as much development as possible on the common toolkit.  There are, as I’ve noted above, a lot of verticals that show greater promise than others, and a lot of employee types within those verticals.  None of them, taken one at a time, are likely to be able to build a broad business case, so we need to collectivize our efforts to create that rising tide that lifts all boats.

Are Carriers Giving Up on Carrier Cloud?

AT&T and Google have partnered in a cloud venture, but it’s just an indicator of a broader strategy by operators and cloud providers, and one that could impact cloud software players like IBM/Red Hat and VMware.  The move is a further indication that network operators are not rushing out to deploy carrier cloud, and that alone has significant ramifications.  This could be huge (as they say) or it could be another flash in a vastly over-flashed pan, and there are even related news items to consider while trying to build a broad strategic view of the venture.

Five years ago, carrier cloud presented the largest potential new data center opportunity in the world, with the potential to create over 100,000 new data centers by 2030.  I said at the time that there were six driver applications for carrier cloud, ranging from the small-ish but early NFV possibility to the intersection of personalization, contextualization, and IoT, which alone would have accounted for more than half the data center total.  However, this was a forecast of potential and not of realization, because from the first it’s been clear that operators were struggling to make any headway in the space.

The prospect of a hundred thousand new data centers would be enough to make server vendors and software-platform suppliers salivate.  I think that IBM/Red Hat and VMware are obviously targeting the network operators, and for them to win big, the operators have to deploy a large number of those data centers.  While the demand drivers would ebb and flow in importance over the coming decade, there’s been at least a chance that deployment would eventually happen.  Now, that’s in question.

AT&T cut an earlier deal with Microsoft in the cloud space, and a number of international operators have also inked cloud pacts with some of the public cloud providers.  These haven’t closed the door on carrier cloud (many, in fact, were really about resale of cloud services by the operators), but they’ve all demonstrated that operators overall are unwilling to build it and hope they come, so to speak.  The AT&T deal is a reinforcement of this, because it’s about (according to Google) “monetizing 5G as a business services platform, engaging customers with data-driven experiences, and improving operational efficiency across telecom core systems.”  This isn’t resale of cloud services, it’s darn-straight carrier cloud outsourced to a cloud provider.

One obvious truth here is that if carrier cloud could have generated a hundred thousand incremental data centers for operators, outsourced carrier cloud would likely result in a comparable number of data centers split among public cloud providers.  What was at one time the biggest pie for server and software vendors to hope to slice in their favor, now becomes something cloud providers may well be fighting over.  But the AT&T/Google deal takes us back to a business platform again, another example of the fact that 5G proponents are looking for anything they can claim will drive their technology.  Business 5G isn’t going to drive a hundred thousand data centers.  What’s needed is for it to jump-start something that can then be given further stimulus by other applications and missions.

There’s a lot at stake here.  Any cloud provider with a carrier cloud win of significant magnitude could ride that win to market leadership no matter what else happened.  None of the cloud providers are touting this at the moment, though, because it’s clear that the operators are very reluctant to rush out and admit that they can’t make carrier cloud work…but it’s sure looking like they can’t.

This deal could have enormous consequences for Google, and not just because it gives them an opportunity to go after their share of those one hundred thousand data centers’ worth of cloud demand.  Google is in third place among cloud providers, and it’s struggled to create momentum even as things like hybrid cloud and Google’s own Kubernetes invention have triggered revolutions in cloud buying.

Also remember the deal is about edge computing, which in any form introduces a pretty significant requirement for segmenting resource pools by their characteristics, which in turn demands tools for policy-based Kubernetes federation of deployments across all those discontinuous resource pools.  This is very similar to the needs of hybrid-multi-cloud deployments, and in fact among the top data center owners in the world, over three-quarters actually have a need for segmenting data centers (and sometimes resources within them) by characteristics.  Google’s Anthos, a big part of this announcement, is one tool to provide this, and in my view, the best tool among those offered by cloud providers.  It wins in the deal, as does Kubernetes, as does containerized software in general.
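The shape of that placement problem is easy to sketch.  This is not Anthos or Kubernetes API code, just an illustration with invented pool names and attributes: workloads carry policy requirements, resource pools carry characteristics, and a federation layer matches one to the other.

```python
# Hypothetical sketch of policy-based placement across segmented resource
# pools; pool names, attributes, and the policy schema are all invented.

pools = {
    "metro-edge-1": {"latency_ms": 5,  "gpu": False, "region": "us-east"},
    "regional-dc":  {"latency_ms": 25, "gpu": True,  "region": "us-east"},
    "core-cloud":   {"latency_ms": 70, "gpu": True,  "region": "us-central"},
}

def place(workload, pools):
    """Return the pools whose characteristics satisfy the workload's policy."""
    return [name for name, attrs in pools.items()
            if attrs["latency_ms"] <= workload["max_latency_ms"]
            and (not workload["needs_gpu"] or attrs["gpu"])]

latency_sensitive = place({"max_latency_ms": 10, "needs_gpu": False}, pools)
gpu_analytics = place({"max_latency_ms": 30, "needs_gpu": True}, pools)
```

The edge wrinkle is that these pools are discontinuous: a latency-sensitive workload has only the metro edge to run in, so the federation tooling has to steer it there by policy rather than treating all capacity as one fungible pool.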

Obviously, other cloud providers aren’t going to take this challenge lying down.  Microsoft has its own deal with AT&T, but Microsoft’s deal is more focused on resale of Azure by AT&T than on hosting AT&T network functions.  Google’s deal could well end up doing the latter, which is why it’s a more fundamental threat to the carrier cloud as an independent operator deployment (rather than as a segment of the public cloud market).  The question that comes to mind is “Why Google?” and I can offer some insight from my own years of experience with operators.

About a dozen years ago, I was asked by a committee of operators to approach Google on the subject of what would be called today a “cloud partnership”.  I worked with some of my Google contacts, and they let me know quite bluntly that management would have zero interest in such a thing.  The point is that the operators wanted to approach Google, not Amazon or Microsoft, which shows that Google had stuff the carriers already knew they might need.  Since then, Google has built not only one of the top three public clouds, but the most important tools for cloudifying applications.  They have the best example of an SDN-based IP network in the world, and they also run the largest cohesive but distributed data center.  Those are a lot of good reasons why operators might like them.

But it gets better.  Google isn’t Amazon, who operators feared even twelve years ago.  Google sees “the cloud” in a clear technology light, free of any biases based on their own sales of services or the competitive dynamic in the market.  In some ways, the fact that Google isn’t a hybrid cloud leader already is a good thing, because operators could undertake (as AT&T has) a partnership with Google aimed at business 5G service opportunities, without colliding with Google’s own service plans.  Of course, 5G is a big focus of operator interest, but from an edge hosting perspective, any sort of business service could well be just as valuable.  Leverage AT&T’s connectivity, add in a mix of Google Cloud, and you have something that could be compelling, both to operators and also to business customers.

Remember my four phases of worker empowerment?  They were linked to mobile services, and even perhaps to 5G.  They unlocked an enormous potential pool of new benefits to drive purchase of equipment, software, and services.  It’s not difficult to see that applications that related to those four phases could easily involve edge hosting, and thus be a target of Google’s initiatives.  Also note that the later phases of empowerment involve personalization, contextualization, and IoT.  Those were the drivers that combined to create over half those hundred-thousand data centers.

This, IMHO, is the critical opportunity, and it also defines the critical risk for the venture.  Too much focus on abstract business services invites the initiative to fall into one of those circular-justification pits—5G is essential for edge computing for businesses, which is the driver of 5G.  Something has to be a driver without dependency on that which it drives, or it’s just eating power and HVAC.  Or it’s going nowhere.

There are business services that could make this venture a success.  Success here could lead to wider consumer applications of personalization, contextualization, and IoT.  The reason the deal is important isn’t that it answers all the questions that need to be answered (it doesn’t answer any), but that it introduces a relationship that could lead to the answers, one that combines a desperate partner (AT&T, who is looking to cut costs massively, including laying off people) and one with great power (Google, who has probably more of the right answers than anyone else).

Telco/public-cloud cooperation in business services, dependent as they are on productivity gains, could create a back door through which the personalization and contextualization services bleed away from carrier cloud into public cloud.  That’s not the only threat.  We also need to consider the impact that SaaS might have on the telco cloud.  Salesforce has acquired Vlocity, a kind of spun-out play for vertically focused CRM.  Network operators are one of the specific targets, and this shows that cloud providers, via SaaS, could possibly host operations software for niche organizations within the operators, gradually siphoning off what might otherwise have turned into telco cloud applications, even within the OSS/BSS framework.

SaaS applications aimed at telcos’ own operations processes parallel some early telco outsourcing of business activities to public cloud providers (AT&T announced this earlier).  While these don’t seem to have the potential to evolve into generalized frameworks for personalization and contextualization, as the more recent Google/AT&T announcement does, they do have the potential to tap off early carrier cloud deployment incentives.  If there is no near-term carrier cloud deployment driven by other factors, then there’s limited infrastructure in place to incubate emerging contextual service opportunities.

My model suggests that the creation of contextual services, leveraging both IoT information and customer information, would get underway around 2022, but that presumed some earlier deployment of carrier cloud justified by other drivers.  If those drivers are lacking, there is not likely to be pre-existing carrier cloud capacity available for the early market trials of contextual services.

On the other hand, if operators truly reconcile to outsourcing carrier cloud to the public cloud providers, there’s no need for early telco-cloud opportunities to build up resource pools; they’re already there in the public cloud.  Might that jump-start personalization and contextualization, not only for business services but for the consumer space?  Perhaps.

What may be the best news is that cloud providers are in a better position than network operators to come up with the software framework needed to support all six of the “carrier cloud” drivers.  The partnerships between operators and cloud providers may be the way the former group is expressing its admission of failure.  Since all the carrier cloud drivers relate to services above traditional connection services, cloud providers probably have a better handle on what’s needed.  The partnership may close some doors for vendors who hoped to grab some of those hundred-thousand data centers, but in the long run it may deliver more value for everyone.

For 5G and Edge, Specific Target Applications for Productivity Improvement

Business productivity improvement should be the target for a lot of the tech initiatives we hear about, including things like 5G, AI, and edge computing.  In a PRIOR BLOG I talked about the importance of business productivity enhancement in the 5G business case.  At some future point, I’ll dig into the details of how and where such enhancements should be targeted.  Here, I want to explore the technical framework for enhancing productivity at the next level.

My modeling shows that the next wave in productivity enhancement for workers has to shift the focus of “empowerment”.  Today, we think of empowering workers via IT as establishing an IT-centric framework for a job, which we do by giving workers “applications”.  The workers’ task is then to do the job by doing the application.  This is logical in a lot of ways, but it’s not optimum in most cases, and it’s downright impossible in others.

The new empowerment focus has to be integrating IT into the work, not the other way around.  In order to do this, we have to provide the worker with an information appliance that’s handy, which is where smartphones (or networked tablets) come in.  We’re culturally tuned to having a smartphone in our pocket/purse, and so there’s no great transitional training effort needed to exploit that in the work context.  In fact, “mobile empowerment” has been a driver of a lot of stuff already, including the cloud.

We can call “Phase One” of mobile empowerment an information portability approach.  The applications and data involved are largely the same as that already in use at desks or other fixed locations, but the platform to which they’re delivered is now the smartphone.  This model of empowerment doesn’t require anything new in the way of technology, if we define “new” as meaning beyond what we already have.  It’s a good on-ramp to other more specialized approaches to empowerment, but it’s not revolutionary, nor is it likely to increase IT budgets significantly.

It could still play a role in later phases, though.  We have cobbled together a series of “cloud front-end” approaches to mobile empowerment, and these are aimed at creating a better user experience through either a browser or app interface.  It would have been great to have seen a model of this front-end development be suitable for later phases, but since we haven’t anticipated those later phases, that would happen only through serendipity.

Phase Two of mobile empowerment is what we could call the “co-presence” phase.  Workforces are just what the name suggests, meaning groups of workers.  Most people don’t work in their own little world, they share projects with others and play their own role.  That implies coordination and collaboration, and so the second phase of mobile empowerment is to create a useful collaborative framework.

Collaboration isn’t impossible today, even with mobile workers.  Phones have voice, text, and video calling and can also interact with web collaboration sites.  The problem that needs to be solved here isn’t creating a collaborative relationship, as much as preventing the collaborative relationship from interfering with (rather than supporting) the work.  Smartphone screen real estate is limited, so having a full-screen web conference kind of rules out looking at data at the same time.  That’s particularly true when you consider that collaborating with a mobile worker, in my surveys, is almost certain to involve sharing some data or visual focus.

The enterprises I’ve surveyed that had actually looked at better mobile empowerment suggested that what was needed was a “panel” or “mashup” approach.  The goal would be to first select the kind of communication that was actually needed by the worker.  Video calling commits both parties, to a degree, to video.  In most cases, the collaborating parties didn’t need to see each other, they needed to see what the other was seeing, or more directly, what the worker was working on.  The panel or mashup means that a visual “job frame of reference” is established to contain what any of the parties needs to see/hear/read.  The parties are then allowed to set their own “viewer” to select from the referenced items, and in some cases to direct the viewer of the other party to focus on something specific.
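As a sketch only, the “job frame of reference” and per-party “viewers” might be represented along these lines; the class and field names here are my own illustration, not anything the surveyed enterprises specified:

```python
from dataclasses import dataclass, field
from typing import Dict, Optional

@dataclass
class SharedItem:
    """One thing the collaborating parties may need to see, hear, or read."""
    item_id: str
    kind: str         # e.g. "schematic", "work-order", "camera-feed"
    content_ref: str  # pointer to the actual data, wherever it lives

@dataclass
class Viewer:
    """Each party's own window onto the frame of reference."""
    party: str
    focused_item: Optional[str] = None

@dataclass
class JobFrameOfReference:
    """The shared visual context containing everything any party needs."""
    job_id: str
    items: Dict[str, SharedItem] = field(default_factory=dict)
    viewers: Dict[str, Viewer] = field(default_factory=dict)

    def add_item(self, item: SharedItem) -> None:
        self.items[item.item_id] = item

    def join(self, party: str) -> Viewer:
        viewer = Viewer(party)
        self.viewers[party] = viewer
        return viewer

    def direct_focus(self, target_party: str, item_id: str) -> None:
        # One party directing another party's viewer to a specific item.
        if item_id in self.items and target_party in self.viewers:
            self.viewers[target_party].focused_item = item_id
```

The point of the structure is that each party selects from the same referenced items independently, rather than everyone being locked into one full-screen video view.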

There’s a need for additional front-end visual mashup tools, building the frame of reference and managing the viewers.  There’s also likely to be an increase in the mobile traffic required, but the most significant difference in this phase is that the tighter integration between worker, partner, and information means that reliability of the entire application ecosystem is more critical.  Loss of the tools means a major shift in worker behavior and a loss of productivity rather than a gain.  Thus, this phase of mobile empowerment could generate incremental spending.

Phase Three moves IT closer still to the work processes themselves.  We’ll call this the “Hal” phase, because it would involve the injection of artificial intelligence into worker empowerment.  While this is a distinct phase of empowerment strategy, it’s divided into steps, and the order in which these are taken may vary.

One step is to have AI create the job frame of reference based on past experience with the job, a machine learning mission perhaps.  Similarly, AI could generate the viewers for the parties involved and keep the viewer information synchronized and relevant to activity.  Speech recognition is likely to be useful in this mission, but so is the ability to “see” where any of the collaborators are focusing their attention, by knowing what the information content of a given viewer window was.

AI could also be used to initiate the collaboration, meaning to generate the job frame of reference, make collaborative connection(s), and set viewer contexts based on a worker’s information viewing, speech, buttons, etc.  Think of this process as starting with a panel that lays out a series of steps and asks for confirmation of each, in positive or negative form.  The AI process would look like a kind of finite state machine, leading the worker and initiating collaborative relationships as needed.
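That confirmation-led process might be sketched as a minimal state machine like the one below; the step names and the `confirm` callback (standing in for the worker’s positive or negative response) are my own illustration:

```python
def run_confirmation_panel(steps, confirm):
    """Lead the worker through a proposed series of steps, asking for
    confirmation of each.  `confirm(step)` stands in for the worker's
    positive/negative response to the panel."""
    state = "PRESENTING"
    confirmed, skipped = [], []
    queue = list(steps)
    while state == "PRESENTING":
        if not queue:
            state = "COMPLETE"
            continue
        step = queue.pop(0)
        # How a response event is handled depends on the current state;
        # in PRESENTING, "yes" commits the step and "no" skips it.
        if confirm(step):
            confirmed.append(step)
        else:
            skipped.append(step)
    return confirmed, skipped
```

A real implementation would add more states (for initiating collaborative connections, for example), but the state/event skeleton is the same.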

Another step would be to have AI be the collaborating party.  This goes beyond basic speech recognition to what might be called “contextual interpretation”.  Remember that the worker is using a job frame of reference, and is signaling via worker focus the specific thing they’re exploring.  Speech would be interpreted within a series of set contexts, based on the job frame of reference and the focus.  The AI element would interpret questions and then use some AI technique, such as expert systems or neural networks, to frame a response.  The response could include refocusing the viewers, but also giving directions via voice or triggering a video or textual response.

Augmented reality could also be introduced at this point.  If a worker had AR goggles, for example, it could be possible to “see” what the worker sees, either in the sense of sharing that view with the collaborator(s) or interpreting the view.  The view could then be augmented either by superimposing a reference image or by having a collaborator “draw” or indicate something, which would then be added to what the worker sees.  This step could be eased into if the worker, holding the phone up, could see the image the camera captured as well as any superimposed annotations or images.

And then there’s Phase Four, which is the integration of real-world context with worker activity.  This phase is where IoT information is critical, because the goal is to create that “virtual world” from information, then synchronize it with the real world via IoT-contextual sensor information.

Almost all work involves getting the worker into proximity with what they’re expected to work on, or with.  Much of it also relates to understanding the ambient conditions (including lighting, temperature, humidity, noise level) and inputting them into the real-world simulation, so that the empowerment processes, particularly those involving AI/ML, can accommodate them.

The big difference in this phase versus the earlier ones is the significant injection of information from outside the worker and outside current IT.  Most of that information is aimed at providing a virtual image of real-world conditions so that applications can interpret conditions rather than forcing workers to do that.  True automation is about doing something for the worker, not pushing the worker along a path of doing something, and this is the phase that accomplishes that.

Obviously, the integration of the real and virtual world implies that empowerment processes and tools are coupled more closely to work processes and the worker.  That means they have to be highly available and scalable, but also that the latency associated with the empowerment workflows has to be short enough that empowerment synchronizes with the target worker(s).  Add this to the fact that there is considerably more front-end interpretation and manipulation than before, and you have the reason why my models have said that this phase is the real start of “cloud-native” applications.  This is where the benefits that can justify the shift can be reaped.

The cloud is the consistent beneficiary of the empowerment shift, and if there is indeed a new IT cycle coming along, driven by mobile empowerment, the cloud will both lead it and reap the rewards.  In point of fact, the cloud will drive the bus on all of this, as I hope my descriptions have shown.  That’s why I’m frustrated (and even disgusted) by the constant prattle that “edge-will-do-this” or “IoT-will-do-that”, or “5G-will-do-everything.”  None of them will do anything, or even develop much, without the support of the cloud, and the cloud won’t support them if cloud applications aren’t developed to realize these phases of empowerment growth.

I believe the four phases I’ve outlined here are logical, and that each of the phases can be incrementally justified by the benefits it creates.  I also believe the benefits will build from phase to phase, to the point where they could generate over $700 billion in incremental cloud-related revenue.  Finally, I believe that the software that’s created to support each phase could build the base for the next phases, and also build a framework for things like edge applications, IoT applications, AI applications, and even 5G applications.  This would be the optimum way to get to where all these technologies (and their proponents) want us to be.

Software architects will surely see the shape of the tools and features needed for this, and surely see the value in developing an architecture for Phase Four that’s not excessive for Phase One, but rather can be enhanced into later phases without requiring everything be redone.  That should be our goal; it’s my goal for sure.

Can We Help 5G Reach Its Potential?

It should be obvious that 5G is in trouble, at least in terms of fulfilling all the hopes operators and vendors (in particular) have pinned on it.  Nokia’s CEO stepped down, and there was a rumor last week that Ericsson and Nokia were considering a merger (the rumor was squashed, however).  Huawei, who claims to be winning a lot of 5G deals, is under suspicion by the US government, and AG Barr has suggested the US might want to take a big stake in Ericsson or Nokia to counter Huawei.  This isn’t the dynamic you’d expect in a healthy market, and since 5G is the focus of all the vendor maneuvering, it’s not a sign of a healthy 5G opportunity.  Fierce Telecom makes this point HERE, and there are news stories quoting early 5G users as not getting what they expected.

Well…the simple truth is that we should never have expected the kind of 5G market that’s been publicized.  There has been, and is today, no question that 5G New Radio is a logical successor to 4G, primarily because it offers higher subscriber density per cell site.  There’s also little question that having higher-density sites means having faster backhaul connections, and that the Evolved Packet Core (EPC) component of 4G and its 5G equivalent would likely be adopted as 5G devices and 5G NR rolled out.  However, since there is no credible evidence that consumers will pay more for 5G, any assumption that operators would have significant incremental revenue to fund deployment would have to be based on something other than traditional cell service.

The current fallback positions for the 5G proponents are “IoT” and “business services”.  I’ve worked through additional modeling on both of these, modeling directed at trying to identify a set of opportunities that would maximize the chances of success.  That’s what I propose to work through today.

The “IoT postulate” is simple; an enormous increase in machine communications, created by linking new sensors and controllers, will generate additional 5G customers.  This will counter the fact that smartphone adoption is plateauing as the population becomes fully equipped.  How many new “virtual customers” might be created?  Billions, so they say.

The obvious problem with this assertion is that we already have smart homes, smart buildings, and sensor/controller applications without 5G or even 4G connectivity.  The majority of places where a consumer or business would want to install a sensor or controller are within WiFi range, within range of other established short-range IoT protocols like ZigBee/Z-Wave, or even suitable for direct wiring.

Where 5G IoT makes sense is in two specific situations.  First, some prospective IoT locations are not within range of short-range RF or wireless.  Second, some prospective IoT applications involve sensors/controllers that are themselves moving.  Let’s look at optimizing each of these.

“Thin IoT” is credible in public missions, meaning where sensors and controllers are to be installed not on a company facility but in areas like public right-of-way.  It may be credible in company installations where the geography covered is large (railroad yards are an example, and airport runways another).  The problem with these applications is the true vulnerability of IoT business cases for 5G; machine communications isn’t like human communications.

We call or text for a lot of reasons, few of which can be called critical, but all of which are at least somewhat important to us personally.  Machines don’t personalize; they support specific applications.  Give a teen a phone and let nature take its course, and you’ll have cellular revenue.  Give a sensor a 5G connection, and it has nobody to talk to…unless we define an application framework in which it’s used.  We’ve got suggested 5G applications, but not real ones, and so my modeling says that a given city/area is about as likely to simply deploy 5G sensors/controllers in the hope that something will arise to justify them as it would be to deploy speculative public WiFi.  Some do it; most don’t.

Company applications are more promising, largely because in most cases, the companies know what the application would be and can frame a value proposition.  The problem today is that even in industries like transportation, where as many as 80% of companies have potential IoT applications, and utilities, where 45% do, the ability to realize those applications is limited.  Rough data from two dozen transportation companies suggests that current application development costs and delays would disqualify over two-thirds of all projects.  Utilities have a similar rate of disqualification.  What these companies want is a good “IoT middleware” offering.  Public cloud providers like Amazon come close, but many of the applications aren’t considered suitable for the public cloud.

My modeling suggests that 5G IoT in fixed locations needs an application framework to facilitate adoption.

Mobile 5G sensors are a whole different story.  Transportation (again) is a critical opportunity, and here we would find the only space where consumer IoT presents an opportunity—the (gasp) connected car.  This space is the perfect counterpoint of hype and reality.

Connected cars are a good way to share a cellular connection among multiple passengers, something valuable if we continue to shift toward WiFi-centricity in controlling cellular data costs and avoiding throttling.  However, this would create a net loss to operators if optimizing plan pricing is the only goal.  Other connected-car benefits can be derived from having vehicle telemetry integrated with cellular services, as current models of vehicles are already showing.  Hype has tended to push the notion way beyond these currently interesting applications, toward autonomous vehicle control, for example.  Even if connected-car technology were necessary for autonomy, the high cost of vehicles makes it difficult for it to pull through 5G.  Adoption rates are too slow, and again we’re depending on an application framework to make the concept viable.

A kind of reverse-connected-car-think may be essential here.  Most auto GPSs will pair with smartphones, and so the first step in connected car may be to use the smartphone as a bridge to the car, as a means of offering some connected-car value without requiring the purchase of a new vehicle equipped with connected-car technology.

In the transportation industry there’s a lot of value in 5G IoT for telemetry on big, expensive, moving stuff like airplanes, trains, ships, and so forth.  Obviously, all of this requires wireless, but it also requires a different 5G service model than we’re used to.  IoT sensors likely don’t need to be able to make calls, and texting would be only an optional way of exchanging data versus simple IP Internet access.  Operators need to be thinking about a new kind of service plan, with suitable service types and pricing.

They also need coverage, which likely means that it will be difficult to launch a mobile IoT market without having the service capable of falling back to 4G connectivity.  That raises the question of why, if 4G could work for the applications, we would expect them to create 5G opportunity.  My model says that only connected car (done right, without the hype) has the potential to create mobile 5G applications.

The big challenge with mobile 5G will be the transition to it.  All mobile applications of 5G either need ubiquitous 5G to develop (meaning “Field of Dreams” spending by operators) or fallback to 4G capability.  That means that we need an application framework that could be sold as the benefit, and that could then be mapped to 5G as the number of users increases.  Without an application framework, we have no particular reason to think that 5G will be driven by any form of mobile IoT.  That means that what proponents should be doing now is driving mobile-cellular application models on 4G, with the goal of increasing usage quickly and thus contributing to the potential 5G demand pool.

If we return to the Fierce Telecom piece I cited above, we see that the issues they’re raising are really not about 5G drivers at all, but issues related to the supply-side Field of Dreams deployment model.  Spectrum is expensive.  Phones and services are locked in co-dependency.  Network equipment is expensive.  All that is true, but all could be solved for a moment if we had some convincing killer 5G apps, which Fierce Telecom also says.  You have to wonder when the industry will recognize this.

For the record, all this is just as true, or more so, for edge computing.  Edge computing is a cloud-hosting model variant, and anything that’s centered on a hosting model necessarily depends on having something to host.  It does no good to say that 5G and the edge transform applications, because each is counting on the other to be the driver, which is a circular dependency.  The truth is that applications will have to transform 5G and the edge.  Vendors should ponder that.

OK, now let’s consider business services.  My model says that productivity enhancements based on 5G could generate over $700 billion in incremental IT spending, generally concentrated in industries that have a large percentage of “empowerable” workers, meaning workers with a high unit value of labor.  Workers who are more mobile are obviously more susceptible to 5G-related empowerment strategies, and of course both unit value of labor and mobility vary by job category and industry.

I’ve run detailed analysis of both job categories and industries, and the potential by vertical market ranges from a low of about 0.62 for agriculture and related (NAICS code 11) to a high of 18.7 for health care and related (NAICS code 62).  Those figures represent the percentage of workers in the workforce that would obtain benefit from productivity enhancement.  The level of enhancement ranges from 25% to 88% depending on the number of different “jobspace targets”, meaning the job/application combinations the worker is involved with.  Government (all levels), conglomerate management, professional/technical, and (believe it or not) accommodation and food services round out the spaces where the greatest potential would lie.  I’m glossing over a lot of details here, but I’ll pick them up in another blog later.

Business services are, then, a highly credible opportunity for 5G, but when I survey enterprises (even in the key opportunity sectors), they’re not aware of any efforts to improve empowerment, despite the fact that there have been some vertical-market initiatives in various places.  There just isn’t enough being said about the needs and opportunities in the space.  In particular, users don’t know of any sources from which they can obtain suitable technology.

Business services could drive 5G deployment to its full potential, but only if there were clear application frameworks to deliver the productivity benefits.  Just having 5G doesn’t move the ball; it’s what 5G might enable that does the job.  It’s also important to note that most of the 5G business service opportunities relate to worker empowerment through smartphones, coupled with input from pervasive IoT.  A solution to the business service challenges would thus enable the IoT driver.

Time to summarize.  First, the natural course of mobile evolution will result in a gradual and limited deployment of 5G, mostly to the RAN and then to backhaul.  The modeling says that there is no way to build a justification for aggressive 5G deployment from the field-of-dreams side, only a number of opportunities that could justify earlier deployment if we had a strong application model—middleware, tools, and some reference applications—to support them.  Approaching the problem from the perspective of use cases (the operator mindset) does no good, because use-case development doesn’t create the application frameworks needed to actually empower anyone.  What we need is IoT application and productivity middleware, which, if we got smart and focused on worker productivity, could end up being the same thing.

Open-source?  Maybe, but ONAP shows that even an open-source concept can be contaminated by standards-think.  It may be up to a vendor to take the lead here, perhaps a smaller one with a willingness to stand up in the open-source world and start something insightful.

Perhaps a larger, and more desperate, vendor works too.  Nokia and Ericsson are the deer caught in the 5G headlights here.  As Protocol said about the former, “…5G is so new and so enormous that the company has a chance.”  The problem is that while 5G NR is enormous, the rest of 5G is still groping for a business case, and in trying times such as these, with the delicate ballet of smartphone-vs-5G-radio in driving growth, it may grope for some time.

Companies like Nokia and Ericsson need to understand a basic truth that seems to be the outcome of my modeling.  The pace of 5G deployment, and the depth of impact on services and infrastructure, will not depend on network equipment or technology.  It will depend on what we can do with them.  If the future of 5G lies outside the smartphone, then we need to think about the justification all the more, because there are no natural exploits of non-smartphone 5G.  Sensors and controllers don’t have ears and mouths.

Applications will drive the future of networking, period.  Somehow, the network equipment vendors in general, and the 5G-dependent players in particular, are going to have to start promoting the applications, not the networks.  If they don’t, then 5G pressure will be in the direction of commoditization, with open-model elements.  There’s still time to do this right, but current industry and market indicators say the time is running out.

A Deep Dive into Intent Models, and What they Need to Include

It may be time to look deeper into intent models.  I’ve used the term in a number of blogs, and while the sum of those blogs probably gives an acceptable picture of what the approach could bring to both computing and networking, there seems to be a lot of pressure to rethink both IT and networking these days.  That pressure is, perhaps, best approached (if not addressed) with a more specific look at intent modeling.  Can it really help transform, help integrate the transformation of, networks?

At a high level, an intent model is an abstraction that represents a set of functions.  In one sense, it’s the classic “black box”, whose properties can be determined by the relationship between its inputs and outputs.  In a deeper sense, though, an intent model shouldn’t require inspection to determine properties, not even inspection of what’s visible at the edge.  An intent model should expose specific things that represent useful features/properties.  The implementation is hidden, but the interfaces are not only exposed but published in detail.  It’s this property that makes intent models useful in building things made up of multiple pieces.  Think of them as LEGOs that can be used to build stuff.

I’ve played with the intent-model concept for well over a decade, and the view I’ve developed through that exercise is that while the specific interfaces exposed by an intent-modeled element will vary depending on the nature of the features/properties the model represents, there are classes of interfaces that are important, so let’s look at these first.

Intent models seem to expose four classes of interfaces.  It’s important to note that these are “logical” interfaces, and that when a modeled element has actual physical interfaces like Ethernet ports, they may support multiple logical interfaces.

The first class is the data plane interfaces, which represent the interfaces through which information is sent to or received from the model.  There are a variety of these interfaces, but they all have a direct relationship to the primary function of the element that’s being modeled.  Router intent models, for example, expose port/trunk interfaces that are data plane interfaces, and software exposes APIs.

The second class of interface is the management and parameter class.  These interfaces provide the means of parametric control of the modeled element.  Things like an HTML (web) interface to a management element, a command line interface (CLI), or an SNMP port, are examples of this.  In software, management APIs may also be provided.

The third interface class is the control-plane cooperative interface.  This interface is used to support collaboration among the intent-modeled elements themselves.  Adaptive discovery, event exchange, and similar things would be supported via this interface.  Today, in IP networks, these interfaces normally share a physical connection with others, particularly the data plane interfaces.

The final class is probably the least intuitive but perhaps the most important.  It’s the behavior model interface.  It’s becoming common practice in the IT world to have a descriptive model of a complex system, one that represents the goal-state behavior.  The system then works to attain that state.  This interface would be used to communicate the goal-state to a modeled element, and also to deliver the current state.

This interface offers us more than just the (significant) advantage of having an explicit goal-state reference.  It also offers a way of integrating simulation with operations, and that includes the results of AI analysis of conditions.  By allowing simulation/AI to establish or modify the goal-state of an intent-modeled element, we can inject guidance into the recovery from abnormal conditions, bound the range of remedies that might be applied, and more.
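As an illustration only, the four logical interface classes might be captured in an abstract contract like the one below.  All of the names here are mine, not from any standard or product, and the stub implementation exists purely to show the shape of the contract:

```python
from abc import ABC, abstractmethod

class IntentModeledElement(ABC):
    """Black box: implementation hidden, logical interfaces published."""

    # Class 1: data plane
    @abstractmethod
    def send(self, port: str, payload: bytes) -> None: ...
    @abstractmethod
    def receive(self, port: str) -> bytes: ...

    # Class 2: management and parameters
    @abstractmethod
    def set_parameter(self, name: str, value: str) -> None: ...
    @abstractmethod
    def get_parameter(self, name: str) -> str: ...

    # Class 3: control-plane cooperation
    @abstractmethod
    def exchange_event(self, peer: str, event: dict) -> None: ...

    # Class 4: behavior model (goal-state in, current state out)
    @abstractmethod
    def set_goal_state(self, goal: dict) -> None: ...
    @abstractmethod
    def get_current_state(self) -> dict: ...


class StubNode(IntentModeledElement):
    """Trivial implementation, just to show the shape of the contract."""
    def __init__(self):
        self._params, self._goal, self._ports = {}, {}, {}
    def send(self, port, payload):
        self._ports[port] = payload
    def receive(self, port):
        return self._ports.get(port, b"")
    def set_parameter(self, name, value):
        self._params[name] = value
    def get_parameter(self, name):
        return self._params[name]
    def exchange_event(self, peer, event):
        pass  # a real element would coordinate with its peers here
    def set_goal_state(self, goal):
        self._goal = dict(goal)
    def get_current_state(self):
        return dict(self._goal)  # a stub is always "at" its goal
```

Any implementation that honors the contract can be composed with any other, which is the black-box property the text describes.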

Interfaces don’t quite tell the whole story, of course.  An important feature of an intent-modeled object that’s related to interfaces is the property of identity and behavior.  My assumption has been that this property is what’s stored in a catalog that helps bind modeled objects into complex relationships, but I also required, in my own implementations of the concept, that each object deliver this via the management and parameter interface.  In the first of the two ExperiaSphere projects I did, this was done by sending the object a command (“speak”), to which it responded with the identity and behavior data.

Identity and behavior, IMHO, describes the taxonomy of the modeled element.  For example, we might have a classification “Network-Node”, under which we might have “Optical-Device” then “ROADM”, or “Forwarding-Device”, under which we might have “Router”.  The presumption I made was that a fully qualified element name would denote a single model, all implementations of which (being contained inside the black box) would be equivalent.  I also assumed that lower-level elements in the taxonomy would “extend” (in Java terms) the higher-level ones, meaning that a “Router” would present all of the interfaces of a “Network-Node” but would extend and perhaps redefine them a bit.
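A sketch of that taxonomy, using the example classifications from the text; the `speak` response format here is my own guess at the idea, not the actual ExperiaSphere implementation:

```python
class NetworkNode:
    classification = "Network-Node"

    def interfaces(self):
        # The four logical interface classes every element presents.
        return ["data-plane", "management", "control-plane", "behavior-model"]

    def speak(self):
        """Respond with identity-and-behavior data, as the "speak"
        command did in the implementation described in the text."""
        return {"class": self.classification, "interfaces": self.interfaces()}

class OpticalDevice(NetworkNode):
    classification = "Network-Node/Optical-Device"

class ROADM(OpticalDevice):
    classification = "Network-Node/Optical-Device/ROADM"

class ForwardingDevice(NetworkNode):
    classification = "Network-Node/Forwarding-Device"

class Router(ForwardingDevice):
    classification = "Network-Node/Forwarding-Device/Router"

    def interfaces(self):
        # A Router presents everything a Network-Node does, extended.
        return super().interfaces() + ["port/trunk"]
```

Note how `Router` extends, rather than replaces, the interfaces of `Network-Node`, mirroring the Java “extends” relationship the text invokes.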

Identity and behavior also introduce the fact that the classification hierarchy I’ve noted here isn’t the only hierarchy.  A good intent-modeled service or application should be conceptualized as a series of functional decompositions.  “Service” might decompose to “Access” and “Core”.  These are all functional composites, not devices or components, so an intent model can contain other intent models.  When that’s the case, the containing model has to present the interfaces described, and since all intent models are black boxes, the fact that this particular one is really a composite is kind of irrelevant at the composition level.  However, it’s important at the management level, for reasons we’ll get to.

For an intent-modeled element to be truly and optimally composable, it’s critical that every implementation of a singular modeled object type have identical behavior at all interfaces.  Thus, the interior of each black box has to satisfy all the interfaces in the same way, so that what’s inside can never be inferred.

This, to me, is the most important property of an intent-model-based composition of features for an application or service.  Without the presumption of full equivalence, anyone building a composition has to resolve interface differences.  With the full-equivalence presumption, the creator of a model element has to fulfill the interfaces of the class of element their gadget (or software) represents, which assures it can be composed into something.  Again, referencing Java, a procedure that “implements” a Java “interface” must fulfill the interface’s defined properties exactly.  Something that “extends” is expected to add something but “implement” the base to which its additions are made in the proper way.

The behavior model interface is the least intuitive of all, but it might be the most critical.  An intent model may (and perhaps will, most of the time) define a system with self-healing properties.  In other words, it will recognize a proper operating state and seek to achieve it.  Rather than have that state a constant within the implementation, it should be something that can be provided (in standard form for the specific fully-qualified model class).  Further, the current state should be available on request, so that if the object reports a fault (which is defined as a deviation from the goal-state) that cannot be corrected internally, the associated (higher-level) external processes can be expected to decode the issue for remediation.
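The fault-as-deviation idea reduces to a simple comparison of goal state against current state.  A minimal sketch, with hypothetical state keys:

```python
def detect_faults(goal_state, current_state):
    """A fault is any deviation of the current state from the goal
    state; the result maps each deviating key to (goal, actual)."""
    return {key: (goal, current_state.get(key))
            for key, goal in goal_state.items()
            if current_state.get(key) != goal}
```

An empty result means the element is at its goal state; a non-empty one is exactly the report a higher-level process would need to decode for remediation.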

One implicit point in all of this is that an intent-modeled multi-element application or service has to be seen in two dimensions.  One is the data or service dimension, which reflects the functionality of the application or service overall.  Pushing data is the data/service dimension of an IP network.  The second dimension is the management/control dimension, which is responsible for lifecycle management.  A critical lesson I’ve learned is that this second dimension has to be event-driven.

Event-driven lifecycle management is, IMHO, essential for services or applications based on discrete components.  That’s true whether they’re based on intent-modeled elements or not.  A composite system is a bunch of churning pieces, each of which has its own operating state and its own way of impacting the state of the service or application overall.  This is asynchronicity in action, people, and if you want to coordinate that kind of stuff you have to do it with state/event processing, which of course implies three things—states, events, and a model.

The notion of state is based on the fact that most complex systems have a fairly small number of possible conditions of behavior.  Think “working” and “not-working” as a start, and perhaps add in things like “preparing-to-work” and “repairing”, and you’ll get the picture.  There may be a dozen or a hundred reasons why something is “not-working”, but all result in…well…the thing not working.  States define the meaningful set of conditions on which interpretation of future events will depend.

Events are signals of change, of something having happened.  The “something” may come from an external source above the current element or system, or below/within it.  The way the event is interpreted is based on the state.  For example, an event signaling a request to activate, received in the “preparing-to-work” state, is an indicator that actions should be taken to become operational (and enter the “working” state when that’s complete).  The same event, while in the “working” state, is a logic error somewhere.

The model is a logical structure that indicates the parent/child relationships of the hierarchy of elements in an application or service, and provides the state/event table that, for each element, defines how events and states will be handled.  In a modern cloud-think process, the model is also where state is maintained and where shared operating variables are stored.  An application or service, in lifecycle terms, is the model.
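The three pieces fit together in something like the following sketch, with my own illustrative names throughout.  A small set of states, a small set of events, and a table that maps the two to handlers; note how the same `ACTIVATE` event means "become operational" in one state and "logic error" in another, exactly as described above.

```python
from enum import Enum, auto

class State(Enum):
    PREPARING = auto()
    WORKING = auto()
    REPAIRING = auto()

class Event(Enum):
    ACTIVATE = auto()
    FAULT = auto()
    REPAIRED = auto()

def on_activate(svc):
    svc["state"] = State.WORKING
    return "activated"

def on_fault(svc):
    svc["state"] = State.REPAIRING
    return "repairing"

def on_repaired(svc):
    svc["state"] = State.WORKING
    return "restored"

def logic_error(svc):
    return "logic-error"

# The state/event table: how each event is interpreted depends on state.
TABLE = {
    (State.PREPARING, Event.ACTIVATE): on_activate,
    (State.WORKING, Event.ACTIVATE): logic_error,   # same event, different meaning
    (State.WORKING, Event.FAULT): on_fault,
    (State.REPAIRING, Event.REPAIRED): on_repaired,
}

def dispatch(svc, event):
    return TABLE.get((svc["state"], event), logic_error)(svc)

svc = {"state": State.PREPARING}
first = dispatch(svc, Event.ACTIVATE)    # enters WORKING
second = dispatch(svc, Event.ACTIVATE)   # logic error while WORKING
```

In a full model, each element in the parent/child hierarchy would carry its own table, and the `svc` dictionary would be where the model maintains state and shared operating variables.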

I think that intent-modeled elements are the critical, still-missing, technical piece of true cloud-native, next-gen applications and services.  I also think that we’re converging on some model because of the rapid progress in containers (Kubernetes) and the cloud (particularly hybrid cloud).  The obvious questions are 1) will the emerging model be intent-based, and 2) is there any chance that networking will somehow play a role in development or adoption?  I think the answers may be related, and to explain why I need to invoke my “work fabric” approach, covered HERE.

It’s a convenient abstraction to view inter-component exchanges as workflows, and the network and software framework in which they’re exchanged as work fabrics.  The properties of a work fabric are set by the software interfaces and network behaviors.  The requirements are set by the application or service being supported.  Networking is an extreme example of a high-availability, low-latency, mass-market requirement set.

The workflow/work-fabric model likely to emerge from the cloud computing universe is suitable for the control-plane piece of networking, but likely not ideal.  It’s probably not suitable for the data plane.  The networking industry, and particularly the operators, have focused on transformation by virtualizing the devices rather than the networks, which has a very limited impact on overall costs, and which tends to make operator initiatives diverge from the cloud.  The often-heard desire to make things like NFV “cloud-native” ignores the fact that cloud-native is about dissecting monoliths into functional pieces, while NFV is about hosting monoliths differently.  Even now, startup focus in the networking space is on new device models, as THIS article shows.

Still, a “disaggregated” device model might be a useful concept.  In a general sense, a network device is a bit like an application that has a very specific hardware dependency.  Chip-enabled forwarding is certainly a specific hardware feature, and this kind of specialization is already being considered in the cloud world.  Similarly, control/data-plane separation, a part of SDN and some emerging segment routing schemes, is a consideration on the networking side.  I’m not sure that either of these developments can be linked to a specific awareness of the long-term problem/opportunity, but both are likely to move things in the right direction.

The downside to all this “creeping commitment” stuff is the “creeping” part, and the inherent risk of the “diminishing marginal reward” it creates.  I offer the example of full-scale operations automation.  When the concept came along about seven years ago, lifecycle automation would (if fully realized) have cut process opex in the network and related areas by 20% to 25%, saving almost as much as some operators’ capital budgets.  That potential was never realized, and still hasn’t been, and so operators have adopted less efficient tactical and limited opex reduction strategies.  They’ve cut about 15% from opex, and the remaining 5-10% really doesn’t justify the effort needed to fully automate lifecycle management.

We have sort-of-intent models today, in the descriptive models used in IT deployments.  We’ve had something pretty close to the ideal approach, in things like TOSCA and the TMF NGOSS Contract, for a decade or more.  I think it’s clear that we’re going to get more intent-based over time, but it’s also pretty clear that we’d gain more if we got there faster.

Is VMware on the Wrong Track with the Best Tools?

VMware disappointed the Street in its earnings call last Thursday.  That’s not all that surprising; they’re hardly the first.  The company actually turned in decent revenue numbers, but earnings and guidance fell short of expectations.  That’s not new in today’s tech market either.  So why talk about them?  Because if the future is the cloud, cloud-native, containers, and transformation of IT and the network, VMware should have knocked the cover off the ball.  We need to try to figure out why, to see whether there’s a VMware problem or a problem with our vision of the future.

On their earnings call, VMware noted two issues that impacted their performance.  One was the fact that they had an issue with the efficiency of closing deals at the end of the quarter, a problem they didn’t have last year.  The other was that there was a larger-than-expected portion of deals that focused on SaaS and subscription revenue versus (one-time) license sales.  That pays off more in the long term (how many companies are moving to a subscription model?) but in the near term it impacts current-quarter realization.

Let’s address that second problem first.  As I noted, nearly every software giant in the industry has been pushing for a shift to annual subscriptions for software rather than one-time perpetual licenses.  VMware, obviously, has been one.  The question, then, is why they were surprised when their strategy worked.

I think a part of the reason, perhaps the largest part, is that VMware does a lot through channel sales rather than direct.  I’ve been a reseller myself (back in the 1980s), and I understand the space fairly well from the inside, and from consulting with vendors who focus on reseller channels for sales.  One of the challenges with channel sales is that the channel does what the channel finds most effective for its own revenues.  When you make a change in your pricing model, the channel finds the best way to bend it to their favor.  Subscription sales of software, SaaS, and stuff like that have a lower price on-ramp for the customer.  Might the channel have used that to reduce the selling cycle, thus shifting the sales faster than expected?  They didn’t go into that.

Deal-closing efficiency can also be related to the channel, or more specifically to the relationship between sales and marketing.  If there’s a business-related (as opposed to technology-related) topic that’s dominated my consulting practice over the years, it’s optimizing that relationship.

Marketing isn’t about making a sale by email, it’s about providing leads through web, email, or other avenues of information dissemination.  I used to draw a diagram on a whiteboard during my sales/marketing sessions: “Editorial mentions sell website visits, website visits sell sales calls, sales calls sell the product or service.”  The information-equivalent of this is that strategic vision precedes tactical fulfillment.  You get your stuff written up in the tech media, which means you present some exciting strategic message.  The titillation that generates causes many to visit your website, where they find the hooks between that excitement and an avenue for them, the buyer, to connect with it.  That causes them to ask for more information, and you’re now in sales-tactical mode.

This progression is particularly critical where you’re dependent on channel sales.  What a channel wants is leads.  To get them, a vendor has to have the first two steps in the progression—editorial notice and website visits—covered.  To have that is to have a great strategic vision and articulation.  Sadly, VMware doesn’t have enough of that.

I offer the following comment, made by the CEO on the call, in response to a question on the overall demand environment: “The bigger picture, we would say, is unchanged.”  It’s hard for me to imagine a worse perspective.  Even if it were true, you don’t ever want to imply that the market isn’t changing, unless you own all of it.  But it’s worse when, as in this case, it isn’t true at all.

We are in the early stages of the most transformational set of changes in application architecture and information technology that the world has ever seen.  I’ve been in this business a very long time, and I’ve never seen the like of it.  Containers, Kubernetes, the cloud, and cloud-native are combining to almost remake the rules of application design, development, and hosting.  The potential application of the new model could raise IT spending by 40% to 60%, based on past cyclical trends.  Does that sound “unchanged” to you?

This is all the more ironic when you consider that VMware is in perhaps the best position in all the industry to exploit the new model.  VMware has been pushing cloud tools, network tools, orchestration and federation tools, and even service provider infrastructure elements; it is a broad-based technology supplier, reliant on exploiting broad trends in a product-line-symbiotic way.  It’s like they built a boat without thinking about where the water was.

The CEO, a moment after the comment I just cited, noted that “we expect to see tech spend well exceed GDP”, and I assume that means growth of tech spending exceeds growth of GDP.  Well, that has in fact happened three times in the history of IT, in three distinct cyclical moves driven by a new productivity paradigm having been realized.  The last cycle ended in 2000, and it hasn’t picked up again.

I think that VMware is right in thinking that it’s critical for the industry (and for VMware) to see that cycle reignited, but if that’s true, why downplay the revolution?  There are only three possible reasons.

The first is that VMware simply doesn’t see the strategic shift at all, which is what that (awful) quote seems to suggest.  The glacier is moving downhill and they’re dodging the rocks it dislodges and the water that flows out, so they think they’re facing a flood and an avalanche.  If that’s the case, then the steps they’ve taken are simply fortuitously aligned with the real glacial impact of the application and IT shift.  Serendipity can work; somebody always wins the lottery.  It’s just a bad strategy for dealing with real-world, right-now, issues and opportunities.

A lack of recognition of the strategic shift would explain why VMware let their most important strategic initiative, Tanzu, drift out without the place-holding fanfare it needs and deserves.  It’s very possible that the strategic shift I’ve described in IT is already visible to customers, and that their reaction to it is to hold off on a major commitment until they understand how VMware would deal with the shift.  That could explain the slow closes in Q4, and also the shift from license to subscription at a faster-than-expected pace.

But then there’s this quote from the call: “And we really believe this gives us a tremendous position to help customers with their modern applications in Kubernetes, one of the most important shifts in enterprise architecture since the cloud.”  That sure sounds like strategic awareness to me.

The second possibility is that VMware sees the shift but doesn’t know how to address it.  They’re focusing on tactical presentation of strategic assets because they don’t have the strategic presentation yet.  Tanzu, under this condition set, would be a tactical bundling of assets in response to changing sales focus.

You could read their recent M&A, including Pivotal and Nyansa, as proof of this possible explanation for VMware’s slippage.  Pivotal, in particular, is a problematic acquisition from a strategic perspective.  They were spun out of VMware once, and it’s always going to raise questions if you spin somebody out and they buy them back.  Pivotal is the commercial conduit for, and major contributor to, the Cloud Foundry Foundation, the open-source community that developed Cloud Foundry.  VMware had linked itself to Pivotal in past releases (Pivotal Container Services), and while you could make a convincing case for thinking about Cloud Foundry as a cloud-native framework, it would be more convincing if you’d already articulated a cloud-native vision, that broad strategic sweep I’ve mentioned above.

Then there’s the last possibility, which is that VMware is expecting to be bought, likely by Dell.  A major reason why that might be the case is the IBM/Red Hat deal.

IBM has a strong, loyal, and rich installed base for its mainframes, and the best account control in the industry.  The problem is that there will likely never be any additions to that base, and natural IT evolution is eroding it.  Red Hat, the VMware competitor in the new-architecture-for-the-new-IT space, brought IBM a broader base and a way of strategically advancing their current base.  Win-Win.  IBM isn’t a direct competitor with Dell in the sense of server for server, but the big IBM accounts are surely Dell targets, and HPE is a competitor who could at any time launch their own initiatives aimed at that IT revolution I’m predicting.

Dell acquired (or merged with) EMC in 2015-2016, and got VMware with it, and Dell remains the major shareholder.  There have been persistent rumors that Dell might acquire the remainder of VMware, for the simple reason that servers are just warehouses in which you store software under the new IT model.  All the differentiation and innovation in the world of IT will come from software, and now in particular it’s the new application model that will drive the bus.  Would Dell want another player to grab control?  Even if HPE couldn’t buy VMware (since Dell holds so much of it), the future of IT is (technically speaking) the Kubernetes ecosystem, and that’s almost entirely open-sourced.  HPE could get smart here, perhaps, and cobble their own Tanzu-like thing together?

VMware has multiple possible reasons for its seeming strategic paralysis, but no justifications.  You cannot be in a market driven by a major strategic change, then drag your feet in dealing with or exploiting it.  They’ve got the best technology base in the industry to do that, but it’s starting to look like their failure to aggressively position their own assets, for whatever reason, is enough to erode the value of that asset.  IBM/Red Hat and HPE (to a lesser degree) now threaten them, and VMware has only a short window in which to respond—as short perhaps as the end of this year.

Then there’s the service provider space.  VMware should have a really good position there, and their Project Maestro has recently been converted into a commercial orchestration offering.  They do have the proper components for a solid solution to carrier cloud, but again their positioning is opaque to say the least.  They need a broad vision of what carrier cloud is, and does, before they can present a solid vision of how to implement it.  Again, there’s no effective response to the opportunity.

Their biggest barrier to an effective response is, generally speaking, themselves.  Second is Pivotal, whose acquisition might be a reflection of that “self-ness”.  Unless they have a strong vision for somehow leveraging Pivotal without colliding with Kubernetes (which Pivotal only recently accepted) or Knative (which I think could be the basis for a broadly valuable Cloud Foundry competitor), they’re wasting time and management focus with it.  If they do believe there’s redemption for Pivotal, then they need to get their story on that out there.  The good news is that b******t has no inertia; you can have it as fast as you can talk.  The bad news is that eventually it buries you if you don’t build a platform that can rise above it.