Are Vendors Facing a Subscription Pricing Crisis?

The relationship between buyers and sellers isn’t usually hostile or adversarial, but it’s naturally wary, particularly on the buyer side. Sellers make money by making buyers spend money, and while that spending can often be offset by benefits, there are times when it isn’t. One such time is when changes in pricing policies set by sellers increase the cost of the “budget” side of network procurement. That part of network spending focuses on sustaining what’s already been justified, and because it doesn’t draw on new benefits, there’s constant downward price pressure on it. Pricing changes that push costs the other way aren’t welcome.

One of those pricing policy changes, the one that buyers express the most concerns over, is the shift to subscription pricing. Subscription pricing has also taken hold in the software space, and the same forces that encouraged vendors to adopt it there influence networking. The same resentments are building, too.

Whether you call it “subscription” or “as-a-service”, the concept is the same. A buyer of a product, instead of purchasing it for a given amount, acquires what’s essentially a lease. Remember when Microsoft Office was something you purchased outright? The problem for Microsoft was that the purchase model put Microsoft under constant pressure to create meaningful feature enhancements to drive buyers to purchase an “upgrade” version. When a buyer didn’t see an improvement, they hunkered down on their old version and produced no incremental revenue for Microsoft. So we got Microsoft 365.

For over a decade, we’ve had a kind of hybrid subscription model in the network equipment space. A buyer purchases a router, for example, and a maintenance contract that provides updated software features and fixes. More recently we’ve seen the hardware and software “unbundled” and the latter sold on the same sort of subscription deal that we had with Microsoft Office. Network vendors, like software vendors overall, have touted this shift in pricing policy to Wall Street because it offers them more revenue. Which, of course, is what gives buyers angst, because it’s a factor that’s raising the cost of sustaining infrastructure.

There are arguments in favor of the subscription model, of course, and even the more radical “network-as-a-service” approach. Many companies who have to watch cash flow prefer to have their costs incurred as expenses rather than capitalized, because expenses are deducted in the period the money is spent, while capital costs have to be amortized, so early cash outflows aren’t fully offset when they do their books and file their taxes. The subscription model also assures that network software doesn’t get outdated, and that can be a big factor in keeping the network secure. Up until 2020, three-quarters of network buyers thought the subscription/as-a-service trend was positive, and over half of buyers feel that way today. But since 2020, I’ve been getting more and more reports from buyers that the CFO doesn’t like subscriptions any longer, and that they want something done.
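The timing difference between expensing and capitalizing is easy to see with a worked example. The sketch below uses purely hypothetical numbers (a $300,000 outright purchase depreciated straight-line over five years versus a $60,000-per-year subscription); real depreciation schedules and tax treatment vary, but the cash-flow asymmetry it illustrates is the one CFOs care about.

```python
# Hypothetical comparison: outright purchase (capitalized, depreciated
# straight-line) vs. an equivalent subscription (expensed as paid).
# Total spend is identical; the timing of cash out vs. deduction is not.

purchase_price = 300_000
useful_life_years = 5
subscription_per_year = 60_000

for year in range(1, useful_life_years + 1):
    capex_cash = purchase_price if year == 1 else 0
    capex_deduction = purchase_price / useful_life_years  # annual depreciation
    sub_cash = sub_deduction = subscription_per_year      # deducted when paid
    print(f"Year {year}: purchase cash={capex_cash:>7,}, "
          f"deduction={capex_deduction:>7,.0f} | "
          f"subscription cash={sub_cash:,}, deduction={sub_deduction:,}")
```

In year one the purchaser has spent $300,000 of cash but can deduct only $60,000, while the subscriber’s cash outflow and deduction match; that mismatch is why cash-flow-sensitive companies often prefer the subscription form even when total cost is the same.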

Subscription pricing is the second-most-cited reason to look at “open-model” networking, and the number one reason is overall cost. Even two years ago, almost no enterprise buyer would have said that subscription pricing was a reason to look at open networks. In the service provider space, though, the interest in open-model networking was fueled in part by subscription pricing changes as far back as 2015. Two years ago, enterprises said they would look at alternatives to a vendor who raised subscription prices significantly (by 25% or more), and today a third of enterprise buyers are looking at open networks as a way of reducing costs, even if no changes in subscription pricing are in the offing.

Despite the views they expressed on subscription pricing, only about 8% of enterprises say they changed vendors or adopted open-model networks to get away from them. Just under half said that they’d pressured their vendors to give them a break on subscription pricing, and in most cases the pressure paid off with a cost reduction somewhere, but not always in the subscription costs themselves. In fact, enterprises said that vendors seemed more willing to discount equipment, perhaps fearing that they’d get caught in a never-ending erosion of their subscription revenues.

Enterprises also say that they “understand” the shift to subscription pricing, with the majority linking it to ongoing changes to software to enhance security or to fix problems. Less than a quarter of enterprises said that they believed that maintenance of embedded network software should be free, and the number who believed that software not installed on devices should be maintained for free indefinitely was even smaller.

Given this, how much of the push-back against subscription pricing of network software is really aimed at the practice, versus a simple reaction to the increased pricing pressure buyers feel? Well over 90% of enterprises say that over the last three years, they’ve felt more pressure to reduce spending on sustaining their networks. Most admit that this pressure has led them to take a harder look at subscription pricing, and this is particularly true of enterprises who are stretching out the useful life period of their equipment. Once gear is fully depreciated, it contributes no further costs except ongoing maintenance and software subscriptions. It makes sense to look at those next, then.

Another interesting fact is that just over 80% of the enterprises I’ve heard from say that they use open-source software in their data center or the cloud, and that they pay annual subscription charges for that software. Candidly, I’m not sure I believe all those who say they don’t, but I was unable to find any network buyer who said they’d started compiling and maintaining their own open-source tools to avoid subscription payments. That suggests that either network software subscription practices are resisted in part because they’re relatively recent, or that ongoing feature progress in data center software is sufficient to justify the payments.

And that may be the critical point. Software subscriptions are, for the vendor, a way of ensuring ongoing revenue when buyers might otherwise resist upgrades. If buyers are resisting upgrades, they’re likely doing that because they don’t see much value in the new features the upgrade offers. In a way, then, imposing subscription costs could be a vendor admission that they fear they’re not providing enough incremental value with new software versions.

With desktop software, there’s always an influx of new buyers to create revenue. With network software, the number of “new” buyers is limited because companies that need large-scale network purchases don’t spring up out of nowhere. Desktop software vendors can play to the new buyer in a way network vendors can’t, and network vendors may need to rethink their reliance on subscriptions unless they want to see the current nascent movement toward open-model networking take hold, aided by resentment over subscription practices.

How Do We Get To, and Optimize For, AIOps?

What’s the right model for AI-based operations, or AIOps? There are obviously a lot of different “operations” AI could be used in, so does the optimum model depend on the specific mission? Or, just maybe, is there at least a set of guidelines that would cut across all the missions? That would enable us not only to frame out a general toolkit for AIOps, but also to at least hope for one that could cross over operations areas. Could we have One AI to Rule Them All? Maybe.

One thing I believe to be absolutely true is that the “Rule Them All” piece of that paraphrase is critical with AIOps. We are not, at least not in the near future, going to have a single AI element that can fully absorb the operation of a complex network and take it over in all kinds of conditions. AIOps is a hierarchy, by necessity. If you think of how humans work, that shouldn’t be surprising. When you’re walking along a corridor talking with a team member, you’re not plotting out each step and how to maintain your balance as you move your arms and change your center of gravity. You have an autonomous system that does that, one you can override but that’s happy to do what it’s supposed to do, on autopilot. So it must be with AIOps.

To me, then, the whole AIOps thing is about intent models or black boxes. A cohesive collection of devices and software in some mix can be visualized as a single black box. Its properties are those it exposes, and its contents are up to it to manage. The goal of that management is to fulfill the SLA that’s at least implicit if not explicit in those exposed properties. You’re working (or, in my example, walking) if you’re meeting the appropriate external conditions and properties of working/walking.

So does each of these black boxes have its own AIOps? Not necessarily. That will depend on the value of AI at that point, which depends on the nature of the contents of the black box, the complexity of managing it to its SLA, and the available tools to do the managing. It also depends on whether a given black box has a close relationship with others, and a bunch of what I could describe as “cooperative factors” that we’ll get into.

We’ll get into them by supposing that the manager of our black box, having expended its best efforts, cannot meet its SLA. I broke, and I’m sorry. Well, sorry probably won’t cut it, so the question then is how broke-ness will be dealt with. The answer is “whoever has an interface with this black box.” An SLA is a guarantee, and obviously it’s a guarantee to the element that’s connected at the specific interface(s) the SLA is provided for. Our black box has a cooperating element, or elements, and we can also presume that those connecting elements also have SLAs and other cooperating companions. The question is whether there’s some clear circle of cooperation here, a group of black boxes that are themselves perhaps an outer-level black box.

Networks, and combinations of network-and-hosting, are often systems of systems, and so the idea of a black-box hierarchy here isn’t outlandish. But it’s hard not to see an entire network, or a big conglomerate of data center and network, as the top element of every hierarchy. In other words, does “cooperation” extend as far as connection does? Not necessarily.

I’ve done a lot of work in describing service models that were aimed at automating operations. What I determined was that there was connection and there was cooperation as far as describing the way two black boxes interact. In the former case, a black box might have to be informed about an SLA fault in order to “know” that it might not be able to meet its own SLAs. In the latter case, the black box might be informed in order to mediate or participate in a remedy.

Let’s suppose our black box is an element in a service, and that its interface to a core network service fails. If it has a second interface to the same core network, and if that interface can be used to satisfy its SLA “upstream”, then the black box could remedy its own failure. If it doesn’t have another interface, then it would need to report a failure “upstream”. The question is then whether the upstream box has a connection to the core that doesn’t go back through the same failed box. If it does, then it has a remedy option, and we might well consider that “parent” box and whatever “child” boxes there might be with parallel interfaces to the core to be a cooperative system, and provide operations automation, meaning AIOps, for that system of boxes. In that case, we might see a complex hierarchy of black boxes.
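The escalation pattern just described can be sketched in a few lines. This is my own illustration of the logic, not any real product’s API: each black box first tries a local remedy (a parallel interface to the core), and only reports the fault upstream when it can’t meet its own SLA.

```python
# A minimal sketch of hierarchical fault remediation among "black boxes":
# remedy locally if a spare interface exists, otherwise escalate upstream.

class BlackBox:
    def __init__(self, name, parent=None, spare_interfaces=0):
        self.name = name
        self.parent = parent            # the "upstream" box, if any
        self.spares = spare_interfaces  # parallel interfaces to the core

    def handle_fault(self):
        """Return a description of where (and whether) the fault was remedied."""
        if self.spares > 0:
            self.spares -= 1            # fail over to a parallel interface
            return f"{self.name}: remedied locally via spare interface"
        if self.parent is not None:     # SLA can't be met here; escalate
            return self.parent.handle_fault()
        return f"{self.name}: fault reached top of hierarchy unresolved"

core = BlackBox("core-aiops", spare_interfaces=1)
edge = BlackBox("edge-box", parent=core)

print(edge.handle_fault())  # edge has no spare, so the core box remedies
print(edge.handle_fault())  # core's spare is exhausted; fault goes unresolved
```

The point of the structure is that each box needs to understand only its own remedies and its own upstream interface, which is exactly the property that makes a hierarchy of AIOps elements tractable.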

It’s also possible that there might be a black box system that’s created by management alone. An SLA violation might be reported not to the data-plane interface (or not just to it) but to a management system, one that’s an AIOps entity. In that case, the span of control of that management system would determine the boundary of the cooperative system, because it’s the thing that’s inducing the cooperation.

The purpose of this structure is to string AIOps elements in the mesh of things that form the infrastructure being managed, an infrastructure almost surely bound together with a network. These elements would be charged with ensuring that the subordinate black boxes under their control met their overall SLA. And, of course, there would be an AIOps element sitting at the top of the hierarchy. However, rather than being expected to understand all the pieces of that mesh of things, it only has to understand the properties of the black boxes just below it in the hierarchy. These boxes need to understand what’s below them, and so forth until we reach the bottom.

Effective AIOps depends on intent modeling, and intent modeling isn’t just defining black boxes that represent modeled elements, it’s also defining the way these black boxes are organized into a cooperative hierarchy. The “model” is as important as the “intent”. This reduces the burden on each AIOps element, and it also permits the elements to be specialized to the specific infrastructure components they’re expected to control. As you rise in the hierarchy, the elements could become more generalized, since their mission would usually be one of finding alternatives to a primary element that had failed.

The approach I’ve described also provides for the use of AIOps in testing and rehabilitation of assets, and for broader-level diagnosis of the health of the system overall. A specialized tool could be “connected” to a separate management channel to do this, and that channel could (like the data interfaces) represent either a pathway to specific real assets or to an AIOps element there, prepared to act on behalf of the generalized higher-level tool. Testing and rehab, then, would be managed through the same sort of AIOps hierarchy as overall operations, though not necessarily one with the same exact structure.

We may be heading toward the notion of having AI deployed in a network or compute structure that’s hierarchical, but that doesn’t mean that we have a hierarchy of intent models or even that we have intent models at all. There’s not enough formalism with respect to the relationship between clusters of technology elements, their management tools, and the management of the entire infrastructure…at least not yet. Some vendors are making progress toward a holistic management vision, and such a vision is just as critical for AIOps as AI is itself.

Why We Need to Look Deeper into Market Opportunity Differences in Tech

Are there states in the US that present a special opportunity for network and cloud services? That’s a question I’m getting more often as vendors and service providers look for attractive targets for new revenue. In fact, to me at least, the big question is why this particular question hasn’t been asked and answered long ago.

The most fundamental measure of network potential I’ve discovered is “demand density”, which measures the economic activity per unit of addressable geography. Where demand density is high, a mile of infrastructure passes a lot of opportunity dollars and is likely to generate a good ROI. If we look at state-wide demand density, we find that there are four states with sterling demand density and seven where demand density is low enough to make profitable deployment of wireline broadband challenging. Roughly 20 of the 50 states have demand densities high enough to indicate that business broadband would be practical there.
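The demand density metric is simple arithmetic: economic activity divided by the geography the network must cover. The figures below are hypothetical, purely to show the computation; a real model would weight GDP, household income, and business concentration over the addressable area.

```python
# Demand density as defined above: opportunity dollars per unit of
# addressable geography. Inputs here are hypothetical illustrations.

def demand_density(economic_activity_usd, addressable_sq_miles):
    """Economic activity per square mile of territory to be served."""
    return economic_activity_usd / addressable_sq_miles

# Two hypothetical states: similar-sized economies, very different geographies.
dense_state = demand_density(1_500_000_000_000, 8_000)     # compact, urban
sparse_state = demand_density(1_500_000_000_000, 260_000)  # large, rural

print(f"Dense state:  ${dense_state:,.0f} per sq mi")
print(f"Sparse state: ${sparse_state:,.0f} per sq mi")
```

The same economy spread over thirty times the geography yields a thirtieth of the opportunity per mile of infrastructure, which is why compact urbanized states dominate the profitable-deployment list.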

One factor that strongly influences demand density (or perhaps correlates with it) is the prevalence of single-family housing. On one hand, it would seem that single-family households would be higher-income, but on the other hand single-family housing results in lower population densities than apartment living. None of the states with the largest percentage of households living in single-family homes place at the top in terms of demand density, and only three of the top ten in single-family housing are in the top 20 states in demand density. Urban/suburban concentrations make for more efficient networking, and that same concentration is linked to a higher population of businesses.

That doesn’t mean that selling broadband and cloud services would be practical everywhere demand density is high. Generally speaking you can’t sell broadband or any form of business technology except to the headquarters of a multi-site business. Major enterprises aren’t headquartered everywhere, and while every one of those 20 states has at least one enterprise headquarters, the largest number of enterprise headquarters are found in a group of ten states: New York (58), Texas (50), California (49), Illinois (37), Ohio (25), New Jersey (22), Virginia (21), Pennsylvania (20), North Carolina (19), Massachusetts (18). All of these states have demand densities in the top 20; the top four in demand density are also on this list. It’s also true that these same states have the largest number of headquarters sites for SMBs.

While you can’t generally sell tech except to headquarters locations, you also can’t sell tech if you ignore where the other sites are located. Only multi-site businesses buy business networks, meaning more than Internet access, and generally business networks are found only in businesses with at least 20 sites, of which there are roughly 600 thousand in the US. On the average, network salespeople say that only the top 50 thousand companies in terms of number of sites are really profitable sales targets, and these represent a bit over a million and a half locations. In order to sell network services, a provider would have to cover (somehow) all the satellite site locations.

The top states for carrier Ethernet, meaning specialized business broadband services (according to TeleGeography) are almost the same as the states that are top in enterprise headquarters. Only Florida and Georgia are on the Ethernet list and not on the top headquarters list, and only Virginia and North Carolina are at the top of the headquarters list and not on the carrier Ethernet list. That illustrates that some secondary sites are large enough to justify carrier Ethernet, and some states seem to harbor more of those high-volume secondary sites. Interestingly, these same states have the largest number of network integrators.

Suppose you’re trying to empower workers with new applications. How do you find the right targets? It turns out that there are 25 occupations where jobs have a high information content, and can therefore be empowered readily. The occupations with the highest unit value of labor represent the prime targets, and the same states that have the highest number of headquarters sites have the highest unit value of labor overall, and the largest percentage of high-value occupations. There’s only one outlier here; Washington state has a high unit value of labor and isn’t in the top ten on either of our other categories. Guess what’s responsible? Tech and Microsoft.

A good measure of a state’s growth in tech dependence is the number of computer schools found there, and here again the states with the most carrier Ethernet have the most computer schools. This demonstrates that carrier Ethernet requirements correlate with the employment of computer-savvy workers, ones that are most easily empowered through IT and network investment.

The states with the lowest unit value of labor on average, and the smallest number of high-value occupation targets, are the same states with low demand densities. These states also have the lowest number of personal computers per capita, a good proxy for technology use. We see here the contrast between an “empowerable” state and one that’s far less likely to benefit from technology projects aimed at worker productivity improvements.

The point of all of this is that it’s a serious mistake to think of a country as a homogeneous market for computing, cloud, and network services. Business headquarters are concentrated in a small number of states, and for most vertical markets (healthcare being the exception) high-value workers who present attractive empowerment targets are concentrated in roughly the same places. The best indicators of an attractive area for tech product and service sales are the number of carrier Ethernet sites and the number of computer schools, but demand density is a close third.

Of course, it doesn’t take a national intelligence organization to figure out where carrier Ethernet can be found, and furthermore the location and size of those particular business sites hasn’t changed much over the last decade. Real estate needed to house a large workforce and justify carrier Ethernet connection is high-inertia. The greater opportunity right now lies in the secondary sites that are too small to justify carrier Ethernet, and thus too small to support on MPLS VPN technology.

In order to be a “networked” business you need multiple sites to connect. There are roughly 2 million business sites that are part of a multi-site business, and a million and a half of them are associated with the top 50 thousand businesses ranked by number of sites. The other half-million are typically businesses with fewer than 10 sites, and only about a fifth of these are networked beyond simple Internet access.

The average major multi-site business has 53 sites, and only 25 are connected via MPLS VPNs and carrier Ethernet. One of the most important trends of the last five years has been the pressure to add small sites to VPNs, and it’s this pressure that created the SD-WAN opportunity. Gartner says that there are roughly 20 thousand US SD-WAN sites and my model says 32 thousand. Either way, there are about 400 thousand satellite sites not connected via either MPLS or SD-WAN, and if you count all multi-site businesses there are over a million sites that could be on a VPN and are not.

Whether you consider it a cause, an effect, or an indicator, the most significant event in the transformation of a “business” is the connection of its sites to a VPN. It implies that there is a central set of resources, applications and data, to which the workers in the remote offices need access. It provides a central mechanism for the management of those resources and access to them, and even to local information resources. It’s an instrument of policy, of control, and in almost all cases a tool in security and compliance. That means that the time at which the connection is made is the ideal time to assess the quality of all these things, and to provide augmentation as needed. In particular, site and connection security policies are usually set at this point, and that means whoever sells the VPN connection has an inside track into the security of the network and the sites on it. Security tools are one of the most enduring sales opportunities of our current times, so that’s important.

The key takeaway here is simple: you have to know your market target, not only in the usual sense of “knowing about” it, but in the more general and more-often-ignored sense of simply knowing what your target is. The business market for network products and services is, in particular, made up of highly differentiated segments, and every state in the US has its own mixture of them. Blasting out material and sending out salespeople with no notion of which segments you can hope to address wastes time, effort, money, and opportunity.