Making Sure AI Operations Deliver on their Promise

With Tesla about to release its humanoid robot, we’re going to see more talk about the dangers of machines taking over. Do we actually face the risk of AI getting smart enough to become truly sentient and deciding that its creators are an impediment to its future? I think we’re safe from that in the immediate future, but we do have to be thoughtful about how we use the kinds of AI we really can create, or we may have to worry about them too…for more pedestrian reasons.

Isaac Asimov was famous for his “Three Laws” of robotics, which obviously addressed an AI model that was to all intents and purposes self-aware and intelligent. For those who don’t remember, or weren’t sci-fi fans, they were:

  • A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  • A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  • A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

A quick read of these shows that they were designed to protect us from robots, but suppose that robots and AI aren’t really the problem? A few early users of AI I’ve chatted with have offered some examples of real, present-day AI problems. They aren’t evidence that AI is more of a risk than we can tolerate, but rather that, like all technologies that speed things up, it can move too fast.

Another blast-from-the-past memory I have is a cartoon about the early days of IT, when it was common to measure computing power by comparing it to the work of human mathematicians. A software type comes home from work and says “I made a mistake today that would have taken a thousand mathematicians a hundred years to make!” It’s this cartoon, not the Three Laws, that illustrates our challenge in controlling AI, and the challenge isn’t limited to things like neural networks either.

Tech is full of examples of software that’s designed to seek optimality. In networking, the process of route discovery is algorithmic, meaning it’s based on a defined mathematical procedure for solving a specific problem. The difference between algorithmic optimality and AI optimality is more one of degree. An algorithm has a limited and specific mission, and AI is designed to support a broader mission, meaning more information, more assessments, more human-like thought processes.
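To make the contrast concrete, here’s a minimal sketch of the kind of narrow, specific mission a routing algorithm has: shortest-path computation over a weighted topology. The topology and link costs are purely illustrative.

```python
import heapq

def shortest_paths(graph, source):
    """Dijkstra's algorithm: the whole mission is 'find the lowest-cost
    path to every node', nothing broader. 'graph' maps a node to a dict
    of {neighbor: link_cost}."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        cost, node = heapq.heappop(heap)
        if cost > dist.get(node, float("inf")):
            continue  # stale heap entry; a better path was already found
        for neighbor, link_cost in graph.get(node, {}).items():
            new_cost = cost + link_cost
            if new_cost < dist.get(neighbor, float("inf")):
                dist[neighbor] = new_cost
                heapq.heappush(heap, (new_cost, neighbor))
    return dist

# Illustrative four-node topology with link costs
topology = {"A": {"B": 1, "C": 4}, "B": {"C": 2, "D": 5}, "C": {"D": 1}, "D": {}}
print(shortest_paths(topology, "A"))  # {'A': 0, 'B': 1, 'C': 3, 'D': 4}
```

An AI system aimed at a broader operations mission has no equally crisp statement of “optimal”, which is where the trouble starts.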

Routing algorithms make “mistakes”, which is perhaps the driving force behind the notion of SDN. You can create a more efficient network by planning routes explicitly to manage traffic and QoS, which is what Google does with its core network. However, central management a la SDN is difficult to scale. We could argue that AI in tech is designed to lift algorithmic optimization up a level. Maybe not all the way to the top, where there are no issues, but at least to something better.

According to the feedback I’ve gotten on AI, users see two distinct models, the advisory form and the direct-response form. In the former, AI generates information that’s intended to guide human action, and in the latter AI actually takes action. The direct-response form of AI obviously works faster and saves more human effort, but it presents more risk because the behavior of the AI system is likely difficult to override in real time. Users tell me that it’s easier to sell the advisory form of AI to both management and staff for reasons of perceived risk, and they also say that they’ve had more problems with direct-response AI.

The problem with automated responses is that they aren’t always right, or they’re not always executed as fast as expected. The former issue is the one most users report, and in most cases what they’re reporting is a response to a problem that doesn’t properly consider the network as a whole, or the state the network will be left in once the response is executed. That these problems boil down to “My AI didn’t do the right thing” is obvious; the issues behind that point need a bit of discussion.

Truly autonomous AI based on neural networks or built-in intelligence creates problems because of the way the underlying rules are created. The classic process involves a subject-matter expert and a “knowledge engineer” who work together to define a set of rules that will then be applied by AI. While a failure of either or both of these roles is an obvious risk, it’s not what enterprises say is at the root of the problem. That turns out to be biting off more than you can chew.

Most enterprise networks are complicated mixtures of technology, vendors, and missions. When you try to establish rules for AI handling, the policies that will guide AI interpretation of conditions and execution of actions, it’s easy to fall into a kind of “completeness trap”. You go to the netops people and ask what they’d like, and since they have a broad scope of responsibility, they offer a broad response. The problem is that breadth means the subject-matter people (netops) will simply overlook some things, that the knowledge engineers won’t perfectly understand some of what they do describe, or a combination of both.

Machine learning (ML) forms of AI fall into this same trap, for a slightly different reason. In an ML system, the root notion is that the system will learn from how things are handled. In order for that to work, the conditions that lead up to a netops intervention have to be fully described, and the action netops takes fully understood. In simple terms, an ML system for operations automation would say “when these-conditions occurred, this-action was taken, resulting in this-remediation-state.” All of those placeholder elements would have to be fully defined, meaning that specific conditions could be assigned to the first and third and specific steps to the second.
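A minimal sketch of what such a learning record might look like helps show how much has to be pinned down; the field names and telemetry values here are hypothetical, not any particular product’s schema.

```python
from dataclasses import dataclass
from typing import Dict, List

# Hypothetical, simplified record of one netops intervention: the kind of
# fully-described condition/action/result tuple an ML system would learn from.
@dataclass
class OpsObservation:
    conditions: Dict[str, float]         # telemetry at the moment of intervention
    action: List[str]                    # the specific remediation steps netops took
    remediation_state: Dict[str, float]  # telemetry after the action completed

example = OpsObservation(
    conditions={"link_7_utilization": 0.97, "queue_depth_ms": 42.0},
    action=["shift_traffic:link_7->link_9", "raise_alarm:capacity"],
    remediation_state={"link_7_utilization": 0.61, "queue_depth_ms": 8.0},
)
```

Every one of those fields has to be defined completely and consistently, or the system learns from an incomplete picture.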

What, exactly, would fully define some network condition or state? Is it everything that has been gathered about the network? If so, that’s a daunting task. If not, then what specific information subset is appropriate? The bigger the bite of netops activity you take, the more difficult these questions become, to the point where it may be literally impossible to answer them. In addition, the more conditions there might be, the smaller the chance they’d appear during the learning period.

You can’t control network complexity, but you can control bite size. While, as I said above, “enterprise networks are complicated mixtures of technology, vendors, and missions,” it’s usually true that the elements can be divided into “zones”. There’s a data center network, a branch network, and likely a “switch network” and a “router network”. Maybe a Cisco network and a Juniper network. These zones can be treated separately by AI because netops assessments and actions normally take place within a zone.
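Here’s a rough sketch of what “bite control” could look like in practice: events are only ever assessed by the AI handler that owns the reporting device’s zone, and anything outside a defined zone is escalated to humans. The zone names, device names, and handler interface are all illustrative assumptions.

```python
# Each zone gets its own, narrowly scoped AI handler; no handler ever reasons
# about devices outside its zone.
ZONES = {
    "data_center": {"dc-sw-01", "dc-sw-02"},
    "branch":      {"br-rtr-11", "br-rtr-12"},
    "core":        {"core-rtr-01"},
}

def zone_of(device: str) -> str:
    for zone, members in ZONES.items():
        if device in members:
            return zone
    return "unzoned"  # fall back to human review rather than guessing

def dispatch(event: dict, handlers: dict) -> None:
    zone = zone_of(event["device"])
    handler = handlers.get(zone)
    if handler is None:
        print(f"escalate to netops: no AI scope covers {event['device']}")
    else:
        handler(event)  # assess and act using zone-local policy only

dispatch({"device": "dc-sw-01", "condition": "high_cpu"},
         {"data_center": lambda e: print("data-center zone handles", e["condition"])})
```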

Both operators and enterprises are starting to like the “bite control” approach when they decide to try fully automated AI responses to network conditions, but more are liking AI in an advisory role. Operators in particular see the highest value of AI in facilitating diagnosis of problems, the second-highest in recommending solutions, and the lowest in automatically applying solutions. Enterprises are a bit more likely to want fully automated responses, but not significantly more likely to try them in the near term.

The final point here is key, though. There are risks associated with AI, but those risks are really very similar to the risks of operations without AI. Humans make mistakes, and so does AI. Despite some early issues with fully autonomous operations, both network operators and enterprises say they are “fully committed” to the increased use of AI in their networks, and are similarly committed to AI in data center operations. There are some growing pains, but they’re not turning the potential AI users away.

We’re still kicking AI tires, it seems. That’s not a bad thing, because trusting a fully automatic response is a big step when you’re talking about a mission-critical system. To paraphrase that old cartoon, we don’t want our AI to make a mistake that would have taken a thousand netops specialists a hundred years to make.

Is the Juniper/Synopsys Deal a Sea Change in Silicon Photonics for Networks?

Juniper is getting serious about silicon photonics. The company already had hundreds of patents in the area, and a clear interest in making itself a presence in the space. Now, they’ve formed a separate company with Synopsys, which will contain Juniper’s silicon photonics technology and leverage it across a wide range of applications. Networking is obviously one such application, and perhaps in more dimensions than we think.

Silicon photonics (SiPh) is a chip technology that substitutes photons for electrons in moving information. In theory, it can create chips that are more powerful but consume less power and so dissipate less heat, making them more efficient both in terms of facility costs and in terms of the environment. It can also increase the speed of interconnect for network devices, and facilitate the coupling of chips to fiber for longer-haul transport. Intel is probably the most-recognized name in SiPh, and it’s a part of the company’s data center initiatives. In public cloud and other “hyperscaler” resource pools, SiPh could be critical.

For over three years, Juniper has touted a “second-generation” vision of SiPh, where lasers could be put directly onto the chip and drive external fiber, something called “co-packaged optics”. It can lift the capacity of a fiber connection to a network device up into the terabits, which would certainly be an advance, but I think there may be other network benefits that might be even more significant, relating to the metro-mesh model of future networks that I’ve blogged about.

When you route packets, you need to worry about specific destinations when you’re close to the edge, because that’s where those destinations connect. As you get deeper into the network, you’re really not routing destination addresses but subnet addresses, aggregated flows. In MPLS networking, the MPLS LSPs make this aggregate-routing explicit. While the core traffic flows over optical fibers, the routing process remains electrical because we can’t directly route optical flows.

With SiPh, that could change. Imagine a chip with multiple optical transceivers, capable of “reading” an MPLS header and making a routing decision. This multiplicity of connections could be a technical challenge for today’s devices, but we could imagine each fiber trunk terminating in a chip that did nothing but pull off “local” traffic from an optical flow. The rest of the stuff would proceed onward, with a lot less handling delay than we’d see in today’s networks.
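Here’s a hypothetical sketch of that “peel off local traffic” decision, reduced to its logical essentials; the label values are invented, and the real decision would of course be made in silicon, not software.

```python
# A per-trunk element keeps a small table of MPLS labels that terminate
# locally and passes everything else onward untouched.
LOCAL_LABELS = {10021, 10044}   # LSPs that terminate at this site (illustrative)

def handle_frame(mpls_label: int, payload: bytes):
    if mpls_label in LOCAL_LABELS:
        return ("drop_to_local_port", payload)   # hand off to local switching
    return ("forward_on_trunk", payload)         # stay on the optical path

print(handle_frame(10021, b"...")[0])  # drop_to_local_port
print(handle_frame(20999, b"...")[0])  # forward_on_trunk
```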

The value of this sort of thing in metro-mesh applications is clear, but it would also be highly valuable in the metro network itself, because it could be the key to injecting hosted features into communications services, as well as in supporting edge computing applications. The metro site is the optimum place to couple hosted and connectivity features because it’s deep enough to be efficient in terms of users served, and shallow enough for some personalization to be possible.

It’s easy to see how this sort of thing could be valuable for Juniper in realizing their “Cloud Metro” vision. SiPh could transform the concept from a label for a collection of technologies that’s already in play, to something totally new and potentially revolutionary. It would link well to their Apstra data center stuff, not only in metro applications but in general. Data centers, after all, have an enormous amount of traffic to handle and need an effective on/off-ramp to the network, whatever “the network” might be.

There are other applications of SiPh that might be interesting to Juniper, too. It’s already been used to create large-scale neural networks and it’s obvious that SiPh could accelerate other complex AI applications, even be used to create an “AI engine”, a form of server that would be tuned to support AI in multiple forms. In mixed AI/IoT applications in particular, something like this could be the difference between a proof-of-concept implementation and a practical real-world system.

A final value proposition for a Juniper SiPh strategy is the effect of the shift toward optical networking on the competitive landscape. Electrical and optical vendors are increasingly fighting for budget, and the optical people have a fundamental advantage in that their stuff is essential in transporting information. Creating a model for metro meshing and metro feature injection, combining data center and WAN, would be a powerful competitive response for a router vendor. It’s probably at least one reason why Juniper got into SiPh to start with.

Why the venture, then? I think the main reason is that the space is just too specialized, and moving too fast, for Juniper to be assured their own efforts would keep up with market opportunities. Cisco also has pluggable SiPh elements in its portfolio, and while Cisco doesn’t normally attempt to lead the market, they certainly make playing it safe a lot more risky. One could even argue that they make playing it safe downright dangerous.

Cisco is the incumbent in router networks, they have their own server line, and they’re clearly taking more interest in the hosting side of the business. The scope of what Cisco could do in the data center is so broad that Juniper couldn’t hope to match it quickly enough to be credible, if the future of services does in fact involve the partnership between connectivity and hosted features. However, Cisco has been far from innovative or revolutionary in its data center offerings. Attacking on a narrower front, through the SiPh metro mission, might be the best way to beat them.

Revolution in the making? An important point to remember in all of this is that technology revolutions are rooted in opportunity, not in technology. There are conditions that point to a transformation of networking, but whether those conditions will all develop and how the transformation they drive will unfold is still an open question. Right now, the biggest challenge SiPh may face is the same one many vendors face: the PR. The articles on SiPh range from obtuse to totally vapid, and that makes it difficult for the technology to stimulate interest among planners. Will simply saying the magic words (well, letters) make SiPh a reality? Hardly. Again, we’re probably going to have to wait for somebody to step up.

That’s why the Juniper/Synopsys deal might be important. It’s a change in the business processes driving at least a piece of SiPh, and those kinds of changes are what open the market up to broader changes in technology and network direction.

Suppose We Had an NFV Mulligan

The notion of open-model networks often involves the concept of feature/function hosting, a concept that was introduced a decade ago with the “Call for Action” paper that turned into the NFV ISG. Today, I think that most network professionals agree that the ISG didn’t get the details right but changed the industry by raising the flag on the concept. Now the ISG is trying to modernize (Release 5) and other forces are weighing in on the question of function hosting. Suppose we were to get a Mulligan on this, a do-over? What would a top-down approach to the question yield? I propose four principles.

The first principle is that universal openness demands a universal hosting model. An actual network contains three classes of devices: aggregate traffic handlers, connection feature hosts, and on-the-network elements. NFV really focused on the middle group here, but we need an architecture that addresses all the groups and avoids silos. The way to get that is to break up the model of an open-model network into three layers: platform, PaaS, and function.

NFV proposed hosting functions on commercial off-the-shelf servers, and that’s too limiting. Functions should be hosted on whatever platform is optimal, which means everything from containers and VMs to bare metal, white boxes, controllers, etc. A network would then be made up of a variety of boxes, a variety of network functions riding on them, and a mechanism to organize and manage the entire collection.

Extending the “platform” to white boxes and other embedded-control hardware isn’t easy. In the white box space, for example, you have the potential for a variety of CPU chips, network chips, AI chips, and so forth. In order to implement this principle, you’d need to define a standard API set for each class of chip and a driver model to harmonize each implementation of a chip type with that API set.
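As a sketch of that idea, assuming hypothetical method names rather than any real SDK: one standard API per chip class, with a per-vendor driver harmonizing that vendor’s silicon to it, so higher layers never see the vendor SDK at all.

```python
from abc import ABC, abstractmethod

# One standard API for the "network chip" class; each vendor supplies a driver.
class NetworkChipAPI(ABC):
    @abstractmethod
    def load_forwarding_table(self, entries: list) -> None: ...
    @abstractmethod
    def read_counters(self) -> dict: ...

class VendorXDriver(NetworkChipAPI):
    """Maps the standard API onto one vendor's SDK calls (stubbed here)."""
    def load_forwarding_table(self, entries: list) -> None:
        print(f"programming {len(entries)} entries via vendor X SDK")
    def read_counters(self) -> dict:
        return {"rx_packets": 0, "tx_packets": 0}

def provision(chip: NetworkChipAPI, entries: list) -> None:
    # Higher layers only see the standard API, never the vendor SDK.
    chip.load_forwarding_table(entries)

provision(VendorXDriver(), [{"prefix": "10.0.0.0/8", "port": 3}])
```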

The second principle builds on the first; the PaaS layer of function hosting creates a unified model of management and orchestration at all levels. To prevent total anarchy in function hosting, the functions have to be built to a common set of APIs. Those APIs are presented by “middleware” in each platform, and bind the application/function model to the variety of devices that might be useful in hosting stuff. Everything involved in function hosting is abstracted by this PaaS layer and presented through these common APIs, so no matter what the function or platform might be, getting the two together and functioning is the same. Thus, external tools bind to functions through this PaaS layer as well.
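Here’s a minimal, hypothetical sketch of that PaaS idea: functions are written against one hosting interface, and an adapter maps those calls to whatever platform (container, VM, white box) the function actually lands on. The class and method names are my own invention, not any standard’s.

```python
# The PaaS presents the same deployment and management surface no matter
# which platform adapter sits underneath it.
class HostingPaaS:
    def __init__(self, platform_adapter):
        self.platform = platform_adapter   # container/VM/white-box specific code

    def deploy(self, function_image: str, profile: dict) -> str:
        return self.platform.instantiate(function_image, profile)

    def expose_management(self, instance_id: str) -> dict:
        # Same management view regardless of the underlying platform
        return {"instance": instance_id, "state": self.platform.state(instance_id)}

class ContainerAdapter:
    def instantiate(self, image, profile): return f"container:{image}"
    def state(self, instance_id): return "running"

paas = HostingPaaS(ContainerAdapter())
iid = paas.deploy("vfw:1.2", {"cpu": 2})
print(paas.expose_management(iid))
```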

This principle, and this layer, are IMHO the keys to a modern vision of function hosting. Absent the PaaS layer, we end up with a bunch of virtual devices whose capabilities and value are constrained by the physical devices we already have. Do we really change networking by converting a real router to a virtual one? Maybe for a bit, while we wring out vendor profits, but not fundamentally.

The third principle is that function relationships to platforms and services are defined by a generalized, intent-based service model. You can’t create a market-responsive network infrastructure by programming it; you have to be able to use a model to organize things, and build services by building models. The processes within, and in support of, the PaaS layer would be integrated through the model, which means that lifecycle management and even OSS/BSS activity would be coordinated through the model.

Way back before any real NFV ISG work was done, I proposed the notion of “derived operations”, which meant that operations interfaces would be created through APIs that were proxies for underlying management information. I think that this task should now be part of the model, meaning that the model should be able to define management interfaces to represent functions and function collections.
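A small sketch of the derived-operations idea, under my own assumptions about the model: each model element synthesizes its management view from whatever sits beneath it, so a service-level management interface is derived rather than directly instrumented. The status-rollup rule here is deliberately simplistic.

```python
# A model node acts as a proxy for the management data of its children;
# leaf nodes read real element management data via a probe.
class ModelNode:
    def __init__(self, name, children=None, probe=None):
        self.name = name
        self.children = children or []
        self.probe = probe   # leaf nodes supply a callable that reads real data

    def management_view(self) -> dict:
        if self.probe:
            return self.probe()
        views = [c.management_view() for c in self.children]
        worst = "degraded" if any(v["status"] != "up" for v in views) else "up"
        return {"element": self.name, "status": worst, "members": views}

leaf = ModelNode("vpn-endpoint-1",
                 probe=lambda: {"element": "vpn-endpoint-1", "status": "up"})
service = ModelNode("vpn-service", children=[leaf])
print(service.management_view())
```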

Principle number four is that all cloud-related operations/lifecycle tasks are carried out by standard cloud tools. This may be more complicated than it appears on the surface, because while we have “cloud tools” that work with containers, VMs, and even bare metal, we don’t traditionally apply (or attempt to apply) them to white boxes. If we’re to adhere to the first principle above, we have to make these tools work everywhere functions could be hosted. That could happen through the PaaS layer, which might be the simplest way to address this issue. That would mean that the PaaS layer is the functional centerpiece of the whole approach, which I think is a good idea.

We have, in the ONF Stratum and P4 architectures, what I think is a good model for the white-box function platform, and it seems likely it could also serve for other specialized hardware, like AI or IoT elements. We have, in Kubernetes, an orchestration platform that would serve well, and it’s already been adapted to bare metal. If we were to port the Kubernetes node elements (the kubelet, kube-proxy, etc.) to the Stratum model, that would make it compatible with white boxes, provided that we added Kubernetes’ general orchestration for VMs and bare metal.

We don’t have either a model or the PaaS, and those are the two things that something like the NFV ISG should be developing (that was my proposal to the ISG back in 2013). It’s easier to create the two together, I think, because there’s some cooperative design involved. It wouldn’t be rocket science to do this, either. There’s plenty of work out there on intent models, I’ve blogged about the modeling needs many times, and my ExperiaSphere work is available for reference without even a requirement for attribution, as long as the word “ExperiaSphere” isn’t used and there’s no implication of my endorsement made on the result (unless I know what it is and actually do endorse it).

This is what I believe the NFV ISG should now be doing, but I’m skeptical that they would even consider it. The problem with standards groups is that they tend to be high-inertia. If they do something, subsequent work tends to continually validate that which was done before. That makes any significant change in direction difficult, often impossible. The same problem, I think, would likely inhibit the TMF from making progress.

Could a vendor do the job? Maybe one of the big open-source foundations I mentioned in my blog yesterday (Apache, Linux, CNCF)? Juniper actually attempted something like this over 15 years ago, and turned their work into an international body that was pretty successful till vendor and standards politics messed it up. An open-source group could make progress, but I think that it would need some contributed initial framework or it would likely get mired in debate right from the first, and fail to move things fast enough.

“Fast enough” is the key here. We are already seeing, in 5G Open RAN and the RIC, that different platforms and management/orchestration frameworks are emerging for different missions, creating silos. I’d guess that if something isn’t started this year, it will be difficult to bring the right stuff to market in time to fend off the inevitable silos, proprietary visions, and disorder.

Just What Does “Open” Technology Mean?

One interesting point of agreement across both the network operator and enterprise communities is that “There’s open, and then there’s open.” The meaning of this should be clear; not everything that’s touted as “open” is equivalently open. Given that both operators and enterprises say that “openness” is in at least the top two of their requirements, it’s worth looking at just what they think it means, and what the views of buyers mean to the market overall.

It’s no surprise that the top meaning attributed to “open” in both networking and IT is “not proprietary” or “no lock-in”. Buyers always tend to feel exploited by sellers, but that’s been a particular factor in both networking and IT for over two decades, according to my surveys. This definition isn’t particularly helpful, though, because it turns out that many “open” technologies (as classified by the buyers themselves) aren’t free of proprietary lock-ins.

In my survey attempts to dig down a bit, I found that there was some consensus on the meaning of the term I often use in my blogs, which is “open-model”. Buyers say that an “open-model” technology is one that is built from components that each have multiple sources and that can be freely interchanged. Consensus is nice, but even this definition has nuances. For example, “router networks” are considered by network operators as open-model (three-quarters say that), but not by enterprises (two-thirds say they’re not). The difference in viewpoint arises here because enterprises believe that router vendors will add technologies (usually operations/management tools) that lock them in.

To be an open-model technology by the definition of most buyers, you’d have to expand the definition to say that in addition to having interchangeable components with multiple sources, all functional elements have to be built this way, not just the primary elements. Thus, an “open-model” router network would have to be made up only of interchangeable components, including the management tools.

“Open source” is another concept that’s crept a bit into definition ambiguity. Strictly speaking, open-source means that the source code is freely available to anyone who uses the software, and can be modified as desired, subject to the terms of a “license”. There are at least a dozen different open-source licenses, though, and we also now have the concept of “dual licensing” that permits an “open-source” version of something and a more commercial version.

Enterprises believe that “open-source” should mean that the software has freely available source code, is supported by community development, and that only integration and support services are sold, not functional extensions. The network operators are a bit less concerned about the last point; they don’t mind commercial extensions as long as the interface to them is open and open-source.

There is, however, a general erosion of confidence in open-source software, even the stuff that meets buyer definitions. A decade ago, over 80% of both enterprises and operators believed that open-source software was secure and that governance was “strong”. Today, less than half of operators and just over half of enterprises believe that. Both groups are three times as likely to want to acquire open-source software from a third-party source (Red Hat, for example), and those buyers cite governance and security risks as a reason not to go directly to the source. The open-source foundations like Linux, Apache, and the CNCF are rated as “critically important” to software quality and security by at least two-thirds of buyers today, when a decade ago a quarter or less rated them that way.

Where things get really complicated is on the network side, with the white-box and disaggregated movements. Enterprises believe that white-box hardware and open-source networking software would create an open network device. They believe that white boxes are critical to this happy situation because they don’t believe that any open network software would be available unless there was a significant white-box community to drive interest.

Operators, whose focus is on the capex side, are more interested in open hardware, meaning white boxes, than in open software. That’s one reason why they aren’t concerned about the DriveNets model, where a software vendor uses open white-box hardware. For the operators, dependent on capacity and switching speed, issues of the compatibility of open-source software with the networking chips used is the critical software issue; they like the ONF P4/Stratum notion and model, but they’re not confident it will drive a lot of market change. They don’t see anyone stepping up to drive the model forward, and what they’d like is for a software player like Red Hat or VMware to do that, meaning they’d like to see both players field open-networking software and support a variety of white boxes.

As far as the disaggregation promises of the major network vendors go, both groups agree that offering routers or other network devices as separated software/hardware elements is “open” only if there is proof that the separated pieces will run in other combinations. Both groups characterize the disaggregation stories of vendors as “cynical” or even “misleading”.

Negative views can be helpful, but as far as what “open” doesn’t mean, the views of both operators and enterprises are pretty vague. Based on their high-level definition, it means “not proprietary”, but just what that means is tangled. On questioning, both groups say that open physical interface standards don’t create openness. Both groups say that the server side and software side of the tech space are much more “open” than the network side. That links their view here back to the goal of having open hardware models and open-source software, which both groups say characterize the server space.

As I said at the beginning of this blog, “open” ranks no lower than second in desirable attributes, so one might think that it would be decisive when selecting new technology. In the server space and platform software space, that tends to be true, but not in the networking space. When you ask the same operators and enterprises whether their last purchase of network equipment was “open”, well over three-quarters say it was not. When you ask what the decisive factor was in the purchase, the responses split between “TCO” and “integration”, with the latter really meaning “incumbency”.

The “open” push in networking actually peaked in 2019, according to my surveys. Since then, its importance relative to other factors has declined. In the last year, in fact, buyers who actually purchased open network technology cited price as a bigger factor in their decision than openness. In cases where open technology lost deals to proprietary devices, the reason was that the vendors discounted their products to make the deal.

It’s difficult to say conclusively why open technology passed its peak of interest, but one big factor seems to be media coverage. Vendors push on what has editorial support, and at the moment that’s mostly 5G. This, despite the fact that only a bit over half the network operators and only about 10 percent of enterprises cite 5G as being a key strategic interest to them.

Openness was never everything it was claimed to be; few things are these days. It’s still a better process than the traditional commercial processes, and the issues evolving in open-model technology are arising largely from the same forces that create proprietary abuses. The lesson for everyone is that there’s no easy answer for buyers…or sellers.

How Many “Markets” Do We Have in IT and Networking?

You can learn an amazing range of things from government data, particularly if you push it through a model. I’ve been slogging through that process for decades now, but for the last 6 months I’ve been focusing on the question of just what vertical markets are doing with IT and networking. The raw government data is interesting in itself, but to get the most from it, you need to add a bit of interpretation, which I’ve done using my market model. The question I’ve been trying to answer is whether “the market” can be defined in terms of behavior, or whether it’s so fragmented in behavior that the concept of an overall market is misleading, even useless.

My process looks at 62 vertical markets, and for this blog I’m focusing on US market data where I have the best access, best model/survey data, and most familiarity. One thing that immediately stands out in the assessment is the sharp polarization of verticals in terms of one critical metric, centralization of IT. Only six verticals rise to the level of “centralized” (a score of 50 or more) and 47 are “decentralized”, meaning they score less than 10.

Five of the six centralized verticals relate to the financial industry, and of the nine “partially centralized” verticals, seven are transportation verticals. In all, of the 15 verticals that are at least partially centralized, six are financial and seven transportation, so 87% of the verticals where centralized IT is a strong factor in IT overall are in just two broad segments of the market. These are the companies where data centers are highly important, and where centralized IT planning and management tend to occur. They are also the companies where there is a narrow set of “core business” applications that dominate everything. Keep these points in mind as we talk about the future.

Let’s look at another measure, which is the dependence of the company on “network spending”, which is largely a measure of the influence of private/VPN networks on their IT spending. Six transportation verticals out of fifteen are in the “dependent” range (a score of over 10), but no finance verticals are. This correlates with the shift by companies with some kind of retail focus toward using the Internet to reach customers and partners, since Internet spending isn’t included in this metric.

Internet growth and dependence is universally high in retail verticals and in other verticals (finance, for example) that rely on online customer relationships. In fact, all of the “dependent” companies (score over 35) fit those criteria. Public cloud data, which I have to get from my own contacts since it’s not reliably reported to government agencies, correlates with this same group of companies, which make up only 18 of the 62 verticals in my list.

Moving back to IT, we can see some validation of this retail vision of the cloud. Changes in IT direction are made most easily where there’s a change in budgeting, as indicated by spending growth. Among the nine verticals with “strong” growth in server spending (a score of 14 or over), we see two transportation verticals and one financial, all of which have limited retail exposure. There is only one retail vertical, accommodations, in that “strong” server-spending-growth zone. Generally, retail firms that relied on storefronts had more distributed computing, and my own contacts suggest that this group of businesses was unwilling to risk centralized or cloud operations for their locations, for fear of a network outage that would suspend their ability to do business.

There is strong indication in my own surveys that companies who have difficulty acquiring and retaining skilled technology expertise are more likely to adopt cloud computing. One indicator of that situation is growth in spending on integration services. There are two distinct sub-groups represented among the top twelve verticals in this group. The first is companies with overall high unit values of labor, such as investment firms and management consulting firms, which also offer higher-than-average salaries to IT personnel; the other is firms with concentrated, specialized high-value employees but a larger overall labor base, which tend to spend less on IT people. Apparently, companies with a majority of skilled people are prepared to go outside to obtain technical skills when needed, and also to adopt cloud services to respond tactically to opportunities or risks. Companies with a small number of specialist-skilled people, like mining, manufacturing, and petro-chemical firms, also spend less on IT personnel on average, and seek integration skills because they lack in-house resources. These are also candidates for cloud computing.

Another insight we can gather from the data relates to IoT. If we look at five-year network spending trends in the 62 verticals, we find that four of the five verticals with “strong” five-year network spending growth (scores of 10 or higher) were manufacturing, mining, and other verticals whose network usage was influenced by collecting telemetry from remote operations (the fifth was a financial vertical). IoT impact, not surprisingly, was largest in manufacturing and transportation. Interestingly, these same verticals topped the spending growth chart for personal computing, and remote personal computer activity was responsible for much of the network spending increases outside manufacturing/transportation.

All these, I think, are interesting insights, but perhaps the most interesting insight of all is that the differences in IT and network spending, centralization, and growth are enormous across these 62 verticals. The ratios on the spending side are anywhere from 10:1 to 50:1, and in spending growth in IT there are some verticals that show negative growth. In network spending, half of the total verticals show negative spending growth. The conclusion? That it’s meaningless to talk about “the market” in broad terms. The business service potential for the cloud, for IoT, for 5G, and for everything else will depend on what verticals we’re talking about.

That even applies to geography. Vertical markets aren’t evenly distributed, even nationally. Get down to the state, and even more to the zipcode, and you’re going to see massive differences in potential because of massive differences in how the verticals map to those geographic boundaries.

We can’t let ourselves get carried away with this, though. Just as there are enormous differences among verticals, there are enormous differences within them. Firm size is a factor that creates variations within verticals, and so is the concept of “focus”. Is a manufacturing company focused on a single product class or broadly diversified? Is a financial company a consumer retail broker or a Federal Reserve bank? I use a 62-vertical segmentation to address some of this, but even within one of those tighter verticals, there are differences.

Everyone knows the old joke about people trying to identify an elephant behind a screen; they reach in and, depending on what they happen to touch, they think it’s a snake or a tree or a cliff. We may be doing this very thing in talking about markets in networking and IT. The data, to me, shows that the differences among verticals are so profound that we can’t assume that a trend that appears in one place is really a market trend rather than just a local phenomenon. It also shows that sales/marketing processes, to be optimal, will have to look at the market differently.