Reading a Future Path from Juniper’s Current Quarter

Juniper surprised a lot of people with its quarterly earnings report, beating on earnings by over 2% and on revenue by nearly 6%. The surprise was probably unwarranted. In my blog on January 20th, I pointed out that Juniper was unfairly (in my view) assigned the smallest upside by the Street. No, I didn’t know anything about the quarter just ended, just that Juniper had acquired what I think are the strongest technology assets given the state of networking. It seemed obvious to me that those assets would start paying off, and they have. The big question, then, is how high can they go?

I don’t think there’s much doubt that Juniper’s quarter owes much to its “three pillars” of networking (Automated WAN, AI-Driven Enterprise, and Cloud-Ready Data Center), discussed in some detail at its fall “influencers’ event”. In particular, I think that Juniper has been smart to introduce AI as an operations enhancement tool across both enterprises and service providers, WAN and data center. Given that operators are demonstrably reluctant to dip their toes into new revenue streams, they can solve their profit-per-bit problem only by improving operations.

Operations costs and issues are also a big priority for enterprises and cloud providers. Networks are complicated, and as we add technical features and elements to improve networks’ ability to serve their missions, we make them more complicated. This threatens the stability of networks, which of course threatens the very missions we’re trying to support. Adding AI to the picture promises to reduce the errors that, in particular, create embarrassment and financial loss.

All of these positive drivers are durable enough to serve Juniper well through 2022, and perhaps even into 2023, in the enterprise and cloud provider spaces. For network operators, the problem of profit compression is getting acute, and 5G deployment is creating a need to deploy new infrastructure that tends to cement in an architecture, whether it’s deliberate or default. Most Street analysts agree that the service provider space is Juniper’s biggest systemic market risk.

Juniper probably realized this as early as last spring, when they made their “Cloud Metro” announcement. Obviously, a cloud/metro strategy is aimed at the operators and likely also at the cloud providers, and it could reflect the way we’d evolve to edge computing. I blogged on Juniper’s positioning when they told their story, and what they announced was essentially the network and operations framework for a future where services are created by a combination of network connectivity and hosted features. It’s a great notion, but one that I think could have been positioned more aggressively (as I noted in my blog).

The reason for my concern is Juniper’s biggest competitive market risk, Cisco. Cisco isn’t an innovator (they characterize their positioning as seeking a “fast follower” role), and in fact they’re already starting to counter-punch against Juniper’s SD-WAN and AI/operations successes. The best strategy, IMHO, to counter someone who is “fast-following” you is to lead faster: get far enough out in front, and change the dialog enough, to make their laggard position obvious and uncomfortable. That’s particularly true when you have as strong a technology asset base as Juniper has.

Juniper might be making its technology base even stronger, too. They recently announced some new ASICs: one (Trio 6), aimed at the edge, targets what Juniper has described as being “built for the unknown”, and fits into Juniper’s MX router line, generally targeted at service-edge missions. The other, the Express 5, provides accelerated packet handling for Juniper’s big-iron PTX. You could easily see both of these fitting into a metro mission heavy on edge computing and thus tightly coupled to data center technology.

The need for tight coupling between network and data center isn’t limited to metro, or to service providers or cloud providers. The fact is that enterprises’ network strategies have been driven largely from the data center for twenty years or more, and so “Cloud Metro” is a mission for cloud-coupled networking technology with almost universal application. Recall, too, that Juniper’s Apstra acquisition puts it in the almost unique position of having a strategy to couple its networking to almost any data center switching strategy.

Cloud-coupled networking has another dimension, that of virtual networking. In any environment where software nuggets are deployed, redeployed, and scaled in a highly dynamic way, you really need a virtual network to create that agility. Add the fact that, as I noted last week, consumer broadband enhancements in capacity and quality favor SD-WAN, and the combination of Juniper’s Contrail and 128 Technology Session Smart Routing looks very, very good.

None of the aggressive stuff I’m projecting here was mentioned in the earnings call, not even Cloud Metro. That’s not necessarily a surprise or a bad thing, because these days earnings calls are all about the current situation; companies have been reluctant to talk about positioning since Sarbanes-Oxley twenty years ago. The big question is whether Juniper will, at some point, position its future potential, committing the company to a specific model of network/cloud evolution.

There are always pluses and minuses associated with aggressive positioning, of course. The pluses are obvious; you get to define a space and set the reference for all who follow you there…if you’re right. The minuses are all based on what happens if you’re wrong, if the market doesn’t develop as you expect. From a PR perspective, though, being aggressive is almost always the best approach, because nobody takes back any good PR they’ve given you. Rival Cisco proved back in the pre-SOX days that a five-phase plan, announced with the statement that you were already in phase two, always got good ink even if you never delivered on it and it never even proved useful.

The biggest reason for naked aggression by Juniper may have nothing to do with Cisco, and a lot to do with a different kind of competition. In the metro, and even in the data center, the cloud and our evolved notion of services are combining to create a critical partnership between networking and hosting. In any partnership, there always seems to be a senior partner. Will it be a network vendor or a hosting vendor?

Despite the fact that Cisco has server assets, I don’t think they have any intention of being an aggressive player in defining that network/cloud relationship. That means Juniper is the only major player on the network side that has a shot. If it’s not Juniper, then a path opens for a hosting vendor to do the job; perhaps VMware or IBM/Red Hat, maybe even Dell or HPE. Or, just perhaps, a lower-tier network vendor gets anointed. Whoever defines the space will shape the transformation of IT decisively, and almost surely in favor of the technology they represent. Can networking win? Yes, if Juniper or some other smaller network vendor steps up. Otherwise, we can expect to see hosting dominate the partnership and the metro, and the data center dominate the enterprise network going forward.

White boxes and hosted network features are either fixtures of new services or indicators of commoditization. It may well be that 2022 is when we’ll see the market decide between these choices.

The Broadband Explosion Could Create Collateral Impacts

Broadband change is in the wind, literally as well as figuratively. In the figurative sense, it’s clear that telcos and cablecos alike believe that they have no option but to make consumer broadband profitable in itself. For some, such as Verizon, that means literally taking broadband to the sky, with fixed wireless or millimeter-wave technology. AT&T, long a laggard with regard to fiber to the home, is now offering multi-gig service tiers. It’s clear that all of this will drive other changes, but what?

In their most recent quarter, Verizon reported 78 thousand FWA adds, up from 55 thousand last quarter (residential and business in both cases) compared with 55 thousand Fios adds. Yes, Verizon has been deploying Fios for a long time, but the fact that its new wireless millimeter-wave service has passed Fios in incremental deployments is still impressive. It proves that the technology can be valuable as a means of providing high-speed broadband where fiber isn’t quite cheap enough. It won’t bridge the digital divide, but it might at least bridge the digital suburban off-ramp.

AT&T’s decision to push fiber-based broadband to 2 and 5 gig speeds is an admission that it needs to offer premium broadband or risk having someone else steal a lot of customers. AT&T’s wireline footprint is largely overlapped by at least one cable competitor, and relentless advances in DOCSIS mean that cable could be that “someone else”. Not to mention the risk of local competitors in areas where demand density is high, including deals involving partnerships with state/local government.

We’re not going to see gigabit rates from the broadband subsidies now floating about, but it is very likely that even many rural areas will have broadband good enough to support streaming video, and that creates the first of the secondary changes we’re going to talk about.

Cable companies got started by syndicating access to multiple live TV channels at a time when broadband capacity couldn’t deliver streaming live TV in the form we have today. Obviously it now can, and for a growing number of customers. Does this mean that the streaming players will eat the linear live TV model? Yes, but the victory may be short-lived because the networks may eat the streaming players.

What I’ve heard off-the-record from every network, studio, and video content creator is that they’re happy to have streaming syndicators as long as those syndicators simply resell bundled network/content TV and video material and create a multi-channel GUI around it. The old “content is king” story is getting new life as the TV networks in particular realize that they need to brand their own streaming services. Remember that in the recent dispute with YouTube TV, NBCU wanted Google to sell a Peacock service as a bundle rather than license separate channels. I think that’s what everyone wants, and of course that isn’t a huge opportunity for those already in the streaming-multichannel-TV business.

It may not be any opportunity at all, in fact, because there are already players (including, says the rumor, Apple) who see themselves as creating the ultimate video front-end, one that would integrate with every provider of content, live or on demand. Amazon, Google, and Microsoft are all said to be exploring the same option, including cloud storage of “recorded” live material. Roku is also said to be looking into being a universal content front-end. Google, of course, already has YouTube and YouTube TV, and anything they do here would likely be held in reserve until it was clear that their YouTube TV property was under threat.

This video-front-end mission requires APIs that would be used to integrate the content, and that opens another likely change, which is the growth of content-for-syndication players. Today, a small new “network”, a creator of limited specialty content, has a rough time because its material isn’t presented as one choice among a broad set of “what’s on?” options. A syndication player could offer its APIs to anyone, and a new content player could integrate with them. Since there is IMHO zero chance that this new content front-end wouldn’t offer both on-demand and live material, any content could be integrated with the front-end element, creating a kind of “open-Roku” model.
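To make the integration idea concrete, here’s a minimal sketch of what such a syndication interface might look like. Everything here (CatalogEntry, SyndicationFeed, the field names) is a hypothetical illustration, not any real provider’s API.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class CatalogEntry:
    content_id: str
    title: str
    live: bool         # live channel versus on-demand asset
    stream_url: str    # where the front-end would pull the actual stream

class SyndicationFeed:
    """What a small specialty 'network' could publish to any front-end aggregator."""
    def __init__(self, provider: str):
        self.provider = provider
        self.entries: List[CatalogEntry] = []

    def publish(self, entry: CatalogEntry) -> None:
        self.entries.append(entry)

    def catalog(self) -> List[CatalogEntry]:
        # An "open-Roku" front-end would merge many of these feeds into one guide.
        return list(self.entries)
```

The point is that once catalog and stream-location functions sit behind an open API, a niche content creator gets the same guide shelf space as a major network.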

This is a massive shift, of course, which means it will take a lot of time to complete. The near-term initiatives of networks to build their own streaming brands are a clear indicator of where they’d like things to go, and that they’re taking careful steps to move things along. That means maintaining syndication deals with streaming aggregators until their direct streaming relationships demonstrate staying power. We should expect to see more and more content licensing disputes between networks and streaming services, some going beyond the current up-to-the-brink posturing and actually resulting in the loss of some material for a significant period. At some point, that “significant period” will start to mean “forever” for the popular network material.

All this is going to impact the market, but it’s not the end of the impact of better broadband. If we assume, as we should, that urban/suburban services are heading above the gig level in terms of top-tier bandwidth, we have to assume that “residential” broadband is going to offer a major cost advantage versus traditional business services.

The cost per bit of residential broadband has been far lower than the equivalent cost for business broadband, but companies have paid the differential because of the difference in availability and QoS. Today, with more and more of the front-end piece of every business application migrating to the cloud, and more and more of application networking being carried on the Internet, it’s looking questionable whether the availability/quality differentiator for business broadband can hold.

The answer likely lies in just what “gigabit” broadband really means. A packet interface is “clocked” at a data interface rate, meaning that packets are delivered to the user/network interface at a rate that corresponds to the service’s “bandwidth”. Most users who have high-quality broadband and take the time to assess the real speed of their service find that it doesn’t match the clock rate. Deeper congestion, deeper capacity metering, or deeper constriction of capacity at content sources or applications can all reduce the end-to-end delivery rate of broadband. Upstream versus downstream performance can also vary, both in clock speed (asymmetrical services like 100/20 Mbps) and in actual end-to-end delivery rate. These variations won’t typically mean much to consumers, but they could mean a lot to businesses.

Even a big household may be challenged to consume a gigabit connection, streaming video and making video calls. A branch office of an enterprise, with anywhere from a half-dozen to a hundred or so workers, could do so much more easily, particularly if there are “deeper” points of constriction. Feed a dozen gigabit connections into an aggregation point that has a single gigabit trunk going out, and it’s obvious that if the usage of those connections rises, the effective performance of every connection will be less than the clocked value.
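A quick back-of-the-envelope sketch of that aggregation arithmetic; the utilization figures are assumptions for illustration, not measurements of any real network.

```python
def effective_rate_mbps(access_rate_mbps: float, subscribers: int,
                        trunk_rate_mbps: float, avg_utilization: float) -> float:
    """Per-subscriber delivered rate when a shared trunk is the bottleneck."""
    offered_load = subscribers * access_rate_mbps * avg_utilization
    if offered_load <= trunk_rate_mbps:
        return access_rate_mbps  # the trunk isn't the constraint
    # Under congestion, the trunk is shared roughly in proportion to demand.
    return trunk_rate_mbps / (subscribers * avg_utilization)

# A dozen gigabit connections behind a single gigabit trunk:
print(effective_rate_mbps(1000, 12, 1000, 0.05))  # light household use: full 1000 Mbps
print(effective_rate_mbps(1000, 12, 1000, 0.25))  # busy branch offices: ~333 Mbps each
```

The clocked rate survives light usage; it’s the sustained business-hours load that exposes the deeper constriction.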

The obvious question is whether some variant on consumer-level broadband access could be leveraged as a business service. The initial impact of radically improved broadband speeds in the consumer space would be a significant advance in the use of SD-WAN, as opposed to IP VPNs, in branch locations. The limiting factor on this trend would be the deeper constriction in performance just noted. Most SD-WAN adds header overhead, which means that some “throughput” isn’t “goodput”, to use the popular terms. Even where header overhead is minimal, though, it’s possible that consumer broadband won’t deliver much higher end-to-end performance at gig speeds than it does at a tenth of that. Could that encourage a change in service? There are two options.
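Before looking at those options, here’s a rough illustration of the throughput-versus-goodput point. The 60-byte encapsulation overhead is an assumed figure; real SD-WAN overhead varies by implementation.

```python
def goodput_fraction(payload_bytes: int, overhead_bytes: int = 60) -> float:
    """Share of raw throughput that carries useful payload."""
    return payload_bytes / (payload_bytes + overhead_bytes)

for payload in (1400, 512, 128):
    print(f"{payload}-byte payload: {goodput_fraction(payload):.1%} of throughput is goodput")
# Large packets lose only a few percent; chatty small-packet traffic loses far more.
```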

The obvious option would be to establish a “premium” handling policy at those deeper points in the network. Hop off the access broadband network and turn left, and you get consumer Internet. Turn right and you get business broadband. The advantage of this is that it leverages mass-market access technology to lower the cost of business service. The disadvantage is that there are certainly business sites located in areas where business density is too low to make much premium infrastructure profitable.

The second option would be “Internet QoS”, which in debates on net neutrality tends to be called “paid prioritization”. If premium handling were made a broad option in mass-market infrastructure, then it could be used by businesses to support SD-WAN service, and used by consumers where they needed better-than-best-efforts. The advantage of this is clear; we end up with better broadband. The disadvantages are equally clear.

Few doubt that paid prioritization would result in erosion of the standard service. At the very least, we could expect that broadband wouldn’t get “better” unless we paid more to make that happen. Given the OTT industry’s dependence on quality broadband, the legion of startups and VCs that the industry empowers, and the fact that “net neutrality” has been a political and regulatory football, this option looks like a non-starter, at least in the near term.

The biggest barrier to either option, though, is the profits of the operators whose infrastructure would have to be changed. To invest in quality left-turn-or-right business handling of broadband is to support a migration of your customers from an expensive service to a cheaper one. That’s not a formula for success at a time when your profit per bit is already sinking.

We’ve had a period of some stability in the broadband space, with technology evolving rather than revolutionizing. We may be seeing an end to that now, and the shift will create opportunities and risks for both vendors and operators.

Striking a New Electro-Optical Balance in Network-Building…Maybe

For roughly a decade, there’s been a growing debate on the balance between optical technology and electrical technology in networks. The optical vendors, notably Ciena, have (not surprisingly) been weighing in on the topic the most, given that they’re likely beneficiaries of a shift toward optical networks. A recent Light Reading piece talks about Ciena’s view. I do think that there are forces operating to shift the focus of operators more to transport optics, but I think they’re a part of a larger potential architectural shift, and we need to explore the big picture before we draw local conclusions.

Networks have always relied on “trunks” and “nodes”, and trunk technology has typically relied on economy of scale in bandwidth/capacity. Fat pipes are cheaper per unit of capacity than thin ones, so there’s a benefit to aggregation. A big part of this is the fact that the cost of a trunk includes a hefty charge for deployment of the physical media, a cost that’s largely independent of capacity.

With fiber optics and dense wavelength-division multiplexing (DWDM) you can create an optical trunk with very high capacity for a relatively small TCO premium over a lower-capacity pipe. The trick is to aggregate enough traffic to utilize the capacity. If we assumed a static load on the network, I believe the opto-electrical dynamic wouldn’t be under much pressure, but network load is increasing. Fiber has crept into the access network, but not the kind of fiber the article discusses, fiber that needs IP integration. That requirement exists when fiber is deployed deeper, creating a network topology that uses less electrical-layer handling.

Access networks are the on-ramps to the broader network, and access networks terminate in what used to be called “edge offices”, which in turn link to deeper facilities. Overall, there are roughly twenty thousand aggregation sites in the US and perhaps a hundred thousand globally. Consumer broadband has driven up the traffic level in the access network (wireline and wireless), with video being the major contributor. Higher-capacity access connections mean higher capacity is needed to create trunks at the aggregation points. Aggregation at the edge is typically done in electrical devices, and even if optical trunks are used the devices involved are still routers.

If you really want to see IP/optical convergence, you need to look at what happens behind (deeper than) the edge aggregation. There, the big question is the number of trunks, which, if you assume essentially unlimited optical capacity per trunk, depends on how you interconnect the aggregation points. You obviously can’t mesh a hundred thousand aggregation points globally, or even twenty thousand in the US. If operators are truly interested in IP/optical convergence, then they’re postulating more meshing, and a need to transit through deeper aggregation points with lower cost and latency.
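The scale problem is easy to quantify: a full mesh of n sites needs n(n-1)/2 trunks, which grows quadratically. A quick sketch using the site counts cited above:

```python
def full_mesh_trunks(sites: int) -> int:
    # Every site pairs with every other site exactly once.
    return sites * (sites - 1) // 2

print(full_mesh_trunks(20_000))    # ~200 million trunks for the US estimate
print(full_mesh_trunks(100_000))   # ~5 billion trunks globally
```

That’s why any realistic “more meshing” still means partial meshes and transit through deeper aggregation points.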

Where this happens and what devices are involved depends on a lot of things, but perhaps most of all it depends on what services drive the network. I’ve been a consistent fan of “metro networking”, meaning the presumption that services will focus on/in a metro center, and I think that metro is both the first driver of IP/optical convergence and the primary battleground where router vendors have to work to manage it.

In a pure connection-driven future, there is no reason to think that IP/optical convergence couldn’t replace all of the core routing and much or all of metro routing. Traffic at any point is just passing through, so smart handling isn’t required. We could expect to see advances in the use of IP/optical interfaces on content caches, and as the unit cost of optical capacity continued to fall, the migration of cache points deeper into the network.

The counter-force to this is obviously non-connection services. If we have to add service intelligence, we need to couple computing to the network, and that’s more easily done with traditional network devices like switches and routers. If those non-connection services focus on the metro, then we probably see traditional IP devices with optical interfaces (the current model) being deployed from the metro outward, which again perpetuates the current model.

So is the whole of IP/optical convergence then a myth? To a degree, yes, if we assume that the convergence really means that optical devices take on a limited IP mission to make connection networks more efficient. I think that even the article hints at that, with a Ciena quote: “61% [of respondents] defined IP/optical convergence as the streamlining of operations across IP and optical functions. To me, that involves multi-layer intelligent software control and automation.” In other words, what operators want is operational efficiency improvements, not convergence of the network equipment. If that’s the case, then there’s little Ciena or other optical vendors can do to move the ball.

The deeper truth here is that the concept of “IP/optical convergence” isn’t what the debate here is really about. We have it in at least one form, the optical interfaces on electrical devices, already. The deep issue, the real debate and the real competition, is over what happens at the metro level. Why? Because if metro doesn’t introduce service intelligence, then simple optical aggregation spreads out from core, through metro, and closer to the edge. We might see a radical reduction in electrical-level (router) devices. If metro does create significant service intelligence, then electrical-level, data-center-integrated, networking concepts spread toward both edge and core, and simple reconfigurable optical add-drop handling is diminished.

Does Ciena see the risk here? Does it realize that if metro services explode, chances are that networking will tend to resemble data-center interconnect (DCI) more than it will optical aggregation? Does it see things like the metaverse as the big potential upward driver of that risk? That’s one big question.

The other one, of course, is what the router vendors see. Operators have been saying since 2012 (to me, and perhaps to others) that their profit-per-bit numbers were eroding dangerously. You can raise them by raising revenues, lowering costs, or both, but obviously if revenues won’t contribute much then cost reduction has to do all the lifting. That means the kind of simplification focus that the article implies. So not only do the router vendors need credible service revenue boosts for their customers, they need to make them happen quickly in the metro, to defend their incumbency against optical encroachment.

Network vendors have been slow to recognize reality here, or at least the electrical-device vendors have. I’m not completely convinced that Ciena is demonstrating that the optical players are seeing the light either. There’s a race for comprehension here, centered in metro and service networking, and the winner is going to be a big winner indeed.

What Benefits Do Users See in Applying AI to Netops?

Artificial intelligence is, in a sense, like a UFO. Until it lands and submits to inspection, you’re free to assign to it whatever characteristics you find interesting. Network operations staff have only recently been facing an AI “landing”, and so they’re only starting to assign specific characteristics to it, from which they can derive an assessment of value. But they have started, so let’s look at what they’re seeing, or at least hoping for. What are our goals in “AI ops”?

There are a lot of stories about the use of AI in operations centers to respond to failures, and that certainly seems a valid application. AI could provide quick, automatic responses to problems and would also likely be able to anticipate at least some of them. Sure, a sudden capacitor explosion in a router could create a zero-warning outage, but operations people say that most problems take a few minutes to develop. You’d expect that operations would love this sort of thing, but not so much.

The fact is that everyone from end users through carriers to cloud providers says that network change management is their top target, not “fault management” in the sense of recovering from failures. Almost all ops professionals say that their top problem is configuring the network (or even the computing resource pool) to fit the current work requirements, and that the growing complexity of their infrastructure means that it’s all too easy to make a mistake.

That “make a mistake” thinking may explain a lot here. An exploding capacitor in a router isn’t operations’ fault, but a configuration error is a human error that not only hurts the reputation of whoever made it, but also the reputations of their management and even those who planned out the operations practices and selected the tools. “Failure” is bad, but “error” is a lot worse because it can be pinned on so many.

In fact, the perception of AI’s role in fault management may be tainted by this cover-your-you-know-what view. There’s a lot of reluctance to accept a fully automated AI response to a problem, and if you dig into it, you find that it stems from a fear that the operator will be held accountable for the AI decision. What the majority of ops people want is for an AI system to tell them there’s a problem and suggest what it is. The operator would then take over. A smaller number want AI to lay out a solution for human review and commitment. The smallest number want AI to just run with it, perhaps generating a notification, and this approach is usually considered suitable only for “minor” or “routine” issues.
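That spectrum of autonomy could be expressed as a simple policy setting in an AI ops tool. This is an illustrative sketch, not any vendor’s actual feature.

```python
from enum import Enum

class AutonomyLevel(Enum):
    ADVISE = 1        # flag the problem and suggest a likely cause
    RECOMMEND = 2     # lay out a remediation plan for human approval
    AUTO_EXECUTE = 3  # act, then notify (usually reserved for "routine" issues)

def handle_event(severity: str, level: AutonomyLevel) -> str:
    """Decide how far the tool goes on its own, given the configured autonomy."""
    if level is AutonomyLevel.AUTO_EXECUTE and severity == "routine":
        return "apply fix and log a notification"
    if level is AutonomyLevel.RECOMMEND:
        return "queue remediation plan for operator approval"
    return "raise an alert with the suspected cause"
```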

The notion that an operations type would be tarred with the brush of an AI error isn’t as far-fetched as it seems. Non-technical people often see “the computer” or “the network” or “AI” as a co-conspirator with whoever is programming or running it. In my first job as a programmer, a printer error had resulted in a visible check number at the upper right that differed from the MICR-encoded number at the bottom. This caused widespread reconciliation problems, and the internal auditor stormed up to me and shouted “Your computer is making mistakes and you’re covering up for it!”

If configuration management is really the goal, then what specifically do operations people want? Essentially, they’d like to be able to input a change in the same terms in which it was described to them. Generally, what this means is that a service or application has an implicit “goal state”, which is the way infrastructure is bound to the fulfillment of the service/application requirements. They’d like AI to take the goal state and transform it into the commands necessary to achieve it. When I hear them talk, I’m struck by how similar this is to the “declarative” model of DevOps: tell me what’s supposed to be there and I’ll figure out the steps. Normal operations tends to be “imperative”: tell me the steps to take and hope they add up to the goal.
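A toy example of the difference, assuming a purely hypothetical device model: the operator declares the goal state, and the tool, not the human, derives the imperative steps.

```python
current_state = {"vlan10": "down", "vlan20": "up"}
goal_state    = {"vlan10": "up",   "vlan20": "up", "vlan30": "up"}

def reconcile(current: dict, goal: dict) -> list:
    """Derive the imperative steps needed to reach the declared goal state."""
    steps = []
    for item, desired in goal.items():
        if current.get(item) != desired:
            steps.append(f"set {item} {desired}")
    return steps

print(reconcile(current_state, goal_state))
# ['set vlan10 up', 'set vlan30 up']
```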

Another thing that operations types say they want from AI is simplification. Infrastructure is complicated, and that complexity limits the ability of a human specialist to assimilate the data needed for what pilots call “situational awareness”. Think of yourself as captain of a ship; there’s a lot going on and if you try to grasp all the details, you’re lost. You expect subordinates to ingest the stuff under their control and spit out a summary, which you then combine with the reports of others to understand whether you’re sailing or sinking. Operations people think AI could play that role.

The “how” part is a bit vague, but from how they talk about it, I think they want some form of abstraction or intent modeling. There are logical functional divisions in almost every task; a ship has the bridge, the engine room, combat intelligence, and so forth. Networks and data centers have the same thing, though exactly what divisions would be most useful or relevant may vary. Could AI be given what’s essentially a zone of things to watch, a set of policies to interpret what it sees, and a set of “states” that it could then say are prevailing? It would seem that that should be possible.
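Here’s a minimal sketch of that zone/policy/state idea; the metric names and thresholds are invented for illustration.

```python
class Zone:
    """A watched slice of infrastructure that reports one of a few summary states."""
    def __init__(self, name: str, policy: dict):
        self.name = name
        self.policy = policy   # metric -> threshold beyond which the zone is "degraded"

    def state(self, readings: dict) -> str:
        for metric, threshold in self.policy.items():
            if readings.get(metric, 0) > threshold:
                return "degraded"
        return "normal"

wan_edge = Zone("wan-edge", {"link_util": 0.8, "error_rate": 0.01})
print(wan_edge.state({"link_util": 0.92, "error_rate": 0.001}))  # -> "degraded"
```

The captain-of-the-ship point is that the raw readings stay in the zone; only the summarized state goes up the chain.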

The final thing that operations people want is complete, comprehensive journaling of AI activity. What happened, how was it interpreted, what action was recommended or taken, and what then happened? Part of this goes back to the CYA comment I made earlier; operations types who depend on AI have to be able to defend their own role if AI screws up. Part is also due to the fact that without understanding how a “wrong” choice came about, it’s impossible to ensure that the right one is made next time.

It’s surprising how little is said about journaling AI analysis and decisions, even when the capability actually exists. Journals are logs, and logs are among the most valuable tools in operations management, but an AI activity journal isn’t automatic, it has to be created by the developer of the AI system. Even if it is, there has to be a comprehensive document on how to use it, or you can bet it won’t be used. A few operations people wryly commented that they needed an AI tool to analyze their AI journal.
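For what it’s worth, the kind of journal record operations people describe is simple to represent; the field names below are illustrative, not any standard.

```python
import datetime
import json

def journal_entry(observation: str, interpretation: str, action: str,
                  autonomy: str, outcome: str = "pending") -> dict:
    return {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "observation": observation,        # the raw condition detected
        "interpretation": interpretation,  # what the AI decided it meant
        "action": action,                  # what was recommended or executed
        "autonomy": autonomy,              # advise / recommend / auto-execute
        "outcome": outcome,                # filled in once results are known
    }

entry = journal_entry("BGP session flap on edge router 7",
                      "probable config mismatch after last change",
                      "recommend rollback of the change", "recommend")
print(json.dumps(entry, indent=2))
```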

The journaling issue raised what might be a sub-issue, which is the need to understand what’s available for AI to analyze. Most organizations said they had no idea what data was actually available, what its timing was, or how it could be accessed. They had the stuff they used, the things they were familiar with, but they also had the uneasy feeling that if their AI was limited to knowing the very same things the operations people themselves knew, it probably wasn’t being used to full potential. A few very savvy types said that they thought a vendor should provide a kind of information inventory that cataloged all the available information, its formats, conditions of availability, and so forth. Yes, they said, all that was out there, but not in any single convenient place.

This point, an afterthought on the last suggested AI priority, might actually be the key to the whole AI success-or-failure story. Garbage in, garbage out, after all. That may be the reason why single-vendor AI strategies, ones that link AI tools to the vendor’s own products, work best. It may also be the guidepost for how to integrate other vendors and technologies into an AI system. You need that journal decoder to identify and characterize the important stuff, and also some control over what gets journaled in the first place.

Regarding that, I want to call out a point I made several years ago regarding SD-WAN implementations. Networks exist to serve business goals, and business networks in particular have to be able to support the applications that are most important to the business, applications whose execution benefits likely justify a big part of the cost of the network. Session awareness, the ability to capture information on the user-to-application relationships being run, is critical in getting the most out of network security, but also in getting the most out of AI. Enterprises aren’t fully aware of the implications here, but some leaders tell me that knowing whether “critical” sessions are being impacted by a problem, and considering the routing of critical sessions during configuration changes, is likely a key to effective use of AI down the line.

Facing the Future of Tech, or Creating It

What is the future of tech? A lot of Wall Street professionals and a lot of investors are asking that question, given the NASDAQ correction. The problem with using stock prices as a measure of a market is that short-selling behavior can induce a slump just as easily as real market conditions. That obviously doesn’t rule out a real issue with tech, so we need to look at things.

Stuff sells because people want or need it. Consumer technology sells largely based on quality-of-life value, which can be based on fundamentals or on nothing more than standing tall with peers. Business technology sells based on ROI, meaning that if a company gains a benefit from something that meets their rate-of-return expectations, they’re likely to adopt it. Are either of these forces subject to change now?

On the consumer side, there’s not much question that people are whipsawed by the shifts in COVID. When the virus first came along, both people and companies changed behaviors to limit risk, and any behavioral change results in a shift in those tech value propositions. I can’t go out, so I have to rely more on in-home entertainment, so there’s an uptick in streaming services and gaming. My workers have to stay home, so I have to support remote work technology.

But when vaccines and Omicron seemed to drive down the actual risk associated with contracting COVID, people and companies started to shift again. That’s where we are now. Netflix turned in a disappointing quarter, and Peloton said it was shutting down production for a time to respond to lower demand. Neither of these is a surprise, and no rational investor would be spooked by the move. However, that doesn’t mean that those companies’ stocks aren’t less attractive, so some downturn would be expected. The downturn, though, should be balanced by an upturn in stocks that reflected the pre-COVID purchase patterns.

Stock prices are set by the number of people trying to sell versus those trying to buy, and the level of determination of both groups. The “value” of the company figures in only insofar as it impacts this buy/sell balance. Over my time working with and in the market, I’ve seen a shift from value (“fundamentals”) investing to buy/sell-balance (“technical”) investment. It changed how the market works, and that makes me wonder whether we’ve seen something similar in the way technology itself is bought and sold.

The majority of people today don’t have WiFi 6, and we’re already talking about WiFi 7. 5G’s success is assured at one basic level, but the value of the extended capabilities (like network slicing) is still up in the air, yet we’re talking about 6G. If you threaten a technology that’s not even fully adopted with obsolescence, how credible is the investment in either the current or the new generation? Why is this happening?

One reason is the consumerization of technology. Up until the 1990s, there was no consumer data services market. Today, consumer broadband is the largest data market. Up until the 1970s, there was no personal computing technology, and today the total processing power of computers sold to consumers dwarfs that sold to businesses and governments. Ordinary people don’t do ROI calculations on their tech purchases; they buy what they want as long as they can pay for it. And they don’t understand “tech value” in any deeper sense, so they don’t respond to stories and advertising that go deeper. It’s excitement that matters.

Consumerization has also pushed tech toward what I’ll call the unitary purchase model. I buy something. The something I buy isn’t related to my purchase of past somethings, but to the specific interests and information that drive me at the moment. I surely, as a consumer, don’t think about advancing my technology state with a series of incremental investments that only give me my goal at the end of the process. Instant gratification.

These shifts don’t necessarily apply to business, but they end up doing that nevertheless. A corporate buyer is a person first and a worker second. How many times does a buyer have to read about WiFi 7 before they start thinking of it as a business technology as well as a personal one? If they do start, can they find (in the information sources they’ve first seen WiFi 7 discussed) the real value propositions?

Business technology advances as an ecosystem, not as a series of disconnected products. We can’t get the benefits of 5G or edge computing just by waving our hands, or by promoting what could be done with them eventually. We have to make a business case, and for transformational technologies that means building the technology base and the benefit case together. Just having something doesn’t justify it, so what does? What would those applications need to justify their own technologies?

Since the dawn of commercial IT in the 1950s, we’ve seen three ecosystemic waves of IT advance, the last of which ended roughly in the year 2000. We’ve had none since, and is it a surprise that “tech consumerism” took off in the ‘90s? We are missing some ingredients that drive a fundamental technology shift of the kind that produced one of those IT waves, waves that drove tech spending up almost 50% faster than GDP growth.

What are we missing? Three things, I think.

Thing One is a holistic sense of the future. What are we actually moving toward in our next tech wave? How will it fundamentally change our lives and our businesses? Without a sense of this, it’s going to be difficult for enterprises to make a business case to get to that future.

Thing Two is self-valuing steps forward. We can’t defer the benefits of a technology revolution without deferring the costs, which means nobody can make money until some benefits arrive. The steps to the future don’t have to justify the future, but they do need to justify the steps themselves.

Thing Three is buyer education. Workers and consumers are the same people but different roles. We can’t let ourselves fall into consumerism-based marketing when we’re trying to sell business technology. The next step forward in the cloud, or the network, or the end-points, can’t be promoted by saying it’s cool and your friends will be jealous if you don’t buy it. We have to show how that holistic future is created and how the steps work.

Most of these problems can be solved by vendors, because in truth vendors had a big role in creating them. It wasn’t their fault alone; Sarbanes-Oxley and the fallout of the tech crash in 1999 tried to rein in speculative valuations by requiring a link between stock price and fundamental growth. What that ended up doing was encouraging companies to focus on the coming quarter or the current year, and made it harder to develop technologies that had legs.

The current market conditions are likely a blip, though market slumps have a way of feeding themselves. Vendors in the tech space need to decide now whether the conditions that leave us open to this sort of thing are going to be accepted for the future. They don’t have to be, but the status quo is what we’ll get if we don’t see some progress in ecosystemic, strategic thinking. We’ve gamed the status quo about as much as we can expect to, so if we want a better future for both us-as-workers and us-personally, we need to step up and do the right thing.

The Street View of Cloud and Network Unification

No matter how complex a technology, you can always reduce it to dollars and cents, which is what Wall Street tends to do. Note, though, that while “cents” and “sense” have the same sound, focus on the former doesn’t always involve the latter. You can’t discount Street insight, but you can’t depend on it entirely. Thus, I feel free to add my own modest view to the mix, and particularly when we’re talking about metro and edge, topics I think are critical. Before I start, since I’m talking here about stocks in part, I want to make the point that I do not hold any stock in any network vendor.

Metro and edge are a fusion of network and cloud technologies, and the Street recognizes that. Generally, they see cloud technology hybridizing and edge computing developing, and with the spread of the data center they see network implications. To understand what they see, we have to look at these two themes, whether they’re viewed accurately, and what it means if they are.

Practically all enterprise cloud computing is and has been hybrid cloud, and this is something that’s been missed both by most of the Street and much of the media. What’s changing is that the cloud front-end part of applications is dominating new development and determining how businesses project themselves to customers, partners, and even workers. The cloud piece, once a small tail on the legacy application dog, is now doing more work and less wagging. As that happens, it unleashes some important issues.

Competition is one. Any time a market is expanding, and cloud-front-end computing is surely doing that, there’s a race to grab the incremental bucks. We can see that in the growing number of web services that cloud providers offer, and the fact that those services have started to creep into new areas like IoT and artificial intelligence or machine learning (AI/ML). Both IoT and AI/ML address classes of applications that are more tightly coupled to the real world, meaning “real-time applications” that are inherently latency sensitive. You can see how this drives edge computing and also networking.

Another issue is management. Data center applications have to be managed, and so do their resources, but the task is relatively straightforward and well understood. When you start to build applications designed to be deployed on resource pools, to heal themselves and scale themselves, you add a dynamism that data centers rarely saw. Since more and more of these features are found in the cloud front-end piece of applications, the growth in that space has shifted management focus to the cloud, to the point where cloud services providing orchestration and management have exploded. If application agility lives in and is controlled by the cloud, then the cloud becomes the senior partner in the application.

Hybrid cloud and edge computing are, or can be, linked. Hybrid cloud says that applications live in a compute domain that can live in the data center and in a public cloud. Edge computing says that some applications need to live close to the activities they support. The “edge” might be in a user facility that’s close to workers or processes, or in a cloud that’s hosted in the user’s metro area. Either way, edge computing adds resources to what hybrid cloud hosts on.

The Street sees all of this, sadly, through the lens of “moving to the cloud”, which isn’t what’s happening. They tend to break things down by which as-a-service categories the cloud providers sell, saying for example that almost two-thirds of cloud services are IaaS. True, but almost 100% of enterprises employ value-added web services to augment basic hosting, even today. Most applications running on IaaS have never run in the data center or anywhere but the cloud. That’s the core reason why it’s so hard for others to break into the Top Four among cloud providers; the initial investment in those tools is too much for them, and the use of cloud provider tools tends to lock users into a particular cloud.

In hybrid cloud, the Street recognizes the value of the model but misunderstands what it means. Fortunately that doesn’t erase their emphasis on hybrid cloud as a symbiotic technology in edge computing.

The Street’s view of edge computing is less valuable. Like hybrid cloud, their edge view is based on a misperception; that content hosting leads to edge computing. We have CDNs today, of course, but CDNs were from the first a means for content providers to distribute video without the cost and glitches associated with pushing it from central points to anywhere on the Internet. Latency is less an issue than consistency; you can cache video to make up for some latency jitter, but on the average you have to deliver material at the intrinsic bit rate of the material.

In fact, almost everything the Street sees as a driver of edge computing (security, data management, cost management for network delivery, serverless computing, gaming, and even IoT) is really not a driver at all. Some aren’t even plausible exploiters of edge services that had somehow been justified by something else. The fact is that the Street has no idea what will drive edge computing, which is bad because it means that companies have no edge position they can take that will be useful in promoting their stock in the near term and capable of sustained profit generation in the longer term.

Edge computing will be driven by IoT, but only to the extent that we take “the edge” to mean “on the customer premises, proximate to the point of activity.” That’s where we have it today, at the enterprise level. The Street’s edge focus on CDNs means it’s focusing on OTT use of the edge, not on enterprise use, and yet most of its future drivers would apply to the enterprise. Thus, the Street is talking out of one side of its mouth about what’s really a cloud application (edge-as-a-cloud), and out of the other about “edge as a new placement of customer-owned hosting”.

IoT today uses local “edge” devices for process control. Even smart homes often have “hubs” that are really a form of edge computing. Edge hosting expands the places you can put things, just as cloud hosting does. That’s why the link the Street suggests between hybrid cloud and edge computing is more credible than their view of edge computing overall. Ownership of the edge resources is less important than placement, and placement benefits in latency control suggest that on-premises “edge” owned by the customer is the best approach.

The current local edge model does benefit the cloud provider, because the cloud provider has (in “hybrid cloud”) already addressed the need to define a distributed hosting model that allocates resources in different places based on different cost and performance constraints. Remember the orchestration and management impact of hybrid cloud discussed above? The major public cloud providers offer the ability to locally edge-host cloud-integrated application components, making a local edge an extension of the cloud.

Via the network, of course, and that’s a topic where I think the Street has fallen significantly short on insights. If you have a collection of resource pools, from data center to cloud and then to edge, a collection of resilient, scalable, components, and a collection of potential users (customers, partners, and employees), you have a prescription for a very agile network. You have, in fact, a demand for a virtual network. Add in some things the Street does recognize, which are the need for an agile data center networking strategy, a need for enhanced security, and a need for operational efficiency, and you have a recipe for a completely new network model.

Completely new models raise the potential for completely new winners, and in this situation the Street doesn’t even see any network device vendor candidates to speak of, other than security specialists. At the same time, the trade media (or at least some of it) is picking up on the fact that Juniper is making a move. With the acquisitions of 128 Technology, Mist, and Apstra, Juniper has covered all the bases of the new virtual-agile network. And yet the Street seems to assign it the least upside potential in the whole space, lower than rival Cisco or even F5. There seem to be two reasons why Juniper’s not getting Street cred: the Street doesn’t understand that all SD-WANs aren’t created equal (or even nearly so), and the Street, like the trade media, is more focused on get-in-Cisco’s-face positioning than on product technology improvements. Who doesn’t like a good brawl?

The challenge for the network device players, even Juniper, is that virtual networking is broadly viewed as an above-the-network technology. Juniper does integrate 128T’s Session Smart Router concept into other Juniper products, but many users and most Street analysts miss the fact that SSR integration could make network devices preferred virtual-network hosts. Without that, players like VMware have a credible shot at the space, and if you’re looking to define upside potential (as the Street surely is) then how do you miss this one?

Not all the Street did; Juniper got an upgrade recently, and its stock has outperformed Cisco’s in the 3-, 6-, and 12-month comparisons. There’s no breakout though, and the technology shifts suggest that there could be. Whatever the Street, buyers, or even network vendors themselves see, there is going to be a major change in networking down the line, and not that far down either. We can expect to see the impact of these shifts in 2022.

Paths to the Edge: Metaverse Models or Metro Mesh?

Most experts I talk with, either on the enterprise side or among their vendors/operators, have been telling me for almost a decade that they truly believe that edge computing will be driven by some flavor of “augmented” or “virtual” reality. The “metaverse” concept we’re hearing so much about today is (IMHO) simply a variation on that theme. Several variations, in fact, and it may be the way those variations manage to create harmony that decides just where edge computing and even metro networking end up going. Or, it may be that we see a completely different set of forces start us along the path to edge and metro. Or both.

I did my first model of edge deployment back in 2013, calling it “carrier cloud” because I believed then (and still believe) that operators are the ones who have the real estate, the network topology role, and the low ROI tolerance needed to optimally deploy edge technology. I cited five drivers (which I noted in my blog yesterday) for carrier cloud. Three of them (5G, IoT, and what I called “contextual” applications; more on that below) are still broadly recognized as edge drivers, but I want to reframe that early work into metaverse terms.

To me, a metaverse is a reality model, something that represents either the real world or an alternative to it. 5G isn’t a metaverse-modeled reality, it’s a network technology, but its role in carrier cloud or metaverse was really nothing more than a spark plug to ignite some early deployment. The real applications of edge computing depend on some variation on the reality-model theme.

In my original modeling, “contextual services” were the primary opportunity driver, with IoT second. I submit that both are reality models, and thus they’re a good place to start our discussion.

Contextual services are services designed to augment ordinary user perceptive reality with a parallel “information reality”. Walking down the street, we might see a building in the next block—that’s perceptive reality. Information reality might tell us, via an augmented-reality-glasses overlay, that the building is the XYZ Widget Shop, and that a Widget we’ve been researching is on sale there. Yes, we could get this information today by doing some web searches, but we’d have to be thinking of Widgets to think to do it, which we may not be. Contextual services would take a stimulus in the real world, like what we see, and correlate it with stuff we’ve expressed interest in or should be made aware of. Stimulus plus context equals augmented reality.

Contextual services are the core of the first of the three metaverse models I mentioned yesterday. This metaverse (like another we’ll get to) is centered on us, and it accepts stimuli from sources like what we see (based on where we are), what we hear, what communications requests are being made, and so forth. It also has a “cache” of interests based on past stimulus, things we’ve done or asked or researched, etc. The model parses the cache when a stimulus comes along and generates a response in the form of an “augmentation” of reality, like overlay text in AR/VR goggles.

The model that’s obviously related to the contextual metaverse is the “social metaverse”, the stuff Meta wants to create. The primary difference in the social metaverse is in the “augmentation” piece. The contextual metaverse assumes the information reality overlays on the real world. The social metaverse assumes that there is an alternative universe created, and that alternative universe is what is perceived by someone who is “inhabiting” the social metaverse. Because the social metaverse is social, it’s important that this alternative universe be presented as real to all inhabitants, and that all inhabitants and their behaviors are visible there, to all who are “local”.

IoT is a different model, what I’ll call a “process metaverse”. In a process metaverse, the goal is to create a digital twin of a process, and use that twin to gain insight into and control over the real-world process it represents. A process metaverse isn’t centered on us, but on the process. Information augmentation isn’t integrated into real-world sensory channels, but fed into control channels to do something.

It’s easy to see that all these “metaverse models” are, or could be, technical implementations of a common metaverse software architecture. It’s a model-driven architecture, where “events” are passed around through “objects” that represent something, and in the passing they trigger “actions” that can influence the “perception space” of whatever the metaverse centers on.
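A bare-bones sketch of that pattern, purely as an architectural illustration (nothing here represents any actual metaverse implementation): events are routed to objects in the model, and matching handlers produce actions that feed the perception space.

```python
class ModelObject:
    """Something the reality model represents: a user, a machine, a place."""
    def __init__(self, name: str):
        self.name = name
        self.handlers = {}   # event type -> action function

    def on(self, event_type, action):
        self.handlers[event_type] = action

    def handle(self, event_type, payload):
        action = self.handlers.get(event_type)
        return action(self.name, payload) if action else None

# Contextual-metaverse flavor: a location event triggers an AR overlay action.
user = ModelObject("user-42")
user.on("location", lambda who, place: f"overlay for {who}: info about {place}")
print(user.handle("location", "XYZ Widget Shop"))
```

Swap the overlay action for a control-channel command and the same skeleton serves the process-metaverse (digital-twin) case.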

My hope with a metaverse-of-things approach is to create a single software framework that could be applied to all these metaverse missions, reducing the time required to build one and the overall cost. Such an approach could also allow potential edge providers to create an “edge platform as a service” that would optimize the hosting of edge applications and further enhance return on investment. It doesn’t guarantee that we’d build out an edge computing model, but it would make it more financially reasonable to do so.

What happens without this? Is there another way of getting to edge computing, or at least getting closer? One possibility is to look forward at what edge computing would look like, not at a single location but collectively. As I noted in a past blog, edge computing is really metro-centric computing, and if we had it, then applications like the metaverse would encourage the meshing of metro networks to create regional, national, and global networks. Could we see an evolution of networking create the metro-mesh?

The public cloud providers are already starting to offer network services created within their own cloud networks, as a means of uniting applications spread across wide geographies. Buy cloud front-end application services and you get cloud networking to backhaul to your data centers. If this sort of thing catches on, it would induce cloud providers to take on more network missions, and the threat to operator VPNs might induce operators to deploy metro-centric networking, then evolve to a metro mesh architecture.

A metro-mesh model has lower latency because it’s calculated to reduce transit hops, replacing traditional router cores with more direct fiber paths. We already have a few operators taking steps in that direction, and cloud provider competition for network services might be enough to multiply operator interest in that model. If operators aren’t motivated to creep into carrier cloud by adding metro hosting today, might they creep in by starting with the metro-centric and metro-mesh architectures? Perhaps.

One thing seems certain to me. We are beginning to see a revolution in terms of cloud and network missions, and at the same time a revolution in the competitive dynamic of the combined cloud/network space. We won’t see cloud providers erasing network operators; the access network isn’t interesting to them and has too low an ROI to likely become a target of competition. We might see the cloud providers eating a bigger piece of business networking, meaning VPN services, and if that happens, could it induce operators to take a shot at cloud computing in response? Perhaps.

Building Bridges, Building Edges

Let’s say that you wanted to justify building a bridge between Lubec, Maine and Canada’s Westport, Nova Scotia, crossing the Bay of Fundy. It would be fair to say that such a bridge would be enabling for those who wanted to drive between the points quickly. Our new bridge would be about 50 miles long. The current road route would be about 500 miles. Think of all the driving time that bridge would save (probably ten hours or more)! And if there were a bridge, couldn’t you expect people to walk on it? An experienced walker could do that hike in a day…maybe. So we could use the driving and walking benefits of the Lubec-Westport Bridge to justify its surely-enormous cost, right?

The answer to that has a lot to do with how we justify things like 5G and edge computing.

If we had such a bridge, perhaps some would walk on it. Perhaps many would drive on it, but chances are that the number who would use the bridge would take centuries to justify its cost. Thus, there are things the bridge, or a technology, could be used for that would never justify building it. Exploiting a resource is one thing; financing it is another.

OK, let’s take this a little further. Grand Manan Island is maybe 15 miles along the path of our bridge. Suppose we build a smaller bridge just that far? It could be a step along the way, right? Yes, but if there aren’t a lot of people trying to drive or walk between Lubec and Westport, there are surely far fewer trying to drive or walk between Lubec and Grand Manan. The only value of that little step would be the value it presented in taking the next, longer one. Even adding additional (non-existent) islands to the route wouldn’t help; no additional island would likely contribute much to the business case, and if any step were deemed unjustified, the whole value proposition would be shot.

By this time, I expect that you can see what I (at least) believe the connections are between my bridge analogy and things like 5G or edge computing. We have no problem deciding what we could do with either, but we’re conspicuously short of things that could justify them.

When I first ran my models on “carrier cloud”, I identified six theoretical drivers: virtual CPE and NFV, advertising and content, 5G, public cloud services offered by operators, contextual services, and IoT. My model suggested that NFV and vCPE had minimal potential. It also said that 5G would have its greatest potential impact by 2022, and that in the longer term, IoT was the decisive driver. In other words, my model said that we could visualize six islands between Maine and Nova Scotia, and that each of them could (if properly exploited) contribute to an investment that would then be exploited by subsequent service steps.

What happens to this picture of “successive evolution” if one or more steps doesn’t play its role? In the case of edge computing (carrier cloud), the answer is that the operators never make any major edge investment. The first of the possible drivers, NFV, never really had much of a chance except in the context of 5G, and operators have been increasingly looking to the cloud providers for hosting 5G virtual functions. Operators never deployed their own video ad and caching services to any extent, and that rounds out all the early edge applications.

Contextual services and IoT are related; the former relies on the latter to get real-world information on a service user, and presents AR/VR or other augmented information forms based on the combination of data available. Because of their early misjudging of the IoT opportunity (they wanted to charge “things” for cell service when they ran out of humans to connect), operators have done nothing useful with the last two of our drivers so far, and time there is running out.

Neither 5G nor edge computing will fail because of operator misperceptions. 5G is the technical successor to 4G, which was the successor to 3G. There was never a risk it wouldn’t deploy, just a risk that it wouldn’t generate any incremental revenue. Edge computing, on the other hand, could be at risk, and with it a whole lot of vendor revenues.

My model said a decade ago that were operators to deploy edge computing (carrier cloud) at the scale all the drivers could justify, it would be the largest single source of new data center deployments in the history of IT, with the potential for one hundred thousand new edge hosting points. A big part of that number comes from “competitive overbuild”, where multiple operators locate edge hosting sites in the same area because they’re competing for the services those sites support. If, as it appears will be the case, the public cloud providers end up deploying all the edge computing, there are fewer competitors, fewer data centers, and less vendor revenue to fill those centers with network and computing gear.
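To illustrate the overbuild arithmetic (not to reproduce my model, whose inputs are far more detailed), here is a toy calculation with purely hypothetical numbers. The point is simply that multiplying per-area deployments by the number of competitors is what inflates the operator-led figure, and removing that multiplier is what shrinks it.

```python
# Hypothetical illustration of "competitive overbuild"; none of these counts
# are the model's actual inputs or outputs.

metro_areas = 5000            # assumed addressable metro/edge service areas
operators_per_area = 4        # competing operators, each deploying an edge site
cloud_providers_per_area = 1  # a shared cloud-provider footprint per area

operator_scenario = metro_areas * operators_per_area       # 20,000 sites
cloud_scenario = metro_areas * cloud_providers_per_area    #  5,000 sites

print(f"operator-led edge sites: {operator_scenario:,}")
print(f"cloud-led edge sites:    {cloud_scenario:,}")
```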

This is why vendors should be working hard to devise a strategy for edge computing that operators could buy into. That strategy would obviously have to be centered in the metro zone, because metro opportunity density is high enough to justify deployments, and the metro is close enough to the user to still present attractive latency to applications.

There are two credible edge opportunities, IoT and metaverse hosting, and both have that perfect combination of great potential and technical and business hurdles that seem to characterize tech these days. There are some things that could be done to promote both these applications in a general way, and as I noted in earlier blogs, we could define a “metaverse of things” model that could further harness common elements. And, of course, we could let nature take its course on both. Which option would offer the best, the fastest?

I’m skeptical that IoT applications will drive edge hosting spontaneously, because the kind of IoT that is most clearly an edge hosting application would also require the highest “first cost” of deployment. Enterprises already know that their own IoT needs tend to be tied to fixed locations, like warehouses or factories. This kind of IoT cries out for local, customer-owned, edge computing, because hosting further from the actual IoT elements only increases latency and magnifies the risk of loss of service.

“Roaming IoT”, characterized by the transportation vertical’s use of IoT on shipments and on vehicles, ships, aircraft, and trains, is different because it doesn’t have a fixed point at which to place an edge, and in fact is necessarily highly mobile. I’ve worked with this vertical for decades, and I can see IoT edge hosting options, but they would be more likely to exploit edge computing than justify it, particularly because you’d need edge resources through the entire scope of movement of your IoT elements.

Metaverse hosting is in a sense the opposite; there are strong reasons to say that it depends on edge computing, but a lot of fuzziness on the issue of just how it makes money or even works at the technical level. If we presumed that a “metaverse” was a social-network foundation, then social-network providers (like Meta) would surely tune it to fit what they’d be prepared to deploy. The opportunity in edge computing depends on the presumption that there would be a lot of metaverses to host, making the business of hosting them a good one.

Given that one headline this week was that Walmart plans to enter the metaverse, you’d think that we were already on the verge of a metaverse explosion. Not so; we’re on the verge of another “label-something-to-create-click-bait” wave. What Walmart is actually contemplating is creating a cryptocurrency and NFTs, neither of which can be said to mandate edge computing, and which are in fact more likely aligned with the Web3 stuff. As I noted in a prior blog, Web3 is also mostly hype, but it does suggest that some form of digital wallet and some mechanism for NFT administration could be overlaid on current Internet technology, particularly by players who take payments.

We’ve had credit-card terminals for half a century or more and somehow managed to support them without edge computing. Adding security through blockchain is a worthy goal, but it doesn’t require that we host anything at or near the edge, because credit card and other retail transactions are done at human speed and can tolerate latencies measured in single-digit seconds.

I think that the metaverse may well be the key to edge success, but only if it develops across its full range of potential applications. It’s too early to say whether that will happen, but I’ll blog on the range of “potential applications” to set the stage, and watch for indications that multiple options are moving to realization. If we see that, then edge computing will become a reality, likely even in 2022.

How’s the “Everything is Software” Trend Going?

Software, so they say, is taking over the world. I actually agree with this, and with a related premise that “hardware” and “network” companies have to not only think like software companies, but actually become one. These points raise the obvious question of how well everyone is doing, and fortunately there’s some Street research that offers a clue. Wall Street is always interested in how companies will be doing in the future, for obvious reasons, and digesting the Street view gives us a chance to rate at least the various types of companies with respect to their “softwareness”. Of course, I’ll add in my own thoughts where appropriate.

We can divide tech companies into three loose categories—software companies, computer/hardware companies, and network companies. If we looked at these groups simply in terms of how successful they were in leveraging the software space, we’d expect to see they fall into the category order I just listed. The first thing I find interesting in Street research is that they don’t, in one important case.

Yes, software companies tend to be more likely to be rated as successfully exploiting software, but the computer/hardware companies are rated below networking companies in that regard. That same ranking can be seen in how the Street says it expects companies to perform in software, relative to their stated plans. Network vendors, then, are seen as more likely to exceed software expectations than computer/hardware vendors.

The Street is better at recognizing symptoms than at offering a proper diagnosis. I think the primary problem with computer/hardware vendors is that hardware is supposed to run software, meaning that both the Street and enterprises expect that a computer vendor is neutral with respect to what’s being run. If they offer “software” at all, it may be simply a matter of convenience rather than something that they’re promoting to facilitate differentiation or adding value.

Viewed in this light, the decision by Dell to let VMware go might make a lot of sense. If “software” is truly a generic layer of value on top of hardware plumbing, then linking VMware to Dell would likely risk contaminating VMware’s value story in association with non-Dell hardware. Interestingly, Dell has fewer Street views that their software business will simply match their plans; most think it will either beat or fall short. Competitor HPE is seen as having a much greater chance of “on-target” software performance, mostly because the Street doesn’t see any changes that generate a big upside surprise for HPE.

Networking, obviously, is a different situation. For most of the history of network devices, software and hardware were bundled. If you got a Cisco router, you got some flavor of IOS, and if you got Juniper equipment you got Junos. Today, network vendors are “disaggregating”, meaning they are breaking up the hardware/software bundle. That lets them charge for software, of course, and the shift in the paradigm opens a new software opportunity that’s reflected in Street assessments of their software potential.

Network vendors also typically offer management software, and increasingly security software too. Since security software is at least among the top software opportunities, if not the top, that gives network vendors greater software potential and upside versus plans.

The downside, for network vendors, is that while the “disaggregation” theoretically opens up additional revenues, the opportunity isn’t open-ended because it’s unlikely that an enterprise would buy a vendor router and then not run their associated network operating system. In my own surveys, I don’t encounter any enterprises who report doing that. Same for management software; it’s almost always tied to the hardware choice. Security software, and even security devices, are more likely to cross vendor lines in procurement, and generally the Street likes the software fortunes of security-focused network vendors better than it does the software opportunities for traditional network vendors, even if they offer security products.

One thing this suggests is that the notion of “disaggregation” in hardware and software terms isn’t an automatic guarantee of lofty software numbers for the network vendors. Generally, network vendors are seen as having a slightly larger upside versus plans in the software space, but on the average about a third of Street analysis suggests a downside. That contrasts to the software space, where less than a fifth of analysis shows that, and the computer/hardware space where almost half of analysis shows a downside risk.

Another interesting insight from the Street view of security is that everyone is confused about it, from the Street to the vendors to the enterprise buyers. The Street recognizes somewhere between five and ten classes of security products. Enterprises report having somewhere between three and six security classes in place, and vendors are all over the place in how they position their stuff. The fact that security products are more prone to vendor crossover, where a buyer gets a security product from a non-incumbent vendor, illustrates the complexity of the space too.

It’s always interesting, and challenging, to relate Street data to my own surveys, modeling, and assessments. This is easiest to consider in the specific case of SASE, which is a “new product category” and thus gets a lot of ink. The Street is split about 50:50 in whether they see SASE and SD-WAN as being related in any way, and where they see a connection they see SASE as being the sum of SD-WAN and security. That view favors vendor presentations of SASE, which tend to try to protect current security incumbencies and products. My view is that SASE has a critical foundation in a proper implementation of SD-WAN, something that virtually no SD-WAN vendor has actually accomplished.

That this leads to even more confusion is obvious. There is absolutely no correlation between the Street projections for who might be a winner in the SASE space, my own data on who enterprises think are winners, and my views on who actually has the best product set. The Street seems to be valuing incumbency in either the security space, the SD-WAN space, or both over any consideration of the actual capabilities of the product set.

So where are we with software opportunity for non-software companies? My view is that both network vendors and computer/hardware vendors have under-realized their potential in software, largely because their commitment to it has been superficial. Vendors rarely have a software strategy; they have more of a software dream. Dream fulfillment is always sketchy, and that’s particularly true when enterprises tell me they’re crying out for rational strategic positioning from their vendors. If vendors actually had a strong software plan, backed up by a strong positioning, they would do considerably better. As it is, lack of both ingredients is encouraging buyers to stay the course, favoring incumbents.

This is all surprising to me, given that we’re actually facing the most potential for technology and vendor shifts in at least 20 years. The bad news is that vendors have been blowing kisses at the software opportunity rather than actually trying to maximize it. That’s left most of them far short of where software could take them. The good news is that there’s time to fix this, particularly for network vendors, and the prize for doing so could be very significant.

Are Operators Considering a New Millimeter-Wave Option?

The more operators are confined to being access players, the more they have to worry about the cost of supporting connections. We all have to worry about that, in fact, because the division of the Internet ecosystem into plumbing and gold-plated toilet seats (so to speak) means that there’s a risk that the value we see in the latter will be compromised by failures in the former. Endless demand for capacity can’t be fulfilled at zero profit, so we can expect broadband expansion (geographically and capacity-wise) to be limited by cost of deployment.

Fiber to the home isn’t going to work everywhere. Fiber pass costs have fallen, but they’re still at least triple the pass cost of CATV, and that isn’t going to work everywhere either. The problem with any sort of physical media is the need to acquire right of way and trench the media, then maintain it. We know that urban and the denser suburbs can be served profitably with fiber or CATV, but deeper suburbs and rural areas simply don’t have the demand density needed. One thing that would help would be a strategy that didn’t impose pass costs, in the current sense, at all.

5G millimeter-wave technology, when used in conjunction with fiber-to-the-node (FTTN), offers gigabit speeds and the potential for addressing at least the deep-suburban piece of the digital divide. Stick a node in a suburban area and you can expect to offer high-quality broadband for a mile around it. There’s no need to prep each home or trench media; you just send your new customers an antenna and instructions. The problem is that mm-wave doesn’t penetrate obstructions well, and there have been reports that even trees will create a barrier to service. Some operators have told me that they’re looking at ways to make 5G/FTTN work better, and maybe even reach its potential.

In typical mm-wave deployments, you stick a node and transceiver in a hopefully high location, often an existing cell tower, and with that serve an area roughly a mile in radius. According to operators, getting the transceiver antenna high is helpful because if the antenna is well above nearby trees, it’s only the trees in the yards of customers that are likely to pose a barrier to the service. The problem is that those trees are barrier enough.

Let’s say we can get our transceiver antenna a hundred feet in the air. If you work through the geometry, you find that at a mile range, the line of sight would be at an angle of 1.085 degrees to the horizontal. A twenty-foot tree at the back end of a lot, say 50 feet from the home antenna, would cover an angle of over 20 degrees, which means that it would be in the path of our millimeter waves. To get clear line of sight above that tree, you’d need a tower a couple thousand feet high.
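For anyone who wants to check that arithmetic, here is the same geometry as a short calculation. It assumes flat terrain and a home antenna roughly at the height of the tree’s base, which is the simplification the numbers above also rely on.

```python
# Line-of-sight geometry for the mm-wave example above.
import math

RANGE_FT = 5280.0      # one mile from tower to home antenna
TOWER_FT = 100.0       # transceiver antenna height
TREE_FT = 20.0         # tree height near the home
TREE_DIST_FT = 50.0    # tree distance from the home antenna

# Angle of the sight line from the home antenna up to the tower
sight_angle = math.degrees(math.atan(TOWER_FT / RANGE_FT))    # ~1.09 degrees

# Angle the tree subtends from the home antenna
tree_angle = math.degrees(math.atan(TREE_FT / TREE_DIST_FT))  # ~21.8 degrees

# Tower height needed for the sight line to clear the treetop
required_tower_ft = TREE_FT / TREE_DIST_FT * RANGE_FT          # ~2,112 feet

print(f"sight-line angle to tower:      {sight_angle:.2f} degrees")
print(f"angle subtended by the tree:    {tree_angle:.1f} degrees")
print(f"tower height to clear the tree: {required_tower_ft:,.0f} feet")
```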

My operator friends tell me that they’ve determined that it would be difficult to make this sort of 5G/FTTN work in wooded areas unless there was a natural high point of considerable elevation. However, there might be another model that could work well. That model could be called “fiber-to-the-intersection” (FTTI).

Look at a typical crossroads, of which there are millions worldwide. You can typically see quite a distance down both streets, in both directions. There are usually trees, but they’re not usually closely spaced, even in suburban areas. The buildings tend to have a fairly standard setback, too, so they line up well. Imagine a millimeter-wave antenna at one of these intersections; it would have a pretty clear line of sight to structures along the street in all directions.

You may be looking to empower users and businesses with broadband, but what you’re really doing in any rational broadband strategy is empowering buildings, and buildings are most likely strung along roads/streets. Focusing 5G/FTTN on FTTI missions would make sense, then, in multiple ways.

Another point operators made was that placing a millimeter-wave node at the best “geographic” point, without regard to existing rights of way, could well create difficulties feeding the node with fiber. There are always rights of way along transportation paths, but rarely across people’s yards or fields. Without a feed for fiber and nodal power, millimeter-wave is about as useful as an unplugged microwave oven.

There are downsides to FTTI, of course. On the technical side, operators say that the number of customers you could expect a given node to support is lower, because most streets/roads don’t run straight for a full mile, and curves would introduce barriers, particularly for antennas that can be mounted no higher than the top of a pole. However, there are practical limits caused by terrain and foliage in any millimeter-wave approach, and it’s not clear that FTTI would be worse. In fact, as I’ve noted, operators seem to think it could be better.

There’s also a political issue. A millimeter-wave node stuck on a cell tower isn’t an in-your-face installation. Adding one to an intersection, a place residents and workers drive through daily, invites pushback. Local government intervention can be time-consuming and costly for operators, and if specific legislation is involved, there’s always the risk that a change in administration (at the local, state, or federal level) could swing the rules against any accommodations previously negotiated.

The problem, of course, is whether there’s an alternative. Estimates of just how much capacity a home or business needs to be considered “broadband empowered” vary considerably. Most operators think that 100 Mbps download and 50 Mbps upload would be a reasonable goal. Neither copper loop nor satellite technology can currently meet that standard. 5G cellular, millimeter-wave, and fiber (at least to the curb) are all suitable, under at least some situations.

Some operators (including, obviously, most cable providers) see a combination of fiber and CATV cable as the answer. After all, we deliver broadband and video to millions of locations using that approach. The problem, as even some cable operators will admit, is that it’s becoming more and more difficult to deploy new CATV plant as the demand density of the unserved and underserved drops, which it does as the most accessible pockets of demand are picked off.

Any physical-media approach to broadband is limited where demand density is low. That means that one of the 5G models (cellular or millimeter-wave) would be a preferred addition to the current model of CATV and fiber. The FTTI interest I’m hearing about represents an attempt by millimeter-wave advocates to deal with the barriers to deployment of their favorite approach.

Most of the operators, including the FTTI and millimeter-wave advocates, would admit that broader 5G cellular usage in home broadband would likely be a better approach. One reason there’s considerable operator interest in 5G to start with isn’t that you could give a mobile user a couple hundred megabits per second, but that you might be able to give that capacity to a home user. Samsung’s recent achievement of 8Gbps 5G delivery using massive MIMO doesn’t mean that 8Gbps smartphone services are likely to be profitable, but that it’s possible to support a higher total bandwidth at a 5G cell site, enabling that site to deliver home broadband as well as cellular 5G service.

Operators tell me that a pure cellular-5G model to support both home broadband and mobile services isn’t efficient in higher demand density areas, and that millimeter-wave 5G isn’t effective in very low density areas. It looks like operator planners are jiggling their strategies to find the best way to use millimeter wave, to minimize any empowerment gaps they face, and to keep broadband improving and profitable at the same time.