Building Bridges, Building Edges

Let’s say that you wanted to justify building a bridge between Lubec, Maine and Canada’s Westport, Nova Scotia, crossing the Bay of Fundy. It would be fair to say that such a bridge would enable people to drive between the two points quickly. Our new bridge would be about 50 miles long; the current road route is about 500 miles. Think of all the driving time that bridge would save (probably ten hours or more)! And if there were a bridge, couldn’t you expect people to walk on it? An experienced walker could do that hike in a day…maybe. So we could use the driving and walking benefits of the Lubec-Westport Bridge to justify its surely-enormous cost, right?

The answer to that has a lot to do with how we justify things like 5G and edge computing.

If we had such a bridge, perhaps some would walk on it. Perhaps many would drive on it, but chances are that the number who would use the bridge would take centuries to justify its cost. Thus, there are things I could use the bridge, or a technology, for that would still not justify building it. Exploiting a resource is one thing; financing it is another.

OK, let’s take this a little further. Grand Manan Island is maybe 15 miles along the path of our bridge. Suppose we build a smaller bridge just that far? It could be a step along the way, right? Yes, but if there aren’t a lot of people trying to drive or walk between Lubec and Westport, there are surely far fewer trying to drive or walk between Lubec and Grand Manan. The only value of that little step would be the value it presented in taking the next, longer one. Even adding additional (non-existent) islands to the route wouldn’t help; no additional island would likely contribute much to the business case, and if any step were deemed unjustified, the whole value proposition would be shot.

By this time, I expect that you can see what I (at least) believe the connections are between my bridge analogy and things like 5G or edge computing. We have no problem deciding what we could do with either, but we’re conspicuously short of things that could justify them.

When I first ran my models on “carrier cloud”, I identified six theoretical drivers: virtual CPE and NFV, advertising and content, 5G, public cloud services offered by operators, contextual services, and IoT. My model suggested that NFV and vCPE had minimal potential. It also said that 5G would have its greatest potential impact by 2022, and that in the longer term, IoT was the decisive driver. In other words, my model said that we could visualize six islands between Maine and Nova Scotia, and that each of them could (if properly exploited) contribute to an investment that would then be exploited by subsequent service steps.

What happens to this picture of “successive evolution” if one or more steps doesn’t play its role? In the case of edge computing (carrier cloud), the answer is that the operators never make any major edge investment. The first of the possible drivers, NFV, never really had much of a chance except in the context of 5G, and operators have been increasingly looking to the cloud providers for hosting 5G virtual functions. Operators never deployed their own video ad and caching services to any extent, and that rounds out all the early edge applications.

Contextual services and IoT are related; the former relies on the latter to get real-world information on a service user, and presents AR/VR or other augmented information forms based on the combination of data available. Because of their early misjudging of the 5G opportunity (they wanted to charge “things” in IoT for cell service when they ran out of humans), operators have done nothing useful with the last two of our drivers so far, and time there is running out.

5G won’t fail because of operator misperceptions; it’s the technical successor to 4G, which was the successor to 3G, and there was never a risk it wouldn’t deploy, just a risk that it wouldn’t generate any incremental revenue. Edge computing, on the other hand, could fail, and take a whole lot of vendor revenues with it.

My model said a decade ago that were operators to deploy edge computing (carrier cloud) at the scale all the drivers could justify, it would be the largest single source of new data center deployments in the history of IT, with the potential for one hundred thousand new edge hosting points. A big part of that number comes from “competitive overbuild”, where multiple operators locate edge hosting sites in the same area because they’re competing for the services those sites support. If, as it appears will be the case, the public cloud providers end up deploying all the edge computing, there are fewer competitors, fewer data centers, and less vendor revenue to fill those centers with network and computing gear.

This is why vendors should be working hard to devise a strategy for edge computing that operators could buy into. That strategy would obviously have to be centered in the metro zone, because metro opportunity density is high enough to justify deployments, and the metro is close enough to the user to still present attractive latency options to applications.

There are two credible edge opportunities, IoT and metaverse hosting, and both have that perfect combination of great potential and technical and business hurdles that seems to characterize tech these days. There are some things that could be done to promote both these applications in a general way, and as I noted in earlier blogs, we could define a “metaverse of things” model that could further harness common elements. And, of course, we could let nature take its course on both. Which option would offer the best result, and offer it fastest?

I’m skeptical that IoT applications will drive edge hosting spontaneously, because the kind of IoT that is most clearly an edge hosting application would also require the highest “first cost” of deployment. Enterprises already know that their own IoT needs tend to be tied to fixed locations, like warehouses or factories. This kind of IoT cries out for local, customer-owned edge computing, because hosting farther from the actual IoT elements only increases latency and magnifies the risk of loss of service.

“Roaming IoT”, characterized by the transportation vertical’s use of IoT on shipments and on vehicles, ships, aircraft, and trains, is different because it doesn’t have a fixed point to place an edge; in fact, it’s necessarily highly mobile. I’ve worked with this vertical for decades, and I can see IoT edge hosting options, but they would be more likely to exploit edge computing than justify it, particularly because you’d need edge resources throughout the entire scope of movement of your IoT elements.

Metaverse hosting is in a sense the opposite; there are strong reasons to say that it depends on edge computing, but a lot of fuzziness on the issue of just how it makes money or even works at the technical level. If we presumed that a “metaverse” was a social-network foundation, then social-network providers (like Meta) would surely tune it to fit what they’d be prepared to deploy. The opportunity in edge computing depends on the presumption that there would be a lot of metaverses to host, making the business of hosting them a good one.

Given that one headline this week was that Walmart plans to enter the metaverse, you’d think that we were already on the verge of a metaverse explosion. Not so; we’re on the verge of another “label-something-to-create-click-bait” wave. What Walmart is actually contemplating is creating a cryptocurrency and NFTs, neither of which can be said to mandate edge computing, and which are in fact more likely aligned with the Web3 stuff. As I noted in a prior blog, Web3 is also mostly hype, but it does suggest that some form of digital wallet and some mechanism for NFT administration could be overlaid on current Internet technology, particularly by players who take payments.

We’ve had credit-card terminals for half a century or more and somehow managed to support them without edge computing. Adding security through blockchain is a worthy goal, but it doesn’t require that we host anything at or near the edge, because credit card and other retail transactions are done at human speed and can tolerate latencies measured in single-digit seconds.

I think that the metaverse may well be the key to edge success, but only if it develops across its full range of potential applications. It’s too early to say whether that will happen, but I’ll blog on the range of “potential applications” to set the stage, and watch for indications that multiple options are moving to realization. If we see that, then edge computing will become a reality, likely even in 2022.

How’s the “Everything is Software” Trend Going?

Software, so they say, is taking over the world. I actually agree with this, and with a related premise that “hardware” and “network” companies have to not only think like software companies, but actually become them. These points raise the obvious question of how well everyone is doing, and fortunately there’s some Street research that offers a clue. Wall Street is always interested in how companies will do in the future, for obvious reasons, and digesting the Street view gives us a chance to rate at least the various types of companies with respect to their “softwareness”. Of course, I’ll add in my own thoughts where appropriate.

We can divide tech companies into three loose categories—software companies, computer/hardware companies, and network companies. If we looked at these groups simply in terms of how successful they were in leveraging the software space, we’d expect to see them fall into the category order I just listed. The first thing I find interesting in Street research is that they don’t, in one important case.

Yes, software companies are the most likely to be rated as successfully exploiting software, but computer/hardware companies are rated below networking companies in that regard. That same ranking can be seen in how the Street says it expects companies to perform in software, relative to their stated plans. Network vendors, then, are seen as more likely to exceed software expectations than computer/hardware vendors.

The Street is better at recognizing symptoms than at offering a proper diagnosis. I think the primary problem with computer/hardware vendors is that hardware is supposed to run software, meaning that both the Street and enterprises expect that a computer vendor is neutral with respect to what’s being run. If they offer “software” at all, it may be simply a matter of convenience rather than something that they’re promoting to facilitate differentiation or adding value.

Viewed in this light, the decision by Dell to let VMware go might make a lot of sense. If “software” is truly a generic layer of value on top of hardware plumbing, then linking VMware to Dell would likely risk contaminating VMware’s value story in association with non-Dell hardware. Interestingly, Dell has fewer Street views that their software business will simply match their plans; most think it will either beat or fall short. Competitor HPE is seen as having a much greater chance of “on-target” software performance, mostly because the Street doesn’t see any changes that generate a big upside surprise for HPE.

Networking, obviously, is a different situation. For most of the history of network devices, software and hardware were bundled. If you got a Cisco router, you got some flavor of IOS, and if you got Juniper equipment you got Junos. Today, network vendors are “disaggregating”, meaning they are breaking up the hardware/software bundle. That lets them charge for software, of course, and the shift in the paradigm opens a new software opportunity that’s reflected in Street assessments of their software potential.

Network vendors also typically offer management software, and increasingly security software too. Since security software is at least among the top software opportunities, if not the top, that gives network vendors greater software potential and upside versus plans.

The downside, for network vendors, is that while the “disaggregation” theoretically opens up additional revenues, the opportunity isn’t open-ended because it’s unlikely that an enterprise would buy a vendor router and then not run their associated network operating system. In my own surveys, I don’t encounter any enterprises who report doing that. Same for management software; it’s almost always tied to the hardware choice. Security software, and even security devices, are more likely to cross vendor lines in procurement, and generally the Street likes the software fortunes of security-focused network vendors better than it does the software opportunities for traditional network vendors, even if they offer security products.

One thing this suggests is that the notion of “disaggregation” in hardware and software terms isn’t an automatic guarantee of lofty software numbers for the network vendors. Generally, network vendors are seen as having a slightly larger upside versus plans in the software space, but on the average about a third of Street analysis suggests a downside. That contrasts to the software space, where less than a fifth of analysis shows that, and the computer/hardware space where almost half of analysis shows a downside risk.

Another interesting insight from the Street view of security is that everyone is confused about it, from the Street to the vendors to the enterprise buyers. The Street recognizes somewhere between five and ten classes of security products. Enterprises report having somewhere between three and six security classes in place, and vendors are all over the place in how they position their stuff. The fact that security products are more prone to vendor crossover, where a buyer gets a security product from a non-incumbent vendor, illustrates the complexity of the space too.

It’s always interesting, and challenging, to relate Street data to my own surveys, modeling, and assessments. This is easiest to consider in the specific case of SASE, which is a “new product category” and thus gets a lot of ink. The Street is split about 50:50 in whether they see SASE and SD-WAN as being related in any way, and where they see a connection they see SASE as being the sum of SD-WAN and security. That view favors vendor presentations of SASE, which tend to try to protect current security incumbencies and products. My view is that SASE has a critical foundation in a proper implementation of SD-WAN, something that virtually no SD-WAN vendor has actually accomplished.

That this leads to even more confusion is obvious. There is absolutely no correlation between the Street projections for who might be a winner in the SASE space, my own data on who enterprises think are winners, and my views on who actually has the best product set. The Street seems to be valuing incumbency in either the security space, the SD-WAN space, or both over any consideration of the actual capabilities of the product set.

So where are we with software opportunity for non-software companies? My view is that both network vendors and computer/hardware vendors have under-realized their potential in software, largely because their commitment to it has been superficial. Vendors rarely have a software strategy; they have more of a software dream. Dream fulfillment is always sketchy, and that’s particularly true when enterprises tell me they’re crying out for rational strategic positioning from their vendors. If vendors actually had a strong software plan, backed up by a strong positioning, they would do considerably better. As it is, lack of both ingredients is encouraging buyers to stay the course, favoring incumbents.

This is all surprising to me, given that we’re actually facing the most potential for technology and vendor shifts in at least 20 years. The bad news is that vendors have been blowing kisses at the software opportunity rather than actually trying to maximize it. That’s left most of them far short of where software could take them. The good news is that there’s time to fix this, particularly for network vendors, and the prize for doing so could be very significant.

Are Operators Considering a New Millimeter-Wave Option?

The more operators are confined to being access players, the more they have to worry about the cost of supporting connections. We all have to worry about that, in fact, because the division of the Internet ecosystem into plumbing and gold-plated toilet seats (so to speak) means that there’s a risk that the value we see in the latter will be compromised by failures in the former. Endless demand for capacity can’t be fulfilled at zero profit, so we can expect broadband expansion (geographically and capacity-wise) to be limited by cost of deployment.

Fiber to the home isn’t going to work everywhere. Fiber pass costs have fallen, but they’re still at least triple the pass cost of CATV, and that isn’t going to work everywhere either. The problem with any sort of physical media is the need to acquire right of way and trench the media, then maintain it. We know that urban and the denser suburbs can be served profitably with fiber or CATV, but deeper suburbs and rural areas simply don’t have the demand density needed. One thing that would help would be a strategy that didn’t impose pass costs, in the current sense, at all.

5G millimeter-wave technology, when used in conjunction with fiber-to-the-node (FTTN), offers gigabit speeds and the potential for addressing at least the deep-suburban piece of the digital divide. Stick a node in a suburban area and you can expect to offer high-quality broadband for a mile around it. There’s no need to prep each home or trench media; you just send your new customers an antenna and instructions. The problem is that mm-wave doesn’t penetrate obstructions well, and there have been reports that even trees will create a barrier to service. Some operators have told me that they’re looking at ways to make 5G/FTTN work better, and maybe even reach its potential.

In typical mm-wave deployments, you stick a node and transceiver in a hopefully high location, often an existing cell tower, and with that serve an area roughly a mile in radius. According to operators, getting the transceiver antenna high is helpful because if the antenna is well above nearby trees, it’s only the trees in the yards of customers that are likely to pose a barrier to the service. The problem is that those trees are barrier enough.

Let’s say we can get our transceiver antenna a hundred feet in the air. If you work through the geometry, you find that at a mile range, the line of sight would be at an angle of 1.085 degrees to the horizontal. A twenty-foot tree at the back end of a lot, say 50 feet from the home antenna, would cover an angle of over 20 degrees, which means that it would be in the path of our millimeter waves. To get clear line of sight above that tree, you’d need a tower a couple thousand feet high.
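For those who want to check that arithmetic, here’s a small Python sketch of the geometry (my own illustration; it assumes the home antenna is at ground level and ignores terrain and the curvature of the Earth):

```python
import math

def elevation_angle_deg(height_ft: float, distance_ft: float) -> float:
    """Elevation angle, in degrees, to the top of an object at a given distance."""
    return math.degrees(math.atan2(height_ft, distance_ft))

MILE_FT = 5280.0

# Sight line to a 100-foot tower one mile away.
tower_angle = elevation_angle_deg(100.0, MILE_FT)   # about 1.085 degrees

# A 20-foot tree 50 feet from the home antenna.
tree_angle = elevation_angle_deg(20.0, 50.0)        # about 21.8 degrees

# Tower height needed, at one mile, for the sight line to clear the tree.
required_height_ft = MILE_FT * math.tan(math.radians(tree_angle))  # about 2,112 feet

print(f"tower sight line: {tower_angle:.3f} degrees")
print(f"tree blockage:    {tree_angle:.1f} degrees")
print(f"tower height needed to clear the tree: {required_height_ft:.0f} feet")
```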

My operator friends tell me that they’ve determined that it would be difficult to make this sort of 5G/FTTN work in wooded areas unless there was a natural high point of considerable elevation. However, there might be another model that could work well. That model could be called “fiber-to-the-intersection” (FTTI).

Look at a typical crossroads, of which there are millions worldwide. You can typically see quite a distance down both streets, in both directions. There are usually trees, but they’re rarely closely spaced, even in suburban areas. The buildings tend to have a fairly standard setback, too, so they line up well. Imagine a millimeter-wave antenna at one of these intersections; it would have a pretty clear line of sight to structures along the streets in all four directions.

You may be looking to empower users and businesses with broadband, but what you’re really doing in any rational broadband strategy is empowering buildings, and buildings are most likely strung along roads/streets. Focusing 5G/FTTN on FTTI missions would make sense, then, in multiple ways.

Another point operators made was that siting a millimeter-wave node at an arbitrary “best geographic” point could well create difficulties feeding the node with fiber. There are always rights of way along transportation paths, but rarely across people’s yards or fields. Without a feed for fiber and nodal power, millimeter-wave is about as useful as an unplugged microwave oven.

There are downsides to FTTI, of course. On the technical side, operators say that the number of customers you could expect a given node to support is lower, because most streets/roads don’t run straight for a full mile, and curves would introduce barriers, particularly for antennas that could be mounted no higher than the top of a pole. However, there are practical limits caused by terrain and foliage in any millimeter-wave approach, and it’s not clear that FTTI would be worse. In fact, as I’ve noted, operators seem to think it could be better.

There’s also a political issue. A millimeter-wave node stuck on a cell tower isn’t an in-your-face installation. Adding one to an intersection, a place residents and workers drive through daily, invites pushback. Local government intervention can be time-consuming and costly for operators, and if specific legislation is involved there’s always the risk that a change in administration (at the local, state, or federal level) could swing the rules against any accommodations previously negotiated.

The problem, of course, is whether there’s an alternative. Estimates of just how much capacity a home or business needs to be considered “broadband empowered” vary considerably. Most operators think that 100 Mbps download and 50 Mbps upload would be a reasonable goal. Neither copper loop nor satellite technology can currently meet that standard. 5G cellular, millimeter-wave, and fiber (at least to the curb) are all suitable, under at least some situations.

Some operators (including, obviously, most cable providers) see a combination of fiber and CATV cable as the answer. After all, we deliver broadband and video to millions of locations using that approach. The problem, as even some cable operators will admit, is that it’s becoming more and more difficult to deploy new CATV plant as the demand density of the unserved and underserved drops, which it does as the most accessible pockets of demand get picked off.

Any physical-media approach to broadband is limited where demand density is low. That means that one of the 5G models (cellular or millimeter-wave) would be a preferred addition to the current model of CATV and fiber. The FTTI interest I’m hearing about represents an attempt by millimeter-wave advocates to deal with the barriers to deployment of their favorite approach.

Most of the operators, including the FTTI and millimeter-wave advocates, would admit that broader 5G cellular usage in home broadband would likely be a better approach. One reason there’s considerable operator interest in 5G to start with isn’t the fact that you could give a mobile user a couple hundred megabits per second, but that you might be able to give that capacity to a home user. Samsung’s recent achievement of 8Gbps 5G delivery using massive MIMO doesn’t mean that 8Gbps smartphone services are likely to be profitable, but it does show that it’s possible to support higher bandwidth at a 5G cell site, enabling that site to deliver home broadband as well as cellular 5G service.

Operators tell me that a pure cellular-5G model to support both home broadband and mobile services isn’t efficient in higher demand density areas, and that millimeter-wave 5G isn’t effective in very low density areas. It looks like operator planners are jiggling their strategies to find the best way to use millimeter wave, to minimize any empowerment gaps they face, and to keep broadband improving and profitable at the same time.

The Impact of a Shift Away From Ad Sponsorship

Our culture is replete with references to the importance of the path that money and value follow in commercial exchanges. “Follow the money” and “Show me the money” are two examples. As I pointed out last week in a blog, there are really two payment models in the world of Internet and tech today—the direct payment approach where consumers of a product or service pay for it, and the indirect approach where third-party players with a desire to gain access to consumers pay for consumer products/services in order to influence consumer behavior in their favor. I also suggested that the indirect model, which we see most often in the form of ad sponsorship, was under pressure.

Numbers for anything are difficult to come by these days, but sources of advertising spending put the 2021 numbers at somewhere between $580 and $680 billion. Growth numbers also vary, but my own data has shown that the critical number, which is ad spend as a portion of GDP, has stayed very close to constant for a decade, and has decreased in some years. The key point is that the rate of ad spending growth (if any) isn’t sufficient to fund our entire online experience. In fact, it’s not enough to continue to fund everything we’re used to having “free with ads”.

In network TV, we now have about 18 minutes of commercials per hour, which means that almost a third of a broadcast show is dedicated to commercials, something research shows is an all-time high. Nobody expects it to go lower, and the reason is that online advertising has taken a larger share of ad budgets, which as I’ve noted are largely static over time. The reason is that online advertising is more easily targeted, which advertisers like because they can spend less to reach their real prospects. Networks have to compensate for the shift by offering more minutes for the same dollars.

What we can see in content, meaning video, is a shift from a purely ad-sponsored model to a subscription model. Amazon Prime Video, Netflix, Hulu, YouTube TV, and other services are treated like cable TV because the content producers (like the networks) charge for their material and the streaming providers have to pay. Even the networks are under profit pressure, so they’re moving toward offering their content via streaming services. Over time, many on the network side expect that every network will have its own streaming offering, and will gradually put more pressure on “aggregator” services like Hulu and YouTube TV. Many of the streaming players are already becoming content producers to compensate for this pressure.

The point here is that we’re shifting from an indirect-pay model to a pay model for more and more things, and the reason is that there’s not enough indirect payment available to cover both all the stuff we want and profit growth for the current recipients of the payments. This will continue to impact content services, and eventually impact social media and perhaps even the Internet itself.

Remember the “follow the money” adage? Well, wherever the trail leads, one truth emerges, which is that the more players who expect to touch the money/value flow, the less money there is per player. Right now, “networks” or content creators are partnering with aggregators because they have to, because the public wants to “watch TV” and not have to deal with between ten and fifty separate content sources to do it. However, people are getting used to direct relationships with content sources, and as they do, the value proposition for aggregators gets thinner.

The biggest factor in weaning people away from the “watching TV” approach is the erosion in “live” viewing. Other than sports and news, people are generally accepting of on-demand content, because social forces have made scheduled viewing inconvenient, and limited creation of new shows has forced viewers to seek other material to watch. New players to the content game, like Amazon, have tended to release an entire season for streaming at one time, while networks have cited costs and COVID as reasons for reducing their number of shows. Less live TV means more people learning to do without it.

We could be heading for a combination of content subscription fees (the Netflix model, and the goal of networks like Disney/ABC, Comcast/NBCU with Peacock, and ViacomCBS with Paramount+) and pay per view. We already have “sports networks” (the ESPN series) and “news networks”, but individual networks bid on sports and offer local news and weather. Aggregators like Hulu and YouTube TV may end up being the source of “live” news and weather at the local level, and some sports, particularly high-school and college sports. More and more of the other stuff may flee to network-specific streaming services.

Social media offers the ultimate in ad targeting, so we can expect it to retain value for advertisers even in the long term. More and more ad dollars fleeing to social media will in fact drive the shift away from ad dependence in other services, like video content. However, social-media companies will still need to think about revenue and profit growth, and these companies will either have to start creating unique content or start selling products. We can already see a bit of both today.

The metaverse may be the biggest beneficiary of a shift by OTT giants to a direct pay model. It’s not difficult to see how the metaverse could create many avenues for direct revenue generation, including the obvious move to charge for “membership”. To shift a platform that’s currently ad-sponsored to a direct-pay model would surely arouse user wrath, but a new concept like the metaverse could easily become a pay-for. I expect that Meta itself is leaning toward this approach.

What about the impact on the Internet and telecom? Focus on content delivery means caching in metro locations, which tends to focus traffic from the metro outward to the user. Core transport is less important without settlement, because without a revenue source associated with peering there’s no incentive to build out the core. The telcos and cable companies are cast increasingly into a pure access role because they’re not particularly interested in deploying caching or metro hosting.

There are regulatory barriers to QoS-specific Internet services, which means that to the extent that the Internet is the dialtone of the 21st Century, operators have little chance of gaining revenue by selling premium handling. In theory, they could sell QoS for business services, but both operators and businesses tell me that’s going to be a heavy lift, for two reasons.

The first reason is that the focus of business IT has been cloud-enhanced user interfaces to support web access to customer and partner portals. This mission explicitly involves the Internet, not business data services, and in my most recent surveys of both operators and enterprises, its priority is broadly (almost universally) acknowledged.

The second reason is the first mission’s impact on business networking. Companies are learning that employee access to applications can be provided through the same facilities that are being enhanced to support customer/partner access. Combine this with the growing use of SD-WAN to support thin sites, plus the virus lockdowns and WFH, and you get what some operators and enterprises are already seeing as a flight from more expensive VPN services toward SD-WAN and the Internet. Given that, premium QoS on VPNs is hardly likely to be a good option.

Direct pay will not open an opportunity for Internet QoS, and in fact is likely to focus the market more on metro caching than on networking. The Internet, at the access level, will get better without premium handling because the market for content depends on reasonable delivery quality. Thus, the only real driver of premium handling would be edge computing and its association with latency-sensitive applications, meaning IoT. As I’ve pointed out earlier this week, the operators may have booted their opportunity at the edge through a combination of carrier cloud hesitancy and a potential onrush in community network interest, epitomized in Amazon’s Sidewalk enhancements.

There is still, in theory, time for operators to get their act together and take a position in the services that could lift them above the commodity access morass. Not much time, though, and I don’t think operators themselves are capable of the transformation. Vendors, or at least a vendor, will have to step up and be sensible. That may be a vain hope too, based on past behavior, and we may see 2022 as the Year the Operators Became Plastic-Pipe Plumbers.

Amazon Extends Sidewalk from Neighbors to Smart Cities

If you need evidence that operators’ reluctance to deploy carrier cloud has far-reaching consequences, you only need to look at Amazon’s Sidewalk evolution for a proof point. Sidewalk, in its original form, was a strategy for sharing WiFi between homeowners and businesses to allow for better connection and control of IoT elements. Now, Amazon is going to launch a business-and-government addition to Sidewalk, the Sidewalk Bridge Pro, that puts the service squarely in the commercial IoT space.

Sidewalk is Amazon’s mesh-network IoT strategy. It uses some 900 MHz spectrum to provide a link between IoT elements to supplement WiFi, and Sidewalk “members” allow their WiFi and IoT connectivity to be shared by neighbors. This enables IoT to work in situations where WiFi couldn’t reach a device, and it’s a good example of the benefit of federation.

Sidewalk has always used LoRa to create its network within a neighborhood, but whether a given location can even use Sidewalk effectively depends on the number of neighbors who have compatible Amazon Echo/Ring devices to act as a bridge, and who are willing to join the Sidewalk community. The range of current Sidewalk bridges is short, which means that in some cases even having Bridge-happy neighbors might not be enough for commercial applications. Sidewalk Bridge Pro is supposed to offer a range of up to five miles.

Keep in mind that Sidewalk is an IoT network and not a generalized Internet service. The idea is to create a “community network” that lets users of smart devices share network technology so that their devices are accessible outside their own home networks. Sidewalk Bridge Pro makes this community network larger, to the point where it could be used to create not only smart buildings but also smart cities. The effect of this announcement is potentially significant, for Amazon, the operators, and IoT overall.

First, this establishes the notion of federated communications for IoT. Rather than creating a dedicated IoT network (presumably from operator 5G services), buildings and cities (as well as parks and other governmental and public areas) could create IoT networks through a cooperative. A smart building could be created by linking the networks of its tenants, and since Sidewalk assures privacy and security (at least Amazon claims it does), the result wouldn’t compromise security. The use of Sidewalk Bridge Pros could ensure that no tenant’s network failure could take down the collective’s network.

The second impact is that Sidewalk facilitates the use and reuse of traditional (meaning current residential) IoT technology. Whatever works with the Amazon Ring system will work with Sidewalk, providing that there’s a suitable bridge included. Most Ring hubs are bridges, so that’s not a major challenge. Many businesses have adopted IoT based on the same devices used in homes, including Amazon’s Ring devices, so this lowers the smart-building bar by allowing smart buildings to be created from smart businesses (or apartment tenants, or both).

The third impact is related to that buildings-from-tenants story; Sidewalk presumes that there would be “local” or “tenant” events and activities, and that there would also be building-level (and ultimately community-level) events and activities as well. What gets “exported” from one level to another would be under control of the Sidewalk developers who build the software.

Impact number four is that Sidewalk elevates the whole IoT story, from down in the device-and-network dirt where potential is delivered but not functionality, to the tenant, building, and community domains where software intelligence will focus. This intelligence, of course, is almost surely going to be largely hosted in Amazon’s cloud. We’ve needed an IoT story that focuses not on the Internet or on Things, but on things that people and companies value. Now we’re certain to get one.

The final impact is competition. Sidewalk has been treated up to now as a kind of extension of basic Alexa and Ring, a way of finding a pet in a neighbor’s yard or triggering a light from a sensor just out of reach of your home WiFi. Not exactly a dramatic market opportunity, in other words. The Bridge Pro expands the collective concept way beyond that, to the point where other cloud providers can’t ignore it. In fact, IoT vendors in general are going to have to start thinking about Sidewalk.

Right now, we have two “flavors” of IoT. One flavor is based on WiFi and designed for consumer low-tech installation and adoption. The other is based on one of a number of IoT protocols, ranging from the proprietary Insteon stuff to established commercial/residential IoT device connection standards like Z-Wave or Zigbee, and onward to the commercially targeted LoRaWAN. Amazon’s Alexa/Ring and Google Home have already induced vendors in our middle group to add Amazon/Google integration to their hubs. Wouldn’t it be likely that Sidewalk would induce that group to create Sidewalk bridges and Sidewalk-like federations?

I’ve not dug through the programming details of the current Amazon/Google bridges in smart Z-Wave or Zigbee hubs, nor have I looked at the details of Sidewalk development, but it seems from what I can find in casual research that you could already bridge between my middle commercial/residential technology group and Sidewalk. If a Z-Wave/Zigbee hub can talk to Alexa/Ring, then that pathway currently allows commands to pass. If Alexa/Ring is federated with Sidewalk, it would seem that would then extend Sidewalk to work with Z-Wave and Zigbee, and that could be huge.

Serious IoT for homes and businesses relies on those two protocols and the enormous inventory of devices they support. Bring these into the Sidewalk community, or any federation similar to Sidewalk launched by a competitor, and you suddenly have all the makings of an entire IoT ecosystem. No monthly charges for connection, no new devices, no need to learn new technology. A thriving set of developers and integrators too.

The obvious question is when we could expect this utopian IoT future, and that’s hard to say. Amazon hasn’t gotten regulatory approval for its Sidewalk Bridge Pro, nor has it released a price or availability date. However, the launch of Sidewalk Bridge Pro is such an obvious gauntlet thrown down to competitors that it’s hard to believe Amazon wouldn’t be pretty close to general availability.

I’ve always been excited by Sidewalk because a community network could focus the industry more on the IoT applications than on sensor/controller connectivity. In particular, I’d like to see Amazon and Sidewalk developers address the questions I’ve raised in marrying IoT with metaverse, my “metaverse-of-things” suggestion. Perhaps we’ll get some innovative strategies in that space now, and that could be critical in accelerating the pace of IoT, the metaverse, and edge computing.

Let’s Consider a Tech New-Year’s Resolution

New Year’s Resolutions are popular, perhaps in part because we know that we don’t usually keep them. Still, we feel better making them, so every year I look for some inspiration for my own. This year, I got it from, of all places, our legal system.

Everyone will likely have their own take on the Elizabeth Holmes verdict, but the theme we’re hearing in the media is focused on the question of whether Theranos is just the tip of some vast Silicon Valley iceberg of fraud. Is the Valley a problem in and of itself? Is tech one vast “spin-till-you-win” cesspool? Yes and no, but it’s a harder question to answer because there’s a lot of blame to go around.

Back in the dot-com bubble, in the late 1990s, I was bombarded by companies who wanted me to help them tell their story. Some were startups, and some were giants like WorldCom and Enron, but two-thirds of them were complete nonsense. My policy is, and has always been, that I will not work with or in any way promote something I believe has no real value proposition. I actually considered retiring or leaving the field because I was totally disgusted with the way things were going, despite having 20 years of independent consulting and way more than that of network experience.

When WorldCom and Enron were in their heyday, I told every reporter who asked me that their numbers and business models simply didn’t make sense, and wrote my own features and blogs to say the same. Well, their claims weren’t real. Not only did my comments at the time have little impact on the stories I was interviewed for, but after everything came crashing down, nobody in the media or government called me and asked the simple question “How did you know?” Punishing egregious violations was enough; there was no reason to ask why the problems weren’t recognized from the first, and no reason to try to change things to prevent them in the future.

How did we get to that state, and are we still in it? Who is responsible? Let me offer my own tale, and you can decide whether you agree.

Our problem started not in the Valley but in the tech media and how it’s paid for. I started enterprise networking surveys in 1982, and between 1982 and 1990 technology publications were ranked as a strategic influence second only to the experience of a trusted peer. If I called on a CTO or CIO, I’d almost certainly see a copy of Business Communications Review and Network Magazine on their desks. I wrote regularly for both publications, and some articles were almost three thousand words. Some of these publications were subscription-based, but most were controlled circulation, meaning ad sponsored. Advertisers paid to reach those with budget influence.

If you put bacteria in a growth medium, they grow. A new money-making strategy begets money-makers. If advertisers would pay for more eyeballs, then eyeball farming might be a good business to be in. In 1989, in the US, my surveys and modeling said there were roughly 14,000 points of organized network procurement, and that number roughly matched the circulation of those publications. Five years later, the number of people who filled out reader service cards to claim receipt of the publications had grown to over 55,000, while the number of points of procurement had grown only to 15,600. The eyeball factory was stamping out ad-attracting eyeballs.

By 1996, the sum of the budgets claimed on reader service cards threatened to exceed the global gross domestic product. All of the key publications of a decade earlier were going or gone. Soon, the remainder went online, and search engine optimization (SEO) ruled the world. To get found in a search is to get clicked, and the higher you rank in the results, the more clicks you get.

The impact of this on the media is obvious. In the ad-sponsored model, the publication or website is paid for ad eyeball impressions. People, including the people running the publications/sites, do what they get paid to do. If you’re paid when somebody clicks on a link and is served an ad, you focus on getting them to click. Once they do, you’ve gotten as much from them as you can expect to get. Search engines stamped out egregious misuse, like having an “article” consisting entirely of keywords for SEO, so people got creative with content. Forget three-kiloword articles; enter sensationalism.

The eyeball focus hits the content being posted, because what gets a click is what’s novel and interesting, not necessarily what’s true or useful. A story about Steven Spielberg claimed that he was asked by a reporter what his best advice was as a young director. He thought a moment and replied “When you talk to the press…lie.” The reason is obvious; if sensationalism rules, then you have to be sensational to win. Ideally, being sensational and being truthful aren’t mutually exclusive, and my own consulting work makes that assumption. I talk to clients about a “marketing fable”, which is a way of presenting your product or service that magnifies media interest without stepping over into falsehood. It’s a fine line, and it’s hardly surprising that many don’t bother trying to walk it when the rewards of sensationalism are so clear, and the risks seem to apply only if you step so far out of line as to be guilty of a crime.

I’m not saying that Silicon Valley and what it represents isn’t in trouble. I’m saying that the forums carrying criticism of the Valley today are the ones who got it into trouble, and the people reading the articles are the ones who continue to demonstrate that sensationalism wins over truth. If we want “free” things, we’re really getting things somebody else is paying for, and accepting the practices of the system that does the paying. We have to promote 5G as transformational to the average user, because average users are the mass eyeball source. It’s a world of extremes; every risk is a catastrophe in the making, every benefit a changer of Life as We Know It. And guess what? It works. Because we click on the sensational links. We can name the practice “click bait”, but we can’t seem to wean ourselves off it.

The question is whether that excuses stepping over the line between positioning and outright falsehood. Everyone who writes for or talks to an audience has a responsibility for what they say. I have a responsibility, and so did Holmes. Nobody can say that they’re always right; nobody can possibly know all about everything. Everybody can, or should, say that they communicate the truth as they know it, and seek to know as much of it as they can. The jury decided that Holmes failed to do that, and I agree, but I think that a part of the crime was getting caught up in the hype wave, and she didn’t start that, nor will it end with her and the verdict.

We started it, all of us who fall for click bait or push the truth to get better coverage. We wanted the Next Steve Jobs, and so we invented one. Free entertainment, just like the TV of old, sponsored by commercials. To quote a sci-fi writer of old, TANSTAAFL: “There ain’t no such thing as a free lunch.” If we want goodness and truth, we have to stop rewarding the alternative.

I’ve seen a lot of bad companies succeed, a lot of bad stories rewarded. I’ve seen a lot of good companies fail, and technologies that were essential to the optimal evolution of networking get discarded…at least for a while. The market rights itself eventually, because false sensationalism is parasitic, and a parasite that kills its host doesn’t survive. Since I’m quoting things here, read Emerson’s essay on compensation: “Every secret is told, every crime punished, every virtue rewarded, every wrong redressed, in silence and certainty.”

Ad sponsorship is a zero-sum game. Despite the growth in online advertising and dollars-for-eyeballs thinking, ad spending as a percentage of global GDP is actually down a bit. We now end up paying for a lot of video, through cable TV and streaming providers. The question then is whether people who have to pay for tech information will be satisfied with tech entertainment instead. We may find out.

What we need now is innovation, not hype. We had three major waves of tech innovation in the last century, waves that drove tech spending up almost twice as fast as GDP. We’ve had none in the 21st century, and I think the reason is that tech is complicated, and the mass market can’t absorb its details, only consume its results. If we’re focused on promoting what we already have, or pretending we have something we don’t, to get eyeballs, our best and brightest aren’t weaving all the complexity of the tech world into something truly revolutionary. Wildebeest during the migration are easy prey, till they move on. Easy isn’t always smart, and we need smart promotion of innovative ideas and not hype, however easy hype seems today.

My resolution for 2022 is to try harder to promote the innovative truths of our industry. How about you?

My Experiences Modeling the Metaverse

My notion of a metaverse of things (MoT) is grounded in the assumption that all applications of this type are a form of digital twinning. There’s a real-world framework that is, at least in part, reflected into the virtual world. For applications like industrial IoT, there’s a structure that’s being mirrored, and that structure has specific components that represent pieces of the industrial process. For MoT, or any digital twinning approach, to work, we need to be able to represent the structure and its rules in the virtual world.

For applications like social media, the things that are reflected into the virtual world are freely moving elements: people. There may still be a structure, representing the static environment of the metaverse’s virtual reality (like a game), but the structure is really, literally, in the eye of the beholder, because how it looks and behaves is related to the behavior of the free-will people who are roving about in it.

If MoT is to be implemented, the implementation would have to include the modeling of the virtual world and the linkage between it and the real world. It would be helpful if a single approach could model all kinds of MoT applications, because then the same software could decode any model of a digital twin and visualize it as needed. The question of how to model this sort of thing isn’t a new one, though. Some of my LinkedIn friends have encouraged me to talk specifically about the process of modeling a metaverse, and as it happens I looked at the problem at the same time as I started my work on modeling network services.

When I started to look at network services as composable elements fifteen years or so ago, I had the same challenge of how to model the composition, and ExperiaSphere was the result. I started with a Java-defined approach that essentially programmed a service from objects (Java classes), but in the second phase I transitioned to a model-based approach. However, ExperiaSphere never intended to model the network or resources, just the functions included in a service. The presumption was that lower-level tools would deploy functional elements as virtual pieces committed to the cloud.

The ExperiaSphere project had a spin-off into social media, though. There was a set of elements that represented a social framework and its interactions, and that opened the question of how you’d model social systems (for those interested, this was called “SocioPATH”). The result of thinking on this was another activity I called “Dopple”, the name derived from the German word “Doppelgänger”, which can mean a kind of virtual double. That’s a reasonable name for a digital twin, I think, and in fact a Dopple object was designed to be a representation of something like a person. Broadly speaking, a Dopple was something that represented either a real-world thing or a thing that was intended to act, in the virtual world, as though it were real.

A person’s Dopple would be, in modern terms, an avatar. So would the Dopple of a “non-player character” in the terminology of Dungeons and Dragons. Dopples would have APIs that linked them to the visualization framework, to software elements that controlled how they looked and moved, and so forth. You could also represent fully static things like rooms and buildings and mountains as Dopples, but of course as something in the virtual world became more static than dynamic, there’d likely be a value to simply representing it as a CAD-like model.

In the real world, everyone has a personal view, so it has to be the same in a metaverse. Just as there are billions of people and trillions of discrete objects in the real world, the same might be true for a metaverse. However, in both cases the personal view of an individual acts as a filter on that enormous universe of stuff, and that means that the Dopple concept has to be personal-centered and thus able to understand what’s inside each “personal view”.

My assumption was that, like ExperiaSphere’s “Experiams”, Dopple objects would form a hierarchy. The top level of ExperiaSphere’s model is a “Service Experiam” representing the entire service. The top level of Dopple, in my conception, was a locale. A locale is (as the name suggests) a place that contains things. The scope of a locale is determined by focus or the “visibility” of the individual whose personal view is the center of the metaverse, so to speak. If the metaverse isn’t modeling a social system but an industrial process, the locale would represent a place where the process elements were concentrated in the real world. In IoT, a locale could represent an assembly line, and in a social metaverse it could represent the surroundings of a digitally twinned human, an avatar.

As a software object, a Dopple is a collection of properties and APIs. I assumed that there would be four “dimensions” of APIs associated with a Dopple, and that the metaverse would be a collection of Dopples.

The first dimension is the “Behavior” dimension, and this is the set of APIs that represent the linkage of this particular Dopple object to the real world. Generally, it would represent the pathway through which the Dopple-as-an-avatar would be synchronized with the real-world element it represents.

The second dimension is the “GUI” dimension, and here we’d find the APIs that project the representation of the Dopple to the sum of senses that the metaverse supports. Note that this representation is limited to the Dopple itself, not its surroundings. However, the same dimension of APIs would govern what aspects of the metaverse are “visible” to the Dopple.

Dimension number three is the “Binding” dimension, which represents how the Dopple is linked in the hierarchy of Dopples that make up the metaverse. In a social metaverse, the binding links the Dopple to superior elements, like a “locale Dopple” and subordinate elements such as the representation of what an avatar is “carrying” or “wearing”.

The final dimension is the “Process” dimension, and this is a link to the processes that make up the Dopple’s behaviors. My thought was that like ExperiaSphere’s Experiams, each Dopple had a state/event table that defined what “events” it recognized, what “state” it was in, and what processed a given event in this particular state.
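To make those four dimensions a bit more concrete, here’s a minimal Python sketch of what a Dopple might look like as a software object. The names and structure are my own illustration of the concept, not actual ExperiaSphere or Dopple code:

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List, Optional, Tuple

# Process dimension: (current_state, event_type) -> handler returning the next state.
Handler = Callable[["Dopple", dict], str]

@dataclass
class Dopple:
    """A digital-twin object with the four API 'dimensions' described above."""
    name: str
    state: str = "idle"

    # Behavior dimension: the pathway that synchronizes this Dopple with
    # the real-world element it represents.
    real_world_feed: Optional[Callable[[], dict]] = None

    # GUI dimension: how the Dopple is projected to the senses the metaverse
    # supports, and what parts of the metaverse are "visible" to it.
    render: Optional[Callable[["Dopple"], dict]] = None

    # Binding dimension: position in the hierarchy of Dopples.
    superior: Optional["Dopple"] = None            # e.g., a locale Dopple
    subordinates: List["Dopple"] = field(default_factory=list)

    # Process dimension: the state/event table that drives behavior.
    state_event_table: Dict[Tuple[str, str], Handler] = field(default_factory=dict)

    def handle_event(self, event_type: str, payload: dict) -> None:
        """Look up the handler for this state/event pair and transition."""
        handler = self.state_event_table.get((self.state, event_type))
        if handler is not None:
            self.state = handler(self, payload)
```

A locale, in this scheme, would simply be a Dopple whose subordinates are the Dopples it contains.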

In my approach, a “Behavior Dopple”, meaning one directly coupled to the real world, had a hosting point, a place where the Dopple object was stored and where its associated processes were run. “Behavior Dopples” could represent people, industrial elements, NPCs (in gaming/D&D terms), or real places.

Every Behavior Dopple has an associated locale (in theory, one or more), meaning there is a superior Dopple bound to it that represents the viewpoint of what the Dopple represents. If multiple Behavior Dopples are in a common locale, they have a common superior Dopple, and their points of view are a composite of that superior Dopple’s bound subordinates. If you wave in a metaverse, your wave is visible within any locale Dopple you’re bound to.

To illustrate this (complex but important) point, suppose your avatar is in a virtual room, attending a virtual conference. Your Behavior Dopple waves, and the wave is visible to those in the same virtual room and also to those attending the virtual conference. A virtual conference, in my original Dopple model, was a “Window Dopple” that was bound to the locales of each of the attendees. These Dopples “filtered” the view according to the nature of the conference, so that if your camera was off then your personal/Behavior Dopple wouldn’t be “seen” but would be heard. I assumed that Window Dopples would present a “field of view” that represented a camera, and things outside that field of view would not be seen by the conference. A null field of view is equivalent to the “camera-off” state.
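Continuing the sketch from above (and reusing the hypothetical Dopple class), here’s one way the wave-and-conference example might look in code. Again, the names and the filtering logic are my own illustration, not a description of any real implementation:

```python
from dataclasses import dataclass
from typing import Callable, List, Optional

@dataclass
class WindowDopple:
    """Links a locale to another locale, filtering what crosses between them."""
    remote_locale: "Dopple"                        # the Dopple class sketched earlier
    field_of_view: Optional[Callable[["Dopple"], bool]] = None  # None = camera off

    def forward(self, source: "Dopple", event_type: str, payload: dict) -> None:
        # A null field of view models the "camera-off" state for visual events;
        # a fuller version would filter per sense, so audio could still pass.
        if self.field_of_view is None or not self.field_of_view(source):
            return
        for member in self.remote_locale.subordinates:
            member.handle_event(event_type, payload)

def wave(source: "Dopple", windows: List[WindowDopple]) -> None:
    """A Behavior Dopple waves: the wave is seen in its own locale, and is
    forwarded through any Window Dopples bound to that locale."""
    locale = source.superior
    if locale is None:
        return
    for peer in locale.subordinates:               # visible in the virtual room
        if peer is not source:
            peer.handle_event("wave", {"from": source.name})
    for window in windows:                         # visible to the conference
        window.forward(source, "wave", {"from": source.name})
```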

The “Window Dopple” illustrates that a link between two locales (meaning their Dopples) can be another Dopple, which further illustrates that a metaverse is a structure built on Dopples. The same concept can be applied to IoT. We might have a Factory Locale and a Warehouse Locale, each represented by a Dopple and each containing stuff. The product of the Factory Locale (created by the manufacturing process) is a Dopple, which is then transported to the warehouse (as represented by a Window Dopple linking factory and warehouse).

The reason for all these dimensions of APIs and Dopple objects was to create a framework that could be adapted to any relationship between the real and virtual worlds, and to any means we might find convenient for representing the virtual world to its inhabitants. Like most software architects, I always like the idea of a generalized approach rather than a one-off, and which of the two we end up with is probably the biggest question in the world of metaverse and MoT. If we create “silo metaverses”, we multiply the difficulties in creating a generalized hosting framework and the cost of metaverse software overall. At some point, cost and difficulties could tip the balance of viability for some potential applications.

We probably won’t establish a single model for the metaverse in 2022, and we may never do so. What we can expect for this year is a sign of progress toward that single, general, approach. If we don’t see it, we can expect that metaverse and MoT won’t fully realize their near-term potential, and perhaps never realize their full potential at all.

Summing Up 2022 Expectations

Well, Happy New Year everyone. There are obviously positive and negative omens in play for the world, economic and otherwise. Still, I’m a network/cloud strategist and not a doctor or economist. I’m not going to attempt to address broader issues and events, other than to point out what a negative versus positive shift could mean for networking. I blogged at the end of 2021 on the three technology factors that were likely to influence networking in 2022, and now I want to bring all the points together, not by reprising those earlier blogs but by summing up the impacts and presenting an overall view of what to expect.

Network services are at the heart of everything. The Internet is the new dialtone of the world, and the mechanism whereby we shop, sell, support customers and link with partners, entertain ourselves, and even learn. The Internet is functionally an over-the-top community, but network services are what make the Internet possible. Those same services are the foundation of business networks, calling and texting, and everything that connects us. The profits of services fund service provider capex, and the nature of the services determines the network plans of enterprises and the personal behavior of consumers. For all these reasons, it’s a good idea to start our discussion with network services.

The problem we have with network services is that the return on the investment needed to sustain them, much less expand and improve them, is in a squeeze. Virtually every business wants to use network services, and certainly most consumers do, but nobody wants to pay for them, or at least to pay in proportion to usage. The Internet is the first network service the world has ever known that didn’t include settlement among participating network operators. Consumers have driven traffic growth while their unwillingness to pay for usage has been cemented in by the neutrality rules, and as the cost of Internet capacity drops and bandwidth and QoS improve, the Internet looks increasingly attractive as an alternative to traditional business networking, like VPNs. We only have to look at the growth in SD-WAN to see that.

The fact that network operator profit per bit on basic connectivity has shrunk dangerously close to the point where ROI isn’t satisfactory is clear. Many operators have started to move out of their traditional service markets to other geographies or other service areas. The good news is that we’re not going to see network operators exiting network connectivity services. The bad news is that we are likely to see a curtailment of investment there, but fortunately not in 2022.

The original value proposition for networking, the value proposition that drove everything up to the point where public Internet got popular, was connectivity. A connectivity service is valuable to the extent that it can connect a lot of stuff, and that tends to value a broad network. Consumer Internet and broadband changed things because the goal was to connect to OTT resources, experiences, that became increasingly cached and local. The natural evolution of content services results in a metro-cache focus, even if we ignore other trends that drive metro change. Operators don’t seem to see this, and the likely reason is that they don’t want to see a future dominated by metro-centric services. They’re connection people and they want to stay that way.

What’s keeping operators committed to connectivity services isn’t public-spirited thinking, it’s more like tradition or inertia. Operators have over a hundred years of both invested, and they’re no more likely to jump up and become over-the-top players than a farmer would be likely to decide to become a miner. In fact, what we’re seeing now is that operators are doing what farmers would typically do if somebody discovered mineral wealth on their land—they outsource the new opportunity. 5G has created the closest thing operators have seen in decades to a “new service opportunity”, but they don’t see the real opportunity, which is edge computing or “carrier cloud”, as one they can run with. So, they’re doing deals with the cloud providers for the edge and hoping for both “thing-connect” revenue and some sort of QoS windfall on connection services.

The cloud providers aren’t all that interested in the kinds of “new service opportunities” that operators think 5G could create. They understand that “thing-connect” charges would do nothing other than invalidate the whole IoT service space, and that nobody wants to pay for QoS when demonstrably the Internet and much of cloud computing don’t require it. Cloud providers do see reality here, and want to seize the higher (in revenue and profit margin terms) ground. They are interested in the 5G hosting opportunities operators are presenting, in edge computing partnerships, and other relationships that involve operators paying for cloud hosting. They don’t want operators to build carrier cloud, any more than operators want it.

What this likely means is that operators’ investment in carrier cloud in 2022 will be negligible, and that “new services” offered in 2022 will likely be simply face-lifts on connection services. That includes 5G, and Verizon’s decision to defer 5G Standalone, meaning 5G Core and the slicing features, until “2022” likely means that they don’t really know when or if they’ll push full 5G. Whatever we see in new network services, including edge computing and IoT, is more likely to come from cloud providers than from operators.

If operators want connection services, though, they do have the avenue of SD-WAN and SASE to think about. SD-WAN is a virtual-network overlay on IP and the Internet, targeted initially at sites where MPLS VPNs were either too expensive or unavailable. When the Internet, at least at the consumer level, was a dial-up phenomenon, a separate business network framework made sense. Today, with the Internet carrying video delivery and offering much better QoS, that’s probably not the case. I think operators in 2022, fleeing OTT-level “new services” like edge computing, will push an integration of SD-WAN, NaaS, and SASE. They’ll leave the rest to the cloud providers.

If the operators aren’t likely to be offering services that compete with cloud providers’ services, the opposite is not true. By the end of 2022, I think all the public cloud providers will be offering things that will effectively compete with operator services. The starting point will be intra-cloud networking designed to encourage enterprises to create a global front-end in a single cloud, linked to multiple data centers within the cloud itself.

What’s interesting about the intra-cloud network is that it will have a lot in common with the network needed to make metaverse a reality. Building a metaverse demands edge computing in metro centers, and meshing of metro centers with low-latency pathways. Such a network is fiber-intensive because rather than feeding traffic to a core network, each metro center directly feeds fiber to other centers. The core of the network, if that’s an appropriate term, is much more optical than traditional IP.
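A back-of-the-envelope calculation shows why direct metro meshing matters for latency; the distances, hop counts, and per-hop allowance below are my own illustrative assumptions, not measurements:

```python
# Propagation delay in fiber is roughly 5 microseconds per kilometer.
US_PER_KM = 5.0

def one_way_delay_us(km: float, router_hops: int, per_hop_us: float = 50.0) -> float:
    """Propagation plus a nominal per-hop processing/queuing allowance."""
    return km * US_PER_KM + router_hops * per_hop_us

# Two metros 300 km apart: direct fiber mesh versus a dogleg through a core POP.
direct   = one_way_delay_us(300, router_hops=2)
via_core = one_way_delay_us(300 + 400, router_hops=6)
print(f"direct mesh: {direct:.0f} us, via core: {via_core:.0f} us")
# -> direct mesh: 1600 us, via core: 3800 us
```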

Even though “carrier cloud” isn’t likely to drive edge hosting in 2022, and it’s not likely that the metaverse will by itself stimulate the metro-mesh future that I think will sweep networking, we will likely see more pedestrian forces of competition among cloud providers lay the groundwork for radical changes. The cloud providers see what operators can’t or won’t see, which is that connectivity is just a service on-ramp whose only useful attribute is cheapness. There’s no money in offering that sort of thing.

What “metaverse” as a concept does is focus the most aggressive competitors on the metro, and on the fact that service value ends up in the metro, however you define “service” and whoever the “service provider” really is. Metaverse also focuses on the fact that what makes metro different in the future is the integration with edge computing. If the network operators do in fact cede the hosting to the cloud providers, they’ll self-disintermediate on the most important future trend. Hosting wins, so cloud providers win.

The industry as a whole may be starting to see the light on this. Juniper posted a podcast that reflects the view that the public cloud providers will end up providing edge hosting, even out to the cell sites. Operators would then have to try to make QoS valuable, but that presupposes that the cloud providers would even consider buying connection services for edge and metaverse from operators. If you can afford to invest in a cloud or edge data center, you can probably absorb the cost of linking it via fiber to some or all of your other centers. Particularly if you can sell “cloud networking” to enterprises to replace their VPNs.

Cloud-provider-dominated edge computing would lead to cloud-provider-dominated metro, and that IMHO would mean that the cloud providers would likely mesh edge enclaves, regional centers, and other hosting points with fiber. That would essentially displace the packet core, moving most routing capacity to the metro areas and making metro the focus of electrical networking capacity overall. It would also create resources whereby the cloud providers could expand the intra-cloud networking they’re already starting to offer, and offer a variety of cloud-and-edge-related VPN services.

Now, in closing, let’s look at the question of health and economics. It seems clear that Omicron doesn’t cause as many serious cases as Delta, and it’s also clear that vaccination has reduced the risk of serious problems even further. We’re coming to terms with a form of COVID that is more like the flu, and that means we’re accepting a level of infection risk as the price for minimal social and economic disruption. It’s those socioeconomic factors that created lockdowns and business issues, and so it appears that 2022 doesn’t have another wave of impacts in store for us, just another wave of virus. Barring another variant with greater impact, we should see 2022 as a year we shift more toward “normal” behavior. The forces that shape networking in 2022, then, are already visible and operating on the markets.

Can the cloud providers do networking, leaving operators with little or nothing beyond the access networks, which are the lowest-margin and therefore least-competitive pieces? If they do, then it may mean that some form of public/private partnership will be necessary to maintain investment in broadband access. The form may vary, with some countries perhaps opting for something like Australia’s NBN and others creating broader subsidies, but the end result would be the same. Value may be tapped off the pure OTT players, but it will come to rest in the public cloud and not carrier cloud. This year, 2022, is likely the last chance for operators to empower themselves at the service level. The most important question of the year is whether they’ll see that, and do it.

The Internet in 2022: Web3, Metaverse, and More

The Internet and the trends in over-the-top (OTT) services are the last of the three areas whose 2022 fortunes I’m going to blog about. It is far from the least important, but it is perhaps the most difficult to predict. In fact, predicting pure OTT factors is as difficult as predicting taste in music, so I’m not going there. Instead, I’ll focus on the areas where significant shifts in Internet-think might create significant shifts in network infrastructure and services. I’ll sum up all of the forces covered here and in the two blogs earlier this week in my first blog of the New Year, which will be my next blog since I’m taking a break until January 4th, 2022.

The majority of network traffic is created by use of the Internet. The Internet, as a means of reaching customers and partners, is also the primary driver of cloud consumption, not only for enterprises but as the delivery mechanism for what’s usually called “OTT” services, including streaming video and social media. We’ve seen sharp changes in the use of these services through the pandemic so far, and the Street is now focused (not surprisingly) on what web players and trends will dominate as COVID becomes less of an issue, both socially and financially.

There are two trends that the Street sees as impacting the Internet through 2022, one more evolutionary and one more revolutionary. The former is characterized in Street research as “Web 3”, which is a (many would say, vast) oversimplification of the technical model that’s been given that name. The latter, of course, is the “metaverse”.

Web 3 (or 3.0, or Web3, as it’s variously known in technical circles) is a major shift in the Internet designed to build around an identity-security-and-trust model. Think of blockchains ruling the world and you have a notion of what we’re talking about. I blogged about Web 3 HERE, but the Street is thinking about it less as a technology shift than as a response to specific problems with Internet security, identity, and monetization. Or, perhaps, the Street is simply into the hype game. In any event, the scope of impact for “true” Web 3 technology would be so substantial that there is zero chance anything would come of it in 2022, so what I suggest is that the Street is thinking about more band-aid-like changes made to the three areas I mentioned.

The metaverse is another area where the Street is oversimplifying; indeed, it completely misses the full scope of technical changes required. They see it as something an OTT player like Meta (Facebook) could drive on their own, when in fact the creation of a global virtual world would demand major changes in computing and networking. I’ve also blogged about the metaverse (HERE and HERE), and you can reference these pieces for the details on just what has to happen.

Before we look at these trends, let’s look at the baseline. COVID has likely forever altered our tradeoff between the real world and the online world. Many of our youth have long been as much or more citizens of the virtual world as of the real, but lockdowns and simple fear of contagion have turned many toward online shopping and virtual visiting. We will see some relaxation of the drivers of this shift in 2022, but Omicron makes it clear that we’re unlikely to be free of COVID risks even in 2023. Maybe never.

The battles for TV carriage, fought by device vendors like Roku, networks and entertainment companies, and streaming TV services, are a demonstration that people have accustomed themselves to getting video in their living room instead of going out to see it. Live events like concerts still draw the young, generally less concerned about infection, but even sporting events have been hit by the fear factor. In this environment, everyone wants virtual eyeballs, and there’s a strong shift away from pure ad sponsorship to payment for content.

This is all creating the on-ramp for our evolutionary and revolutionary issues, of course. If more money is going to be spent online, then there’s more need for security and identity support, and monetization is a whole new game if content is more important. If people are going to interact virtually, there’s a strong desire to move beyond talking heads and group meetings where every new attendee makes the size of everyone’s face smaller and smaller. The Web 3 and metaverse moves are two different paths to the same destination: a future where what’s online is just as important as what’s real, and maybe more important. We will not achieve this future (either by evolution or revolution) in 2022 or 2023, but we will almost surely see steps toward it. Those steps will define the technology shifts that the Street will validate, and that means the companies whose fortunes will advance. The big question will be the extent to which “revolutionary” metaverse developments percolate beyond simply creating the OTT framework for metaverse presentation.

Creating a more identity-centric online experience is clearly going to have to be driven by a combination of the public cloud providers and the other companies who have significant positions in consumer trust and e-commerce. As it happens, of course, the top companies in both areas are the same—Amazon, Google, and Microsoft—but I’d also add Apple to the mix because the iPhone is so pervasive. These four companies, by providing the tools to build more identity-centric, secure online applications, are the most likely to lead the evolutionary charge. However, they may end up buying smaller players who come up with something truly innovative.

The metaverse area is a lot more complicated, particularly when you consider the issues I raised in my past blogs on the topic. The value of the metaverse increases significantly with the “realism” of its experience, meaning the extent to which avatars and environments in the metaverse closely track the things they represent. Any OTT, including Meta, could frame the presentation side of the metaverse, but that doesn’t address the challenge of distributing a worldview of a single locale to a set of people who are inhabiting multiple, widely separated, places in the real world.

The question in our Internet-revolution area is whether the presentation side (the social-media types) will redefine computing and networking to erase the geographic-separation barrier, or whether cloud providers (or other players, in theory) will solve the separation problem and provide metaverse-building tools that will then address the presentation side. Which path we take will have an enormous impact on our future, on technology evolution through all of IT, and on vendor fortunes. What it boils down to is whether a “metaverse” is a social network or an application.

Meta is surely capable of creating a true, highly integrated, metaverse that is “multi-tenant” in that it contains “groups” that are communities unto themselves. They’d have to deploy a lot of additional servers and build a pretty strong mesh network, but it could be done. If they did it, they’d likely tap off much of the metaverse opportunity, and they could end up being what is effectively a network and cloud provider of metaverse-specific resources. Others, meaning my four evolution giants, would have to try to gain entry into the space against a strong incumbent.

But would Meta spend to build infrastructure on that scale? Even covering the US market would be a massive infrastructure investment for them. If they didn’t move with incredible speed, their steps would telegraph a warning to others, particularly the four evolutionary giants, and those giants would likely move themselves. Without a significant first-mover advantage, Meta would have to ask itself whether it would be better to host on someone else’s “metacloud” or to build its own. If some of the Cloud-Three among the Evolutionary-Four were to move aggressively, they could make metaverse hosting the premier edge application, and if they meshed their own edge facilities, they could be the leading providers of network services other than the access piece.

The reason for the “could be” here is that the impact on network services would depend on the extent to which metaverses became a common framework in social networking, collaboration, customer support, and so forth. It would also depend on just how aggressively Meta pursued the use of their metaverse framework as a means of addressing overall metaverse opportunities. Finally, it would depend on whether other players, especially my Big Four evolutionary players, decided to compete directly in metaverse hosting.

These question marks are critical in understanding what might happen to today’s vendors, cloud providers, and network operators. Meta is a champion of open-model computing and networking, and to the extent that they build out infrastructure for the metaverse, they’ll undoubtedly use an open approach, meaning traditional server and network vendors won’t get much out of the deal. Public cloud providers would be likely to build a lot of their own server/network gear too, so the “revolution” of the metaverse would be more revolutionary if Meta or the cloud providers built it.

The evolutionary path, the path that would preserve current incumbencies in services and equipment, would be if network operators stepped up. That could (in theory) happen in two ways. First, the operators could start deploying carrier cloud, meaning edge computing and metro networking. They could mesh their own metro areas, and they could support interconnects (as they do now) in the form of peering agreements to mesh the meshes, so to speak. Second, they could decide to create the transport relationships needed for metro meshing and forget the hosting.

Operators should have started to invest in carrier cloud three years ago if they wanted to optimize their own long-term position in the market. They didn’t, and I don’t see them suddenly getting aggressive in that area now. Even the mesh-transport initiative seems far-fetched to me, given that they’d be creating a new transport network focused on high capacity and low latency that, unless the metaverse opportunity developed quickly, wouldn’t return any new revenue to them.

What could tip the scales here is the equipment vendors, both computing and networking. A shift to open-model networking for metaverse hosting would be a disaster for them, tapping off what would surely be the largest source of new revenue and even slowly eroding their current revenue by displacing current Internet technologies in favor of the new. Network operators don’t invent network technologies (even when they try, they just don’t have the mindset; look at NFV or ONAP). If vendors were to create a viable strategy for metaverse hosting, it wouldn’t guarantee operators would adopt it, but it might facilitate adoption when the risks to operator business models become clear.

Which might happen in 2022. Right now, everything in our revolutionary-Internet pathway depends on just how fast metaverse catches on. Operators respond more to competition than to opportunity; they have a history of presuming demand is there to be exploited, and that it’s just a question of whether anyone besides them is in a position to do that. If we were to see Meta moving very quickly, see a lot of interest developing, see players like Microsoft and Google starting off their own processes to host metaverses, then operators might start getting concerned. Combine that with vendor strategies they could adopt rather than having to invent them, and carrier cloud might be off to the races.

There will be more Internet in 2022. There will likely be signals of differences developing, too, and some of the signals may be strong enough to start bringing about market changes. As always, what we should look for are signs that up-and-coming players are jumping on one of the trends I’ve noted here, to gain some first-mover advantage. If that happens, and if those players do their singing and dancing well enough, we might actually see progress in some or all these areas by the time 2023 rolls around. Meanwhile, Happy Holidays and a Happy New Year!

Networking in 2022

For most who read my blog, network issues and opportunities are paramount. The good news for that group is that the Street almost universally sees networking at the top of the hardware opportunity list. The bad news is that when you look at the companies the Street really likes, you don’t see any of the major network vendors, and in fact just a couple vendors that most would consider “network vendors” at all.

In assessing the network space, my own strategy is to say that there are three factors that we have to consider. The first two, service provider and enterprise networking trends, are covered in this blog. The final factor, the Internet, is covered in the next blog, the one for December 22nd, which will be my last blog of 2021. There’s an interplay between this blog and the next, just as there was an interplay between Monday’s blog on software and the cloud, and this one. There’s no easy way to cover what’s going to happen in 2022, so I’ll ask you to bear with the approach I’ve taken and refer to items in whichever of the three blogs you need to address your own interests.

OK, let’s get started. The top trend in networking, from the Street’s perspective, is 5G. They’re citing “strong 5G handset demand”, which is why the top company pick in “networking” is Apple. They’re also citing strong 5G-related telco capex, of course, and some “pent-up” spending on enterprise networks. Let’s look at these points first.

There’s no question that 5G modernization of telco networks is the biggest source of incremental (non-modernization) capex for the telco space. I’ve noted in many blogs that 5G is the only significant technology that has budget approval, and that approval helped in 2021. In 2022 it may be so critical as to swamp other issues for vendors who want to sell to telcos. However, it’s important to separate “strategic” 5G that would (purportedly) create new mobile network demand, from “tactical” 5G which is just more of the orderly evolution of wireless generations we’ve seen over the last 20 years.

It is true that 5G can deliver better wireless speeds to consumers, but far less true that this makes any real difference to most mobile users. Even video delivery doesn’t require more per-user bandwidth than LTE provides. My straw poll of a couple hundred tech people who’ve made the LTE-to-5G transition shows that less than 10% are “confident” they can tell when they’re on 5G versus an earlier technology, without looking at an indicator on the phone.

Why then the big Street bandwagon for 5G handset vendors? Most of the Street analysts I talk with say that they believe that “leading-edge” consumers will “demand” 5G because it’s the latest and greatest. My straw poll says otherwise; over half of those who have gone to 5G smartphones did so because 5G came on the latest models, and it was time for them to upgrade. The next-largest segment said that they wanted 5G because they believed that coverage available on earlier generations would likely get gradually worse over time. Those who actively wanted the “differences” of 5G were the smallest group. I must note here, though, that my poll included people who were more likely to be knowledgeable about mobile network trends; the average consumer might not have the same view.

5G is important to the Street in telco infrastructure, as I’ve already noted, because it’s budgeted. There is another factor, which is that 5G could favor a broader vendor list in the mobile network space than we’ve had in the past. Three vendors have dominated mobile networking: Ericsson, Huawei, and Nokia. Huawei is suffering from the pushback against Chinese companies for security reasons, and interestingly, neither Ericsson nor Nokia has beaten Cisco in share price over the last year. It seems pretty clear that the Street hasn’t favored mobile incumbents, despite the 5G push.

The reason may be the open-networking trend, which in 5G means Open RAN and O-RAN. Nokia is widely seen by the telcos as the vendor most open-committed, and they’ve beaten rival Ericsson in share price over the last year too. However, part of the stock-price trends relate to growth opportunity, which generally means that the Street likes smaller players with more upside than giant incumbents who can’t gain much market share without a major revolution. Arista, one of the Street favorites, handily beats all the mobile incumbents and Cisco in share price appreciation.

5G would logically mean an increase in traffic, which means IP equipment in general would be expected to show strength. Cisco, the market leader, is at the bottom in terms of share price growth, with rival Juniper doing a bit better and Arista doing a lot better than either of these. Putting things in Street terms, they’re expecting to see Juniper gain some market share and Arista to gain a lot.

Arista’s big plus, says the Street, is the “hyperscalers”, meaning public cloud and other high-density compute players. Juniper gets a nod in this same space, but there’s also Street enthusiasm for Juniper’s AI push, which promises to address operations challenges that are clearly developing for any large-scale network operator, including the cloud providers. Between Cisco and Juniper, the Street tends to like Juniper better.

Then there’s Ciena. They’ve run consistently behind all the network vendors I’ve mentioned in terms of stock price appreciation, until their last quarterly report, at which point they jumped into a favored position (along with Arista, who still outpaces them in stock price). There are some good reasons for this, but there’s also a systemic shift that we need to look at first.

There was a big change in network equipment vendor share pricing around the first of November 2021, one that didn’t mirror a major change in the indexes. Arista, Ciena, and Juniper all saw increases in share price, and I think this is reflecting a move by the Street to favor companies that they saw as having a better chance of gaining market share. The trend didn’t impact Cisco or the mobile vendors, which tells me that the Street was looking more at conventional growth, both in data center switching and perhaps to an extent, in routing. The fact that Arista saw the most appreciation means to me that the Street thinks switching is the better space.

Going back now to the Ciena story, the Street believes that it’s got a strong data center story, which is true. Some analysts also appreciate the story that operators are likely to invest more in capacity than in capacity management, favoring a lot of fiber-level investment to simplify operations up at the packet layer. The second story is more significant in my view than the first, given that, as the Street admits, Ciena is already the leading vendor in the data-center fiber space.

OK, that’s a summary of the Street’s view of networking. Let me now go over what I’m hearing, and what I think. It’s sometimes significantly different.

The first difference is that the Street focuses on technology segments but you really need to look at network topology segments to get the right picture. The most important trend in networking is metro. Metro is where access networks concentrate traffic. Metro is the deepest point where you can apply service intelligence efficiently. Metro/edge is the Last Frontier of hyperscaler deployment, whether you believe the cloud will move out or telcos will take up carrier cloud.

The reason this is important is that the architecture of the metro infrastructure determines the device makeup within it. All of the factors I’ve cited influence the infrastructure, meaning that the extent to which they develop, the pace of that development, and whether metro infrastructure is actually architected or just evolves all establish what device types, and what vendors, might win.

Right now, the Street is assuming that “hyperscaler” opportunity is really just expansion of the public cloud deployments, and that 5G isn’t driving any conspicuous edge deployment at all. That’s possible, of course, and if it happens then metro equals 5G “devices” (which might well be tightly integrated metro server boxes) plus backhaul fiber. Nokia (leader in the Street view for 5G) and Ciena (leader in fiber capacity) win in network equipment, with Arista and Ciena still winning in data center.

In order for this not to happen, you need one or both of two new forces. The first is metro architecture and the second is carrier cloud. If a vendor were to position a cohesive vision of how metro, 5G, edge computing, and new services all combine in a single architectural vision for metro, that vendor could empower its own strategies by showing how they create the metro/edge of the future. On the other hand, if a vendor could define how operators could/must/should deploy their own edge computing, and tie that to 5G, they could make carrier cloud data centers a priority, link it to 5G, and win with it.

The impact of metro architecture and carrier cloud would be felt most directly in the way that operators see white-box technology. The metro area is the hottest focus of investment, where most 5G technology other than towers and radios would be placed. Right now, there is no real “metro architecture” in play, and certainly the big network vendors aren’t competing for a win in the metro. That means that the open-model strategy promoted by 5G in general and O-RAN in particular could define the metro network, and 5G and the hosting of 5G-related services (if they develop) could define edge computing. That would likely encourage white-box deployments in the metro area, which would then validate white-box technology in the places where operator budgets are focused. Obviously, that could create issues for the incumbent network vendors.

Moving on to the enterprise, we come to a major shift that gets essentially no notice from the Street—virtual networking. Virtual networks are a part of cloud data centers, and they’d likely be part of the carrier cloud too, but in those missions they’re generally invisible, and most of the cloud providers have used their own technologies in those places. In the enterprise, we’ve had “SDN” in data centers, which really means virtual networking and often VMware’s NSX, but the big news has been SD-WAN.

Enterprises aren’t likely to make convulsive changes in their network technology. The majority of them fear vendor lock-in and have embraced multiple vendors in multiple places, but they’re disinclined to mix vendors within a single area of their network. Since we have no real drivers of change in the enterprise (5G is such a driver for network operators), technology evolution depends on modernization budgets, and those rarely fund total switches of technology, or vendors. But virtual networks are different.

The SD-WAN mission of virtual networks typically adds a device to sites that had previously been left off MPLS VPNs because of service costs or availability. That device, though, is essentially a branch router, and so it doesn’t represent a major change in architecture. There are also typically software or appliance additions in the data center, and perhaps other major sites, to terminate the SD-WAN virtual network and reintroduce traffic in a normal form. This basic mission of SD-WAN has actually gained a lot of traction, and it’s important because enterprises are roughly five times as likely to select an SD-WAN vendor that isn’t incumbent in their networks as they are to add a new “network vendor” elsewhere.
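As a sketch of the overlay pattern just described (with site names and the path-selection rule invented for illustration, not taken from any product):

```python
from dataclasses import dataclass

@dataclass
class Site:
    name: str
    has_mpls: bool        # on the MPLS VPN?
    has_internet: bool    # has Internet/broadband access?

def overlay_path(src: Site, dst: Site) -> str:
    """Pick an underlay: MPLS when both ends have it, else tunnel over the Internet."""
    if src.has_mpls and dst.has_mpls:
        return "MPLS VPN"
    if src.has_internet and dst.has_internet:
        return "SD-WAN tunnel over the Internet"
    return "no path"

hq     = Site("HQ data center", has_mpls=True,  has_internet=True)
branch = Site("thin branch",    has_mpls=False, has_internet=True)
print(overlay_path(branch, hq))   # -> SD-WAN tunnel over the Internet
```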

Under the covers of SD-WAN’s initial VPN-extending mission, there’s another realization: SD-WAN is an enterprise virtual network in general, one that’s adept not only at networking thin sites but also at networking clouds and cloud applications. In this role, it can provide a much higher level of security than ordinary firewall-based solutions could offer. A very few SD-WANs, including Juniper’s 128 Technology acquisition, offer a strong zero-trust model.
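The zero-trust idea, in a minimal sketch (the session tuples here are hypothetical; this shows the shape of the model, not any vendor’s implementation): sessions are forwarded only when an explicit policy permits them, and everything else is dropped by default.

```python
# Default-deny session admission: only explicitly authorized flows pass.
ALLOWED_SESSIONS = {
    ("branch-pos", "payments-service", 443),
    ("branch-pos", "inventory-service", 8443),
}

def admit(source: str, destination: str, port: int) -> bool:
    """Forward only (source, destination, port) tuples the policy names."""
    return (source, destination, port) in ALLOWED_SESSIONS

assert admit("branch-pos", "payments-service", 443)
assert not admit("branch-pos", "random-host", 22)    # denied by default
```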

Security is the most-supported enterprise network priority, with about a third of enterprises saying that their budget for security is equal to or greater than their budget for network expansion. If SD-WAN taps into security, it could bring virtual networking to the enterprise on a large scale, and that would transform how enterprises use networks and IP. It would then likely transform how enterprises buy IP services, which transforms how those services are sold and built by operators.

Virtual networks would tend to tap features from the transport level, and of course they would also remake how we view network security. Both these truths tend to worry incumbent router vendors in particular, but the fact is that the biggest impact of virtual networks might be that they’d tap off a bit of the pressure for white-box networks in the enterprise.

Networking, then, is highly dependent on two points raised here—metro evolution for network operators and virtual networking evolution for the enterprises. A final force in networking evolution is the Internet, and that will be the topic of the last of my blogs on what to expect for next year.