Is There a Relationship Between Crypto and the Metaverse?

Just where, if anywhere, is the intersection between the metaverse concept and crypto? That might be a key question for a number of reasons, but as usual the answer will depend a lot on how we define two very fuzzy concepts. Whatever the truth is, we can also expect hype to fuzz up the results, particularly since Meta’s objectively very bad quarter virtually assures that the company will be looking to the metaverse to restore its opportunity. Some of what they do will surely involve crypto, and I’ll blog next week about what I think will happen, but today I want to look at the broader question of the metaverse/crypto relationship.

If you read my blog from yesterday you’ll get a sense of the multiple missions that the concept of a metaverse could serve, and also my own definition of what a metaverse is. The prevailing definition, to quote, is that a metaverse is “an artificial reality community where avatars representing people interact.” On LinkedIn, I was told that Wikipedia says that a metaverse is a collection of virtual worlds created for social connection. My thinking is that we need to broaden that to say that a metaverse is an enhanced or alternative framework representing a real-world environment or community. That broader definition is a superset of the other two.

A metaverse’s fundamental requirement is the ability to create a digital twin of a set of real-world things, and an alternative framework in which to represent them. That framework can be designed to create a realistic virtual community, a game, or even a representation of a factory or transportation system. Humans may or may not be represented in it, and inanimate things might be generated or twinned. There is, in my view, absolutely nothing about this broad process that demands any crypto technology.

Whether a metaverse admits of crypto, or benefits from it, depends on how that concept is defined. There are two general definitions out there. First, “crypto” can be a shorthand term for “cryptocurrency”, and that appears to me to be the most broadly used definition of the term. Think Bitcoin, in other words. The second definition is that crypto is short for a cryptographic, blockchain-oriented mechanism for creating an authoritative record of something. The first of the definitions would make crypto an adjunct to the metaverse, but the second might make it a very important feature of its implementation.

Obviously, you can’t pass around real money in a virtual reality. If a metaverse has to support real commerce, meaning payments and receipts, in any way, then we have to be able to translate real-world financial value into the virtual world (the metaverse) and back. Even the ability to buy a weapon or a drink in a game, if the right is backed up by something like a credit card, means that you have to be able to represent the transaction in the metaverse, perhaps with a “local currency”. If payment between players is possible, then that currency has to be convertible to real money, and that means that if you could counterfeit it, you would be effectively creating real money. Cryptocurrency, or a blockchain-authenticated local currency, could be the solution to that. There could be other solutions too, of course.
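To make that concrete, here’s a minimal sketch (in Python, with purely hypothetical names) of what a blockchain-authenticated local-currency ledger might look like. The point is only that hash-chaining makes a counterfeit edit to an earlier transfer detectable, not that any real metaverse would implement it this way.

```python
# A minimal sketch of a hash-chained ledger for a metaverse "local currency".
# All names (MetaverseLedger, record_transfer) are hypothetical; a real system
# would add signatures, consensus, and real-world settlement.
import hashlib
import json
import time

class MetaverseLedger:
    def __init__(self):
        self.entries = []          # ordered, hash-chained transaction records

    def _hash(self, record, prev_hash):
        payload = json.dumps(record, sort_keys=True) + prev_hash
        return hashlib.sha256(payload.encode()).hexdigest()

    def record_transfer(self, payer, payee, amount):
        prev_hash = self.entries[-1]["hash"] if self.entries else "genesis"
        record = {"payer": payer, "payee": payee, "amount": amount, "ts": time.time()}
        self.entries.append({"record": record, "prev": prev_hash,
                             "hash": self._hash(record, prev_hash)})

    def verify(self):
        # Any tampering with an earlier entry breaks every later hash.
        prev = "genesis"
        for e in self.entries:
            if e["prev"] != prev or e["hash"] != self._hash(e["record"], e["prev"]):
                return False
            prev = e["hash"]
        return True
```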

That’s not the end of it. If you pay for a sword in a game, you “have” it. Could players counterfeit swords, and sell them within the game? Perhaps, depending on the rules. If they could, then you also need to be able to represent a virtual element in an authoritative way, which essentially means that things that are “real” in the metaverse might have to be represented as non-fungible tokens (NFTs). An NFT is, in a sense, a cryptocurrency, but in another sense it reflects our second definition, which is that it’s simply a validation that something is a representation of something else.

That’s a pretty broad definition, so we can say that the second definition and application of “crypto” is a lot more nuanced. The most obvious application of that definition is the authentication of the relationship between a “digital twin” and its real-world counterpart. If we’re talking about people and avatars, that would mean assuring the metaverse that a given avatar was what it represented itself to be. Whether that assurance to the metaverse meant assurance to everyone in it would depend on whether the rules of the metaverse allowed someone or something to misrepresent itself. Can I don a disguise? If so, then my identity (the twin-to-reality association) has to be flexible. However, when I buy a sword, I need the metaverse to be as sure of who I am as a clerk in a store would be sure of my identity if the real “me” used a credit card.

The ability of crypto to validate identity is fundamental to many current crypto/blockchain applications, and particularly to Web3 stuff. But it’s not just “human identity” that we have to worry about. In a metaverse-of-things (MoT) IoT application, we might digitally twin a manufacturing process to allow for computer simulation of production control. It would be awkward to say the least if some foreign element could be introduced into the MoT, where it could either bollix up the works or even steal something. A twin of a truck, introduced into a transportation MoT, could end up being loaded with real goods.

In both these applications, there are implications that only add to the potential complexity. Just as you need a wallet for real money, you need a wallet for cryptocurrency, and perhaps a virtual backpack or scabbard to hold your virtual possessions. The elements of an MoT need a “locale” that holds the process, and we need a way of moving things into and out of their repositories, ways that might or might not involve a transfer of ownership. Drawing my sword isn’t the same as giving it to you, or throwing it on the ground, and changes to the ownership relationships have to be made appropriately. Through all of this, the properties of the sword remain.

Except that I could “process” the twin element, which in MoT would reflect real manufacturing steps. Those have to be recorded. If I throw down my sword and it breaks, that has to be reflected in the properties of the sword, and so would having it repaired. Staying with manufacturing/transportation, there are changes of ownership that have to be managed. Crypto/blockchain isn’t the only way to do that, but it’s a logical way given that we have blockchain-based initiatives that accomplish much the same sorts of things today.
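Here’s a small sketch of how that kind of item history might be represented, again with hypothetical names; the essential point is that ownership transfers and “processing” steps are appended to a record rather than overwriting it.

```python
# A sketch of an append-only history for a twinned item (a sword, a truck, a part),
# recording both ownership transfers and "processing" steps such as damage or repair.
# Names are hypothetical; a production system might anchor each event on a blockchain.
from dataclasses import dataclass, field
from typing import List

@dataclass
class ItemEvent:
    kind: str          # "transfer", "damage", "repair", "process_step"
    detail: dict

@dataclass
class TwinnedItem:
    item_id: str
    owner: str
    properties: dict
    history: List[ItemEvent] = field(default_factory=list)

    def transfer(self, new_owner: str):
        self.history.append(ItemEvent("transfer", {"from": self.owner, "to": new_owner}))
        self.owner = new_owner

    def apply(self, kind: str, changes: dict):
        # Damage, repair, or a manufacturing step changes properties but is never erased.
        self.history.append(ItemEvent(kind, changes))
        self.properties.update(changes)
```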

What this all leaves us with is that there is definitely a role for blockchain in what may turn out to be all of the possible metaverse missions. There may also be a role for cryptocurrency and NFTs. However, and to me this is the big “however” point, there are a lot of things we need to have before either blockchains or cryptocurrency/NFTs have to be addressed. The risk of doing without them in the metaverse is silos based around identity and authenticity assurance, but every metaverse implementation could end up being a silo at the architectural level, and that’s a much greater risk.

If anyone is doing the right thing here, with the architecture and with “crypto” in combination, I’d sure like to hear about it. Takers?

More Metaverse Missions?

If you like euphonic comments, how about “more morphs mix metaverse”? It’s pretty clear already that we’re going to see the original concept of the metaverse broadened to the point where it could apply to nearly everything. That’s too bad, because there are some (more euphonics) multi-faceted metaverse missions that actually could impact the architecture and evolution of the concept, and they could get buried in the hype.

The simple definition of a metaverse is that it’s an artificial reality community where avatars representing people interact. That’s what Meta (yes, the Facebook people) seems to have intended. In a sense, this kind of metaverse is an extension of massive multiplayer gaming, and that similarity illustrates the fact that the concept of the metaverse as a platform differs from the concept of the metaverse as a service or social network.

As a platform, a metaverse is a digital twinning framework designed to mimic selected elements of the real world. We could envision a metaverse collaborative mission as one that fairly faithfully mimicked the subset of reality needed to give people a sense of communicating in a real get-together. A game is a mission where only the “identity” of the player is mimicked; the persona the avatar represents isn’t tightly coupled with the real physical person except perhaps in movement, and perhaps not even there. Maybe it’s only a matter of control. As I suggested in earlier blogs, you could also envision an IoT mission for a metaverse, where you digitally twinned not necessarily people but transportation or industrial processes.

What we’re seeing already is an attempt to link blockchain concepts, ranging from NFTs to cryptocurrencies, to a metaverse, then say that anything that involves blockchain or supports crypto is a metaverse. Truth be told, those are more readily linked with the Web3 concept, where identity and financial goals are explicitly part of the mission. That doesn’t mean that you couldn’t have blockchains and crypto and NFTs in a metaverse, only that those things don’t make something a metaverse.

So what does? An architectural model, I think, made up of three specific things.

The first is that digital-twin concept. A metaverse is an alternate reality that draws on real-world elements by synchronizing them in some way with their metaverse equivalent, their twin. Just what gets synchronized and how it’s done can vary.

The second is the concept of a locale. Artificial reality has to reflect the fact that people can’t grasp the infinite well. We live in a vast world, but we see only a piece of it, and what we see and do is contained in that piece, which is our locale. We can define locales differently, of course; a video conference could create a metaverse locale. But a locale is fundamental because it’s the metaverse equivalent of the range of our senses. This means, collaterally, that the metaverse may have to generate environmental elements and even avatars that don’t represent an actual person but play a part: the Dungeons and Dragons non-player character, or NPC.

The third thing is contextual realism. The metaverse isn’t necessarily the real world, or even something meant to mimic it, but whatever it is, it has to be able to present in a way that matches the experience target of its mission. If we’re mimicking a physical meeting, we have to “see” those in our meeting locale and they have to move and speak in a way consistent with a real-world meeting. If we’re in a game and playing a flying creature as our avatar, we have to be able to impart the feeling of flight.

I think that it would be possible to create a single software framework capable of supporting any metaverse model and mission, given that we could define a way of building a metaverse that could provide the general capabilities required for the three elements above. However, the specific way the architecture would work for a given mission would have to fit the subjective nature of metaverses; what makes up each of my three things above will vary depending on the mission.
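To illustrate what such a framework might have to abstract, here’s a highly simplified sketch of the three elements as data structures. Everything here, from the class names to the realism targets, is an assumption made purely for illustration.

```python
# A minimal sketch of the three architectural elements, with hypothetical names:
# a DigitalTwin synchronized from a real-world source, a Locale that bounds what
# each participant perceives, and a per-mission set of realism targets.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class DigitalTwin:
    twin_id: str
    state: dict = field(default_factory=dict)

    def sync(self, real_world_update: dict):
        # What gets synchronized, and how often, is mission-specific.
        self.state.update(real_world_update)

@dataclass
class Locale:
    name: str
    members: List[DigitalTwin] = field(default_factory=list)
    # Contextual realism expressed as mission targets; values are illustrative.
    realism_targets: Dict[str, float] = field(default_factory=lambda: {
        "max_latency_ms": 50.0, "min_frame_rate": 30.0})

    def admit(self, twin: DigitalTwin):
        self.members.append(twin)

    def visible_state(self):
        # A participant perceives only what is inside its locale.
        return {t.twin_id: t.state for t in self.members}
```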

A good example of this is how a metaverse is hosted. Just as a cloud-native application is a mesh of functions, so is a metaverse. The place where a given function is hosted will depend on the specific mission, and in fact some functions would have to be hosted in multiple places and somehow coordinated. For example, a “twin-me” function that would convert a person into an avatar would likely have to live locally to each person. I also speculated in a past blog that a “locale” would have to have a hosting point, a place that drew in the elements of all the twin-me functions and created a unified view of “reality” at the meeting point.

Blockchain, NFT, and crypto enthusiasts see the metaverse as a GPU function because GPUs do the computational heavy lifting in blockchain and crypto processing. I think that this focus misses all three of my points of metaverse functionality because it misses the sense of an artificial reality. The limit to a metaverse is really the limitation of creating a realistic locale, and the biggest barrier to that is probably latency, because latency limits the realism of the metaverse experience.

We could envision a collaborative metaverse with ten people in it, with all ten being in the same general real-world location. We could find a place to host our locale that would present all ten with a sense of participation, providing our “twin-me” functions were adequate. Add in an 11th person located half-a-world away, and we would now have a difficult time making that person feel equivalent to the first ten, because anything they did would be delayed relative to the rest. They’d “see” and “hear” behind the other ten, who would see and hear them delayed from their real actions.
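A little propagation-delay arithmetic shows why. The figures below are rough, and they ignore routing, queuing, and processing, but the conclusion holds.

```python
# A back-of-the-envelope check on why the 11th participant lags: propagation delay
# alone, ignoring routing, queuing, and processing. Figures are illustrative.
FIBER_KM_PER_MS = 200.0      # light in fiber covers roughly 200 km per millisecond

def one_way_delay_ms(distance_km: float) -> float:
    return distance_km / FIBER_KM_PER_MS

print(one_way_delay_ms(50))      # across town: ~0.25 ms, easily "real time"
print(one_way_delay_ms(20000))   # half a world away: ~100 ms each way, ~200 ms round trip
```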

This doesn’t mean that there’s no GPU mission in metaverse-building. I think that the twin-me process could well be GPU-intensive, and so would the locale-hosting activity, because the locale would have to be rendered from the perspective of each inhabitant/avatar. The important thing is contextual realism, which GPUs would contribute to but which latency would tend to kill. Thus, it’s not so much the GPU as where you could put it, particularly with regard to locale.

Everyone virtually-sitting in a virtual-room would have a perspective on the contents, and of each other. Do we create that perspective centrally for all and send it out? Not as a visual field unless our metaverse is very simple, because the time required to transmit it would be large. More likely we’d present the room and inanimate surroundings as a kind of CAD model and have it rendered locally for each inhabitant.
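A quick, back-of-the-envelope comparison shows why shipping a scene model beats shipping a rendered visual field; the resolution, frame rate, and update sizes below are purely illustrative assumptions.

```python
# Why we'd ship a scene model rather than a rendered visual field: rough arithmetic.
def raw_frame_mbps(width, height, bytes_per_pixel, fps):
    return width * height * bytes_per_pixel * fps * 8 / 1_000_000

def scene_delta_mbps(avatars, bytes_per_update, updates_per_sec):
    return avatars * bytes_per_update * updates_per_sec * 8 / 1_000_000

print(raw_frame_mbps(1920, 1080, 3, 30))     # ~1,493 Mbps uncompressed, per viewer
print(scene_delta_mbps(10, 200, 30))         # ~0.48 Mbps of pose/state updates for 10 avatars
```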

This sort of approach would tend to offload the creation of the metaverse’s visual framework from the locale host to the user’s location, but I think that business uses of a metaverse are likely to have a “local locale” host to represent their employees. That means that metaverse applications would be highly distributed, perhaps the most distributed of any cloud applications. It also means that there would be a significant opportunity for the creation of custom metaverse appliances, and of course for edge computing.

The connection between metaverse visualization, metaverse “twin-me”, and gaming is obvious, and I can’t help but wonder whether Microsoft and Sony used that as part of their justification for buying gaming companies. However, there’s a lot to true metaversing that simple visualizing or twinning doesn’t cover. Microsoft, with Azure, has an inroad into that broader issue set, and Sony doesn’t. They may need to acquire other services, which raises the question of who will offer them.

The metaverse concept is really the largest potential driver of change in both computing and networking, because the need to control latency to support widely distributed communities would tend to drive both edge computing and edge meshing. That would redefine the structure of both, and open a lot of opportunities for cloud providers, vendors, and even network operators down the line. And perhaps not that far down the line; I think we could see some significant movement in the space before the end of the year.

The SD-WAN Wars are Coming

This is going to be a year of great change in networking, which means both great opportunities and great risks. In particular, we really seem to be setting up for a major shift in the virtual networking and SD-WAN space. The question, as it often is, is what exactly we’re going to be fighting over, and who’s going to take what critical positioning steps to grab control of the new situation.

SD-WAN is already hot, and its fundamental value proposition, lower-cost VPNs, is getting hotter. As I noted last week, a major improvement in consumer broadband technology drives the cost of high-speed Internet connectivity down. That makes SD-WAN more attractive than MPLS VPNs, cost-wise at least. SD-WAN as a VPN extension, or even a VPN replacement, is essentially old news here, but if the economies of consumer broadband drive more businesses to reconsider IP VPNs even where they’re available, the technology could get a boost.

Even with a boost, though, differentiation is the key to success in sales, which means that SD-WAN vendors have to push beyond the obvious. I’ll illustrate what I think is happening with three references, below.

One place they’ve been pushing is undergoing its own revolution: the cloud. Even before COVID, there was growing enterprise interest in creating Internet-based portals for customers and partners, and when work-from-home (WFH) was added in 2020, we saw a major upswing in the use of the Internet as an employee empowerment tool. The cloud was the primary vehicle enterprises used to create portals to their legacy applications, and that’s been (and will remain) the primary driver of enterprise cloud commitment. SD-WAN, in software form, can create a direct-to-cloud connection just as it can support a thin-site connection.

This could be critical for a number of reasons, not the least being that while cloud providers are now starting to offer VPN-like services for enterprises’ cloud-hosted elements, these aren’t helpful in multi-cloud because they’re cloud-specific. On the other hand, an SD-WAN could link the cloud to the VPN, whatever cloud we’re talking about, and could also link branch offices and other remote sites, even home workers. Cloud connectivity is already recognized as a new SD-WAN driver, but it’s going to get a lot more recognized this year.

Then, of course, there’s security. The first of the three sources I promised to cite, from VentureBeat, is about what should have been the security focus all along, zero trust. I’d love to say that this piece frames the future of security, and the relationship between it and SD-WAN, but it totally misses the mark. The story doesn’t even talk about the real zero-trust model, which has to be based on substituting explicit connection permission for IP networks’ traditional promiscuous connectivity.

Like any term that gets media attention, zero-trust has gotten expanded to the point where it’s about almost anything and everything related to security. That’s probably largely due to the fact that software vendors and network vendors with established security portfolios aren’t particularly interested in seeing their business impacted by something new, but the fact is that the “trust” that we’re talking about in zero-trust is about trust in connectivity.

IP networks are inherently promiscuous in terms of connectivity, meaning that if you don’t want some connections to be made, you have to do something to block them. Traditionally that blocking has evolved into an endpoint feature, a “firewall” that stands between a user or application and the wide and evil world. However, once you decide that you’re going to create a higher-layer network service, as virtual networking and SD-WAN do, you have a chance to define connection rights there.

Back in 2019, I did a short report on SD-WAN, and in the report, I made the point that the number one requirement for SD-WAN was session awareness, meaning the ability of the software to recognize users and applications, and the network relationships (sessions) between them. Session awareness means that an SD-WAN can control what sessions are permitted, and that’s what I’ve believed from the first is the foundation not only of zero-trust security, but of security overall.

It’s possible to introduce something like session awareness via an expanded definition of a firewall, but that approach has challenges. Firewalls are per-packet elements; they look at packet headers to decide what to admit and what to reject. Making them aware of even the allowed IP addresses and ports, let alone the list of those not allowed, would make them impossibly complex and introduce significant latency. You need to introduce session awareness at the connection level, and manage the overhead.
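Here’s a sketch of what connection-level permission might look like, as opposed to per-packet filtering. The policy table and names are hypothetical, and real SD-WAN implementations will differ, but the default-deny-at-session-setup idea is the heart of zero trust.

```python
# A sketch of connection-level (session) permission, as opposed to per-packet filtering.
# The policy table and session key are hypothetical.
ALLOWED_SESSIONS = {
    ("branch-pos-app", "payments-service"),
    ("hr-portal-user", "hr-application"),
}

active_sessions = {}

def admit_session(user_or_app: str, destination: str, session_id: str) -> bool:
    # The decision is made once, at session setup; packets of an admitted
    # session are then forwarded without re-evaluating the policy per packet.
    if (user_or_app, destination) in ALLOWED_SESSIONS:
        active_sessions[session_id] = (user_or_app, destination)
        return True
    return False   # default-deny: no explicit permission, no connectivity
```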

I know of only two companies in the SD-WAN space who even claim any level of session awareness, and that hasn’t changed for several years. If there were a realization of the security side of SD-WAN, the connection with zero trust, you’d expect to see SD-WAN vendors adding the features to their own products. They haven’t, they’ve only band-aided and fuzzied up the concept with a loose link to firewalls.

What are vendors doing? That’s the target of my second reference. Cisco is of course the gorilla of network equipment, and they’ve recently announced a link between SD-WAN and their WebEx collaboration. This is consistent with recent announcements that have linked their SD-WAN to cloud and multi-cloud. The Cisco drive is sales-friendly, tactical, but it doesn’t reflect any Cisco awareness of a seismic shift in SD-WAN and virtual networking. Yes, as I’ve noted many times, Cisco likes to be a “fast follower”, but it seems to me that their recent announcements epitomize “follower” more than “fast”.

Cisco isn’t going to drive an SD-WAN or virtual-network revolution. Like most SD-WAN players, they’re committed to simple changes to their base technology, which means that even a strong cloud position is a bit of work. Security? Forget it; they’re well behind the positioning of other SD-WAN providers.

Including arch-rival Juniper. Juniper’s acquisition of 128 Technology gave them a major edge in the technology of SD-WAN and the implementation of a true zero-trust model. Their most recent announcement on SD-WAN, the third of my references, linked their “Session Smart Routing” (128 Technology) approach with Mist management, which not only simplifies operations for the typically small-to-fringe SD-WAN sites where local support is likely unavailable, but also makes their solution more attractive as a managed service. MSPs are a major conduit for SD-WAN sales, and the operational benefits of Mist would also make the Juniper strategy just as attractive to network operators.

One way or the other, SD-WAN is going to grow significantly. As it does, it’s inevitable that the market looks further and harder for differentiation, particularly when what looks like it might be developing is a true shift from a limited SD-WAN position to a much more interesting and important virtual-network positioning. The improvements in FWA and fiber broadband are priming the pump now, but it’s going to be the cloud and security that deliver buckets of opportunity. You can bet that these areas will be getting a lot of attention in 2022, but buyers will need to beware of the tendency of vendors to position old technology to address new missions. Vendors will need to start thinking about making real enhancements to some creaky old offerings, because things will get real, and very quickly, in the SD-WAN wars.

Reading a Future Path from Juniper’s Current Quarter

Juniper surprised a lot of people with its quarterly earnings report, beating on earnings by over 2% and on revenue by nearly 6%. Surprise was likely not appropriate. In my blog on January 20th, I pointed out that Juniper was unfairly (in my view) assigned the smallest upside by the Street. No, I didn’t know anything about the quarter just ended, just that Juniper had acquired what I think are the strongest technology assets given the state of networking. It seemed to me obvious that they’d start paying off, and they have. The big question, then, is how high can they go?

I don’t think there’s much doubt that Juniper’s quarter owes much to its “three pillars” of networking, Automated WAN, AI-Driven Enterprise and Cloud-Ready Data Center, talked about in some detail in its fall “influencers’ event”. In particular, I think that Juniper has been smart to introduce AI as an operations enhancement tool across both enterprise and service providers, WAN and data center. Given that operators are demonstrably reluctant to dip their toes into new revenue streams, they can solve their profit-per-bit problem only by improving operations.

Operations costs and issues are also a big priority for enterprises and cloud providers. Networks are complicated, and as we add in additional technical features and elements to improve networks’ ability to serve their missions, we make them more complicated. This threatens the stability of networks, which of course threatens the missions we’re trying to support better. Adding AI into the picture promises a reduction in the errors that, in particular, are creating embarrassment and financial losses.

All of these positive drivers are durable enough to serve Juniper well through 2022, and perhaps even into 2023, in the enterprise and cloud provider spaces. For network operators, the problem of profit compression is getting acute, and 5G deployment is creating a need to deploy new infrastructure that tends to cement in an architecture, whether it’s deliberate or default. Most Street analysts agree that the service provider space is Juniper’s biggest systemic market risk.

Juniper probably realized this as early as last spring, when they did their “Cloud Metro” announcement. Obviously, a cloud/metro strategy is aimed at the operators and also likely the cloud providers, and it could reflect the way we’d evolve to edge computing. I blogged on Juniper’s positioning when they told their story, and what they announced was essentially the network and operations framework for a future where services were created by a combination of network connectivity and hosted features. It’s a great notion, but one that I think could have been positioned more aggressively (as I noted in my blog).

The reason for my concern is Juniper’s biggest competitive market risk, Cisco. Cisco isn’t an innovator (they characterize their positioning as seeking a “fast follower” role), and in fact they’re already starting to counter-punch against Juniper’s SD-WAN and AI/operations successes. The best strategy, IMHO, to counter someone who is “fast-following” you is to lead faster, to get far enough out in front and change the dialog enough so as to make their laggard position obvious and uncomfortable. That’s particularly true when you have as strong a technology asset base as Juniper has.

Juniper might be making its technology base even stronger, too. They recently announced some new ASICs, one (Trio 6) that targets what Juniper has described as “Built for the unknown”, at the edge. This chip fits into Juniper’s MX router line, generally targeted at service-edge missions. The other, the Express 5, provides accelerated packet handling for Juniper’s big-iron PTX. You could easily see both of these fitting in a metro mission heavy on edge computing and so tightly coupled to data center technology.

The need for tight coupling between network and data center isn’t limited to metro, or to service providers, or cloud providers. The fact is that enterprises’ network strategies have been driven largely from the data center for twenty years or more, and so “Cloud Metro” is a mission for cloud-coupled networking technology that has an almost-universal application. Recall, too, that Juniper’s Apstra acquisition puts it in an almost unique position of having a strategy to couple their networking to almost any data center switching strategy.

Cloud-coupled networking has another dimension, which is that of virtual networking. In any environment where software nuggets are deployed, redeployed, and scaled in a highly dynamic way, you really need a virtual network to create the agility. Add to that the fact that, as I noted last week, consumer broadband enhancements in capacity and quality favor SD-WAN, and the combination of Juniper’s Contrail and 128 Technology Session Smart Routing look very, very, good.

None of the aggressive stuff I’m projecting here was mentioned in the earnings call, not even Cloud Metro. That’s not necessarily a surprise or a bad thing, because these days earnings calls are all about the current situation; companies have been reluctant to talk about positioning stuff since Sarbanes-Oxley twenty years ago. The big question is whether Juniper will, at some point, position their future potential, making it a commitment by the company to buy into a specific model of network/cloud evolution.

There are always pluses and minuses associated with aggressive positioning, of course. The pluses are obvious; you get to define a space and set the reference for all who follow you there…if you’re right. The minuses are all based on what happens if you’re wrong, if the market doesn’t develop as you expect. From a PR perspective, though, being aggressive is almost always the best approach, because nobody takes back any good PR they’ve given you. Rival Cisco proved back in the pre-SOX days that a five-phase plan, announced with the statement that you were already in phase two, always got good ink even if you never delivered on it and it never even proved useful.

The biggest reason for naked aggression for Juniper may have nothing to do with Cisco, but a lot to do with a different kind of competition. In the metro and even in the data center, the cloud and our evolved notion of services are combining to create a critical partnership between networking and hosting. In any partnership, there always seems to be a senior partner. Will it be a network vendor or a hosting vendor?

Despite the fact that Cisco has server assets, I don’t think they have any intention of being an aggressive player in defining that network/cloud relationship. That means Juniper is the only major player on the network side that has a shot. If it’s not Juniper, then it opens a path for a hosting vendor to do the job; perhaps VMware or IBM/Red Hat. Maybe even Dell or HPE. Or, just perhaps, a lower-tier network vendor gets anointed. Whoever defines the space will then shape the transformation of IT decisively, and almost surely in favor of the technology they represent. Can networking win? Yes, if Juniper or some other smaller network vendor steps up. Otherwise, we can expect to see hosting dominate the partnership and the metro, and the data center to dominate the enterprise network going forward.

White boxes and hosted network features are either fixtures of new services or indicators of commoditization. It may well be that 2022 is when we’ll see the market decide between these choices.

The Broadband Explosion Could Create Collateral Impacts

Broadband change is in the wind, literally as well as figuratively. In the figurative sense, it’s clear that telcos and cablecos alike believe that they have no option but to make consumer broadband profitable in itself. For some, such as Verizon, that means literally taking broadband to the sky, with fixed wireless or millimeter-wave technology. AT&T, long a laggard with regard to fiber to the home, is now offering multi-gig service tiers. It’s clear that all of this will drive other changes, but what?

In their most recent quarter, Verizon reported 78 thousand FWA adds, up from 55 thousand last quarter (residential and business in both cases) compared with 55 thousand Fios adds. Yes, Verizon has been deploying Fios for a long time, but the fact that its new wireless millimeter-wave service has passed Fios in incremental deployments is still impressive. It proves that the technology can be valuable as a means of providing high-speed broadband where fiber isn’t quite cheap enough. It won’t bridge the digital divide, but it might at least bridge the digital suburban off-ramp.

AT&T’s decision to push fiber-based broadband to 2 and 5 gig speeds is an admission that it needs to offer premium broadband or risk having someone else steal a lot of customers. AT&T’s wireline footprint is largely overlapped by at least one cable competitor, and relentless advances in DOCSIS mean that cable could be that “someone else”. Not to mention the risk of local competitors in areas where demand density is high, including deals involving partnerships with state/local government.

We’re not going to see gigabit rates from the broadband subsidies now floating about, but it is very likely that even many rural areas will have broadband good enough to support streaming video, and that creates the first of the secondary changes we’re going to talk about.

Cable companies got started by syndicating access to multiple live TV channels at a time when broadband capacity couldn’t deliver streaming live TV in the form we have today. Obviously it now can, and for a growing number of customers. Does this mean that the streaming players will eat the linear live TV model? Yes, but the victory may be short-lived because the networks may eat the streaming players.

What I’ve heard off-the-record from every network, studio, and video content creator is that they’re happy to have streaming syndicators as long as what they do is resell bundled network/content TV and video material and create a multi-channel GUI around it. The old “content is king” story is getting new life as the TV networks in particular realize that they need to brand their own streaming service. Remember that in the recent dispute with YouTube TV, NBCU wanted Google to sell a Peacock service as a bundle rather than license separate channels. I think that’s what everyone wants, and of course that isn’t a huge opportunity for those already in the streaming-multichannel-TV business.

It may not be any opportunity at all, in fact, because there are already players (including, says the rumor, Apple) who see themselves as creating the ultimate video front-end, one that would integrate with every provider of content, live or on demand. Amazon, Google, and Microsoft are all said to be exploring the same option, to include cloud storage of “recorded” live material. Roku is also said to be looking into being a universal content front-end. Google, of course, already has YouTube and YouTube TV, and anything they do here would likely be held in reserve until it was clear that their YouTube TV property was under threat.

This video-front-end mission requires APIs that would be used to integrate the content, and that opens another likely change, which is the growth of content-for-syndication players. Today, a small new “network”, a creator of limited specialty content, has a rough time because their material isn’t presented as one choice among a broad set of “what’s on?” options. A syndication player could offer their APIs to anyone, and a new content player could integrate with them. Since there is IMHO zero chance that this new content front-end wouldn’t offer both on-demand and live material, any content could be integrated with the front-end element, creating a kind of “open-Roku” form.
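To show what I mean by an integration API, here’s a sketch of what a minimal content-syndication interface might look like. Every name in it is hypothetical; it’s intended only to illustrate the “open front-end” idea, not any vendor’s actual offering.

```python
# A sketch of a hypothetical "open front-end" syndication interface.
from dataclasses import dataclass
from typing import List, Protocol

@dataclass
class ContentItem:
    title: str
    provider: str
    live: bool          # live channel vs on-demand asset
    stream_url: str

class ContentProvider(Protocol):
    def catalog(self) -> List[ContentItem]: ...
    def resolve(self, item: ContentItem) -> str: ...   # returns a playable stream URL

def whats_on(providers: List[ContentProvider]) -> List[ContentItem]:
    # The front-end's "what's on?" view is just the union of every provider's catalog,
    # which is what would let a small specialty network appear alongside the majors.
    return [item for p in providers for item in p.catalog()]
```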

This is a massive shift, of course, which means it will take a lot of time to complete. The near-term initiatives of networks to build their own streaming brands are a clear indicator of where they’d like things to go, and that they’re taking careful steps to move things along. That means maintaining syndication deals with streaming aggregators until their direct streaming relationships demonstrate they can provide staying power. We should expect to see more and more content licensing disputes between networks and streaming services, some going beyond the current up-to-the-brink posturing and actually resulting in the loss of some material for a more significant period. At some point, the “significant period” will start to mean “forever” for the popular network material.

All this is going to impact the market, but it’s not the end of the impact of better broadband. If we assume, as we should, that urban/suburban services are heading above the gig level in terms of top-tier bandwidth, we have to assume that “residential” broadband is going to offer a major cost advantage versus traditional business services.

The cost per bit of residential broadband has been far lower than the equivalent cost for business broadband, but companies have paid the differential because of the difference in availability and QoS. Today, with more and more of the front-end piece of every business application migrating to the cloud, and more and more of application networking being carried on the Internet, it’s looking questionable whether the availability/quality differentiator for business broadband can hold.

The answer likely lies in just what “gigabit” broadband really means. A packet interface is “clocked” at a data interface rate, meaning that packets are delivered to the user/network interface at a rate that corresponds to the service’s “bandwidth”. Most users who have high-quality broadband and take the time to assess the real speed of their service find that it doesn’t match the clock rate. Deeper congestion, or deeper capacity metering, or deeper constriction of capacity at content sources or applications, can all reduce the end-to-end delivery rate of broadband. Upstream versus downstream performance can also vary, both in clock speed (asymmetrical services like 100/20 Mbps) and in actual end-to-end delivery rate. These variations won’t typically mean much to users, but they could mean a lot to business.

Even a big household may be challenged to consume a gigabit connection, streaming video and making video calls. A branch office of an enterprise, with anywhere from a half-dozen to a hundred or so workers, could do so much more easily, particularly if there are “deeper” points of constriction. Feed a dozen gigabit connections into an aggregation point that has a single gigabit trunk going out, and it’s obvious that if the usage of those connections rises, the effective performance of every connection will be less than the clocked value.
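The arithmetic of that oversubscription is simple enough to sketch; the demand figures below are illustrative.

```python
# Rough oversubscription arithmetic for "a dozen gigabit connections feeding a single
# gigabit trunk"; demand figures are illustrative.
access_links = 12
trunk_rate_gbps = 1.0

for avg_demand_per_link in (0.05, 0.25, 0.50):    # Gbps each link tries to push
    offered = access_links * avg_demand_per_link
    if offered <= trunk_rate_gbps:
        per_link = avg_demand_per_link             # trunk isn't the constraint
    else:
        per_link = trunk_rate_gbps / access_links  # fair share of the congested trunk
    print(f"demand {avg_demand_per_link:.2f} Gbps/link -> delivered ~{per_link:.3f} Gbps/link")
```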

The obvious question is whether some variant on consumer-level broadband access could be leveraged as a business service. The initial impact of radically improved broadband speeds in the consumer space would be a significant advance in the use of SD-WAN as opposed to IP VPNs in branch locations. The limiting factor on this trend would be the deeper constriction in performance just noted. Most SD-WAN has additional header overhead, and that means that some “throughput” isn’t “goodput”, to use the popular terms. Even where header overhead is minimal, though, it’s possible that consumer broadband won’t deliver any better end-to-end performance at gig speeds than it does at a tenth of that. Could that encourage a change in service? There are two options.
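As a rough illustration of the throughput/goodput gap, assume some tens of bytes of added tunnel headers per packet; the numbers below are assumptions, not measurements of any particular SD-WAN.

```python
# Goodput versus throughput with tunnel/header overhead; byte counts are illustrative.
mtu_bytes = 1500
overhead_bytes = 70        # assumed extra encapsulation headers per packet on the wire

def goodput_fraction(packet_size: int, overhead: int) -> float:
    # Of every packet on the wire, only the non-header portion is useful payload.
    return (packet_size - overhead) / packet_size

print(goodput_fraction(mtu_bytes, overhead_bytes))   # ~0.953: roughly 5% of throughput isn't goodput
```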

The obvious option would be to establish a “premium” handling policy at those deeper points in the network. Hop off the access broadband network and turn left, and you get consumer Internet. Turn right and you get business broadband. The advantage of this is that it leverages mass-market access technology to lower the cost of business service. The disadvantage is that there are certainly business sites located in areas where business density is too low to make too much premium infrastructure profitable.

The second option would be “Internet QoS”, which in debates on net neutrality tends to be called “paid prioritization”. If premium handling were made a broad option in mass-market infrastructure, then it could be used by businesses to support SD-WAN service, and used by consumers where they needed better-than-best-efforts. The advantage of this is clear; we end up with better broadband. The disadvantages are equally clear.

Few doubt that paid prioritization would result in erosion of the standard service. At the very least, we could expect that broadband wouldn’t get “better” unless we paid more to make that happen. Given the quality broadband dependence of the OTT industry and the legion of startups and VCs that the industry empowers, and given the fact that “net neutrality” has been a political and regulatory football, this option looks like it’s a non-starter, at least in the near term.

The biggest barrier to either option, though, is the profits of the operators whose infrastructure would have to be changed. To invest in quality left-turn-or-right business handling of broadband is to support a migration of your customers from an expensive service to a cheaper one. That’s not a formula for success at a time when your profit per bit is already sinking.

We’ve had a period of some stability in the broadband space, with technology evolving rather than revolutionizing. We may be seeing an end to that now, and the shift will create opportunities and risks for both vendors and operators.

Striking a New Electro-Optical Balance in Network-Building…Maybe

For roughly a decade, there’s been a growing debate on the balance between optical technology and electrical technology in networks. The optical vendors, notably Ciena, have (not surprisingly) been weighing in on the topic the most, given that they’re likely beneficiaries of a shift toward optical networks. A recent Light Reading piece talks about Ciena’s view. I do think that there are forces operating to shift the focus of operators more to transport optics, but I think they’re a part of a larger potential architectural shift, and we need to explore the big picture before we draw local conclusions.

Networks have always relied on “trunks” and “nodes”, and trunk technology has typically relied on economy of scale in bandwidth/capacity. Fat pipes are cheaper per unit of capacity than thin ones, so there’s a benefit to aggregation. A big part of this is the fact that the cost of a trunk includes a hefty charge for deployment of the physical media, a cost that’s largely independent of capacity.
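A toy cost model shows why fat pipes win on unit cost when deployment cost dominates; all dollar figures here are invented purely for illustration.

```python
# Why fat pipes win on cost per unit of capacity: a fixed deployment cost dominates,
# so spreading it over more capacity drives unit cost down. Figures are illustrative.
def cost_per_gbps(deploy_cost: float, electronics_cost_per_gbps: float, capacity_gbps: float) -> float:
    return (deploy_cost + electronics_cost_per_gbps * capacity_gbps) / capacity_gbps

print(cost_per_gbps(100_000, 500, 10))     # thin pipe:  ~$10,500 per Gbps
print(cost_per_gbps(100_000, 500, 400))    # fat pipe:   ~$750 per Gbps
```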

With fiber optics and dense wavelength-division multiplexing (DWDM) you can create an optical trunk with very high capacity for a relatively small TCO premium over a lower-capacity pipe. The trick is to aggregate enough traffic to utilize the capacity. If we assumed a static load on the network, I believe that the opto-electrical dynamic wouldn’t be under much pressure, but network load is increasing. Fiber has crept into the access network, but not the kind of fiber the article discusses, fiber that needs IP integration. That requirement exists when fiber is deployed deeper, creating a network topology that uses less electrical-layer handling.

Access networks are the on-ramps to the broader network, and access networks terminate in what used to be called “edge offices”, which in turn link to deeper facilities. Overall, there are roughly twenty thousand aggregation sites in the US and perhaps a hundred thousand globally. Consumer broadband has driven up the traffic level in the access network (wireline and wireless), with video being the major contributor. Higher-capacity access connections mean higher capacity is needed to create trunks at the aggregation points. Aggregation at the edge is typically done in electrical devices, and even if optical trunks are used the devices involved are still routers.

If you really want to see IP/optical convergence, you need to look at what happens behind (deeper than) the edge aggregation. There, the big question is the number of trunks, which if you assume essentially unlimited optical capacity per trunk, depends on how you interconnect the aggregation points. You obviously can’t mesh a hundred thousand aggregation points globally, or even twenty-thousand in the US. If operators are truly interested in IP/optical convergence, then they’re postulating more meshing, and a need to transit through deeper aggregation points with lower cost and latency.
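The arithmetic behind “you obviously can’t mesh” is simple: a full mesh of n sites needs n(n-1)/2 trunks, which gets absurd fast at the site counts cited above.

```python
# A full mesh of n sites needs n*(n-1)/2 trunks.
def full_mesh_links(n: int) -> int:
    return n * (n - 1) // 2

print(full_mesh_links(20_000))    # US-scale:     ~200 million trunks
print(full_mesh_links(100_000))   # global-scale: ~5 billion trunks
```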

Where this happens and what devices are involved depends on a lot of things, but perhaps most of all it depends on what services drive the network. I’ve been a consistent fan of “metro networking”, meaning the presumption that services will focus on/in a metro center, and I think that metro is both the first driver of IP/optical convergence and the primary battleground where router vendors have to work to manage it.

In a pure connection-driven future, there is no reason to think that IP/optical convergence couldn’t replace all of the core routing and much or all of metro routing. Traffic at any point is just passing through, so smart handling isn’t required. We could expect to see advances in the use of IP/optical interfaces on content caches, and as the unit cost of optical capacity continued to fall, the migration of cache points deeper into the network.

The counter-force to this is obviously non-connection services. If we have to add service intelligence, we need to couple computing to the network, and that’s more easily done with traditional network devices like switches and routers. If those non-connection services focus on the metro, then we probably see traditional IP devices with optical interfaces (the current model) being deployed from the metro outward, which again perpetuates the current model.

So is the whole of IP/optical convergence then a myth? To a degree, yes, if we assume that the convergence really means that optical devices take on a limited IP mission to make connection networks more efficient. I think that even the article hints at that, with a Ciena quote: “61% [of respondents] defined IP/optical convergence as the streamlining of operations across IP and optical functions. To me, that involves multi-layer intelligent software control and automation.” In other words, what operators want is operational efficiency improvements, not convergence of the network equipment. If that’s the case, then there’s little Ciena or other optical vendors can do to move the ball.

The deeper truth here is that the concept of “IP/optical convergence” isn’t what the debate here is really about. We have it in at least one form, the optical interfaces on electrical devices, already. The deep issue, the real debate and the real competition, is over what happens at the metro level. Why? Because if metro doesn’t introduce service intelligence, then simple optical aggregation spreads out from core, through metro, and closer to the edge. We might see a radical reduction in electrical-level (router) devices. If metro does create significant service intelligence, then electrical-level, data-center-integrated, networking concepts spread toward both edge and core, and simple reconfigurable optical add-drop handling is diminished.

Does Ciena see the risk here? Does it realize that if metro services explode, chances are that networking will tend to resemble data-center interconnect (DCI) more than it will optical aggregation? Does it see things like the metaverse as the big potential upward driver of that risk? That’s one big question.

The other one, of course, is what the router vendors see. Operators have been saying since 2012 (to me, and perhaps to others) that their profit-per-bit numbers were eroding dangerously. You can raise them by raising revenues, lowering costs, or both, but obviously if revenues won’t contribute much then cost reduction has to do all the lifting. That means the kind of simplification focus that the article implies. So not only do the router vendors need credible service revenue boosts for their customers, they need to make them happen quickly in the metro, to defend their incumbency against optical encroachment.

Network vendors have been slow to recognize reality here, or at least the electrical-device vendors have. I’m not completely convinced that Ciena is demonstrating that the optical players are seeing the light either. There’s a race for comprehension here, centered in metro and service networking, and the winner is going to be a big winner indeed.

What Benefits Do Users See in Applying AI to Netops?

Artificial intelligence is, in a sense, like a UFO. Until it lands and submits for inspection, you’re free to assign to it whatever characteristics you find interesting. Network operations staff have only recently been facing an AI “landing” and so they’re only starting to assign specific characteristics to it, from which they can derive an assessment of value. But they have started, so let’s look at what they’re seeing, or at least hoping. What are our goals in “AI ops”?

There are a lot of stories about the use of AI in operations centers, to respond to failures, and that certainly seems a valid application. AI could provide quick, automatic, responses to problems and also likely be able to anticipate at least some of them. Sure, a sudden capacitor explosion in a router could create a zero-warning outage, but operations people say that most problems take a few minutes to develop. You’d expect that operations would love this sort of thing, but not so much.

The fact is that everyone from end users through carriers to cloud providers says that network change management is their top target, not “fault management” in the sense of recovering from failures. Almost all ops professionals say that their top problem is configuring the network (or even the computing resource pool) to fit the current work requirements, and that the growing complexity of their infrastructure means that it’s all too easy to make a mistake.

That “make a mistake” thinking may explain a lot here. An exploding capacitor in a router isn’t operations’ fault, but a configuration error is a human error that not only hurts the reputation of whoever made it, but also the reputations of their management and even those who planned out the operations practices and selected the tools. “Failure” is bad, but “error” is a lot worse because it can be pinned on so many.

In fact, the view of the role of AI in fault management may be tainted by this cover-your-you-know-what view. There’s a lot of reluctance in accepting a fully automated AI response to a problem, and if you dig into it, you find that it stems from a fear that the operator will be held accountable for the AI decision. What the majority of ops people want is for an AI system to tell them there’s a problem and suggest what it is. The operator would then take over. A smaller number want AI to lay out a solution for human review and commitment. The smallest number want AI to just run with it, perhaps generating a notification, and this approach is usually considered suitable for “minor” or “routine” issues.

The notion that an operations type would be tarred with the brush of AI error isn’t as far-fetched as it seems. Non-technical people often see “the computer” or “the network” or “AI” as a co-conspirator with whoever is programming or running it. In my first job as a programmer, a printer error had resulted in a visible check number at the upper right that differed from the MICR-encoded number at the bottom. This resulted in widespread problems of reconciliation, and the internal auditor stormed up to me and shouted “Your computer is making mistakes and you’re covering up for it!”

If configuration management is really the goal, then what specifically do operations people want? Essentially, they’d like to be able to input a change in the terms used when it was described to them. Generally, what this means is that a service or application has an implicit “goal state”, which is the way infrastructure is bound to the fulfillment of the service/application requirements. They’d like AI to take the goal state and transform it into the commands necessary to achieve it. When I hear them talk, I’m struck by how similar this is to the “declarative” model of DevOps; tell me what’s supposed to be there and I’ll figure out the steps. Normal operations tends to be “imperative”, tell me the steps to take and hope they add up to the goal.
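A minimal sketch of that declarative, goal-state-driven approach might look like the following; the structures and element names are hypothetical and are only meant to show the “diff the goal against reality” idea.

```python
# Sketch of declarative operations: diff the goal state against the observed state
# and derive the commands, rather than scripting the commands directly.
def plan_changes(goal_state: dict, observed_state: dict) -> list:
    commands = []
    for element, desired in goal_state.items():
        current = observed_state.get(element)
        if current is None:
            commands.append(("create", element, desired))
        elif current != desired:
            commands.append(("reconfigure", element, desired))
    for element in observed_state:
        if element not in goal_state:
            commands.append(("remove", element, None))
    return commands

goal = {"vpn-segment-a": {"mtu": 9000}, "qos-profile-gold": {"priority": 1}}
observed = {"vpn-segment-a": {"mtu": 1500}, "old-acl-17": {"rules": 12}}
print(plan_changes(goal, observed))
# [('reconfigure', 'vpn-segment-a', ...), ('create', 'qos-profile-gold', ...), ('remove', 'old-acl-17', None)]
```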

Another thing that operations types say they want from AI is simplification. Infrastructure is complicated, and that complexity limits the ability of a human specialist to assimilate the data needed for what pilots call “situational awareness”. Think of yourself as captain of a ship; there’s a lot going on and if you try to grasp all the details, you’re lost. You expect subordinates to ingest the stuff under their control and spit out a summary, which you then combine with the reports of others to understand whether you’re sailing or sinking. Operations people think AI could play that role.

The “how” part is a bit vague, but from how they talk about it, I think they want some form of abstraction or intent modeling. There are logical functional divisions in almost every task; a ship has the bridge, the engine room, combat intelligence, and so forth. Networks and data centers have the same thing, though exactly what divisions would be most useful or relevant may vary. Could AI be given what’s essentially a zone of things to watch, a set of policies to interpret what it sees, and a set of “states” that it could then say are prevailing? It would seem that that should be possible.
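Here’s a simplified sketch of how that zone idea might work, where policies reduce raw telemetry to a handful of summary states. The thresholds are illustrative assumptions, not recommendations.

```python
# A sketch of the "zone" idea: policies reduce raw telemetry to a small set of
# summary states the human actually reasons about. Thresholds are illustrative.
def zone_state(metrics: dict, policies: dict) -> str:
    # policies map a metric name to a (warning, critical) threshold pair
    worst = "normal"
    for metric, value in metrics.items():
        warn, crit = policies.get(metric, (float("inf"), float("inf")))
        if value >= crit:
            return "critical"
        if value >= warn:
            worst = "degraded"
    return worst

core_policies = {"packet_loss_pct": (0.5, 2.0), "link_utilization_pct": (80, 95)}
print(zone_state({"packet_loss_pct": 0.7, "link_utilization_pct": 60}, core_policies))  # "degraded"
```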

The final thing that operations people want is complete, comprehensive, journaling of AI activity. What happened, how was it interpreted, what action was recommended or taken, and what then happened? Part of this goes back to the CYA comment I made earlier; operations types who depend on AI have to be able to defend their own role if AI screws up. Part is also due to the fact that without understanding how a “wrong” choice came about, it’s impossible to ensure that the right one is made next time.

It’s surprising how little is said about journaling AI analysis and decisions, even when the capability actually exists. Journals are logs, and logs are among the most valuable tools in operations management, but an AI activity journal isn’t automatic, it has to be created by the developer of the AI system. Even if it is, there has to be a comprehensive document on how to use it, or you can bet it won’t be used. A few operations people wryly commented that they needed an AI tool to analyze their AI journal.
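For what it’s worth, here’s a sketch of what a journal entry might capture, following the what-happened, how-interpreted, what-action, what-outcome questions above; the field names are hypothetical.

```python
# A sketch of an AI activity journal entry; field names are hypothetical.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class AIJournalEntry:
    timestamp: datetime
    observed_event: str        # what happened
    interpretation: str        # how the AI classified it, with confidence if available
    action: str                # what was recommended or taken
    mode: str                  # "advise", "review-and-commit", or "autonomous"
    outcome: str               # what then happened

entry = AIJournalEntry(datetime.now(timezone.utc),
                       "BGP session flap on edge-router-12",
                       "classified as transient peer reset (confidence 0.82)",
                       "recommended no action; continue monitoring",
                       "advise",
                       "session stabilized within 90 seconds")
```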

The journaling issue raised what might be a sub-issue, which was the need to understand what was available for AI to analyze. Most organizations said they had no idea what data was actually available, what its timing was, or how it could be accessed. They had stuff they used, things they were familiar with, but they also had the uneasy feeling that if their AI was limited to knowing the very same things the operations people themselves knew, it probably wasn’t being used to full potential. A few very savvy types said that they thought a vendor should provide a kind of information inventory that cataloged all the available information, its formats, conditions of availability, and so forth. Yes, they said, all that was out there, but not in any single convenient place.

This point, an afterthought on the last suggested AI priority, might actually be the key to the whole AI success-or-failure story. Garbage in, garbage out, after all. That may be the reason why single-vendor AI strategies that link AI tools to the vendor’s own products work the best. It may also be the guidepost for how to integrate other vendors and other technologies into an AI system. You need that journal decoder to identify and characterize the important stuff, and also some control over what gets journaled in the first place.

Regarding that, I want to call out a point I made several years ago regarding SD-WAN implementations. Networks exist to serve business goals, and business networks in particular have to be able to support the applications that are most important to the business, and whose execution benefits likely justify a big part of the cost of the network. Session awareness, the ability to capture information on the user-to-application relationships being run, is critical in getting the most out of network security, but also in getting the most out of AI. Enterprises aren’t fully aware of the implications here, but some leaders tell me that knowing whether “critical” sessions are being impacted by a problem, and considering the routing of critical sessions during configuration changes, is likely a key to effective use of AI down the line.

Facing the Future of Tech, or Creating It

What is the future of tech? A lot of Wall Street professionals and a lot of investors are asking that question, given the NASDAQ correction. The problem with using stock prices as a measure of a market is that short-selling behavior can induce a slump just as easily as real market conditions. That obviously doesn’t rule out a real issue with tech, so we need to look at things.

Stuff sells because people want or need it. Consumer technology sells largely based on quality-of-life value, which can be based on fundamentals or on nothing more than standing tall with peers. Business technology sells based on ROI, meaning that if a company gains a benefit from something that meets their rate-of-return expectations, they’re likely to adopt it. Are either of these forces subject to change now?

On the consumer side, there’s not much question that people are whipsawed by the shifts in COVID. When the virus first came along, both people and companies changed behaviors to limit risk, and any behavioral change results in a shift in those tech value propositions. I can’t go out, so I have to rely more on in-home entertainment, so there’s an uptick in streaming services and gaming. My workers have to stay home, so I have to support remote work technology.

But when vaccines and Omicron seemed to drive down the actual risk associated with contracting COVID, people and companies started to shift again. That’s where we are now. Netflix turned in a disappointing quarter, and Peloton said it was shutting down production for a time to respond to lower demand. Neither of these is a surprise, and no rational investor should be spooked by either move. However, that doesn’t mean those companies’ stocks aren’t now less attractive, so some downturn would be expected. The downturn, though, should be balanced by an upturn in stocks that reflect pre-COVID purchase patterns.

Stock prices are set by the number of people trying to sell versus those trying to buy, and the level of determination of both groups. The “value” of the company figures in only insofar as it impacts this buy/sell balance. Over my time working with and in the market, I’ve seen a shift from value (“fundamentals”) investing to buy/sell-balance (“technical”) investing. It changed how the market works, and that makes me wonder whether we’ve seen something similar in the way technology itself is bought and sold.

The majority of people today don’t have WiFi 6, and we’re already talking about WiFi 7. 5G’s success is assured at one basic level, but the value of the extended capabilities (like network slicing) is still up in the air, yet we’re talking about 6G. If you threaten a technology that’s not even fully adopted with obsolescence, how credible is the investment in either the current or the new generation? Why is this happening?

One reason is the consumerization of technology. Up until the 1990s, there was no consumer data services market. Today, consumer broadband is the largest data market. Up until the 1970s, there was no personal computing technology, and today the total processing power of computers sold to consumers dwarfs that sold to businesses and governments. Ordinary people don’t do ROI calculations on their tech purchases; they buy what they want as long as they can pay for it. And they don’t understand “tech value” in any deeper sense, so they don’t respond to stories and advertising that go deeper. It’s excitement that matters.

Consumerization has also pushed tech toward what I’ll call the unitary purchase model. I buy something. The something I buy isn’t related to my purchase of past somethings, but to the specific interests and information that drive me at the moment. I surely, as a consumer, don’t think about advancing my technology state with a series of incremental investments that only give me my goal at the end of the process. Instant gratification.

These shifts don’t necessarily apply to business, but they end up doing that nevertheless. A corporate buyer is a person first and a worker second. How many times does a buyer have to read about WiFi 7 before they start thinking of it as a business technology as well as a personal one? If they do start, can they find (in the information sources they’ve first seen WiFi 7 discussed) the real value propositions?

Business technology advances as an ecosystem, not as a series of disconnected products. We can’t get the benefits of 5G or edge computing just by waving our hands, or by promoting what could be done with them eventually. We have to make a business case, and for transformational technologies that means building both a technology base and the applications that justify it. Just having something doesn’t justify it, so what does? What would those applications need in order to justify their own technologies?

Since the dawn of commercial IT in the 1950s, we’ve seen three ecosystemic waves of IT advance, the last of which ended roughly in the year 2000. We’ve had none since, and is it a surprise that “tech consumerism” took off in the ‘90s? We are missing some ingredients that drive a fundamental technology shift of the kind that produced one of those IT waves, waves that drove tech spending up almost 50% faster than GDP growth.

What are we missing? Three things, I think.

Thing One is a holistic sense of the future. What are we actually moving toward in our next tech wave? How will it fundamentally change our lives and our businesses? Without a sense of this, it’s going to be difficult for enterprises to make a business case to get to that future.

Thing Two is self-valuing steps forward. We can’t defer the benefits of a technology revolution without deferring the costs, which means nobody can make money until some benefits arrive. The steps to the future don’t have to justify the future, but they do need to justify the steps themselves.

Thing Three is buyer education. Workers and consumers are the same people but different roles. We can’t let ourselves fall into consumerism-based marketing when we’re trying to sell business technology. The next step forward in the cloud, or the network, or the end-points, can’t be promoted by saying it’s cool and your friends will be jealous if you don’t buy it. We have to show how that holistic future is created and how the steps work.

Most of these problems can be solved by vendors, because in truth vendors had a big role in creating them. It wasn’t their fault alone; Sarbanes-Oxley and the fallout of the tech crash of 2000 tried to rein in speculative valuations by requiring a link between stock price and fundamental growth. What that ended up doing was encouraging companies to focus on the coming quarter or the current year, and it made it harder to develop technologies that had legs.

The current market conditions are likely a blip, though market slumps have a way of feeding themselves. Vendors in the tech space need to decide now whether the conditions that leave us open to this sort of thing are going to be accepted for the future. They don’t have to be, but the status quo is what we’ll get if we don’t see some progress in ecosystemic, strategic, thinking. We’ve gamed the status quo about as much as we can expect to, so if we want a better future for both us-as-workers and us-personally, we need to step up and do the right thing.

The Street View of Cloud and Network Unification

No matter how complex a technology, you can always reduce it to dollars and cents, which is what Wall Street tends to do. Note, though, that while “cents” and “sense” have the same sound, focus on the former doesn’t always involve the latter. You can’t discount Street insight, but you can’t depend on it entirely. Thus, I feel free to add my own modest view to the mix, and particularly when we’re talking about metro and edge, topics I think are critical. Before I start, since I’m talking here about stocks in part, I want to make the point that I do not hold any stock in any network vendor.

Metro and edge are a fusion of network and cloud technologies, and the Street recognizes that. Generally, they see cloud technology hybridizing and edge computing developing, and with the spread of the data center they see network implications. To understand what they see, we have to look at these two themes, whether they’re viewed accurately, and what it means if they are.

Practically all enterprise cloud computing is and has been hybrid cloud, and this is something that’s been missed both by most of the Street and much of the media. What’s changing is that the cloud front-end part of applications is dominating new development and determining how businesses project themselves to customers, partners, and even workers. The cloud piece, once a small tail on the legacy application dog, is now doing more work and less wagging. As that happens, it unleashes some important issues.

Competition is one. Any time a market is expanding, and cloud-front-end computing is surely doing that, there’s a race to grab the incremental bucks. We can see that in the growing number of web services that cloud providers offer, and the fact that those services have started to creep into new areas like IoT and artificial intelligence or machine learning (AI/ML). Both IoT and AI/ML address classes of applications that are more tightly coupled to the real world, meaning “real-time applications” that are inherently latency sensitive. You can see how this drives edge computing and also networking.

Another issue is management. Data center applications have to be managed, and so do their resources, but the task is relatively straightforward and well understood. When you start to build applications designed to be deployed on resource pools, to heal themselves and scale themselves, you add a dynamism that data centers rarely saw. Since more and more of these features are found in the cloud front-end piece of applications, the growth in that space has shifted management focus to the cloud, to the point where cloud services providing orchestration and management have exploded. If application agility lives in and is controlled by the cloud, then the cloud becomes the senior partner in the application.

Hybrid cloud and edge computing are, or can be, linked. Hybrid cloud says that applications live in a compute domain that can span the data center and a public cloud. Edge computing says that some applications need to live close to the activities they support. The “edge” might be in a user facility that’s close to workers or processes, or in a cloud that’s hosted in the user’s metro area. Either way, edge computing adds hosting resources to the pool that hybrid cloud can draw on.

The Street sees all of this, sadly, through the lens of “moving to the cloud,” which isn’t what’s happening. They tend to break things down by which as-a-service offerings the cloud providers sell, saying for example that almost two-thirds of cloud services are IaaS. True, but almost 100% of enterprises employ value-added web services to augment basic hosting, even today. Most applications running on IaaS have never run in the data center or anywhere but the cloud. That’s the core reason why it’s so hard for others to break into the Top Four among cloud providers; the initial investment in those tools is too much for them, and the use of cloud provider tools tends to lock users into a particular cloud.

In hybrid cloud, the Street recognizes the value of the model but misunderstands what it means. Fortunately that doesn’t erase their emphasis on hybrid cloud as a symbiotic technology in edge computing.

The Street’s view of edge computing is less valuable. Like hybrid cloud, their edge view is based on a misperception: that content hosting leads to edge computing. We have CDNs today, of course, but CDNs were from the first a means for content providers to distribute video without the cost and glitches associated with pushing it from central points to anywhere on the Internet. Latency is less an issue than consistency; you can cache video to make up for some latency jitter, but on average you have to deliver the material at its intrinsic bit rate.

In fact, almost everything the Street sees as a driver of edge computing (security, data management, cost management for network delivery, serverless computing, gaming, and even IoT) is really not a driver of edge computing at all. Some of these wouldn’t even be exploiters of edge services that had somehow been justified by something else. The fact is that the Street has no idea what will drive edge computing, which is bad because it means that companies have no edge position they can take that will be useful in promoting their stock in the near term and capable of sustained profit generation in the longer term.

Edge computing will be driven by IoT, but only to the extent that we take “the edge” to mean “on the customer premises, proximate to the point of activity.” That’s where we have it today, at the enterprise level. The Street’s edge focus on CDN means it’s focusing on OTT use of the edge, not on enterprise use, and yet most of its future drivers would apply to the enterprise. Thus, the Street is talking out of one side of its mouth about what’s really a cloud application (edge-as-a-cloud), and out of the other about “edge as a new placement of customer-owned hosting”.

IoT today uses local “edge” devices for process control. Even smart homes often have “hubs” that are really a form of edge computing. Edge hosting expands the places you can put things, just as cloud hosting does. That’s why the link the Street suggests between hybrid cloud and edge computing is more credible than their view of edge computing overall. Ownership of the edge resources is less important than placement, and placement benefits in latency control suggest that on-premises “edge” owned by the customer is the best approach.

The current local edge model does benefit the cloud provider, because the cloud provider has (in “hybrid cloud”) already addressed the need to define a distributed hosting model that allocates resources in different places based on different cost and performance constraints. Remember the orchestration and management impact of hybrid cloud discussed above? The major public cloud providers offer the ability to locally edge-host cloud-integrated application components, making a local edge an extension of the cloud.

Via the network, of course, and that’s a topic where I think the Street has fallen significantly short on insights. If you have a collection of resource pools, from data center to cloud and then to edge, a collection of resilient, scalable, components, and a collection of potential users (customers, partners, and employees), you have a prescription for a very agile network. You have, in fact, a demand for a virtual network. Add in some things the Street does recognize, which are the need for an agile data center networking strategy, a need for enhanced security, and a need for operational efficiency, and you have a recipe for a completely new network model.

Completely new models raise the potential for completely new winners, and in this situation the Street doesn’t even see any network device vendor candidates to speak of, other than security specialists. At the same time, the trade media (or at least some of it) is picking up on the fact that Juniper is making a move. With the acquisitions of 128 Technology, Mist, and Apstra, Juniper has covered all the bases of the new virtual-agile network. And yet the Street seems to assign them the least upside potential in the whole space, lower than rival Cisco or even F5. There seem to be two reasons why Juniper’s not getting Street cred: the Street doesn’t understand that all SD-WANs aren’t created equal (or even nearly so), and the Street, like the trade media, is more focused on get-in-Cisco’s-face positioning than on product technology improvements. Who doesn’t like a good brawl?

The challenge for the network device players, even Juniper, is that virtual networking is broadly viewed as an above-the-network technology. Juniper does integrate 128T’s Session Smart Router concept into other Juniper products, but many users and most Street analysts miss the fact that SSR integration could make network devices preferred virtual-network hosts. Without that, players like VMware have a credible shot at the space, and if you’re looking to define upside potential (as the Street surely is) then how do you miss this one?

Not all the Street did; Juniper got an upgrade recently and their stock has outperformed Cisco’s in the 3-, 6-, and 12-month comparisons. There’s no breakout though, and the technology shifts suggest that there could be. Whatever the Street, buyers, or even network vendors themselves see, there is going to be a major change in networking down the line, and not that far down either. We can expect to see the impact of these shifts in 2022.

Paths to the Edge: Metaverse Models or Metro Mesh?

Most experts I talk with, either on the enterprise side or among their vendors/operators, have been telling me for almost a decade that they truly believe that edge computing will be driven by some flavor of “augmented” or “virtual” reality. The “metaverse” concept we’re hearing so much about today is (IMHO) simply a variation on that theme. Several variations, in fact, and it may be the way those variations manage to create harmony that decides just where edge computing and even metro networking end up going. Or, it may be that we see a completely different set of forces start us along the path to edge and metro. Or both.

I did my first model of edge deployment back in 2013, calling it “carrier cloud” because I believed then (and still believe) that operators are the ones who have the real estate, the network topology role, and the low ROI tolerance needed to optimally deploy edge technology. I cited five drivers (which I noted in my blog yesterday) for carrier cloud. Three of them (5G, IoT, and what I called “contextual” applications; more on that below) are still broadly recognized as edge drivers, but I want to reframe that early work into metaverse terms.

To me, a metaverse is a reality model, something that represents either the real world or an alternative to it. 5G isn’t a metaverse-modeled reality, it’s a network technology, but its role in carrier cloud or metaverse was really nothing more than a spark plug to ignite some early deployment. The real applications of edge computing depend on some variation on the reality-model theme.

In my original modeling, “contextual services” were the primary opportunity driver, with IoT second. I submit that both are reality models, and thus they’re a good place to start our discussion.

Contextual services are services designed to augment ordinary user perceptive reality with a parallel “information reality”. Walking down the street, we might see a building in the next block—that’s perceptive reality. Information reality might tell us, via an augmented-reality-glasses overlay, that the building is the XYZ Widget Shop, and that a Widget we’ve been researching is on sale there. Yes, we could get this information today by doing some web searches, but we’d have to be thinking of Widgets to think to do it, which we may not be. Contextual services would take a stimulus in the real world, like what we see, and correlate it with stuff we’ve expressed interest in or should be made aware of. Stimulus plus context equals augmented reality.

Contextual services are the core of the first of the three metaverse models I mentioned yesterday. This metaverse (like another we’ll get to) is centered on us, and it accepts stimuli from sources like what we see (based on where we are), what we hear, what communications requests are being made, and so forth. It also has a “cache” of interests based on past stimuli, things we’ve done or asked or researched, etc. The model parses the cache when a stimulus comes along and generates a response in the form of an “augmentation” of reality, like overlay text in AR/VR goggles.
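
Here is a deliberately toy sketch of that stimulus-plus-cache mechanism; the cache contents, tags, and function are illustrative assumptions only, and real matching would obviously be far richer than a keyword lookup.

```python
# Toy contextual-metaverse sketch: a stimulus from the real world is matched
# against a cache of prior interests to produce an "augmentation" (overlay text).
interest_cache = {
    "widget": "Widgets you researched are on sale at XYZ Widget Shop",
    "coffee": "Your usual cafe is two blocks north",
}

def augment(stimulus_tags):
    """Return overlay text for any stimulus tag that matches a cached interest."""
    return [interest_cache[t] for t in stimulus_tags if t in interest_cache]

# Stimulus: AR glasses recognize a storefront tagged as a widget retailer.
print(augment({"storefront", "widget"}))
# -> ['Widgets you researched are on sale at XYZ Widget Shop']
```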

The model that’s obviously related to the contextual metaverse is the “social metaverse”, the stuff Meta wants to create. The primary difference in the social metaverse is in the “augmentation” piece. The contextual metaverse assumes the information reality overlays on the real world. The social metaverse assumes that there is an alternative universe created, and that alternative universe is what is perceived by someone who is “inhabiting” the social metaverse. Because the social metaverse is social, it’s important that this alternative universe be presented as real to all inhabitants, and that all inhabitants and their behaviors be visible there to all who are “local”.

IoT is a different model, what I’ll call a “process metaverse”. In a process metaverse, the goal is to create a digital twin of a process, and use that twin to gain insight into and control over the real-world process it represents. A process metaverse isn’t centered on us, but on the process. Information augmentation isn’t integrated into real-world sensory channels, but fed into control channels to do something.

It’s easy to see that all these “metaverse models” are, or could be, technical implementations of a common metaverse software architecture. It’s a model-driven architecture, where “events” are passed around through “objects” that represent something, and in the passing they trigger “actions” that can influence the “perception space” of whatever the metaverse centers on.
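
To make that common pattern a bit more concrete, here is a deliberately tiny sketch of the event/object/action idea; the class, event types, and the two registered “objects” are my own illustrative assumptions, not a real metaverse framework.

```python
# Minimal sketch of the model-driven pattern: "objects" subscribe to "events",
# and handling an event can emit "actions" toward a perception space (an AR
# overlay, an avatar view, or a process control channel). Purely illustrative.
from collections import defaultdict

class MetaverseModel:
    def __init__(self):
        self.handlers = defaultdict(list)   # event type -> handler functions

    def register(self, event_type, handler):
        self.handlers[event_type].append(handler)

    def inject(self, event_type, payload):
        """Pass an event through all registered objects; collect their actions."""
        actions = []
        for handler in self.handlers[event_type]:
            actions.extend(handler(payload))
        return actions

m = MetaverseModel()
# A digital-twin object for a process metaverse: sensor events become control actions.
m.register("temp_reading", lambda p: ["throttle_furnace"] if p["c"] > 90 else [])
# A contextual object: location events become perception-space overlay actions.
m.register("location", lambda p: [f"overlay: you are near {p['poi']}"])

print(m.inject("temp_reading", {"c": 95}))              # -> ['throttle_furnace']
print(m.inject("location", {"poi": "XYZ Widget Shop"}))  # -> ['overlay: ...']
```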

My hope with a metaverse-of-things approach is to create a single software framework that could be applied to all these metaverse missions, reducing the time required to build one and the overall cost. Such an approach could also allow potential edge providers to create an “edge platform as a service” that would optimize the hosting of edge applications and further enhance return on investment. It doesn’t guarantee that we’d build out an edge computing model, but it would make it more financially reasonable to do so.

What happens without this? Is there another way of getting to edge computing, or at least getting closer? One possibility is to look forward at what edge computing would look like, not at a single location but collectively. As I noted in a past blog, edge computing is really metro-centric computing, and if we had it, then applications like the metaverse would encourage the meshing of metro networks to create regional, national, and global networks. Could we see an evolution of networking create the metro-mesh?

The public cloud providers are already starting to offer network services created within their own cloud networks, as a means of uniting applications spread across wide geographies. Buy cloud front-end application services and you get cloud networking to backhaul to your data centers. If this sort of thing catches on, it would induce cloud providers to take on more network missions, and the threat to operator VPNs might induce operators to deploy metro-centric networking, then evolve to a metro mesh architecture.

A metro-mesh model has lower latency because it’s calculated to reduce transit hops, replacing traditional router cores with more direct fiber paths. We already have a few operators taking steps in that direction, and cloud provider competition for network services might be enough to multiply operator interest in that model. If operators aren’t motivated to creep into carrier cloud by adding metro hosting today, might they creep in by starting with the metro-centric and metro-mesh architectures? Perhaps.
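
As a back-of-the-envelope illustration of why hop reduction matters, here is a hedged sketch; the per-hop delay, fiber distance, and propagation speed are assumed round numbers chosen only to show the shape of the arithmetic, not measurements of any real network.

```python
# Rough latency model: total path latency is per-hop processing/queuing delay
# times the hop count, plus propagation delay over the fiber distance.
def path_latency_ms(hops, per_hop_ms=0.5, fiber_km=800, speed_km_per_ms=200):
    return hops * per_hop_ms + fiber_km / speed_km_per_ms

print(path_latency_ms(hops=8))   # hierarchical router core: ~8.0 ms
print(path_latency_ms(hops=3))   # more direct metro-mesh path: ~5.5 ms
```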

One thing seems certain to me. We are beginning to see a revolution in terms of cloud and network missions, and at the same time a revolution in the competitive dynamic of the combined cloud/network space. We won’t see cloud providers erasing network operators; the access network isn’t interesting to them and has too low an ROI to likely become a target of competition. We might see the cloud providers eating a bigger piece of business networking, meaning VPN services, and if that happens, could it induce operators to take a shot at cloud computing in response? Perhaps.