Paths to the Edge: Metaverse Models or Metro Mesh?

Most experts I talk with, whether on the enterprise side or among their vendors and operators, have been telling me for almost a decade that they believe edge computing will be driven by some flavor of “augmented” or “virtual” reality. The “metaverse” concept we’re hearing so much about today is (IMHO) simply a variation on that theme. Several variations, in fact, and it may be the way those variations manage to create harmony that decides just where edge computing and even metro networking end up going. Or it may be that we see a completely different set of forces start us along the path to edge and metro. Or both.

I did my first model of edge deployment back in 2013, calling it “carrier cloud” because I believed then (and still believe) that operators are the ones who have the real estate, the network topology role, and the tolerance for low ROI needed to optimally deploy edge technology. I cited five drivers for carrier cloud (which I noted in my blog yesterday). Three of them (5G, IoT, and what I called “contextual” applications; more on that below) are still broadly recognized as edge drivers, but I want to reframe that early work in metaverse terms.

To me, a metaverse is a reality model, something that represents either the real world or an alternative to it. 5G isn’t a metaverse-modeled reality; it’s a network technology, and its role in carrier cloud or the metaverse was really nothing more than a spark plug to ignite some early deployment. The real applications of edge computing depend on some variation on the reality-model theme.

In my original modeling, “contextual services” were the primary opportunity driver, with IoT second. I submit that both are reality models, and thus they’re a good place to start our discussion.

Contextual services are services designed to augment ordinary user perceptive reality with a parallel “information reality”. Walking down the street, we might see a building in the next block—that’s perceptive reality. Information reality might tell us, via an augmented-reality-glasses overlay, that the building is the XYZ Widget Shop, and that a Widget we’ve been researching is on sale there. Yes, we could get this information today by doing some web searches, but we’d have to be thinking about Widgets to think of searching in the first place, which we may not be. Contextual services would take a stimulus in the real world, like what we see, and correlate it with things we’ve expressed interest in or should be made aware of. Stimulus plus context equals augmented reality.

Contextual services are the core of the first of the three metaverse models I mentioned yesterday. This metaverse (like another we’ll get to) is centered on us, and it accepts stimuli from sources like what we see (based on where we are), what we hear, what communications requests are being made, and so forth. It also has a “cache” of interests based on past stimuli, things we’ve done or asked or researched, etc. The model parses the cache when a stimulus comes along and generates a response in the form of an “augmentation” of reality, like overlay text in AR/VR goggles.
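To make that concrete, here’s a minimal Python sketch of how a contextual model of this kind might correlate a stimulus with a cached interest. Everything here, from the class names to the tag-matching rule, is my own illustration of the idea, not a description of any real product or API.

```python
from dataclasses import dataclass

@dataclass
class Stimulus:
    """Something perceptive reality presents to us, like a shop we can see."""
    kind: str      # "sight", "location", "communications request", ...
    subject: str   # e.g. "XYZ Widget Shop"
    tags: set      # descriptive tags, e.g. {"widgets", "retail"}

@dataclass
class Interest:
    """A cached interest built from past behavior: searches, questions, purchases."""
    topic: str
    tags: set
    note: str      # what we'd want overlaid if this interest is triggered

class ContextualModel:
    def __init__(self, interest_cache):
        self.interest_cache = interest_cache   # the "cache" of past interests

    def augment(self, stimulus: Stimulus) -> list:
        """Correlate a real-world stimulus with cached interests; the returned
        overlay text is the parallel 'information reality'."""
        overlays = []
        for interest in self.interest_cache:
            if stimulus.tags & interest.tags:   # any shared tag counts as a hit
                overlays.append(f"{stimulus.subject}: {interest.note}")
        return overlays

# The Widget Shop example from the text.
cache = [Interest(topic="widgets", tags={"widgets"},
                  note="the Widget you researched is on sale here")]
model = ContextualModel(cache)
print(model.augment(Stimulus(kind="sight", subject="XYZ Widget Shop",
                             tags={"widgets", "retail"})))
```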

The model that’s obviously related to the contextual metaverse is the “social metaverse”, the stuff Meta wants to create. The primary difference in the social metaverse is in the “augmentation” piece. The contextual metaverse assumes the information reality is overlaid on the real world. The social metaverse assumes that an alternative universe is created, and that alternative universe is what someone “inhabiting” the social metaverse perceives. Because the social metaverse is social, it’s important that this alternative universe be presented as real to all inhabitants, and that all inhabitants and their behaviors are visible there, to all who are “local”.
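That shared-visibility requirement is the key architectural difference, and a tiny sketch shows it. The shared world state, the region-based notion of “local”, and the avatar strings below are all assumptions of mine, just to illustrate that every local inhabitant perceives the same alternative universe.

```python
from collections import defaultdict

class SocialWorld:
    """A shared alternative universe: every inhabitant's state is visible to
    everyone who is 'local' (here, simply everyone in the same region)."""

    def __init__(self):
        self.regions = defaultdict(dict)   # region -> {inhabitant: current behavior}

    def update(self, region: str, inhabitant: str, behavior: str) -> None:
        self.regions[region][inhabitant] = behavior

    def local_view(self, region: str) -> dict:
        """What any inhabitant of this region perceives: all local inhabitants
        and their behaviors, rendered identically for everyone."""
        return dict(self.regions[region])

world = SocialWorld()
world.update("plaza", "alice", "waving")
world.update("plaza", "bob", "browsing a virtual widget stall")
print(world.local_view("plaza"))   # both inhabitants see the same shared state
```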

IoT is a different model, what I’ll call a “process metaverse”. In a process metaverse, the goal is to create a digital twin of a process, and use that twin to gain insight into and control over the real-world process it represents. A process metaverse isn’t centered on us but on the process. Information augmentation isn’t integrated into real-world sensory channels; it’s fed into control channels that act on the process.
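Again, a small sketch may help. The sensor name, the threshold, and the control-channel callback below are hypothetical; the point is only that the twin mirrors process state and that its output goes to a control channel rather than to a human’s AR display.

```python
class DigitalTwin:
    """A digital twin of a real-world process: it mirrors process state and
    pushes decisions into control channels rather than human sensory channels."""

    def __init__(self, control_channel):
        self.state = {}                       # mirrored view of the real process
        self.control_channel = control_channel

    def on_event(self, source: str, value: float) -> None:
        """Apply a sensor reading to the twin, then act on the process if a
        (hypothetical) rule fires."""
        self.state[source] = value
        if source == "boiler_temp" and value > 90.0:   # assumed threshold, for illustration
            self.control_channel("boiler_valve", "OPEN_RELIEF")

# A stand-in control channel; a real one would actuate equipment.
twin = DigitalTwin(control_channel=lambda target, cmd: print(f"{target} <- {cmd}"))
twin.on_event("boiler_temp", 95.5)   # prints: boiler_valve <- OPEN_RELIEF
print(twin.state)                    # {'boiler_temp': 95.5}
```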

It’s easy to see that all these “metaverse models” are, or could be, technical implementations of a common metaverse software architecture. It’s a model-driven architecture, where “events” are passed around through “objects” that represent something, and in the passing they trigger “actions” that can influence the “perception space” of whatever the metaverse centers on.
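As a sketch of what I mean by a common architecture, here’s a minimal event/object/action engine in Python. The class and event names are mine and the handlers are toy stand-ins, but it shows how one framework could serve the contextual, social, and process models simply by registering different objects against different events.

```python
from collections import defaultdict
from typing import Callable, Dict, List, Optional

class MetaverseEngine:
    """Minimal event/object/action framework: 'objects' register handlers for
    event types; each handler may return an 'action' that is applied to the
    'perception space' of whatever the metaverse centers on."""

    def __init__(self) -> None:
        self.handlers: Dict[str, List[Callable[[dict], Optional[str]]]] = defaultdict(list)
        self.perception_space: List[str] = []   # AR overlay, social world view, or control plane

    def register(self, event_type: str, handler: Callable[[dict], Optional[str]]) -> None:
        self.handlers[event_type].append(handler)

    def emit(self, event_type: str, payload: dict) -> None:
        """Pass an event through every object modeling this event type."""
        for handler in self.handlers[event_type]:
            action = handler(payload)
            if action is not None:
                self.perception_space.append(action)

# The same engine could host any of the three models; only the handlers differ.
engine = MetaverseEngine()
engine.register("sighting", lambda p: f"overlay: {p['what']} has your Widget on sale")  # contextual
engine.register("telemetry", lambda p: f"control: throttle {p['process']}")             # process twin
engine.emit("sighting", {"what": "XYZ Widget Shop"})
engine.emit("telemetry", {"process": "assembly line 3"})
print(engine.perception_space)
```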

My hope with a metaverse-of-things approach is to create a single software framework that could be applied to all these metaverse missions, reducing the time required to build one and the overall cost. Such an approach could also allow potential edge providers to create an “edge platform as a service” that would optimize the hosting of edge applications and further enhance return on investment. It doesn’t guarantee that we’d build out an edge computing model, but it would make it more financially reasonable to do so.

What happens without this? Is there another way of getting to edge computing, or at least getting closer? One possibility is to look ahead to what edge computing would look like, not at a single location but collectively. As I noted in a past blog, edge computing is really metro-centric computing, and if we had it, applications like the metaverse would encourage the meshing of metro networks to create regional, national, and global networks. Could we see an evolution of networking create the metro mesh?

The public cloud providers are already starting to offer network services created within their own cloud networks, as a means of uniting applications spread across wide geographies. Buy cloud front-end application services and you get cloud networking to backhaul traffic to your own data centers. If this sort of thing catches on, it would induce cloud providers to take on more network missions, and the threat to operator VPNs might induce operators to deploy metro-centric networking, then evolve to a metro-mesh architecture.

A metro-mesh model has lower latency because it’s designed to reduce transit hops, replacing traditional router cores with more direct fiber paths. We already have a few operators taking steps in that direction, and cloud-provider competition for network services might be enough to multiply operator interest in that model. If operators aren’t motivated to creep into carrier cloud by adding metro hosting today, might they creep in by starting with the metro-centric and metro-mesh architectures? Perhaps.
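To see why fewer transit hops matter, here’s a back-of-the-envelope calculation. The per-hop router delay, the hop counts, and the path distance are purely illustrative assumptions; only the roughly 5 microseconds per kilometer of fiber propagation is a physical given.

```python
# Back-of-the-envelope path latency, using illustrative numbers (assumptions,
# not measured values) to show why cutting transit hops helps.
PER_HOP_ROUTER_DELAY_MS = 0.5    # assumed queuing/forwarding delay per transit router
FIBER_DELAY_MS_PER_KM = 0.005    # ~5 microseconds per km of fiber propagation

def path_latency_ms(distance_km: float, transit_hops: int) -> float:
    """One-way latency = fiber propagation + per-hop router delay."""
    return distance_km * FIBER_DELAY_MS_PER_KM + transit_hops * PER_HOP_ROUTER_DELAY_MS

# Traditional hierarchical core: same distance, many transit routers.
print(path_latency_ms(distance_km=800, transit_hops=8))   # 8.0 ms
# Metro mesh: direct fiber between metro sites, few transit hops.
print(path_latency_ms(distance_km=800, transit_hops=2))   # 5.0 ms
```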

One thing seems certain to me. We are beginning to see a revolution in cloud and network missions, and at the same time a revolution in the competitive dynamic of the combined cloud/network space. We won’t see cloud providers erasing network operators; the access network isn’t interesting to them and has too low an ROI to be a likely target of competition. We might see the cloud providers eating a bigger piece of business networking, meaning VPN services, and if that happens, could it induce operators to take a shot at cloud computing in response? Perhaps.