What is the relationship between the metaverse and augmented or virtual reality? The question bears not only on how realistic the metaverse ends up being, but also on how far metaverse-friendly technology might percolate out to other areas. That has significant implications for a lot of things, including conferencing/collaboration and edge computing.
Most people understand that the metaverse is an alternate reality, in that it represents a “world” inhabited by “virtual people” we call “avatars”. The extent to which the metaverse’s version of reality is credible will depend on how closely it can map to at least the most dominant of our senses, the visual. That depends on the sophistication of the “point-of-view” construction of what an avatar would see, and on the sophistication of the display that renders that view to the player/user behind the avatar.
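To make “point-of-view construction” concrete, here is a minimal sketch, in Python, of its core step: expressing a shared world-space point in one avatar’s camera frame. Every name here is illustrative, and a real engine would use full 4x4 matrix pipelines; this only shows why each player/user needs their own view computed from the same world state.

```python
import math

def world_to_camera(point, cam_pos, cam_yaw_deg):
    """Express a world-space point (x, z) in an avatar camera's local frame.
    Translate so the camera is at the origin, then rotate by the inverse of
    the camera's yaw (2D, horizontal plane -- enough to show the idea)."""
    x = point[0] - cam_pos[0]
    z = point[1] - cam_pos[1]
    a = math.radians(-cam_yaw_deg)
    return (x * math.cos(a) - z * math.sin(a),
            x * math.sin(a) + z * math.cos(a))

# The same world landmark looks different from each avatar's point of view:
landmark = (0.0, -5.0)
print(world_to_camera(landmark, cam_pos=(0.0, 0.0), cam_yaw_deg=0.0))
print(world_to_camera(landmark, cam_pos=(3.0, 0.0), cam_yaw_deg=45.0))
```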
I’ve talked in past blogs about the role that low latency plays in metaverse realism. That role is part of the point-of-view construction process: if multiple player/users cannot control their avatars’ movements in something close to synchronized real time, the interaction of the avatars will quickly become unrealistic and awkward. The latency issue is related to a lot of factors, from the processing performance that constructs the point of view to the network connection that carries information between each player/user and the master reference on which everyone’s point of view must be based.
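As a rough illustration of how those factors stack up, here’s a toy latency budget. The per-stage numbers are assumptions, not measurements, and the 20 ms target is the motion-to-photon figure commonly cited for comfortable VR; the point is only that every hop between the player/user and the master reference consumes part of it.

```python
# Toy motion-to-photon budget; every figure below is an assumption.
stages_ms = {
    "head/controller sampling": 2,
    "access network (one way)": 5,
    "metro/edge transit (one way)": 5,
    "point-of-view construction": 5,
    "display scan-out": 3,
}
BUDGET_MS = 20  # commonly cited comfort target for VR

total = sum(stages_ms.values())
for stage, ms in stages_ms.items():
    print(f"{stage:30s} {ms:3d} ms")
print(f"{'total':30s} {total:3d} ms "
      f"({'within' if total <= BUDGET_MS else 'over'} the {BUDGET_MS} ms budget)")
```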
The display issue also involves a lot of factors. One basic truth is that if the point of view presented by a metaverse is mapped to a very small part of the player/user’s field of view, as it would be on a smartphone screen, then the metaverse experience simply cannot be immersive. That may not mean it won’t be useful, fun, or entertaining, but it won’t be realistic. A larger screen, something like a laptop screen, is only a little better. A big monitor or HDTV would be better yet, but even if you sat fairly close to a high-res large display, you’d still see stuff around it. Our ability to focus, and in particular to focus on screens, helps us ignore distractions around the display, but that’s not the only issue.
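The geometry behind that claim is easy to check: compare the angle a display subtends at the eye against the roughly 200 degrees humans can see horizontally. The screen sizes and viewing distances below are assumptions chosen only to illustrate the gap.

```python
import math

def subtended_deg(screen_width_m, viewing_distance_m):
    """Horizontal angle, in degrees, that a flat screen fills at the eye."""
    return math.degrees(2 * math.atan(screen_width_m / 2 / viewing_distance_m))

for label, width, dist in [("smartphone", 0.15, 0.35),
                           ("laptop", 0.31, 0.50),
                           ("65-inch HDTV", 1.43, 2.00)]:
    print(f"{label:12s}: {subtended_deg(width, dist):5.1f} deg of a ~200 deg field")
```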
Would we expect the metaverse to be silent? Hardly; we’d need conversations. Would the conversations be expected to emerge from the place where the avatar we were talking with was located? Perhaps not immediately, but eventually. In any event, if some avatar spoke from “behind” us in a virtual sense, we’d have to map the sound somewhere, and having it appear to come from in front of us would be one of those jarring things that could impact our feeling of realism.
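Here’s a crude sketch of what “mapping the sound somewhere” involves: deriving stereo gains from the bearing of the speaking avatar relative to where the listener is facing. Real systems would use HRTF-based spatial audio; note that this plain left/right pan literally cannot say “behind you”, which is exactly the realism gap described above.

```python
import math

def stereo_gains(listener_xz, listener_yaw_deg, source_xz):
    """Return (left_gain, right_gain) for a sound source, from its bearing
    relative to the listener's facing. A plain pan collapses "behind" to
    left or right -- HRTFs are what real spatial audio uses to fix that."""
    dx = source_xz[0] - listener_xz[0]
    dz = source_xz[1] - listener_xz[1]
    bearing = math.degrees(math.atan2(dx, dz)) - listener_yaw_deg
    pan = max(-90.0, min(90.0, bearing))        # clamp to hard left/right
    theta = math.radians((pan + 90.0) / 2.0)    # constant-power pan law
    return (math.cos(theta), math.sin(theta))

# An avatar speaking from behind-left gets collapsed to "hard left":
print(stereo_gains((0.0, 0.0), 0.0, (-3.0, -3.0)))
```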
Then there’s our own movement. When we want to see what’s to our left, we turn our heads; we don’t diddle with a game controller. Natural behavior would not only lead us to expect a head-turn to change the view, it would generate that movement instinctively. If we’re sitting in front of our huge HDTV, turning our head doesn’t change our avatar’s field of view, it makes the view disappear as the display moves out of our field of vision or focus.
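The contrast in control models fits in a few lines. The pose-reading function below is a hypothetical stand-in for whatever tracking API a real headset runtime exposes; the point is that the sensed head pose drives the render camera directly, so turning your head is the control.

```python
import math

def read_head_yaw(t):
    """Hypothetical stand-in for a headset tracking API: the user slowly
    sweeping their head left and right."""
    return 45.0 * math.sin(t)

def render_view(yaw_deg):
    print(f"rendering avatar view at yaw {yaw_deg:+6.1f} deg")

# The core of an immersive loop: sample the head, render the matching view,
# every frame. With a fixed monitor there is no head pose to sample, so the
# view only changes when the user works a controller.
for frame in range(5):
    render_view(read_head_yaw(frame * 0.5))
```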
A sophisticated, immersive, successful metaverse will surely encourage a shift to AR/VR headsets. While I put the two (augmented reality and virtual reality) together, it is almost certain that it’s VR that’s promoted by the social-media model of the metaverse. Augmented reality means adding things to real-world vision, with the presumption that what you’re adding is relatively simple in comparison to the real-world view passing through. The more complex the additions are, the more difficult it is to keep the real-world view and the augmentation in sync, and that means you’re looking at creating a full virtual reality by combining a camera view with the augmentation. Given that the metaverse software would then “see” both the real view and the augmentation, it would be far easier to synchronize the behavior of the two. Finally, full VR is really the only way to make the social metaverse realistic.
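A toy illustration of the camera-view point: once the software owns the camera frame, adding the augmentation is a per-pixel composite it fully controls, not a race against light passing through a lens. The character grids below just stand in for image buffers.

```python
def composite(camera_frame, overlay, mask):
    """Blend the overlay onto the camera frame wherever the mask is set --
    both layers are in software, so they can't drift out of sync."""
    return [[o if m else c for c, o, m in zip(crow, orow, mrow)]
            for crow, orow, mrow in zip(camera_frame, overlay, mask)]

camera = [["."] * 8 for _ in range(3)]    # stand-in for the real-world view
overlay = [["#"] * 8 for _ in range(3)]   # stand-in for the augmentation
mask = [[3 <= x <= 4 for x in range(8)] for _ in range(3)]
for row in composite(camera, overlay, mask):
    print("".join(row))
```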
The question of the role of AR/VR in the metaverse, and in other “Metaverse of Things” applications, is important because AR/VR limitations could constrain the metaverse experience, and applications of the metaverse, enough to reduce near-term pressure to support things like edge computing and low-latency networking. There have been comments on this topic for weeks, and many suggest that AR/VR isn’t really necessary, but I think the issue of immersion argues the other way.
The downside of metaverse AR/VR is that, for now, the technology can’t really support the mission unless we presume either that the AR/VR headset is specialized to a metaverse application or that such a headset can be connected just as we would connect an external display. The first of these could be problematic unless every metaverse provider adhered to a common standard, and the second (while preferred) is just not within the current state of the art. We don’t have the resolution needed, and the additional software to support a “virtual monitor” as big as a room, one you could turn your head to scan, is also lacking. Even in basic gaming AR/VR, we’re a long way from a strategy that could work.
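To see what that “virtual monitor” software would have to do, consider just the simplest slice of the problem: given a head yaw and the headset’s field of view, work out which part of a world-anchored screen is currently visible. Both figures below are assumptions.

```python
SCREEN_SPAN_DEG = (-80.0, 80.0)  # a world-anchored "monitor" 160 deg wide
HEADSET_FOV_DEG = 100.0          # assumed horizontal field of view

def visible_slice(head_yaw_deg):
    """Return the (lo, hi) angular slice of the screen in view, or None."""
    lo = max(SCREEN_SPAN_DEG[0], head_yaw_deg - HEADSET_FOV_DEG / 2)
    hi = min(SCREEN_SPAN_DEG[1], head_yaw_deg + HEADSET_FOV_DEG / 2)
    return (lo, hi) if lo < hi else None

for yaw in (-60, 0, 60, 140):
    print(f"head yaw {yaw:+4d} deg -> visible slice: {visible_slice(yaw)}")
```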
The question this raises is whether the things we need for the metaverse to really take off are, overall, just too expensive and difficult to create. We need edge computing, we need metro meshing, and we need AR/VR. If we have a billion metaversers out there, fully equipped, and we assume we can build good VR for three hundred bucks at volume, we have three hundred billion dollars’ worth of AR/VR alone. That would surely make vendors salivate, but is it realistic? We need a practical on-ramp approach.
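For the record, the back-of-the-envelope math behind that figure:

```python
users = 1_000_000_000   # "a billion metaversers"
unit_cost_usd = 300     # assumed volume price for good VR
total = users * unit_cost_usd
print(f"${total / 1e9:,.0f} billion for headsets alone")
```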
What I think will happen is that we’ll have a segmented metaverse model. Not only will different metaverse players (Meta and others) likely each create their own metaverse offering, the offerings will carefully balance what’s needed in the way of new technology investment with what they hope to gain from their users. Companies like Meta will almost surely either create or permit the use of “spaces” within a metaverse where admission will either require or encourage a higher level of investment.
The AR/VR headset issue is more complex, but it could obviously impact metaverse features and adoption. I think it’s clear that full VR is likely to be required for the metaverse, and I also think that even applications we’d classify as “augmented reality” would be limited unless we had better real-to-virtual-world coordination than classic AR could provide. The question is whether AR/VR applications outside the metaverse (not including gaming, which is clearly more metaverse-VR-like) would contribute to advances in AR/VR headsets. If they do, and if the headsets are more widely used as a result, then we could see a more advanced metaverse mix pretty quickly.