Is There a Future in Augmented/Virtual Reality?

Last week there were a number of stories out on virtual reality (VR).  It’s not that the notion is new; gaming developers have tried to deliver on it for a decade or more, and Google’s Glass was an early product in the space.  One of the more interesting stories was a joke.  On April 1st, Google spoofed the space with an offering it called “Cardboard Plastic”, a clear plastic visor that hid nothing and did nothing.  It was a fun spoof, but that doesn’t mean there’s nothing real about VR.  There are a dozen or more real products out there with various capabilities.  I’m not going to focus on the design of these, but rather on the applications and impact.

From an application perspective, the most common uses of VR are gaming and presenting people with a visual field that overlays things like their text messages, which is a kind of light integration with “real” reality.  Together these demonstrate VR’s scope: you can create a virtual reality in the true sense, meaning an alternate reality, or you can somehow augment what’s real.

Just as we have two classes of application, we have two classes of technology: the “complete” models and the “augmented” models.  A complete VR model creates the entire visual experience for the user; to do that, it may mix generated graphics with a captured real-time (or stored) image.  The augmented models are designed to show something overlaid on a real visual field.  Google’s Glass was an example of augmented reality (Cardboard Plastic would have been a “lightly augmented” one too).  Complete VR can be applied to either alternate-reality or augmented-reality applications, but the augmented approach is obviously targeted at supplementing what’s real.

The spoof notion of Cardboard Plastic is a kind of signal for where augmented reality is headed, because it demonstrates that you probably don’t want to spend a lot of money and burn a lot of compute power recreating exactly what the user would see if there were nothing in the way.  Better to show them reality through the device and then add projected graphics on top.  However, the technology to provide “real-looking” projections and real see-through is difficult to master, particularly at price points that would be affordable.

The complete model is easier on one level and harder on another.  It’s easy to capture and redisplay a visual field; we do that all the time with phones and the live displays on cameras.  The problem is the accuracy of the display: does it “look” real and provide enough detail to be useful?  We can approximate both fantasy virtual worlds and augmented reality with complete VR models today, but the experience isn’t convincing.  In particular, the complete model of VR has to deal with the little head movements that the human eye/brain combination wouldn’t convert into a major visual shift, but that VR headsets tend to follow religiously.  Many people get dizzy, in fact.
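To make that concrete, here’s a minimal sketch, in Python, of one way to ignore tiny head movements: a simple “dead band” that holds the rendered view until movement exceeds a small threshold.  The threshold value and the HeadPose structure are my own illustrative assumptions, not how any actual headset handles tracking (and in practice there are trade-offs, since any filtering also adds its own artifacts).

```python
from dataclasses import dataclass

@dataclass
class HeadPose:
    yaw: float    # degrees of left/right rotation
    pitch: float  # degrees of up/down rotation

class DeadBandFilter:
    """Hold the rendered view steady until head movement exceeds a small threshold."""

    def __init__(self, threshold_deg: float = 0.5):
        self.threshold = threshold_deg
        self.last_rendered = HeadPose(0.0, 0.0)

    def filter(self, measured: HeadPose) -> HeadPose:
        # Only shift the rendered view when the measured pose has moved
        # beyond the threshold; tiny jitters keep the previous view.
        if (abs(measured.yaw - self.last_rendered.yaw) > self.threshold or
                abs(measured.pitch - self.last_rendered.pitch) > self.threshold):
            self.last_rendered = measured
        return self.last_rendered

# A tiny jitter (0.1 degrees) leaves the view alone; a real head turn moves it.
f = DeadBandFilter()
print(f.filter(HeadPose(0.1, 0.05)))  # HeadPose(yaw=0.0, pitch=0.0)
print(f.filter(HeadPose(5.0, 0.0)))   # HeadPose(yaw=5.0, pitch=0.0)
```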

In theory, over the long term the difference between the two will shrink as graphics technology improves.  In the near term, the complete models are best seen as a window into a primarily virtual world, and the augmented models as a window into the real world.  The technical challenge of presenting a credible image of the real world needn’t be solved for augmented-reality devices, which is a benefit when the real world is the major visual focus of the application.  For this piece I need different acronyms for the two, so I’ll call anything that augments reality “AR” and anything that creates a virtual-world reality “VR”.

The applications of AR are the most compelling from an overall impact perspective.  I covered some when Google’s Glass came out; for consumers, they include putting visual tags on things they’re passing, heads-up driving displays, or just social horseplay.  For workers, good examples of valuable applications include seeing a schematic of something they’re viewing in real time, displaying the steps to be taken in a manual task, and warning of interfering or incorrect conditions.

One thing that should be clear in all these applications is that we’re talking about mobile/wearable technology here, which means that the value of AR/VR outside pure fantasy-world entertainment is going to depend on contextual processing of the stimuli that impact the wearer.  You can’t augment reality for a user if you don’t know what reality is.

There are two levels to augmenting reality, two layers of context.  One is what surrounds the user, what the user might be seeing or interacting with.  Think of this as a set of “information fields” that are emitted by things (yes, including IoT “things”).  Included are the geographic context of the user, the social context (who/what might be physically nearby or socially connected), and the “retail” context representing things that might be offered to the user.  The second level is the user’s attention, which means what the user is looking at.  You can’t provide any useful form of AR without reading the location/focus of the user’s eyes.  Fortunately, that technology has existed in high-end cameras for a long time.
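As a rough illustration of those two layers, here’s a small Python sketch of the data involved: “information fields” emitted by things, tagged as geographic, social, or retail context, plus an attention reading that says where the user’s eyes are pointed.  The field names and structures are my own assumptions for illustration, not a standard schema for any real AR platform.

```python
from dataclasses import dataclass
from enum import Enum

class FieldKind(Enum):
    GEOGRAPHIC = "geographic"  # where the user is and what is physically nearby
    SOCIAL = "social"          # who/what is nearby or socially connected
    RETAIL = "retail"          # things that might be offered to the user

@dataclass
class InformationField:
    """Context 'emitted' by a thing (including IoT things) near the user."""
    thing_id: str
    kind: FieldKind
    position: tuple[float, float]  # lat/lon, as an assumption
    payload: dict                  # whatever the thing chooses to expose

@dataclass
class UserAttention:
    """The second layer: where the user's eyes are actually looking."""
    gaze_direction: tuple[float, float]  # yaw/pitch of gaze, in degrees
    focus_distance_m: float              # rough depth of focus

# Example: a storefront emits a retail field; the attention layer tells an
# application whether the user is even looking toward it.
field = InformationField("store-123", FieldKind.RETAIL,
                         (40.7128, -74.0060), {"offer": "coffee discount"})
attention = UserAttention(gaze_direction=(12.0, -3.0), focus_distance_m=8.0)
print(field.kind.value, attention.gaze_direction)
```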

AR would demand that you position augmented elements in the visual field at the point where the real elements they represent are seen.  However, if you move your eyes away from a real element, that should probably signal a loss of interest, which should then dim or remove the augmentation elements associated with it.  Otherwise you clutter up the visual field with augmentations and can’t see the real world any longer.
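Here’s a minimal Python sketch of that gaze-driven dimming idea: each augmentation is anchored where its real element sits in the visual field, and anything the eyes have moved away from fades out and is eventually removed.  The angular threshold and fade rates are invented purely for illustration.

```python
from dataclasses import dataclass

@dataclass
class Augmentation:
    label: str
    anchor_yaw: float    # where the real element sits in the visual field
    anchor_pitch: float
    opacity: float = 1.0

def angular_distance(a_yaw, a_pitch, b_yaw, b_pitch) -> float:
    # Crude angular separation; good enough for a sketch.
    return ((a_yaw - b_yaw) ** 2 + (a_pitch - b_pitch) ** 2) ** 0.5

def update_augmentations(augs, gaze_yaw, gaze_pitch,
                         interest_radius_deg=10.0,
                         fade_step=0.2, restore_step=0.4):
    """Dim augmentations the user isn't looking at; restore the ones they are."""
    for aug in augs:
        looking_at_it = angular_distance(aug.anchor_yaw, aug.anchor_pitch,
                                         gaze_yaw, gaze_pitch) <= interest_radius_deg
        if looking_at_it:
            aug.opacity = min(1.0, aug.opacity + restore_step)
        else:
            aug.opacity = max(0.0, aug.opacity - fade_step)
    # Fully faded augmentations are dropped so they stop cluttering the view.
    return [a for a in augs if a.opacity > 0.0]

# Example: the user keeps looking at the door tag, so the window tag fades away.
augs = [Augmentation("door schematic", 0.0, 0.0),
        Augmentation("window note", 45.0, 10.0)]
for _ in range(6):
    augs = update_augmentations(augs, gaze_yaw=1.0, gaze_pitch=0.0)
print([(a.label, round(a.opacity, 1)) for a in augs])  # only the door schematic remains
```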

As I said earlier here, and in prior blogs on AR/VR, there is tremendous potential in the space, but you can’t realize it by focusing on the device alone.  You have to be able to frame AR in a real context or it’s just gaming, whatever technology you use.  The second of our two layers of context could be addressed in the device itself, but not the first.

At its best, AR could be a driver for contextual behavior support, which I’ve also talked about before.  Those “fields” emitted by various “things” could, if organized and cataloged, tell an application what a user is seeing given the orientation and focus of their AR headset.  If you have this kind of input you can augment reality; if you don’t, you’re not moving the ball much, and you’re limiting the utility and impact of your implementation.
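To illustrate, here’s a simplified Python sketch of how a catalog of “thing fields” might be queried against the wearer’s position and headset heading to decide what the user could be seeing.  The bearing math is a crude flat-earth approximation and the catalog structure is assumed, not drawn from any real contextual service.

```python
import math
from dataclasses import dataclass

@dataclass
class CatalogedThing:
    thing_id: str
    lat: float
    lon: float
    description: str

def bearing_deg(user_lat, user_lon, lat, lon) -> float:
    """Rough compass bearing from the user to a thing (flat-earth approximation)."""
    return math.degrees(math.atan2(lon - user_lon, lat - user_lat)) % 360.0

def things_in_view(catalog, user_lat, user_lon, heading_deg, fov_deg=30.0):
    """Return cataloged things whose bearing falls inside the headset's field of view."""
    visible = []
    for thing in catalog:
        b = bearing_deg(user_lat, user_lon, thing.lat, thing.lon)
        delta = min(abs(b - heading_deg), 360.0 - abs(b - heading_deg))
        if delta <= fov_deg / 2:
            visible.append(thing)
    return visible

# Example: with the headset pointed roughly north, only the thing to the
# north comes back as a candidate for augmentation.
catalog = [CatalogedThing("valve-7", 40.7130, -74.0060, "steam valve, needs inspection"),
           CatalogedThing("sign-2", 40.7126, -74.0060, "exit sign")]
print([t.thing_id for t in things_in_view(catalog, 40.7128, -74.0060,
                                          heading_deg=0.0)])  # ['valve-7']
```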

This frames the challenge for useful augmented reality, which includes all those business apps.  The failure of the initial Google Glass model shows, I think, that we can’t have AR without the supporting “thing fields”.  Those fields have to come either because AR capability pulls them through or because they arise from IoT and contextual services, and I think the latter model is the more realistic one, because the cost of extensive deployment of information-field resources would be too high for an emerging opportunity like AR to pull through on its own.  Google Glass showed that too.

What this means is that meaningful AR/VR will happen only if we get a realistic model for IoT that can combine with contextual user-agent services to create the framework.  That makes the IoT/context combination even more critical.