Would 5G and Edge Computing Really Launch AR?

What is common to all dreams?  The answer is “dreamers”, and that’s what seems to be dominating the whole 5G, edge, and augmented reality space.  Light Reading’s piece on the subject yesterday talks about how network giants Verizon and AT&T are “hoping” that 5G and edge computing will save augmented reality (AR).  “Hoping” and “dreaming” are different largely in how disconnected each is from “reality” of a different sort, the objective reality of the market.  I think these operators are, as usual, drifting into dreamland.

“Augmented reality” is the process of mixing computer-generated information, images, sounds, and so forth, with the actual scene around us.  As the term suggests, it’s supposed to add information to our inventory of data, almost always by enriching our visual field in some way.  A label superimposed on a face in the crowd could tell you that the person approaching is someone you’d met before, or that the storefront two blocks down on the other side of the street has the product you’re looking for.

There is no question that there’s a need for network-delivered information to do the augmenting of a given reality.  There’s no question that the processing needed to correlate what we see “naturally” with useful augmentations has to be hosted somewhere.  But without the information, it doesn’t much matter how fast our networks are or where that hosting is done.  The corollary to that truth is that providing “better” network or hosting capability doesn’t save AR.  What we have there is the classic “ecosystem” problem.

Predators depend on prey animals, which depend largely on vegetation.  If you want a healthy ecosystem, you can’t populate a desert with sheep and hope that predators will come to thin the herd to a point that the vegetation can sustain, because you don’t have vegetation and you don’t have predators.  Somehow you need to get all the pieces into place before natural processes combine to create a balanced ecology.

If we apply this principle to the technology world in general, or the AR world in particular, we’d say that an application (like AR) will emerge by first harnessing the elements of the current ecosystem it needs to create the new one.  The market opportunity will then build more of those elements, and begin transforming them to better suit the new missions of the new world.  AR, then, should be emerging through piggybacking on current capabilities.

What we call “the Internet” today is really the worldwide web, which is a hypertext application that emerged in 1989.  It was applied first on “the Internet”, a network largely focused on universities and government projects.  We had devices (routers) to move traffic for the Internet, and now we had a visual, generally interesting way of presenting information through it.  At the same time, operators were struggling with the question of how voice calls were going to fill the capacity that new fiber technology was going to create.  Consumer data?  Perfect answer.

What this picture shows about AR is that its growth depends on a combination of technology and opportunity.  What I think network operators are doing today is believing that the technology is the barrier, that opportunity will automatically create itself once the technical framework has been established.  I don’t believe that’s workable, and yet it’s exactly what operators have done for ages—the old “field-of-dreams-build-it-and-they-will-come” mindset.

That’s how we get into points in the article I cited, like “To get that type of VR-quality latency, operators will need to move the compute power closer to the network edge.”  We’re postulating the needs of an application, and the technology best suited to deliver it, without the benefit of the application itself.  We had the Internet for decades before the web was invented, but without the web it’s very likely that we’d never have had “the Internet” as it is today.  Put another way, the best way to look at AR evolution and success is to look for the single basic technology strategy that would tie together all the resources we know we need.

I’ve proposed that one way of visualizing that basic technology is the notion of information fields.  In a virtual/augmented world, we move in parallel through the real world (defined by time and geography) and the information world (defined by knowledge).  Each element in the real world could be seen as emitting an information field, describing its goals, capabilities, properties…whatever.  A shop would emit information about what it sells, where it is, when it’s open, and also what it might be looking for.

So would each of us.  When we are shopping, our information field, advertising that fact, intersects with other fields representing the thing we’re shopping for.  The intersection can then trigger, in us and in the shops, awareness of the mutual opportunity.  That awareness can be signaled to us, in this case via AR.  The process of recognizing mutual opportunity or relevance among information fields is what I’ve referred to as contextualization.  We live in a context, with a set of goals and priorities.  What AR could do is present information from the virtual world in our real world, based on our context.  Contextualization and information fields are to AR what HTTP and HTML are to the worldwide web.

Except, of course, we don’t have any firm and generally accepted vision of how information fields would be represented and how contextualization would be applied.  I’ve suggested in prior blogs that every real-world element would be represented in the virtual world by an agent process.  It’s that process that would do the heavy lifting, the assimilation of information fields and the contextualization.  That process would then provide the “augmentation” to the reality that we saw in our (hopefully more compact and attractive) AR glasses.
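
To make that a little more concrete, here’s a minimal sketch of what an information field and its agent process might look like.  It’s purely illustrative, since no accepted representation exists; every name here (InformationField, AgentProcess, the offers/wants sets) is a hypothetical of my own.

```python
from dataclasses import dataclass, field

@dataclass
class InformationField:
    """A hypothetical "information field" emitted by a real-world element."""
    emitter_id: str                                 # who or what is emitting (shop, person, vehicle...)
    location: tuple                                 # (latitude, longitude) of the emitter
    offers: set = field(default_factory=set)        # what the element provides
    wants: set = field(default_factory=set)         # what the element is looking for
    properties: dict = field(default_factory=dict)  # hours, prices, and other descriptive data

class AgentProcess:
    """A stand-in for the virtual-world agent representing one real-world element.
    It holds its element's field and checks other fields for overlap, which is
    the "contextualization" step described above."""
    def __init__(self, my_field):
        self.my_field = my_field

    def relevant(self, other):
        # A mutual opportunity exists where what I want intersects what they offer.
        return self.my_field.wants & other.offers

# Example: my agent, out shopping for coffee, encounters a cafe's field.
me = AgentProcess(InformationField("user:me", (40.740, -73.990), wants={"coffee"}))
cafe = InformationField("shop:cafe-42", (40.741, -73.989),
                        offers={"coffee", "pastry"},
                        properties={"open": "07:00-19:00"})
print(me.relevant(cafe))   # {'coffee'} -> something worth surfacing in the AR glasses
```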

It’s not likely that the implementation of this concept would be in the form of “fields” in a literal sense.  Think of it as a big database.  Say that one index was location, and that for each location the database could return the “fields” that “permeated” (in a virtual sense) that location.  As we moved through the world, our own interest “fields” would be matched to the fields associated with the locations we passed through.
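
Here’s a rough sketch of that “big database” idea, continuing the classes from the earlier example.  A simple grid index on location stands in for whatever a real system would use (geohashes, an R-tree, and so on); the resolution and names are assumptions for illustration only.

```python
from collections import defaultdict

CELL = 0.001   # grid resolution in degrees; roughly a city block

def cell_of(lat, lon):
    """Map a coordinate to the coarse grid cell used as the location index."""
    return (round(lat / CELL), round(lon / CELL))

class FieldStore:
    """Location-indexed store of information fields: the fields that "permeate" a location."""
    def __init__(self):
        self.by_cell = defaultdict(list)

    def publish(self, f):
        self.by_cell[cell_of(*f.location)].append(f)

    def fields_at(self, lat, lon):
        """Return the fields associated with the cell the user is passing through."""
        return self.by_cell.get(cell_of(lat, lon), [])

# As we move through the world, our own interest field is matched against
# the fields at each location we pass through.
store = FieldStore()
store.publish(cafe)                      # the cafe from the earlier sketch
for f in store.fields_at(40.741, -73.989):
    hits = me.relevant(f)
    if hits:
        print(f"{f.emitter_id} matches {hits} at {f.location}")
```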

The same approach could be taken with something other than user location as the “key factor”.  Any interest or offering could be used to link a user/worker to an “offer” or capability.  If the user happens to be within view of the place where the offer could be realized, that place could be highlighted in the user’s field of view.  If not, an arrow could be presented to lead the user to the location.
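
The sketch below extends the same example, keying the lookup on an interest rather than a location and using a crude bearing-and-field-of-view test to decide between a highlight and a guiding arrow.  The names and the geometry are, again, only assumptions to make the idea tangible.

```python
import math
from collections import defaultdict

class InterestIndex:
    """Index fields by what they offer, so any interest can serve as the "key factor"."""
    def __init__(self, fields):
        self.by_offer = defaultdict(list)
        for f in fields:
            for offer in f.offers:
                self.by_offer[offer].append(f)

    def matching(self, interest):
        return self.by_offer.get(interest, [])

def bearing(from_loc, to_loc):
    """Rough compass bearing (degrees) from the user's location to the target."""
    dlat = to_loc[0] - from_loc[0]
    dlon = to_loc[1] - from_loc[1]
    return math.degrees(math.atan2(dlon, dlat)) % 360

def present(user_loc, user_heading, target, fov_deg=60):
    """Decide how to augment the scene: highlight the target if it's in view, otherwise point to it."""
    b = bearing(user_loc, target.location)
    off_axis = min(abs(b - user_heading), 360 - abs(b - user_heading))
    if off_axis <= fov_deg / 2:
        return f"highlight {target.emitter_id} in the field of view"
    return f"show an arrow toward bearing {b:.0f} degrees for {target.emitter_id}"

# The user's interest ("coffee") is the key; the cafe from the earlier sketch is the offer.
index = InterestIndex([cafe])
for hit in index.matching("coffee"):
    print(present((40.740, -73.990), user_heading=0.0, target=hit))
```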

There are probably a lot of other ways to do this, just as there would be a lot of ways to have created something like the worldwide web.  It’s hard to say now whether any of the other web options would have been better, but it’s easy to see that if no option at all had been selected, we’d be in a very different place today.

The web, in a timing sense, was an accident.  It might be that AR will likewise be an accidental combination of capabilities and interests, but we have an advantage in that we can visualize the goal we’re seeking, and we can therefore visualize both abstract (the “information fields”) and concrete (the agent processes and the database) mechanisms for getting to the goal.  The challenge is putting the pieces together.  A couple of decades ago we might have seen something like this act as a catalyst for venture capital investment, but today’s VCs aren’t interested in complexity.

That doesn’t mean a startup or open-source project couldn’t do this, and that’s what I think is going to happen.  I don’t think that AT&T or Verizon, or edge computing or 5G, are going to catapult us into the AR future.  They’re just going to get dragged along for the ride.