Can We Define the Next Big Tech Thing?

I blogged yesterday to ask the tech industry “What Lies Beyond?” What I want to do here, and what I deliberately did not do yesterday, is try to answer that question. That’s because my answer has to move from an analytical assessment of what I consider to be hard facts to interpretations that are, in the end, my personal judgment…an educated guess, if you like. I don’t want to mix those things on a topic this important.

If you look at our relationship with tech as it’s evolved over the last 70 years or so, it’s possible to see a trend that’s hardly surprising, but not always considered. Tech is getting more involved with us. In business, tech is moving closer to the how and not just the what. We used to enter transactions into databases by punching cards to record them, long after the event. Yesterday, I checked out of a market by waving my phone over a terminal, and everything those cards carried, and more, was pushed instantly through a whole series of applications.

In the consumer space, many people have built their lives around what is in fact a virtual world. Even I, a social-media Luddite, have many “friends” that I don’t see face to face, and some I’ve never seen that way at all. We are entertained by devices we carry with us, not ones we stick on a console in the living room. We crowdsource more and more, probably more than we should, because we can.

The way that tech has changed businesses and lives is important because it’s the sum of the changes we think are valuable that determines what we’re willing to spend on tech. Thus, if the question “What Lies Beyond?” that I asked in yesterday’s blog matters, then understanding how tech gets even closer to us, more involved with us, matters just as much. Features of technology are important only insofar as they support that continued shrinking of tech’s distance from our lives.

To me, there is a single term that reflects what we need to be thinking about, and that term is context. Suppose you had an oracle (a soothsayer, not the company!) who told you the absolute truth about everything. It would be great, right? Now imagine yourself sitting in a restaurant having lunch, and hearing the oracle say “when you mow your lawn, you need to set the mower an inch higher.” Or maybe, as you ponder an investment decision, hearing “you haven’t talked with Joan and Charlie for a while.” Both of those oracle comments may be true, and even helpful, but not in the context in which they’re delivered.

What lies beyond our on-demand world is the anticipated world. What allows useful anticipation is context.

Context is made up of four things. First, there’s what we are doing. Second, there’s what we need to do it. Third, there’s where we are doing it. Fourth, there’s the risk-reward balance of interruptions. Even if we’re pondering investments, knowing the building is on fire justifies interrupting our deliberations. Knowing that the lawn needs mowing probably does not.
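To make that concrete, here’s a minimal sketch of those four components as a data structure, with the fire-versus-lawn test applied. Everything here, names and thresholds alike, is my own invention for illustration:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Context:
    """The four components of context described above."""
    activity: str               # what we are doing
    resources: List[str]        # what we need to do it
    location: str               # where we are doing it
    interrupt_threshold: float  # the risk-reward bar an interruption must clear

@dataclass
class Event:
    description: str
    urgency: float  # 0.0 (trivial) to 1.0 (the building is on fire)

def should_interrupt(ctx: Context, event: Event) -> bool:
    """Interrupt only when an event's urgency clears the context's bar."""
    return event.urgency >= ctx.interrupt_threshold

# Pondering an investment decision sets a high bar for interruptions...
ctx = Context("reviewing investments", ["portfolio data"], "office", 0.8)
print(should_interrupt(ctx, Event("the building is on fire", 0.95)))  # True
print(should_interrupt(ctx, Event("the lawn needs mowing", 0.10)))    # False
```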

The problem with addressing context is that it requires too much from us. None of those four components are things our gadgets can readily provide. By nature, context is systemic, so we need to get the system in which we live, the real world, transported into our tech world. This is best accomplished, like many things, by moving from both sides at once. Thus, the what-lies-beyond future depends on three things: the metaverse, IoT, and AI.

What underpins context is what I’ve called the “metaverse of things” (MoT) or a digital twin. In order for tech to be contextual, it has to have a model of our context that it can deal with. The combination of digital twinning technology and an injection of metaverse principles can allow IoT and other information resources to be introduced into a contextual model that can also “contain” the user.
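As a rough illustration of the idea, and not any real framework, a digital twin in this sense is just a model object that aggregates live feeds and treats the user as an element inside the model rather than an outside observer. All of the names below are hypothetical:

```python
class DigitalTwin:
    """A contextual model that 'contains' the user alongside live feeds."""

    def __init__(self, user_id: str):
        self.user_id = user_id
        self.feeds = {}  # feed name -> latest reading

    def ingest(self, feed: str, reading) -> None:
        """IoT and other information resources push state into the model."""
        self.feeds[feed] = reading

    def snapshot(self) -> dict:
        """Applications query the model, never the raw devices."""
        return {"user": self.user_id, **self.feeds}

twin = DigitalTwin("some_user")
twin.ingest("thermostat_c", 21.5)
twin.ingest("phone_location", (39.95, -75.16))
print(twin.snapshot())
```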

I’ve blogged before about MoT, digital twinning, and contextual services, so I won’t repeat that here. Instead, I want to look at the tech impact of these concepts, by linking them back to the driving benefits.

Any widespread use of context demands, at minimum, a broad view of presence: not just the simple “I’m online” stuff we get now, but something body-specific, motion-and-location-specific. This would mean a combination of “tags” (which could be watches or even smartphones, but could also be RFID-like elements) and “sensors” that could read the tag positions. We could expect this to provide a mission for IoT, and we could also expect it to be multi-leveled, meaning that people in their homes or workplaces would “allow” greater precision in reading their tags than people walking the street would.
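Here’s a sketch of how that multi-leveled permission might work; the zones, precision figures, and blurring approach are purely illustrative assumptions:

```python
from enum import Enum

class Zone(Enum):
    HOME = "home"
    WORKPLACE = "workplace"
    PUBLIC = "public"

# Illustrative policy: how precisely a sensor may report a tag's position
# (in meters), depending on where the tag's owner currently is.
PRECISION_POLICY = {
    Zone.HOME: 0.5,       # room-level precision allowed at home
    Zone.WORKPLACE: 2.0,  # desk- or area-level precision at work
    Zone.PUBLIC: 50.0,    # street-level only for people out in public
}

def report_position(exact, zone):
    """Blur a tag reading to the precision its zone permits."""
    blur = PRECISION_POLICY[zone]
    return tuple(round(coord / blur) * blur for coord in exact)

print(report_position((12.34, 56.78), Zone.HOME))    # fine-grained
print(report_position((12.34, 56.78), Zone.PUBLIC))  # deliberately coarse
```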

Because this sort of tag/sensor combination is useless if every MoT player creates their own system, we can also expect that this will standardize at three levels. First, obviously, the sensors and tags themselves would be standardized. Second, the message formats would be standardized, so that applications with access rights could read the presence data. Finally, there would be service-level standardization, which would both provide access to sensor/tag data at regulation-compliant levels of anonymity and simplify software development by presenting “presence features” rather than raw data. You can’t have hundreds of applications polling a single sensor without blocking it.
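That third level is the interesting one. Below is a hedged sketch of what a “presence feature” service layer might look like, caching one sensor reading and fanning it out to many applications; the class, its methods, and the TTL scheme are all assumptions of mine, not any existing API:

```python
import time

class PresenceService:
    """Serves derived 'presence features' so applications never touch
    the sensor directly; one sensor poll fans out to many readers."""

    def __init__(self, sensor, ttl: float = 1.0):
        self.sensor = sensor  # the single underlying sensor (a callable here)
        self.ttl = ttl        # seconds a cached reading stays fresh
        self._cached = None
        self._stamp = 0.0

    def _reading(self):
        now = time.monotonic()
        if self._cached is None or now - self._stamp > self.ttl:
            self._cached = self.sensor()  # one access serves every caller
            self._stamp = now
        return self._cached

    def is_present(self, anonymize: bool = True) -> dict:
        """A feature, not raw data: 'someone is here', not who and exactly where."""
        tags = self._reading()
        return {"present": bool(tags)} if anonymize else {"tags": tags}

svc = PresenceService(sensor=lambda: ["tag-17", "tag-42"])
print(svc.is_present())                 # {'present': True}
print(svc.is_present(anonymize=False))  # raw view, for holders of access rights
```

The cache is aimed at exactly the blocking problem above: however many applications ask, the sensor is touched at most once per freshness window.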

The next thing you could expect is that latency becomes important, but it’s not just network latency that matters; it’s the length of the whole control loop. The first step in keeping latency down is to provide edge computing facilities that shorten transit delay, and the second is to improve network latency, first to the edge sites and then between them. If we assume that edge hosting is done per metro area, major cities could likely satisfy edge requirements from a single metro center. For “metro areas” that are spread out (Wyoming comes to mind; the whole state was defined as a single Local Access and Transport Area, or LATA), it would probably be necessary to spread the hosting out, or at least to manage a trade-off between the number of connected edge sites (to reduce the latency of reaching one) and the latency of the connections between them.
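Some back-of-the-envelope arithmetic shows why the control loop, not the network alone, is the real budget. Every figure here is an assumption for illustration only:

```python
# Hypothetical one-way latency contributions, in milliseconds.
budget_ms = 50.0        # assumed ceiling for a useful control loop

access_to_edge = 5.0    # device to the nearest metro edge site
edge_processing = 15.0  # event processing at the edge
edge_to_edge = 8.0      # hop to a second edge site, if the app is federated
return_path = 5.0       # result back to the device or actuator

loop_ms = access_to_edge + edge_processing + edge_to_edge + return_path
print(f"control loop: {loop_ms} ms against a {budget_ms} ms budget")
# A faster network attacks only the 18 ms of transit here; the 15 ms of
# processing is untouched, which is why edge placement matters as much
# as link speed.
```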

The notion of presence as a physical property has to be augmented by presence in a behavioral sense, meaning that applications would need to understand that “what am I doing?” dimension. Some of that knowledge could come from physical presence; I’m driving a car, walking along, riding the subway. Some could come from my interaction context; I’m chatting with Joan or Charlie, texting with my mom, or perhaps receiving a call that’s been prioritized based on my policies. This information is generally available today, but for it to be a real driver of applications, it would have to be deliverable in a standard form and conform to regulatory policies on privacy and personal protection, just as physical presence would.
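What might a “standard form” look like? Here’s one hypothetical rendering as a simple record; the field names are mine, not any real or proposed standard:

```python
from dataclasses import dataclass, asdict
from typing import Optional
import json

@dataclass
class BehavioralPresence:
    """A possible standard form for the 'what am I doing?' dimension;
    the field names are illustrative, not any real standard."""
    physical: str         # e.g. "driving", "walking", "riding the subway"
    interaction: str      # e.g. "chatting", "texting", "on a call"
    party: Optional[str]  # the counterparty, if policy allows disclosure
    interruptible: bool   # derived from the user's own priority policies

record = BehavioralPresence(
    physical="riding the subway",
    interaction="texting",
    party=None,           # privacy policy withholds who I'm texting with
    interruptible=False,
)
print(json.dumps(asdict(record)))
```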

What I think this exercise demonstrates is that the future of tech depends on the creation of an ecosystem, not on the individual technologies that make it up. If you improve network latency, you’ve removed one barrier to a set of applications that still face a half-dozen other inhibitors. The same is true of edge computing or IoT sensors. Our problem in advancing tech to the next level, meaning helping workers more directly to enhance their productivity, or improving quality of life for consumers, is that we’re looking at a strand of spaghetti and calling it a dish.

If we want to advance tech, in computing, networking, personal technology, or whatever, then we need to support this ecosystemic view in some way. If we don’t, then we’ll likely stumble on a solution eventually, but it will take decades where doing it right might take months. Which tech evolution do you prefer? I think you know my choice.