Where Do We Get the “Context” in Contextual Services?

Some of the questions and comments I’ve gotten on my blog on contextual services and point-of-activity empowerment ask, at least implicitly, for an expansion on one key point. I noted the importance of relating the “location” of a service user to what is nearby, meaning “contextual location”. I didn’t explain how one would do that, or how the capability would be used. Actually, I did touch on that back in 2019, and I did a three-part series on new contextual services in September of 2022 (HERE, HERE, and HERE). What I think people are asking for is an application of the concepts of those past blogs to the examples I presented on January 5th, so let’s give that a shot.

My view of a service user driving or walking along is of a digital-twin, metaverse-like model that represents that user in relation to their surroundings. Rather than offering user location in latitude-and-longitude terms, offer it relative to the elements of the real world that are relevant to the user’s interests, missions, and so forth. The question is how that relative/relevant property set is reflected in the metaverse of the specific user.
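
To illustrate what “relative rather than absolute” might look like as data, here’s a tiny Python sketch. Every name in it is invented for illustration; the point is only that the twin describes surroundings in terms of bearing, range, and relevance to the user’s mission, not coordinates.

```python
from dataclasses import dataclass

@dataclass
class NearbyElement:
    """A real-world element expressed relative to the user, not in lat/long."""
    kind: str           # e.g. "vehicle", "shop", "valve"
    bearing_deg: float  # direction from the user's heading, degrees clockwise
    range_m: float      # distance from the user, in meters
    relevance: float    # how strongly it relates to the user's mission (0..1)

# The user's twin holds surroundings as relationships, not coordinates.
twin_view = [NearbyElement("shop", 30.0, 120.0, 0.9),
             NearbyElement("vehicle", 185.0, 45.0, 0.4)]
```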

In 2019 I introduced a concept I called an “information field”. This is a data set that is asserted by everything that impacts the users’ metaverses. Some information fields are asserted by fixed objects like lamp posts or buildings or curbs. Others are asserted by other users, by moving objects, and even by things like time of day or weather. On the receiving side, each user’s metaverse model includes what I’ll call an “information receptor”, which represents the things that the user is, explicitly or implicitly, sensitive to. When an asserted information field meets the receptor’s sensitivity properties, the field becomes a stimulus input to the user’s metaverse.
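
To make this pairing concrete, here’s a minimal Python sketch of how fields, receptors, and stimulus events might fit together. Everything in it (the class names, the stimulate function, the property keys) is hypothetical; it illustrates the concept, it doesn’t implement any real system.

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class InformationField:
    """Data asserted by something (fixed or mobile) into its surroundings."""
    source: str                 # what asserts the field, e.g. "lamp_post_17"
    properties: dict[str, Any]  # e.g. {"type": "vehicle", "distance_m": 40.0}

@dataclass
class InformationReceptor:
    """What a user's metaverse model is, explicitly or implicitly, sensitive to."""
    predicate: Callable[[InformationField], bool]

def stimulate(model_events: list, receptor: InformationReceptor,
              fields: list[InformationField]) -> None:
    """A field that meets the receptor's sensitivity becomes a stimulus input."""
    for f in fields:
        if receptor.predicate(f):
            model_events.append(("stimulus", f.source, f.properties))
```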

Say our user is driving down a road and there’s a vehicle approaching. That vehicle asserts an information field, and our user deploys an information receptor. We might say that the receptor is sensitive to another vehicle if it is within a given distance or closing faster than a given rate. If the field matches the sensitivity thresholds, the “logic” of the user’s contextual metaverse receives a generated event.
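
Continuing the sketch above, the driving case might look like this. The distance and closing-rate thresholds are made-up numbers, there only to show the shape of the sensitivity test.

```python
def vehicle_sensitive(f: InformationField,
                      max_distance_m: float = 50.0,
                      min_closing_mps: float = 10.0) -> bool:
    """Sensitive to another vehicle within a given distance or closing fast."""
    p = f.properties
    if p.get("type") != "vehicle":
        return False
    return (p.get("distance_m", float("inf")) <= max_distance_m
            or p.get("closing_rate_mps", 0.0) >= min_closing_mps)

driver_receptor = InformationReceptor(predicate=vehicle_sensitive)
```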

Or let’s say our user is shopping. Stores assert information fields that represent what is available there, and at what price. Our user sets an explicit sensitivity to the goal of the shopping trip, and if a shop’s information field matches that goal sensitivity, the user gets an event.

Or maybe our user is a worker trying to locate a valve to repair. The valve asserts an information field, and the state of the valve can be part of it. If the state matches the worker’s receptor sensitivity, and if any other factors known about the target valve match too, the worker would get an event notice and could use it to home in on the thing that needs fixing.
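
The shopping and repair cases are the same mechanism with different sensitivity tests: match a field’s properties against what the user is looking for. Here’s a rough sketch for the valve case, with invented property names; any other known factors about the target would just be more entries in the wanted dictionary.

```python
def state_match(f: InformationField, wanted: dict) -> bool:
    """Match when the field agrees with every known factor about the target."""
    return all(f.properties.get(k) == v for k, v in wanted.items())

# Hypothetical receptor: the worker is sensitive only to a valve in fault state.
valve_receptor = InformationReceptor(
    predicate=lambda f: state_match(f, {"type": "valve", "state": "fault"}))
```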

We could say that in any given location there exists a set of information fields that could be relevant to a user there. This set would change over time as mobile elements shifted into or out of proximity, and a given user’s focus location could also change if the user were mobile. The user’s receptor sensitivity would determine which of the information fields were relevant, so a user walking down the street toward a shop wouldn’t be bothered by the fact that they were passing over a manhole access to a wiring closet, while a repair team might be sensitive to that manhole and not to the shop.
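
One way to read that is as a two-step filter: first collect the fields asserted near the location, then let the receptor decide which of them matter. A rough sketch, again with invented names, and assuming each field carries a 2D position property:

```python
import math

def distance(a: tuple[float, float], b: tuple[float, float]) -> float:
    return math.hypot(a[0] - b[0], a[1] - b[1])

def fields_at(location: tuple[float, float],
              all_fields: list[InformationField],
              proximity_m: float) -> list[InformationField]:
    """The field set at a location; it changes as mobile sources move about."""
    # Assumes each field's properties include "position": (x, y).
    return [f for f in all_fields
            if distance(location, f.properties["position"]) <= proximity_m]

def relevant_to(receptor: InformationReceptor,
                candidates: list[InformationField]) -> list[InformationField]:
    # Receptor sensitivity decides relevance: the shopper ignores the manhole,
    # the repair team ignores the shop.
    return [f for f in candidates if receptor.predicate(f)]
```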

You can see (I hope) how this relates to contextual services and point-of-activity empowerment. The approach would give a user an event, which could (in their metaverse model of their activity) trigger some information delivery. You can also see, perhaps, how this could be linked to some of my musings about a generalized architecture for a metaverse.

In early March of 2022, I did a series of blogs on modeling digital-twin systems and the role a metaverse architecture might play. One of the notions in the series was the “locale”, which I defined as a virtual place that collected inhabitants who could interact with each other. That concept could be applied to our contextual services discussion, with perhaps a bit more elasticity.

First, the locale in contextual services is a virtual model of a real place. One useful way to think of it is that a user of a contextual service is surrounded by a “field” that represents the user’s locale. The user can be aware of things within the locale, and would be expected to be unaware of things outside it. The exact size of a locale would likely need to be adjusted based on the user’s position, speed of motion, and mission. A driver, for example, might have a locale that’s perhaps a block or two around the driver’s current location. Other vehicles and pedestrians outside the driver’s locale need not be considered when checking how their information fields match the user’s receptors, which simplifies the processing of these intersections.
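
As a sketch of that pruning step (reusing fields_at from earlier, with made-up base radii and scaling), the locale could grow with speed and vary with mission, and only fields inside it would ever reach receptor matching:

```python
def locale_radius_m(speed_mps: float, mission: str) -> float:
    """Grow the locale with speed; a driver 'sees' farther than a stroller."""
    base = {"driving": 200.0, "walking": 50.0, "repair": 25.0}.get(mission, 100.0)
    return base + 3.0 * speed_mps  # assumed scaling, purely illustrative

def fields_in_locale(user_pos: tuple[float, float], speed_mps: float,
                     mission: str, all_fields: list[InformationField]):
    """Prune to the locale first, so matching only touches nearby fields."""
    return fields_at(user_pos, all_fields, locale_radius_m(speed_mps, mission))
```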

We could also consider locales to be fixed in location, based on a number of factors like traffic flow and direction, the nature of the structures, speed limits, and so forth. If this were done, then all of the vehicles within the locale would have the same information fields presented, and so sensitivity receptors for all of them could be matched against that set of fields in a single step.
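
A sketch of that single-step matching, under the same assumptions as before: every inhabitant of a fixed locale shares one field set, so all of their receptors can be run against it in one pass.

```python
def match_locale(locale_fields: list[InformationField],
                 receptors: dict[str, InformationReceptor]) -> dict[str, list]:
    """Match every inhabitant's receptor against the locale's shared field set."""
    events: dict[str, list] = {user: [] for user in receptors}
    for f in locale_fields:                # each field is examined once...
        for user, r in receptors.items():  # ...against every inhabitant
            if r.predicate(f):
                events[user].append(f)
    return events
```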

I think the first of these models, the user-centered locale, probably works best for vehicular motion, because the pace of change in conditions is faster and because maneuvering to avoid things is more critical. For pedestrian movement I think the fixed-locale approach is better, because it’s reasonable to think of “walking” as moving between locales, each of which has contents (shops, restaurants, service stations, banks, etc.) that would assert information fields. On entry, a user would then be “presented” with these fields to match against their receptors.

Contextual services, obviously, require context first and foremost, and context requires the assembly of information that is both physically and “philosophically” local to the service user. That’s more than just reading sensors; it’s interpreting the meaning of the relationships the data uncovers. It’s hard for me to see how this could be accomplished without a digital-twin concept, which is my basis for an entry-level metaverse.

But it’s also hard to see how you get all of this in place. Certainly we can’t expect immediate wide deployment of this sort of thing, which of course relies largely on edge computing, at least at the “metro edge”. The good news is that a “locale” is inherently local, which means contextual services could be offered in a limited area to support limited missions. It may be that contextual services, metaverses, and edge computing all depend on identifying these local opportunities, then building on their success.