Contextualization has a lot of pieces, but the most obvious is the “contextualizer”, the trusted personal agent process that actually interacts with the information fields that contribute to context, and from them generates contextual results. This is the third blog in the series; the FIRST and SECOND should be read before you dive in here.
Remember my “ghost” example? A personal agent is an invisible companion who’s expected to be our window on the virtual world. As such the agent has to share context with us to make its insights relevant, and that’s accomplished by drawing on the “information fields” that represent contextual inputs (the subject of my second blog). It’s easy at one level to hypothesize how a personal agent could take requests and juggle information fields, but remember that our goal is to make the interaction with our ghost as natural as it would be with a real human companion. That’s what “contextualizing” is about.
I submit that there are four steps in making our agent a true virtual companion. The first and most important is to determine what we’re doing. You could say that this is an expanded form of presence, a term used to describe our online availability. The second is to gather relevant information based on our “presence”, which is done through the information fields I discussed in my last blog. The third step is to assess that information against a contextual framework to prioritize its presentation. If I’m sitting in my favorite chair reading, I’m not as likely to be interested in traffic conditions on my street as I would be if I were preparing to drive. Step four? It’s to keep re-evaluating as my behavior changes.
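To make the four steps a little more concrete, here’s a minimal sketch of how a contextualizer’s main loop might be organized, written in Python purely for convenience. Every name in it (infer_mission, prioritize, the field objects) is a placeholder of mine, not a reference to any real product or API.

```python
import time

def infer_mission(signals):
    # Placeholder for step 1: a real agent would match signals against history.
    return {"name": "unknown", "certainty": 0.0}

def prioritize(mission, items):
    # Placeholder for step 3: a certainty-vs-threshold test (discussed later) goes here.
    return items

def contextualizer_loop(sense, fields, present, interval_s=30):
    """sense() returns raw signals; fields is a list of information-field
    clients with a query() method; present() shows prioritized items."""
    while True:
        signals = sense()                                      # raw inputs: time, location, comms
        mission = infer_mission(signals)                       # step 1: expanded presence
        items = [i for f in fields for i in f.query(mission)]  # step 2: gather from fields
        present(prioritize(mission, items))                    # step 3: assess and prioritize
        time.sleep(interval_s)                                 # step 4: re-evaluate continuously
```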
You can’t be helpful as a companion (visible or otherwise) with no sense of what you’re supposed to help with. There are situations where the mission is explicit, as when a user asks for the location of some retail outlet or restaurant, or shops for a specific item. In other cases, the mission can be inferred from a combination of time of day, the user’s movement vector, and history. If driving to work is the regular action at 7:15 AM, and if the time matches and early movement conforms to history, you can infer the user is driving to work. It may also be possible to inspect the movement vector and infer an activity like “window shopping” by matching stops to shop locations.
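As a hedged illustration of that kind of inference, the sketch below matches the clock and the user’s first few stops against a stored routine. The routine format, tolerance value, and certainty numbers are all assumptions of mine for the example.

```python
from datetime import time

# Hypothetical routine records: (mission, usual start time, first usual stop).
ROUTINES = [
    ("driving_to_work", time(7, 15), "coffee_shop_on_elm"),
    ("school_pickup",   time(15, 0), "lincoln_elementary"),
]

def infer_mission(now, recent_stops, tolerance_minutes=20):
    """Infer a mission by matching the clock and early movement to history."""
    for mission, usual_start, first_stop in ROUTINES:
        minutes_off = abs((now.hour * 60 + now.minute) -
                          (usual_start.hour * 60 + usual_start.minute))
        time_match = minutes_off <= tolerance_minutes
        movement_match = first_stop in recent_stops
        if time_match and movement_match:
            return mission, 0.9   # time and movement agree: high certainty
        if time_match:
            return mission, 0.6   # time alone: a provisional inference
    return "unknown", 0.0

# Example: it's 7:20 and the user just stopped at the usual coffee shop.
print(infer_mission(time(7, 20), ["coffee_shop_on_elm"]))  # ('driving_to_work', 0.9)
```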
The mission or expanded presence is essential in what’s likely to be a process of successive approximation, based on information gathered. It’s 7:15 on a work day, so we infer that the human whose “ghost” our personal agent represents is on the way to work. There would likely be other things our user would do in this situation, such as checking business email or stopping at a regular shop for coffee and a pastry, and if these things are done, it reinforces the presumptive mission. The certainty inherent in the mission presumption is then critical in our next step.
Reading someone their calendar for the morning is a logical step if the person is on their way to the office, but intrusive if they’re not. For a given mission, we could maintain a “certainty score”, and each piece of contextual information we (as the ghost companion) might deliver in association with a possible mission would carry a threshold value. We’d deliver a piece of information only when the certainty score of its associated mission met or exceeded that threshold.
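A sketch of that gate might look like the following; the item list and the score values are invented for illustration, and in a real agent they’d come from the mission model and the information fields.

```python
# Deliver an item only when we're at least as certain of the mission as the
# item's threshold demands. Intrusive items carry higher thresholds.

CONTENT_THRESHOLDS = {
    "driving_to_work": {
        "traffic_on_route": 0.5,       # low cost if we're wrong
        "read_morning_calendar": 0.8,  # intrusive unless we're quite sure
    },
}

def items_to_deliver(mission, certainty):
    thresholds = CONTENT_THRESHOLDS.get(mission, {})
    return [item for item, threshold in thresholds.items()
            if certainty >= threshold]

print(items_to_deliver("driving_to_work", 0.6))  # ['traffic_on_route']
print(items_to_deliver("driving_to_work", 0.9))  # both items
```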
And obviously, if there were a behavioral deviation from the profile of the mission, we would presume that the mission was now less certain, reduce the score, and perhaps start looking for clues as to the real mission. One likely clue is a recent stimulus, in the form of a call or text, a web search, or even a stop to look at something. It’s likely that this process, which feeds the things that could change the focus of human attention back into our virtual/ghost agent, will be the technical key to the success of contextualization. It’s also perhaps the biggest reason why contextualization risks violating our First Law.
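One way to picture that feedback is below: deviations lower the certainty score, and a fresh stimulus (a call, a text, a search) triggers a re-inference. The decay amount, the floor value, and the class itself are illustrative assumptions, not a design.

```python
class MissionTracker:
    """Toy model of mission certainty that decays on deviation and is
    re-inferred when certainty drops too low or a new stimulus arrives."""

    def __init__(self, mission, certainty, reinfer, floor=0.4):
        self.mission = mission
        self.certainty = certainty
        self.reinfer = reinfer     # callback that proposes a new mission
        self.floor = floor         # below this, stop trusting the current mission

    def on_deviation(self, severity=0.2):
        """Behavior doesn't match the mission profile: lose some certainty."""
        self.certainty = max(0.0, self.certainty - severity)
        if self.certainty < self.floor:
            self.mission, self.certainty = self.reinfer()

    def on_stimulus(self, stimulus):
        """A call, text, or search may redirect the user's attention."""
        self.mission, self.certainty = self.reinfer(stimulus)

# Example with a stub re-inference callback.
tracker = MissionTracker("driving_to_work", 0.9,
                         reinfer=lambda stimulus=None: ("unknown", 0.3))
tracker.on_deviation()   # unexpected turn off the usual route
tracker.on_deviation()
tracker.on_deviation()   # third deviation drops below the floor, so we re-infer
print(tracker.mission)   # 'unknown'
```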
A ghost agent that lives entirely in a ghost world isn’t likely to stay on track with human behavior. People tend to move through a series of “missions”, each of which commands their attention for a period of time. They transition between missions because something happens to knock them out of their current framework: that outside stimulus. Because it’s critical that we know what the current mission is, it’s critical that we know when it changes, and that means drawing in more knowledge about things that have happened, like calls or texts or web searches.
Most online users today have come to terms (some more reluctantly than others) with having their search history used to frame the ads they see. My straw poll suggests that there’s growing acceptance of a provider having a minimal awareness of email, voice, and SMS communications, but resistance to having the provider actually “read” the content. However, if we’re going to recognize mission changes, we may have to do that, which is one reason why a trusted agent is critical.
There are two technical options for trusted agent hosting: in the user’s device or in the cloud. The security/privacy of these options doesn’t really differ much, because the key to both is having a framework that doesn’t let the outside look in. There are, in fact, five technical requirements for a trusted agent, wherever it runs.
The first requirement is that it represents its user anonymously. The user’s identity cannot be exposed by the agent when it makes requests on the user’s behalf. Since many of the stimuli that would signal a mission change arrive over channels of personal communication, the device has to be able to act as a local “information field” to provide that information, even if the trusted agent actually runs in the cloud.
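A cloud-hosted agent might satisfy this by querying public information fields under a pseudonym while routing anything drawn from personal communications through the device. The sketch below is purely illustrative; the class and method names are my inventions.

```python
import uuid

class TrustedAgent:
    """Sketch: the agent never sends the user's identity to an information
    field, and personal-communication signals only come from the device."""

    def __init__(self, device_field, public_fields):
        self.pseudonym = str(uuid.uuid4())   # stable, anonymous request identity
        self.device_field = device_field     # local field hosted on the device
        self.public_fields = public_fields   # traffic, retail, weather, etc.

    def query_public(self, topic):
        # Requests carry the pseudonym, never the user's real identity.
        return [f.query(topic, requester=self.pseudonym)
                for f in self.public_fields]

    def personal_stimuli(self):
        # Calls, texts, and searches are summarized on-device and pulled by
        # the agent; the raw content never has to leave the device.
        return self.device_field.recent_stimuli()
```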
The second requirement is that trusted agent communications must be fully secured. This has to be true between the agent and the information fields, and between any cloud component of the agent and the user’s device. Information can’t be subject to intercept, nor can it be possible to make inquiries of the agent from the outside or push information to it, except from the device.
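One hedged way to picture the “no outside push” side of this requirement is an acceptance check like the one below: the agent only accepts inbound messages over an authenticated channel bound to the user’s device. The pairing secret and message format are assumptions for the sake of the example.

```python
import hmac, hashlib

DEVICE_KEY = b"shared-secret-provisioned-at-pairing"   # placeholder secret

def verify(message: bytes, signature: str) -> bool:
    expected = hmac.new(DEVICE_KEY, message, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

def accept_inbound(message: bytes, signature: str, origin: str) -> bool:
    """Reject anything that isn't a signed message from the paired device."""
    if origin != "paired_device":
        return False            # no inquiries or pushes from the outside
    return verify(message, signature)
```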
Requirement number three is that the trusted agent must be the element that recognizes the stimuli that signal a mission change. Since you can’t “push” information to the agent except from the device (per the previous requirement), a rule that exists to keep outsiders from plying the agent with false stimuli, the agent has to watch for mission changes itself. It’s my view that when the agent does believe a mission change has occurred, it should raise a “notification” event to the user, subject to the normal device rules on whether the event results in a visible/audible notification.
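In sketch form, that might mean the agent proposes a mission-change event and defers to the device’s own notification rules on whether the user actually sees or hears it. The policy object here is hypothetical.

```python
from dataclasses import dataclass

@dataclass
class NotificationEvent:
    kind: str
    detail: str

def on_mission_change(old_mission, new_mission, device_policy):
    event = NotificationEvent(kind="mission_change",
                              detail=f"{old_mission} -> {new_mission}")
    # The agent proposes; the device's normal rules (do-not-disturb,
    # silent hours, etc.) decide whether this is visible or audible.
    if device_policy.allows(event.kind):
        device_policy.post(event)
    return event
```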
The fourth requirement is that the trusted agent must explicitly manage the information field relationships, extending trust to those relationships, providing users a list of the information fields they depend on, and allowing users to select their own information field providers, subject to a warning about trust level. I expect that trusted agents will be a competitive product, that “lock-in” will be attempted by many trusted agent providers who want to build their own private ecosystems, and that an open model will be critical for user success.
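A minimal sketch of that open model, assuming invented field names and trust tiers, might be a registry the user can inspect and extend, with a warning whenever an unverified provider is added.

```python
TRUSTED, UNVERIFIED = "trusted", "unverified"

class FieldRegistry:
    def __init__(self):
        self.fields = {}            # name -> (provider_url, trust_level)

    def register(self, name, provider_url, trust_level):
        if trust_level != TRUSTED:
            print(f"Warning: '{name}' from {provider_url} is {trust_level}; "
                  "the agent will treat its data with reduced confidence.")
        self.fields[name] = (provider_url, trust_level)

    def list_fields(self):
        """Give the user the full list of fields the agent relies on."""
        return [(name, url, trust) for name, (url, trust) in self.fields.items()]

registry = FieldRegistry()
registry.register("traffic", "https://example.com/traffic", TRUSTED)
registry.register("retail", "https://example.net/my-own-field", UNVERIFIED)
print(registry.list_fields())
```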
The final requirement is that elements of the trusted agent must be portable between the cloud, the user’s mobile device, and a designated set of systems under user control. Think of the agent as a series of processes that cooperate with each other but might be hosted in multiple places and move among the list of hosting candidates as the situation dictates.
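As a rough illustration of that portability, think of a placement function like the one below, which assigns each cooperating process to a hosting candidate as conditions change. The process names, hosts, and rules are all invented for the example.

```python
HOSTS = ["cloud", "mobile_device", "home_server"]   # user-designated candidates

def choose_host(process, conditions):
    """Pick a host for one agent process given current conditions."""
    if process == "stimulus_monitor":
        return "mobile_device"       # must sit where personal comms arrive
    if conditions.get("offline"):
        return "mobile_device"       # fall back when connectivity is poor
    if process == "mission_model" and conditions.get("low_battery"):
        return "cloud"               # push heavy work off the device
    return "cloud"

placement = {p: choose_host(p, {"low_battery": True})
             for p in ["stimulus_monitor", "mission_model", "field_manager"]}
print(placement)
```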
The trusted agent is the face of contextualization, the link with the user that either justifies everything or acts as a boat anchor. Is it also the logical place to apply artificial intelligence? We’ll see in the next piece.