Is Contextualization a Natural Application for AI?

If calling a personal agent a “ghost companion” is fair, then it’s fair to say that adding artificial intelligence to our artificial companion could be a good idea.  It’s also very likely that the concept of information fields could be enhanced through the application of AI.  How much we could expect, and how complicated these additions would be, will depend on just what we mean by AI.  It’s not a term immune to hype, after all.  And please be sure to read the FIRST, SECOND, and THIRD blogs in this series before you read this one!

A perfect personal agent, fed the data needed to share context with us, could be expected to provide the same sort of advice that a real companion could (or perhaps even better).  Obviously both the quality of information and the artificial intelligence available would limit real-world benefits to something less dramatic, so we’d need some specific model of AI application against which to test cost/benefit.  Getting to one could take a bit of experimentation.

There are two conventional visualizations of a contextual personal agent.  The first is a policy model, similar to that used today in presence-based applications and services.  The behavior of the agent is the sum of a series of policies that link conditions and reactions.  Think of this as a bunch of IF-THEN-ELSE conditionals.  The second is a state/event process, where the user is fitted into a series of well-defined states, representing missions and behaviors.  Conditions, in the form of events, trigger process reactions based on state.
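To make the contrast concrete, here is a minimal sketch in Python of the two approaches; the structures and names are purely illustrative, not a proposal for how an agent would actually be built.

```python
from dataclasses import dataclass
from typing import Callable

# --- Policy model: each policy pairs a condition with a reaction (IF-THEN-ELSE). ---
@dataclass
class Policy:
    condition: Callable[[dict], bool]   # test against current contextual conditions
    reaction: Callable[[dict], None]    # what the agent does when the condition holds

def run_policies(policies: list[Policy], context: dict) -> None:
    for p in policies:
        if p.condition(context):
            p.reaction(context)

# --- State/event model: reactions depend on the state (mission) the user is in. ---
class StateEventAgent:
    def __init__(self, initial_state: str):
        self.state = initial_state
        # (state, event) -> handler that performs the reaction and returns the next state
        self.transitions: dict[tuple[str, str], Callable[[dict], str]] = {}

    def on(self, state: str, event: str, handler: Callable[[dict], str]) -> None:
        self.transitions[(state, event)] = handler

    def handle(self, event: str, context: dict) -> None:
        handler = self.transitions.get((self.state, event))
        if handler:
            self.state = handler(context)
```

The practical difference is that the policy list evaluates every rule against current conditions, while the state/event table only considers the reactions relevant to the state the user is presumed to occupy.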

One interesting point we can take from this is that if we could define a series of clear missions and associate behavior patterns with those missions (which can be done through analysis of past behavior), we could do a pretty good job of creating a personal agent using either of these conventional technology strategies.  Thus, the benefits of AI would be greatest where we can’t define clear missions or associate behavior patterns with them.

As I said in prior blogs in this series, contextual behavior is best understood by presuming a user is on a mission, meaning that they’re doing something.  As they do that, they are sensitive to contextual information that relates to their mission and at least somewhat less sensitive to information that doesn’t relate.  Walking down a given street on the way to the office is different from shopping on that street, and both are different from a sightseeing or time-killing stroll.  Mission is the primary contextual key to personalization and relevance.

I cited three examples of mission in my last paragraph, but of course there could be more, and there could be “sub-missions” or refinements of a mission based on specific goals—I’m looking for a specific product or service, or a specific experience or landmark, or simply to check out a new area.  We could, in my personal view, define a limited but pretty effective set of missions.  The problem comes when that mission set doesn’t apply closely enough to provide contextual relevance.
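To give a sense of scale, that limited mission set might be nothing more elaborate than a small enumeration with optional refinements; the missions and sub-missions below are hypothetical examples, not a definitive list.

```python
from enum import Enum

class Mission(Enum):
    COMMUTE = "commute"      # walking to the office
    SHOPPING = "shopping"
    MEET = "meet-me"
    STROLL = "stroll"        # sightseeing or time-killing

# Sub-missions refine a mission with a specific goal.
SUB_MISSIONS = {
    Mission.SHOPPING: ["specific-product", "specific-service", "check-out-new-area"],
    Mission.STROLL:   ["specific-landmark", "specific-experience", "no-particular-goal"],
}
```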

If I set out for a stroll in New York (as I’ve done many times), my behavior might at first mimic behaviors associated with other missions.  I walk north on the Avenue of the Americas, which might be taking me to a familiar store or restaurant, or even to meet someone.  If it’s the latter, it’s possible my calendar or a recent text, email, or call could be a signal that this is a meet-me mission, but it’s also possible the meeting wasn’t triggered by a recent communication or even put on my calendar.  And, of course, it’s possible that’s not even my goal in this stroll.  Where AI comes in is its ability to analyze my behavior pattern independent of my route.  Am I walking purposefully or ambling?  Am I pausing to look into windows or ignoring them?  Is there any specific display that seems to hold my interest?  AI might be able to provide answers to these questions.

Mission, in both policy-based and state-event-based analysis, sets the presumptive state.  If it’s not reliable, then nothing that follows in the way of interpretation is likely to be any better.  AI could infer mission from a broader set of behavior patterns, and by doing so could create that first critical element in contextual analysis.

The critical ingredient here is the ability to infer mission from behavior, rather than using mission to interpret behavior.  AI here would still analyze historical behavior, but with the goal of establishing patterns that could be mapped to presumptive missions.  If I walk a certain way when I’m walking to something, then walking that way implies a destination-related mission.  If, as I approach a known destination, I always slow down or speed up, then my failure to do so would imply that a presumptive destination assignment is incorrect.
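A minimal sketch of that inversion might look like the following, assuming per-mission behavior profiles distilled from a user’s history; the feature names and numbers are invented for illustration, and a real implementation would normalize the features rather than compare raw values.

```python
import math

# Hypothetical per-mission profiles learned from past behavior:
# typical walking pace (m/s), pauses per km, and route directness (0..1).
MISSION_PROFILES = {
    "commute":  {"pace": 1.5, "pauses_per_km": 0.5, "directness": 0.95},
    "shopping": {"pace": 1.0, "pauses_per_km": 6.0, "directness": 0.60},
    "stroll":   {"pace": 0.9, "pauses_per_km": 3.0, "directness": 0.40},
}

def infer_mission(observed: dict) -> str:
    """Map an observed behavior pattern to the nearest presumptive mission."""
    def distance(profile: dict) -> float:
        # Naive Euclidean distance; real features would be normalized first.
        return math.sqrt(sum((observed[k] - profile[k]) ** 2 for k in profile))
    return min(MISSION_PROFILES, key=lambda m: distance(MISSION_PROFILES[m]))

# A purposeful, direct walk with few pauses maps to "commute"; if the user then
# fails to slow down near a known destination, the presumptive assignment can
# be revisited.
print(infer_mission({"pace": 1.4, "pauses_per_km": 1.0, "directness": 0.90}))
```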

In the behavioral-analysis model, the route someone takes is less important than the way they take it, the overall pattern of speed, starts and stops, and so forth.  These factors are complex because they’re “situational”, meaning they have to be related to conditions at the time.  Of course you have to stop at lights, and when you pause to look at something, it’s important to know whether it’s graffiti on a wall or the window of a shop.
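Relating a pause to conditions at the time is essentially a join between the behavior record and whatever an information field knows about that spot.  A hedged sketch, assuming a hypothetical lookup keyed by location:

```python
def classify_pause(pause: dict, info_field: dict) -> str:
    """Relate a pause in movement to conditions at that time and place.

    `pause` carries a location key and a duration; `info_field` is an assumed
    lookup of what surrounds that location (traffic signals, shop windows, etc.).
    """
    nearby = info_field.get(pause["location"], [])
    if "traffic_signal" in nearby and pause["duration_s"] < 45:
        return "waiting-at-light"     # situational, probably not mission-relevant
    if "shop_window" in nearby:
        return "window-browsing"      # potentially mission-relevant
    return "unexplained-pause"
```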

This latter point tells me that AI requires better information fields.  What’s in a shop window?  Obviously, the shop’s owner would know, but do they have an incentive to make this information available as an “information field?”  We might suppose that something could regularly run down the streets of a city, taking pictures of shop windows and using AI and image recognition to classify the items, but that raises the question of incentive—return on investment.

What could be done more easily?  Well, shops often sell a specific family of goods or services, and so some classification of window contents could be inferred by knowing the kind of establishment involved.  Sales are normally advertised, so it’s reasonable to assume that if sales were advertised online, those sales could become input to an information field.
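A sketch of what such a cheaply built information-field entry might look like; the category-to-contents mapping and the online ad feed are both assumptions for illustration.

```python
def build_shop_field_entry(shop: dict, online_ads: list[dict]) -> dict:
    """Build an information-field entry for a shop without photographing its window."""
    # Infer the family of goods from the kind of establishment.
    inferred_contents = {
        "shoe store": ["footwear"],
        "bookstore":  ["books"],
        "bakery":     ["baked goods"],
    }.get(shop["category"], ["general merchandise"])

    # Pull in any sales the shop has advertised online.
    sales = [ad for ad in online_ads if ad["shop_id"] == shop["id"]]

    return {
        "shop_id": shop["id"],
        "location": shop["location"],
        "window_contents": inferred_contents,   # inferred, not observed
        "active_sales": sales,
    }
```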

Information fields themselves could benefit from AI too.  A good example is pedestrian or vehicular traffic.  It’s possible to establish, from the analysis of movement of user devices, the pace of movement and the likely level of congestion encountered.  This would then become an information field element, something that might explain a sudden change of direction in movement.  It’s also possible to anticipate future conditions based on history, and while this could be done by the personal agent, there are times when it would make more sense to have AI linkage in the information fields themselves.
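For the pedestrian-traffic example, the field element could be as simple as comparing observed device pace against a free-flow walking pace; the 1.4 m/s figure is a common planning rule of thumb, and the thresholds here are arbitrary.

```python
from statistics import mean

def congestion_level(device_speeds_mps: list[float], free_flow_mps: float = 1.4) -> dict:
    """Estimate congestion on a block from anonymized device movement samples."""
    if not device_speeds_mps:
        return {"pace": None, "congestion": "unknown"}
    pace = mean(device_speeds_mps)
    ratio = pace / free_flow_mps
    level = "low" if ratio > 0.8 else "moderate" if ratio > 0.5 else "high"
    return {"pace": round(pace, 2), "congestion": level}
```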

A good example is traffic after a sporting event.  Conditions would impact a large number of people and would generally follow a similar pattern from game to game, depending on things like what time the game finishes, the weather, how close the game is (and thus whether people leave early), and how many attend.  All of that would be known, so a traffic information field could use this information to guide not only attendees but also those who had to drive through the area impacted by game traffic.
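A crude sketch of that kind of anticipation, using a made-up history of past games and a nearest-neighbor-style average; every value and weight here is hypothetical.

```python
# Past games: conditions at the final whistle -> minutes of heavy traffic afterward.
HISTORY = [
    ({"end_hour": 22, "weather": "clear", "close_game": True,  "attendance": 40000}, 55),
    ({"end_hour": 22, "weather": "rain",  "close_game": False, "attendance": 35000}, 40),
    ({"end_hour": 16, "weather": "clear", "close_game": False, "attendance": 30000}, 30),
]

def predict_post_game_congestion(game: dict) -> float:
    """Average the outcomes of the two most similar past games."""
    def similarity(past: dict) -> float:
        score = 0.0
        score += 1.0 if past["weather"] == game["weather"] else 0.0
        score += 1.0 if past["close_game"] == game["close_game"] else 0.0
        score -= abs(past["end_hour"] - game["end_hour"]) / 6.0
        score -= abs(past["attendance"] - game["attendance"]) / 20000.0
        return score
    ranked = sorted(HISTORY, key=lambda h: similarity(h[0]), reverse=True)
    top = ranked[:2]
    return sum(minutes for _, minutes in top) / len(top)

print(predict_post_game_congestion(
    {"end_hour": 21, "weather": "clear", "close_game": True, "attendance": 38000}
))
```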

AI differentiation in information fields might become a competitive factor, if information fields themselves became competitive—which they could.  In fact, both information fields and personal agents are most likely to employ AI to gain a following.  Think of it as the search engine of the future.

The net of all of this is that AI could certainly benefit contextualization, but in order for that to happen, we’ll almost surely need richer information fields in general, and probably greater insight into the direct behavior of users.  The more insight we have, the creepier the process could appear, and the more likely it would be to attract the notice of privacy advocates and regulators.  Our ghost companion could easily turn Orwellian on us, and the industry is already under scrutiny over privacy issues.  It will take more smarts than it’s shown so far to stay off the reef on contextualization.