This is the second piece of my series on contextualization, and it focuses on the implementation of the “information fields” concept that’s one of two key elements in my contextualization model. If you missed the first, it’s available HERE. The third blog in the series will cover the other key element, the trusted agent, and the final blog will cover the application of AI to this model.
“Information fields” is the term I’ve been using to describe contextual inputs available to services, applications, and users to add relevance to the interactions between services and users. The goal is to make a service or application look as much like a virtual “ghost” companion as possible. For a personal agent to be truly personal and truly capable of acting as an agent, it must share our context, and information fields are my basis for obtaining that context.
Why “information fields”? The answer is the First Law of Contextualization: protect personal privacy. The greatest risk in contextualization is having it reverse-engineered to lead back to the requestor. Contextual information has to be kept separate both from the actual contextualizing agent and from other contextual sources, or this First Law is at risk. If we visualize the personal agent process as a ghost companion moving through life alongside us, we can visualize information fields as something our ghost can see in the netherworld that we cannot.
A “contextual input” is something that contributes to contextualization. Some of that is directly related to the user and the user’s mobile device, such as the “movement vector” (location, direction, speed) or the mode of transportation. Other contextual input relates to the user’s surroundings, interactions, and even historical behavior. Geography is a contextual input, including the identification of structures, points of interest, and so forth. So are the weather, crowd/traffic levels, time and date, and even retail or service offerings being presented. Things that we could see or sense should, in a perfect contextualizing world, generate information fields so our ghost companion can also “see” them.
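To make the idea concrete, here’s a minimal sketch (in Python, with field names that are purely my own assumptions rather than any standard) of how one such contextual input, the movement vector, might be packaged for publication as an information field:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ContextualInput:
    """One contextual observation, as it might be published into an information field."""
    field_type: str        # e.g. "movement_vector", "weather", "retail_offer"
    latitude: float        # where the observation applies
    longitude: float
    observed_at: datetime  # time/date context
    payload: dict = field(default_factory=dict)  # type-specific detail

# A movement vector our ghost companion could "see":
movement = ContextualInput(
    field_type="movement_vector",
    latitude=40.7484, longitude=-73.9857,
    observed_at=datetime.now(timezone.utc),
    payload={"heading_deg": 270, "speed_kph": 4.5, "mode": "walking"},
)
```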
There are two obvious questions about information fields: the first is who provides them, and the second is how they’re accessed and used. To answer them, we’ll have to address all three of our Laws of Contextualization.
Always look to the past when trying to plan for the future. We are actually already “contextualizing” when we serve ads based on cookies that record things like past web searches. What happens in these cases is that a web server or web page obtains the cookies and uses them to decide things like which ads to serve. Obviously something like this could be made to work in broader contextualization applications, but it violates some or even all of our Three Laws.
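For comparison, the cookie-driven model works roughly like the sketch below. The cookie name and ad categories are hypothetical, but the essential point stands: the contextual history lives in the client and is handed over to whatever server asks for it.

```python
from http.cookies import SimpleCookie

def pick_ad(cookie_header: str) -> str:
    """Crude illustration of today's model: the client ships its own history
    (here, a 'recent_searches' cookie) and the server picks an ad from it."""
    cookies = SimpleCookie()
    cookies.load(cookie_header)
    searches = cookies["recent_searches"].value.split("|") if "recent_searches" in cookies else []
    if "hiking_boots" in searches:
        return "ad:outdoor-gear"
    return "ad:generic"

print(pick_ad("recent_searches=hiking_boots|tents"))  # -> ad:outdoor-gear
```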
The big problem with the current system is its violation of privacy, our First Law. The client system becomes a repository for contextual information, and applications/services obtain that information from it with relatively little control. We have to assume that truly advanced contextual services would present too much risk if this sort of implementation were adopted, so we need to look at other options.
There’s also a value-chain problem here, with both our Second and Third Laws. Our Second Law says that the value to the consumer of contextualized services has to cover the cost. The consumer in the current model doesn’t see any clear cost/benefit relationship, and in fact in nearly all cases doesn’t know explicitly what the benefit or cost is. We buy into an “Internet ad” ecosystem, which clearly benefits some providers of elements of that ecosystem, but which makes the cost and benefit to us invisible. Yes, a fair exchange may be there, but is the trade-off really fair?
The third law may be the biggest near-term problem. For information fields to work, there has to be a benefit to publishing them. Every stakeholder in contextualization needs to have a motivation to play, and in the current system the information, once stored in the client system, is available for exploitation. There are probably contextual resources that could be made available in this form (retail offers come to mind), but a lot of the “background” context needs some commercial stimulus to boost availability. Even the retail offer data and similar contextual information could be problematic if not properly managed, because retailers might use competitor information to diddle pricing.
There seem to be three ingredients associated with information field success. The first is a trusted information field catalog, the second explicit information field coding, and the third a mechanism to compensate information field providers for their information.
A trusted information field catalog lets contextual processes find what they’re looking for by going to a single point. Think of it as a kind of logical DNS, and it could in fact likely be implemented much the way DNS is. However, the catalog has to be strictly controlled to ensure that accessing an information field doesn’t leave a trail back to the user that pirates can follow, and that the information resources in the catalog come from trusted sources. Transactions with the catalog then have to be secured, with HTTPS at the minimum.
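Here’s a rough sketch of that catalog behavior; the class, entry fields, and endpoints are illustrative assumptions on my part, not a design. The point is the DNS-like pattern: a name (field type plus coverage area) goes in, endpoints for vetted sources come out, and nothing about the requesting user is retained.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CatalogEntry:
    field_type: str   # taxonomy code, e.g. "retail.electronics"
    area: str         # coarse coverage area, e.g. "Z-Street-1500"
    endpoint: str     # HTTPS URL the contextual process will query
    provider: str
    trusted: bool     # set by the catalog operator after vetting the source

class InformationFieldCatalog:
    """DNS-like lookup: name (type + area) in, endpoints out. Only trusted
    sources are returned, and no record of the requesting user is kept."""
    def __init__(self):
        self._entries: list[CatalogEntry] = []

    def register(self, entry: CatalogEntry) -> None:
        self._entries.append(entry)

    def resolve(self, field_type: str, area: str) -> list[str]:
        return [e.endpoint for e in self._entries
                if e.trusted and e.field_type == field_type and e.area == area]

catalog = InformationFieldCatalog()
catalog.register(CatalogEntry("retail.electronics", "Z-Street-1500",
                              "https://fields.example.com/offers", "ExampleMart", trusted=True))
print(catalog.resolve("retail.electronics", "Z-Street-1500"))
```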
The explicit coding of information fields mandates a taxonomy of information that lets a contextual process ask for what’s available based on specific characteristics. This could be integrated with the catalog, or maintained as a separate database (an RDBMS, for example). The goal is to allow a contextual process to ask for retail offerings in electronics in the 1500 block of Z Street, for example, and to receive a list of one or more information fields that can provide the information.
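As a sketch of the RDBMS option (the table, columns, and taxonomy codes here are illustrative only), the Z Street example might reduce to a query like this:

```python
import sqlite3

# Illustrative schema: each row codes one information field against the taxonomy.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE information_fields (
                  taxonomy_code TEXT,   -- e.g. 'retail.electronics'
                  street TEXT,
                  block INTEGER,
                  endpoint TEXT)""")
conn.execute("INSERT INTO information_fields VALUES "
             "('retail.electronics', 'Z Street', 1500, 'https://fields.example.com/offers')")

# "Retail offerings in electronics in the 1500 block of Z Street":
rows = conn.execute("""SELECT endpoint FROM information_fields
                       WHERE taxonomy_code LIKE 'retail.electronics%'
                         AND street = 'Z Street' AND block = 1500""").fetchall()
print(rows)  # -> [('https://fields.example.com/offers',)]
```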
Compensation for stakeholders is essential or there won’t be enough of them. My estimate is that there are already over a billion information sources that could be tapped for information field deployment, without any new sensor deployments required. Some of this data is in municipal hands but most lies in corporate databases. For some (retailers, for example) the benefit of providing the information as an information field lies in improved sales, and so no explicit compensation is needed. For others, a small but credible revenue stream would be enough to tip the scales, and for some a clear revenue source would be needed to justify the investment required to frame new information services. The key is to be able to settle fees without a chain of payments that ends up identifying our end customer and thus violating our First Law.
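Purely as a sketch of what that might look like, not a settled design: a settlement ledger that tallies billable accesses per provider under one-time tokens could let providers be paid per use while leaving no payment chain that identifies the end customer.

```python
import secrets
from collections import Counter

class SettlementLedger:
    """Tallies billable accesses per information-field provider. The requesting
    user appears only as a one-time token, so settlement totals can be paid out
    without a chain of records leading back to the end customer."""
    def __init__(self):
        self._accesses = Counter()

    def new_session_token(self) -> str:
        # Issued per interaction; never stored alongside any user identity.
        return secrets.token_hex(16)

    def record_access(self, provider: str, session_token: str) -> None:
        # The token shows an authorized access occurred; only the provider tally is kept.
        self._accesses[provider] += 1

    def settle(self, fee_per_access: float) -> dict:
        return {provider: count * fee_per_access
                for provider, count in self._accesses.items()}

ledger = SettlementLedger()
token = ledger.new_session_token()
ledger.record_access("ExampleMart", token)
print(ledger.settle(0.001))  # -> {'ExampleMart': 0.001}
```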
This is probably the most difficult of our information-field challenges. Traditional ad sponsorship, which people like because it doesn’t require them to pay anything, has already surrendered enough privacy to prompt regulatory inquiries and action in both the EU and the US. True, thorough contextualization will raise the risks significantly, which to me means that another avenue to “settlement” among the parties will be required. That, of course, means that something has to do the settling, and that something has to be “trusted” in a regulatory sense. The trusted agent is thus our next topic.