Technology Support for Contextual Services

If, as I speculated yesterday, the optimum new-service strategy for network operators would be a set of facilitating services that exploited contextualization and personalization of mobile behavior, what would the technology requirements look like? Operators need features specific enough to raise the value of their services to the OTTs who would frame them in retail form, but general enough to be useful across as wide a range of retail services as possible. Specific features, broadly useful…sounds like a balancing act. It is, so let’s develop a plan starting at the top.

The top is that what we’re doing is serving mobile users with contextual information. Mobile users move around, but at any point in time, they’re in one place. Most of the time, they’ll be in that one place for a while, and most of the time that one place will be within a single metro area, the place a user lives or works. That single metro area has two desirable characteristics. First, it contains a lot of users, so things done there can benefit from economies of scale. Second, current access network technology for both wireless and wireline terminates there, so there’s efficient connectivity to users whether they’re sitting in their living room or office, or wandering about on the streets or even in parks.

The reason why metro user density is critical is that it’s hard to imagine how contextualized, personalized services could be hosted without a significant economy of scale. A metro resource pool is feasible because there are a lot of applications that could be hosted there. If you go further out, to what many see as “the edge” at the cell site, you have little chance of achieving economies of scale and thus little chance of reaching reasonable cost levels.

The close-to-access point is equally significant because it’s possible to capture user identity and personalize services at the off-ramp to the access network. For mobile services, you have access to the mobility management elements of LTE or 5G (the MME and the AMF, respectively), so you can know who everyone is and get a rough location even without accessing GPS data from a phone or capturing identity by having someone pass a Bluetooth sensor.
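To make that concrete, here’s a minimal sketch of what such a facilitating feature might look like as software. Everything here is an assumption for illustration, not a real operator API; the point is that the mobility layer already knows a device’s tracking area, and the subscriber is deliberately represented by an opaque token so true identity stays inside the operator.

```go
// Hypothetical sketch only: no name here comes from a real operator API.
package main

import "fmt"

// CoarseLocation is the kind of low-resolution fix the mobility layer
// already knows: a tracking area, not a GPS coordinate.
type CoarseLocation struct {
	TrackingAreaCode string // TAC from the LTE/5G registration
	MetroArea        string // the operator's mapping of TAC to a metro zone
}

// MobilityContext is a hypothetical facilitating API an operator might
// expose, keyed by an opaque subscriber token rather than an IMSI.
type MobilityContext interface {
	Locate(subscriberToken string) (CoarseLocation, error)
}

// stubMobility stands in for the real mobility-management integration.
type stubMobility struct{}

func (stubMobility) Locate(token string) (CoarseLocation, error) {
	return CoarseLocation{TrackingAreaCode: "TAC-4721", MetroArea: "metro-east"}, nil
}

func main() {
	var mc MobilityContext = stubMobility{}
	loc, _ := mc.Locate("opaque-subscriber-token")
	fmt.Printf("subscriber is in %s (%s)\n", loc.MetroArea, loc.TrackingAreaCode)
}
```

The design choice that matters is the opaque token: the OTT gets context, and the operator keeps the identity mapping, which is exactly the value the operator is selling.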

There’s another point about metro that cuts both ways, though. If a cellular mobile provider is also a wireline incumbent in a given geography, they surely would have metro facilities in which to plant resource pools for hosting. If they are not, then they have a problem because they likely do not have the real estate, and would have to consider the cost of acquiring space, creating proper environmental conditions, installing a data center, and staffing it with qualified resources. All is not lost, though. There are three ways operators could deal with the in-my-area-versus-not problem.

Way number one, for operators with a territory and resources, would be to offer contextual services only within their resource footprint. This may sound like a losing proposition, but the fact is that contextual services are most valuable in the area where people regularly move, meaning where they live and work. If an operator focused contextual services on their customers within their resource footprint, chances are those customers would stay in their zone most of the time. This strategy wouldn’t be suitable for operators who didn’t have wireline service offerings and so didn’t have a resource footprint to exploit.

The second possible strategy would be federation. If we assume that contextual/personalized services are the right approach, then competition would likely force most operators to offer them. If the APIs were standardized, operators could federate their contextualized and personalized facilitating features, creating a uniform platform for OTTs. Alternatively, the OTTs could create their services across operator boundaries using whatever APIs a given operator supported. However, this would require that most operators make these facilitating APIs available in some form.
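As a sketch of what federation could mean in software terms, assuming a standardized interface (nothing here is a real or proposed standard), a thin routing layer could present many operators’ facilitating features as one uniform API. The token convention and operator names are made up for illustration.

```go
// Hypothetical federation sketch: route each request to the subscriber's
// home operator behind a single, uniform interface.
package main

import (
	"errors"
	"fmt"
	"strings"
)

// ContextAPI is the assumed standardized facilitating interface that
// every federated operator would implement.
type ContextAPI interface {
	Interests(subscriberToken string) ([]string, error)
}

// Federation routes calls by the operator prefix on the subscriber token
// (an invented convention, e.g. "opA:..."), so an OTT sees one API.
type Federation struct {
	operators map[string]ContextAPI
}

func (f *Federation) Interests(token string) ([]string, error) {
	op, _, ok := strings.Cut(token, ":")
	if !ok {
		return nil, errors.New("token missing operator prefix")
	}
	api, found := f.operators[op]
	if !found {
		return nil, fmt.Errorf("operator %q not federated", op)
	}
	return api.Interests(token)
}

// stubOperator stands in for one operator's real implementation.
type stubOperator struct{ interests []string }

func (s stubOperator) Interests(string) ([]string, error) { return s.interests, nil }

func main() {
	fed := &Federation{operators: map[string]ContextAPI{
		"opA": stubOperator{interests: []string{"transit", "dining"}},
	}}
	fmt.Println(fed.Interests("opA:opaque-subscriber-token"))
}
```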

The third strategy would be for operators to acquire cloud hosting in the areas where they didn’t have a resource footprint. The challenge here would be that the cloud provider service would likely be more costly in the long run than an operator-owned metro resource pool. However, “the long run” might be well down the road, and operators would be able to size their resource capacity to the pace of activity in the lower-density areas. The key to making this effective would be the creation of a hosting-platform software set, plus the contextualization/personalization applications themselves, in a form that could run on a VM, on bare metal, or in containers.
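That portability requirement is mostly a packaging discipline. A minimal sketch of the idea, with illustrative variable names: keep everything host-specific in configuration, so the same service binary can be baked into a VM image, installed on bare metal, or wrapped in a container without code changes.

```go
// Portability sketch: nothing about the host type is assumed in code.
package main

import (
	"log"
	"net/http"
	"os"
)

func main() {
	// Everything host-specific comes from the environment; the default
	// lets the binary run unmodified on any of the three targets.
	addr := os.Getenv("CONTEXT_LISTEN_ADDR")
	if addr == "" {
		addr = ":8080"
	}
	// The same health probe serves a Kubernetes liveness check or a
	// conventional VM monitoring agent.
	http.HandleFunc("/healthz", func(w http.ResponseWriter, r *http.Request) {
		w.WriteHeader(http.StatusOK)
	})
	log.Printf("context service listening on %s", addr)
	log.Fatal(http.ListenAndServe(addr, nil))
}
```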

Both the first and second strategies involve the decision to create “carrier cloud” data centers in at least the major areas of the operators’ resource footprint. The third does not, or rather it would support migration to those carrier cloud data centers when enough demand for the facilitating services justified the move. That means the two software pieces, the hosting platform and the applications themselves, are the critical ingredients; if operators have those, they can ease into the service and enhance their own resources as revenues permit.

My baseline presumption would be that the right platform is containers and Kubernetes, which fit the cloud model well and align with the Google-backed Nephio project’s initiative to make virtual functions work with Kubernetes. I’m inclined to think that service-mesh market leader Istio would also be smart, since it is well-suited to message or short-transaction interactions and (since Google was behind it too) works well with Kubernetes, and probably with Nephio as well.
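For a sense of what that stack looks like in practice, here’s a hedged sketch using Kubernetes’ Go client to deploy a contextual-service workload into a namespace labeled for Istio sidecar injection, which is how the mesh attaches itself to pods. The image, names, and namespace are illustrative assumptions, not anything from Nephio.

```go
// Sketch: deploy a hypothetical contextual service behind the Istio mesh.
package main

import (
	"context"
	"log"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	// A namespace labeled istio-injection=enabled gets the Envoy sidecar
	// added to every pod, which is how the service mesh attaches itself.
	ns := &corev1.Namespace{ObjectMeta: metav1.ObjectMeta{
		Name:   "contextual-services",
		Labels: map[string]string{"istio-injection": "enabled"},
	}}
	if _, err := cs.CoreV1().Namespaces().Create(context.TODO(), ns, metav1.CreateOptions{}); err != nil {
		log.Printf("namespace: %v", err) // may already exist
	}

	replicas := int32(2)
	dep := &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "context-api", Namespace: "contextual-services"},
		Spec: appsv1.DeploymentSpec{
			Replicas: &replicas,
			Selector: &metav1.LabelSelector{MatchLabels: map[string]string{"app": "context-api"}},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"app": "context-api"}},
				Spec: corev1.PodSpec{Containers: []corev1.Container{{
					Name:  "context-api",
					Image: "registry.example.com/context-api:0.1", // hypothetical image
				}}},
			},
		},
	}
	if _, err := cs.AppsV1().Deployments("contextual-services").Create(context.TODO(), dep, metav1.CreateOptions{}); err != nil {
		log.Fatal(err)
	}
	log.Println("contextual service deployed behind the mesh")
}
```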

As a layer above, I think we need some anonymizing elements, and this might be a reasonable place to think about blockchain. A blockchain is “authentic”, meaning its contents can be tied explicitly to something, and we could give a user a blockchain-based identity that not only represents them but also (if we assumed an Ethereum chain) holds the policies, interests, goals, and other data that would be required. The proposal for a “Layer 3” element of Ethereum (which has nothing to do with Layer 3 of the OSI model) that would handle process control and optimization could be a help here.
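Leaving the on-chain and Ethereum specifics aside, here’s the anonymizing idea reduced to primitives, as an illustrative sketch only: the user is known to services by a digest of a public key rather than by name, and their policies and interests travel as a record signed by the matching private key, so a service can verify the record without deanonymizing anyone.

```go
// Pseudonymous, verifiable context record: an illustration, not a spec.
package main

import (
	"crypto/ed25519"
	"crypto/sha256"
	"encoding/hex"
	"encoding/json"
	"fmt"
)

// ContextRecord is the hypothetical per-user payload: what services are
// allowed to know, not who the user is.
type ContextRecord struct {
	Policies  []string `json:"policies"`
	Interests []string `json:"interests"`
}

func main() {
	pub, priv, _ := ed25519.GenerateKey(nil)

	// The pseudonymous identity is a digest of the public key.
	id := sha256.Sum256(pub)
	fmt.Println("pseudonymous id:", hex.EncodeToString(id[:8]))

	rec := ContextRecord{
		Policies:  []string{"no-location-resale"},
		Interests: []string{"transit", "dining"},
	}
	payload, _ := json.Marshal(rec)
	sig := ed25519.Sign(priv, payload)

	// Any facilitating service can check the record's authenticity
	// against the pseudonymous key without learning the user's name.
	fmt.Println("record verifies:", ed25519.Verify(pub, payload, sig))
}
```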

The higher-level elements, the ones specific to contextualization, are obviously harder to address. However, I think we can assume that we can’t make these software elements pieces of a blockchain that would require coding and authentication on every exchange; the cost of those steps would likely be prohibitive given the number of exchanges that contextualization would require. I’ll talk about some mechanisms for this final step in the next blog in this series.