What New Service Features Might Operators Sell, or Wholesale to OTTs?

I blogged last week about AT&T’s vision of a future where new service features built on connectivity would serve as a platform for OTTs to create true higher-layer services. That obviously raises the question of what those features might be, and I’ve been trying to get some insight from operator contacts and a bit of modeling. This is what I’ve found.

First, there seem to be three categories of “new features” that sit between today’s network services and the higher-layer OTT services that operators have largely determined they can’t or won’t offer. The first category is connectivity extensions, designed to frame connection services in a slightly different light. Category two is connection-integrated features, meaning features that relate to a retail service built on connectivity but are not connectivity in themselves. The final category is features that exploit service opportunities not yet realized, and for which operators could hope to play a foundational role.

Connectivity extension features are designed to enhance basic IP in some useful way, without creating a different interface that would mandate changes to existing hardware or software. These can be divided into features relating to how connectivity is offered, and features that change connection properties in some way. The first group would include things like SD-WAN, and the second things like managed services, metered services, and so forth.

The advantage of this category of features is the ease with which they could be introduced. SD-WAN technology, and the superset “virtual-network” category, are already available from multiple vendors and could be introduced with little delay; some operators already have. Managed services have been around for decades, and many operators have already dipped their toes into them. As an on-ramp to a new connection service model, this category of features is great. The downside is that it doesn’t have a decisive impact on the ability of others to create true higher-layer services on top.

The second category of features plays on the fact that a connectivity service has more elements than simple packet-pushing. In an IP service, you need address assignment (DHCP), name resolution (DNS), and content caching and address redirect (CDN), for example. Operators routinely provide the first two of this group, but they’re less involved in the third, and they’ve not considered what might lie beyond the obvious-and-current members of the category.
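To make the “address redirect” notion concrete, here’s a rough sketch of what a request router in this category might look like. The cache nodes, address blocks, and the resolve_cache function are hypothetical illustrations, not any operator’s actual design.

```python
# Minimal sketch of the "address redirect" element of a CDN-style service:
# a request router that steers a client toward the nearest cache node.
# The node list, prefixes, and addresses below are hypothetical examples.

import ipaddress

# Hypothetical cache nodes keyed by metro area
CACHE_NODES = {
    "metro-east": "198.51.100.10",
    "metro-west": "203.0.113.20",
}

# Hypothetical mapping of client address blocks to metro areas
CLIENT_PREFIXES = {
    ipaddress.ip_network("192.0.2.0/25"): "metro-east",
    ipaddress.ip_network("192.0.2.128/25"): "metro-west",
}

def resolve_cache(client_ip: str) -> str:
    """Return the cache address a client should be redirected to."""
    addr = ipaddress.ip_address(client_ip)
    for prefix, metro in CLIENT_PREFIXES.items():
        if addr in prefix:
            return CACHE_NODES[metro]
    # Fall back to an arbitrary node if the client isn't in a known prefix
    return next(iter(CACHE_NODES.values()))

if __name__ == "__main__":
    print(resolve_cache("192.0.2.200"))  # -> 203.0.113.20 (metro-west)
```

The point of the sketch is simply that this element sits naturally inside the access network, where operators already know which prefixes serve which metro areas.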

The only potentially money-making service currently recognized in this category is CDN. There are a number of profitable CDN providers, Akamai being the leader, but the network operators have been slow to consider the space. A part of the reason is the “competitive overbuild” challenge; multiple access providers exist in virtually every metro area and each would have to deploy CDN technology to cover all the users. An OTT CDN provider could simply peer with each. CDN networks also require that the metro cache points be fed from some distribution network, but right now the biggest operator issue would be competition from CDN incumbents.

Dodging incumbents means dodging traditional CDN, or at least supersetting it. Two examples, likely related, come to mind: “wholesale streaming” and ad insertion.

CDN technology can easily adapt to streaming video. Time-shifted material, in fact, is naturally a CDN application, and the distribution of live video via the CDN framework is also possible if the cache points can be fed in at least near-real-time. More and more networks and other players are either getting into streaming or are interested in it, and there are already some accommodations to streaming underway among the CDN giants. An operator CDN might be an ideal mechanism to support a wholesale model of streaming.
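To suggest what a wholesale streaming feed might involve, here’s a minimal sketch of a loop that pushes live segments to operator cache points in near-real-time. The Segment structure, the push_segment placeholder, and the four-second pacing are assumptions for illustration; a real deployment would use an HLS/DASH packager and a proper origin-to-edge transfer mechanism.

```python
# Rough sketch of a live feed that keeps metro cache points current:
# each newly produced segment is pushed to every cache, paced at roughly
# real time. All names and values here are hypothetical placeholders.

import time
from dataclasses import dataclass

SEGMENT_SECONDS = 4  # a typical live-streaming segment duration

@dataclass
class Segment:
    channel: str
    sequence: int
    data: bytes

def push_segment(cache_point: str, segment: Segment) -> None:
    # Placeholder for the actual origin-to-edge transfer (HTTP PUT, multicast, etc.)
    print(f"pushed {segment.channel}#{segment.sequence} to {cache_point}")

def feed_caches(channel: str, cache_points: list[str], segments: int) -> None:
    """Encode-and-push loop: each new segment goes to every metro cache."""
    for seq in range(segments):
        segment = Segment(channel, seq, data=b"\x00" * 188)  # stand-in payload
        for cache in cache_points:
            push_segment(cache, segment)
        time.sleep(SEGMENT_SECONDS)  # pace the feed at roughly real time

if __name__ == "__main__":
    feed_caches("news-live", ["metro-east-cache", "metro-west-cache"], segments=2)
```

The wholesale angle is that the operator runs this feed once per metro area and sells access to the cache points, rather than every streaming brand building its own delivery.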

Ad insertion is another (related) point. Programming has natural points where ads make sense and pose a minimal disruption to viewing. Could a network operator offer a means of identifying those places (in material where there isn’t a natural slot already staked out)? Could they then provide ad insertion? It seems possible.
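As a thought experiment, here’s a hedged sketch of how candidate insertion points might be found in material with no pre-marked slots, by looking for quiet scene changes. The FrameInfo fields, the thresholds, and the find_ad_slots logic are hypothetical stand-ins for real audio/video analysis or SCTE-35-style markers.

```python
# Sketch: identify moments that are both quiet and coincide with a scene
# change, spaced at least a few minutes apart, as low-disruption ad slots.
# The metadata and thresholds are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class FrameInfo:
    timestamp: float      # seconds into the program
    audio_level: float    # 0.0 (silence) .. 1.0 (loud)
    scene_change: bool    # did the shot cut here?

def find_ad_slots(frames: list[FrameInfo],
                  max_audio: float = 0.05,
                  min_gap: float = 300.0) -> list[float]:
    """Return timestamps where an ad would cause minimal disruption."""
    slots: list[float] = []
    last = -min_gap  # allow a slot early in the scan
    for f in frames:
        quiet_cut = f.scene_change and f.audio_level <= max_audio
        if quiet_cut and (f.timestamp - last) >= min_gap:
            slots.append(f.timestamp)
            last = f.timestamp
    return slots

if __name__ == "__main__":
    sample = [
        FrameInfo(120.0, 0.02, True),   # quiet scene cut -> candidate
        FrameInfo(180.0, 0.01, True),   # too close to the previous slot
        FrameInfo(600.0, 0.60, True),   # scene cut, but mid-dialogue
        FrameInfo(900.0, 0.03, True),   # quiet scene cut -> candidate
    ]
    print(find_ad_slots(sample))  # -> [120.0, 900.0]
```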

Security opportunities may exist in this space too. Virtually all network operators offer email services, but no major effort to promote email security and trust has been launched, and even spam detection features are rare in operator email services. Robo-call blocking is now mandated in some areas, but it’s been a potential security feature for over a decade, and operators have failed to capitalize on it. However, this is another area where operators would now face competition they might have dodged by moving faster.

The last of our three categories, exploiting yet-unrealized service opportunities, is probably the most interesting and yet the most challenging for operators. Interesting for two reasons: there’s little or no competition in these areas, and the potential revenue upside is likely the largest of any of our categories. Challenging in that addressing any opportunity in this third category is likely to create culture shock, and also to require a considerable “first-cost” capital investment.

The biggest opportunity in this category by far is what I’ve been calling contextual services. We could visualize computing’s evolution over the last 70 years as a continual initiative to move computing closer to us, with smartphones being the most recent step. The value of that is its support for our activities, rather than an insistence that our activities shape themselves around computing. “Point-of-activity empowerment”, in other words. To offer that, we need information resources to be presented in the context of our lives and work.

“Context” is a complex relationship between where we are and indicators of our current interests and activities. Every aspect of our behavior sheds contextual clues, but many aren’t readily visible to software, and thus aren’t available as factors in tuning information presentation. Location is an example; we know where we are (most of the time, anyway), and we also know where many others are. We know our current mission, and we can often infer the mission of others by watching their behavior. If all that knowledge were available to software, it’s easy to imagine the value of the information the software could deliver. The challenge is making it known.
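Just to suggest what “making it known” might look like, here’s a minimal sketch of a context record and a trivial presentation-tuning rule. The Context fields, mission labels, and the choices in select_presentation are illustrative assumptions, not a defined operator API.

```python
# Minimal sketch: a "context" record built from behavioral clues, and a
# simple rule that tunes what information gets surfaced. Fields and rules
# are hypothetical illustrations only.

from dataclasses import dataclass

@dataclass
class Context:
    latitude: float
    longitude: float
    moving: bool          # inferred from location changes over time
    mission: str          # e.g. "commuting", "shopping", "working"

def select_presentation(ctx: Context) -> str:
    """Pick what information to surface, given the user's current context."""
    if ctx.mission == "commuting" and ctx.moving:
        return "traffic and transit alerts, audio-first"
    if ctx.mission == "shopping":
        return "nearby offers and store hours"
    return "general notifications, standard layout"

if __name__ == "__main__":
    print(select_presentation(Context(40.71, -74.00, moving=True, mission="commuting")))
```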

When telephony was first imagined, it was clear to most that the cost of creating a national phone network would be enormous, so great that nobody would likely do it except as a protected monopoly. The cost of creating the framework needed for contextual services would be very large too, but likely within the reach of multiple players in today’s market. Of them, the network operators have the advantage of a historically large tolerance for low rates of return and high initial deployment costs (“first costs”). Since contextual services would have to track us in the real world and conform to our changes in focus, they’d likely have to rely on edge computing, which means edge real estate to house them. Operators have that too.

In a sense, contextual services return us to something like the original notion of the “Internet of Things”. That original model aimed to foster a widespread deployment of “things” on the Internet, as free to be leveraged to create new applications and services as the Internet itself. It’s pretty easy to see how a collection of sensors could be used to track traffic conditions, or even detect collisions. Something like this could form the basis of a whole new layer of navigation systems, paralleling the traditional GPS. The question would be who would spend the money to deploy and secure everything. Operators could do that, and they could also provide resources to host the “virtual agent” that would collect other contextual information and format it for wholesaling.

The motivation for operators to make the investment would be that wholesale revenue. Not revenue from renting access to sensors, but revenue created by encapsulating all the data, mulching it into useful form, and exposing it to partners via APIs. You don’t get traffic information from intersection sensors; you get counts of passing vehicles. There’s a higher-level process needed to correlate information from multiple sensors with street maps, and that process is what provides “traffic” data. Rather than having everyone try to derive the information from the sensors themselves (impossible, since all the attempted accesses would create what is in effect a denial-of-service attack on the sensors), the operators could expose services and earn revenues.
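Here’s a rough sketch of that higher-level process. The sensor IDs, the sensor-to-segment map, and the congestion thresholds in derive_traffic are hypothetical, but the shape of the exercise (aggregate once, correlate with a map, expose the derived result) is the point.

```python
# Sketch: raw intersection sensors report only vehicle counts; an
# operator-hosted service correlates those counts with a street map and
# exposes derived "traffic" data through an API. All IDs, segments, and
# thresholds are hypothetical.

from collections import defaultdict

# Hypothetical mapping of sensor IDs to the street segments they observe
SENSOR_TO_SEGMENT = {
    "sensor-101": "Main St (1st-2nd)",
    "sensor-102": "Main St (1st-2nd)",
    "sensor-201": "Oak Ave (3rd-4th)",
}

def derive_traffic(counts: dict[str, int]) -> dict[str, str]:
    """Turn per-sensor vehicle counts into per-segment congestion levels."""
    per_segment: dict[str, int] = defaultdict(int)
    for sensor_id, count in counts.items():
        segment = SENSOR_TO_SEGMENT.get(sensor_id)
        if segment:
            per_segment[segment] += count

    def level(count: int) -> str:
        if count >= 100:
            return "heavy"
        if count >= 40:
            return "moderate"
        return "light"

    return {segment: level(total) for segment, total in per_segment.items()}

if __name__ == "__main__":
    # The operator reads each sensor once, then serves the derived result to
    # many partners, so the sensors themselves see only a single reader.
    readings = {"sensor-101": 60, "sensor-102": 55, "sensor-201": 12}
    print(derive_traffic(readings))
    # -> {'Main St (1st-2nd)': 'heavy', 'Oak Ave (3rd-4th)': 'light'}
```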

This is what I think the ultimate in AT&T’s model of a service that promotes other services would look like. My model says that it could generate several hundred billion dollars in worldwide revenues for operators, and a like amount (or more) in higher-layer partner revenues. It’s a big lift for operators to think this way, though, and I don’t think we’ll have a reliable indicator of whether any will make the move until we see how they deal with the more comfortable categories of new features. We may have some insights into that later this year.