What’s the Real Role of Virtual Network Infrastructure in New Services?

Does a true virtual network infrastructure promote new services?  To make the question easier, can it promote new services better than traditional infrastructure?  You hear that claim all the time, but these days the frequency with which a statement is made bears no relation to its truth.  Rather than try to synthesize the truth by collecting all possible lies, let’s look at some details and draw some (hopefully) technically meaningful conclusions.

The opening piece of the puzzle this question raises is also the most complicated—what exactly is a new service?  Operators tend to answer this by example—it’s things like improved or elastic QoS, wideband voice, and all the other stuff that’s really nothing more than a feature extension of current services.  All this sort of stuff has been tried, and none of it has changed the downward trajectory of profit per bit.

Analysts and writers have tended to answer the question in a totally different way.  “New services” are services that 1) operators don’t currently offer, and 2) that have some credibility in terms of revenue potential.  These would normally be part of that hazy category called “over-the-top” (OTT) services, because they are not extensions to connection services.  This is, in a sense at least, the right answer, but we have to qualify that “rightness”.

We have a growing industry of companies, the “Internet” companies, that have offered OTT services for decades.  Web hosting, email, video and web conferencing, and even cloud computing are examples of this traditional level of OTT.  Operators could get into these spaces, but only by competing against entrenched incumbents.  Given the competitive expertise of the average operator, that’s a non-starter.

What remains of the OTT space after we wash out the Internet establishment are services that for some reason haven’t been considered attractive.  In the past, I’ve identified these services as falling into three categories.  The first is personalization for advertising and video-related services, the second is “contextualization” for location- or activity-centric services, and the last is IoT.  All these services have a few attributes that make them unattractive to Internet startups, but perhaps valuable to operators, and it’s these attributes that would have to link somehow with virtual-infrastructure implementations of traditional service features if the virtual network of the future is really a path to new services.

The first of these attributes is that information obtained or obtainable from legacy services forms the basis for the service.  The movement of a person, a group of people, or an anonymous crowd is one example.  Past and current calling/messaging behavior is a second.  Location, motion, and the pattern of places visited are a third.  All of this is information we can know from today’s networks or their connected devices.

The second attribute is that this critical information is subject to significant regulation for security and privacy.  What you’ve searched for or purchased online is, for many, a potential privacy violation waiting to happen.  Imagine extending it to who you’re talking with, where you’ve stopped in your stroll, and where you are at this moment.  This sort of thing would require explicit permission, and most Internet companies do everything short of fraud (well, most of the time, short) to avoid posing the question “Will you accept sharing this?”

The third attribute is that the service is likely useful only to the extent that it’s available pervasively.  A good example is contextual services that rely on location and behavior.  If they work within one block or one town, they don’t provide enough utility to an average user to encourage them to pay.

Which brings us to the final attribute: there is credible reason to believe users would pay directly for the service.  Ad sponsorship has one under-appreciated limit, which is that global ad spend has been largely static for years, in the $600 billion range.  Everything can’t be ad-sponsored; there’s not enough new money on the table, so new stuff would cannibalize the advertising revenue of older things.

All this leads us, at last, back to the opening question, and I think many of you can already see the handwriting on the wall.  There are three pathways for virtual network infrastructure to facilitate the development of new services.  First, the new infrastructure could do a better job of obtaining and publishing the information needed for new services.  Second, the new infrastructure could create a better framework for delivering the services, perhaps by tighter coupling with the infrastructure in a cloud-native way.  Finally, the new infrastructure might be built on cloud “web service” features, a kind of function-PaaS, that could also be used in constructing the new services.

As regulated entities, operators understand privacy and compliance.  They actually hold all the information that anyone would need; it’s just a matter of getting it exposed.  Further, if we had strong APIs to provide a linkage between a higher-level service and the RAN, registration, mobility, and transport usage of cells and customers, that data would be useful even without personalization.
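To make the “useful even without personalization” point concrete, here is a minimal sketch of what such an API might return.  All the names here (`RegistrationEvent`, `cell_load_summary`, the cell IDs) are hypothetical illustrations, not any real operator API: the idea is simply that registration events, which the operator already holds, can be aggregated per cell so that subscriber identity never leaves the operator’s domain.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class RegistrationEvent:
    subscriber_id: str   # known to the operator, never exposed upstream
    cell_id: str

def cell_load_summary(events):
    """Aggregate registrations per cell; only counts cross the API boundary."""
    return dict(Counter(e.cell_id for e in events))

events = [
    RegistrationEvent("sub-1", "cell-A"),
    RegistrationEvent("sub-2", "cell-A"),
    RegistrationEvent("sub-3", "cell-B"),
]
print(cell_load_summary(events))  # {'cell-A': 2, 'cell-B': 1}
```

A higher-level service consuming this sees cell load and movement patterns, which is exactly the kind of data that’s valuable for contextual services, without ever seeing who generated it.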

Operators also bill for services now, so having to deliver services for bucks would be no paradigm shift for them.  They have to make all manner of regulatory disclosures with respect to information, and they’d have a conduit to the user to obtain permission for data use.  The beauty of having the operators take this data and convert it into something that could then spawn personalization or contextualization is that the raw data wouldn’t have to be available through the operators’ services at all.  Third-party apps couldn’t compromise what they don’t have.

How does virtual network infrastructure contribute on those four attributes?  “Virtual” network infrastructure means at the minimum that network features are cloud-hosted, and if we want to maximize benefits, it means that the implementation is cloud-native.  As I’ve noted in many blogs, this doesn’t mean that all the elements of a service are running on commercial servers.  I think it’s likely that data-plane features would still be hosted on white boxes specialized, via silicon, for the forwarding mission.  It’s going to come down to the control plane.

Most of what a network “knows” it knows via control-plane exchanges.  It’s possible to access control-plane data even from boxes, via a specialized interface.  In a virtual network implementation, the access would be via API and presumably be supported in the same way that control-plane messaging was supported.  I think most developers would agree that the latter would at least enable a cleaner and more efficient interface, and it would (as I’ve noted before) also enable this control-plane-hosting framework to become a sort of DMZ between the network and what’s over top of, or supplementing in a feature sense, the network.
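The DMZ notion above can be sketched in a few lines.  This is an assumption-laden illustration, not a real interface: `ControlPlaneDMZ`, `StubControlPlane`, and the metric names are all invented for the example.  The point is the shape of the thing, a mediation layer that exposes only a whitelisted set of sanitized control-plane queries to whatever sits over top of the network.

```python
class ControlPlaneDMZ:
    """Hypothetical mediation layer between the control plane and OTT services."""
    ALLOWED = {"cell_load", "handover_rate"}   # the only metrics exposed

    def __init__(self, control_plane):
        self._cp = control_plane

    def query(self, metric, cell_id):
        if metric not in self.ALLOWED:
            raise PermissionError(f"metric '{metric}' is not exposed")
        # Raw subscriber-level data never crosses this boundary.
        return self._cp.read(metric, cell_id)

class StubControlPlane:
    """Stand-in for the real control plane, for illustration only."""
    def read(self, metric, cell_id):
        return {"cell_load": 0.7, "handover_rate": 12}[metric]

dmz = ControlPlaneDMZ(StubControlPlane())
print(dmz.query("cell_load", "cell-17"))  # 0.7
```

A request for anything outside the whitelist (subscriber location, say) is refused at the boundary, which is what makes the layer a DMZ rather than just an API gateway.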

Let’s look at those four points with this control-plane-unity concept in mind.  First, if the control plane is indeed the place from which most network information would be derived, then certainly having the mechanism to tightly couple to the control plane would maximize information flow.  We can say, I think, that this first point is a vote in favor of virtual-network support for new services.

The second of our four points is the management of critical information.  In our control-plane-unity model, the service provider could introduce microservices that would consolidate and anonymize the information, so that if the information is exposed either to a higher-layer business unit or to another company (wholesale service to an OTT), the information has lost its personalized link, or the degree of personalization can be controlled by a party who is already regulated.  That means our second point is also addressed.
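One simple way such a consolidating microservice could anonymize is threshold suppression, in the spirit of k-anonymity: publish a group count only if enough people are in the group.  This is a sketch under assumed names (`anonymized_counts`, the sample visit records), not a prescribed design.

```python
from collections import Counter

def anonymized_counts(records, key, k=3):
    """Consolidate records into group counts, suppressing groups smaller than k."""
    counts = Counter(key(r) for r in records)
    return {group: n for group, n in counts.items() if n >= k}

visits = [("u1", "stadium"), ("u2", "stadium"), ("u3", "stadium"), ("u4", "clinic")]
print(anonymized_counts(visits, key=lambda r: r[1]))  # {'stadium': 3}
```

The lone clinic visit, a sensitive singleton that could identify someone, is suppressed before the data ever reaches a higher-layer business unit or wholesale partner, while the aggregate crowd signal survives.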

Point four (I know you think I’ve missed one, but bear with me!) is the question of willingness to pay.  This one is a bit more complicated, because of course users want everything free.  The reason why free-ness is difficult for these new services is that personalization to the extent of the individual is what makes focused advertising valuable.  It is possible to anonymize people in information services, of course, but unless there’s some great global repository of alias-versus-real mappings, every source of information would necessarily pick its own anonymizing strategy, and no broad knowledge of behavior could be provided.  Some work is obviously needed here.
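The fragmentation problem is easy to demonstrate.  In this sketch (hypothetical function and keys throughout), each source pseudonymizes with its own keyed hash, a common approach; the same user then gets a different alias at every source, so no one downstream can join the behavior records without that global mapping repository.

```python
import hashlib
import hmac

def alias(user_id, source_key):
    """Per-source pseudonym via keyed hashing (HMAC-SHA256, truncated)."""
    return hmac.new(source_key, user_id.encode(), hashlib.sha256).hexdigest()[:12]

a = alias("user-42", b"search-service-key")
b = alias("user-42", b"location-service-key")
assert a != b   # same person, unlinkable aliases across the two sources
```

Each alias is stable within its own source (the hash is deterministic for a given key), so per-source analysis still works; it’s only the cross-source view of behavior that’s lost.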

In the meantime, of course, there’s always the chance that people would pay.  We pay for streaming video today (in most cases), so there’s at least credible reason to believe that a service could be offered for a fee if the service’s perceived value was high enough.  Operators couldn’t make this value guarantee unless they either offered the retail service themselves, or at least devised a retail service model that they could build lower-layer elements into.  More work is needed here too.

Point number three is the hardest to address.  It’s difficult to build a service that has a very narrow geographic scope, particularly if that service is aimed in large part at mobile users.  No new network technology is going to get fork-lifted into place, after the old has been fork-lifted into the trash.  A gradual introduction of virtual-network technology undermines virtual-network-specialized service offerings, because the footprint stays too localized to be useful.

The best solution here is to focus more on 5G, not only on 5G infrastructure itself but on the metro networking (in particular) that 5G would likely impact.  If an entire city is virtual-network-ized, then the credibility of new services driven by virtual-network technology is higher, because a large population of users is within the service area for a long time.

The ideal approach is to play to the foundation of virtualization, which is abstraction.  Some of the control-plane information that could be made available to higher-layer applications/services via APIs could also be extracted from existing networks, either from the devices themselves or from management systems, appliances, or applications (like the IMS/EPC implementations, which could be either software or device-based).  If an abstraction of service information APIs can be mapped to both old (with likely some loss of precision) and new (with greater likely scope and better ability to expand the scope to new information types), then we could build new services that would work everywhere, but work better where virtualization of infrastructure had proceeded significantly.
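That mapping is essentially the adapter pattern: one abstract service-information interface, with a legacy back-end (coarse data scraped from management systems) and a virtual back-end (finer data from control-plane APIs) plugged in underneath.  Everything below is invented for illustration, including the class names, areas, and precision figures.

```python
from abc import ABC, abstractmethod

class ServiceInfoSource(ABC):
    """One abstract service-information API, mapped onto old and new infrastructure."""
    @abstractmethod
    def users_in_area(self, area: str) -> int: ...
    @abstractmethod
    def precision_meters(self) -> int: ...

class LegacyAdapter(ServiceInfoSource):
    """Coarse data polled from management systems on existing networks."""
    def users_in_area(self, area): return {"downtown": 1200}.get(area, 0)
    def precision_meters(self): return 1000   # cell-sector granularity

class VirtualAdapter(ServiceInfoSource):
    """Finer data drawn from cloud-native control-plane APIs."""
    def users_in_area(self, area): return {"downtown": 1187}.get(area, 0)
    def precision_meters(self): return 50     # per-small-cell granularity

def best_available(sources):
    """The service works everywhere, but better where virtualization has advanced."""
    return min(sources, key=lambda s: s.precision_meters())
```

A service built against `ServiceInfoSource` runs over either back-end; where the virtual adapter is available it simply gets sharper answers, which is exactly the everywhere-but-better-here property the paragraph describes.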

The conclusion to my opening question isn’t as satisfying as I’d like it to be, frankly.  New virtual-network architecture implementations could offer a better platform for new services, but there are barriers to getting those architectures in place and realizing the benefits fully.  The biggest problem, though, may be that operators haven’t been comfortable with the kind of new services we’re talking about.  Thus, the irony is that the biggest question we might be facing is whether, without a strong new-services commitment by operators, we can hope to ever fully realize virtual-network infrastructure.