Whether we accept the concept of the semantic web, or Web3, or the metaverse, or even fall back on lower-level things like cable’s Distributed Access Architecture (DAA), the signs are pointing toward an expansion of what we believe makes up networks in general, and the Internet in particular. That’s already been going on behind the scenes, driven largely by the dramatic shift in traffic toward video, but it may now explode. If that happens, it may also stress a lot of the technology architectures and initiatives we’ve been dabbling with.
“Networks”, strictly speaking, are about connecting things, but the network that dominates the world today, the Internet, has been more than that from the start. Yes, the very early Internet included connectivity features, in no small part because we didn’t have a general model of connectivity, but it also had what we’d call “higher-layer services”. Things like terminal access (Telnet), email, and file exchange came first, then the World Wide Web, and eventually calling, messaging, and a host of other stuff. Today we know the Internet more as a collection of services than as a network.
What’s particularly interesting about this point is that the “services” of the Internet are, first, on the Internet rather than in it, and second, hosted rather than embedded in devices like routers. We could argue that we don’t need the virtual-network-function goal of replacing network devices with hosted software components in order to validate the hosting of features at all; we already do that hosting, and most users see those features as “the Internet”.
One of the questions this point raises is the longer-term relationship between “networks” and “network services”, both in a business sense and technically. That’s a reprise of a decades-old “smart versus dumb networks” debate, but it’s really not about smart networks as much as service-integrated networks. The network itself is still connectivity-driven, and the services are even more likely to sit on the network, meaning they are users of the network just as the end users are. What’s not clear is whether some of those services should be provided with, and integrated with, the network itself.
CDN services, content delivery networks, are “technically integrated” with the Internet. Content delivery is the primary mission of the Internet, and without CDNs it’s very possible that it couldn’t be done economically while still maintaining QoE. However, the great majority of CDN services are offered by third parties, so there is no business-level connection to the ISPs who combine to provide connectivity. Other “services” are really just destinations, like Facebook or TikTok, and this is what ISPs like the telcos mean when they say that they’re “disintermediated”. Services are what people consume, and others have inserted themselves as the providers of the visible services, standing between users and the telcos/ISPs.
This situation is at the bottom of the debate on what telcos should do to gain revenue. Do they add “services” that relate to their connection facilitation incumbency, or do they add “services” of the kind that end users are actually, directly, consuming? There are people on both sides of that question, and some in the middle.
What would that middle ground look like? Look for a moment at VoIP. If we expect VoIP to offer the service of universal voice connectivity, which of course is what plain old telephone service (POTS) offers, then we have to be able to interwork with POTS and among VoIP implementations. If a future service needs “universality”, then it’s very possible that it would also need some foundation elements that were both standardized and made a part of “the network”. This is what AT&T has proposed in its discussions about creating facilitating elements that would help advance higher-level services. The notion of creating facilitating services that are both technically and business-integrated with the network raises a number of questions.
One question that crosses between the technical and business domains is how these services are defined. If an operator like AT&T creates a set of these services, and competitors like Verizon follow suit, would the two be compatible? If they are not, then those who create higher-level services built on the facilitation features would have to adapt to different features and interfaces across facilitation providers. Since it’s hard to see a higher-level service that wouldn’t be offered across a wide geography, that could pose a major threat to the utility and growth of the higher-level services.
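To make that adaptation burden concrete, here’s a minimal sketch of what a higher-level service builder would face if two facilitation providers exposed the same capability, say a network-aware location lookup, through incompatible interfaces. The provider names, client calls, and field names below are invented for illustration; they are not real operator APIs.

```python
# Hypothetical sketch: a higher-level service wraps each facilitation
# provider's incompatible interface behind one abstraction of its own.
# All provider classes and method names here are invented for illustration.

from abc import ABC, abstractmethod


class LocationFacilitation(ABC):
    """The abstraction the higher-level service actually codes against."""

    @abstractmethod
    def locate(self, subscriber_id: str) -> dict:
        ...


class OperatorAAdapter(LocationFacilitation):
    """Wraps a hypothetical operator whose API returns coordinates directly."""

    def __init__(self, client):
        self.client = client  # assumed SDK/HTTP client for "operator A"

    def locate(self, subscriber_id: str) -> dict:
        raw = self.client.get_location(subscriber_id)        # invented call
        return {"lat": raw["latitude"], "lon": raw["longitude"]}


class OperatorBAdapter(LocationFacilitation):
    """Wraps a hypothetical operator whose API returns a serving-cell ID."""

    def __init__(self, client, cell_db):
        self.client = client
        self.cell_db = cell_db  # assumed mapping of cell IDs to coordinates

    def locate(self, subscriber_id: str) -> dict:
        cell = self.client.query_serving_cell(subscriber_id)  # invented call
        return self.cell_db[cell]


def pick_adapter(home_operator: str, adapters: dict) -> LocationFacilitation:
    """Select the adapter for whichever facilitation provider serves the user."""
    return adapters[home_operator]
```

The point of the sketch is the overhead: every incompatible facilitation provider means another adapter the service builder has to write, test, and maintain, which is exactly the drag on higher-level services described above.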
But who then standardizes? Telco standards bodies have a long history of glacial progress and ignorance of modern software principles; look at the 3GPP and 5G. Industry forums like O-RAN, often launched to address shortcomings in formal standards, may move more quickly, but there is always the risk that several of them will emerge, and also the risk that, since the same players tend to be involved as in the formal standards bodies, they’d fall prey to the same issues.
The IETF may be a useful compromise here. While it’s not an open-source group, it has a long “running code” tradition of expecting reference implementations for proposed changes and additions, and the fact is that IP is the transport framework for pretty much everything these days. The potential breadth of IETF influence, though, combines with attempts by other bodies like the 3GPP to create collisions. For example, we all know that the 3GPP, in 5G Core, standardizes network slicing and thus implicitly defines core network traffic segregation by service type. The IETF has an initiative with the same goal, and IMHO it’s a better place to deal with the issue, since facilities like this should be generalized across all services, including the Internet. We will likely see more of these collisions develop, since “standards” and “forums” are increasingly behaving like competing vendors.
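To make “traffic segregation by service type” a bit more concrete, here’s a deliberately simplified sketch of what a per-service-type slice request might carry. It does not use the actual 3GPP or IETF data models; the field names and the submit function are assumptions for illustration only.

```python
# Simplified, hypothetical slice request showing segregation by service type.
# Field names are illustrative; they are not taken from 3GPP or IETF models.

from dataclasses import dataclass


@dataclass
class SliceRequest:
    name: str
    service_type: str         # e.g. "video", "voice", "iot-telemetry"
    max_latency_ms: float     # connectivity objective for the slice
    min_bandwidth_mbps: float
    endpoints: tuple          # attachment points the slice must connect


def submit_slice(request: SliceRequest) -> None:
    # A real controller would map this intent onto core/transport resources;
    # here we only show what a per-service-type request might carry.
    print(f"Requesting slice '{request.name}' for {request.service_type}: "
          f"<= {request.max_latency_ms} ms, >= {request.min_bandwidth_mbps} Mbps "
          f"between {', '.join(request.endpoints)}")


if __name__ == "__main__":
    submit_slice(SliceRequest(
        name="broadcast-video",
        service_type="video",
        max_latency_ms=50.0,
        min_bandwidth_mbps=25.0,
        endpoints=("headend-1", "hub-east"),
    ))
```

Whether the descriptor comes from the 3GPP or the IETF, the higher-level service only cares that requests like this mean the same thing everywhere, which is why the collision between the two bodies matters.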
Regardless of how the issue of standardization for facilitating services plays out, there’s another technical point to consider, one that came up almost two decades ago in another forum. When you build a service based on a component from some other player, how do you use it when you’re not given control over it, and not given visibility into someone else’s infrastructure?
In theory, this problem could arise even with connection services like VPNs, if the service were created using pieces of service from multiple providers, through something providers have sometimes called “federation”. My work with ExperiaSphere addressed this with a proxy, a model element that represented a service built from a non-owned resource. The proxy was used by the “service operator” to build the overall service, but it presented only the data and control features that the “owning operator” was willing to expose. In effect, it was, in my model, the “top” of a resource domain model, though it could resolve within the owning operator to any combination of service and resource elements.
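As a rough sketch of the proxy idea (the terminology is from my ExperiaSphere work, but the class and method names below are invented for this illustration, not taken from that project), a proxy can be thought of as a model element that forwards only the data and control operations the owning operator chooses to expose, and hides how the service resolves inside that operator’s domain.

```python
# Hypothetical sketch of a "proxy" model element in a federated service model.
# Class and method names are illustrative, not drawn from any real project.


class OwnedServiceElement:
    """What the owning operator actually runs; never visible to the partner."""

    def __init__(self):
        self._internal_topology = {"nodes": ["agg-1", "core-7"], "vendor": "x"}
        self._status = "active"

    def status(self):
        return self._status

    def rebuild(self):
        # Internal remediation only the owning operator performs.
        self._status = "active"


class ServiceProxy:
    """The 'top' of the owning operator's resource domain, as seen by the
    service operator composing the retail service. It exposes only what the
    owning operator is willing to share."""

    EXPOSED_DATA = {"status"}          # no topology, no vendor details
    EXPOSED_CONTROLS = {"restart"}     # one narrow, abstracted control verb

    def __init__(self, owned: OwnedServiceElement):
        self._owned = owned

    def get(self, item: str):
        if item not in self.EXPOSED_DATA:
            raise PermissionError(f"'{item}' is not exposed across the federation")
        return getattr(self._owned, item)()

    def control(self, verb: str):
        if verb not in self.EXPOSED_CONTROLS:
            raise PermissionError(f"'{verb}' is not exposed across the federation")
        # The proxy maps the abstract verb onto whatever the owner does inside.
        self._owned.rebuild()


# Usage by the service operator: visibility stops at the proxy boundary.
proxy = ServiceProxy(OwnedServiceElement())
print(proxy.get("status"))    # allowed
proxy.control("restart")      # allowed, resolved internally by the owner
# proxy.get("topology")       # would raise PermissionError
```

The point of the sketch is the boundary: the service operator composes with the proxy as if it were any other model element, while the owning operator decides what crosses it.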
I think that proxy elements are essential in supporting facilitating services, but I’m not seeing much recognition of that among operators, nor much interest in figuring out how one would be built and used. One reason might be that operators like AT&T are seeing facilitating services as specialized for third-party use, and presume that interfaces and APIs would be designed to be shared with others. At this point, it’s hard to say whether that would stand up to actual broad-scale implementation, particularly if the role the facilitating service played had to be “composed” into multiple services, from multiple sources.
IETF network slices may be the best example of what could be considered a “facilitating service” defined by a standards body and thus, presumably, available across multiple providers. There is no question that IETF network slices are facilitating, explicitly and implicitly, and also no question that they’re credible, because they fall within the scope of an accepted standards body that already defines cross-provider technology standards. I believe that IETF network slices also fall into the “lower-level” category of facilitating services, and that’s the reason there’s so much riding on how they’d be federated and composed, across providers, into the services users actually buy.
My personal frustration with this point is that we’re now talking about initiatives that magnify the need for standardized federation of service elements, when the topic was raised almost two decades ago and has been largely ignored in the interim. It would be easier to make progress on network services if we revisited, or at least explored, past issues to be sure that they’re either resolved or genuinely irrelevant. Otherwise we may waste a lot of effort redoing what’s largely been done before.