I’ve talked in past blogs about new network models and about operators’ attempts to control costs, and I’ve noted how little progress has been made in accepting anything useful. OK, what do the network operators, the telcos in particular, do now? They’ve struggled for decades to reverse the negative slide in profit per bit, yet they’ve refused to (or been unable to) transform their business model. They’re partnering with the players who are really the biggest threat to them, and waiting for innovations to save them, delivered by vendors who profit from the status quo. This doesn’t sound like a “Dress for Success” story, does it?
If we want to answer the “What now?” question, we have to start with where we (or rather they) are. It’s certainly true that profit per bit has been eroding for operators for decades now, but it’s also true that most of them remain cash flow engines, and that they’ve never been exemplars of stock price appreciation. The early predictions of when their profit per bit would fall below the level needed to sustain infrastructure investment were staved off by cost-cutting, largely by reducing human costs in customer care. The result was another sort of digital divide.
Operators like Verizon and most of the EU-and-Asia telcos have a high demand density, which is the rough measure of revenue opportunity per square mile of their wireline footprint (I’ve calculated these numbers for years, based on public data). Operators like Telstra in Australia and AT&T in the US have a lower demand density. The high-density players could almost certainly continue on their current path for five to ten years. The lower-density ones have already started to take more significant action. In Australia, a lot of wireline access was pulled into a pseudo-government element, NBN. AT&T has been selling off assets and undertaking more radical open-model networking strategies to lower costs.
Wireless, meaning cellular telephony and broadband, has generally paid off better than wireline, but in the last five years it’s gotten far more competitive and far more reliant on things like smartphone offers to attract new customers or reduce churn. While operators and the media ballyhooed 5G as a new revenue source, the facts say that it won’t raise cellular revenue significantly, even in the longer term.
OK, that’s the level-set. Let’s look now at the options that appear to be off the table.
OTT providers have created a burgeoning industry, so obviously services above traditional connectivity aren’t impacted by the profit pressures we see on connectivity. Operators could have moved into that space through the deployment of cloud assets into their central and tandem offices, but primarily into “metro” spaces. Edge computing could have been launched, justified by 5G function hosting and video caching (CDN). The problem is that the operators almost universally waited too long, and were unwilling to accept the large capex burden associated with the initial resource deployments for “carrier cloud” (called “first cost” in telco terminology). Instead, the majority have been partnering or preparing to partner with public cloud providers, which surrenders the margins for these services to competitors.
Public cloud providers are building their own edge for hosting, but so is Akamai, the leading CDN player. Akamai has offered limited-scope edge services for some time, but it just announced it was acquiring Linode, an early provider of virtual-server hosting that has (like others in the space) morphed into a more generalized cloud service provider. The hosting framework of Linode, added to the almost-universal presence of Akamai, creates an interesting new contender for the edge/OTT space.
There also seems to be a trend toward focusing on wireline broadband rather than linear TV as the driver for to-the-home-or-office capacity. Part of that is due to the increased role of smartphones as viewing appliances, part to the increased cost of providing linear-capable infrastructure. Since operators have been totally inept at evolving streaming services of their own, the trend seems to be for operators to resell another player’s streaming offering or spin out streaming as a separate company. That not only eliminates what was once the primary driver of wireline deployments (consumer linear TV), it means that caching of video and delivery of ads are both lost as opportunities.
The notion of “premium Internet” would seem a way of regaining some revenue on the consumer broadband side, but that’s almost surely a dead issue too. The “net neutrality” position says that you can’t create paid fast lanes, and even though regulators have bounced back and forth on the way consumer broadband is regulated, there would be enormous public pushback (driven by the media, which in turn would be driven by VCs and OTT players) on any attempt to introduce it during a period when it wasn’t prohibited.
And with that, we can leave the “woulda-coulda-shoulda” piece of the story, and move on to what might be left.
Operators have constantly recommitted themselves to being connection providers, despite proof that profit per bit in that area will be very difficult to sustain. That’s not going to change, so what it means is that operators have to figure out a way to make connection services more valuable to buyers and more profitable to them.
The market is already demonstrating one method, particularly for wireline consumer broadband. The majority of the cost of consumer broadband is the “pass cost”, meaning the cost of making broadband infrastructure available for subscription in a given area. The majority of that cost is the media, and media cost is largely independent of bandwidth/capacity. We’re seeing top-tier broadband speeds push past the gigabit level already, because both fiber and CATV cable offer the potential for multi-gig service with minimal additional cost. The reality that the actual user experience at two or four gig will not likely be any better than at 200 Mbps doesn’t mean that people can’t be induced to pay for the premium.
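The pass-cost argument above can be reduced to simple arithmetic. A minimal sketch follows; all dollar figures and the take rate are hypothetical assumptions chosen to illustrate the point, not data from this post:

```python
# Illustrative pass-cost economics for a wireline broadband buildout.
# All numbers here are assumptions for the sake of the arithmetic.

def cost_per_subscriber(pass_cost, connect_cost, take_rate):
    """Deployment cost allocated to each paying subscriber.

    pass_cost: cost per home passed (media/trenching; capacity-independent)
    connect_cost: per-subscriber drop and terminal cost
    take_rate: fraction of passed homes that actually subscribe
    """
    return pass_cost / take_rate + connect_cost

# Moving from gigabit to multi-gig changes the terminal electronics,
# not the media, so the dominant pass cost stays the same.
base_tier = cost_per_subscriber(pass_cost=800, connect_cost=300, take_rate=0.4)
multi_gig = cost_per_subscriber(pass_cost=800, connect_cost=350, take_rate=0.4)

print(f"cost/sub at 1G:      ${base_tier:,.0f}")
print(f"cost/sub at multi-G: ${multi_gig:,.0f}")
```

The small delta between the two tiers is the whole story: a premium-priced multi-gig tier costs the operator almost nothing extra to offer, which is why it’s attractive even if the user experience barely changes.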
Another possible premium item is latency. We already have an application where latency can be a killer—gaming. Cloud-based games and massively multiplayer games (MMPG) really don’t work well, or at all, if there’s significant latency in the connection. The metaverse will surely be a class of application that’s universally dependent on lower-latency services. The thing about latency is that simple access latency isn’t the whole story, and that means that this is another opportunity the telcos could be throwing away.
“Service latency” is essentially the same thing as transaction turnaround time. You send something, it passes through the network to the processing point, gets processed, and a result is returned. In any system that’s synchronized with a real-world component, the real world sets the service latency tolerance. Operators can reduce the latency of the access network, which they propose to do for wireless with 5G. To do the same for service latency is another matter.
There are two factors beyond access latency that have to be controlled to get service latency under control. One is process latency, the time spent handling the input and generating the output. The other is transit latency, which is the time required to move information between the access head-end point and the process point. “Carrier cloud” could have allowed operators to control both of these things by moving hosting close to the access head-end point, but as I indicated above, the operators have largely ceded that to the cloud providers.
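The decomposition above can be sketched in a few lines. This is a toy model, and the millisecond values are assumptions picked to illustrate the point, not measurements:

```python
# Sketch of the service-latency decomposition: access + transit latency
# (traversed in both directions) plus process latency, checked against
# the tolerance the real-world task imposes. Values are illustrative.

def service_latency_ms(access_ms, transit_ms, process_ms):
    """Round-trip service latency: request in, result back."""
    return 2 * (access_ms + transit_ms) + process_ms

def meets_tolerance(access_ms, transit_ms, process_ms, tolerance_ms):
    return service_latency_ms(access_ms, transit_ms, process_ms) <= tolerance_ms

# 5G can shrink access latency, but a distant process point still dominates:
print(meets_tolerance(access_ms=5, transit_ms=40, process_ms=20,
                      tolerance_ms=50))   # False: 110 ms total
# Hosting near the access head-end attacks transit latency instead:
print(meets_tolerance(access_ms=5, transit_ms=3, process_ms=20,
                      tolerance_ms=50))   # True: 36 ms total
```

The point the model makes is the one in the text: shaving access latency alone can’t meet a tight service-latency budget if the processing point is many transit hops away.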
“Largely” ceded, because operators still have an option to play. If they were to formalize a metro-network concept, they could create low-latency pathways from access head-end to metro, and then create a fiber mesh of the metro areas. This would generate a low-latency network that would eliminate the current multi-hop core-and-interconnect process, though operators would probably want to consider a new peering arrangement with others to make the metro-mesh fully effective.
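A rough way to see why the metro mesh helps is to compare hop counts and distance on the two paths. The per-hop delay and distances below are assumptions for illustration only:

```python
# Rough comparison of a multi-hop core-and-interconnect path with a
# direct lightpath over a metro fiber mesh. Numbers are illustrative.

def path_latency_ms(hops, hop_delay_ms, distance_km):
    """Forwarding delay per hop plus fiber propagation delay."""
    propagation_ms = distance_km / 200.0   # light in fiber: ~200 km per ms
    return hops * hop_delay_ms + propagation_ms

# Metro A -> core -> interconnect -> core -> Metro B: many transit hops.
core_path = path_latency_ms(hops=8, hop_delay_ms=1.0, distance_km=1200)
# Direct metro-to-metro path over the mesh: one logical hop.
mesh_path = path_latency_ms(hops=1, hop_delay_ms=1.0, distance_km=1200)

print(f"core-and-interconnect: {core_path:.1f} ms")
print(f"metro mesh:            {mesh_path:.1f} ms")
```

Propagation delay is fixed by geography, so the mesh’s win comes entirely from removing intermediate forwarding hops, which is exactly what meshing the metro areas accomplishes.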
The problem with this option is that operators might well be ceding it to others, and not just the cloud providers. Meta is clearly going all-in on the metaverse, and it can’t really be a global alternate reality without a way to create low global service latency. If the operators don’t do the job, I think it’s very likely that the combination of social-network and cloud-provider competitors will do it for them. And once that happens, operators have more at risk than consumer gaming or the metaverse.
Business services were once a giant part of operator revenues, and they’ve fallen as enterprises shifted from buying trunks and routers to build networks, to buying VPN services. As consumer broadband wireline access performance improves, SD-WAN technology can displace current MPLS VPNs for more and more sites, but even that’s not the big problem for operators. That big problem is the cloud.
We’ve missed the big transformational impact of cloud computing, as I said in my Monday blog. The cloud is this enormous, agile GUI, the manager of information and content presentation. Enterprises have used the cloud not to “move things to” but to front-end things, to host the presentation portion of applications that run in the data center and may not even be owned in source-code form by enterprises, and thus cannot be modified by them. The cloud model says that all your interactions are with the cloud-hosted piece of something, and only when you do something transactional do you dive into the data center (briefly). Cloud portals support the majority of customer and partner interactions with enterprises. Even before WFH, they were taking a bigger role in supporting employee interactions, and now that mission has exploded too.
Where is the VPN in this picture? Everything is a relationship between a human and the cloud, then within the cloud among components, and only at the end (and minimally) with the data center. The classic model of MPLS VPNs has no real place in this, but what does have a place is the SD-WAN, or rather the virtual-network superset of SD-WAN that I’ve touted for the last three years.
The cloud front-end concept means that there is no fixed hosting point for what users interact with. The architecture of cloud applications means that the internal workflows have no easily determined structure (think service mesh technology). If this isn’t a recipe for agile networking, nothing is, and physical networks aren’t agile, but virtual ones are. As cloud hosting and component structures vary, so does network topology, and the only constants are the place the user acts from (which is more dispersed than before) and the place where the data center connects. Virtual networking is the natural solution to the world of the cloud.
SD-WAN, in at least some of its implementations, is a form of virtual networking. Given the steady improvement of consumer broadband price/performance, it’s inevitable that businesses think more about shifting away from their expensive VPNs to less expensive “business broadband” based on consumer technology. If operators don’t accommodate them (and they’ve been slower to do that than I’d expected), then MSPs will step in. Self-cannibalization of VPNs is better than having someone do it for you, and in any event there are SD-WAN solutions (Juniper/128 Technology comes to mind) that offer a lot of additional features, each of which could be a premium item for operators. Finally, SD-WAN is an on-ramp to selling managed services, which is a better revenue kicker yet.
So is service latency control, because the complexity of cloud workflows is limited by the accumulated latency and its impact on QoE. There are also many new applications, including IoT and business-metaverse applications, that will depend on low latency. Virtual networking and service latency control combine to create the foundation for the business applications of the future, and there is still time for operators to not only get on the bandwagon, but to lead the band.
There is time, but is there will? As I said, there are two classes of wireline operators—those with enough demand density and those without. The former group aren’t at immediate risk, and past history suggests that they won’t act for that reason. The second group is at immediate risk, and past history says they’ll respond tactically rather than strategically, until they reach a point where tactics fail, by which time it will likely be too late to drive the sort of change needed. Where, my operator friends, are you?