Broadband change is in the wind, literally as well as figuratively. In the figurative sense, it’s clear that telcos and cablecos alike believe that they have no option but to make consumer broadband profitable in itself. For some, such as Verizon, that means literally taking broadband to the sky, with fixed wireless or millimeter-wave technology. AT&T, long a laggard with regard to fiber to the home, is now offering multi-gig service tiers. It’s clear that all of this will drive other changes, but what?
In its most recent quarter, Verizon reported 78 thousand FWA adds, up from 55 thousand the prior quarter (residential and business in both cases), compared with 55 thousand Fios adds. Yes, Verizon has been deploying Fios for a long time, but the fact that its new millimeter-wave wireless service has passed Fios in incremental adds is still impressive. It proves that the technology can be valuable as a means of providing high-speed broadband where fiber isn’t quite cheap enough. It won’t bridge the digital divide, but it might at least bridge the digital suburban off-ramp.
AT&T’s decision to push fiber-based broadband to 2 and 5 gig speeds is an admission that it needs to offer premium broadband or risk having someone else steal a lot of customers. AT&T’s wireline footprint is largely overlapped by at least one cable competitor, and relentless advances in DOCSIS mean that cable could be that “someone else”. Not to mention the risk of local competitors in areas where demand density is high, including deals involving partnerships with state/local government.
We’re not going to see gigabit rates from the broadband subsidies now floating about, but it is very likely that even many rural areas will have broadband good enough to support streaming video, and that creates the first of the secondary changes we’re going to talk about.
Cable companies got started by syndicating access to multiple live TV channels at a time when broadband capacity couldn’t deliver streaming live TV in the form we have today. Obviously it now can, and for a growing number of customers. Does this mean that the streaming players will eat the linear live TV model? Yes, but the victory may be short-lived because the networks may eat the streaming players.
What I’ve heard off-the-record from every network, studio, and video content creator is that they’re happy to have streaming syndicators as long as what those syndicators do is resell bundled network TV and video content and wrap a multi-channel GUI around it. The old “content is king” story is getting new life as the TV networks in particular realize that they need to brand their own streaming service. Remember that in the recent dispute with YouTube TV, NBCU wanted Google to sell a Peacock service as a bundle rather than license separate channels. I think that’s what everyone wants, and of course that isn’t a huge opportunity for those already in the streaming-multichannel-TV business.
It may not be any opportunity at all, in fact, because there are already players (including, so the rumor goes, Apple) who see themselves as creating the ultimate video front-end, one that would integrate with every provider of content, live or on demand. Amazon, Google, and Microsoft are all said to be exploring the same option, to include cloud storage of “recorded” live material. Roku is also said to be looking into being a universal content front-end. Google, of course, already has YouTube and YouTube TV, and anything it does here would likely be held in reserve until it was clear that its YouTube TV property was under threat.
This video-front-end mission requires APIs that would be used to integrate the content, and that opens another likely change, which is the growth of content-for-syndication players. Today, a small new “network”, a creator of limited specialty content, has a rough time because their material isn’t presented as one choice among a broad set of “what’s on?” options. A syndication player could offer its APIs to anyone, and a new content player could integrate with them. Since there is IMHO zero chance that this new content front-end wouldn’t offer both on-demand and live material, any content could be integrated with the front-end element, creating a kind of “open-Roku” model.
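To make the idea concrete, here’s a minimal sketch in Python of what such a syndication integration might look like. Everything in it, the class names, the fields, the notion of a list_content call, is hypothetical; it describes no real provider’s API, just the shape of the “open-Roku” idea.

```python
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class Listing:
    provider: str      # the syndicating content source
    title: str
    kind: str          # "live" or "on-demand"
    stream_url: str    # where the front-end would fetch the stream


class SyndicationSource:
    """One content provider reached through its (hypothetical) listing API."""

    def __init__(self, provider: str, listings: List[Listing]):
        self.provider = provider
        self._listings = listings

    def list_content(self, kind: Optional[str] = None) -> List[Listing]:
        # A real implementation would call the provider's published API;
        # this filters a local catalog so the sketch stays runnable.
        return [item for item in self._listings if kind is None or item.kind == kind]


class FrontEnd:
    """The 'open-Roku' element: one guide assembled over many sources."""

    def __init__(self, sources: List[SyndicationSource]):
        self.sources = sources

    def whats_on(self) -> List[Listing]:
        # Merge live listings from every integrated source into one "what's on?" view.
        return [item for src in self.sources for item in src.list_content(kind="live")]


if __name__ == "__main__":
    niche = SyndicationSource("TinySportsNet", [
        Listing("TinySportsNet", "Regional Finals", "live", "https://example.invalid/live1"),
    ])
    network = SyndicationSource("BigNetwork", [
        Listing("BigNetwork", "Evening News", "live", "https://example.invalid/live2"),
        Listing("BigNetwork", "Drama S01E01", "on-demand", "https://example.invalid/vod1"),
    ])
    for item in FrontEnd([niche, network]).whats_on():
        print(f"{item.provider}: {item.title}")
```

The point of the sketch is that the niche “network” shows up in the same guide as the big one; the only thing the small player needs is an API integration, not a carriage deal.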
This is a massive shift, of course, which means it will take a lot of time to complete. The near-term initiatives of networks to build their own streaming brands are a clear indicator of where they’d like things to go, and that they’re taking careful steps to move things along. That means maintaining syndication deals with streaming aggregators until their direct streaming relationships demonstrate staying power. We should expect to see more and more content licensing disputes between networks and streaming services, some going beyond the current walk-up-to-the-brink pattern and actually resulting in the loss of some material for a significant period. At some point, that “significant period” will start to mean “forever” for popular network material.
All this is going to impact the market, but it’s not the end of the impact of better broadband. If we assume, as we should, that urban/suburban services are heading above the gig level in terms of top-tier bandwidth, we have to assume that “residential” broadband is going to offer a major cost advantage versus traditional business services.
The cost per bit of residential broadband has been far lower than the equivalent cost for business broadband, but companies have paid the differential because of the difference in availability and QoS. Today, with more and more of the front-end piece of every business application migrating to the cloud, and more and more of application networking being carried on the Internet, it’s looking questionable whether the availability/quality differentiator for business broadband can hold.
The answer likely lies in just what “gigabit” broadband really means. A packet interface is “clocked” at a data interface rate, meaning that packets are delivered to the user/network interface at a rate that corresponds to the service’s “bandwidth”. Most users who have high-quality broadband and take the time to assess the real speed of their service find that it doesn’t match the clock rate. Congestion deeper in the network, capacity metering, or constriction of capacity at content sources or applications can all reduce the end-to-end delivery rate. Upstream and downstream performance can also differ, both in clock speed (asymmetrical services like 100/20 Mbps) and in actual end-to-end delivery rate. These variations won’t typically mean much to consumers, but they could mean a lot to businesses.
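The distinction is easy to see for yourself: the clock rate is what the service is sold at, but what you can actually measure is the delivery rate of real transfers. A minimal sketch, where the URL is a placeholder for any large test file you trust:

```python
import time
import urllib.request


def measured_mbps(url: str) -> float:
    """Time a real download and report the delivered rate in Mbps."""
    start = time.monotonic()
    with urllib.request.urlopen(url) as resp:
        nbytes = len(resp.read())
    seconds = time.monotonic() - start
    return nbytes * 8 / seconds / 1e6


# On a "gigabit" service this rarely prints anything close to 1000.
print(measured_mbps("https://example.com/testfile.bin"))
```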
Even a big household may be challenged to consume a gigabit connection with streaming video and video calls. A branch office of an enterprise, with anywhere from a half-dozen to a hundred or so workers, could do so far more easily, particularly if there are “deeper” points of constriction. Feed a dozen gigabit connections into an aggregation point with a single gigabit trunk going out, and it’s obvious that as usage of those connections rises, the effective performance of every connection will fall below the clocked value.
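The arithmetic behind that point is simple enough to sketch. The numbers below are purely illustrative, not drawn from any real operator’s aggregation network:

```python
def effective_rate_mbps(access_rate_mbps: float, subscribers: int,
                        trunk_rate_mbps: float, utilization: float) -> float:
    """Per-subscriber delivery rate when 'subscribers' access lines, each busy
    at 'utilization' (0..1), share one trunk behind the aggregation point."""
    offered = subscribers * access_rate_mbps * utilization   # demand hitting the trunk
    if offered <= trunk_rate_mbps:
        return access_rate_mbps * utilization                 # trunk isn't the limit
    return trunk_rate_mbps / subscribers                      # trunk shared equally


# A dozen gigabit lines behind a single gigabit trunk:
print(effective_rate_mbps(1000, 12, 1000, 0.05))   # light household-style use: ~50 Mbps each, no problem
print(effective_rate_mbps(1000, 12, 1000, 1.0))    # every line busy: ~83 Mbps each, far below the clock rate
```

A household rarely pushes utilization high enough to hit the second case; a branch office with dozens of workers very well might.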
The obvious question is whether some variant of consumer-level broadband access could be leveraged as a business service. The initial impact of radically improved broadband speeds in the consumer space would be a significant advance in the use of SD-WAN, as opposed to IP VPNs, at branch locations. The limiting factor on this trend would be the deeper constriction in performance just noted. Most SD-WAN adds header overhead, which means that some “throughput” isn’t “goodput”, to use the popular terms. Even where header overhead is minimal, though, it’s possible that consumer broadband won’t deliver much more end-to-end performance at gig speeds than it does at a tenth of that. Could that encourage a change in service? There are two options.
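To put a rough number on the goodput point, assume, purely for illustration, 60 bytes of per-packet encapsulation overhead on a 1500-byte MTU; real SD-WAN overlays vary, and this figure is an assumption, not a measurement:

```python
def goodput_mbps(line_rate_mbps: float, mtu_bytes: int = 1500,
                 overhead_bytes: int = 60) -> float:
    """Payload rate left over after per-packet encapsulation overhead."""
    payload_fraction = (mtu_bytes - overhead_bytes) / mtu_bytes
    return line_rate_mbps * payload_fraction


print(goodput_mbps(1000))   # ~960 Mbps of goodput on a 1 Gbps clocked service
print(goodput_mbps(100))    # ~96 Mbps at a tenth of the speed
```

The overhead itself is modest; the real question is whether the deeper network can deliver anything near the clocked rate in the first place.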
The obvious option would be to establish a “premium” handling policy at those deeper points in the network. Hop off the access broadband network and turn left, and you get consumer Internet; turn right and you get business broadband. The advantage of this is that it leverages mass-market access technology to lower the cost of business service. The disadvantage is that there are certainly business sites located in areas where business density is too low to make much premium infrastructure profitable.
The second option would be “Internet QoS”, which in debates on net neutrality tends to be called “paid prioritization”. If premium handling were made a broad option in mass-market infrastructure, it could be used by businesses to support SD-WAN services and by consumers where they needed better-than-best-effort handling. The advantage of this is clear; we end up with better broadband. The disadvantages, which I’ll get to below, are equally clear.
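Mechanically, paid prioritization isn’t exotic; it comes down to marking packets with a code point that the network agrees to handle preferentially. A minimal sketch using the standard DSCP “Expedited Forwarding” marking (the address and port are placeholders, and whether any access network would actually honor the mark is exactly the open question):

```python
import socket

EF_DSCP = 46                 # "Expedited Forwarding" DSCP code point
TOS_VALUE = EF_DSCP << 2     # DSCP occupies the top six bits of the TOS byte

# Mark outbound datagrams on this socket with EF; an SD-WAN endpoint could
# mark its tunnel traffic the same way. (IP_TOS is available on Unix-like
# platforms; the destination below is a placeholder test address.)
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS_VALUE)
sock.sendto(b"hello", ("192.0.2.1", 4789))
```

The marking is the easy part; the business and regulatory agreement to honor it is the hard part.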
Few doubt that paid prioritization would result in erosion of the standard service. At the very least, we could expect that broadband wouldn’t get “better” unless we paid more to make that happen. Given how dependent the OTT industry, and the legion of startups and VCs it empowers, is on quality broadband, and given that “net neutrality” has been a political and regulatory football, this option looks like a non-starter, at least in the near term.
The biggest barrier to either option, though, is the profits of the operators whose infrastructure would have to be changed. To invest in quality left-turn-or-right business handling of broadband is to support a migration of your customers from an expensive service to a cheaper one. That’s not a formula for success at a time when your profit per bit is already sinking.
We’ve had a period of some stability in the broadband space, with technology evolving rather than revolutionizing. We may be seeing an end to that now, and the shift will create opportunities and risks for both vendors and operators.