The old “How low can you go?” question may, for broadband at least, be augmented by the question “How much capacity can we sell?” As this story in Light Reading shows, at least some operator planners are looking ahead to things like 10G consumer broadband services. While this might be generating what the media calls “good ink”, there are major questions about the viability of ultra-fast wireline broadband. Are we falling into the “5G trap” with wireline, too?
By now, many people realize that the vaunted 5G speed increases don’t really make much of a difference in the experiences mobile users are seeking. You can’t make video better by pushing it out at a high multiple of the material’s characteristic data rate, which is limited by smartphones’ ability to display high-resolution content and by our ability to perceive better video on a small screen. But did the publicity around 5G’s speed help promote 5G phones, and thus accelerate 5G deployment? Maybe, but even if that’s true, there are still questions.
What’s interesting about 5G is that it’s a technology shift that has zero chance of failing, and that’s been true from day one. It’s the next step in modern wireless technology. The question was always whether it would generate any incremental revenue, since a shift to 5G was surely going to demand an increased investment in infrastructure. That same question is relevant to faster wireline broadband.
There is no question that broadband consumption has grown steadily over the last couple of decades. What’s behind that growth is obviously “more” of something, but the “something” here is video content. We are doing a lot more streaming than ever before, and our consumption of streaming video directly drives up bandwidth demand. On the basis of consumption history, 10G might not sound too outlandish. But….
….but how much video can really be consumed? The average family of four could all be streaming their own material, and in some cases they do, but my friends in the ISP world tell me that most households don’t have more than two television sets active at once, and that the largest consumer of streams, as opposed to bandwidth, is get-togethers where multiple people stream to their phones. A gathering of a dozen people, particularly young people, will often generate multiple simultaneous video streams because smartphone video tends to be viewed individually, given the limitations of the devices’ screens.
A high-resolution video stream requires about 8 Mbps of bandwidth. Most wireline broadband is moving toward a base speed of 50 to 100 Mbps, which means (say my ISP engineering friends) that you could reasonably expect to support around five such streams on a base wireline service. However, if we assume that video is delivered at only smartphone resolution, you’re down to maybe 4 Mbps per stream (neither figure assumes optimal compression) and perhaps 10 streams. Given that many consumers can get 1 Gbps today, and that this would support 75 to 150 streams depending on per-stream requirements, it seems to me that we’re projecting capacity-requirement growth beyond reasonable behavioral expectations.
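To put rough numbers on that, here’s a minimal back-of-the-envelope sketch in Python, using the per-stream rates assumed above; real planning would reserve headroom for other household traffic, which is why the figures in the text are more conservative.

```python
# Back-of-the-envelope sketch: concurrent video streams per access tier,
# using the per-stream rates assumed in the text and no headroom for other traffic.

PER_STREAM_MBPS = {"high_res": 8, "smartphone": 4}  # assumed rates, not measurements

def max_streams(access_mbps: float, per_stream_mbps: float) -> int:
    """Streams that fit entirely within the access rate."""
    return int(access_mbps // per_stream_mbps)

for tier in (50, 100, 1000):
    hd = max_streams(tier, PER_STREAM_MBPS["high_res"])
    phone = max_streams(tier, PER_STREAM_MBPS["smartphone"])
    print(f"{tier} Mbps: up to {hd} high-res or {phone} smartphone-grade streams")
```

Even after subtracting generous headroom, a gigabit tier can carry far more simultaneous video than a household plausibly watches.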
That’s important because of the how-low-can-you-go question. Most consumers aren’t looking to throw money away, so they tend to be cautious about what they pay for home broadband. They’ll often go with a low-end package and increase speed only if they need it, and that behavior shows in the distribution of customers by service tier, which almost every ISP says clusters at the low end of the available capacity range. If there were real demand for 10G speeds, you’d expect to see people clustered at the highest speed currently available, and you don’t.
That resistance to paying more also limits the ISP ROI associated with any investment in higher broadband speeds. Even for business services, doubling the speed of a connection never results in doubling the revenue. In consumer broadband, at least in my own geography, the lowest speed available has increased by roughly four times over the last decade, but the price is only about 15% higher. The fact that prices are rising more slowly than capacity is the reason why “profit-per-bit” has fallen so sharply.
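A quick sketch makes the squeeze concrete; the speeds and prices below are hypothetical round numbers chosen only to match the rough ratios cited above (four times the speed, about 15% more revenue).

```python
# Hypothetical illustration of the profit-per-bit squeeze, with round numbers
# chosen only to match the rough ratios above (4x the speed, ~15% more revenue).

old_speed_mbps, new_speed_mbps = 25.0, 100.0   # "roughly four times" the entry speed
old_price, new_price = 50.00, 57.50            # assumed monthly prices, ~15% higher

old_rev_per_mbps = old_price / old_speed_mbps  # 2.000 per Mbps
new_rev_per_mbps = new_price / new_speed_mbps  # 0.575 per Mbps

print(f"Revenue per Mbps is now {new_rev_per_mbps / old_rev_per_mbps:.0%} of what it was.")  # ~29%
```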
The obvious ISP response to this situation is what used to be called “overbooking”, meaning assigning more theoretical capacity than a network can actually carry. The speed of a connection is almost always measured by the rate at which the user interface is clocked. In TDM (time-division-multiplexed) networks, the entire connection path would be clocked at the interface speed, so a megabit interface would mean a megabit of transport capacity. In packet networks, transport capacity is shared, with the expectation that traffic will have a random distribution of packets so one conversation’s peaks can fit into another’s valleys. But suppose we clock an interface at ten gig, knowing that we really have the transport capacity to support only the same actual packet rate we had with a one-gig interface?
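For illustration, here’s a minimal sketch of that overbooking arithmetic; the subscriber counts and transport capacities are hypothetical, not any operator’s actual design.

```python
# Illustrative overbooking math with hypothetical numbers: the ratio of the access
# capacity sold to the transport capacity actually provisioned behind it.

def oversubscription_ratio(subscribers: int, access_gbps: float, transport_gbps: float) -> float:
    """Aggregate sold access capacity divided by shared transport capacity."""
    return (subscribers * access_gbps) / transport_gbps

# 32 homes sharing ~10 Gbps of transport, each sold a 1 Gbps interface...
print(oversubscription_ratio(32, 1.0, 10.0))    # 3.2:1

# ...versus the same homes sold 10 Gbps interfaces with no deeper upgrade.
print(oversubscription_ratio(32, 10.0, 10.0))   # 32:1
```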
This sort of thing would likely be caught by investigative journalists if not by regulators, but it might not be noticed if the ISP essentially gave away the 10x speed advantage. Why would they do that? Because the actual cost of the faster fiber connection could be very small if it wasn’t backed up by a commensurate upgrade in packet capacity deeper in the network, and the move would offer a competitive advantage. Since customer acquisition and retention is the largest component of opex, that could make sense.
Particularly given the fact that higher-capacity wireline broadband could be sold to businesses. Branch offices and SMB locations are usually in the same areas as residential users, and these sites could use, and pay for, much faster connections. That could make it smart for ISPs to deploy 10G-capable infrastructure, and could even offer a reasonable ROI on the incremental investment. After all, it’s the terminating gear that’s different; the fiber itself would likely be the same.
In a way, this is a bit like 5G. Companies like T-Mobile are pushing home broadband using mobile 5G infrastructure, and many are using millimeter-wave technology for that purpose. Remove the constraints on smartphone consumption of 5G and the higher speed can be sold. Same with wireline broadband; remove the presumed dependence on residential consumption, focus on business, and then sing the praises of your “faster” infrastructure, and you have a win.
A big beneficiary of this move could be virtual networking, including and maybe especially SD-WAN. Fiber-PON-based 10G service would be a boon for almost any business site. Could some ISPs switch to 10G PON for most or all of their service delivery? Could we be looking at a transformation of IP VPN services, away from MPLS VPNs to a different technology altogether? It’s possible, and if that happened it would be more of a revolution in networking, including the Internet, than a push to get consumers to 10G.