Could better knowledge, better data, create better broadband? That’s a question that the FCC is looking at, according to a story in Light Reading. Conceptually, the idea is an outgrowth of a general FCC look at the role of AI, undertaken by the Technical Advisory Council. The FCC isn’t noted for its speed relative to the market, and so the fact that this idea is just an application of a general TAC investigation of AI doesn’t bode well for quick answers. Since I’ve attempted to use statistics to assess broadband potential, I’d like to take a look at what seems to be missing in such efforts.
The profitability of broadband is the biggest determinant of its availability. Globally, telecom has been “privatized” for decades, which means that providers are typically public companies with a profit motive, not regulated monopolies or agencies of the government. A given market geography is therefore likely to have broadband in proportion to the opportunity the market area represents to operators, which in turn equates to the ROI broadband services could earn there.
The opportunity that a given area represents is fairly easy to classify; it’s related to the gross domestic product (GDP) of that area, to household income, to local retail sales, and so forth. This sort of data can be obtained fairly easily on a macro basis, by country or by area within it (down to the state level, using the US as an example). Neighborhood-level data is difficult to obtain because it’s expensive to develop and often collected only through census-taking, which typically happens once a decade and so is usually fairly out of date.
The real problem is the cost picture. You can get a reasonable idea of the cost/benefit relationship if you put your “opportunity” assessment on a per-unit-area basis. My “demand density” calculations do that: they assess opportunity per square mile in the US and relate the demand density of parts of the US, or of other countries, to the US metric (the US as a whole has a demand density of 1.0 in my work). This works fairly well where demand is fairly concentrated, which in my own model means a demand density of at least 2.0 on my scale, but it can fail at lower demand densities.
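To make the normalization concrete, here’s a minimal sketch in Python, with aggregate household income standing in as the opportunity proxy; the baseline totals and the example figures are purely illustrative, not the actual inputs to my model.

```python
# A minimal sketch of the normalized "demand density" idea: economic
# opportunity per square mile, scaled so that the US as a whole is 1.0.
# The opportunity proxy (aggregate household income) and all figures
# below are illustrative assumptions, not the author's actual inputs.

def opportunity_per_sq_mile(total_opportunity: float, area_sq_miles: float) -> float:
    return total_opportunity / area_sq_miles

# Hypothetical national baseline: total opportunity and land area.
US_DENSITY = opportunity_per_sq_mile(total_opportunity=1.0e13, area_sq_miles=3.5e6)

def demand_density(total_opportunity: float, area_sq_miles: float) -> float:
    """Demand density relative to the US baseline (US == 1.0)."""
    return opportunity_per_sq_mile(total_opportunity, area_sq_miles) / US_DENSITY

# Example: a dense metro area scores well above 1.0, a rural region well below.
print(demand_density(total_opportunity=9.0e11, area_sq_miles=5.0e4))  # metro-ish, ~6.3
print(demand_density(total_opportunity=3.0e10, area_sq_miles=4.0e4))  # rural-ish, ~0.26
```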
The problem is that as demand density falls, the question of how you’re going to deliver broadband becomes critical. A good example is fiber to the home (FTTH). Based on current passive optical network (PON) technology, FTTH can be profitable where demand densities are around 5.0 or higher. CATV infrastructure can be profitable at demand densities of around 2.0 and higher, and 5G/FTTN hybrids could be profitable in about the same 2.0-and-up range. However, these technologies are limited in their range of coverage, so average demand densities over a wide area won’t tell the whole story.
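Treating those figures as simple cutoffs on the normalized scale, a sketch of the resulting technology screen might look like this; the thresholds come from the paragraph above, but the coverage-range limits it mentions aren’t modeled here:

```python
# A sketch of the profitability thresholds named above, treated as simple
# cutoffs on the normalized demand-density scale. Real planning would also
# have to model each technology's coverage range, which cutoffs alone miss.

TECH_THRESHOLDS = {
    "FTTH (PON)": 5.0,
    "CATV":       2.0,
    "5G/FTTN":    2.0,
}

def viable_technologies(density: float) -> list[str]:
    """Technologies whose demand-density threshold the area meets."""
    return [tech for tech, floor in TECH_THRESHOLDS.items() if density >= floor]

print(viable_technologies(6.3))  # ['FTTH (PON)', 'CATV', '5G/FTTN']
print(viable_technologies(2.4))  # ['CATV', '5G/FTTN']
print(viable_technologies(0.8))  # []
```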
What we’d really need is demand density data that, in US terms, was per ZIP code or even finer-grained. Think of it as a “subdivision” issue. When you concentrate people, either in terms of where they live or in terms of where they work and shop, you create not only a concentration of demand but a simplification in demand fulfillment. More technologies work within those concentrations than would work outside them.
A home in the country, even the home of a big earner/spender, is almost a one-off in terms of provisioning a service, because it may be a mile or so from other dwellings and thus can’t share provisioning with them. The only thing that’s going to connect this kind of place profitably is an over-the-air service with decent range. A home of the same value in a residential subdivision, with lots that average 15,000 square feet, is part of a collection that could be served from perhaps a single head-end, and by a PON deployment, CATV, and 5G/FTTN all at once; a rough cost-sharing example is sketched below.
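A back-of-the-envelope illustration of that cost-sharing effect, with an entirely hypothetical construction cost and a homes-per-mile figure implied by assuming roughly 100 feet of lot frontage:

```python
# Hypothetical arithmetic on shared provisioning. The $70,000-per-mile
# fiber construction cost and the homes-passed figures are illustrative
# assumptions, not sourced numbers.

FIBER_COST_PER_MILE = 70_000.0

# Rural: roughly one home per mile of plant carries the whole cost.
rural_cost_per_home = FIBER_COST_PER_MILE / 1

# Subdivision: 15,000 sq ft lots with ~100 ft of frontage work out to
# roughly 100 homes passed per mile, counting both sides of the street.
suburban_cost_per_home = FIBER_COST_PER_MILE / 100

print(f"Rural:  ${rural_cost_per_home:>8,.0f} per home passed")
print(f"Suburb: ${suburban_cost_per_home:>8,.0f} per home passed")
```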
There are also often issues of public policy involved. The US is hardly unique in its debates about the “digital divide”, the difference in broadband empowerment between rural and urban/suburban areas. Almost all countries with respectable telecom infrastructure will have these three zones of habitation, but the nature of the three will vary. Australia and Japan both have cities, suburbs, and rural areas, but in Australia even the cities and suburbs are of generally lower density. In Japan, demand density at the national level makes good broadband almost inevitable. In Australia, they’ve resorted to a not-for-profit company (the National Broadband Network) to semi-nationalize broadband in an effort to improve quality for all.
There may be a special role for AI even without any better data to work with, though. One example is that AI might be used in conjunction with mapping software to identify business and residential locations and create opportunity density and access efficiency data. While this might not be completely up-to-date, it would still be a significant improvement over what’s available today to guide public policy. Most operators have their own databases showing connections and business or residential locations, and this would also be suitable as a jumping-off point for AI analysis of broadband ROI, which could help their own service and infrastructure planning.
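As a sketch of what the first step of that analysis might look like, here’s a toy example of binning extracted premises locations into map grid cells to get per-cell counts that could feed opportunity-density estimates; the points, cell size, and classifications are all placeholders:

```python
# A toy sketch of aggregating premises locations (e.g., extracted from
# mapping imagery or operator records) into grid cells. The coordinates,
# cell size, and location kinds below are illustrative placeholders.

import math
from collections import Counter

def bin_premises(points, cell_deg=0.01):
    """Count premises per grid cell; points are (lat, lon, kind) tuples."""
    counts = Counter()
    for lat, lon, kind in points:
        cell = (math.floor(lat / cell_deg), math.floor(lon / cell_deg))
        counts[(cell, kind)] += 1
    return counts

premises = [
    (38.912, -77.036, "residential"),
    (38.913, -77.035, "business"),
    (38.914, -77.037, "residential"),
]
for (cell, kind), n in bin_premises(premises).items():
    print(cell, kind, n)
```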
What, exactly, could be done with this data, whatever its source? Let’s start by saying that the best single tool would be what I’d call an ROI gradient. Think of it as a three-dimensional map whose first two dimensions are classic map dimensions and whose third is the ROI potential of each point. The surface created would point out the best and worst areas, and if we presumed that an ROI surface was created for each technology option based on its technical limitations and estimated pass and connect costs, we could get a pretty good idea of where service would be profitable and where it might not be.
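A sketch of what building per-technology ROI surfaces over a map grid could look like, with an entirely hypothetical cost model: revenue scales linearly with demand density, and the per-unit pass/connect cost is shared more widely as density rises. None of the figures are calibrated; they just show the shape of the tool.

```python
# A sketch of the "ROI gradient" idea: a grid over a map, where each cell
# gets an ROI estimate per technology from its demand density and assumed
# pass/connect costs. The cost model and all figures are hypothetical.

import numpy as np

# Hypothetical per-technology economics:
# (revenue multiplier, pass+connect cost per unit at density 1.0)
TECH_ECONOMICS = {
    "FTTH":    (1.00, 1200.0),
    "CATV":    (0.85,  700.0),
    "5G/FTTN": (0.80,  650.0),
}

def roi_surface(density_grid: np.ndarray, revenue_per_density: float = 500.0):
    """Return a dict of technology -> ROI grid over the same map cells."""
    surfaces = {}
    for tech, (rev_mult, unit_cost) in TECH_ECONOMICS.items():
        revenue = revenue_per_density * rev_mult * density_grid
        # Per-unit cost falls as density rises (plant is shared more widely).
        cost = unit_cost / np.maximum(density_grid, 0.1)
        surfaces[tech] = (revenue - cost) / cost
    return surfaces

# Example: a 3x3 patch of map cells with mixed demand densities.
grid = np.array([[6.0, 4.5, 2.2],
                 [3.0, 1.5, 0.9],
                 [1.1, 0.6, 0.3]])
for tech, surf in roi_surface(grid).items():
    print(tech, np.round(surf, 2))
```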
For operators, something like this would let them know the best way to address a given demographic or geographic market segment, and whether any approach was good enough to make service there profitable absent subsidization. Governments, of course, could use the same data to target subsidies where they’d actually make a difference, and to set rules on the technologies used so that subsidies actually delivered the optimum technical choice for the area. That would reduce the risk of “stranded subsidies”, where funds are allocated to a technology that, in the longer term, would be unlikely to remain competitive in service delivery capability.
The barrier to better broadband through knowledge, though, really starts with accepting that knowledge is actually the only path to it at all. There’s enormous economic and demographic/geographic diversity in most countries, and even more when you consider the essential locality of service technologies. We’ve managed to ignore this for well over a decade, which suggests we could likely find a way to keep ignoring it today. If we do, then we’ll always have a “digital divide”, and it may get worse over time.