The FCC Does it Again!

The FCC released its first fairly detailed study of Internet performance in promised-versus-delivered terms, and while it has some interesting stuff in it, there’s also a rather substantial internal contradiction that’s troubling for our ability to set broadband policy.  It seems the government has ignored the whole basis of IP networking.

In the good old days of TDM, the “speed” of the network was set by the rate at which the network interface clocked data in and out.  A T1 line delivers 1.544 Mbps all the time, or it’s broken.  That capacity is available whether or not traffic is being passed, and since most data applications don’t actually consume 100% of capacity, the rest is wasted.  When packet services evolved in the 1980s, their cost savings were based on their ability to divide information into chunks (packets) and intermix them so that idle periods on a trunk were used effectively.

IP is a packet protocol, and IP’s savings over TDM rest on that same principle.  Packets save money by using wasted space, so the corollary is that if there’s no wasted space, there’s no money saved.  A traffic source that uses all the capacity rather than chunks of it leaves no gaps for other traffic to fill.  In effect, that source is consuming bandwidth just as TDM did.
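
To put numbers on that, here’s a toy Monte Carlo sketch of statistical multiplexing.  The user count, duty cycle, and trunk size are made-up assumptions, chosen only to show the shape of the trade-off, not figures from the FCC study:

```python
import random

# A minimal sketch of the statistical multiplexing behind packet savings.
# All numbers are illustrative assumptions, not data from the FCC study.

SOURCES = 100        # bursty users sharing one trunk
PEAK_MBPS = 10.0     # each user's interface (clock) speed
DUTY_CYCLE = 0.10    # fraction of the time a user is actually sending
TRIALS = 100_000

tdm_capacity = SOURCES * PEAK_MBPS   # TDM: reserve peak rate for everyone
trunk_mbps = 150.0                   # packet trunk sized near the average load

overflows = 0
for _ in range(TRIALS):
    # Sample an instant: how many users happen to be transmitting right now?
    active = sum(random.random() < DUTY_CYCLE for _ in range(SOURCES))
    if active * PEAK_MBPS > trunk_mbps:
        overflows += 1

print(f"TDM would reserve {tdm_capacity:.0f} Mbps for these users")
print(f"A {trunk_mbps:.0f} Mbps packet trunk is oversubscribed only "
      f"{100 * overflows / TRIALS:.1f}% of the time")
```

With these assumed numbers, users whose interfaces clock 1,000 Mbps in aggregate ride comfortably on a 150 Mbps trunk, because only about ten of them are sending at any given instant.  Drive every source to 100% duty cycle and the trunk must grow to the full TDM number, and the savings vanish.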

The speed at which a packet source can present data, when it has data to send, is the speed of the interface.  Any synchronous digital interface “clocks” data in and out at a fixed rate, which is its rated speed.  Think of it as creating a fixed number of bit-buckets, each of which can hold a bit if one is presented.  Traffic from a group of interfaces, like cable modems or DSL lines, is aggregated upstream, and the aggregation trunks fill the gaps in one user’s traffic with another’s, so the trunks don’t need a capacity equal to the sum of the individual interface speeds.  That’s why we can offer consumer broadband for under twenty bucks a month when 64 kbps of TDM would cost five times that amount.
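
As a back-of-the-envelope illustration of that economics (the subscriber count, speeds, and trunk size below are hypothetical, picked only to show the arithmetic):

```python
# Hypothetical numbers illustrating aggregation arithmetic; they are
# not actual carrier costs or provisioning ratios.

interface_mbps = 20.0    # each subscriber's access clock speed
subscribers = 500        # subscribers sharing one aggregation trunk
trunk_mbps = 1000.0      # capacity of the shared trunk

oversubscription = (subscribers * interface_mbps) / trunk_mbps
print(f"Oversubscription ratio: {oversubscription:.0f}:1")
# -> 10:1 here; the trunk is a tenth of the sum of the interface
#    speeds, and that gap is where the consumer price savings live.
```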

So how does this impact our dear FCC and its study of Internet speeds?  Well, they’ve determined that most Internet users don’t get 100% of the advertised speed, meaning the clock speed of the interface.  But 100% of the interface speed 100% of the time would be TDM, which nobody has.  They have a nice chart that shows who does “best” and who does “worst”.  The problem is that all they’re really measuring is the degree to which each operator’s aggregation network is utilized.  FiOS does best, so does that mean it somehow guarantees users TDM bandwidth?  No, it means that FiOS isn’t currently utilized to its design level, so its users see less aggregation congestion.  By the FCC’s measure, the operator with the best Internet would be the one with no customers to congest it.
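
Here’s a toy fair-share model (my own hypothetical, not the FCC’s methodology) showing why: the “percent of advertised speed” a user measures is just a function of how many neighbors happen to be active at the same time.

```python
def delivered_fraction(interface_mbps: float, trunk_mbps: float,
                       active_users: int) -> float:
    """Fraction of advertised speed a user sees in a toy fair-share
    model: each active user gets an equal slice of the shared trunk,
    capped at their own interface clock speed."""
    share = trunk_mbps / max(active_users, 1)
    return min(interface_mbps, share) / interface_mbps

# A lightly loaded network "scores" 100%; the same plant at its design
# load scores far lower, with nothing about what was built having changed.
for active in (10, 50, 100, 200):
    pct = 100 * delivered_fraction(20.0, 1000.0, active)
    print(f"{active:3d} active users -> {pct:5.1f}% of advertised speed")
```

In this sketch, the same 1 Gbps trunk delivers 100% of a 20 Mbps advertised speed with 50 active users and only 25% with 200, which is exactly why a “who delivers the most of their advertised speed” chart ranks emptiness, not engineering.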

There are two problems with all of this.  First, you can’t make good public policy from data that demonstrates nothing useful or even relevant.  Second, we’re again encouraging a model of consumer broadband, and a set of user expectations, that are totally unrealistic.  The only way to give users 100% of interface speed all the time is to give every one of them a dedicated path to every content resource.  Making the most uncongested (likely meaning least-populated) network look “best” encourages churn that then populates that network and drives its delivered fraction of interface speed below 100%.
