The FCC Does it Again!

The FCC released its first fairly detailed study of Internet performance in promised-versus-delivered form, and while it has some interesting stuff in it, there’s also a rather substantial internal contradiction in the whole study that is troubling for our ability to set broadband policy.  It seems the government has ignored the whole basis for IP networking.

In the good old days of TDM, the “speed” of the network was set by the rate at which the network interface clocked data in and out.  A T1 line delivers 1.544 Mbps all the time, or it’s broken.  That capacity is available whether or not traffic is being passed, and since most data applications don’t actually consume 100% of capacity, the rest is wasted.  When packet services evolved in the 1980s, their cost savings were based on their ability to divide information into chunks (packets) and intermix them so that idle periods on a trunk were used effectively.

IP is a packet protocol, and IP savings over TDM are based on that same principle.  The packet concept saves money by using wasted space, so the corollary truth is that if there’s no wasted space there’s no money saved.  A traffic source that uses all the capacity rather than chunks of it leaves no gaps for other traffic to fill.  In effect, that source is consuming bandwidth like TDM did.

The speed at which a packet source can present data, when it has data to send, is the speed of the interface.  Any synchronous digital interface “clocks” data in and out at a fixed rate, which is its rated speed.  Think of it as creating a fixed number of bit-buckets, each of which can hold a bit if one is presented.  Traffic from a group of interfaces, like cable modems or DSL lines, is aggregated upstream, and the aggregation trunks fill the gaps in one user’s data with information from another, so a trunk’s aggregate speed is not the sum of the speeds of the individual interfaces it serves.  That’s why we can give a consumer broadband service for under twenty bucks a month when 64 kbps of TDM would cost five times that amount.
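To put some numbers on that principle, here’s a minimal back-of-envelope sketch in Python.  The subscriber count, interface speed, duty cycle, and burst headroom are all assumptions chosen purely for illustration, not figures from any operator or from the FCC study:

```python
# Back-of-envelope sketch of the aggregation math above.  All numbers here are
# assumptions chosen for illustration, not figures from the FCC study.

SUBSCRIBERS = 500        # access lines (cable modems, DSL) feeding one aggregation trunk (assumed)
INTERFACE_MBPS = 20.0    # advertised "clock" speed of each access interface (assumed)
DUTY_CYCLE = 0.05        # average fraction of time a subscriber is actually moving traffic (assumed)
HEADROOM = 3.0           # multiplier over average load to absorb bursts (assumed)

# TDM-style provisioning: every subscriber gets a dedicated path at full interface speed.
tdm_capacity = SUBSCRIBERS * INTERFACE_MBPS

# Packet-style provisioning: the trunk only needs to carry the average load plus headroom,
# because gaps in one user's traffic are filled with another user's packets.
average_load = SUBSCRIBERS * INTERFACE_MBPS * DUTY_CYCLE
packet_trunk = average_load * HEADROOM

print(f"Dedicated (TDM-equivalent) capacity: {tdm_capacity:8.0f} Mbps")
print(f"Shared packet trunk capacity:        {packet_trunk:8.0f} Mbps")
print(f"Oversubscription ratio:              {tdm_capacity / packet_trunk:8.1f} : 1")
```

With those assumed numbers, a shared trunk with less than a sixth of the dedicated capacity serves the same population, and that gap is exactly where the packet savings, and the low consumer price, come from.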

So how does this impact our dear FCC and its study of Internet speeds?  Well, they’ve determined that most Internet users don’t get 100% of the advertised speed, meaning the clock speed of the interface.  But 100% of the interface speed 100% of the time would be TDM, which nobody has.  They have a nice chart that shows who does “best” and who does “worst”.  The problem is that all they’re measuring is the degree to which the operators’ aggregation networks are utilized.  FiOS does best, so does that mean it somehow guarantees users TDM bandwidth?  No, it means that FiOS isn’t currently utilized to its design level, so users see less aggregation congestion.  By the FCC’s measure, the operator with the best Internet would be the one with no customers to congest it.
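Here’s a minimal sketch of that point, again with assumed numbers for the interface speed and trunk capacity: the very same network scores anywhere from 100% of advertised speed down to a small fraction of it, depending only on how many users happen to be active at once.

```python
# Illustrative sketch of why "percent of advertised speed" mostly measures how
# loaded the shared aggregation trunk is.  Numbers are assumptions, not FCC data.

INTERFACE_MBPS = 20.0   # advertised per-subscriber interface speed (assumed)
TRUNK_MBPS = 1500.0     # capacity of the shared aggregation trunk (assumed)

def measured_fraction(active_users: int) -> float:
    """Fraction of advertised speed each user sees if the trunk is shared
    fairly among everyone who is active at the same time."""
    fair_share = TRUNK_MBPS / active_users
    return min(1.0, fair_share / INTERFACE_MBPS)

for active in (25, 75, 150, 300):
    pct = 100 * measured_fraction(active)
    print(f"{active:3d} simultaneously active users -> {pct:5.1f}% of advertised speed")
```

A lightly loaded network scores 100%; the identical network with more simultaneously active users scores 50% or 25%, which is why the FCC chart ranks population and utilization, not engineering.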

There are two problems with all of this.  First, you can’t make good public policy with data that demonstrates nothing useful or even relevant.  Second, we’re again encouraging a model of consumer broadband, and a set of expectations, that are totally unrealistic.  The only way to give users 100% of interface speed all the time is to give every one of them a dedicated path to every content resource.  Making the most uncongested (likely meaning least-populated) network look “best” encourages churn that then populates that network and drops its realized share of interface speed below 100%.


Reading the iCloud

Apple’s iCloud is advancing quickly to production status, and with the progress comes more clarity into what the service will offer.  Three things about iCloud caught my eye: Windows integration, the pricing for data storage, and the potential competition with Microsoft’s Live strategy.

I’ve noted in the past that one of the biggest issues in cloud computing adoption, and one that is virtually never mentioned, is the cost of storage.  Standard storage pricing from market leaders in the space would put the cost of a terabyte of storage at over a thousand dollars a year, which is more than ten times the cost of buying a terabyte drive and twenty times the marginal cost per terabyte for many data center disk arrays.  With typical installed lives of three years, internal storage is then closing in on being ONE PERCENT of the cloud cost.  Apple’s iCloud pricing sets the bar even higher: at $100 a year for 50GB, a terabyte would cost roughly two thousand dollars a year.
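Here’s the rough arithmetic.  The cloud and iCloud prices are the figures cited above; the drive and array costs are ballpark assumptions implied by the “ten times” and “twenty times” ratios, not vendor list prices:

```python
# The arithmetic behind the comparison above.  The cloud and iCloud figures are
# the ones cited in the text; the drive and array costs are ballpark assumptions
# implied by the "ten times" and "twenty times" ratios.

CLOUD_PER_TB_YEAR = 1000.0     # cited cloud storage price, $ per TB per year
DRIVE_PER_TB = 100.0           # assumed one-time cost of a 1 TB drive, $
ARRAY_MARGINAL_PER_TB = 50.0   # assumed marginal cost per TB in a data center array, $
INSTALLED_LIFE_YEARS = 3       # typical installed life cited in the text

ICLOUD_PRICE_PER_YEAR = 100.0  # iCloud annual price cited in the text, $
ICLOUD_GB = 50.0               # storage included at that price, GB

cloud_3yr = CLOUD_PER_TB_YEAR * INSTALLED_LIFE_YEARS
print(f"3-year cloud cost per TB:          ${cloud_3yr:,.0f}")
print(f"Drive as a share of that:          {100 * DRIVE_PER_TB / cloud_3yr:.1f}%")
print(f"Array marginal cost as a share:    {100 * ARRAY_MARGINAL_PER_TB / cloud_3yr:.1f}%")

icloud_per_tb_year = ICLOUD_PRICE_PER_YEAR * (1024.0 / ICLOUD_GB)
print(f"iCloud-equivalent price per TB/yr: ${icloud_per_tb_year:,.0f}")
```

Under those assumptions, owned storage runs in the low single digits of percent of the three-year cloud spend, and the iCloud tier works out to roughly $2,000 per terabyte per year.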

It doesn’t take rocket science to see that we’re pricing cloud storage an order of magnitude or more beyond the equivalent cost in the data center, and many cloud services also charge for outbound delivery.  Those delivery rates could double the effective storage cost just by churning that terabyte out once per month.  Thus, current cloud pricing policies would discourage the deployment of enterprise mission-critical apps by pushing storage costs way above any possible point of justification.  We’re creating cloud computing for the masses, but not for masses of data.
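A quick sketch of that doubling effect, with the per-gigabyte delivery charge as an assumption (actual egress pricing varies by provider):

```python
# Sketch of the outbound-delivery point: churning the stored terabyte out of the
# cloud once a month roughly doubles the bill.  The egress rate is an assumption;
# actual per-GB delivery charges vary by provider.

STORAGE_PER_TB_YEAR = 1000.0   # storage price cited above, $ per TB per year
EGRESS_PER_GB = 0.10           # assumed outbound delivery charge, $ per GB
GB_PER_TB = 1024
CHURNS_PER_YEAR = 12           # read the full terabyte back out once per month

egress_per_year = EGRESS_PER_GB * GB_PER_TB * CHURNS_PER_YEAR
total = STORAGE_PER_TB_YEAR + egress_per_year
print(f"Storage:  ${STORAGE_PER_TB_YEAR:,.0f} per year")
print(f"Delivery: ${egress_per_year:,.0f} per year")
print(f"Total:    ${total:,.0f} per year ({total / STORAGE_PER_TB_YEAR:.1f}x storage alone)")
```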

The Windows connection with iCloud shows that Apple sees the service more like iTunes than like the App Store.  iTunes is a profit center, and the App Store is a feature of iOS that helps build value for the devices it supports.  iCloud is going to be a money-maker in itself, and that demands that Apple open a path to the largest installed base of PCs, which is Windows.  But even this factoid demonstrates something interesting: Windows dominates PCs, not appliances, and so iCloud must have a strong value proposition for PCs as well as for tablets and smartphones.  It has to leverage local resources more, because on PCs there are more local resources to leverage.

Which brings me to Live.  Microsoft has wanted Live to launch it into online success, but it has never been able to create a compelling value proposition for Live given the resources already available to Microsoft users in Windows and its application base.  That’s in large part because Microsoft was so worried about creating something that would pull users away from Windows or Office that it forgot Live had to do exactly that, at least to some degree, to have any utility.  They hunkered down on defense, and there’s only a small distinction between battlements designed to serve as a springboard for attack and those designed for your Last Stand.


Huawei Goes in For the Kill

Huawei, which has been gaining influence by leaps and bounds simply because it’s the network industry’s price leader, also showed real gains in strategic insight in our most recent survey.  Now Huawei is demonstrating that it intends to keep up its “build-a-strategy” trend by naming a kind of “Chief Security Officer”.  The mainstream view is that this is intended to alleviate fears among government agencies that Huawei is in some way a spying conduit for the PLA or something.  It’s not.

You don’t have to be a genius to figure out that a company’s naming of a CSO wouldn’t make that company itself less of a threat.  What’s the goal here, then?  It’s to build on Huawei’s growing lead in the networking market as a strategy leader and start to move into specific areas where early opportunity exists.  Security is a major issue for consumers and businesses as well as for service providers, and in the latter case the issue cuts both in the direction of self-protection and in the direction of managed services opportunity.

Our survey of enterprises found that the cloud computing statement they identified with most was “Only the network can secure the cloud”.  If operators selling network services like VPNs added a cloud security offering to those VPNs, it would likely sell well with enterprises even if it were positioned separately from the operator’s own cloud offering.  That’s critical because operators today have a miniscule share of the cloud market, and enterprises are likely to fiddle with cloud planning for a while yet before they fully grasp the implications.  On the security side, they already know what they need.  Not only that, a cloud security offering could grease the sales skids in positioning cloud services.  Who better to buy a cloud service from than the provider of your network security services?

For competing vendors this is another example of fiddling, this time while opportunity burns.  All of the major vendors offer some security tools, but none of them have created effective cloud security positioning, even those, like Cisco and Juniper, with offerings arguably aimed directly at the cloud.  And here’s Huawei, whom vendors have historically seen as little more than a street peddler complicating a sweet sales deal by standing outside the Macy’s window, moving aggressively and effectively to make something of the opportunity.  Yet another “shame-on-you-for-your-turtle-pace” moment.

Network equipment isn’t a growth market any more.  A major Street research firm has terminated coverage of ten network equipment vendors, and we’ve noted in past issues that more and more analysts are saying that network equipment spending in the service provider space is now monetization-limited.  The only hope of the network vendors was to create a killer service-layer strategy to fend off Huawei’s aggressive competition.  That’s now increasingly unlikely to happen because most don’t have a framework for a service layer, a platform productization of such a framework, or any idea how to build monetization applications.

On the latter point, we’ve undertaken an effort within our ExperiaSphere project to create an application note that describes how, based on a presumed ExperiaSphere model of a service layer, operators could build a solution to their monetization needs.  We’ve drawn the requirements from two critical operator use cases, content and telepresence, and we plan to publish a detailed implementation map.  We have received strong expressions of support from big operators for that effort, and when we finish the document (likely to run 12,000 words or more, with a dozen illustrations) we will make it available freely on our ExperiaSphere website.  We hope that operators will use it to decipher the complexities of content and telepresence monetization, to understand the principles of a reusable-component-based model of a service layer, and as a foundation for some very specific vendor RFI/RFP activity.  We have to tell our operator friends that we believe only they can drive the service layer fast enough to make a proof-of-concept trial possible by this time next year.