Cloudflare, Cloud Market Barriers, and the Edge

According to the company, Cloudflare wants to be the “fourth major public cloud”, yet at the same time EU cloud providers are steadily losing market share to the three current US giants. OK, bold aspirations make good ink, but there do seem to be some significant barriers rising in the public cloud market. Is that bad, and if so, what can we do about it? Could the same forces that limit the entry of new cloud providers also impact edge computing?

The public cloud giants have been getting steadily bigger, which is usually attributed to their superior economy of scale. It’s more complicated than that. The efficiency of a cloud doesn’t continue to grow as the size of the cloud grows; it’s an Erlang curve that plateaus at a point that a new cloud provider could reach, at least in one data center. It is true that the three largest public clouds can deploy in multiple geographies with near-optimum resource efficiency, but that’s not the biggest barrier.
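To put some rough numbers on that plateau, here’s a small Python sketch. It’s my own illustration using the textbook Erlang B formula, not anyone’s actual capacity model, and the 1 percent “no capacity available” target and the pool sizes are assumptions picked for the arithmetic. It estimates how highly a resource pool can be utilized while holding that target as the pool grows.

```python
def erlang_b(offered_load, servers):
    """Erlang B blocking probability via the standard recurrence."""
    b = 1.0
    for m in range(1, servers + 1):
        b = offered_load * b / (m + offered_load * b)
    return b

def achievable_utilization(servers, blocking_target=0.01):
    """Highest utilization that keeps the chance of finding no free
    server at or below the blocking target (bisection on offered load)."""
    lo, hi = 0.0, 2.0 * servers
    for _ in range(50):
        mid = (lo + hi) / 2
        if erlang_b(mid, servers) <= blocking_target:
            lo = mid
        else:
            hi = mid
    return lo * (1 - erlang_b(lo, servers)) / servers

for n in (10, 100, 1_000, 10_000, 100_000):
    print(f"{n:>7,} servers -> ~{achievable_utilization(n):.1%} utilization")
```

With those assumptions, utilization climbs steeply through the first few hundred servers and is within a few points of its ceiling by a few thousand; beyond that, sheer scale buys very little, which is the plateau I’m describing.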

Over the decade since public cloud services were first offered, our notion of the cloud has changed radically. At first, people thought the cloud was essentially a virtual-machine-as-a-service, a place where either everything was eventually going to live (wrong) or at least where server consolidation would come to roost. Over time, it’s become clear that the cloud is a partner to the data center, a place where, in particular, the application front-end elements relating to the user experience could be hosted. That mission gradually drove the expansion of “web services” and the creation of “platform-as-a-service” offerings.

Today’s cloud has dozens of hosted features, available to developers via APIs, and multiple hosting options beyond IaaS. Applications for public clouds could be built without these features, but it would be more difficult and require more expertise on the part of developers. This rich set of services creates a major barrier to market entry for other cloud aspirants, and interestingly enough, Cloudflare isn’t (at least for now) actually proposing to build most of those features. Instead, it seems to be concentrating on just one: cloud data storage.

Data storage in the cloud is expensive on two levels. First, the storage itself is (according to many enterprises I’ve chatted with) expensive enough to rule it out for some applications. Second, companies like Amazon (with S3) charge for data access and, especially, egress, which makes things particularly costly in hybrid cloud applications where both cloud and data-center elements have to access the same data. What Cloudflare proposes to do is make storage cheaper and eliminate the egress charges incurred where data has to cross cloud boundaries.
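A quick back-of-envelope shows why the second point stings. The per-GB rates below are round-number assumptions I picked for the arithmetic, not quoted prices from Amazon or anyone else, though they’re in the general neighborhood of published object-storage and egress pricing.

```python
# Round-number assumptions for illustration, not quoted provider prices.
STORAGE_PER_GB_MONTH = 0.023   # assumed object-storage rate, $/GB-month
EGRESS_PER_GB        = 0.09    # assumed charge for data leaving the cloud, $/GB

stored_gb = 10_000             # 10 TB of shared application data in cloud storage
print(f"storage line item: ${stored_gb * STORAGE_PER_GB_MONTH:,.0f}/month")

# Reads by data-center or other-cloud elements cross the cloud boundary and
# get billed as egress; same-cloud reads don't.
for cross_boundary_reads_gb in (1_000, 10_000, 50_000):
    egress = cross_boundary_reads_gb * EGRESS_PER_GB
    print(f"  {cross_boundary_reads_gb:>6,} GB/month read across the boundary "
          f"adds ${egress:,.0f} in transfer charges")
```

Under those assumptions the 10 TB of storage runs a couple of hundred dollars a month, but once the hybrid side of the application reads the data a few times over each month, the transfer charges dwarf the storage bill. That’s exactly the cost Cloudflare says it wants to take off the table.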

That may be the sleeper issue here. The public cloud giants charge for “border crossing” traffic overall, and Cloudflare created the Bandwidth Alliance to mitigate those charges by having cloud providers link their networks directly and reduce the cost of carrying the traffic. To quote Cloudflare, “Our partners have agreed to pass on these cost savings to our joint customers by waiving or reducing data transfer charges.” Smaller cloud providers are limited in their ability to match the geographic and feature scope of the giants, which means they have to focus on a best-of-breed model. However, those border-crossing charges make cross-cloud-boundary transfers more expensive. It’s interesting to note that the number one public cloud provider, Amazon, is the specific target of the savings calculator Cloudflare offers.

Promoting multi-cloud would open the door to “niche” cloud providers that focus on a specific feature set, either competing with the giants or offering something the giants have been unwilling or unable to provide. If we had a means of reducing access charges significantly, we could see an explosion in multi-cloud, and that would almost surely drive an explosion in the number of public cloud providers. It would also benefit players like Cloudflare that offer services which would increasingly commit users to a multi-cloud deployment; Cloudflare’s R2 cloud storage offering is an example, though the service isn’t yet available. All of that would mean a transformation of how multi-cloud is used.

Today, enterprises use multi-cloud primarily for backup, or to run multiple providers in parallel to get geographic coverage a single provider can’t deliver everywhere it’s needed. Best-of-breed multi-cloud would mean that almost every enterprise could use multi-cloud for almost any application, and that could vastly accelerate cloud usage. The downside, of course, is that greater usage would divide revenues across a larger field of competitors, and market leaders like Amazon today, and perhaps Google and Microsoft tomorrow, might see the near-term downside risk as larger than the long-term total-addressable-market (TAM) gain. Today’s money is always more appealing.

Another interesting question that this raises is whether the whole notion of the Bandwidth Alliance wouldn’t convert the cloud from today’s separate players to something more like the Internet, where the players share an underlying network framework. That in turn might blur the boundary between cloud and Internet.

Edge computing could be a factor here, for a number of reasons. The obvious one is that by definition, “the edge” is much more geographically diverse than “the cloud”. In the US, for example, my model says that you could achieve a near-optimum cloud with only 3 major data centers (neglecting availability issues) and that more than a dozen would likely be inefficient. In contrast, you’d need a minimum of 780 edge data centers to field a credible edge offering nationally, and you could justify over 16,000. Globally, edge computing could justify 100,000 data centers, and surely no single player could hope to deploy that much.
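For what it’s worth, here’s a toy version of the kind of geometry that drives edge-site counts. It is not the model behind the figures above; it simply assumes each edge site serves everything within a round-trip latency budget over fiber and divides the country’s area by a site’s coverage circle. Every parameter is an assumption, but the scaling is the lesson: halve the latency budget and you roughly quadruple the number of sites.

```python
import math

# Toy geometry with assumed parameters; not the model behind the figures above.
US_AREA_KM2     = 8.0e6    # rough contiguous-US land area to cover
FIBER_US_PER_KM = 5.0      # assumed one-way fiber propagation delay, microseconds/km
ROUTE_FACTOR    = 1.5      # assumed ratio of fiber-route distance to straight-line

def edge_sites_needed(rtt_budget_ms):
    """Sites needed if each covers a circle whose radius is set by the
    round-trip latency budget (propagation only, no processing delay)."""
    one_way_us   = rtt_budget_ms * 1000 / 2
    radius_km    = one_way_us / (FIBER_US_PER_KM * ROUTE_FACTOR)
    coverage_km2 = math.pi * radius_km ** 2
    return math.ceil(US_AREA_KM2 / coverage_km2)

for budget_ms in (20, 5, 2, 1):
    print(f"{budget_ms:>2} ms round trip -> on the order of "
          f"{edge_sites_needed(budget_ms):,} edge sites")
```

The absolute counts depend entirely on the assumptions, and real deployments track population and available facilities rather than raw area, but the inverse-square relationship between latency budget and site count is why edge counts run to the hundreds, thousands, or tens of thousands depending on the mission, and why nobody deploys all of it alone.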

“Competitive overbuild” doesn’t work at the edge; the cost is too high. We need a “cooperative edge build” instead, and that’s not going to work unless you have “Internet-like” peering among edge (and cloud) providers. Or, of course, public cloud providers could partner with somebody who has edge real estate and create “federations”. That’s what seems to be happening with the cloud-provider-and-telco partnerships on the edge today.

The Internet is a prime example of the value of community over the benefit of exclusivity. The cloud, not so much. The edge could be more Internet-like in terms of value, and that might drive changes in the way that cloud providers and edge providers do peering and charge for data border crossings. If it does, then it will likely benefit us all…except perhaps those with a chance to win in an exclusivity game.

How could a populist edge, promoting a populist cloud, change things? The edge would have to be the driver, because of that clear need for cooperation, but we’re still struggling with how to make the edge, meaning edge computing, a reality. We lack the model to make it work, the “worldwide web” application framework that made the Internet what it is. The question for Cloudflare is whether they understand that, and are prepared to stake a claim in creating that essential edge model. If they are, then they do have a shot at being the fourth public cloud giant, and they even have a shot at moving up in the rankings.