How Could We Accelerate the Pace of New Edge-Deployed Data Centers?

There should be no question that I’m a big fan of edge computing, so I’m happy that Equinix is pushing storage to the edge (according to one story yesterday) and that Vapor IO supports micro-data-centers at the wireless edge.  I just wish there were more focus on the demand side to explain all this interest in the supply side.  There are plenty of developments that might drive a legitimate migration of hosting and storage to the network edge, and I can’t help but feel we’d do a better job with deployment if there were a specific business case behind the talk.

Carrier cloud is the definitive network-distributed IT model, and one of the most significant questions for server vendors with aspirations there is just how the early carrier cloud data centers will be distributed.  A central model of hosting, even a metro-central one, would tend to delay the deployment of new servers.  An edge-centric hosting model would build a lot of places where servers could be added, and encourage operators to quickly support any missions where edge hosting offered a benefit.  So, on balance, where are we with this?

Where you host something is a balance between economy of scale, economy of transmission, and propagation issues.  Everyone knows that a pool of servers offers lower unit cost than single servers do, and most also know that the cost per bit of transmission tends to fall up to a point, then rise again once you pass the limits of economical fiber transport.  Almost everyone knows that traversing a network introduces both latency (delay) and packet loss, and that both grow with the distance involved.  The question is how these factors combine to affect specific features or applications.

Latency is an exercise in physics: the propagation speed of light in fiber or electrons in copper, plus the delay introduced by queuing and handling in network devices.  The only way to reduce it is to shorten the path, which means pushing stuff to the edge.  Arguably, the only reason to edge-host something is latency (though we’ll explore that point later), and most applications run today aren’t enormously latency-sensitive.  Telemetry and control tasks, which involve handling an event and sending a response, are the exception; they are often critically latency-sensitive, particularly in M2M applications.
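
To get a feel for the numbers, here’s a rough sketch.  The propagation speed of light in fiber is physics, but the path distances, hop counts, and per-hop handling delay are illustrative assumptions, not measurements of any real network.

```python
# Rough latency budget: propagation over fiber plus queuing/handling per hop.
# The per-hop delay and the example paths are assumptions for illustration only.

FIBER_KM_PER_MS = 200.0          # light in fiber: ~200,000 km/s, i.e. 200 km per millisecond

def round_trip_ms(path_km: float, hops: int, per_hop_ms: float = 0.5) -> float:
    """Round-trip time: propagation out and back, plus handling delay at each hop."""
    return 2 * (path_km / FIBER_KM_PER_MS + hops * per_hop_ms)

print(f"Edge host,          50 km,  3 hops: {round_trip_ms(50, 3):5.1f} ms")
print(f"Metro host,        500 km,  8 hops: {round_trip_ms(500, 8):5.1f} ms")
print(f"Half a continent, 2500 km, 15 hops: {round_trip_ms(2500, 15):5.1f} ms")
```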

That means that IoT is probably the obvious reason to think about edge-hosting something.  The example of self-driving cars is trite here, but effective.  You can imagine what would happen if a vehicle was controlled by something half-a-continent away.  You can easily get a half-second control loop, which would mean almost fifty feet of travel at highway speed.
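
The arithmetic behind that figure is simple enough to show; the only assumption added here is a 65 mph highway speed, since the article’s half-second control loop supplies the rest.

```python
# Distance a vehicle covers during a half-second control loop at highway speed.
highway_mph = 65                               # assumed highway speed
feet_per_second = highway_mph * 5280 / 3600    # ~95 ft/s
control_loop_s = 0.5                           # the half-second loop from the example
print(f"Travel during the control loop: {feet_per_second * control_loop_s:.0f} feet")
```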

Responses to human queries, particularly from voice-driven personal assistants, are also delay-sensitive.  I saw a test run a couple of years ago that demonstrated that people got frustrated if responses to their queries were delayed more than about two seconds; they repeated the question and created a context disconnect with the system.  Since you have to factor actual “think time” into a response, a short control loop would be helpful here, but you can accommodate a longer delay by having your assistant say “Working on that….”
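
The “Working on that…” trick is easy to sketch in code.  The two-second patience threshold comes from the test described above; the slow back-end is a stand-in for illustration, not any real assistant API.

```python
# Minimal sketch: if the real answer hasn't arrived within the user's patience
# threshold, send an interim acknowledgment so the question isn't repeated.
import asyncio

async def answer_query(query: str) -> str:
    await asyncio.sleep(3.5)                 # stand-in for a slow, deep-hosted back-end
    return f"Answer to: {query}"

async def assistant(query: str, patience_s: float = 2.0) -> None:
    task = asyncio.create_task(answer_query(query))
    try:
        # Wait up to the patience threshold; shield() keeps the back-end task running.
        print(await asyncio.wait_for(asyncio.shield(task), timeout=patience_s))
    except asyncio.TimeoutError:
        print("Working on that...")          # keep the user engaged past the threshold
        print(await task)                    # deliver the real answer when it arrives

asyncio.run(assistant("What's the weather tomorrow?"))
```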

Content delivery in any form is an example of an application where latency per se isn’t a huge issue, but it raises another important point: resource consumption, or “economy of transmission”.  If you were to serve (as a decades-old commercial suggested you could) all the movies ever made from a single point, the problem you’d hit is that multiple simultaneous views of the same movie would quickly explode capacity demands.  You’d also expose the streams to extreme variability in network performance and packet loss, which can destroy QoE.  Caching in content delivery networks is a response to both of these factors, and CDNs are the most common example of “edge hosting” we see today.

Let’s look at the reason we have CDNs to explore the broader question of edge-hosting economies versus more centralized hosting.  Most user video viewing hits a relatively contained universe of titles, for a variety of reasons.  The cost of storing these titles in multiple places close to the network edge, thus reducing network resource consumption and the risk of performance issues, is minimal.  What makes it so is that so many content views hit a small universe of content.  If you imagine for a moment that every user watched their own unique movie, you can see that content caching would quickly become unwieldy.  Unique resources, then, are better candidates for “deep hosting” if all other traffic and scale economies are equal.
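
To make that contrast concrete, here’s a small sketch.  It assumes Zipf-like title popularity, and the catalog and cache sizes are invented; the point is the shape of the result, not the specific numbers.

```python
# Fraction of views an edge cache can absorb when popularity is concentrated
# (Zipf-like) versus the hypothetical "every user watches a unique movie" case.

def zipf_hit_ratio(catalog_size: int, cached_titles: int, skew: float = 1.0) -> float:
    """Share of views served from cache if the most popular titles are cached."""
    weights = [1.0 / (rank ** skew) for rank in range(1, catalog_size + 1)]
    return sum(weights[:cached_titles]) / sum(weights)

catalog, cache = 100_000, 2_000              # cache holds only 2% of the catalog
print(f"Zipf-like viewing:  {zipf_hit_ratio(catalog, cache):.0%} of views served at the edge")
print(f"Unique-movie case:  {cache / catalog:.0%} of views served at the edge")
```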

That brings us to scale.  I’ve mentioned in many blogs that economies of scale don’t follow an exponential or even a linear curve, but an Erlang C curve.  That means that once a data center reaches a certain size, further efficiency gains from additional servers are minimal.  For an average collection of applications I modeled for a client, you reached 95% optimality at about 800 servers, and there are conditions under which fewer than half that number would achieve 90% efficiency.  That means supersized cloud data centers aren’t necessary.  Given that, my models have always said that by around 2023, operators would have reached the point where there was little benefit to augmenting centralized data centers and would move to edge hosting.  The biggest growth in new data centers occurs in the model between 2030 and 2035, when the number literally doubles.  If I were a vendor, I would want to accelerate that shift to the edge.
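
To see why the curve flattens, here’s a quick sketch using a simple M/M/c queuing model and the Erlang C formula.  The 5% queuing target and the pool sizes shown are illustrative assumptions, not the figures from my client model, but the shape of the result is the same: utilization climbs quickly at small scale and then plateaus.

```python
# Achievable server utilization at a fixed queuing target, as pool size grows.

def erlang_c(servers: int, offered_load: float) -> float:
    """Probability a request must queue (Erlang C), via the stable Erlang B recursion."""
    if offered_load >= servers:
        return 1.0
    b = 1.0
    for k in range(1, servers + 1):
        b = offered_load * b / (k + offered_load * b)
    return servers * b / (servers - offered_load * (1.0 - b))

def max_utilization(servers: int, wait_target: float = 0.05) -> float:
    """Highest utilization that keeps the probability of queuing under the target."""
    lo, hi = 0.0, float(servers)
    for _ in range(60):                      # bisection on offered load
        mid = (lo + hi) / 2
        if erlang_c(servers, mid) <= wait_target:
            lo = mid
        else:
            hi = mid
    return lo / servers

for n in (10, 50, 100, 400, 800, 1600):
    print(f"{n:5d} servers -> {max_utilization(n):.1%} achievable utilization")
```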

Centralized hosting is best for unique resources.  Edge hosting is necessary where short control loops are essential to application performance.  If you model the processes, you find that up to about 2020, carrier cloud is driven more by caching and video/advertising considerations than anything else, and that tends to encourage a migration of processing toward the edge.  From 2020 to about 2023, enhanced mobile service features begin to introduce more data center applications that are naturally central or metro-scoped, and beyond 2023 you have things like IoT that magnify the need for the edge again.

Video, then, is seeding the initial edge data center locations for operators.  Metro-focused applications will probably use a mixture of space in these existing edge hosting points and newer, more metro-central resources.  The natural explosion in the number of data centers will occur when the newer short-control-loop stuff emerges, perhaps four to five years from now.  It would be hard to pull something like that forward in time; the market change involved is pretty profound.

Presuming all this is true, the current emphasis on caching of data is smart, and edge hosting of processing features may be premature.  What could accelerate the need for edge hosting?  This is where something like NFV could be a game-changer, providing a mission for edge-centric hosting before broad-market changes in M2M and IoT emerge, and building many more early data centers.  If you were to double the influence of NFV in the period up to 2020, for example, you would add another thousand edge data centers worldwide.

NFV will never drive carrier cloud, but what it could do is promote edge placement of many more data centers between now and 2020, shifting the balance of function distribution in the 2020-2023 period toward the edge simply because the resources are already there.  That could accelerate the growth in the number of hosting points (and slightly increase the number of servers) through 2025, and it would be a big windfall for vendors.

IT vendors looking at the carrier cloud market should examine how this early NFV success could be motivated by specific benefits, and what specific steps in standardization, operationalization, or whatever might be essential in supporting that motivation.  There are few applications that could realistically add as much to the data center count in the next three years.