Injecting Some Reality at the Edge

Because we just came off an edge computing conference, we’ve got a lot of edge computing stories this week.  A big part of this is the usual tech hype cycle, of course.  When sites are ad-sponsored, their owners have an incentive to run stories that get clicked often and so serve ads often.  Stories that build on one another are called “rolling thunder”, and the easiest way to get that is to focus on a single topic and make it exciting.

This doesn’t mean that edge is all B.S. (though it probably means that what we read about it is).  In my operator discussions over the last month, I heard from plenty of operators who thought edge computing was one of their future differentiators, or that it was critical to one or more of their future services.  What I didn’t hear was much specificity.  “Edge?  Oh, yeah, we’re going to do that,” was a typical comment, and hardly a comforting one in its detail.

Edge computing, broadly, means computing placed close to the point where information is created and used.  For users, it means placing computing close to the work, and that has of course had its problems over the years.  We wouldn’t have needed server consolidation had we not had too many servers deployed “at the edge”.  Enterprises quickly realized that utilization of these edge resources was low and support costs were high, and a lot of early cloud interest came about because of this enterprise-created “edge computing”.

Today’s discussions of edge computing are more about edge-as-a-service, meaning edge computing services offered by public providers.  Because network operators would be logical candidates to offer such services, “the edge” has been expanded to cover applications of edge computing to operator missions, to building services, and not just to a form of public computing.  Edge, for most operators, is part of the “carrier cloud”.

Most, but not all.  About three-quarters of operators think that carrier cloud will include, or even be dominated by, edge computing resources.  Since that’s very possibly true, it’s a good sign.  Less encouraging is the fact that slightly less than half of operators understand that edge computing is really cloud computing.  The quarter who don’t see edge and carrier cloud as congruent think of edge computing in the old enterprise sense: move compute close to the work.  They see edge applications as rather static, pinned in place by the link between edge computing and low latency.

Logically, standards groups could be expected to resolve a basic point like the mission of edge computing.  We have an ETSI “edge computing” group, and Light Reading ran an interesting story about some concerns operators have about the group’s activity.  Most of those concerns center on the Multi-access Edge Computing (MEC) work.  I have my own concerns, both about MEC and MEC compliance, and about edge computing overall.  At the least, I don’t think the group is dealing with the question of the basic edge mission, and things go downhill fast when you don’t know why you’re doing something.

My biggest concern is that we have, with edge computing standards, yet another case of developing detailed technical specifications from the bottom up.  How do you write API specs without having functional elements, with specific behaviors, that are accessed via those APIs?  You don’t, but how do you develop a model of functional elements without starting from the missions and constraints of edge computing?  You don’t, but we did just that.

Let’s look at edge logically.  First, it would be lunacy to think that edge computing was anything other than a special case of the cloud, a class of resources and “foundation services” that existed as part of cloud infrastructure and that happened to be located at the edge of the network, which for operators would mean the central offices and perhaps some metro locations.  Second, the applications that drive the edge are a subset of the same applications that would drive carrier cloud.
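To make that “special case of the cloud” point concrete, here’s a minimal sketch, with all names and numbers invented for illustration, that treats edge sites as ordinary cloud hosting zones distinguished only by their latency and cost:

```python
from dataclasses import dataclass

@dataclass
class HostingZone:
    """A pool of cloud resources; 'edge' is just a zone close to the user."""
    name: str
    round_trip_ms: float   # typical RTT from the service's users (illustrative)
    cost_per_hour: float   # relative hosting cost (illustrative)

# Hypothetical zones: a central-office edge site, a metro site, a regional cloud.
ZONES = [
    HostingZone("central-office-edge", round_trip_ms=5.0, cost_per_hour=3.0),
    HostingZone("metro", round_trip_ms=15.0, cost_per_hour=1.5),
    HostingZone("regional-cloud", round_trip_ms=40.0, cost_per_hour=1.0),
]

def place(latency_budget_ms: float) -> HostingZone:
    """Pick the cheapest zone that meets the application's latency budget;
    the edge wins only when the budget demands it."""
    candidates = [z for z in ZONES if z.round_trip_ms <= latency_budget_ms]
    if not candidates:
        raise ValueError("no zone can meet this latency budget")
    return min(candidates, key=lambda z: z.cost_per_hour)

print(place(50.0).name)  # regional-cloud: no reason to pay edge prices
print(place(10.0).name)  # central-office-edge: the budget forces edge hosting
```

The edge, in this view, isn’t a different kind of computing; it’s the zone you pay extra for when the latency budget forces your hand.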

I’ve blogged often and in detail about what those applications are.  I could make assumptions about how the services needed by those applications would look, and from that I could define APIs that would expose features to be composed into those services and support those applications.  That’s not been done.  Instead, what we’ve done is attempt to define generalized functional APIs, which necessarily focus not on what edge computing does but on how edge computing does it.
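To see what “how, not what” looks like to a developer, consider this caricature of a bottom-up, management-oriented API.  The names are mine, invented for illustration; this is not the actual MEC interface:

```python
# A caricature of a bottom-up, management-oriented edge API.  All names are
# invented for illustration; this is not the actual MEC interface.  Every
# call is about HOW a workload is deployed and managed...
class EdgePlatform:
    def onboard_package(self, package_id: str) -> None: ...
    def instantiate(self, package_id: str, site: str) -> str: ...
    def scale(self, instance_id: str, replicas: int) -> None: ...
    def terminate(self, instance_id: str) -> None: ...
# ...and nothing says WHAT the platform does for an application.  A developer
# looking for a feature to build a service on finds no functional foothold.
```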

Some say that the “application” of MEC is actually mobile edge computing (sadly, the same acronym!), but mobile edge isn’t an application at all.  It’s an assumption that because 5G would “reduce latency”, it marries with a compute resource set close to the edge to take advantage of that reduced latency.  But will 5G really reduce latency enough to matter?  You can’t say without postulating a specific application, and we have none really identified.
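A back-of-envelope budget shows why.  All the numbers below are illustrative assumptions of mine, not measurements, but the arithmetic makes the point: with distant hosting, propagation and processing swamp whatever the air interface saves.

```python
# Back-of-envelope round-trip latency budget.  All numbers are illustrative
# assumptions, not measurements.  Light in fiber travels at roughly
# 200,000 km/s, i.e. about 0.005 ms per km each way.

def round_trip_ms(radio_ms: float, distance_km: float, processing_ms: float) -> float:
    # Two traversals of the radio leg and the fiber path, one pass of processing.
    return 2 * (radio_ms + distance_km * 0.005) + processing_ms

print(round_trip_ms(radio_ms=10, distance_km=2000, processing_ms=20))  # 4G, distant cloud: 60.0
print(round_trip_ms(radio_ms=1, distance_km=2000, processing_ms=20))   # 5G, distant cloud: 42.0
print(round_trip_ms(radio_ms=1, distance_km=20, processing_ms=20))     # 5G, edge hosting: 22.2
```

Whether 42 ms or 22 ms “matters” depends entirely on the application you postulate, which is exactly the problem.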

The best edge applications would deal with personalization and contextualization of services, and, in a related sense, with IoT.  If a software architect were to look at those areas, they’d first frame the functionality those areas would require, decompose that into a service model presenting a set of hostable features, and then define APIs for them.  Equipped with that, developers could build the applications.  Without it, developers are faced with a set of APIs that represent a framework for deploying and managing something they can’t identify a business case for.  Sounds a lot like the ETSI NFV ISG repeated, just as I feared in yesterday’s blog.
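Here’s a hedged sketch of what that top-down decomposition might look like for contextualization, with every name hypothetical.  The point is that these APIs express functional features an application can compose, not deployment plumbing:

```python
from typing import Protocol

# A sketch of the top-down alternative, using hypothetical names: frame the
# mission (contextualized, personalized services), express it as hostable
# foundation features, and only then pin down the APIs.

class LocationContext(Protocol):
    def where_is(self, user_id: str) -> tuple[float, float]:
        """Current (latitude, longitude) of a user or device."""

class ActivityContext(Protocol):
    def what_is_happening(self, user_id: str) -> str:
        """Inferred activity, e.g. 'driving', 'at-work', 'shopping'."""

class Personalizer(Protocol):
    def preferences(self, user_id: str) -> dict[str, str]:
        """Per-user service preferences."""

def contextual_offer(user: str, loc: LocationContext,
                     act: ActivityContext, prefs: Personalizer) -> str:
    """An application composes the foundation features into a service;
    the APIs say WHAT the edge does, and hosting details stay hidden."""
    lat, lon = loc.where_is(user)
    if act.what_is_happening(user) == "driving":
        return f"traffic-aware routing near ({lat:.3f}, {lon:.3f})"
    return "offer tuned to " + prefs.preferences(user).get("interest", "general")
```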

Telefonica, according to the Light Reading article, is chiding the vendors who responded to its RFI on edge computing, but what did it expect?  The MEC work offers nothing truly useful, so vendors were confronted with a choice: either do all the heavy lifting of turning low-level, management-oriented APIs into real services (and give all their competitors the benefit of that work), or sell products aimed at the only thing MEC actually offers (the wrong thing), and not be helpful.  Obviously they took the latter option.

This stuff is starting to look like an operator conspiracy to do useless stuff, and a media conspiracy to make it sound important.  That, from someone who hates conspiracy theories.  What’s really going on is something harder to fix: a collision of self-interest and self-delusion.  The media is paid for hype, and operators want to believe they can shove transformation back into virtual versions of the same old boxes and the same old services they grew up on.  Supply-side thinking leads to bottom-up design, and ad sponsorship leads to hype waves.  QED.

Why is this process so broken?  Ultimately, as I suggested above, the operators, as the buyers, need to be responsible for enforcing rationality if they’re going to enforce anything.  Nothing has been done in transformation standards except what operators have pushed for, so every mistake in approach can be traced to their doing the wrong thing.  It would have been better not to have standards at all, to let open-source strategies develop and then select from them.  But since those strategies have already developed in the cloud community, the best course now is simply to adopt what’s working.

Just because this problem isn’t caused by vendors (directly, at least; they certainly do contribute to the hype waves!) doesn’t mean vendors couldn’t or shouldn’t fix it.  We already see some vendors positioning ecosystemic solutions for cloud-native development, and VMware is positioning itself directly to network operators.  I’d like to see more of that, more vendors assembling the tools needed.  I’d also like to see all the vendors do more to elevate their features and APIs, looking upward to the applications that will have to be there, earning revenue, or nothing will happen…ever.