Ah, telco cloud! The initiative that many (myself included) had hoped would revolutionize the cloud, the edge, and telcos all at the same time. Well, it’s been a dud. As a poet once said, “The tusks that clashed in mighty brawls of mastodons…are billiard balls.” Telcos seem to have ceded everything to the public cloud giants, but now there’s an initiative, called “Sylva,” that’s hoping to create an open-source telco cloud stack. Will it work?
If history means anything, Sylva faces a long uphill slog. While there have been impactful telco initiatives in areas like Open RAN, they’ve been focused on a very limited target. Even then, the pace of progress has been slow enough to limit the extent to which the initiatives could influence the market. Sylva has two goals, according to the project FAQs: release a software framework (in my terms, an architecture model) that would “identify and prioritize telco and edge requirements”, and develop a reference implementation and an integration/validation framework. That is a very long way from a limited target.
The white paper (available in the GitHub link I provide above) aligns Sylva explicitly with two things. First, a need to move computing to the edge, meaning within a few kilometers of the user. Second, a need to map 5G requirements to a cloud model in a way that ensures telcos’ special requirements are met. The white paper also favors containers and Kubernetes as the technical foundation for the software to be run, which of course means that the platform software that makes up Sylva would have to include both. It also favors the ever-popular “cloud native” model, which has the combined advantage/disadvantage of having no precise and fully accepted definition.
The good news about Sylva, from the white paper, is that it explicitly aligns the project with the evolution of edge computing and not just with things like 5G. That means that Sylva could form the foundation of a telco move into edge computing in some form, either via direct retail service offerings or through what AT&T has described as facilitating services, to be used by others in framing those service offerings.
Another good-news element from the paper is the explicit recognition of public cloud services as an element of telco cloud, but not the entirety of it. The paper properly identifies the basic model for this: a “convergence layer” that presents APIs upward to applications, then maps those APIs to whatever hosting is available below, whether deployed by the telcos or by third parties, including public cloud providers. Something like this was proposed with the Apache Mesos/Marathon and DC/OS approach to cloud computing, since updated to support containers.
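To make the convergence-layer idea concrete, here is a minimal sketch in Python. The names (Workload, HostingProvider, ConvergenceLayer, EdgeSite) and the latency-based placement rule are my own illustrative assumptions, not anything defined by Sylva; the point is simply that applications call one upward-facing API and never see which hosting option is chosen below.

```python
# A minimal sketch, assuming hypothetical names; the latency-based
# placement rule is illustrative, not taken from the Sylva white paper.
from dataclasses import dataclass
from typing import Protocol


@dataclass
class Workload:
    name: str
    image: str            # container image, per Sylva's container preference
    max_latency_ms: int   # placement hint: how close to the user it must run


class HostingProvider(Protocol):
    """Downward-facing side of the layer: a telco edge site, a regional
    data center, or a public cloud region would each implement this."""
    name: str

    def can_meet_latency(self, ms: int) -> bool: ...
    def deploy(self, workload: Workload) -> None: ...


class ConvergenceLayer:
    """Upward-facing API: applications call deploy() and never see
    which provider actually hosts the workload."""

    def __init__(self, providers: list[HostingProvider]) -> None:
        self.providers = providers

    def deploy(self, workload: Workload) -> None:
        for p in self.providers:
            if p.can_meet_latency(workload.max_latency_ms):
                print(f"placing {workload.name} on {p.name}")
                p.deploy(workload)
                return
        raise RuntimeError(
            f"no hosting option meets {workload.max_latency_ms} ms for {workload.name}"
        )


@dataclass
class EdgeSite:
    """One concrete (hypothetical) provider: a metro edge or cloud region."""
    name: str
    latency_ms: int

    def can_meet_latency(self, ms: int) -> bool:
        return self.latency_ms <= ms

    def deploy(self, workload: Workload) -> None:
        pass  # placeholder: hand off to the site's own Kubernetes cluster


layer = ConvergenceLayer([EdgeSite("metro-edge-1", 8), EdgeSite("public-cloud-east", 40)])
layer.deploy(Workload(name="latency-sensitive-app", image="example/app:1.0", max_latency_ms=10))
```

In this sketch the application’s deployment request lands on the low-latency edge site because only that site meets the placement hint; swap in a less demanding hint and the same request could land on a public cloud region instead.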
I’m bringing up Mesos/Marathon here for a reason. Sylva mandates containers and containers-as-a-service (CaaS), which is a concept that’s also found in the current iteration of NFV. Mesos/Marathon is a broader approach to orchestration, one that works with containers but also with other hosting models. One could argue that, to the extent that other models might be required, something broader than Kubernetes and CaaS alone might be a more realistic goal for Sylva. Part of the reason for my concern goes back to the ambiguity inherent in the term “cloud native”.
In the NFV community there’s been a tendency to use the terms “cloud native” and “containerized” as though they were equivalent, which they obviously are not. Containers are a hosting model, a platform architecture. “Cloud native,” if it means anything cohesive, is an application model designed to maximize the benefit of the cloud. Conceptually, containers predate the cloud, and there are models of cloud hosting (functional computing, for example) that are not container models.
Does this mean I’m promoting Mesos/Marathon instead of Kubernetes? No, it does not. The plus for Kubernetes and containers is that modern software development and deployment practices are decisively heading in that direction. One of my constant criticisms of telco initiatives in software and the cloud has been that they’ve tended to ignore state-of-the-art cloud-think, which in part means “edge-think”. Remember, the real goal of Sylva is to support edge computing. I’m saying that we need to think about the relationship between edge applications and the development/deployment model, to ensure that we don’t push a strategy that fails to support the kind of edge applications likely to drive “telco cloud” in the first place.
I think it’s fair to say that edge computing is about latency control. The benefit of hosting close to the user is that latency is reduced, and latency matters primarily in applications that are event-driven. Generally, event-driven applications divide into two pieces: a real-time piece that’s highly latency-sensitive because it has to synchronize with the real world, and an optional, more transactional piece that takes a real-world condition (often made up of multiple events) and generates a “transaction” with more traditional latency sensitivity, decoupled from the first piece. I’ve written a lot of event handlers, and the primary requirement is that the overall processing, what’s usually called the “control loop,” stays short in terms of accumulated latency.
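As an illustration of that two-piece structure, here’s a minimal Python sketch. The event fields, the queue hand-off, and the placeholder functions are my own assumptions, not drawn from Sylva; the point is that the real-time handler does only what the control loop demands and immediately hands the event off to a decoupled, more transaction-like worker.

```python
# Minimal sketch of the two-piece event pattern; names and values are illustrative.
import queue
import threading
import time

transactions: queue.Queue = queue.Queue()


def actuate(actuator: str) -> None:
    pass  # placeholder for the real-world reaction (e.g. close a valve)


def record_transaction(event: dict) -> None:
    pass  # placeholder for the back-end update (e.g. a database write)


def real_time_handler(event: dict) -> None:
    """The latency-sensitive piece; its accumulated work is the control loop."""
    start = time.monotonic()
    if event["sensor_value"] > event["threshold"]:  # react to the real world
        actuate(event["actuator"])
    transactions.put(event)                         # hand off; don't wait
    print(f"control loop took {(time.monotonic() - start) * 1000:.2f} ms")


def transactional_worker() -> None:
    """The decoupled piece, with traditional latency tolerance."""
    while True:
        record_transaction(transactions.get())


threading.Thread(target=transactional_worker, daemon=True).start()
real_time_handler({"sensor_value": 7, "threshold": 5, "actuator": "valve-1"})
```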
Functional computing, which means software whose outputs are based only on the current inputs, is a development model that encourages low latency by eliminating references to stored data. Functional computing also promotes scalability and resiliency because any instance of a software component, if presented with an event message, can run against it and generate the same result. So we could fairly say that functional computing is a reasonable development model for the edge.
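A trivial Python example of what “functional” means here (my own illustration, not Sylva’s): the result depends only on the inputs, so any replica of the function, running anywhere, produces the same answer for the same event.

```python
# Pure function: no stored state is read or written, so any instance of it
# can handle any event and the results always agree. Illustrative only.
def classify_event(sensor_value: float, threshold: float) -> str:
    return "alarm" if sensor_value > threshold else "normal"


# These calls could run on different replicas, in any order, with the same outcome.
assert classify_event(7.0, 5.0) == "alarm"
assert classify_event(3.0, 5.0) == "normal"
```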
How about a deployment model? In public cloud computing, functional computing has been conflated with serverless deployment, where a component instance is loaded on demand and run. That approach is fine for events that don’t happen often, but where the same event happens regularly, you reach a point where the time required to load the components and run them is excessive. In that situation you’d be better off keeping the software function resident. That doesn’t mean it couldn’t scale under load, but that it wouldn’t have to be loaded every time it’s used. Kubernetes and containers will support both models (with an add-on, such as Knative, for serverless operation), so we can fairly say that mandating Kubernetes in Sylva doesn’t interfere with functional computing requirements.
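A back-of-the-envelope Python sketch of that tradeoff, using cold-start and run times that are illustrative assumptions rather than measurements of any platform: at low event rates the per-invocation load time is tolerable, but as the same event recurs more often, load time comes to dominate and a resident instance wins.

```python
# Illustrative numbers only: assume 250 ms to load a function on demand
# and 5 ms to actually run it once it is resident in memory.
COLD_START_MS = 250.0
RUN_MS = 5.0


def work_ms_per_second(events_per_sec: float, resident: bool) -> float:
    """Milliseconds of processing needed per second of traffic;
    anything over 1000 ms/s means one instance can no longer keep up."""
    per_event = RUN_MS if resident else RUN_MS + COLD_START_MS
    return events_per_sec * per_event


for rate in (0.1, 1.0, 100.0):
    on_demand = work_ms_per_second(rate, resident=False)
    kept_resident = work_ms_per_second(rate, resident=True)
    print(f"{rate:>6} events/s: load-on-demand {on_demand:>8.1f} ms/s, "
          f"resident {kept_resident:>7.1f} ms/s")
```

At a tenth of an event per second the load-on-demand penalty is negligible; at a hundred events per second the on-demand version needs roughly fifty times the processing of the resident one, which is the point of keeping frequently used functions loaded.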
Kubernetes does allow for deployment in VMs, in the cloud, and on bare metal (some of these requiring extensions), so I think the Sylva approach to deployment covers the essential bases. There may be current event-driven applications that resist container/Kubernetes deployment, but few believe these would be candidates for telco cloud edge hosting, and in any event it’s difficult to point to any significant number of examples.
What this means is that the framework that Sylva articulates is suitable for the mission, which is good news because a problem at that high level would be very difficult to resolve. There are still some lower-level questions, though.
The first question is timing. The first release of the framework is expected in mid-2023, which is just about six months out. That schedule is incredibly optimistic, IMHO, given past experience with telco-centric initiatives. However, failure to meet it would mean that some of the target missions for Sylva might have to advance without it, and that could reduce the benefit of Sylva. Slip far enough, and the market will either have moved beyond the issue or decided it was irrelevant, in which case Sylva would be too.
The second question is content. When NFV launched in 2013, the goal was to identify the standards and components needed, not define new ones. Yet NFV ended up depending on a number of elements that were entirely new. Can Sylva avoid that fate? If not, then it disconnects itself from cloud evolution, and that almost assures it will not be relevant in the real world.
The third question is sponsorship. There are, so far, a number of EU operators and two mobile infrastructure vendors (Ericsson and Nokia) involved in Sylva. None of these organizations are giants in cloud and edge thinking, or masters of the specifics of the container and Kubernetes world. Edge computing needs players like Red Hat and VMware for platform software and Cisco and Juniper for network hardware, because edge computing is realistically a function of metro deployment. I’ll be talking more about Cisco and Juniper on Monday, in my blog and in the Things Past and Things to Come podcast on TMT Advisor. Overall, edge computing is a melding of data center and network. Will other players with essential expertise step in here? We don’t know yet, and I think that without additional participation, Sylva has little chance of making a difference to operators.