If you ever wondered why there were so many songs about rainbows, you might be among those who wonder why Tom Nolle talks so much about architectures. Somebody actually raised that point with me in a joking way, but it occurred to me that not everyone shares my own background and biases, the factors that make me think so much in architecture terms. What I propose to do is explain why I believe that architectures are so critical in the evolution of networking and computing, and why we don’t have them when we need them.
In the early days of software development, applications were both monolithic and small, constrained by the fact that the biggest computers a company would have had in the early 1960s had less memory than a smartwatch has today. You could afford to think in terms of baby steps of processing, of chains of applications linked by databases. As we moved into the late ‘60s and got real-time or “interactive” systems and computers with more capacity (remember, IBM’s System/360 mainframes first shipped in 1965), we had to think about multi-component applications, and that was arguably the dawn of modern “architecture” thinking.
An architecture defines three sets of relationships. The first is the way that application functionality is divided, which sets the “style” of the application. The second is how the application’s components utilize both compute and operating-system-and-middleware resources, and the third is how the resources relate to each other and are coordinated and managed. These three are obviously interdependent, and a good architect will know where and how to start, but will likely do some iterating as they explore each of the dimensions.
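To make those three relationship sets concrete, here’s a minimal sketch in Go; every type and field name below is my own illustration, not drawn from any standard, but it shows the kind of thing an architecture has to pin down in all three dimensions.

```go
// Illustrative only: the three relationship sets an architecture defines,
// expressed as data. Every name here is hypothetical.
package architecture

// 1. Functional division: how application functionality is split up.
type Component struct {
	Name     string
	Provides []string // interfaces this component exposes
	Consumes []string // interfaces it depends on
}

// 2. Component-to-resource: what each component runs on.
type Placement struct {
	Component  string
	Platform   string   // e.g. "container", "VM", "bare metal"
	Middleware []string // OS and middleware dependencies
}

// 3. Resource-to-resource: how resources relate and who manages them.
type ResourceLink struct {
	From, To  string
	ManagedBy string // the coordinating or management authority
}

// An architecture is only complete when all three sets are specified.
type Architecture struct {
	Components []Component
	Placements []Placement
	Links      []ResourceLink
}
```

The iteration I mentioned shows up naturally here: a change in the functional division ripples into the placements, and a change in placements ripples into the resource links.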
The role of architecture emerged very early in the evolution of software practices, back in the days when much of application development was done in assembly language, so the language itself imposed no structure of functions, procedures, or “blocks” of code. Everyone quickly realized that writing an application as one big chunk of code that wandered here and there, testing variables and making decisions (derisively called “spaghetti code” in those days), created something almost unmaintainable. Early programming classes often taught methods of structuring code for efficient development and maintainability.
Another reason we started having architectures and architects is that it makes sense to build big, complex systems using a team approach. Such systems are essentially the integration of related elements that serve a common goal, what I tend to call an “ecosystem”. The challenge, of course, is to get each independent element to play its role, and that starts by assigning roles and ensuring each element conforms. That’s what an architecture does.
The 3GPP specifications start as architectures. They take a functional requirement set, like device connectivity and mobility, and divide it into pieces—registration of devices, mobility management, and packet steering to moving elements. They define how the pieces relate to each other—the interfaces. In the 3GPP case, they largely ignore the platforms because they assume the mobile ecosystem is made up of autonomous boxes whose interfaces define both their relationships and their functionality. It doesn’t really matter how they’re implemented.
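As an illustration of that box-and-interface style (these are not the actual 3GPP reference points, just a hedged sketch of the kind of decomposition the specs perform), the functional pieces could be written as interfaces like this:

```go
// Hypothetical sketch of a 3GPP-style functional split. The interface
// names and methods are illustrative, not real 3GPP definitions.
package mobile

type DeviceID string
type CellID string

// Registration of devices.
type Registrar interface {
	Register(dev DeviceID) error
	Deregister(dev DeviceID) error
}

// Mobility management: tracking where a device currently is.
type MobilityManager interface {
	Locate(dev DeviceID) (CellID, error)
	Handover(dev DeviceID, to CellID) error
}

// Packet steering to moving elements.
type PacketSteering interface {
	Steer(dev DeviceID, via CellID) error
}
```

The point is that any box (or software component) that honors these interfaces can fill the role; the architecture doesn’t care how it’s implemented.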
Applications also start as architectures these days, but an application architecture has to start with a processing model. Some applications are “batch”, meaning they process data that’s stored in a repository. Others are “transactional”, meaning that they process things that follow an input-process-update-result flow. Still others are “event-driven” meaning that they process individual signals of real-world conditions. Because applications are software and utilize APIs, and because the hosting, operating system, and middleware choices are best standardized for efficiency, the resource-relationship role of an architect is critical for applications—and for anything that’s software-centric.
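A minimal Go sketch of the three processing models (the types and payloads are hypothetical, just enough to show the shapes) makes clear how different the starting points really are:

```go
// Hypothetical sketch of the three application processing models.
package models

import "fmt"

type Record string // data already sitting in a repository
type Event string  // a signal of a real-world condition

// Batch: process data that's stored in a repository, end to end.
func RunBatch(records []Record) {
	for _, r := range records {
		fmt.Println("processed:", r)
	}
}

// Transactional: an input-process-update-result flow.
func HandleTransaction(input string, store map[string]string) string {
	processed := "result-of-" + input // process
	store[input] = processed          // update
	return processed                  // result
}

// Event-driven: react to individual signals as they arrive.
func HandleEvents(events <-chan Event) {
	for ev := range events {
		fmt.Println("reacting to:", ev)
	}
}
```

Pick the wrong model at the outset and no amount of downstream cleverness fixes it, which is why the architect has to begin here.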
Suppose we were to give four development teams one step each in our input-process-update-result flow and let them do their thing optimally, based on each step’s individual requirements. We might end up with a super-great GUI that couldn’t pass data or receive it. That’s why architectures are essential; they create a functional collective from a bunch of individual things, and they do it by creating a model into which the individual things must fit, thereby ensuring each knows how it’s supposed to relate to the others.
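Here’s a hedged sketch of what that shared model might look like in code: a common data contract and a role definition that all four teams must build against. The names are mine, purely for illustration.

```go
// Hypothetical sketch: the architecture's "model" expressed as a shared
// contract that every team's stage must implement.
package pipeline

// Message is the common data contract; every stage consumes and produces
// exactly this shape, so no stage can be "super-great" yet unable to
// pass its results along.
type Message struct {
	Payload map[string]string
}

// Stage is the role each team's component must conform to.
type Stage interface {
	Run(in Message) (Message, error)
}

// Compose chains the independently built stages into one flow.
func Compose(stages ...Stage) Stage {
	return stageFunc(func(in Message) (Message, error) {
		msg := in
		for _, s := range stages {
			var err error
			if msg, err = s.Run(msg); err != nil {
				return Message{}, err
			}
		}
		return msg, nil
	})
}

type stageFunc func(Message) (Message, error)

func (f stageFunc) Run(in Message) (Message, error) { return f(in) }
```

Each team still optimizes its own stage internally; the contract simply guarantees that the stages compose.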
You can see, from this, two challenges in architecture that have contaminated our network transformation goals. The first is that network people are, by their history, box people. They think in terms of functional distribution done by boxing and connecting. When you apply that kind of thinking to software-and-cloud network infrastructure, you create “soft-box networking”, which doesn’t optimize the software transformation because it’s constrained. The second is that if the ecosystem you’re trying to create is really large, and if it’s divided up into autonomous projects, there’s usually no overarching architecture picture at all.
NFV suffered from both these problems. The NFV “end-to-end architecture” was a box architecture applied to a software mission. The architecture guides the implementation, and in the case of NFV what some said was supposed to be only a “functional diagram” became an application blueprint. Then the NFV ISG declared the surrounding pieces of telco networking, like operations, to be out of scope. That meant that the new implementation was encouraged to simply connect to the rest of the ecosystem in the same way as earlier networks did, which meant the new stuff had to look and behave like the old—no good path to transformation comes from that approach.
Anyone who follows NFV knows about two problems now being cited—onboarding and resource requirements for function hosting. The NFV community is trying to make it easier to convert “physical network functions” or appliance code into “virtual network functions”, but the reason it’s hard is that the NFV specs didn’t define an architecture whose goals included making it easy. The community is also struggling with the different resource requirements of VNFs because there was never an architecture that defined a hardware-abstraction layer for VNF hosting.
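To see what was missing, here’s a sketch of what such a hardware-abstraction layer might have looked like. Nothing like this appears in the NFV specs, so every name below is hypothetical: a VNF would declare its needs against an abstract hosting profile, and the layer, not the VNF, would worry about real servers.

```go
// Purely illustrative: a hardware-abstraction layer the NFV specs never
// defined. A VNF states abstract requirements; Place maps them to a host.
package nfvhal

import "errors"

// HostingProfile is the abstraction a VNF is written against.
type HostingProfile struct {
	VCPUs    int
	MemoryMB int
	Features []string // abstract capability names, e.g. "fast-datapath"
}

// Host is a piece of real infrastructure behind the abstraction.
type Host struct {
	Name    string
	Profile HostingProfile // what this host actually offers
}

// Place finds any real host satisfying the abstract request; the VNF
// never needs to know (or care) which one it gets.
func Place(req HostingProfile, hosts []Host) (string, error) {
	for _, h := range hosts {
		if h.Profile.VCPUs >= req.VCPUs &&
			h.Profile.MemoryMB >= req.MemoryMB &&
			hasAll(h.Profile.Features, req.Features) {
			return h.Name, nil
		}
	}
	return "", errors.New("no host satisfies the requested profile")
}

func hasAll(have, want []string) bool {
	offered := make(map[string]bool, len(have))
	for _, f := range have {
		offered[f] = true
	}
	for _, f := range want {
		if !offered[f] {
			return false
		}
	}
	return true
}
```

With a layer like this in place, onboarding becomes a matter of conforming to a profile rather than porting to each operator’s particular hardware.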
Even open-source projects in middleware and cloud computing can suffer from these problems. Vendors like Red Hat struggle to create stable platform software for user deployment because some middleware tools require specific versions of other tools, and there’s often no common ground easily achieved when there are a lot of tools to integrate. We also routinely have multiple implementations of the same feature set, like service mesh, that are largely or totally incompatible because there’s no higher-level architecture to define integration details.
What happens often in open-source to minimize this problem is that an architecture-by-consensus emerges. Linux, containers, Kubernetes, and serverless evolved to be symbiotic, and future developments are only going to expand and enhance the model. This takes time, though, and for the network transformation mission we always talk about, time has largely run out. We have to do something to get things moving, and ensure they don’t run off in a hundred directions.
Networks are large, cooperative systems, and because of that they need an architecture to define component roles and relationships. Networks built from software elements also need the component-to-resource and resource-to-resource relationships defined. One solid and evolving way of reducing the issues in those latter two areas is the notion of an “abstraction layer”, a definition of an abstract resource that everything consumes and that is then mapped to real resources in real infrastructure. We should demand that every implementation of a modern software-based network contain this (and we don’t).
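Seen from the consuming side, the abstraction layer is just a single resource definition with multiple mappings behind it. A minimal sketch, with all names illustrative, might look like this:

```go
// Illustrative sketch of the abstraction-layer idea: everything above
// the line consumes AbstractHost; everything below maps it to reality.
package abstraction

import "fmt"

// AbstractHost is the single resource definition that all network
// functions consume; they never see what is actually underneath.
type AbstractHost interface {
	Deploy(image string) (id string, err error)
}

// Two illustrative mappings to "real" infrastructure. In practice each
// would drive a real orchestrator; here they are stubs.
type ContainerHost struct{}

func (ContainerHost) Deploy(image string) (string, error) {
	return "container:" + image, nil
}

type VMHost struct{}

func (VMHost) Deploy(image string) (string, error) {
	return "vm:" + image, nil
}

// Launch works with any mapping; a network function written against the
// abstraction runs unchanged whichever one the operator chooses.
func Launch(h AbstractHost, image string) {
	id, err := h.Deploy(image)
	if err != nil {
		fmt.Println("deploy failed:", err)
		return
	}
	fmt.Println("running as", id)
}
```

That interchangeability, the same function running unchanged over containers, VMs, or whatever comes next, is exactly the property we should be demanding.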
But who produces the architecture? That’s the challenge we’ve seen demonstrated in almost every networking project involving service provider infrastructure for as long as I’ve been involved in the space (which goes back to international packet networks in the ‘70s). Network people do boxes and interfaces, and since it’s network people who define the functional goals of a network project, their bias then contaminates the software design. Open-source is great at filling in an architecture, but not so great at defining one, since there are so many projects contending for the mission.
This is why we need to make some assumptions if we’re ever to get our transforming architecture right. The assumptions should be that all network functions are hosted in containers, that cloud software techniques will be adopted, at least at the architecture level, and that the architecture of the cloud, not the previous architecture of the network, will be considered the baseline for the architecture of the network of the future. Running in place is no longer an option if we want the network of the future to actually network us in the future.
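What would honoring those assumptions look like at the ground level? As one hedged illustration (the endpoint names and port are mine, not from any standard), a network function built cloud-style is just a stateless, health-checked service that an orchestrator like Kubernetes can place in a container anywhere:

```go
// Illustrative only: a network function packaged as a cloud-native
// service. The orchestrator, not a box, decides where it runs and
// whether it's healthy.
package main

import (
	"fmt"
	"log"
	"net/http"
)

func main() {
	// Liveness probe endpoint, the hook a cloud orchestrator uses to
	// manage this function like any other containerized workload.
	http.HandleFunc("/healthz", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "ok")
	})

	// The network function itself, exposed as an ordinary service API.
	http.HandleFunc("/forward", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "packet-handling logic would go here")
	})

	log.Fatal(http.ListenAndServe(":8080", nil))
}
```

Nothing in that sketch looks like a box, and that’s the point: the baseline is the cloud’s architecture, not the network’s old one.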
There are a lot of songs about rainbows because they’re viscerally inspiring. I’m always singing about architectures because, in a software-driven age, we literally cannot move forward without them. Large, complex systems never accidentally converge on compatible thinking and implementation; something has to force that convergence. That’s what architectures do, and that’s why they’re important, even in networks.