As a former software engineer, architect, and head of some high-tech programming groups, I love APIs. I also have to admit that APIs are getting to be a lot more complicated than they used to be, and things like edge computing, event processing, and microservices are promising to make them even more complex over time. For those who think they’re just starting to get their head around the API concept, this isn’t exactly good news. It’s also bad that APIs are becoming a kind of left-handed proof of progress: I’m great; I have an API! There’s a lot more to it than that, which is why many of the APIs we’ve seen devised have failed in their mission.
The term “API” stands for “application programming interface,” which isn’t really what APIs are turning into. The original API concept was pulled from operating systems and middleware that exposed certain features to application developers through (you guessed it!) an API. Today, we use the term to describe any interface between software components. APIs are the software equivalent of hardware interfaces like your Ethernet connector.
There are two things an API represents: a function and an application model. The function is the thing an API does, the relationship between its inputs and outputs. The application model is the way the API provides for component integration. Two components are running merrily along, coupled by an API. The way the API works will constrain how the components can work, or they’d be unable to use the API to communicate.
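To make that distinction concrete, here’s a minimal Python sketch (all of the names are invented for illustration, not taken from any real product): the method signature is the function, and the blocking call-and-return pattern it forces on both components is the application model.

```python
from dataclasses import dataclass

@dataclass
class ServiceOrder:
    service_id: str
    bandwidth_mbps: int

@dataclass
class OrderResult:
    accepted: bool
    reason: str = ""

class ProvisioningAPI:
    """The 'function' is the mapping from ServiceOrder to OrderResult.
    The 'application model' is what the signature imposes: the caller
    blocks and waits for a return, so both components are coupled into
    a synchronous call relationship whether they like it or not."""

    def place_order(self, order: ServiceOrder) -> OrderResult:
        # Toy logic, just to make the function concrete.
        if order.bandwidth_mbps <= 1000:
            return OrderResult(accepted=True)
        return OrderResult(accepted=False, reason="requested capacity too large")

# Any component using this API inherits its model: it must stop and wait.
api = ProvisioningAPI()
print(api.place_order(ServiceOrder("svc-001", 500)))
```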
Unless the function of an API has utility, the API is worthless. A decade ago, we saw all manner of initiatives by network operators to “expose their features” via API. Most of those features related to voice calling, which operators were then pricing based on unlimited usage. How valuable can something be if you are giving it away, and how successful will you be in attracting third-party developers to exploit your “features”? Even today, operators talk about APIs as though they were somehow the gateway to new revenues and new services. They typically expect third-party developers to take the risks, and that’s a bit naïve when you consider that the operators themselves would be the best people to be working out new service features.
Most APIs today are aimed not so much at third-party developers as at integration of applications into more complex ecosystems. You can see examples of these APIs in the work done by the ETSI NFV ISG, where APIs provide the connection between the functional components of the architecture, like VNF Managers (VNFMs), Management and Orchestration (MANO), or the Virtualised Infrastructure Manager (VIM). These same ETSI definitions illustrate the other problem with APIs, which in the current age may be the most serious. APIs codify someone’s visualization of an application model, and it may not be the best model, or even a valid one.
If you have an API between, say, MANO and a VNFM, then that API defines their relationship. If one side sends something and expects a reply through the same API as a “return,” that’s a call relationship. That kind of thing is common in legacy applications, but it doesn’t fit nearly as well with more modern applications designed to handle events.
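The difference shows up clearly in code. A minimal sketch, with invented names (nothing here comes from the ETSI specs): the first interface is a call relationship (send, then block for the “return”); the second just posts an event and moves on, leaving the receiver to respond whenever its state allows.

```python
import queue

# Style 1: a call relationship. The caller can't proceed until the
# callee answers, so the two components march in lockstep.
class VnfmCallAPI:
    def scale_out(self, vnf_id: str) -> str:
        # ... synchronous work happens here ...
        return f"{vnf_id}: scaled"

# Style 2: an event relationship. The caller deposits an event and
# continues; any reply arrives later as another event.
event_queue: "queue.Queue[dict]" = queue.Queue()

def post_event(event_type: str, target: str) -> None:
    event_queue.put({"type": event_type, "target": target})

# Call style: blocked until the answer comes back.
print(VnfmCallAPI().scale_out("vnf-42"))

# Event style: fire and move on.
post_event("scale-out-requested", "vnf-42")
print(event_queue.get())
```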
Most network applications, including service lifecycle management, are logically event-driven, meaning that they are designed to process a bunch of asynchronous conditions that pop up here and there, both from planned sources like moves, adds, and changes to services, and from unplanned sources like faults. As these events come in, they have to be handled in order, and they will kick off processes in response that may themselves generate events. The in-order part is a matter of queues and “serialization,” but contextualizing events to what the management system is trying to do overall has to be done by recognizing state. A state is a particular context of operation. Normally there’s an “Active” state that’s the goal of a system, and there are various states representing the progression of steps toward that goal, or the restoration of that goal if something breaks. The combination of events and states has long been recognized as the key to handling asynchronous stimuli in any application, which essentially means it’s key everywhere in networking.
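Here’s a minimal sketch of those two pieces, with invented names: a queue supplies the “in order” part, and a per-service state record supplies the context each event is interpreted against.

```python
import queue

# Each managed thing (a service, here) carries its own state: the
# context against which incoming events are interpreted.
service_state = {"svc-001": "orderable"}

events: "queue.Queue[tuple]" = queue.Queue()  # arrival order = handling order

def handle(service_id: str, event: str) -> None:
    state = service_state[service_id]
    # The same event means different things in different states.
    if state == "orderable" and event == "order":
        service_state[service_id] = "activating"
    elif state == "activating" and event == "activated":
        service_state[service_id] = "active"
    elif state == "active" and event == "fault":
        service_state[service_id] = "restoring"
    else:
        print(f"{service_id}: ignoring '{event}' in state '{state}'")

# Serialization: drain the queue one event at a time, in order.
for ev in [("svc-001", "order"), ("svc-001", "activated"), ("svc-001", "fault")]:
    events.put(ev)
while not events.empty():
    handle(*events.get())
print(service_state)  # {'svc-001': 'restoring'}
```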
Modern event-driven systems are what software people call “finite state machines,” meaning systems that respond to events by doing things and changing their context of event interpretation as they do them. A simple example: a service starts in the “orderable” state and, responding to the “order” event, transitions to the “active” state. An event has an association with something, like a connection or service or device. There’s a data model for each thing that’s being event-driven, and that model not only records the current state, it defines, for each possible state and event, what’s to be done at the intersection of the two.
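That intersection is naturally a table. A sketch under my own naming assumptions: the data model for each service records its current state, and a (state, event) table defines the process to run and the next state to enter.

```python
# Hypothetical state/event table: (state, event) -> (action, next_state).
# In a real system this would live in the data model of each modeled
# element, alongside its recorded current state.

def start_activation(svc): print(f"{svc}: starting activation")
def confirm_active(svc):   print(f"{svc}: now active")
def start_repair(svc):     print(f"{svc}: fault, starting restoration")

TRANSITIONS = {
    ("orderable", "order"):      (start_activation, "activating"),
    ("activating", "activated"): (confirm_active,   "active"),
    ("active", "fault"):         (start_repair,     "restoring"),
    ("restoring", "repaired"):   (confirm_active,   "active"),
}

class ServiceModel:
    """Data model for one event-driven thing: its state plus the table."""
    def __init__(self, name: str):
        self.name = name
        self.state = "orderable"

    def on_event(self, event: str) -> None:
        action, next_state = TRANSITIONS.get(
            (self.state, event), (None, self.state))
        if action:
            action(self.name)
        self.state = next_state

svc = ServiceModel("svc-001")
for ev in ["order", "activated", "fault", "repaired"]:
    svc.on_event(ev)
print(svc.state)  # active
```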
If you want to expose the features of an event-driven system, how do you do it? Answer: by generating an event, which means that you’d probably need only a single API, called perhaps “Inject-Event.” Why all the API discussion, then? Most of the APIs people talk about are really not related directly to network software; they’re related to management systems. You don’t build networks on management systems, you manage networks with them…via humans.
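Taken literally, the entire external surface of such a system could be a single entry point. A sketch of what that one API might look like (the name “Inject-Event” is the hypothetical above; the details are mine):

```python
import queue

# The one and only externally exposed API of an event-driven system:
# everything outside injects events; everything else is internal.
_event_queue: "queue.Queue[dict]" = queue.Queue()

def inject_event(target: str, event_type: str, **data) -> None:
    """The single 'Inject-Event' API: validate and enqueue, nothing more."""
    _event_queue.put({"target": target, "type": event_type, "data": data})

# Ordering a service and reporting a fault are both just events.
inject_event("svc-001", "order", bandwidth_mbps=100)
inject_event("svc-001", "fault", cause="link-down")

while not _event_queue.empty():
    print(_event_queue.get())
```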
Since most management systems are designed to (at some point) present something to a human (a customer service rep, a network operations engineer), these systems have been “humanized,” meaning that a specific window has been created into the network’s operation. At some point, at least some of what can be seen and done through that window may have to be turned into events (if the network is itself event-driven), but what happens underneath isn’t visible through the window. An order in an event-driven service management system has to generate an event, so if we have an order API that front-ends a modern system, then somewhere down the line it has to generate that event. But because the system can’t be seen through the API “window,” we don’t know whether it does.
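If the front end really is event-driven underneath, the translation is trivial, which is exactly why the API alone can’t tell you whether it happens. A sketch, again with invented names: a “humanized” order API whose only real work is generating the corresponding event.

```python
import queue

_event_queue: "queue.Queue[dict]" = queue.Queue()

def order_service(customer: str, service_type: str) -> str:
    """A 'humanized' order API. Behind the window, the only real work
    is generating an event; the state/event machinery below handles it.
    Nothing in this signature reveals (or guarantees) that."""
    _event_queue.put({"type": "order", "customer": customer,
                      "service": service_type})
    return "order accepted"  # what the human sees through the window

print(order_service("acme", "vpn"))
print(_event_queue.get())   # the event the modern system actually consumes
```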
This is why the API story can be dangerous. We need to think of network software, even network management software, in an event-driven way. You can’t do lifecycle automation, for example, without having the software be aware of the state of the elements of the network as it attempts to drive the network into a given behavior. At the least, that goal behavior is itself a “state,” and so is every possible step of progress toward, or away from, it. But API discussions don’t impose state/event thinking on what lies below, and in fact usually hide whatever thinking is going on.
Data-modeled, event-driven network software is the answer to the challenges we face today in service lifecycle automation, resilience in network elements created from software instances of what were once appliance-driven processes, and elasticity in capacity to do stuff. If we have that, then we have a true event-driven system, and we also have the only API we need at the network level, the “Inject-Event” API.
To my mind, the easiest and best way to do APIs is to start with the functional architecture of what you want to do: the division of features and capabilities. You then factor in the application’s “model,” the way that components relate to each other and to outside users, and what you get is a structural picture of the application at the top level. This is where APIs should be considered first, and hardest, but it’s a picture we rarely hear about, and can judge even less often.
When someone talks about APIs alone, they’re not telling you (or me) enough to be able to judge the value of what’s being exposed. Is it 1980s technology, monolithic and cumbersome, or is it 2020 technology with events, microservices, and so forth? There is a huge difference, and those who don’t accept that now may find out the truth the hard way when they try to deploy their wares broadly.