Extending Data-Modeled Services to Run-Time: Lessons from aaS Part 1

Abstraction in any form requires a form of modeling, a way of representing the not-real that allows it to be mapped to some resource reality and used as though it were real. We have two very different but important abstraction goals in play today, one to support the automation of service and application lifecycles and the other to support the execution of applications built from distributed components. Since both of these end up being hosted in part at “the edge,” it sure would be nice if we had some convergence of approach for the divergent missions. It may be that the “as-a-service” concept, which has elements of both missions already, can offer us some guidance, so we’ll explore modeling aaS here in a two-blog series.

Everyone seems to love stuff-as-a-service, where “stuff” is anything from hardware to…well…anything else. As-a-service is an abstraction, a way of representing the important properties of something as though those properties were the “something” itself. When you buy infrastructure- or software-as-a-service, you get something that looks like infrastructure or software but is actually the right to use the abstract thing as though it were real. For “aaS” to work, you have to be able to manage and use the abstraction in an appropriate way, which usually means the way you’d manage and use what the abstraction represents.

There could be multiple ways of doing that, but I think there’s a value in organizing how that would be done, and at the same time perhaps trying to converge the approach with modern intent-model concepts and with data-driven service management of the type the TMF has promoted. Automating service management, including applications management, is an event-driven process. Control-plane network exchanges are also event-driven, which means that most of what’s critical in 5G could be viewed through the lens of events. That’s a big chunk of the expected future of the cloud, distributed services, and telecom.

In the cloud, as-a-service means that whatever the prefix names is offered, just as the term suggests, as a service. IaaS represents a hardware element, specifically a server, which can be real or virtual. SaaS represents an application, so while there is surely a provisioning or setup dimension to SaaS use, meaning a lifecycle dimension, the important interfaces are those that expose application functionality, which is the use of the service, not the management of it. PaaS is a set of tools or middleware elements added to basic hosting. The new container offerings are similar specializations.

Applied to hosting, most IaaS represents a virtual machine, which of course is supposed to look and act like a real server. Or does it? Actually, IaaS is a step removed from both the real server and the VM. A real server is bare metal, and a virtual machine is a partitioning of bare metal, meaning you have to load an operating system and do some setup. IaaS from most cloud providers already has the OS loaded, so what you’re really getting is a kind of API, the administrative logon interface to the OS that’s been preloaded.

To avoid using network-service-model terms, TMF terms, before we’ve validated that they could work, I’m going to call the representation of an aaS relationship a “token.” So, suppose we start our adventure in generalizing aaS by saying that in as-a-service relationships, the service is represented primarily by an “administrative token” that includes a network URL through which the service is controlled. You can generalize this a bit further by saying that there’s an account token, to which various service tokens are linked.
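To make the idea concrete, here is a minimal sketch of what such tokens might look like as a data model. The class and field names (ServiceToken, AccountToken, admin_url, and so on) are purely illustrative assumptions, not terms drawn from any TMF specification or cloud provider API.

```python
# Illustrative sketch only: an account token to which service tokens are linked,
# each service token carrying the administrative URL that controls the service.
from dataclasses import dataclass, field
from typing import List

@dataclass
class ServiceToken:
    """One aaS relationship; admin_url is the control point for the service."""
    name: str
    admin_url: str  # network URL through which the service is controlled

@dataclass
class AccountToken:
    """The account-level token to which various service tokens are linked."""
    account_id: str
    services: List[ServiceToken] = field(default_factory=list)

# Hypothetical example: an account holding one IaaS administrative token
account = AccountToken(account_id="acct-001")
account.services.append(
    ServiceToken(name="iaas-vm-1", admin_url="https://provider.example/admin/vm-1")
)
```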

Suppose we had a true VM-as-a-service, with no preloaded OS? We could expect to have an administrative token that represented the VM’s “console,” or a configuration API from which the user could load the necessary OS and middleware. That suggests another layer of hierarchy: one token representing the VM, another representing the OS admin login.

From this, it appears that we could not only represent any resource-oriented IT-aaS through a series of connected, hierarchical tokens, but also maintain the relationships among the elements of services. We could, for example, envision a third layer of hierarchy in my VMaaS above, representing containers or serverless or even individual applications. Because of the hierarchy, we could also tie issues together across the mixture.
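A hedged sketch of that layered structure might look like the following, assuming a single generic Token type that can be nested: a VM token at the top, an OS-admin token beneath it, and a container token beneath that. All of the names and URLs are hypothetical.

```python
# Illustrative sketch only: a nestable token type used to build the
# VM -> OS -> container hierarchy described in the text.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Token:
    """A generic, nestable token; each level exposes its own control URL."""
    name: str
    admin_url: str
    children: List["Token"] = field(default_factory=list)
    parent: Optional["Token"] = None

    def attach(self, child: "Token") -> None:
        """Link a child token beneath this one in the hierarchy."""
        child.parent = self
        self.children.append(child)

# Hypothetical VMaaS hierarchy: VM console -> OS admin login -> container
vm = Token(name="vm-1", admin_url="https://provider.example/console/vm-1")
os_admin = Token(name="os-1", admin_url="ssh://vm-1.example/admin")
container = Token(name="app-container-1", admin_url="https://vm-1.example/runtime/ctr-1")
vm.attach(os_admin)
os_admin.attach(container)
```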

If we were to rehost a container in such a VMaaS configuration, we would “rehost” the token in the new token hierarchy where it now belonged. At the time of the rehosting, we could also record a history of the places that particular token had been. That could facilitate better analysis of performance or fault data down the line, and even be of some help in training machine learning or AI tools aimed at automating lifecycle management.
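One way to picture that, continuing the illustrative Token sketch above, is a rehost operation that moves a token under a new parent and appends its prior location to a history list. Again, everything here is an assumption for illustration, not a defined mechanism.

```python
# Illustrative sketch only: rehosting a token and keeping a record of where it has been.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import List, Optional

@dataclass
class Token:
    """Same illustrative token as before, with a history of prior hostings."""
    name: str
    admin_url: str
    children: List["Token"] = field(default_factory=list)
    parent: Optional["Token"] = None
    history: List[str] = field(default_factory=list)

def rehost(token: Token, new_parent: Token) -> None:
    """Move a token under a new parent, recording where and when it used to live."""
    if token.parent is not None:
        token.parent.children.remove(token)
        stamp = datetime.now(timezone.utc).isoformat()
        token.history.append(f"{stamp}: hosted under {token.parent.name}")
    token.parent = new_parent
    new_parent.children.append(token)
```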

What we can take from all of this is that it would be perfectly possible to create a data model to describe, and interact with, those aaS offerings that represent resources. That’s likely because what you do with resources is create “services”, meaning runtime behaviors, and the resources themselves are manipulated to meet the service-level agreements (SLAs), express or implied. That means lifecycle management.

Could the modeling be extended to the runtime services themselves? Since aaS includes runtime services (SaaS, NaaS), it would be essential that we include runtime model capabilities in the picture just to accommodate current practices. Beyond that, edge computing applications like IoT are likely to generate services, projected through APIs, to represent common activities. Why have every IoT application do things like device management, or interpret the meaning of location-related events?
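If we wanted the token idea to stretch to runtime services, one speculative way to express it would be a token that carries both an administrative (lifecycle) interface and the functional APIs an application would actually invoke at runtime, as in the sketch below. The service names and URLs are invented for illustration.

```python
# Illustrative sketch only: a runtime-service token exposing both a lifecycle
# interface and the functional interfaces a consuming application would call.
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class RuntimeServiceToken:
    """A token carrying both lifecycle and runtime (functional) interfaces."""
    name: str
    admin_url: str  # lifecycle-management interface
    functional_apis: Dict[str, str] = field(default_factory=dict)  # runtime interfaces

# Hypothetical IoT "device management" service projected through APIs
iot = RuntimeServiceToken(
    name="iot-device-mgmt",
    admin_url="https://edge.example/admin/iot",
    functional_apis={
        "register_device": "https://edge.example/api/devices/register",
        "location_events": "https://edge.example/api/events/location",
    },
)
```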

In my early ExperiaSphere project, in the Alpha test in particular, I created a model that represented not only lifecycle behavior but also runtime behavior. The application used a search engine API to do a video search, retrieved a URL, and then initiated a NaaS request for delivery. The NaaS request was simulated by a combination of a “dummy” request for a priority ISP connection (with Net Neutrality, of course, there could be no such thing) and a request to a device vendor’s management system for local handling. What the Alpha proved was that you could create a runtime, in-use service model and merge lifecycle behavior into it, provided that you had intent-modeled the service components and had control interfaces available to influence them.

Could that approach work for SaaS and NaaS overall? That’s what we’ll explore in the next blog on our aaS-modeling topic.