Clouds in the sky form in layers. Similarly, and for a number of reasons, there are (or should be) layers in the cloud applications and infrastructure that companies are adopting or planning. Yes, some applications and even some businesses may treat cloud and data center as a single resource, or commit entirely to one or the other. For most, though, we’re back to layers, so we need to understand what they are and how to manage them.
When I’ve surveyed businesses about their applications, or talked with them about application design and hosting trends, they’ve pointed out that most of their “applications” have a decidedly two-layer structure. The bottom layer is the core logic of business applications, things like transaction processing and database maintenance. The top layer is what we could call the “presentation” layer, where we create user access to applications, run analytics to generate reports, and so forth. This is a good starting point because it’s the layer structure that businesses report spontaneously, making it a kind of observational perspective.
In the early days of the cloud, everyone thought about transporting the entire two-layer structure to the cloud. While there are applications that businesses were happy to move en masse, most companies ran into issues with the bottom layer, where information security and regulatory or business policy compliance concerns tend to arise.
The current growth in cloud success, and the pace of adoption of what we call “hybrid cloud,” is due to the recognition by businesses that it’s possible to move the top layer to the cloud and leave the bottom layer in the data center. This creates what I’ve called in my blogs the “front-end/back-end” approach to application-building. A cloud front-end piece, designed for flexibility, resiliency, and agility, is married to a back-end portion that continues to run in the data center. This eliminates the compliance issues of the “move-to-the-cloud” approach and adds GUI flexibility to support mobile devices and browsers, workers and customers, with customized portals. Better insight leads to better adoption.
The obvious question is whether understanding the layers better, or perhaps looking at more than “high clouds and low clouds,” would further improve our insights. Why does the cloud work better for what I’ve called the “front-end” piece? Is the back-end piece in the data center because it doesn’t suit the cloud, because of security/compliance issues, or what?
The interface between application and user is critically important, and even small improvements there result in significant improvements in overall quality of experience. If we look at a typical transaction, we find that it tends to fall into four phases. First, the user selects an action they want to take. Second, they obtain the baseline data needed to support the action. Third, they do something that updates that data, and finally they get a visual confirmation of success (or an indication of failure). These phases take place at human speeds, and a complete transaction might take a minute or more.
Front-end/back-end exchanges happen in the last three of our four phases. In a traditional implementation, we click on a link to get a product description, and the back-end system delivers the associated information. We do an explicit or implicit update (clicking a “buy” button is an implicit update of inventory), and the back-end system relieves the inventory, posts to accounts receivable, or processes a credit card. We then get back a confirmation or refusal.
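To make the flow concrete, here’s a minimal sketch of those four phases in Python, with the last three delegated to back-end stubs. Everything here is illustrative; the Product type, the fetch_product and place_order calls, and the stand-in inventory are hypothetical names of my own, not drawn from any real system.

```python
from dataclasses import dataclass

# --- back-end side (data center): the only code that touches the data ---

@dataclass
class Product:
    sku: str
    price: float
    in_stock: int

_INVENTORY = {"A100": Product("A100", 19.99, 42)}  # stand-in data

def fetch_product(sku: str) -> Product:
    """Phase 2: deliver the baseline data behind the link the user clicked."""
    return _INVENTORY[sku]

def place_order(sku: str, quantity: int) -> bool:
    """Phase 3: the implicit update; a 'buy' relieves inventory."""
    product = _INVENTORY[sku]
    if product.in_stock < quantity:
        return False
    product.in_stock -= quantity
    return True

# --- front-end side (cloud): phase 1 is local; phases 2-4 cross over ---

def run_transaction(sku: str, quantity: int) -> str:
    # Phase 1: the user selects an action (human speed, no exchange yet).
    # Phase 2: read the baseline data from the back end.
    product = fetch_product(sku)
    # Phase 3: perform the update through the back end.
    ok = place_order(product.sku, quantity)
    # Phase 4: return the confirmation or refusal to the user.
    return "Order confirmed" if ok else "Order refused: out of stock"
```

In a real deployment the two halves would sit on opposite sides of the cloud boundary and the calls would travel over the network; the plain function calls here simply stand in for those exchanges.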
One thing users point out about the way these four phases are implemented is that the APIs involved are usually inherited from applications whose users were highly trusted. Often there’s a generalized API that lets a requestor inquire about status, read existing data, write new data, and so on. This approach creates an important problem in our cloud-layers discussion. The database itself is always a protected asset, so when an application with database rights is created, meaning when such an API is exposed, the API now has to be protected too. This is analogous to expanding the base layer of our cloud layer stack, which keeps things in the data center.
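That generalized, trusted API might look like the sketch below. The class and method names are hypothetical; the point is that anything holding this handle effectively holds database rights, so the handle inherits the database’s protection requirements.

```python
from typing import Any

class GeneralDataAPI:
    """A single, highly trusted entry point to a protected database.
    Purely illustrative: this models the inherited API pattern, not
    any particular product."""

    def __init__(self, db: dict):
        self._db = db  # a direct handle on the protected asset

    def status(self) -> int:
        return len(self._db)      # inquire about anything

    def read(self, key: str) -> Any:
        return self._db[key]      # read any existing record

    def write(self, key: str, value: Any) -> None:
        self._db[key] = value     # write any record at all

# Whoever holds a GeneralDataAPI holds, in effect, the database itself,
# so the API must be guarded as strictly as the database is, which is
# exactly what anchors it, and its callers, in the data center.
```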
Even where security/compliance isn’t an issue, things can get stuck in the data center layer of our application because of the cost of moving them out. If an application routinely accesses a database, then there’s a good chance that the cost of access across the cloud boundary is going to be an incentive to keep the application in the data center, along with the data. However, with transaction processing (as opposed to analytics), the database access is really done only to retrieve a very small number of records. If we run a database query from the cloud, we may drag many records into the cloud and not just the ones we want, depending on how the database is accessed.
This essentially defines another layer, a set of application components that aren’t inherently tied to the data center, but which are effectively tied there by the cloud pricing policies associated with data access across the cloud boundary. Eliminate the access charge, or eliminate the mass of data doing the crossing, and you can move these components to the cloud. That could be accomplished, in some cases at least, by sending a query from the cloud to the data center and database, and then returning only the result. Recall that in our four phases of transaction processing, we really saw only one record and updated that record. If the query process is run local to the database, there are only limited charges for exchanging the result of the query across the cloud boundary.
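Here’s a hedged sketch of the difference, using hypothetical names and a crude byte count as a stand-in for egress charges:

```python
import json

# Stand-in customer table living in the data center.
CUSTOMERS = [{"id": i, "name": f"cust-{i}"} for i in range(100_000)]

def boundary_bytes(payload) -> int:
    """Rough proxy for egress cost: bytes serialized across the boundary."""
    return len(json.dumps(payload))

def lookup_from_cloud(customer_id: int) -> dict:
    """Anti-pattern: pull the table into the cloud, then filter there."""
    rows = CUSTOMERS  # the entire table crosses the boundary
    print("bytes crossing:", boundary_bytes(rows))
    return next(r for r in rows if r["id"] == customer_id)

def lookup_beside_database(customer_id: int) -> dict:
    """Run the query local to the database; only the result crosses."""
    row = next(r for r in CUSTOMERS if r["id"] == customer_id)
    print("bytes crossing:", boundary_bytes(row))  # one record crosses
    return row
```

The second function is the “send the query, return the result” pattern; the first is what happens when a cloud component treats the remote database as if it were local.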
In some cases, developers have created secondary databases that can be made cloud-resident, but another solution is to recognize that in our four phases of transaction processing, the whole database really isn’t being accessed, just a small, select number of records. The problem of mixing the layers of our application arises because we don’t re-architect the APIs to fit the cloud model. We don’t contain risk at the point where we expose it, which is always the most effective way to deal with it.
API restructuring would enhance security in itself, but it might also make it possible to move more of the transactional application into the cloud without additional loss of security. The same would hold true for compliance concerns; if API restructuring could reduce compliance risks of cloud usage for a portion of an application, then compliance would not be a barrier to cloud migration. Some pieces of core applications, if properly protected at the API level to limit what could be exposed there, could migrate to the cloud. Another layer is created.
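What that API restructuring might look like, continuing the same hypothetical names: replace the generalized read/write handle with narrow, purpose-built calls that expose only what each transaction phase needs.

```python
class OrderAPI:
    """A restructured, purpose-built API. Illustrative only: it exposes
    exactly what the transaction phases need and nothing more."""

    def __init__(self, db: dict):
        self._db = db  # still protected, but never handed out wholesale

    def get_product(self, sku: str) -> dict:
        """Phase 2: one record, read-only, no arbitrary queries."""
        p = self._db[sku]
        return {"sku": sku, "price": p["price"], "in_stock": p["in_stock"]}

    def place_order(self, sku: str, quantity: int) -> bool:
        """Phase 3: the only write this caller is permitted."""
        p = self._db[sku]
        if p["in_stock"] < quantity:
            return False
        p["in_stock"] -= quantity
        return True

# A cloud component holding an OrderAPI can read one product and place
# one order, nothing else, so exposing it in the cloud risks far less
# than exposing database rights would.
```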
The point of all this is that the prevailing hybrid cloud front-end/back-end model is more either/or than it needs to be. We’re making assumptions in creating the two pieces that reflect old application design practices rather than modern requirements. If we modernized our thinking, we could create more layers, layers of components that could live on either or both sides of the cloud boundary. That could enhance application QoE considerably, and boost cloud prospects at the same time.
This also illustrates why cloud-native design is both important and different. If you want to be able to create layers of application functionality based on mission and cost management, you’ll need to control the APIs that provide the linkage, or you may accidentally freeze your options into the original polar front-end/back-end layers, and that will defeat cloud-native implementation no matter what techniques you decide to apply.