One of the interesting questions raised during my recent enterprise Q&A related to the adoption of cloud-native technology for traditional business applications. An enterprise that had gone unusually far in assessing cloud-native, to the point of starting a small application test, had its developers suspend writing code to answer a basic question about architecture: "How does this application divide into cloud-native microservices?" They could picture this web of cloud-deployed components, but they were wary about the impact of all the network connections among components. That was a good question, but it was the iceberg-tip of a better one.
Let's say we have to process a million orders in an eight-hour period. Back in the old days of data processing, the paperwork would have been captured via data entry, and the assembled orders ("batched") would have been read and sequentially applied. In this kind of batch processing, the time it takes to process a single order, multiplied by one million, becomes the time it takes to run the job. Every disk I/O, every component processing time, is serialized. A hundred milliseconds of delay per order (a tenth of a second) equates to a hundred thousand seconds of accumulated delay.
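To make that arithmetic concrete, here's a minimal sketch using the figures from this example (a million orders, 100 ms of serialized delay per order); the numbers are illustrative, not taken from any specific system:

```python
# Minimal sketch: serialized delay in batch processing, using the figures above.
orders = 1_000_000           # orders to process in the batch window
per_order_delay_s = 0.1      # 100 ms of I/O and processing delay per order

total_delay_s = orders * per_order_delay_s     # every delay is serialized
print(f"{total_delay_s:,.0f} seconds of accumulated delay")   # 100,000 seconds
print(f"{total_delay_s / 3600:.1f} hours")                     # ~27.8 hours, well past the 8-hour window
```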
When we do the same million orders in real time, things are different. The order processing is now coupled to the actual transaction, typically involving things like human customers and human salespeople pushing real merchandise around. Toss a hundred milliseconds into this mix and you don't see anything different where the dollar meets the point-of-sale device.
The reason I’m bringing up this seemingly tangential point is that the nature of an IT application, fundamentally, depends on the way it relates to the real world. In the ‘60s and ‘70s when most commercial activity was batched, we had to worry a lot about process efficiency. Concentration of data into data centers was smart because the penalty for distribution was latency, which accumulated in those old application models. With OLTP we could forgive latency more easily, which is a big change.
But OLTP doesn't justify latency, nor is "having latency-tolerant distributed applications" a goal in itself. OLTP was justified by the improved quality of information and the improved productivity of the workers involved in commercial activity. What happened with OLTP was that we created a different application model that interacted with the real commercial world in a different way. That model prioritized different IT issues and required different trade-offs.
The reason this is relevant for cloud-native is illustrated by a question: What would happen if we applied the OLTP model of applications to batch processing, if we let a new technical model get type-cast into an old application model? Answer: hundreds of thousands of seconds of extended runtimes. You should not expect to take the application model used for OLTP and fork-lift it into a cloud-native implementation. You have to go back and ask where the benefit is, then decide how that benefit could best be achieved. That's when you look at cloud-native.
The better question I opened with is how you know which applications justify cloud-native treatment. The application mentioned by my enterprise friend wasn't a good candidate, period, and that's probably why developers were struggling to map it to cloud-native. To determine what is a good cloud-native application, you have to know both the benefit the application will bring and the benefit that cloud-native will bring to the application. Only a favorable combination of the two can actually justify a cloud-native implementation.
Batch-modeled applications are lousy cloud-native applications because distributing stateless pieces only builds up latency. OLTP applications may be “good” candidates for cloud-native in a technical sense, but you’d have to know what cloud-native brought to the table to know whether the new model would benefit a given OLTP application, or just “work” for it. The best way to find out what’s a good technical candidate is to look at what cloud-native does that traditional technical models for application development don’t.
Most people agree that cloud-native technology works best when applied to simple signals from the real world, what are usually called "events". A transaction can be dissected into a series of events, each representing a step a user takes in interacting with a commercial application. The benefit I get from that kind of interaction is that I can then take the processes for each step and make them so simple as to be stateless and fully scalable and distributable. What is that worth? That's a question that needs to be answered for the enterprise trial cloud-native application I mentioned above.
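To make "stateless" concrete, here's a rough sketch of what one step of a dissected transaction might look like as a cloud-native component. The event fields and function name are hypothetical, not drawn from the enterprise's application or any particular framework; the point is simply that everything the step needs arrives in the event and nothing is kept between invocations, so copies can be scaled out and placed freely:

```python
# Hypothetical sketch of a stateless, per-step event handler.
# All the state it needs travels with the event; nothing is held between calls.

def handle_pricing_step(event: dict) -> dict:
    """Handle one step of an order transaction: price the line items."""
    items = event["items"]                                    # state arrives in the event
    total = sum(item["unit_price"] * item["qty"] for item in items)
    return {**event, "step": "priced", "total": total}        # pass the enriched event along

# Example invocation with a hypothetical event payload:
event = {"order_id": "A123", "items": [{"unit_price": 19.99, "qty": 2}]}
print(handle_pricing_step(event))   # ...'step': 'priced', 'total': 39.98
```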
Then there's the cost or risk. The event nature of the application is likely to generate increased network traffic among components. There's latency associated with that traffic, and there's a risk that the loss of connectivity will create a failure that simple resiliency at the cloud-native component level can't address (you can't replace something when there's no connectivity, and if you did, you'd not know it!).
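As a back-of-the-envelope illustration of that cost (the hop count and per-hop latency below are assumptions, not measurements), each component-to-component call in the event chain adds network delay, and no amount of per-component resiliency helps if a hop simply can't be reached:

```python
# Rough sketch: latency added by chaining stateless components over a network.
hops = 8                        # assumed component-to-component calls per transaction
per_hop_latency_s = 0.02        # assumed 20 ms of network round trip per hop

added_latency_s = hops * per_hop_latency_s
print(f"{added_latency_s * 1000:.0f} ms of network latency added per transaction")   # 160 ms

# Resiliency can redeploy a failed component, but if connectivity is lost,
# the replacement is just as unreachable and the whole chain stalls.
```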
The enterprise I talked with admits that they didn't consider any of this when they picked their trial application target, and it's now clear to them that they picked the wrong target if their goal was to try a cloud-native development. But in a sense, they picked a great target, because it showed them an important truth: not everything benefits from cloud-native.
We have batch applications today, including a lot of what we call "analytics". These operate on stored data and conform to one class of benefit/risk tradeoffs. We have OLTP today, working as the front-end to an IT-enabled business and supporting workers, customers, and partners. That we have both at the same time proves that one model didn't replace the other. Instead, one set of missions, the real-time missions, spawned a new technical application model. That new model then absorbed the missions that justified it. That's what will happen with cloud-native.
Does that mean all the attention we (and, yes, I) have given cloud-native isn’t justified? I don’t think so. IT evolution isn’t driven by what stays the same, but by what changes. When new technical options are introduced, they let us address missions differently, and even create new missions. The new technical options will likely reduce some commitments to past options, but many will stay in place. Still, the changes (other than the inevitable cost optimization) in our IT infrastructure will come due to the new technical options and their associated missions. We need to know what the new infrastructure will need, and how we’ll run it.
Then there are those new missions. We do things with OLTP today, in its manifestation as web commerce, that open new business opportunities and revenues impossible to realize using batch processing. We will do things with cloud-native that could not be done using OLTP, and those things are what will drive the spending, the technology changes, and ultimately the vendors.
The likely common thread with these new missions isn’t “IoT” or “autonomous vehicles” or any single technology, but rather the symbiosis of a lot of things we’ve atomized in stories. It’s “the real world”, the environment in which we all live and work. Batch processing put IT a long way from the worker, and OLTP let us (literally) touch IT. The next step is for us to live information technology, or at least live with and within it. In order to do that, real-world knowledge has to be gathered and correlated with behavior, business and personal.
We see this today. If I search online for a camera system or component, I start seeing ads for that class of product almost instantly when I visit web pages or do related searches. The Internet ad world has absorbed my interest and responded. Whether this process represents an invasion of privacy or not, it's an example of contextualization. The next step in IT is to introduce context to applications, in order to make those applications more effective and productive. To the extent that this is successful, the benefits will justify the cost of development and deployment.
All of this raises questions for cloud providers, whether current or aspiring. If users are not really ready for cloud-native, then offerings that take advantage of it will have to be more fully architected for them, to lower the barriers to adoption. That also means those offerings will be more easily protected from competition, since there are no standards for the "services" that might frame a cloud-native application. Perhaps this is where cloud providers will get in, and get ahead, in the competition. If things stay development-focused and developer-limited, it's the traditional software vendors who are in the driver's seat for the cloud.