The Real Reasons the Cloud Doesn’t Always Save Money

Why aren’t we saving money with cloud computing? If you chart the trajectory of IT spending, it’s not trending lower but higher. Given that “you save money adopting cloud computing” is the mantra of the age, why aren’t we seeing the savings? InfoWorld ran an article on the topic, and what struck me was that it didn’t address the biggest question of all: whether we’re trying to save money in the first place.

The InfoWorld piece offers three reasons why we don’t save money on the cloud, and they all relate to practices that can contribute to higher-than-necessary spending on our cloud applications. OK, yes, those reasons are important in optimizing cloud use, but are they the thing that could turn that upward IT spending trajectory downward? I don’t think so, not by a long shot. And even within those three reasons for cloud overspending, there are hidden factors that must be addressed if those limited remedies are to work.

I’ve blogged regularly on cloud computing, and I won’t bore you all with the details of my view. To summarize, I believe that enterprise cloud adoption has been driven by the need to optimize businesses’ relationships with their prospects and customers by creating web-and-app-friendly portals into current applications. We’re not moving things to the cloud; we’re using the cloud to do things we’ve never done before. To expect that we could cut IT spending with the cloud under these circumstances is not, and never was, realistic.

The cloud is a cheaper way to do the user-centric application front-ending that businesses know is required to improve stakeholder engagement with information. Give users a convenient portal into the world of stuff to buy and they’ll use it. Same with partners and employees. The Internet and smartphones created that portal, so the shift to agile, Internet-centric presentations of products and services was inevitable. The cloud was never supposed to cut costs; it was supposed to cut the cost increases that could have crippled companies’ customer-facing initiatives had those initiatives been implemented in the data center. The “hybrid cloud” you’ve been hearing a lot about recently is actually the only kind of cloud the overwhelming majority of enterprises need or want.

That cloud costs should be compared with equivalent data center costs for the same application evolution is the first critical truth in managing cloud costs. It’s the first axiom, in fact, of cloud economics. The second axiom (which those who remember high-school math will recall is a “self-evident truth”) is that we have ignored the first axiom in our assessments of cloud cost-effectiveness. The InfoWorld article proves that point.
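To put that first axiom in concrete terms, here’s a back-of-the-envelope sketch. Every number is invented for illustration; the point is which baseline you subtract, not the figures themselves.

```python
# A minimal sketch of the first axiom, with made-up numbers. The right
# comparison is cloud cost vs. what the SAME new capability would have
# cost in the data center, not vs. the old data center budget.

old_dc_cost = 100            # existing data center spend (arbitrary units)
new_feature_dc_cost = 60     # what the new front-end work would add in the DC
new_feature_cloud_cost = 35  # what the same work costs in the cloud

wrong_view = (old_dc_cost + new_feature_cloud_cost) - old_dc_cost
# Spending appears to rise by 35 units: "the cloud isn't saving money!"

right_view = new_feature_dc_cost - new_feature_cloud_cost
# Against the proper baseline, the cloud avoided 25 units of cost increase.

print(f"Apparent new spending: {wrong_view}")
print(f"Cost increase avoided: {right_view}")
```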

In geometry you can build theorems from axioms, and we can do it here too. My first proposed theorem is that because we’ve not actually understood the cloud we’ve been planning for and adopting, we’ve not taken all the steps possible to optimize it. How could we, without knowing where we were really heading?

If the primary mission of the cloud is to host application components that enrich prospect/customer interaction (we’ll get to other missions later), then the first thing that needs to be done is to decide what information and processing is needed in the cloud to do that. Remember that websites and content delivery networks can deliver product information. It’s when you start moving toward the “sale” end of the prospect-to-customer trajectory that you need detailed pricing, stock, and availability information, and eventually have to convert interaction into transaction. Richer interactions then push some data outward from the data center.

A good example is editing and validity checking. Transaction processing often includes editing prior to applying the transaction, but if you’re going to push the transaction steps outward, that editing is likely done in the cloud. If you want to track prospects’ and customers’ interest in your products, you likely want a sign-on step, and that creates a link to the account and account history. But even during pre-sale interaction you may want to push stocking information, shipping dates, and so forth outward to avoid hammering the data center system while people browse around.
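To make that partitioning concrete, here’s a minimal sketch of a cloud-resident front end. Everything in it, from the function names to the cache policy, is hypothetical, and a real system would have to confront the staleness and consistency questions this version ignores.

```python
import time

# Hypothetical cloud-side front end: validate and cache at the edge,
# and cross into the data center only when a transaction must commit.

STOCK_TTL_SECONDS = 300   # how long cached stock data is trusted
_stock_cache = {}         # product_id -> (stock_level, fetched_at)

def fetch_stock_from_data_center(product_id):
    # Placeholder for the real call into the data center system of record.
    return 42

def forward_to_data_center(order):
    # Placeholder: apply the transaction against the system of record.
    return {"accepted": True}

def get_stock(product_id):
    """Serve browsing traffic from a cloud-side cache so casual lookups
    don't hammer the data center inventory system."""
    entry = _stock_cache.get(product_id)
    if entry and time.time() - entry[1] < STOCK_TTL_SECONDS:
        return entry[0]
    level = fetch_stock_from_data_center(product_id)  # the expensive hop
    _stock_cache[product_id] = (level, time.time())
    return level

def submit_order(order):
    """Do the editing and validity checks in the cloud; forward only
    well-formed transactions across the boundary."""
    errors = []
    if order.get("quantity", 0) <= 0:
        errors.append("quantity must be positive")
    if not order.get("account_id"):
        errors.append("sign-on required to link account history")
    if errors:
        return {"accepted": False, "errors": errors}  # never left the cloud
    return forward_to_data_center(order)              # the real transaction
```

Notice that a rejected order never generates traffic across the cloud/data-center boundary, which is exactly the kind of economic tradeoff the next point is about.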

The point here is that wherever you have a boundary between technology options, particularly a boundary with different economic tradeoffs on each side, you need to consider both sides of that boundary in order to optimize the system overall. We can’t change the cloud without considering the data center model in play, and the reverse is likely even more true. Enterprises often fail to do that, and in that failure they sow the seeds of inefficient cooperation between two essential pieces of the same application.

My second theorem is that cooperation between hybrid cloud elements can generate significant costs in itself. Traffic in and out of the cloud is usually associated with charges, and storing information in the cloud can do the same thing. These charges come on top of the costs associated with the application model in play in both places, the cost to make changes, and so forth. It’s very easy to forget these costs, and even easier to focus on one without even considering the other. One enterprise told me they moved a database to the cloud to reduce cross-border traffic charges, only to find that the cost of keeping the database updated and the cost of storage exceeded the crossing charges.

Enterprises tend to get tunnel vision on cloud costs, meaning that they often focus on a cost they’re aware of and ignore the cost of alternatives, even to the point of forgetting to ask whether there are any alternatives. This is particularly true of the traffic charges associated with the cloud. Those charges are based on the number and size of message exchanges and data movements, and often the cost of refreshing a database is higher than the cost of keeping the data in the data center and paying for traffic.
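To see how easily that comparison flips, here’s a back-of-the-envelope sketch in the spirit of that enterprise’s database story. Every rate and volume here is invented for illustration; real provider pricing varies widely and changes often.

```python
# Illustrative cost comparison for the database-placement decision.
# All rates and volumes are made up; check your provider's actual pricing.

EGRESS_RATE = 0.09    # $/GB for traffic crossing the cloud boundary
STORAGE_RATE = 0.023  # $/GB-month for cloud storage

def keep_in_data_center(query_gb_per_month):
    """Database stays put; you pay traffic charges on query results."""
    return query_gb_per_month * EGRESS_RATE

def move_to_cloud(db_size_gb, refresh_gb_per_month):
    """Database moves; you pay storage plus the traffic to keep it fresh."""
    return db_size_gb * STORAGE_RATE + refresh_gb_per_month * EGRESS_RATE

# A large database that changes often can cost more to host and refresh
# in the cloud than the query traffic it was supposed to eliminate.
queries = keep_in_data_center(query_gb_per_month=200)               # $18.00
hosted = move_to_cloud(db_size_gb=2000, refresh_gb_per_month=500)   # $91.00
print(f"Keep in data center: ${queries:.2f}/month")
print(f"Move to cloud:       ${hosted:.2f}/month")
```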

My final theorem is that sometimes you just have to do the application over. If a hybrid application is the rule, and if the normal model of development is to use the cloud to absorb the new requirements, is there not a point where limitations in the quality of experience and sub-optimal costs justify rebuilding the application completely? The fact is that the relatively small number of applications that have “moved to the cloud” and gained the hoped-for business value did so because of this theorem.

One good example is the whole SaaS/Salesforce story. Most companies already have things like CRM and ERP applications running in the data center, and it’s easy for them to front-end these with cloud components to enhance the user experience. However, because the data involved is often limited in volume and scope of use, it makes more sense to simply use a cloud-resident application and forget the hybridization.

There are measures cloud users can take to better monitor costs and tweak their cloud options. These are good ideas, but they can’t address problems with the application model being deployed. To optimize cloud costs, it’s critical to optimize the way an application is structured and how processing and traffic are balanced between cloud and data center. If the starting strategy for your cloud application is wrong, all the tuning and tweaking of options in the world won’t fix it.