Why Not “Software-Defined Software?”

We have software-defined networks, software-defined data centers, and software-defined servers. What's missing? I contend it's the most obvious thing of all: software-defined software. We've built a notion of virtualization and agility at the resource level, but we're forgetting that all of the stuff we're proposing is being purchased to run software. If the software inventory is the same tired old junk, how are you going to exploit all that agility and flexibility?

I said yesterday, referencing Oracle's quarter, that the key to the future for both Oracle and the rest of tech is supporting point-of-activity empowerment (PofAE), which simply means using agile technology to give people what they want, when and where they want it. In a simple infrastructure sense, PofAE is a marriage of cloud and mobility. In a deeper sense, it's also a renewal of our agility vows, but starting at the software level, where all experiences start.

If you look at software and database technology today, you'll find all kinds of claims of agile behavior. We have “loose coupling”, “dynamic integration”, “activity-directed processes” and so forth. It might seem like software has been in the lead all along, but in fact software practices still tie developers and users to rigid frameworks, and this rigidity is a barrier to realizing the full benefit of elastic resources.

One simple example of a barrier is the relational database. We think of data as a table, with key information like an account number and supporting data like an address or a balance. The problem with this in an agile world is that the structure of the data creates what might be called a “preferred use” framework. You can efficiently find something by account number if that's how the table is ordered, but if you want to find everything whose balance is below, above, or between specified points, you have to scan through everything.
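A minimal sketch of that “preferred use” framework, using hypothetical account data (the account numbers, fields, and function names here are illustrative, not from any real system):

```python
# Hypothetical accounts table, organized around the account number.
# Lookups by that key are cheap; a query on any other column forces
# a scan of every row -- the structure dictates the "preferred use".
accounts = {
    "A-1001": {"address": "12 Elm St", "balance": 250.0},
    "A-1002": {"address": "9 Oak Ave", "balance": 75.0},
    "A-1003": {"address": "4 Pine Rd", "balance": 1200.0},
}

def by_account(acct):
    # Efficient: a direct lookup by the key the data is ordered on.
    return accounts[acct]

def by_balance(low, high):
    # Inefficient: a balance-range query has to touch every row.
    return [a for a, row in accounts.items()
            if low <= row["balance"] <= high]
```

The asymmetry is the point: the same data answers one question cheaply and the other only by brute force.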

There's been talk for ages about semantic relationships for the web or for data, but despite all this debate we're not making much progress in escaping the tabular prison. We should be thinking of data in terms of virtualization. If we can have virtual servers and virtual networks, why not a virtual RDBMS? Why can't an “abstraction” of a relational table be instantiated on what is essentially unstructured information? Such a thing would let us form data around practices and needs, which is, after all, the goal of all this cloud and software-defined hype.
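One way to picture such a virtual table is a relational-looking projection chosen at query time rather than baked into storage. This is a hypothetical sketch (the documents, column names, and `virtual_table` helper are invented for illustration):

```python
# Unstructured documents: no fixed schema, fields come and go.
documents = [
    {"kind": "customer", "name": "Ann", "balance": 300, "city": "Reno"},
    {"kind": "customer", "name": "Bob", "balance": 80},   # no city field
    {"kind": "order", "item": "router", "qty": 2},
]

def virtual_table(docs, columns, where=lambda d: True):
    """Instantiate a relational 'abstraction' over untyped documents:
    pick the columns and the filter when you ask, not when you store."""
    return [tuple(d.get(c) for c in columns) for d in docs if where(d)]

# Two different "tables" shaped from the same raw information:
customers = virtual_table(documents, ("name", "balance"),
                          where=lambda d: d.get("kind") == "customer")
orders = virtual_table(documents, ("item", "qty"),
                       where=lambda d: d.get("kind") == "order")
```

The data forms around the use, which is the inversion the post is arguing for.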

Another example is in the area of processes. If I develop an application for a single server, I can run it on that server and lose efficiency because it uses the hardware badly. I can consolidate it into a virtual data center, where it shares server capacity with other apps, and it will be less inefficient. If I run it in the cloud, where there's even more economy of scale, I may be a little more efficient still. But at the end of the day I'm still running the same software and supporting the same experience I started with. Realistically, I'll be trading off a little performance for that higher efficiency, so I'm actually doing less for the experience than before.

A popular way of addressing this is to componentize software, to break it up into functional units. Something that was once a monolithic program can now be broken into a dozen components that can be reassembled to create the thing we started with, but also assembled in different ways with other components to create other things. The most fully architected extreme of this is the Service Oriented Architecture (SOA), which lets us build components and “orchestrate” them into applications using a business process language (BPEL) and a workflow engine (such as the Enterprise Service Bus, or ESB). There's a language (WSDL) to define a service and its inputs and outputs, both to make browsing through service options easier and to prevent lining up a workflow whose “gozoutas” don't match the “gozintas” of the next guy in line. SOA came along almost twenty years ago, and it's widely used but not universal. Why?
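The orchestration idea can be boiled down to a toy: each service declares what it needs and yields (the role WSDL plays), and a tiny workflow engine (standing in for an ESB) refuses to chain services whose outputs don't feed the next one's inputs. The service names and fields below are hypothetical:

```python
# Each "service" publishes its inputs ("gozintas") and outputs
# ("gozoutas"), plus an implementation. Real SOA does this with
# WSDL contracts; here a dict stands in for the registry.
services = {
    "lookup": {"needs": {"account"}, "yields": {"customer"},
               "fn": lambda m: {"customer": "Ann"}},
    "score":  {"needs": {"customer"}, "yields": {"risk"},
               "fn": lambda m: {"risk": "low"}},
}

def orchestrate(workflow, message):
    """Run services in order, checking that each one's declared
    inputs are already present in the accumulated message."""
    for name in workflow:
        svc = services[name]
        missing = svc["needs"] - message.keys()
        if missing:
            raise ValueError(f"{name} is missing inputs: {missing}")
        message.update(svc["fn"](message))
    return message
```

Chaining `lookup` then `score` works; reversing the order fails the contract check before anything runs wrong, which is exactly the safety SOA buys with all that machinery.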

The reason is the Web. SOA is a complex architecture filled with complex interfaces that provide a lot of information about dynamic processes but are difficult to use, particularly where you're dealing with best-effort network or IT resources. The Internet showed us that we need something simpler, and provided it with what we call “REST”: a simple notion of a URL that, when sent a message, will do something and return a response. It's elegant, efficient, simple, and incredibly easy to misuse, because it's hard to find out what a given RESTful service does, needs, or returns.
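REST's elegance and its blind spot both fit in a few lines. In this hypothetical sketch (the routes and fields are invented), a resource is just a URL mapped to a handler, and nothing in the interface says what the request must contain or what comes back:

```python
# A resource is just a URL that takes a message and returns a
# response. Note what's absent: no contract, no declared inputs
# or outputs -- you have to know, or guess.
routes = {
    "/balance":  lambda req: {"balance": 250.0},   # what must req hold? unstated
    "/transfer": lambda req: {"status": "ok"},     # which fields? also unstated
}

def rest_call(url, request):
    handler = routes.get(url)
    if handler is None:
        return {"error": 404}
    return handler(request)
```

Compare this with the SOA contract checking above it in spirit: REST trades all of that discoverability for radical simplicity, which is precisely the gap the post says needs filling.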

We clearly need something in between, some way of creating applications that are not just dynamic or malleable, as REST and SOA are, but extemporaneous, in that they don't presume any process structure or workflow any more than a truly agile data environment would presume a data structure. With something like this, we can handle “big data”, “little data”, clouds, NFV, SDN, and all the other stuff we're talking about at the software level, where IT meets the user. If we don't do this, we're shadowboxing with the future.

You don't hear much about this sort of thing out in the market, probably because, despite the fact that we think software and IT are the bastion of innovation, we've let ourselves become ossified by presuming that the “basic principles” that guided us in the past will lead us indefinitely into the future. If that were true, we'd still be punching cards and doing batch processing. The virtual, software-defined world needs something more flexible at the software level, or it will fail at the most basic point. What software cannot, for reasons of poor design, define will never be utilized at the server or network level, even if we build architectures to provide it.
