Is There a Business Benefit Driving “Hyperconvergence” or “Composable Infrastructure”?

The cloud is a different model of computing, a combination of virtualization and network hosting.  We all recognize that “the cloud” is something apart from virtual machines or containers, OpenStack or vCloud, IaaS or PaaS or SaaS.  It’s also something apart from the specific kind of servers you might use or the data center architecture you might adopt.  Or so it should be.

I had a question last week on LinkedIn (which is where I prefer my blog questions to be asked) on what I thought would drive “Infrastructure 2.0.”  My initial response was that the term was media jargon and that I’d need a more specific idea of what the question meant in order to respond.  When I got that detail, it was clear that the person asking the question was wondering how an agile, flexible infrastructure model would emerge.  Short answer: via the cloud.  Long answer?  Well, read on.

The biggest mistake we make in technology thinking and planning is perpetuating the notion that “resources suck,” meaning that if we simply supply the right framework for computing or networking (forgetting for the moment how we’d know what it was and how we’d pay for it), the new resource model would simply suck in the applications that justify it.  Does that sound superficial?  I hope so.

IT or network resources are the outcome of technology decisions, which are the outcome of application and service decisions, which are the outcome of benefit targeting, which is the outcome of demand modeling.  We can’t just push out a new infrastructure model, because the layers above it that would connect it to benefits aren’t in place.  The best we could do at this point is to say that the new compute model, the cloud, could be an instrument of change.  The challenge even there is deciding just what kind of changes would then drive the cloud, and you have to do that before you decide how the cloud drives compute infrastructure.

If you did a true top-down model of business IT, you’d have to start with an under-appreciated function, the Enterprise Architect (EA).  This is a role that evolved from the “methods analysts” of the past, who were responsible for figuring out what the elements of a job were as a precursor to applying automation.  But it’s here that we expose a big risk, because the best way to do a job may not be the way it’s been done in the past, and past practices have often guided high-level business architectures.

An alternative to this approach is the “focus-on-change” model, which says that if you’re going to do something transformational in IT these days, you will probably have to harness something that’s very different.  I cite, as change-focus options, mobility and IoT.  No, not analytics; analytics applies to current practices and tries to make better decisions through better information.  Mobility and IoT are about a higher-level shift, away from providing a worker with an IT-defined framework for doing a job and toward helping a worker do whatever they happen to be doing.

Any business has what we could call elemental processes, things that are fundamental to doing the business, but there’s a risk even in defining these.  For example, we might say that a sales call is fundamental, but suppose we allow the buyer to purchase online?  Same with delivery, or even with production.  There are basic functional areas, though.  Sales, production, delivery, billing and accounting, all are things that have to be done in some way.  Logically, an EA should look at these functional areas and then define a variety of models that group them.  An online business would have an online sales process, and it might dispatch orders directly to a producer for drop-shipping to the customer or it might pick goods from its own warehouse.  The goods might then be manufactured by the seller or wholesaled.

When you have a map of processes you can then ask how people work to support them.  The most valuable change that appears to be possible today is the notion of “point-of-activity empowerment” I’ve blogged about many times.  Information today can be projected right to where the worker is at the moment the information is most relevant.

Another mistake we make these days is in presuming that changing the information delivery dynamic is all we need to do.  A mobile interface on an application designed for desktop use is a good example.  But would the worker have hauled the desk with them?  No, obviously, and that means that the information designed for desktop delivery isn’t likely to be exactly what the worker wants when there’s a mobile broadband portal that delivers it to the actual job site.  That’s why we have to re-conceptualize our information for the context of its consumption.

That’s why you could call what we’re talking about here contextual intelligence.  The goal is not to put the worker in a rigid IT structure but to support what the worker is actually doing.  The first step in that, of course, is to know what that is.  You don’t want to interfere with productivity by making the worker describe every step they intend to take.  If we defined a process model, we could take that model as a kind of state/event description of a task, a model of what the worker would do under all the conditions encountered.  That model could create the contextual framework, and we could then feed it events from the outside.

Some events could result from the worker’s own action.  “Turn the valve to OFF” might be a step, and there would likely be some process control telemetry that would verify that had been done, or indicate that it didn’t happen.  In either case, there is a next step to take.  Another event source might be worker location; a worker looking for a leak or a particular access panel might be alerted when they were approaching it, and even offered a picture of what it looked like.
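To make that concrete, here’s a minimal sketch of what a state/event description of a task might look like, using the valve example.  The state names, events, and information actions are all invented for illustration; a real model would obviously be richer:

```python
# Hypothetical state/event table for a single field task.  Each state maps the
# events we might receive to the next state and the information to push to the worker.
TASK_MODEL = {
    "approach_valve": {
        "near_valve":          ("close_valve", "show_photo_of_valve"),
        "wrong_location":      ("approach_valve", "show_route_to_valve"),
    },
    "close_valve": {
        "valve_off_confirmed": ("verify_pressure", "show_pressure_checklist"),
        "valve_off_failed":    ("close_valve", "show_manual_override_steps"),
    },
    "verify_pressure": {
        "pressure_ok":         ("task_complete", "show_signoff_form"),
    },
}

def next_step(model, state, event):
    """Given a task model, the worker's current state, and an incoming event,
    return the new state and the information to deliver."""
    transitions = model.get(state, {})
    return transitions.get(event, (state, None))  # unrecognized events leave the state alone

# Telemetry confirms the valve was turned off, so the model advances the task.
print(next_step(TASK_MODEL, "close_valve", "valve_off_confirmed"))
# -> ('verify_pressure', 'show_pressure_checklist')
```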

From an application perspective, this is a complete turnaround.  Instead of considering information the driver of activity, activity is the driver of information.  Needless to say, we’d have to incorporate process state/event logic into our new system, and we’d also have to have real-time event processing and classification.  Until we have that, we have no framework for the structure of the applications of the future, and no real way of knowing what would have to be done to software and hardware to run them.
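The classification piece could start as nothing more than a mapping from raw telemetry into the events the task model understands.  A rough sketch, with made-up sensor names, fields, and thresholds:

```python
def classify(telemetry):
    """Turn a raw telemetry reading (a dict) into a task-model event, or None.
    The sensor names, fields, and thresholds here are purely illustrative."""
    if telemetry.get("sensor") == "valve_7" and telemetry.get("position") == "OFF":
        return "valve_off_confirmed"
    if telemetry.get("sensor") == "valve_7" and telemetry.get("position") == "ON":
        return "valve_off_failed"
    if telemetry.get("sensor") == "line_pressure" and telemetry.get("psi", 0.0) < 5.0:
        return "pressure_ok"
    return None  # not relevant to this task; ignore or log it

print(classify({"sensor": "valve_7", "position": "OFF"}))  # -> valve_off_confirmed
```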

The converse is true too.  We could change infrastructure to make it hyperconverged or composable or into Infrastructure 2.0 or 3.0 or anything else, and if the applications are the same and the worker behavior is the same, we’re probably going to see nothing but a set of invoices for changes that won’t produce compensatory benefits.

Obviously, it’s difficult to say what the best infrastructure model is until we’ve decided on the applications and the way they’ll be structured.  We can, though, raise some points or questions.

First, I think that the central requirement for the whole point-of-activity picture is a cloud-hosted agent process that’s somewhere close (in network latency terms) to the worker.  Remember that this is supposed to be state/event processing, so it’s got to be responsive.  Hosting this in the mobile device would impose an unnecessary level of data traffic on the mobile connection.  The agent represents the user, maintains user context, and acts as the conduit through which information and events flow.
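As a sketch only, you could picture that agent as a small event loop hosted at an edge point near the worker: it owns the task model, consumes events, and pushes information to the device.  The queue and the “device” here are stand-ins for whatever messaging a real system would use:

```python
import queue

class ConsoleDevice:
    """Stand-in for the push channel to the worker's handset."""
    def push(self, info):
        print("PUSH to worker:", info)

class WorkerAgent:
    """Cloud-hosted proxy for one worker: holds the worker's context,
    consumes events, and decides what information to push to the device."""
    def __init__(self, task_model, start_state, step_fn, device):
        self.task_model = task_model
        self.state = start_state      # the worker's current point in the task
        self.step_fn = step_fn        # e.g. the next_step() function sketched earlier
        self.device = device
        self.events = queue.Queue()   # events from IoT, location, and back-end systems

    def post(self, event):
        self.events.put(event)

    def run_once(self):
        event = self.events.get()
        self.state, info = self.step_fn(self.task_model, self.state, event)
        if info:
            self.device.push(info)    # deliver without the worker having to ask
```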

We also need a set of context-generating tools, which can be in part derived from things like the location of the mobile device and in part from other local telemetry that would fall into the IoT category.  Anything that has an association with a job could be given a sensor element that at the minimum reports where it is.  The location of the worker’s device relative to the rest of this stuff is the main part of geographic context.
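Geographic context in particular is easy to sketch: compare the device’s reported position against the positions the tagged assets report, and turn proximity into events the agent can consume.  The coordinates, asset names, and radius below are invented for illustration:

```python
from math import radians, sin, cos, asin, sqrt

def distance_m(lat1, lon1, lat2, lon2):
    """Great-circle (haversine) distance between two points, in meters."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371000 * asin(sqrt(a))

# Hypothetical asset positions reported by their sensor tags.
ASSETS = {"valve_7": (40.7128, -74.0060), "panel_3": (40.7131, -74.0055)}

def proximity_events(device_lat, device_lon, radius_m=25):
    """Yield a 'near_<asset>' event for every tagged asset within radius_m of the device."""
    for name, (lat, lon) in ASSETS.items():
        if distance_m(device_lat, device_lon, lat, lon) <= radius_m:
            yield f"near_{name}"

print(list(proximity_events(40.7129, -74.0059)))  # -> ['near_valve_7']
```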

The agent process is then responsible for drawing on contextual events, and also drawing on information.  Instead of the worker asking for something, the job context would simply deliver it.  The implication of this is that the information resources of the company would be best considered as microservices subservient to the agent process map (the state/event stuff).  “If I’m standing in front of a service panel with the goal of flipping switches or running tests, show me the panel and what I’m supposed to do highlighted in some way.”  That means the “show-panel” and “highlight-elements” microservices are separated from traditional inquiry contexts, which might be more appropriate to what a worker would look at from the desk, before going out into the field.
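In code terms, that means a state/event entry would name the microservices to invoke rather than a canned screen, and the agent would compose the response.  The endpoints and parameters below are purely hypothetical; the point is only that the map, not the application, decides what gets called:

```python
import requests  # assumes the microservices expose simple HTTP endpoints

# A state/event entry names the microservices to call, not a fixed screen.
PANEL_ACTIONS = {
    "near_panel_3": [
        ("https://services.example.com/show-panel",         {"panel": "panel_3"}),
        ("https://services.example.com/highlight-elements", {"panel": "panel_3", "task": "breaker_test"}),
    ],
}

def compose_response(event):
    """Call each microservice bound to this event and merge the results
    into one payload for the worker's device."""
    payload = {}
    for url, params in PANEL_ACTIONS.get(event, []):
        resp = requests.get(url, params=params, timeout=2)
        payload[url.rsplit("/", 1)[-1]] = resp.json()
    return payload
```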

You can see how the cloud could support all of this, meaning that it could support an application model where components of logic (microservices) are called on dynamically based on worker activity.  The number of instances of a given service you might need, and where you might need them, would depend on instantaneous workload.  That’s a nice cloud-friendly model, and it pushes dynamism deeper than just a GUI, back to the level of supporting application technology and even information storage and delivery.

Information, in this model, should be viewed as a combination of a logical repository and a series of cache points.  The ideal approach to handling latency and response time is to forward-cache the things you’ll probably need, as soon as that probability rises above a set level.  You push data toward the user to lower delivery latency.
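A forward cache of that kind is simple to sketch: when the predicted probability that an item will be needed crosses a threshold, copy it from the central repository to the cache point near the worker.  Where the probability comes from is left abstract here; in practice it would be driven by the task model and the worker’s context:

```python
class ForwardCache:
    """Cache point near the worker; pre-positions data that will probably be needed soon."""
    def __init__(self, repository, threshold=0.7):
        self.repository = repository   # any dict-like central store
        self.threshold = threshold
        self.local = {}                # the forward cache itself

    def predict(self, key, probability):
        """Called as context changes; prefetch once the need looks likely enough."""
        if probability >= self.threshold and key not in self.local:
            self.local[key] = self.repository[key]   # push data toward the user

    def get(self, key):
        """Serve from the cache point if we guessed right, else fall back to the repository."""
        return self.local[key] if key in self.local else self.repository[key]

# Example with an in-memory stand-in for the repository:
repo = {"panel_3_diagram": "<diagram bytes>"}
cache = ForwardCache(repo)
cache.predict("panel_3_diagram", 0.85)   # worker is heading toward panel 3
print(cache.get("panel_3_diagram"))      # served from the forward cache
```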

The relationship between this productivity-driven goal set, which would at least hold a promise of significant increases in IT spending, and things like “hyperconvergence” or “composable infrastructure” is hard to establish.  Hyperconvergence is a data center model, and so is composable infrastructure.  It’s my view that if there is any such thing as either (meaning if they’re not totally hype-driven rehashes of other technology) then it would have to be a combination of a highly integrated resource virtualization software set (network, compute, and storage) and a data center switching architecture that provided for extremely low latency.  A dynamic application, a dynamic cloud, could in theory favor one or both, but it would depend on how distributed the data centers were and how the cloud itself supported dynamism and composability.  Do you compose infrastructure, really, or virtualize it?  The best answer can’t come from below, only from above where the real benefits—productivity—are generated.

Which leads back to one of my original points.  You can’t create benefits just by pushing the platforms that could attain them; you have to push the entire benefit food chain.  The cloud, as I said in opening this blog, is a different model of computing, but no model of computing defines itself.  The applications and the business roles we support with the cloud will define just what it is, how it evolves, and how it’s hosted.  We need to spend more time thinking about Enterprise Architecture and software architecture, and less time trying to anticipate the answer before we’ve done that thinking.