How Events Evolve Us to “Fabric Computing”

If you read this blog regularly, you know I believe the future of IT lies in event processing.  In my last blog, I explained what was happening and how the future of cloud computing, edge computing, and IT overall is tied to events.  Event-driven software is the next logical progression in an IT evolution I’ve followed for half a century, and it’s also the only possible response to the drive to reduce human intervention in business processes.  If this is all true, then we need to think about how event-driven software would change hosting and networking.

I’ve said in previous blogs that one way to view the evolution of IT was to mark the progression of its use from something entirely retrospective—capturing what had already happened—to something intimately bound with the tasks of workers or users.  Mobile empowerment, which I’ve characterized as “point-of-activity” empowerment, seems to take the ultimate step of putting information processing in the hands of the worker as they perform their jobs.  Event processing takes things to the next level.

In point-of-activity empowerment, a worker might use location services or near-field communications to know when something they’re looking for is nearby.  This could be done in the form of a “where is it?” request, but it could also be something pushed to a worker as they moved around.  The latter is a rudimentary form of event processing, because of its asynchronicity.  Events, then, can get the attention of a worker.

It’s not a significant stretch to say that events can get the attention of a process.  There’s no significant stretch, then, to presume the process could respond to events without human intermediation.  This is actually the only rational framework for any form of true zero-touch automation.  However, it’s more complicated than simply kicking off some vast control process when an event occurs, and that complexity is what drives IT and network evolution in the face of an event-driven future.

Shallow or primary events, the stuff generated by sensors, are typically simple signals of conditions that lack the refined contextual detail needed to actually make something useful happen.  A door sensor, after all, knows only that a door is open or closed, not whether it should be.  To establish the contextual detail needed for true event analysis, you generally need two things—state and correlation.  State is the broad context of the event-driven system itself: the alarm is set (state), therefore an open door is an alarm condition.  Correlation is the relationship between events in time: the outside door opened, and now an interior door has opened, therefore someone is moving through the space.
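
To make that concrete, here’s a minimal sketch in Python of how state and correlation together turn a raw door-sensor signal into something meaningful.  It’s illustrative only; the event names and the fixed correlation window are hypothetical.

```python
from dataclasses import dataclass
from time import time

@dataclass
class Event:
    source: str       # e.g. "outside_door" or "inner_door"
    kind: str         # "opened" or "closed"
    timestamp: float

class AlarmCorrelator:
    """Illustrative only: combines system state with event correlation."""
    CORRELATION_WINDOW = 30.0   # seconds within which two openings imply movement

    def __init__(self, armed: bool = False):
        self.armed = armed              # the "state" part: is the alarm set?
        self.last_outside_open = None   # remembered for correlation

    def handle(self, event: Event) -> list[str]:
        alerts = []
        if event.kind != "opened":
            return alerts
        # State: an open door only matters if the system is armed.
        if self.armed:
            alerts.append(f"ALARM: {event.source} opened while armed")
        # Correlation: outside door, then inner door, within the window.
        if event.source == "outside_door":
            self.last_outside_open = event.timestamp
        elif (event.source == "inner_door"
              and self.last_outside_open is not None
              and event.timestamp - self.last_outside_open <= self.CORRELATION_WINDOW):
            alerts.append("INFO: someone appears to be moving through the space")
        return alerts

correlator = AlarmCorrelator(armed=True)
now = time()
print(correlator.handle(Event("outside_door", "opened", now)))
print(correlator.handle(Event("inner_door", "opened", now + 5.0)))
```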

I’ve used a simple example of state and correlation here, but in the real world both are likely to be much more complicated.  There’s already a software discipline called “complex event processing” that reflects just how many different events might have to be correlated to do something useful.  We also see complex state notions, particularly in the area of things like service lifecycle automation.  A service with a dozen components is actually a dozen state machines, each driven by events, and each potentially generating events to influence the behavior of other machines.
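
A toy sketch of that idea, assuming nothing about any real orchestration system, might look like this: each component is a small state machine driven by events, and each transition emits an event that drives the service-level machine.

```python
# Hypothetical sketch: a service with several components, each a state
# machine driven by events, each emitting events that influence the
# service-level state machine.
class Component:
    def __init__(self, name, notify):
        self.name = name
        self.state = "ordered"          # ordered -> deploying -> active
        self._notify = notify           # callback used to emit events upward

    def handle(self, event):
        if event == "deploy" and self.state == "ordered":
            self.state = "deploying"
        elif event == "deployed" and self.state == "deploying":
            self.state = "active"
            self._notify({"type": "component_active", "component": self.name})

class Service:
    def __init__(self, component_names):
        self.state = "assembling"       # assembling -> operational
        self.components = {n: Component(n, self.handle) for n in component_names}

    def handle(self, event):
        # Events emitted by one state machine drive the behavior of another.
        if event["type"] == "component_active":
            if all(c.state == "active" for c in self.components.values()):
                self.state = "operational"

svc = Service(["vpn", "firewall", "hosting"])
for comp in svc.components.values():
    comp.handle("deploy")
    comp.handle("deployed")
print(svc.state)   # "operational" once every component has reported active
```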

Another complicating factor is that both state and correlation are, so to speak, in the eye of the beholder.  Processing an event, in the complete sense, traces a complex topological map that links the primary or shallow event to a series of processing chains.  What those chains involve will depend on the goal of the user.  A traffic light change in Manhattan, for example, may be relevant to someone nearby, but less so to someone in Brooklyn and not at all to someone in LA.  A major traffic jam at the same point might have relevance to our Brooklyn user if they’re headed to Manhattan, or even to LA people who might be expecting someone who lives in Manhattan to make a flight to visit them.  The point is that the things that matter will depend on who they matter to, and the range of events and nature of processes have that same dependency.
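
A small, purely hypothetical sketch shows the point: the same primary event fans out only to the subscribers whose own context says it matters, and each subscriber attaches a different processing chain to it.

```python
# Hypothetical sketch: relevance is defined per subscriber, and so is the
# processing chain that a matching event triggers.
subscribers = [
    {"name": "nearby_driver", "matters": lambda e: e["borough"] == "Manhattan",
     "chain": lambda e: f"reroute around {e['location']}"},
    {"name": "brooklyn_user", "matters": lambda e: e["severity"] >= 3,
     "chain": lambda e: f"warn before crossing the bridge: {e['kind']}"},
    {"name": "la_relative",   "matters": lambda e: e["severity"] >= 5,
     "chain": lambda e: "check whether the flight will be missed"},
]

def publish(event):
    for sub in subscribers:
        if sub["matters"](event):                 # relevance is per-subscriber
            print(sub["name"], "->", sub["chain"](event))

publish({"kind": "signal_change", "borough": "Manhattan",
         "location": "34th & 7th", "severity": 1})   # only the nearby driver cares
publish({"kind": "traffic_jam",   "borough": "Manhattan",
         "location": "34th & 7th", "severity": 4})   # the Brooklyn user cares too
```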

When you look at the notion of what I will call “public IoT”, where sensor-driven processes are available as event sources to a large number of user applications, there is clearly an additional dimension: the distribution of events at scale.  Everyone can’t be reading the value of a simple sensor directly, or you’d have the equivalent of a denial-of-service attack.  In addition, primary events (as I’ve said) need interpretation, and it makes little sense to have thousands of users do the same interpretation and correlation.  It’s more sensible to have a process do the heavy lifting and dispense the digested data as an event.  Thus, there’s also an explicit need for secondary events, events generated by the correlation and interpretation of primary events.
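
Here’s a minimal sketch of that division of labor.  The names and the batch-of-five threshold are hypothetical; the point is that one digester touches the primary events and everyone else consumes the secondary event it publishes.

```python
import statistics

# Hypothetical sketch: one digester reads the primary (shallow) events and
# publishes a secondary (deep) event, so thousands of applications never
# touch the sensor itself.
class Digester:
    def __init__(self):
        self.readings = []
        self.consumers = []             # downstream processes, not raw-sensor readers

    def subscribe(self, callback):
        self.consumers.append(callback)

    def on_primary(self, reading):
        """Called once per sensor report, no matter how many consumers exist."""
        self.readings.append(reading)
        if len(self.readings) >= 5:     # interpret/correlate a batch of raw signals
            secondary = {
                "type": "congestion_level",
                "value": statistics.mean(self.readings),
            }
            self.readings.clear()
            for deliver in self.consumers:
                deliver(secondary)      # fan out the digested, secondary event

d = Digester()
d.subscribe(lambda e: print("navigation app got", e))
d.subscribe(lambda e: print("city dashboard got", e))
for raw in [12, 15, 40, 55, 61]:        # primary events from one sensor
    d.on_primary(raw)
```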

If we could look at an event-driven system from above, with some kind of special vision that let us see events flying like little glowing balls, what we’d likely see in most event-driven systems is something like nuclear fission.  A primary neutron (a “shallow event”) is generated from outside, and it collides with a process near the edge to generate secondary neutrons, which in turn generate even more when they collide with processes.  These are the “deep events”, and it’s our ability to turn shallow events from cheap sensors into deep events that can be secured and policy-managed that determines whether we can make event-driven systems match goals and public policies at the same time.

What happens in a reactor if we have a bunch of moderator rods stuck into the core?  Neutrons don’t hit their targets, and so we have a slow decay into the steady state of “nothing happening”.  In an event system, we need a unified process-and-connection fabric in which events can collide with processes with minimal delay and loss.

To make event-driven systems work, you have to be able to do primary processing of the shallow events near the edge, because otherwise the control loop needed to generate feedback in response to events can get way too long.  That suggests that you have a primary process that’s hosted at the edge, which is what drives the notion of edge computing.  Either enterprises have to offer local-edge hosting of event processes in a way that coordinates with the handling of deeper events, or cloud providers have to push their hosting closer to the user point of event generation.
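
A back-of-the-envelope sketch, with made-up latency figures, shows why the placement matters: the edge keeps the control loop short, and only a digested, deeper event needs to travel further.

```python
# Hypothetical figures only: compare the control loop when a primary event
# is handled at the local edge versus backhauled to a distant cloud region.
EDGE_ONE_WAY_MS  = 5     # sensor to a local edge host
CLOUD_ONE_WAY_MS = 40    # sensor to a regional cloud data center
PROCESS_MS       = 10    # time to evaluate the event and choose a response

def control_loop(one_way_ms):
    # the event travels in, gets processed, and the actuation travels back
    return 2 * one_way_ms + PROCESS_MS

print("edge loop :", control_loop(EDGE_ONE_WAY_MS), "ms")    # 20 ms
print("cloud loop:", control_loop(CLOUD_ONE_WAY_MS), "ms")   # 90 ms

def handle_at_edge(event, forward_deep_event):
    # Fast path: decide and actuate locally, inside the short loop.
    action = "close_valve" if event["pressure"] > 100 else "no_op"
    # Slow path: forward a digested, deeper event for correlation elsewhere.
    forward_deep_event({"type": "pressure_excursion", "action_taken": action})
    return action

print(handle_at_edge({"pressure": 130}, forward_deep_event=print))
```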

A complicating factor here is that we could visualize the real world as a continuing flood of primary, shallow events.  Presumably, various processes would do correlation and analysis, and the distribution of secondary, “deeper” events would then trigger other processes.  Where does this happen?  The trite response is “where it’s important”, which means anywhere at all.  Cisco’s fog term might have more of a marketing genesis than a practical one, but it’s a good definition for the processing conditions we’re describing.  Little islands of hosting, widely distributed and highly interconnective, seem the best model for an event-driven system.  Since event processing is so linked with human behavior, we must assume that all this islands-of-hosting stuff would be shifting about as interests and needs changed.  It’s really about building a compute-and-network fabric that lets you run stuff where it’s needed, no matter where that happens to be, and change it in a heartbeat.

Some in the industry may have grasped this years ago.  I recall asking a Tier One exec where he thought his company would site cloud data centers.  With a smile, he said “Anywhere we have real estate!”  If the future of event processing is the “fog”, then the people with the best shot at controlling it are those with a lot of real estate to exploit.  Obviously, network operators could install stuff in their central offices and even in remote vault locations.  Could someone like Amazon stick server farms in Whole Foods locations?  Could happen.

Real estate, meaning hosting locations, is a big piece of the “who wins?” puzzle.  Anyone can run out and rent space, but somebody who has real estate in place and can exploit it at little or no marginal cost is clearly going to be able to move faster and further.  If that real estate is already networked, so much the better.  If fogs and edges mean a move out of the single central data center, it’s a move more easily made by companies who have facilities ready to move to.

That’s because our fog takes more than hosting.  You would need your locations to be “highly interconnective”, meaning supported by high-capacity, low-latency communications.  In most cases, that would mean fiber optics.  So, our hypothetical Amazon exploitation of Whole Foods hosting would also require a lot of interconnection of the facilities.  Not to mention, of course, an event-driven middleware suite.  Amazon is obviously working on that, and presumably has plans to supply the needed connectivity; they’re perhaps the furthest along of anyone in defining an overall architecture.

My attempts to model all of this show some things that are new and some that are surprisingly traditional.  The big issue in deciding the nature of the future compute/network fabric is the demand density of the geography, roughly equivalent to the GDP per square mile.  Where demand density is high, the future fabric would spread fairly evenly over the whole geography, creating a truly seamless virtual hosting framework that’s literally everywhere.  Where it’s low, you would have primary event processing distributed thinly, then an “event backhaul” to a small number of processing points.  There’s not enough revenue potential for something denser.
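
The model itself isn’t reproduced here, but a toy calculation with invented numbers and thresholds shows the shape of the decision it’s making.

```python
# Toy calculation only; the threshold and the regional figures are invented
# for illustration, not taken from the model described above.
def fabric_plan(gdp_billions, area_sq_miles, dense_threshold=50.0):
    # demand density: roughly GDP per square mile, here in $M per square mile
    demand_density = gdp_billions * 1000 / area_sq_miles
    if demand_density >= dense_threshold:
        return demand_density, "seamless fabric: hosting spread across the whole geography"
    return demand_density, "thin edge layer plus event backhaul to a few processing points"

for region, gdp, area in [("dense metro area", 1500, 300), ("rural state", 90, 70000)]:
    density, plan = fabric_plan(gdp, area)
    print(f"{region}: ~${density:,.0f}M per sq mi -> {plan}")
```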

This is all going to happen, in my view, but it’s also going to take time.  The model says that by 2030, we’ll see significant distribution of hosting toward the edge, generating perhaps a hundred thousand incremental data centers.  In a decade, that number could double, but it’s very difficult to look that far out.  And the software/middleware needed for this?  That’s anyone’s guess at this point.  The esoteric issues of event-friendly architecture aren’t being discussed much, and even less often in language that the public can absorb.  Expect that to change, though.  The trend, in the long term to be sure, seems unstoppable.