How an Event-Centric Cloud Model Might Influence Edge Devices

If we assume that the notion of an event-driven cloud is correct, we have to ask ourselves what that cloud model would do to the way edge devices get information and content.  If the cloud is a new computing paradigm, does the paradigm extend to the edge?  How would it then impact the way we build software and deliver services?  The answers are highly speculative at this point, but interesting.

Right now, consumers and workers alike tend to interact with information and content resources through a URL click.  This invokes a “connection”, a session, between the user and the resource, and that session is sustained for the duration of the relationship.  In an event model, things would have to work differently, but to see why (and how they would then have to work) we’ll need an example.

Let’s say we have a smartphone user walking down a city street.  In a traditional model of service, the user would “pull” information from the phone, looking for a location or perhaps a retail store.  In an event-driven model the user might have information pushed to the device instead, perhaps based on “shopping habits” or “recent searches”.  Indeed, this sort of push relationship is the most plausible driver for wearables, since hauling the phone out to look at messages would be intrusive to many.

Making this sort of thing work, then, is at least a reasonable goal.  Let’s start with the notion of “push”, which would mean having events sent to the user, representing things that might warrant attention.  It’s easy to envision a stream of events going to the user’s phone, but is that really logical, or optimal?  Probably not.

A city street might represent a source for hundreds or thousands of IoT “events” per block.  Retail stores might generate more than that, and then we have notifications from other users that they’re in the area, alerts on traffic or security ahead, and so forth.  Imagining tens of thousands of events in a single walk is hardly out of line, but it’s probably out of the question in terms of smartphone processing.  At the least, looking all that stuff up just to decide if it’s important would take considerable phone power.  Then you have the problem of the network traffic that sending those events to every user nearby would create.

Logically speaking, it would seem that event-based applications would accelerate the trend toward a personal agent resident in the cloud, a trend that’s already in play with voice agents like Apple’s Siri or Amazon’s Alexa or “Hey, Google”.  It’s not a major step from today’s capabilities to imagine a partner process to such an agent in the cloud, or even cloud-hosting of the entire agent process.  You tell your agent what you want and the agent does the work.  That’s the framework we’d probably end up with even without events.
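A minimal sketch may make the agent idea concrete.  The names here (Event, PersonalAgent, the topic strings) are purely illustrative assumptions, not any real API; the point is simply that the agent, not the handset, decides which events merit the user’s attention:

```python
from dataclasses import dataclass, field

@dataclass
class Event:
    """A hypothetical event, e.g. from an IoT sensor or a retail beacon."""
    topic: str       # e.g. "retail.sale", "traffic.alert"
    payload: dict

@dataclass
class PersonalAgent:
    """Sketch of a cloud-hosted personal agent.

    The agent holds the user's interests and decides which of the many
    events flowing through the cloud are worth pushing to the user's
    device, rather than streaming everything to the handset.
    """
    interests: set
    inbox: list = field(default_factory=list)

    def on_event(self, event: Event) -> None:
        # Only events matching the user's interests reach the device.
        if event.topic in self.interests:
            self.inbox.append(event)

agent = PersonalAgent(interests={"traffic.alert", "retail.sale"})
agent.on_event(Event("retail.sale", {"store": "BookNook", "off": "20%"}))
agent.on_event(Event("weather.minor", {"cond": "light breeze"}))
print([e.topic for e in agent.inbox])   # only the retail event survives
```

In a real system the “interests” would presumably be learned from shopping habits and recent searches rather than set explicitly, but the division of labor is the same: the filtering lives in the cloud, next to the event sources.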

What events do is create value for in-cloud correlation.  If there’s a cloud agent representing the user, then there’s a way of correlating events to create useful context, rather than flooding users with raw information like an out-of-control visual experience.  We can do, in the cloud, what is impractical in the smartphone.  Best of all, we can do it in a pan-user way, a way that recognizes that “context” isn’t totally unique to each user.

Say our smartphone user is at a concert.  There’s little doubt that the thing that defines the user’s focus and context at that moment is the concert, and that’s just what is defining those things for every user who attends.  News stories also create context; everyone who’s viewing an Amber Alert or watching breaking news is first and foremost a part of the story those channels convey.

If there are “group contexts” then it makes sense to think of context and event management as a series of processes linked in a hierarchy.  For example, you might have “concert” as a collective context, and then perhaps divide the attendees by where they are in the venue, by age, etc.  In our walk-on-the-street example, you might have a “city” context, a “neighborhood” and a “block”.  These contexts would be fed into a user-specific personal-agent process.

I say “hierarchy” here not just to describe the way that contexts are physically related, but also the way events would flow between them.  It would make sense for a city context to be passed to neighborhood contexts, and from there on down.  The purpose of this is to ensure that we don’t overload personal-agent processes with stuff that’s not helpful or necessary.
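That filtering hierarchy might be sketched as follows.  Everything here is a hypothetical assumption (node names, event fields, predicates); each ContextNode stands in for a cloud context process, and a leaf stands in for a personal-agent process:

```python
class ContextNode:
    """Hypothetical context process (city, neighborhood, block...).

    Each node filters the events it receives from above and passes only
    locally relevant ones down the hierarchy, so the personal-agent
    processes at the leaves are never flooded.
    """
    def __init__(self, name, is_relevant):
        self.name = name
        self.is_relevant = is_relevant   # predicate on events
        self.children = []               # sub-contexts, or agents at the leaves
        self.delivered = []

    def add(self, child):
        self.children.append(child)
        return child

    def publish(self, event):
        if not self.is_relevant(event):
            return                       # drop here; don't burden the subtree
        if self.children:
            for child in self.children:
                child.publish(event)
        else:
            self.delivered.append(event) # leaf = personal-agent process

city  = ContextNode("city", lambda e: True)
soho  = city.add(ContextNode("soho", lambda e: e["area"] == "soho"))
block = soho.add(ContextNode("block-5", lambda e: e["block"] == 5))
agent = block.add(ContextNode("user-agent", lambda e: True))

city.publish({"area": "soho", "block": 5, "msg": "street fair ahead"})
city.publish({"area": "midtown", "block": 9, "msg": "traffic alert"})
print(len(agent.delivered))   # prints 1; only the SoHo event gets through
```

The design choice the sketch illustrates is that each level discards what its subtree doesn’t need, so the per-agent event rate shrinks at every step down the hierarchy.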

In this sort of world, a smartphone or PC user doesn’t need to access “the web” nearly as much; they’re interacting with their personal agent and context agents, which are cloud processes.  It’s pretty easy to provide a completely secure link to a single cloud process.  It’s also pretty easy to secure cloud processes’ connections with each other, and to authenticate the services these processes offer to other processes (if you’re interested in a “consensus” model, HERE is how the original ExperiaSphere project approached it back in 2007).  Thus, many of the security issues that arise on the Internet today can’t really happen; all the identities and relationships are secured by the architecture.

This approach doesn’t define an architecture for context creation or personal agency, or the specific method for interconnection; those issues can be addressed when someone wants to implement the architecture.  What the approach does define, in effect, is the relationship between personal agent and user appliance.  It’s what the name suggests: agency.  In some cases, the agent might provide a voice or visual response, and in others it might do something specific.  Whatever happens, though, the agent is acting for the user.  We see that now with Amazon’s Alexa in particular; some people tell me they talk to it almost as they would to a person.

That, I think, is obviously where we’re headed with all of this.  The more sophisticated our processing and information resources are, and the more tightly they’re bound to our lives, the harder it is to surmount the artificial barriers created by explicit man-machine interactions like clicking a URL.  We want our little elves to be anthropomorphic, and our devices likewise.

The biggest trend driving networking today is the personalization of our interaction with our devices and the information resources those devices link us with.  The second-biggest trend is the growth in contextual information that could be used to support that personalization, in the form of events representing conditions or changes in conditions.  The biggest trend in the cloud is the shift in focus of cloud tools toward processing and exploiting these contextual and event resources.  The second trend, clearly, is driven by the first.

As contextual interpretation of events becomes more valuable, it follows that devices will become more contextual/event-aware themselves.  The goal won’t be to displace the role of the cloud agent but to supplement that agent when it’s available and substitute for it when the user is out of contact.  The supplementation will obviously be the most significant driver because most people will never be out of touch.

Devices are valuable sources of context for three reasons.  First, since they’re with the user, they can be made aware of location and local conditions.  Second, because the device can be the focus of several parallel but independent connections, the device may be the best (or only) place where events from all of them can be captured.  Texting, calling, and social-media connections all necessarily involve the device itself.  Third, the device may be providing the user a service that doesn’t involve communications per se.  Taking a picture is one smartphone example; movement of the device or changes in its orientation are others.  An example for laptops is running a local application, including writing an email.

The clearest impact of event-centric cloud processing is event-centric thinking in the smartphone.  Everything a user does is a potential event, something to be contextualized in the handset or in the cloud, or both.  Since I think that contextualization is hierarchical, as I’ve noted above, handset events would likely be correlated there.  The easy example is a regular change in GPS position coupled with the orientation shifts associated with walking or driving.  This combination of things lets the device “know” whether the user is on foot or in a vehicle.  You could further correlate the position with the locations of public transport vehicles to tell whether the user is in a car or on a bus or train.  You can learn a lot, and that learning means you can provide the user with more relevant information, which increases your value as a service provider.
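A toy illustration of that kind of handset correlation, combining GPS-derived speed with accelerometer-style step events.  The thresholds are purely illustrative assumptions, not calibrated values from any real sensor-fusion API:

```python
# Hypothetical handset-side correlation: combine GPS displacement with
# step-counter events to guess whether the user is walking or riding.
def infer_mode(speed_m_s: float, steps_per_min: float) -> str:
    """Toy classifier; the cutoffs below are illustrative guesses."""
    if speed_m_s < 0.2:
        return "stationary"
    # Steady stepping at pedestrian speed suggests the user is on foot.
    if steps_per_min > 60 and speed_m_s < 3.0:
        return "walking"
    # Significant speed without stepping suggests a vehicle.
    return "vehicle"

print(infer_mode(1.4, 110))   # typical walking pace: "walking"
print(infer_mode(13.0, 0))    # about 47 km/h, no steps: "vehicle"
```

A real handset would do this with a trained model over many sensor streams, but even this crude correlation shows how raw events become a single, far more useful, contextual event for the cloud agent.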

The net of this is that devices, particularly smartphones, are going to transform to exploit cloud agency and contextual processing of events.  But even laptops will be impacted, becoming more event-centric with respect to application context and social awareness.  We can already see this in search engines, and every step in its expansion offers users, workers, and businesses more value from IT.  It’s this value increase that will drive any increases in spending, so it’s important for us all.