We need a new way to look at information technology, given the number of different forces driving change. Categories like “hardware” and “software” don’t suit a virtual world where it’s actually rather difficult to tell what or where something is. Talking about “computing power” is complicated when the computer is virtual and is made up of variable, distributed resources. And consider “serverless” computing (see my blog of yesterday). Is the network indeed the computer? Is the computer actually the network? We’re groping around in definitions, often rooted in the past, as we try to come to terms with where we are and where we’re going.
I propose an information-centric approach, one that takes the general concept of information and divides it into three dimensions. First is the what-we-know dimension, meaning the scope of information we have to process. This is expanded by contextual analysis, event processing, IoT, or whatever you like. Second is the where-we-know-it dimension, meaning the place where information technology is applied to information. Think edge, core, cloud, fog, and so forth. Finally, we have the information-to-insight dimension, which is the analytics and AI dimension. Our challenge is that all of these dimensions are in a period of major change, and the combinatory possibilities are endless. We’re not good at dealing with endlessness, especially in tech and business planning. That’s why we keep trying to classify things, to evolve from things we already know.
The what-we-know space is growing in no small part because of the increased portability of technology. Our phones, for example, know a lot about where we have been, how fast we’re moving, and even, in some cases, whether we have our “normal” gait or a different one. This information might have been available in a subjective way in the past, but recording it in detail and transcribing it into a technology-useful form could have taken all the hours of the day. Imagine if you had to accurately trace a walk you’re taking in terms of route, elevation gain and loss, pace, and so on. Because we have tech devices with us, and in things, we have IT access to a mass of information we never could have had, in any practical way, before.
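To make that concrete, here’s a minimal sketch of the walk-tracing a phone now does for free. The `Sample` record and `walk_summary` function are hypothetical names I’ve invented, and the math is simplified (spherical-Earth distance, no filtering of sensor noise):

```python
import math
from dataclasses import dataclass

@dataclass
class Sample:
    lat: float   # degrees
    lon: float   # degrees
    elev: float  # meters
    t: float     # seconds since start of walk

def haversine_m(a: Sample, b: Sample) -> float:
    """Great-circle distance between two samples, in meters."""
    r = 6_371_000  # mean Earth radius, meters
    p1, p2 = math.radians(a.lat), math.radians(b.lat)
    dp = p2 - p1
    dl = math.radians(b.lon - a.lon)
    h = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(h))

def walk_summary(samples: list[Sample]) -> dict:
    """Accumulate route distance, elevation gain/loss, and pace."""
    dist = gain = loss = 0.0
    for prev, cur in zip(samples, samples[1:]):
        dist += haversine_m(prev, cur)
        delta = cur.elev - prev.elev
        gain += max(delta, 0.0)
        loss += max(-delta, 0.0)
    elapsed = samples[-1].t - samples[0].t
    return {
        "distance_m": dist,
        "elevation_gain_m": gain,
        "elevation_loss_m": loss,
        "pace_min_per_km": (elapsed / 60) / (dist / 1000) if dist else 0.0,
    }
```

Tedious to do by hand, trivial for a device that samples position every few seconds. That’s the whole point: the what-we-know space grows because capture is now free.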
The where-we-know-it dimension is essentially defined by the fact that the Internet and its associated broadband and mobile technology build a parallel universe that is truly parallel: it spreads wherever we are and wherever we go. Because portable technology is usually “online”, we now have a parallel IT universe that extends across the globe (and of course beyond, though most of us aren’t going that far). Those with portable devices live in the usual real world and also in this parallel online world, and since information is being generated all over that online world, it makes sense to presume that the same ubiquitous online-ness would be used to distribute processing more broadly. Do we want to send information about someone on the east or west coast to St. Louis for processing? Why not process where we are?
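A trivial sketch of that “process where we are” idea: pick the processing site nearest the user rather than shipping everything to a fixed central one. The site names and coordinates below are invented, and raw geographic distance is only a crude stand-in for network proximity:

```python
from dataclasses import dataclass

@dataclass
class Site:
    name: str
    lat: float
    lon: float

# Hypothetical processing sites: two edge locations and one central one.
SITES = [
    Site("east-edge", 40.7, -74.0),   # New York area
    Site("west-edge", 37.8, -122.4),  # San Francisco area
    Site("central", 38.6, -90.2),     # St. Louis
]

def nearest_site(user_lat: float, user_lon: float) -> Site:
    # Squared coordinate distance as a crude proxy for network latency.
    return min(SITES, key=lambda s: (s.lat - user_lat) ** 2 + (s.lon - user_lon) ** 2)

print(nearest_site(40.6, -73.9).name)  # a user near NYC gets "east-edge"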
The final information-to-insight dimension is about making use of the information we now have. You can look at this dimension in two ways—by mission or by tools. The tool-oriented perspective is one of analytics and AI, which tells you how you make the conversion from raw data to something that has personal or business utility. The mission perspective is about how you apply information to create insight, a perspective which can then guide tool selection. The best example of this perspective is what I’ve called contextual relationships. I ask “What’s that?” and I have a question that can be answered only in context.
If context is so important, then it may be that we can add some meat to the requirements for the future of information technology by starting there. We have five physical senses (sight, hearing, smell, taste, touch), and we use them to judge our place in the world. Ideally, contextual processing would allow us to carry IT-equivalent versions of that sensory information into our online world. Since vision is our strongest sense, it’s almost imperative that we be able to recognize, in the online world, where we are and what we’re seeing. We already see software that can identify landmarks from pictures, so the technology here is available and only needs enhancement in accuracy and speed of recognition.
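The image side of this is already approachable with off-the-shelf models. Here’s a hedged sketch using a pretrained ImageNet classifier from torchvision as a stand-in; a real landmark recognizer would need a model trained specifically on landmark data:

```python
import torch
from torchvision import models
from PIL import Image

# Pretrained general-purpose classifier as a stand-in for a landmark model.
weights = models.ResNet50_Weights.DEFAULT
model = models.resnet50(weights=weights).eval()
preprocess = weights.transforms()

def identify(path: str, top_k: int = 3) -> list[tuple[str, float]]:
    """Return the model's top-k labels and probabilities for an image."""
    img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        probs = model(img).softmax(dim=1)[0]
    top = probs.topk(top_k)
    return [(weights.meta["categories"][int(i)], float(p))
            for p, i in zip(top.values, top.indices)]

print(identify("street_scene.jpg"))  # e.g. [("traffic light", 0.61), ...]
```

The recognition itself is the easy part; mapping a label onto “where am I and what am I looking at” is the contextual step that still needs building.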
The other senses have their own fairly clear paths forward. Sound recognition, in the form of speech, is evolving quickly but still needs work. Beyond speech we have music recognition, but we have only a limited ability to use either in a typical social setting. We have technology today that can analyze “smells”. We could envision wearable “gloves” that convey a sense of touch into the online world, and there are examples of at least the beginnings of this technology in play today.
It’s easy to see (no pun intended) the impact of sensory-driven context augmentation. In fact, TV commercials have illustrated the not-quite-possible-yet scenarios of “looking” via augmented reality at a street and “seeing” business names appended to the shops. Google has an app that will (sometimes) identify landmarks. Applying this to the workplace opens the door to having devices or glasses that let a worker find the right switch or valve, compare what’s there to what should be, etc.
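That last use case is easy to sketch. Assuming the AR device’s recognition pipeline can already report what it sees as device/state pairs (a big assumption, and every name below is invented), the compare-to-what-should-be step itself is simple:

```python
# Hypothetical expected configuration of a panel, keyed by device ID.
EXPECTED = {"valve-7": "closed", "breaker-3": "on", "switch-12": "off"}

def check_panel(observed: dict[str, str]) -> list[str]:
    """Compare what the device 'sees' against the expected configuration."""
    issues = []
    for device, want in EXPECTED.items():
        got = observed.get(device)
        if got is None:
            issues.append(f"{device}: not found in view")
        elif got != want:
            issues.append(f"{device}: is {got}, should be {want}")
    return issues

print(check_panel({"valve-7": "open", "breaker-3": "on"}))
# ['valve-7: is open, should be closed', 'switch-12: not found in view']
```

The hard engineering is in the recognition pipeline that produces `observed`; the contextual payoff, though, comes from this kind of comparison against what the worker is supposed to find.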
One of the big questions is where sensory translation happens. Right now we do voice and image processing mostly in central facilities, but the right location for sensory translation depends on where the results are to be used, and for what purpose. I think it’s likely that over time, as “contextual” becomes more actionable in real time, things will migrate toward the edge.
IoT is a good example of this, though not one that (apparently) everyone accepts these days. Raw sensor data is very unlikely to be made available or consumed directly, for a whole variety of practical, social, and regulatory reasons. Instead, sensor information would likely be framed into a variety of contexts and made available as contextual information. Imagine a self-driving vehicle trying to decide which sensors represent information about an intersection it’s approaching, interpreting them all, and then acting on the result! More logically, you would have “intersection slots” into which vehicles could be slotted, developed by “intersection processes” that interpreted the necessary data, as sketched below.
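Here’s a minimal sketch of that idea, with all names and data formats invented: an “intersection process” consumes raw sensor readings and republishes them as contextual slots a vehicle can simply claim, so no vehicle ever touches raw sensor data.

```python
from dataclasses import dataclass

@dataclass
class Slot:
    slot_id: int
    lane: str
    entry_time: float  # seconds from now when the slot opens
    occupied: bool

class IntersectionProcess:
    """Interprets raw sensor readings into claimable crossing slots."""

    def __init__(self, lanes: list[str], cycle_s: float = 4.0):
        self.lanes = lanes
        self.cycle_s = cycle_s
        self.slots: list[Slot] = []

    def ingest(self, sensor_readings: list[dict]) -> None:
        """Turn raw readings (hypothetical format) into slot availability."""
        busy = {r["lane"] for r in sensor_readings if r["vehicle_present"]}
        self.slots = [
            Slot(i, lane, i * self.cycle_s, lane in busy)
            for i, lane in enumerate(self.lanes)
        ]

    def request_slot(self, lane: str) -> Slot | None:
        """A vehicle asks for the next free slot in its lane."""
        for s in self.slots:
            if s.lane == lane and not s.occupied:
                s.occupied = True
                return s
        return None

ix = IntersectionProcess(lanes=["north", "south", "east", "west"])
ix.ingest([{"lane": "north", "vehicle_present": True},
           {"lane": "east", "vehicle_present": False}])
print(ix.request_slot("east"))  # a free Slot, or None if the lane is full
```

The design point is the boundary: the vehicle’s interface is “request a slot”, not “interpret these sensors”, which is exactly the kind of contextual framing that practical, social, and regulatory pressures will push us toward.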
The three dimensions I’ve talked about here are all interdependent, which perhaps is why we have so much difficulty defining what a “revolution” in IT would mean, or how it would come about. Supply and demand combine to make a market, and so things like cloud computing will fall short of their potential as long as we don’t frame our needs in a cloud-friendly way. But what reason would we have to postulate new information relationships absent any way to fulfill them? The groping process that’s underway now is all part of our new age.
There is a lesson here, of course. We are talking about very fundamental changes in how we use information, how we get it, and where we host and process it. Any one of those changes alone would be difficult to get our heads around; in combination, they create a new model of IT. Everyone is going to struggle with it, but those who manage to get on board quickly may have an enormous competitive advantage in the future.