I blogged last week on the reality of edge computing, and I think it’s time to take a mission-focused look at AI too. We tend, in all of tech, to focus entirely on a new technology rather than asking just what the technology will accomplish. As a result, tech promises turn into hype more than realization. Let’s forget AI tech for a moment and look at what we might do with it, focusing as always on things that could emerge near-term and drive AI adoption.
AI, of course, means “artificial intelligence”, and so all AI missions should be places where human intelligence, meaning something like judgment, would be beneficial but cannot be readily applied. The reason could be cost (we couldn’t afford to have a human standing around to make decisions), lack of expertise (a suitable human isn’t likely to be available), or speed and consistency of action (a human would waffle and make inconsistent decisions). These three reasons are the ones enterprises give most often when asked about building an AI business case.
One of the problems we have with “AI-washing”, meaning claims of AI technology being applied where there is no mission (no advantage) and/or no basis (it’s not true), is that making those claims is riskless. If somebody says they have a new AI feature, they get coverage. I’ve not seen any case where a claim was investigated, debunked, and the truth then published. In fact, just as edge computing inherited the mantle of the cloud once cloud stories got stale, AI seems to be inheriting the title of “best general technology.” Unfortunately, AI isn’t a general technology.
Here’s the thing. Suppose we have a keypad to give us entry into a gated area. We enter a code and a pound sign, and if the code is correct the gate opens. It’s difficult to make an objective case for AI here, given that interpreting a simple code is hardly something that requires human intelligence. But some will argue that since the keypad and gate replace a human guard in a shack, this must be AI. Automation isn’t always AI; automation always reduces human effort, but it doesn’t necessarily replace human judgment. AI applications require something more.
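To make the point concrete, here’s a minimal sketch, in Python, of everything the keypad actually “decides.” The names and the code value are hypothetical; what matters is that the whole decision is one fixed comparison, with no judgment anywhere to automate.

```python
# Minimal sketch of the keypad-and-gate logic. STORED_CODE, open_gate, and
# handle_entry are hypothetical names; the point is that the entire
# "decision" is a fixed equality test.

STORED_CODE = "4821#"  # hypothetical access code, terminated by the pound sign

def open_gate() -> None:
    print("Gate opening...")  # stand-in for the actuator signal

def handle_entry(keyed_input: str) -> None:
    # All the "intelligence" in the system is this one comparison.
    if keyed_input == STORED_CODE:
        open_gate()
    else:
        print("Access denied.")

handle_entry("4821#")   # correct code: gate opens
handle_entry("0000#")   # wrong code: denied
```

There’s nothing here a lookup table or a relay couldn’t do, which is exactly why calling it AI is a stretch.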
Suppose we take a different tack with the keypad-and-gate thing. We use cameras and analyze the images of the vehicles, the license plates, the faces of the people. We could construct a solution to our gated-area problem that could actually benefit from AI, but it would be AI that benefits and not the application. We don’t need all of that to open a gate, and we probably couldn’t justify the cost. This approach doesn’t meet the three tests of AI value I noted above.
Let’s look at some other popular applications. How about where “AI” is used to scan data looking for correlations? We used to call this “analytics”, so we have to distinguish between simple analytics and AI if we’re to dig out an AI mission. Go back to the rules. Could a human, with proper expertise, pore through vast assemblies of raw data looking for patterns that suggest something interesting? Obviously not, because we’d have a problem finding the right humans, and likely a greater one getting them to focus on a vast assembly of raw data for long enough to do any good. This meets all of our AI value tests.
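What does “scanning raw data for correlations” look like when a machine does it? Here’s a toy sketch (the metric names and the data are made up) that brute-forces every pair of columns and flags the strong relationships; a human analyst wouldn’t pore through 190 pairings by hand, let alone the millions a real dataset would generate.

```python
import random
from itertools import combinations
from statistics import correlation  # Python 3.10+

# Toy data: twenty columns of random "metrics", with one planted relationship
# so the scan has something to find. Real columns would be telemetry, sales
# figures, sensor readings, and so on.
random.seed(1)
columns = {f"metric_{i}": [random.gauss(0, 1) for _ in range(500)] for i in range(20)}
columns["metric_19"] = [x * 0.9 + random.gauss(0, 0.1) for x in columns["metric_0"]]

# Brute-force every pair of columns and report the strong correlations.
for (name_a, col_a), (name_b, col_b) in combinations(columns.items(), 2):
    r = correlation(col_a, col_b)
    if abs(r) > 0.5:  # arbitrary "interesting" threshold
        print(f"{name_a} vs {name_b}: r = {r:.2f}")
```

Whether that counts as “analytics” or “AI” depends on what sits on top of it, but the scale of the search is the part no human could supply.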
What we can draw from these two examples is that AI is more likely needed to hash through a lot of stuff to dig out value than to support a very simple task. AI is then a logical companion to something like IoT or personalization and contextualization. The reason is that tasks that try to support and optimize human behavior are almost certain to justify artificial intelligence, because it’s human intelligence they’re supporting. The limiting factor is the extent to which support is really needed, as our keypad example shows.
Might there then be no such thing as a “little AI” or applications of AI on a very small scale? I think that’s likely true. I also think it’s likely that a task that’s performed regularly by thousands of people based on the application of a rule set isn’t an application crying out for “machine learning” at the site of each person. Why not have a system learn the rules instead, in one place over a reasonable period, and then build a rule-driven or policy system to enforce what we learned? In other words, AI and ML might be justified in putting a system together, but not needed in the operation of the system.
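Here’s a rough sketch, with hypothetical names and data, of what that split could look like: a one-time, central “learning” step distills a pile of historical condition-and-action records into a rule table, and the point-of-activity system is nothing more than a lookup.

```python
from collections import Counter, defaultdict

# Hypothetical history of (conditions, action) records gathered across many sites.
history = [
    (("temp_high", "load_low"),  "throttle"),
    (("temp_high", "load_low"),  "throttle"),
    (("temp_ok",   "load_high"), "scale_out"),
    (("temp_ok",   "load_high"), "scale_out"),
    (("temp_ok",   "load_low"),  "no_action"),
]

def learn_rules(records):
    """Central, one-time step: for each condition pattern, keep the action
    that was consistently taken. In practice this is where real ML might live."""
    tally = defaultdict(Counter)
    for conditions, action in records:
        tally[conditions][action] += 1
    return {cond: counts.most_common(1)[0][0] for cond, counts in tally.items()}

RULES = learn_rules(history)  # the learned output is just a rule table...

def enforce(conditions):
    """...and the point-of-activity system is a plain lookup, with no ML on site."""
    return RULES.get(conditions, "escalate_to_human")

print(enforce(("temp_high", "load_low")))    # -> throttle
print(enforce(("temp_low",  "load_spike")))  # unseen pattern -> escalate_to_human
```

The learning step could be genuine machine learning in a real deployment; the point is that what runs at each of those thousands of sites is just the rules it produced.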
This is probably the biggest issue in assessing the value of machine learning. If the handling of a few variables always results in the same decision, then you don’t need machine learning where the action is happening. But even for other forms of AI, like neural networks, what’s still true is that AI is valuable where set policies aren’t effective. In most cases, that’s where there are a lot of unforeseen conditions. If one and one always make two, you don’t need AI to figure it out.
What justifies point-of-activity AI, then, is a combination of things. First and foremost, variability. Human judgment is able to extrapolate based on many variables. If there aren’t many variables, if there isn’t any major need for extrapolation, then simple policy-based handling of conditions is fine (AI might still be useful in setting up those policies). If conditions are highly variable, especially if they’re diverse, then a human could make a better decision than a set of policies, and an artificial intelligence could likely do better too.
At this point, we can see that AI missions can be divided into two groups: those that apply AI at the point of activity, and those that use AI to set policies that will later be enforced at the point of activity. Both of these are valid missions, and so admitting that some AI isn’t going to live in your phone or in your car doesn’t mean AI has somehow failed (except perhaps that it might have failed the hype contest!).
We can also see another division of missions: missions that analyze information to look for patterns or abnormalities, and missions that act on, or propose actions in response to, conditions or changes. Microsoft’s IntelliCode is an example of the first of these two missions. It’s a code review product designed to pick out badly structured program statements and segments, and it’s based on a massive analysis of public code repositories that sets a practice baseline. Things that deviate from the patterns clear in that baseline are flagged. Most of the AI attention, of course, goes to the second group, which tries to identify appropriate actions to take. IntelliCode does a bit of this too.
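As an illustration of the pattern-and-abnormality mission (and not a description of how IntelliCode itself works), a “deviates from the baseline” check can be as simple as this sketch: establish a baseline from a large sample, then flag anything that falls well outside it. The measurements here are hypothetical function lengths pulled from a code corpus.

```python
from statistics import mean, stdev

# Baseline built from a (hypothetical) large sample of function lengths.
baseline_lengths = [12, 9, 15, 11, 14, 10, 13, 8, 16, 12, 11, 9, 14, 13, 10]
mu, sigma = mean(baseline_lengths), stdev(baseline_lengths)

def deviates(value, threshold=3.0):
    """Flag values more than `threshold` standard deviations from the baseline mean."""
    return abs(value - mu) > threshold * sigma

for length in (14, 85, 7):
    status = "flagged" if deviates(length) else "within baseline"
    print(f"function of {length} lines: {status}")
```

Real tools use far richer baselines than a single statistic, but the mission shape is the same: learn what “normal” looks like, then point at the exceptions.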
And there’s yet another division—between missions that involve learning and those that involve inference. A learning mission means that the application will track both conditions and actions, and will “learn” what conditions trigger a consistent action sequence. That sequence can then be initiated with little or no oversight. An inference mission is one where AI works to emulate how human assessments and decisions are made, and can then act (or recommend actions) based on conditions.
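A toy contrast, again with hypothetical conditions and actions, may make the distinction clearer: the learning mission only replays what it has seen, while the inference mission extrapolates to conditions that merely “seem” related to known ones.

```python
# Known cases map condition vectors (hypothetical: temperature, load) to actions.
known_cases = {
    (90, 20): "throttle",
    (92, 25): "throttle",
    (60, 95): "scale_out",
    (58, 90): "scale_out",
}

def learned_action(conditions):
    """Learning mission: act only on condition patterns seen before."""
    return known_cases.get(conditions)  # None if the exact pattern is new

def inferred_action(conditions):
    """Inference mission: extrapolate to unseen conditions by taking the
    action of the most similar known case (a nearest-neighbor stand-in)."""
    nearest = min(known_cases,
                  key=lambda c: sum((a - b) ** 2 for a, b in zip(c, conditions)))
    return known_cases[nearest]

new_situation = (88, 30)                      # never seen exactly
print(learned_action(new_situation))          # -> None; learning alone is stuck
print(inferred_action(new_situation))         # -> "throttle"; inference extrapolates
```

Nearest-neighbor matching is only a stand-in here; the point is the willingness to act on a situation that was never explicitly learned.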
Inference is the aspect of AI that might have the greatest value, and it also likely generates most of the “my-robots-are-out-of-control” risks (or myths). Actual human intelligence is a mixture of learning and inference, but I think it’s inference that dominates. People extrapolate, taking rules that were defined to deal with a certain situation and applying them to other situations that “seem” related. Inference is what separates policies from intelligence. We could argue that for humans, learning is a shorthand way of improving inference by imparting both the rules of specific things and the relationships among things.
Would AI necessarily lead to something that might mimic humans? We already have software that can engage in a dialog with someone on a specific topic or play a game like chess, which suggests that the problem of emulating humans may well be one of scale more than of functionality. Could learning-plus-inference give us something scalable? Could cloud-elastic resources combined with inference actually create something that, when examined from the outside, looks human? Not today, but likely in the not-too-distant future. Should we fear that? Every tech advance justifies fear of some aspect of its use or misuse. We just have to get ahead of the development and think the constraints through.