I mentioned the recent Amazon cloud event in my last blog, and one area where the event introduced a lot of very leading-edge stuff was AI. My premise in that earlier blog was that Amazon was working to a master strategy aimed at the future, where telcos and ONAP were mired in a box-networking vision of the past. What is Amazon’s AI strategy? Can it teach both an AI and a strategic lesson to the rest of the industry?
At its re:Invent conference, Amazon announced a new series of AI tools for AWS, adding to the considerable set it already offered, which means the inventory of AWS web services for AI and machine learning is now extensive. You could look at the tools themselves and try to synthesize what Amazon is thinking, but the better approach is to look at how Amazon proposes the tools relate to each other. That gives a sense of Amazon’s AI strategy by defining its ecosystem.
Generally, Amazon has three categories of AI stuff (Amazon seems to prefer to call its offerings “machine learning” but sometimes also inserts the “AI” description, so I’ll just call everything “AI/ML” because it’s less typing!). Their positioning rests on a foundation principle of AI that dates back decades: you build or train an AI/ML system for a specific mission.
Classic AI involved a “subject matter expert” and a “knowledge engineer”. The former set up the information/rules to describe how a person would analyze information and make decisions, and the latter would take that information and turn it into AI/ML algorithms. Working together, the two would “train” the system to make appropriate decisions/analysis. The Amazon tools reflect this process. There’s a set of low-level AI/ML tools, there’s a set of pre-trained AI/ML applications, and there’s a toolkit to do the AI/ML training and testing.
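That division of labor can be sketched in a few lines of code. This is a minimal illustration of the classic pattern, not any Amazon tool; the network-flavored conditions, thresholds, and actions are entirely hypothetical:

```python
# Minimal sketch of the classic expert-system pattern: the "subject matter
# expert" supplies the rules, the "knowledge engineer" supplies the machinery
# that applies them. All rule names and thresholds here are hypothetical.

# Rules contributed by the subject matter expert, in priority order:
# each is a (condition, action) pair over a dictionary of observed facts.
RULES = [
    (lambda f: f["packet_loss"] > 0.05, "reroute traffic"),
    (lambda f: f["latency_ms"] > 200, "flag congestion"),
    (lambda f: True, "no action"),  # default when nothing else fires
]

def decide(facts):
    """The knowledge engineer's side: scan the rules and fire the first match."""
    for condition, action in RULES:
        if condition(facts):
            return action

print(decide({"packet_loss": 0.10, "latency_ms": 50}))   # reroute traffic
print(decide({"packet_loss": 0.01, "latency_ms": 300}))  # flag congestion
print(decide({"packet_loss": 0.00, "latency_ms": 20}))   # no action
```

Real expert systems used inference engines far richer than this priority-ordered scan, but the split is the same: the expert owns the rules, the engineer owns the machinery that applies them.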
Given that our third category is a kind of bridge between missions and tools, let’s start there. The central tool is called SageMaker (neat name, you have to admit), and it’s the AWS framework for building, training, and deploying models that apply AI/ML to analysis and decision-making. A related product, SageMaker Ground Truth, builds labeled training datasets to support the development of specialized AI/ML tools for a wide variety of business missions.
SageMaker works with its own algorithms, but also supports TensorFlow, PyTorch, Apache MXNet, and other popular AI/ML frameworks, so there’s a pathway to integrate other AI/ML work you, or someone else, may have done into an Amazon AWS project. The word “experiment” comes up a lot in the Amazon material, and it’s an important point: SageMaker and related tools are there to help companies experiment with AI/ML and, through those experiments, uncover valuable applications to support their business.
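The experiment loop that SageMaker wraps in managed form can be sketched framework-agnostically. This toy illustration assumes nothing about the SageMaker API itself; the “model” is just a least-squares line fit, standing in for whatever TensorFlow, PyTorch, or a built-in algorithm would produce:

```python
# Framework-agnostic sketch of the build/train/evaluate loop that managed
# services like SageMaker automate. The model is a trivial least-squares
# line fit; the data points are made up purely for illustration.

def fit_line(points):
    """Train: least-squares fit of y = a*x + b over (x, y) pairs."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

def mean_abs_error(points, a, b):
    """Evaluate: average distance between predictions and held-out data."""
    return sum(abs(y - (a * x + b)) for x, y in points) / len(points)

train = [(1, 2.1), (2, 3.9), (3, 6.2), (4, 7.8)]   # training set
test = [(5, 10.1), (6, 11.8)]                       # held-out test set

a, b = fit_line(train)
print(round(mean_abs_error(test, a, b), 2))  # → 0.13
```

The experiment is the loop around this: change the model or the data, re-train, re-score, and keep what improves the number. Services like SageMaker take over the hosting, scaling, and deployment parts of that loop.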
To help guide SageMaker work, and at the same time demonstrate platforms for AI/ML applications, Amazon has two tools: DeepLens and DeepRacer. The former is a deep-learning-enabled video camera for teaching and experimenting with human-interactive and other computer vision projects. DeepRacer is a similar learning platform, but in the form of a model car focused on autonomous (self-driving) vehicle technology. I think it’s clear that Amazon plans to add more and more capabilities to these platforms, and also to add new ones as the opportunities gel in the market.
Companies that either don’t have any in-house AI/ML expertise or simply don’t want to take the time to build their own tools will likely want to look at the higher-level Amazon AI/ML components. These are pre-trained AI/ML tools designed for a very specific (but still somewhat tunable) mission. Amazon offers these tools for making retail recommendations, analyzing documents and text, processing speech, analyzing images and video, forecasting, and simulating conversations…you get the picture.
These predefined AI/ML tools can be integrated into applications much faster than using SageMaker to build a custom AI/ML application. It’s clear that Amazon plans to build additional high-level tools as the market for AI/ML matures and the needs and opportunities become clear. Amazon will also be providing additional low-level AI/ML services to be incorporated into applications, so expansion here is likely to be linked to the enhancement of current DeepLens and DeepRacer platforms, and the addition of new ones.
Amazon provides specific linkages between these tools and the rest of the AWS ecosystem, and you can pick up AI/ML tools either as part of a managed and elastic service or through hosting on AWS. There are also examples available that bind the elements together in various ways to accomplish simple business missions. In all, it’s a perfect example of what I believe is the Amazon strategy of laying out a long-term vision (that AI/ML will enhance many applications), defining an architecture (the three-layer approach of deep features, composition and training, and pre-trained tools), and then building to that architecture in both the initial roll-out and in enhancements.
Contrast this with the usual AI hokum we get in the telecom/network space. People talk about AI as though slapping that label on something was all that was required. Saying something “uses AI” doesn’t in any way reflect the potential direction that the use might take, which is critical for the future. It doesn’t frame the “use” in any particular context, either, whether in the application/decision sense or in a definition of the mechanism being applied. Is any form of analytics “AI”? Perhaps, and certainly if nobody says exactly what they’re doing, it’s hard to say it isn’t.
There are unquestionably AI/ML applications in the telecom and networking space; many come to mind. Prediction, simulation and modeling, and automated responses to both network conditions and customer inquiries about their services are all fertile ground for AI/ML. Imagine a service lifecycle automation system, based on AI/ML, in which simulations drive scenario generation and responses based on analysis of how past events evolved. It could revolutionize management.
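To make that concrete, here’s a deliberately tiny sketch of event-history-driven response selection. The event fields, values, and responses are all hypothetical, and a real system would use trained models rather than a nearest-neighbor lookup, but the shape of the idea is the same: let how past events evolved drive the response to new ones.

```python
# Toy sketch of lifecycle automation driven by past events: a new network
# event is matched against history, and the response that worked for the
# most similar past event is recommended. All data here is hypothetical.

PAST_EVENTS = [
    {"cpu": 0.95, "errors": 0.01, "response": "scale out"},
    {"cpu": 0.30, "errors": 0.20, "response": "restart component"},
    {"cpu": 0.40, "errors": 0.02, "response": "monitor only"},
]

def recommend(event):
    """Nearest-neighbor lookup over past events (a stand-in for a real model)."""
    def distance(past):
        return abs(past["cpu"] - event["cpu"]) + abs(past["errors"] - event["errors"])
    return min(PAST_EVENTS, key=distance)["response"]

print(recommend({"cpu": 0.90, "errors": 0.02}))  # scale out
print(recommend({"cpu": 0.35, "errors": 0.18}))  # restart component
```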
We have companies claiming AI in all these spaces, and I’ve yet to see a claim that was easy to defend. I recently asked a company that made such a claim why they didn’t follow the Amazon pattern. Their answer: “It would take thousands of pages for me to explain things that way, and nobody would publish them.” There’s a lot of truth to that, but an equally interesting question is whether real adoption of AI in networking can happen if those pages aren’t available for prospects to read.
Amazon’s approach would work for network operators. Start by recognizing a set of specific goals that AI could help achieve. Map those goals to high-level services, build tools that support those services and more tools that compose them, then use the combination to create an AI framework. It would work for AI, and the methodology would work for other things too. Amazon didn’t start with a bunch of arcane AI features and hope they’d be assembled by unknown parties into something useful. They started with value and worked down, and that’s an important lesson.