Years ago, I was involved in early work on ISDN (remember that!). Since CIMI Corporation had signed a Cooperative Research and Development Agreement (CRADA) with the National Institute of Standards and Technology (NIST), I had a chance to review many of the early use cases. I remember in particular a day when a vendor technologist burst into the room, excited, and said “We just discovered a new application for ISDN. We call it ‘file transfer’!”
Well, gosh, that’s been done, but even that misses the point. A new technology has to have a business justification for its deployment. That justification isn’t just a matter of thinking up something that could be done with it, or even thinking up something to which our new technology adds value. What we need is a justification that meets the ROI requirements of all the stakeholders that would have to participate in the deployment.
The justification problem has plagued tech in general, and network and service-provider technology in particular. The reason is that the value proposition for these spaces is complicated, meaning that there are a lot of issues that decide not only who the stakeholders might be, but also how each of them might make a business case. I can prove that just about any new network technology would be great for operators if we assumed that 1) there was no sunk cost being displaced, 2) the vendors who supplied the technology would charge less than break-even for it, and 3) all the people involved in buying, installing, and maintaining the new technology were instantly and miraculously endowed with the necessary skills. In most cases, none of those assumptions would be valid.
Right now, where we’re seeing this issue bite is in areas like 5G, edge computing, and artificial intelligence. These are great for generating exciting stories, but so far, they’ve not been particularly good at making that critical business case. One might wonder whether a big part of the problem is that nobody is really trying to make it.
Vendor salespeople tell me that exciting new technologies are very valuable to them, because they “open a dialog”. You can’t push for a sale if you can’t get an appointment, and often a new technology offers a chance to sit down with a prospect. Often, that discussion can lead to an early deal for a more conventional product. This makes the vendors themselves a party to the exaggerated views we read, and it also means that the very people who will have to push for technology advance may have a vested interest in holding back. With everyone focused on the next quarter, after all, what good is something that might take two or three years to develop? This view then trickles down to impact the technologies themselves.
Let’s start with 5G. When the technology was first discussed, everyone was agog about the enormous speeds it would offer, and the changes in our lives that mobile broadband at these new speeds would surely bring. We still have a lot of stories about things like doing robotic surgery over 5G, and there are still people who eagerly await 5G service and phones, despite the fact that it’s very likely that the applications of today wouldn’t benefit a whit from 5G speeds, even if they were 10x what 4G brings.
Which, probably, they won’t be. The 5G radio spec would enable very high speeds, but would the operators necessarily upgrade all their network infrastructure to carry those extra bits per second? This article in Light Reading from Thursday of last week asks whether a major 5G operator has “moved the goalposts” in 5G performance. Probably they did, because as I’ve often told my clients, PR is a “bulls**t bidding war.” The story that generates the most excitement gets the best placement. If Joe down the road says their 5G is twice as fast as LTE, then I’ll see that 2x and raise you 2x, and so we’re off to the races.
The interesting thing is that we’re going to get 5G. Almost surely not the 5G that stories have featured and vendors have dreamed of, but we’re going to get it. It’s the logical upgrade path for LTE networks, after all, and as mobile service expands, it will expand based on the most efficient technology available. We might, eventually, see new mobile/wireless applications that demand higher speeds, even though we probably will never see that 10x improvement. In the near term, though, we’ve started a hype wave that we now have to declare is falling short, because reality is less interesting than the early publicity.
The real problem with 5G, the one we need to be working on, is that users’ willingness to pay for incremental bits of performance is far less than the cost of providing those bits. Would you pay ten times as much for 5G as for 4G? Surveys show that users want 5G with no price premium, which means that it would have to be provided at zero incremental cost, that operator profits would fall when they offered it, or that some other revenue source (presumably from something 5G could enable) would pick up the slack.
IoT was what the operators, and many vendors, hoped would provide that revenue boost, but IoT is another technology that’s been misunderstood or misrepresented. IoT can help applications and people coexist in a virtual world, by feeding real-world context, habits, interests, and goals into a common system. Absent that common system, expansion of sensor/controller technology beyond its current levels is hard to justify. Where’s the work on providing for that expansion?
It’s now AI’s turn, and like 5G, AI is something we’ll certainly see more of, but that will have a hard time living up to the hype it’s generated. The public, even many in the tech space, think of AI as something like one of Isaac Asimov’s robots or the “Hal” in “2001: A Space Odyssey”. I guess I don’t need to point out that we’re almost a generation past 2001 at this point, and Hal is still proving elusive, but that doesn’t mean that AI won’t be important.
There are many applications of AI or machine learning (ML), and at least some of them are practical and useful in the near term. One area where I think there’s particular hope is in replacing or augmenting fixed policies in networking and computing. One problem with “policy-based” management, for example, is coming up with all the policies you need. AI/ML could certainly help with that, both by learning from human responses to conditions and by inferring problems and responses from similar past activity.
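To make that concrete, here’s a minimal sketch of the idea. The feature names and toy data are invented for illustration, not drawn from any real operations system: train a model on what human operators actually did under past conditions, then ask it to suggest a response to a condition nobody wrote an explicit policy for.

```python
# Hypothetical sketch: learn remediation "policies" from logged operator actions.
# Feature names, thresholds, and actions are all invented for illustration.
from sklearn.tree import DecisionTreeClassifier

# Each row: [link_utilization_pct, packet_loss_pct, latency_ms] from a past
# incident; labels record what the human operator did in response.
conditions = [
    [95, 0.5, 40],   # congested link
    [40, 5.0, 30],   # lossy but uncongested: likely a faulty interface
    [60, 0.1, 250],  # high latency: likely a routing problem
    [30, 0.0, 20],   # healthy
]
actions = ["reroute_traffic", "replace_interface", "check_routing", "no_action"]

model = DecisionTreeClassifier().fit(conditions, actions)

# Infer a response to a condition no human has explicitly written a policy for.
print(model.predict([[90, 0.3, 35]]))  # plausibly "reroute_traffic"
```

The point isn’t the particular model; it’s that the policies are learned from operator behavior rather than enumerated by hand, which is exactly the part of policy-based management that’s hard to scale.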
AI/ML could be a huge benefit, combined with a realistic vision of IoT. Interpreting the variables fed in from the real world, and combining them with things like past behavior, current activity and communications, stated goals, and other factors, is a job for something more intuitive than rigid policies can provide. Could we frame an IoT mission combined with an AI/ML mission? Sure, but where’s that happening?
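As a shape-of-the-problem sketch (all the names and thresholds here are invented), the idea is that one inference step over fused context replaces a combinatorial pile of hand-written if/then policies:

```python
# Hypothetical sketch of IoT-plus-context fusion; every name is invented.
# Real-world signals and personal context feed a single inference step.
from dataclasses import dataclass

@dataclass
class Context:
    location: str          # from phone GPS or indoor beacons
    ambient_noise_db: int  # from a nearby IoT sensor
    calendar_event: str    # from the user's schedule
    stated_goal: str       # e.g., "focus time"

def infer_state(ctx: Context) -> str:
    """Combine signals into one contextual judgment.
    A learned model would replace this hand-written scoring."""
    score = 0
    score += 2 if ctx.stated_goal == "focus time" else 0
    score += 1 if ctx.calendar_event == "meeting" else 0
    score += 1 if ctx.ambient_noise_db > 70 else 0
    return "do_not_disturb" if score >= 2 else "available"

print(infer_state(Context("office", 75, "meeting", "focus time")))  # do_not_disturb
```

Nobody could write a rigid policy for every combination of location, noise, schedule, and goal; that’s why the IoT mission and the AI/ML mission belong together.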
The challenge AI/ML faces is mostly one of expectations. Remember Y2K? There were legitimate problems with some programs that had failed to account for a rollover of dates beyond two digits, but visions of elevators plunging, planes dropping out of the skies? How many companies failed to address the real issues because they were busily tossing mattresses on the floors of their elevators? AI/ML isn’t going to threaten us with the Takeover of the Machine. It’s not going to put you out of work, or likely anyone you know. It could do some good, if we forget the silly stuff and focus on the basic value of inference and learning.
Which leaves us with edge computing. Is there a mission for it? Most assuredly, things like autonomous vehicles, computer gaming, and augmented reality could be applications of edge computing, but it’s not as simple as people would suggest.
Start with autonomous vehicles, everyone’s favorite…except mine. The only truly sensible model for an autonomous vehicle is the model where all the critical systems like avoiding collisions are resident in the vehicle and reliant on local sensors. The technology to do this is already in use and not particularly expensive. Why would we want to off-load it to anything else, even at the edge? We’d risk mass problems in the event of a network failure, and we’d really gain nothing in return. Could edge computing be an information-resource partner to on-vehicle driving technology? Sure, if we had it, and had the model for information distribution that would reside at the edge, neither of which we have.
Then there’s gaming. Sure, latency matters in multiplayer games, but remember that the latency is a function of the “electrical distance” between you and the gaming processor that keeps track of players’ positions and actions. Unless we think we’re playing only against local people, that processor isn’t in “the edge”. Could games be designed for optimal use of edge computing? Probably, but nobody is going to do that until edge computing is widely deployed; the move would limit the market for the game.
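A back-of-envelope propagation estimate shows why electrical distance dominates (this ignores queuing, serialization, and processing delay, which only add to the totals):

```python
# Back-of-envelope propagation delay: distance divided by signal speed in fiber.
# Real latency adds queuing, serialization, and processing on top of this.
SPEED_IN_FIBER_KM_PER_MS = 200  # light in fiber travels at roughly 2/3 of c

def round_trip_ms(distance_km: float) -> float:
    return 2 * distance_km / SPEED_IN_FIBER_KM_PER_MS

print(round_trip_ms(50))    # 0.5 ms round trip to a nearby edge site
print(round_trip_ms(2000))  # 20.0 ms round trip to a distant cloud region
```

Putting compute 50 km away only helps if the players you’re coordinating with are served from there too; an edge site can’t shorten the path to a game server that’s a continent away.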
AR is probably the best candidate for a real low-latency edge mission. I blogged about this in the past, and I continue to think that AR and contextual applications that would build on it are perhaps the most revolutionary thing we could actually end up seeing in the near term. But again, while the pieces of the AR-and-contextual puzzle are easily identified, nobody seems interested in doing the dog work of building the system and promoting the approach enough to make a credible ecosystem out of it. Simple applications that get a bit of ink are good enough.
AR is in many ways the endpoint of a realistic vision for all these technologies. If we are to live in a virtual, contextualized world, we need a way of getting the state of that alternate reality into our own brains. Human vision is by far the richest of our senses, the only one that could input enough information fast enough to render our virtual world useful. But does this mean we rush out and make AR work, and thus drive 5G and IoT and AI/ML? That’s the same mistake we’ve been making.
You don’t achieve the benefits of transcontinental flight with an airplane door; you need not only the whole airplane but airports, traffic control, fuel, flight crew…you get the picture. We’ve done ourselves a disservice by dividing revolutions into component parts, parts that by themselves can’t make a business case, and then hyping them up based on the presumption that by providing one part, we secure the whole.
The open-source community may be the only answer to this. Why? Because in today’s world all problems, all challenges, all opportunities are in the end destined to be addressed by software. A good team of software architects, working for six months or so, could frame the optimum future for all these technologies. A current body like the Linux Foundation, or a new group, could drive this effort, and launch what might not be the “Hal” or “I, Robot” or “George Jetson” future of our dreams, but a future that’s truly interesting, even compelling. If you’re starting, or involved in, an open-source activity to address the tech-ecosystem challenge we face, let me know.