The Year to Come

Lies, hype, truths, and 2020.  I guess that’s the sort of thing I’d be expected to write to open the New Year, and so that’s what I intend to do.  The bad news is that all these descriptive words will apply to some of what we hear this year, and the good is that I think we’ll finally see “truth” start to win out.  Let’s explore this interesting dynamic by looking at some of the key technology shifts.

The number one truth of 2020 is a new network model.  No, it’s not going to be a sudden shift to hosting and carrier cloud for service infrastructure; such a change would toss billions of dollars in undepreciated assets into the toilet.  What it will be is a shift in the way operators handle incremental enhancement and modernization.  The most important thing that 5G, one of the hype stars of the last couple years, will accomplish in 2020 is the seeding of this shift.

It’s not that 5G in 2020 is going to do anything revolutionary in a service or technology sense.  Yes, we will have 5G New Radio deployments, in the form of 5G RAN over LTE core, the so-called “Non-Stand-Alone” or NSA model.  And we’ll also have deployments of 5G/FTTN hybrids using millimeter wave.  Neither of these things really moves the ball in terms of network model, and both will be as much PR moves as real initiatives.  Look for a lot more from 5G/FTTN in 2021, but for this year the big news is that new 5G cells and backhaul are a great opportunity for modernization because they’re greenfield deployments.

We can already see that operators like AT&T, who are under unusual pressure to improve costs, are looking to open-model networking.  What makes this really important is that Cisco, no fan of open anything, has taken care to position its Silicon One evolution as something that would embrace at least some unbundling of router elements, and likely some use of open-source, open hardware, or both.  When has Cisco ever talked about supporting the P4 flow-programming language?  It is now.

If networking is going to use more open elements, then vendors have to compete both at the whole-device and piece-parts levels.  The old appliance model relied on a monolithic assembly of silicon, hardware, and software.  The new model will make all these elements separate, making each into a differentiator that has to be expanded and protected.  Merely assembling a best-of-breed combination doesn’t cut it in a piece-parts world; every element has to compete on its own.

This is going to be a strategic nightmare.  Do vendors sell everything—chips, devices, and software?  Do they try to run their own software on competitors’ devices?  Do they buy competitors’ chips where those chips offer an edge?  Can they hold back their own silicon to make their product combinations stickier, or will that give competitors’ chips a foothold in the open-market space?  You can see that Cisco is looking at this problem of positioning already, well ahead of its competitors, and that may give them a major advantage in 2020 as the piece-part open-network model evolves.

The second truth we’ll face in 2020 is that “the cloud” is really a computing model, a platform-as-a-service framework that extends not only to public cloud services but to every aspect of both business and home computing.  Remember Sun Microsystems’ saying, “The network is the computer”?  That’s where we’ll get to, at least in a realization sense, this year, and the consequences of this will be more profound in the computing space than the open-network model will be in the networking space.

Application programs drive computing because they’re what delivers value to buyers.  The rest is, in a real sense, plumbing.  Application programs interface with that plumbing (the hardware, network, systems software, middleware, whatever) through APIs.  The set of APIs available to an application will drive development, and the applications will then be highly specific to those APIs.  If the network is the computer, then that network computer has an API set, at least de facto.  The question is what that API set will be, because there is no standard framework to define it today and no really solid effort underway to get one.
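To make that coupling concrete, here’s a toy Python sketch (every class and method name here is invented for illustration, not any real cloud API): an application written against one platform’s API set simply won’t run against another’s.

```python
# Two hypothetical "platform" API sets that expose the same storage
# capability through different interfaces -- stand-ins for two
# incompatible cloud stacks.
class PlatformA:
    def __init__(self):
        self.store = {}

    def put_object(self, bucket, key, data):
        self.store[(bucket, key)] = data


class PlatformB:
    def __init__(self):
        self.blobs = {}

    def upload_blob(self, container, name, payload):
        self.blobs[(container, name)] = payload


def save_report(platform, data):
    # Written against PlatformA's API set -- the application is now
    # specific to that API, exactly the lock-in the column describes.
    platform.put_object("reports", "q1.txt", data)


save_report(PlatformA(), b"ok")        # works
try:
    save_report(PlatformB(), b"ok")    # fails: no put_object here
except AttributeError as err:
    print("incompatible API set:", err)
```

The application logic is identical in both cases; only the API shape differs, and that difference alone is enough to strand the code on one platform.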

Cloud platforms today are made up of a collection of middleware tools ranging from development tools through service and network connectivity tools, to deployment, operations, and management tools.  There are at least two or three major options in each of these areas, and in some a dozen lesser-known options as well.  That adds up to a dazzling number of possible combinations of tools, all of which will be incompatible at the software level, the operations level, or both.  Users who assemble a set of tools for their “hybrid cloud” would likely end up with a one-off framework that could run no applications designed for any other combination.
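The scale of that combination problem is easy to sketch.  Using purely illustrative counts per layer (these numbers are assumptions for the arithmetic, not a market survey), picking one tool per layer multiplies out fast:

```python
from math import prod

# Hypothetical number of tool options per cloud-platform layer
# (illustrative assumptions only): a few major choices in most
# layers, a dozen lesser-known options in one.
options_per_layer = {
    "development": 3,
    "service connectivity": 2,
    "network connectivity": 2,
    "deployment": 3,
    "operations": 12,
    "management": 3,
}

# One tool chosen per layer, so the distinct stacks multiply.
combinations = prod(options_per_layer.values())
print(combinations)  # 3 * 2 * 2 * 3 * 12 * 3 = 1296 distinct tool stacks
```

Over a thousand possible stacks from just six layers of modest choice, and any two of them are unlikely to be compatible at the software or operations level.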

That’s not workable, of course, and everyone will recognize that in 2020.  The result will be a sudden push to shift the cloud dialog from technology to product, or more properly to suite or ecosystem.  It’s already clear that Kubernetes and containers will be the basis of cloud development and operations, and so the Kubernetes ecosystem will expand to become the cloud platform of the future.  This will happen not by making Kubernetes bigger, but by having a few major, credible vendors assemble a complete ecosystem around Kubernetes, then market and support it aggressively.

The major contenders to produce this new ecosystem are VMware and Red Hat/IBM, with Microsoft, Google, and Amazon trailing.  Because this ecosystem will create the cloud equivalent of the Windows operating system, the framework for development and operations in the future, the player who owns it will have an enormous time-to-market advantage.  If we look back at the cataclysmic shakeout we had in minicomputers in the ‘80s, we’d see it came about because only vendors who had a platform with popular APIs could sustain development support, and thus build utility.  The same thing that killed off Data General and DEC and others will kill off players in the cloud space, and anoint the remainder with financial rewards.

What about the ballyhooed technologies of our (current) age, like superspeed 5G or ubiquitous IoT or almost-human AI or (gasp!) edge computing?  All of them will make some progress, but none of them will become what proponents have long predicted.  Let’s do a quick look.

5G will deploy in 5G NSA form, as noted above, and we will see some progress in the millimeter wave 5G/FTTN space, but most of the benefits of those transformations will be limited and will come only in 2021 and beyond.  Remember that 5G is, beneath the hype, just a better RAN.  Companies like Ericsson and Nokia (and, to a lesser degree, Huawei) will benefit from the NSA progress because of their RAN position, but don’t expect major changes in service, or major improvements in backhaul or metro network capacity.  People build to gain revenue, not for the good of others.

IoT may show surprising strength by abandoning its roots.  Rather than looking for 5G IoT sensors and controllers, look to initiatives still largely vague, like Amazon’s Sidewalk.  Federations of home/business sensors, created by vendors and others, will expand as vendors look to improve their market position by creating a kind of “friends and family IoT” model; get others involved in the same technology you use, and you build your collective value.  This approach makes more sense than one that depends on wholesale deployment of cellular sensors, given the cost of those sensors and the need to financially support their connectivity.

AI has been around for a very long time, though it gets promoted as “new”.  Because practically anything that involves computers and software could be positioned as AI, we can expect to get a lot of mileage out of the term.  Real progress will also emerge, mostly by applying older principles like machine learning and fuzzy logic to modern applications.  As federated IoT grows, so will “AI” applications designed to help assess the meaning of a growing number of sensor feeds, and activate a growing number of alerts or control systems.  Again, we can expect more in 2021.
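For a flavor of what that sensor-feed “AI” looks like at its simplest, here’s a toy Python sketch (the window size, tolerance, and sample readings are all invented for illustration): flag any reading that strays too far from a running average of recent readings, the kind of check that could trigger an alert in a home or federated hub.

```python
# Minimal anomaly detector of the sort a federated-IoT hub might run.
def make_anomaly_detector(window=5, tolerance=10.0):
    history = []

    def check(reading):
        # Anomalous if it deviates from the recent average by more
        # than the tolerance; the very first reading is never flagged.
        anomalous = bool(history) and abs(
            reading - sum(history) / len(history)) > tolerance
        history.append(reading)
        del history[:-window]   # keep only the last `window` readings
        return anomalous

    return check


check = make_anomaly_detector()
readings = [20.1, 20.4, 19.8, 20.2, 45.0, 20.3]  # e.g. temperature feed
alerts = [r for r in readings if check(r)]
print(alerts)  # [45.0] -- only the spike triggers an alert
```

Nothing here is “new” AI; it’s the old moving-average idea applied to a sensor stream, which is exactly the kind of repackaging the paragraph above predicts.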

Edge computing is a technology shift that needs a strong driver, but even more it needs a solid definition.  What the heck is an “edge” anyway?  Where we’ll see the most edge action is in the federated IoT hub space, where more power in home hub devices will be encouraged both by the growing utility of home automation and the growing power of neighborhood federation.  Eventually, the latter will expand to the point where autonomous hosting of the federation will become necessary, and that’s what will likely drive the concept of edge computing further.

And, finally, what about our lies and hype?  That’s only going to get worse.  Technology is getting more complicated, and at the same time our mechanisms for covering it are getting more simplistic.  Where we used to have 2,000-word articles, we have 500-word sound bites.  There is no way to make something as complex as network or cloud revolution simple, except by making it wrong.  There’s no way to build an information industry funded by the seller other than to compromise the interests of the buyers.

What can we hope for in 2020, if nearly everything that’s going to happen won’t mature until sometime later?  The answer is a model, an architecture.  Telephony would have been delayed for decades had we not had a “Bell system”, and even electrical power needed some standard mechanism for describing things like AC and DC, and even arcane concepts like “cycles per second” for alternating current.  We need to concentrate solutions so they’re universal enough to deal with everyone’s problems.  That’s what mass markets and revolutions are really about.

Happy New Year, everyone!