Sometimes the new things you learn are things you should have known all along. That was the case last week with two critical announcements. First, Ericsson (a major 5G advocate) came out and told operators they wouldn’t be getting a big revenue windfall from cellular connection of IoT devices. Second, Amazon (which had no real premises hosting strategy) admitted that hybrid cloud was the real cloud opportunity.
I’ve never understood how anyone believed that IoT would be a major 5G driver or that it would somehow generate a lot of revenue for network operators. My model has said from the first that over 95% of IoT devices wouldn’t be connected through cellular networks, and in fact that about the same percentage of IoT applications couldn’t meet required price points if cellular connectivity were used. Despite every shred of evidence, we had a media blitz on cellular IoT and operators like Verizon were framing their whole IoT strategy on the control of cellular-connected IoT elements.
Ericsson’s numbers for cellular IoT are still optimistic, IMHO, but given the focus Ericsson is giving to 5G it’s hardly surprising they don’t want to totally rain on the IoT parade. What is more surprising is that they’re not really offering an alternative role for operators. I think that IoT represents an enormous opportunity for operators in general, and for carrier cloud in particular, but it’s an opportunity that’s more related to “cloud” and “servers” than to networks, which is where Ericsson is incumbent. However, the real role for operators in IoT is truly no more than a glimmer at this point, and there’s no reason Ericsson couldn’t reinvent itself to fulfill it.
IoT is really a service market, just not a connection service market. In fact, it’s a lot of different service markets rolled into one. Even a smart home typically has two service types—security services and home control and monitoring. These are often united in a common portal, and that portal provides three things—control over each service, the opportunity to unite elements of the services into a common policy/feature set, and a gateway through which everything can be accessed via browser or app. A smart city or a broad enterprise IoT application would likely have many more services, and would also have an augmented controller that offered database and analytics services. It’s this framework that builds large-scale IoT, and it’s a framework operators could support.
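As a sketch of the portal's three roles described above (per-service control, a common policy layer across services, and a single access gateway), here is a minimal illustration. All class and method names are hypothetical, invented for the example; they don't correspond to any real IoT product or API.

```python
# Hypothetical sketch of the smart-home portal model: two services
# (e.g. security plus home control/monitoring) united under one portal.
class Service:
    """A single IoT service type, e.g. security or home control."""
    def __init__(self, name: str):
        self.name = name
        self.armed = False

    def set_mode(self, armed: bool):
        self.armed = armed


class Portal:
    """Unites services under a common policy set and a single access point."""
    def __init__(self, services):
        self.services = {s.name: s for s in services}

    def control(self, service_name: str, armed: bool):
        # Role 1: control over each individual service
        self.services[service_name].set_mode(armed)

    def apply_policy(self, armed: bool):
        # Role 2: a common policy/feature set spanning all services
        for s in self.services.values():
            s.set_mode(armed)

    def gateway_status(self) -> dict:
        # Role 3: one gateway through which everything is accessed
        return {name: s.armed for name, s in self.services.items()}
```

A browser or app would talk only to the `Portal`, never to the individual services, which is what makes the framework extensible to the many more services of a smart city or enterprise deployment.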
We have many different sensor types today, many different sensor protocols, and many controller devices. We also have many different portals. All these differences mean that it’s difficult to create a truly unified IoT application even at the home level. Pick a portal, for example, and you’ve tied yourself to whatever control protocols and sensor/controller devices that portal happens to support. All this cries out (as so much does these days) for a good logical set of abstractions.
An abstraction is in effect a combination of an intent model and an API. The intent model describes a general hierarchy of functional divisions among the things it represents. For example, IoT might be described as an “IoT Element” at the highest level, which might then be divided into “IoT Portal”, “IoT Controller”, and “IoT Sensor”. The latter would then be divided by what’s being sensed—doors opening, motion, temperature, and so forth. A model like this would allow greater interchange among the elements of an IoT system, which may well be why we don’t have it yet. IoT vendors aren’t interested in promoting open ecosystems, which is where operators and network equipment vendors could come in.
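To make the intent-model hierarchy concrete, here is one possible rendering of the "IoT Element" / "IoT Portal" / "IoT Controller" / "IoT Sensor" division, with sensors subclassed by what's being sensed. This is a sketch of the idea only; the class names and the shape of the `status()` API are my own assumptions, not an existing standard.

```python
from abc import ABC, abstractmethod


class IoTElement(ABC):
    """Highest-level intent model: every element exposes the same API."""
    def __init__(self, name: str):
        self.name = name

    @abstractmethod
    def status(self) -> dict:
        """Report state in a vendor-neutral form."""


class IoTSensor(IoTElement):
    """Functional division for things that sense; subclassed by what is sensed."""


class DoorSensor(IoTSensor):
    def __init__(self, name: str, is_open: bool = False):
        super().__init__(name)
        self.is_open = is_open

    def status(self) -> dict:
        return {"element": self.name, "type": "door", "open": self.is_open}


class TemperatureSensor(IoTSensor):
    def __init__(self, name: str, degrees_c: float = 20.0):
        super().__init__(name)
        self.degrees_c = degrees_c

    def status(self) -> dict:
        return {"element": self.name, "type": "temperature",
                "degrees_c": self.degrees_c}


class IoTController(IoTElement):
    """Functional division that aggregates sensors behind one common API."""
    def __init__(self, name: str, elements: list):
        super().__init__(name)
        self.elements = elements

    def status(self) -> dict:
        return {"element": self.name,
                "children": [e.status() for e in self.elements]}
```

Because every element answers the same `status()` call, a portal could mix sensors from different vendors freely; that interchangeability is exactly what a closed vendor ecosystem resists.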
Ericsson and its customers don’t have direct IoT equipment exposure, so they could promote a set of logically abstracted services without threatening their own revenue streams. These services could be realized by on-premises software, in the cloud, or both, and the cloud part of these services could then be sold by operators. Remember, the goal of IoT isn’t to read sensors or diddle with lamp controllers, it’s to realize a specific security or facilities goal. If services sold by operators directly realize those goals, then they’re easier for consumers to use.
Ease of use is also the launch point for my second area of focus, which is the sudden thrust by cloud providers to support the hybrid cloud market. Amazon has been the leader in cloud computing revenue for ages, but all the cloud hype has obscured the basic truth that Amazon’s leadership was based on its success with startup companies using web delivery (social media, video, storage, etc.) rather than the enterprises. Microsoft has been quietly winning way more deals in the enterprise space, simply because from the first Azure has provided good support for hybrid cloud—the use of public cloud and data center hosting in a combined application resource pool.
Startups are a great early opportunity in the cloud space because they have limited capital resources and a potentially enormous need for expandable hosting. Amazon catered to this group from the first, and because of it they managed to grab a big lead in the cloud space. The downsides? Two. First, Amazon was grabbing the mother of all low apples, opportunity-wise, and there are always limits to the low-apple approach. The startup space can’t grow exponentially, which is what Amazon wants its revenue to do. Second, the startup space doesn’t have any legacy IT investment, which means that what they need from a cloud provider is very different from the needs of an enterprise with a big complex of data centers and a big staff.
Microsoft, from the first, targeted Azure at the enterprise. Everything that Amazon did, Microsoft sort-of-followed, but with an enterprise spin on it. The specific technical focus was to create a strong symbiotic relationship between premises IT and Azure, which was easy given that Microsoft had an enormous installed base of IT technology. As the media-induced (and profoundly silly) notion of “moving everything to the cloud” fell from grace, the reality that future enterprise computing would be based on a hybrid-cloud application model emerged to replace it.
As I’ve pointed out in other blogs, it’s hard to support hybrid cloud if you have only one bank of the river that hybridization is supposed to bridge. Amazon had no premises positioning to exploit or hybridize with. Their startup-focused early mission didn’t require one. Their future does, and that’s why they’ve announced “Outposts”, a dualistic and explicitly hybrid extension to their AWS cloud. The dualistic part comes from the fact that you can either have a way of extending VMware into AWS so you can use the control plane of VMware and its APIs in the cloud, or you can extend AWS control plane and API features onto the premises. If you see this as anything but a reaction to the new hybrid-cloud driver for cloud revenue, you’re missing a big truth.
And the bigger, related truth. A hybrid cloud is an enormous, agile, virtual computer. You have to program it like you’d program any other computer, and programming practices and (perhaps most importantly) platform software can either expand the capabilities of your new virtual computer or constrain those capabilities. We don’t have a model for that virtual computer…yet. Somebody is going to create that model, and when they do the others in the space (either on premises or in the cloud) will have to try to promote their alternative against that first-mover incumbent, or simply cave in and adopt that same model. Which, unless our first-mover is stupid, will be a model that favors the player who defined it.
Is Microsoft already the incumbent, the first-mover? No, because their Azure approach was designed to be compatible with their existing Windows Server model, an essential step toward building that hybrid bridge. They couldn’t get too futuristic without moving the bank out of range of the span. Amazon has demonstrated that it doesn’t have that second, bigger, first-mover kind of structure either. They have two approaches: push current IT into the cloud, or push a cloud that wasn’t designed for hybridization into the data center. Again, they rely on an old model of software development.
Eventually, both Microsoft and Amazon will take advantage of the increased understanding of what the new hybrid-model virtual computer looks like, and will direct their platform into the right place. “Eventually” can be a long time, though. I don’t think they will make a quick and bold move because they’d have done that already. That leaves other players in the cloud, most notably Google and perhaps IBM.
IBM’s Red Hat deal could give them a shot at the hybrid cloud opportunity, but IBM could have had the shot for years now and hasn’t taken it. When they made the decision to virtually exit the computer hardware space (and, remember, the network equipment space as well) they disconnected themselves from the mainstream of data center evolution. Yeah, they still had their mainframes, but mainframes have been outside the center of the IT universe for almost two decades. Can Red Hat add the necessary moxie? I don’t think they’ve demonstrated that they have it either.
Google is the Amazon-like member of the duo, and IBM of course the Microsoft-like one. Neither of the two has really grasped the reality of that new virtual-computer model, and both should have. Of the two, I think Google had, and still has, the best shot. Neither Google nor IBM has any chance of simply following in the footsteps of the leader; it’s too late for that. They can win only by jumping the line and becoming the first mover into that future virtual-computer hybrid cloud. That would be a good assignment for Google’s new cloud head, and for IBM’s current one.
So there we have it. Two “new” developments that are really nothing more than belated recognition of what’s been a pretty obvious truth all along. It demonstrates that we live in an age where ad clicks or TV commercials drive everything we see and hear…but they don’t drive the truth. Now that we’re finally getting that, perhaps we’ll see some useful developments in both the cloud and 5G spaces.