If demand-side issues are driving changes in the industry, then it’s fair to ask where the industry is going. We talked yesterday about the major drivers, and today I want to talk about the major technology shifts a bit more. In particular, I want to make the connection between the changes in opportunity and the changes in technology.
The Internet is a cheap bandwidth fabric, and as such it’s not the sort of thing to attract much interest from those with ROI targets. However, the Internet is here to stay, in no small part because there’d be an international revolution if anyone tried to do away with it. Thus, the demand-side changes we’ve discussed are creating change layered on top of the Internet.
We can visualize all of these demand-side forces as creating a broader notion of the Internet, a cloud: broader because the new notion includes the information and processing resources needed to support decisions. In a way, we’re drawing the hosts into the network: apps and Siri-like personal agents act as intermediaries, hiding the sources and resources because they do all the interacting with them on the user’s behalf. Google does a lot of this today, and so do CDNs. So as we cache and disguise things, what happens?
With relatively few points of cloud hosting to worry about, most cloud providers would likely create fat optical pipes between data centers, effectively creating a flat data-center MAN/WAN. The technology a lot of people favor for this is OpenFlow, and I think it’s logical to assume that OpenFlow could play a major role. In OpenFlow, forwarding rules are created by a central software controller, hence the term “Software-Defined Network,” or SDN. That explicit path control is both OpenFlow’s strength and its weakness. It doesn’t scale to Internet size, but in the cloud model it doesn’t need to, because the internal connections aren’t between users; they’re between agents and resources, and among the resources themselves. The structure isn’t “open” like the Internet, any more than a CDN is in its internal paths, so it doesn’t need Internet flexibility and scale. The Internet model is a liability inside the cloud, in fact.
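To make the idea of central path control concrete, here’s a minimal Python sketch of what an SDN-style controller does conceptually: it sees the whole topology at once, computes an explicit path, and pushes a match/action rule to each switch on that path. The topology, switch names, and rule format are hypothetical illustrations, not the OpenFlow protocol or any real controller API.

```python
from collections import deque

# Hypothetical topology: switch -> directly connected neighbors.
# A toy model of the global view a central SDN controller holds.
TOPOLOGY = {
    "s1": ["s2", "s3"],
    "s2": ["s1", "s4"],
    "s3": ["s1", "s4"],
    "s4": ["s2", "s3"],
}

def shortest_path(topo, src, dst):
    """BFS shortest path; the controller can do this because it sees the whole graph."""
    parents = {src: None}
    queue = deque([src])
    while queue:
        node = queue.popleft()
        if node == dst:
            path = []
            while node is not None:
                path.append(node)
                node = parents[node]
            return path[::-1]
        for nbr in topo[node]:
            if nbr not in parents:
                parents[nbr] = node
                queue.append(nbr)
    return None

def install_flow_rules(topo, src, dst):
    """Return per-switch forwarding rules (match on destination, action = next hop),
    mimicking how a controller pushes flow entries down to each switch on the path."""
    path = shortest_path(topo, src, dst)
    rules = {}
    for hop, nxt in zip(path, path[1:]):
        rules[hop] = {"match_dst": dst, "forward_to": nxt}
    return rules

rules = install_flow_rules(TOPOLOGY, "s1", "s4")
# Each switch on the s1->s4 path gets exactly one rule; switches off the
# path (s3 here) get nothing, because the controller chose an explicit route.
```

The point of the sketch is the contrast with the Internet model: no switch runs a distributed routing protocol or makes its own decisions; the path exists only because the controller computed and installed it, which is exactly why the approach works inside a bounded cloud but not at Internet scale.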
The key point here is that we should not look for something like OpenFlow to become the architecture of the Internet. It’s the architecture of the cloud, and most specifically of the two-layer inside-outside model of the cloud. I think that model will prevail, and so I think OpenFlow will prevail—eventually.
Remember my rule, though. Only new revenue can drive revolution in infrastructure. For the OpenFlow revolution to happen, the cloud revolution has to happen, and today’s cloud computing isn’t focused on making it happen at all; it’s focused on cost-based optimization of tech resource consumption by enterprise IT. I’m covering this whole issue in greater depth in the next (April) Netwatcher, but for now let me say that the total revenue available to the cloud model I’ve been discussing is nearly an order of magnitude more than the revenue available from displacing enterprise IT from data center to cloud. That’s enough revenue to pay for a tech revolution, but we won’t get it all at once.
A year or two from now you’ll probably be consuming some OpenFlow-based services, but you won’t know it. In fact, if my model is correct, there will never be a time when the nature of “the Internet” appears technically different at the client level. The IPv4-to-IPv6 address transition would create far more visible change (and some providers will make their cloud transformation under the covers). We’ll need the big Internet address space and the open model for access to services and for a declining population of legacy websites and services. We’ll have both forever, even when the transition to the cloud model of the Internet is complete, which won’t be for five to ten years.
I’ve been getting some very optimistic predictions about how OpenFlow is going to push everyone out of the market, how it will kill major vendors. Not likely. Not only will the cloud-transition process take a long time, it will be evolutionary and thus will tend to favor trusted incumbents. That’s particularly true given that recent studies have suggested open-source software libraries contain some serious security vulnerabilities. I think more and more open-source success will come from commercial providers taking responsibility for sanitizing the code and integrating the elements.