If the Internet is the source of the technology revolution, perhaps we should remember that consumerism is the source of the Internet's importance. The revolution in communications wouldn't have happened had the Internet stayed a haven for researchers and university eggheads. Given that, we should look at what is happening, or could happen, with Internet consumerism to see where networking and tech might go.
Functionally, the thing we call “the Internet” is a three-level structure. The first level is Internet access, which is what ISPs and mobile operators provide us. The Internet’s second level is the content and experiences that the user is seeking, and the third level is the communications facilities that tie all that content/experience hosting into a common network that users then access. Up to now, most of what we could call “innovation” in the Internet has been in that second content-and-experiences layer. Innovation is going to continue there, but it’s also going to crop up elsewhere.
You are, Internet-wise, what your DNS decodes you to. Users know Internet resources by URL, and the URL decodes into an IP address. It doesn’t matter where that IP address terminates, and that has allowed content delivery networks (CDNs) to jump into the market to provide close-to-access-edge hosting of content to deliver a better quality of experience. With CDNs, a URL is translated to an IP address of the best cache, which means that the DNS decoding is optimized for the experience.
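To make the mechanics concrete, here's a minimal sketch of that kind of resolution, in Python, with invented cache names, addresses, and a toy distance metric standing in for whatever a real CDN actually uses:

```python
# Toy illustration: one logical URL, many physical caches.
# The "best" cache is chosen per request based on the client's rough location.
# All names, addresses, and coordinates below are invented for illustration.

CACHES = {
    "cache-east":    {"ip": "198.51.100.10", "lat": 40.7, "lon": -74.0},
    "cache-west":    {"ip": "198.51.100.20", "lat": 37.8, "lon": -122.4},
    "cache-central": {"ip": "198.51.100.30", "lat": 41.9, "lon": -87.6},
}

def resolve(url, client_lat, client_lon):
    """Return the IP of the cache 'closest' to the client (crude squared-distance metric)."""
    def dist(cache):
        return (cache["lat"] - client_lat) ** 2 + (cache["lon"] - client_lon) ** 2
    best = min(CACHES.values(), key=dist)
    return best["ip"]

# The same URL decodes differently for different users:
print(resolve("http://video.example.com/clip", 40.6, -73.9))   # a New York user
print(resolve("http://video.example.com/clip", 37.3, -121.9))  # a Bay Area user
```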
Logically, this same approach could be applied to the next-generation "contextual" experiences that users want to have supported. A good example is the classic auto-GPS application. You want to know how to get to your destination, but also to have that route optimized for current conditions and presented to you as useful turn-by-turn directions. When you ask "what's ahead", either explicitly or implicitly (via route optimizing), your "ahead" is contextual. You could envision the processing of such a request as decoding a URL like what_is_ahead and directing the question to the answer process for your current location. That would eliminate the need for you to find sensors based on where you are, or wade through a zillion intersections to get results that are actually useful.
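Here's a rough sketch of what that decoding might look like, assuming a hypothetical what_is_ahead URL and made-up regional answer processes; no real service works exactly this way:

```python
# Sketch: a "contextual URL" like what_is_ahead is decoded not to one fixed host,
# but to whichever answer process covers the requester's current position.
# Region names, coverage bounds, and endpoints are hypothetical.

ANSWER_PROCESSES = [
    {"region": "I-80 segment 12", "lat_range": (41.0, 41.5), "lon_range": (-88.0, -87.0),
     "endpoint": "process://traffic/i80-seg12"},
    {"region": "I-80 segment 13", "lat_range": (41.5, 42.0), "lon_range": (-88.0, -87.0),
     "endpoint": "process://traffic/i80-seg13"},
]

def decode_contextual(url, lat, lon):
    """Map a contextual request to the answer process responsible for this location."""
    if url != "what_is_ahead":
        raise ValueError("unknown contextual URL")
    for proc in ANSWER_PROCESSES:
        lat_lo, lat_hi = proc["lat_range"]
        lon_lo, lon_hi = proc["lon_range"]
        if lat_lo <= lat < lat_hi and lon_lo <= lon < lon_hi:
            return proc["endpoint"]
    return None  # no coverage; fall back to a central service

# A driver at 41.2N, 87.5W is routed to the process that already knows that stretch of road.
print(decode_contextual("what_is_ahead", 41.2, -87.5))
```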
Another example of a contextual request is the "where can we meet for lunch?" question, which is contextual because you aren't likely to want to pick a location it would take ten hours to reach. In this case, our where_can_we_meet URL would likely decode to a processor for the city where the user is located, but it would also have to consider the locations of the others implied by the "we" part of the question.
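One possible (purely illustrative) policy is to aim at the centroid of the participants and hand the request to the city processor covering that point; the city data and endpoints below are invented:

```python
# Sketch: where_can_we_meet has to account for everyone implied by "we".
# Simple policy: compute the participants' centroid and pick the nearest city processor.

CITY_PROCESSORS = {
    "chicago":   {"lat": 41.9, "lon": -87.6, "endpoint": "process://lunch/chicago"},
    "milwaukee": {"lat": 43.0, "lon": -87.9, "endpoint": "process://lunch/milwaukee"},
}

def where_can_we_meet(participants):
    """participants: list of (lat, lon) for everyone in the 'we'."""
    lat = sum(p[0] for p in participants) / len(participants)
    lon = sum(p[1] for p in participants) / len(participants)
    nearest = min(CITY_PROCESSORS.values(),
                  key=lambda c: (c["lat"] - lat) ** 2 + (c["lon"] - lon) ** 2)
    return nearest["endpoint"]

# Two people on Chicago's north side and one in Evanston resolve to the Chicago processor.
print(where_can_we_meet([(41.95, -87.65), (41.90, -87.63), (42.05, -87.68)]))
```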
The point here is that contextual enhancements to services would likely be handled by a set of processes that would obtain focus, and likely even hosting position, based on the geography of the people involved in the request. Visualize a set of naked processes waiting to be instantiated wherever they are needed, cached, in effect, just as content already is.
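Here's a toy illustration of that caching behavior, assuming a hypothetical edge-host registry and instantiation call; nothing here reflects any real orchestration API:

```python
# Sketch: "naked" processes instantiated where demand appears, then reused like
# cached content. Edge hosts and the instantiation mechanism are invented.

class EdgeHost:
    def __init__(self, name):
        self.name = name
        self.instances = {}  # process name -> running instance

    def get_or_instantiate(self, process_name):
        """Reuse a cached instance if one exists here; otherwise spin one up."""
        if process_name not in self.instances:
            print(f"instantiating {process_name} on {self.name}")
            self.instances[process_name] = f"{process_name}@{self.name}"
        return self.instances[process_name]

# One edge host per metro area, chosen by the requester's geography (simplified to a key).
EDGE_HOSTS = {"chicago-edge": EdgeHost("chicago-edge"), "denver-edge": EdgeHost("denver-edge")}

def handle_request(process_name, nearest_edge_key):
    return EDGE_HOSTS[nearest_edge_key].get_or_instantiate(process_name)

print(handle_request("what_is_ahead", "chicago-edge"))  # first request: instantiates
print(handle_request("what_is_ahead", "chicago-edge"))  # second request: reuses the cached process
```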
IoT creates a similar set of needs. If you have sensors, they have first and foremost the property of location: they are somewhere, and it's the surroundings of that somewhere that the sensors can represent as status. If we wanted to interpret sensors to get insight into conditions, we'd probably think of the condition we wanted to review first ("what's ahead?") and expect it to be contextually decoded.
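A small sketch of that kind of contextual interpretation, using invented roadway sensors and an arbitrary congestion threshold:

```python
# Sketch: sensors are, first and foremost, locations. A contextual question like
# "what's ahead?" picks the sensors relevant to the requester's position and
# summarizes them. Sensor readings and the 30 mph threshold are made up.

SENSORS = [
    {"id": "loop-101", "mile": 12.1, "speed_mph": 58},
    {"id": "loop-102", "mile": 13.4, "speed_mph": 22},
    {"id": "loop-103", "mile": 14.9, "speed_mph": 61},
]

def whats_ahead(current_mile, lookahead_miles=5):
    """Interpret only the sensors in the stretch of road ahead of the requester."""
    ahead = [s for s in SENSORS if current_mile < s["mile"] <= current_mile + lookahead_miles]
    slow = [s for s in ahead if s["speed_mph"] < 30]
    if slow:
        return f"slow traffic near mile {slow[0]['mile']}"
    return "clear ahead"

# The answer is built from sensors the requester never had to find or enumerate.
print(whats_ahead(11.5))
```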
All of this suggests that future services will likely have cached processes to provide contextually relevant answers to users, which means they'll need places to cache them, which in turn means a migration of interpretive resources toward the edge, to ensure that the answer a user gets is still relevant when it's delivered.
If the purpose of caching content or processes is to get things close to the user, it naturally follows that experiences, features, and content might be considered an elastic set of properties that could be hosted anywhere based on QoE needs. In some cases, the properties might get pushed all the way to a user device. In others, they might live inside the Internet, even far from the edge.
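One way to picture that elasticity is a placement decision that picks the cheapest hosting tier that still meets the experience's latency target; the tiers, latencies, and costs below are illustrative only, not measurements:

```python
# Sketch: treat a feature as an elastic property that can live on the device,
# at the access edge, or deep in the cloud, picking the lowest-cost tier that
# still meets the QoE (latency) target. All figures are invented.

HOSTING_TIERS = [
    {"name": "user-device", "latency_ms": 5,   "relative_cost": 3.0},
    {"name": "access-edge", "latency_ms": 20,  "relative_cost": 1.5},
    {"name": "deep-cloud",  "latency_ms": 120, "relative_cost": 1.0},
]

def place(feature, max_latency_ms):
    """Pick the lowest-cost tier whose latency still meets the QoE target."""
    candidates = [t for t in HOSTING_TIERS if t["latency_ms"] <= max_latency_ms]
    if not candidates:
        raise RuntimeError(f"no tier can meet {max_latency_ms} ms for {feature}")
    return min(candidates, key=lambda t: t["relative_cost"])["name"]

print(place("turn-by-turn overlay", 30))    # needs to be close: lands at the access edge
print(place("monthly usage report", 500))   # tolerant of delay: stays deep in the cloud
```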
An example of a migratory process is easy to find; look at Comcast's upcoming xFi service capability. Comcast proposes to create a more personalized and elastic Internet experience, one that exploits modules of connectivity ("pods") that extend WiFi range and manages which repeater a user exploits based on where they are in the home. It doesn't take much thinking to see that you could extend pods to host content and processes, and thus use them to build a multi-layer service like home control. Think of a "control pod" and "sensor pods".
All this could be based on a nice easy-to-use drag-and-drop home control programming system that’s hosted in the network. The user inputs a floor plan (by scanning or building it with online tools) and locates sensors, controllers for lights and appliances, thermostats, sprinkler systems, and so forth. The user builds the program to do what they want, and can test it by clicking on a sensor to see what effect it has on the program and the controllers.
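The output of such a tool could be as simple as a set of sensor-event-to-controller-action rules pushed down to a control pod; the device names and rule format here are hypothetical, just to show the shape of the idea:

```python
# Sketch: the kind of program a drag-and-drop home control tool might generate and
# push to a "control pod": simple sensor-event -> controller-action rules.

RULES = [
    {"when": ("front-door-sensor", "open"),     "then": ("hallway-light", "on")},
    {"when": ("living-room-thermo", "too_hot"), "then": ("hvac", "cool")},
]

def on_sensor_event(sensor, state, send_command):
    """Run every rule that matches the incoming sensor event."""
    for rule in RULES:
        if rule["when"] == (sensor, state):
            device, action = rule["then"]
            send_command(device, action)

# In testing, the user "clicks" a sensor in the floor plan and watches the commands fire:
on_sensor_event("front-door-sensor", "open",
                lambda device, action: print(f"{device} -> {action}"))
```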
Increasingly, the notion of a controllable mapping between logical/virtual destinations (my process URLs) and real hosting points seems certain to create a new Internet model where access and interior connectivity are pretty much plumbing, and the experiences, processes, and content float around within a hosting layer that extends from what's literally in your hand to what might be a world away.
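Conceptually, that mapping is just a rebindable registry from logical process URLs to whatever hosting point currently serves them; the names and hosts in this sketch are invented:

```python
# Sketch: a controllable mapping between logical process URLs and the real hosting
# point currently serving them. Rebinding moves the experience without the
# user-facing name ever changing.

BINDINGS = {"process://what_is_ahead": "edge-host-chicago"}

def resolve_process(url):
    """Look up the hosting point currently bound to a logical process URL."""
    return BINDINGS.get(url, "default-cloud-host")

def rebind(url, new_host):
    """Operator (or automation) moves the process; the logical name stays the same."""
    BINDINGS[url] = new_host

print(resolve_process("process://what_is_ahead"))   # edge-host-chicago
rebind("process://what_is_ahead", "user-handset")   # push it all the way to the device
print(resolve_process("process://what_is_ahead"))   # user-handset
```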
You can see that, as it matures, this model creates a different kind of Internet. Two parallel worlds are created. One is the world of process-host caching, where the URL the user clicks (actually or implicitly) decodes to a hosted, optimized process. The other is where the URL decodes to a persistent host that's on the Internet but not part of its infrastructure. We have this division informally for cached content today, but in the future it will probably be most important for the contextual and IoT processes.
This is the kind of thing that will drive "carrier cloud". NFV doesn't have the same potential, for the simple reason that hosting independent processes for individual services doesn't scale to consumer levels. The most credible NFV application is virtual CPE, and yet we know from announcements like Comcast's that WiFi is the most important element of a home broadband hub. We can't build a cloud-hosted WiFi hub, and now Comcast is extending what goes on premises with pods.
Another data point is that the Brocade break-up has left a conspicuous orphan in Vyatta, its virtual router property. Why, if there were tremendous potential for virtualizing routing, would that be the piece left at the dance? The answer is that multi-tenant services like IP routing are deployed at static locations sized for aggregate traffic. If I'm going to have a box in Place A to serve all traffic from all users there, what's the value of making it a virtual box rather than a real physical router?
There are a half-dozen drivers of carrier cloud, and we need all of them—including NFV—to get to the best possible place. We’re not going to get them by ignoring that common process-hosting model, and every day we get more evidence of that. Time to listen!