Often in our industry, a new technology gets linked to a particular implementation or approach, and the link is so tight that it constrains further evolution, sometimes even reducing the technology’s utility. This may have been the case with cloud computing and NFV, which have been bound from the first to the notion of harnessing units of compute power through virtual machines. The truth is that other “virtualization” options have existed for ages, and some may be better suited to cloud and NFV applications. We should probably be talking less about virtual machines and more about containers.
Hardware-level virtualization, meaning classic virtual machines, takes a host and partitions it via hypervisor software into what are essentially separate hardware platforms. These act so much like real computers that each runs its own operating system, and facilities in the hypervisor/virtualization software make them independent in a networking sense as well. This approach is good if you assume you need the greatest possible level of separation among tenant applications, which is why it’s popular in public cloud services. But for private cloud, or even simple private virtualization, it’s wasteful of resources. Your applications probably don’t need to be protected from each other, at least no more than they would be if they ran in a traditional data center.
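To make the “separate hardware platform” idea concrete, here’s a minimal sketch using the libvirt Python bindings (my illustration, not taken from any specific NFV project): the VM is declared as a complete machine, with its own memory, vCPUs, disk, and NIC, and it boots its own guest OS. The domain name, disk image path, and network name are placeholders.

import libvirt

# A VM is described as a whole machine: memory, vCPUs, disk, and a NIC.
# The guest boots its own operating system from the (placeholder) disk image.
DOMAIN_XML = """
<domain type='kvm'>
  <name>tenant-vm-1</name>
  <memory unit='MiB'>2048</memory>
  <vcpu>2</vcpu>
  <os><type arch='x86_64'>hvm</type></os>
  <devices>
    <disk type='file' device='disk'>
      <source file='/var/lib/libvirt/images/tenant-vm-1.qcow2'/>
      <target dev='vda' bus='virtio'/>
    </disk>
    <interface type='network'>
      <source network='default'/>  <!-- hypervisor-provided virtual network -->
    </interface>
  </devices>
</domain>
"""

conn = libvirt.open("qemu:///system")  # talk to the local KVM hypervisor
dom = conn.createXML(DOMAIN_XML, 0)    # define and boot a transient VM
print("booted", dom.name(), "as domain id", dom.ID())
conn.close()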
Linux containers (and similar container mechanisms in other OSs, such as OpenSolaris) are an alternative to virtual machines that provides application isolation within a common OS instance. Instead of running a hypervisor “under” OS instances, containers run a virtualization shell over a single OS, partitioning the use of resources and namespaces. There is far less overhead than with a VM because the whole OS isn’t duplicated, and where the goal of virtualization is to create elastic pools of resources to support dynamic componentization of applications, the difference can add up to (according to one user I surveyed) a 30% savings in server costs to support the same number of virtual hosting points. This sort of savings could be delivered in either virtualization or private cloud applications.
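By way of contrast, here’s a minimal sketch using the Docker SDK for Python (a newer tool than the native container commands, so treat it as illustrative): the container shares the host’s kernel and OS instance, and its “virtualization” amounts to namespace separation and cgroup resource caps rather than a dedicated guest OS. The image and resource limits are arbitrary examples.

import docker  # the "docker" SDK for Python

client = docker.from_env()

# The container reuses the host kernel; isolation comes from namespaces
# (pid, net, mnt, ...) and cgroup-enforced resource limits, not a guest OS.
c = client.containers.run(
    "alpine:latest",        # example image
    "sleep 300",
    detach=True,
    mem_limit="256m",       # cgroup memory cap instead of a fixed VM allocation
    nano_cpus=500_000_000,  # roughly half a CPU
)
print(c.short_id, c.status)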
For NFV, containers could be an enormous benefit because many virtual network functions (VNFs) would probably not justify the cost of an autonomous VM, or such a configuration would increase deployment costs to the point where it would compromise any capex savings. The only problem is that the DevOps processes associated with container deployment, particularly container networking, are more complicated. Many argue that containers in their native form presume an “instance first” model, where containers are built and loaded and then networked. This is at odds with how OpenStack has evolved; separating hosting (Nova) and networking (Neutron) lets users build networks and add host instances to them easily. In fact, dynamic component management is probably easier with VMs than with containers, even if the popular Docker tool is used to further abstract container management.
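For reference, the “network first” OpenStack pattern looks roughly like the following openstacksdk sketch (the cloud name, CIDR, and image/flavor IDs are placeholders I’ve invented): the Neutron network exists before any instance does, and each new Nova instance is simply attached to it.

import openstack  # openstacksdk; assumes a "mycloud" entry in clouds.yaml

conn = openstack.connect(cloud="mycloud")

# Build the network first (Neutron)...
net = conn.network.create_network(name="vnf-net")
conn.network.create_subnet(
    name="vnf-subnet",
    network_id=net.id,
    ip_version=4,
    cidr="10.0.10.0/24",
)

# ...then add host instances to it as needed (Nova).
server = conn.compute.create_server(
    name="vnf-instance-1",
    image_id="IMAGE_UUID",    # placeholder
    flavor_id="FLAVOR_UUID",  # placeholder
    networks=[{"uuid": net.id}],
)

A native container workflow tends to invert this order: the container is created and loaded first and its network plumbing is sorted out afterward, which is exactly the operational mismatch described above.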
There’s work underway to enhance container networking and DevOps. Just today, a startup called SocketPlane announced it would be “bringing SDN to Docker”, meaning it aims to provide the kind of operational and networking agility needed to create large-scale container deployments in public and private clouds and in NFV. There are a few older and more limited approaches to the problem already in use.
Containers, if operationalized correctly, could have an enormous positive impact on the cloud by creating an environment that’s optimized for the future evolution of applications in the cloud rather than for the very limited mission of server consolidation. They could also make the difference between an NFV deployment model that ends up costing more than dedicated devices would and one that saves capex and perhaps even enhances operations efficiency and agility. The challenge here is to realize that potential.
Most NFV use cases have been developed with VMs. Since in NFV the management of virtualization hosting and networking is the responsibility of the Virtual Infrastructure Manager, or VIM, it is theoretically possible to make containers and container networking (including Docker) work underneath a suitable VIM, and thus to make containers work with any of the PoCs that use VM hosting today. However, this substitution isn’t the goal, or even in scope, for most of the work, so we’re not developing as rich a picture of the potential of containers/Docker in NFV as I’d like.
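To show why the substitution is possible in principle, here’s a purely hypothetical sketch of the VIM boundary in Python; none of these class or method names come from the NFV specifications. If the orchestration layers above see only generic deploy-and-connect calls, a VM-backed and a container-backed implementation are interchangeable underneath.

from abc import ABC, abstractmethod

class VirtualInfrastructureManager(ABC):
    """Hypothetical VIM interface: orchestration above sees only these calls,
    so the hosting technology underneath is an implementation detail."""

    @abstractmethod
    def deploy_vnf(self, image_ref: str, resources: dict) -> str:
        """Instantiate a VNF component and return an opaque instance id."""

    @abstractmethod
    def connect(self, instance_id: str, network_ref: str) -> None:
        """Attach the instance to a service network."""

class VmVim(VirtualInfrastructureManager):
    def deploy_vnf(self, image_ref, resources):
        return "vm-123"   # e.g. boot a full VM via Nova
    def connect(self, instance_id, network_ref):
        pass              # e.g. attach a Neutron port

class ContainerVim(VirtualInfrastructureManager):
    def deploy_vnf(self, image_ref, resources):
        return "ctr-456"  # e.g. run a container with cgroup limits
    def connect(self, instance_id, network_ref):
        pass              # e.g. join a Docker/SDN overlay network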
One of the most significant questions yet to be addressed for the world of containers is the management dimension. Anyone who’s been reading my blog knows of my ongoing concern that NFV and cloud management is taking too easy a way out. Shared resources demand composed, multi-tenant management practices, and we’ve had little discussion of how that happens even with the de facto VM-based approaches to NFV and cloud services. Appealing to SDN as the networking strategy doesn’t solve this problem, because SDN doesn’t have a definitive management strategy that works either, at least not in my view.
The issues that containers/Docker could resolve are most evident in service chaining and virtual CPE for consumers, because these NFV applications are focused on displacing edge functionality on a per-user basis, which is incredibly cost-sensitive and vulnerable to the least touch of operational inefficiency. Even in NFV applications where edge devices participate in feature hosting by running what are essentially cloud boards installed in the device, the use of containers could reduce resource needs and device costs.
While per-user applications are far from the only NFV services (shared component infrastructure for IMS, EPC, and CDNs is an operator priority), the per-user applications will generate most of the scale of NFV deployments and also create the most dynamic set of services. It’s probably here that the pressure for high efficiency will be felt first, and it will be interesting to see whether vendors step up and explore the benefits of containers. It could be a powerful differentiator for NFV solutions, private cloud computing, and elastic and dynamic application support. We’ll see if any vendor gets that and exploits it effectively.