What We May Have Here is a Quiet Revolution

If you look at the combined state of networking and IT, the most interesting thing is how hard it’s getting to find the boundary point between the two.  We’ve been linking them since the online applications of the ‘60s.  Now, componentization of software, virtualization of resources, and mobility have combined to build agile applications that move in time and space and rely on the network to be almost an API between processes that spring up like flowers.

While software and this network/IT boundary are symbiotic, and so co-evolving, you could argue that our notions of platforms have been less responsive to the changes.  In most operating systems, including Linux, we have a secure and efficient “kernel” and a kind of add-on application environment.  Since the OS is responsible for network and I/O connections, we’ve limited the scope and agility of virtualization by treating I/O as either “fixed” in the kernel or agile only as an extension of the applications, meaning middleware.  Now all of that may be changing, and it could create a revolution within some of our other revolutions, especially SDN and NFV.

Some time ago, PLUMgrid developed what was essentially a vision for an I/O and network hypervisor, an “IO Visor” as they called it.  The product was designed to create a virtual I/O layer that higher-level software and middleware could then exploit, both to make efficient use of virtualization and to simplify the development work of accommodating virtual resources.  What they’ve now done, working with the Linux Foundation, is turn IO Visor into an architecture for Linux kernel extension.  There’s now an IO Visor Project, and its Platinum members (besides PLUMgrid) are Cisco, Huawei, and Intel.

The IO Visor project is built on what’s called the “Berkeley Packet Filter” (BPF), an extension to Linux designed to do packet classification for monitoring.  BPF sits between the traditional network socket and the network connection, and it was extended in 2013 so that an in-kernel module could handle any sort of I/O.  You can attach the extended BPF (eBPF) at multiple layers in the I/O stack, which makes it a very effective tool for creating or virtualizing services.  It works for vanilla Linux, but most people will probably value it for its ability to enhance virtualization, where it applies to both hypervisor (VM) and container environments.
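To make the starting point concrete, here’s a minimal sketch (not taken from the IO Visor code) of BPF in its original role: a tiny classification program, written with the classic BPF macros from linux/filter.h, attached to a raw socket so that the kernel filters packets before they ever reach user space.  The choice of filter (accept only UDP) is purely illustrative; eBPF generalizes this same in-kernel hook to arbitrary I/O handling.

```c
/* Minimal sketch: attach a classic BPF filter to a raw socket so the kernel
 * drops everything except UDP before it reaches user space.  This is the
 * original "packet classification for monitoring" role of BPF (run as root). */
#include <stdio.h>
#include <unistd.h>
#include <sys/socket.h>
#include <arpa/inet.h>
#include <linux/if_ether.h>
#include <linux/filter.h>

int main(void)
{
    /* BPF bytecode: load the IP protocol byte (offset 23 of an Ethernet
     * frame carrying IPv4) and accept the packet only if it's UDP (17). */
    struct sock_filter code[] = {
        BPF_STMT(BPF_LD  | BPF_B   | BPF_ABS, 23),
        BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K,   17, 0, 1),
        BPF_STMT(BPF_RET | BPF_K,  0xFFFF),   /* match: pass up to 64K bytes */
        BPF_STMT(BPF_RET | BPF_K,  0),        /* no match: drop              */
    };
    struct sock_fprog prog = { .len = 4, .filter = code };

    int sock = socket(AF_PACKET, SOCK_RAW, htons(ETH_P_IP));
    if (sock < 0) { perror("socket"); return 1; }

    /* The filter runs in the kernel, between the device and this socket. */
    if (setsockopt(sock, SOL_SOCKET, SO_ATTACH_FILTER, &prog, sizeof(prog)) < 0) {
        perror("setsockopt");
        return 1;
    }

    char buf[2048];
    ssize_t n = recv(sock, buf, sizeof(buf), 0);
    printf("received a %zd-byte UDP packet, filtered in-kernel\n", n);
    close(sock);
    return 0;
}
```

Everything interesting here happens in the kernel: packets that don’t match never reach user space at all, which is the property eBPF generalizes and IO Visor builds on.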

The technical foundation for IO Visor is an “engine” that provides generalized services to a set of plugins.  The engine and plugins fit into the Kernel in one sense, and “below” it, just above the hardware, in another.  Unlike normal Kernel functions, which require rebuilding the OS and reloading everything to change a function, these IO Visor plugins can be loaded and unloaded dynamically.  Applications written for IO Visor have to obey special rules (as all plugins do), but it’s not rocket science to develop for it.
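As a rough illustration, and assuming the generic eBPF workflow rather than PLUMgrid’s specific plugin API, such a “plugin” is a small restricted-C program that clang compiles into BPF bytecode.  The kernel’s verifier enforces the special rules (checked memory access, no unbounded loops, only approved helper calls) when the program is loaded, and the program can be attached and detached at runtime without rebuilding anything.  The attach point below (a kprobe on vfs_read) and the map layout are assumptions made for the sketch, modeled on the kernel’s samples/bpf conventions.

```c
/* Kernel-side half of a hypothetical dynamically loadable "plugin": count
 * read I/O calls in an in-kernel map that user space can query.  Compile
 * with clang -O2 -target bpf; load and attach with a user-space loader. */
#include <linux/bpf.h>
#include <linux/types.h>

/* Minimal stand-ins for the helper definitions from the kernel's samples/bpf. */
#define SEC(name) __attribute__((section(name), used))

static void *(*bpf_map_lookup_elem)(void *map, const void *key) =
        (void *) BPF_FUNC_map_lookup_elem;

struct bpf_map_def {
        unsigned int type;
        unsigned int key_size;
        unsigned int value_size;
        unsigned int max_entries;
};

/* One shared 64-bit counter, visible to user space through the map API. */
struct bpf_map_def SEC("maps") io_count = {
        .type        = BPF_MAP_TYPE_ARRAY,
        .key_size    = sizeof(__u32),
        .value_size  = sizeof(__u64),
        .max_entries = 1,
};

/* Runs inside the kernel on every vfs_read() call once the kprobe is attached. */
SEC("kprobe/vfs_read")
int count_reads(void *ctx)
{
        __u32 key = 0;
        __u64 *val = bpf_map_lookup_elem(&io_count, &key);

        if (val)
                __sync_fetch_and_add(val, 1);
        return 0;
}

char _license[] SEC("license") = "GPL";
```

Detaching the probe and closing its file descriptors removes the program again, which is the “loaded and unloaded dynamically” property in practice.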

What IO Visor creates is a kind of “underware” model, something that has some of the properties of middleware, some of user applications, and some of the OS (Kernel) itself.  You can put things into “underware” and create or modify services at the higher layer.  The monitoring mission that was the basis for BPF in the first place has, for example, been implemented as an IO Visor case study.

What’s profound about IO Visor is the fact that it can be used to create an “underservice” whose components are distributed through the whole range of Linux OS deployments for something like SDN or NFV.  An obvious example is a virtual switch or router “service” distributed among all of the various hosts and running as a functional part of the Kernel.  You could create a security service as well, in various ways, and there’s an example of that on the IO Visor link I referenced above.

Some of the advantages of this approach in a general sense—performance, security, and agility—are easily seen from the basic architecture.  If you dig a bit you can find other benefits, and it’s in these that the impact on SDN and NFV is most likely to emerge.

Signaling and management are absolutely critical in both SDN and NFV, and you can see that by applying IO Visor and plugins to a signaling/management service you could create a virtual out-of-band connection service, accessible under very specific (secure, auditable, governable) terms by higher-layer functions.  This could go a long way toward securing the critical internal exchanges of both technologies, the compromise of which could create a complete security/governance disaster.

Another feature is the distribution of basic functions like DNS, DHCP, and load balancing.  You could make these services part of the secure Kernel and give applications a basic port through which to access them.  Like the port of my hypothetical signaling/management network above, it would be limited in functionality and thus virtually impossible to hack.

If you’re going to do packet monitoring in a virtual world, you need virtual probes, and there’s already an example of how to enlist IO Visor to create this sort of thing as a per-OS service, distributed to all nodes where you deploy virtual functions or virtual switch/routers.  Management/monitoring as a service can be a reality with this model.
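To show what the management side of that could look like, here’s a hedged sketch of a consumer for the hypothetical counter map from the earlier plugin sketch (a packet probe would follow the same pattern, just attached at a network hook rather than vfs_read).  It assumes the probe has already been loaded and that its map file descriptor, map_fd, has been handed to the management agent by whatever loaded it; with that in hand, “monitoring as a service” reduces to polling an in-kernel map through the bpf() system call.

```c
/* Hypothetical management-side reader for the per-node probe sketched
 * earlier.  "map_fd" is assumed to be the file descriptor of the BPF map
 * the in-kernel probe updates; how it's obtained (passed down from the
 * loader, for instance) is outside this sketch. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <linux/bpf.h>

/* Read the 64-bit counter stored in slot "key" of a BPF array map. */
static long read_counter(int map_fd, __u32 key, __u64 *value)
{
        union bpf_attr attr;

        memset(&attr, 0, sizeof(attr));
        attr.map_fd = map_fd;
        attr.key    = (__u64)(unsigned long)&key;
        attr.value  = (__u64)(unsigned long)value;

        return syscall(__NR_bpf, BPF_MAP_LOOKUP_ELEM, &attr, sizeof(attr));
}

/* Poll the counter once a second; a real management agent would export
 * this to whatever monitoring service sits above it. */
void poll_probe(int map_fd)
{
        __u64 count;

        for (;;) {
                if (read_counter(map_fd, 0, &count) == 0)
                        printf("events counted in-kernel so far: %llu\n",
                               (unsigned long long)count);
                sleep(1);
        }
}
```

Because the counters live in the Kernel and are updated in place, the probe adds no per-event trips to user space; the management agent pays only for whatever polling interval it chooses.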

NFV in particular could benefit from this approach, but here’s where “impact” could mean more than just reaping more benefits.  You can load IO Visor plugins dynamically, which means that you could load them into a Kernel as needed.  That could mean that NFV deployment orchestration and management would need to contend with “underware” conditioning as well as simply loading VNFs.  It would certainly mean that you’d want to take stock of the IO Visor features a given set of VNFs might want, and decide which you’d elect to bind persistently into the kernel and which you’d make dynamic.

This raises another obvious point.  One of the big benefits of the IO Visor approach is that it supports the creation of distributable kernel-based services.  If that’s what you’re aiming for, you can’t just have people writing random IO Visor plugins and hoping they come together.  You need to frame the service first, then implement it via plugins.  I’ve blogged about the notion in the past, and it’s part of my ExperiaSphere model; I call it “infrastructure services”.  Since you don’t need to deploy something that’s already part of the kernel (once you’ve put it there), you need to conceptualize how you use a “resident” element like that as part of a virtual function implementation.

This probably sounds pretty geeky, and it is.  The membership in the project is much more limited than that of the ONF or the ETSI NFV ISG.  There are three members who should make everyone sit up, though.  Intel obviously has a lot of interest in making servers into universal fountains of functionality, and they’re in this.  Cisco, ever the hedger of bets in the network/IT evolution, is also a member of the IO Visor Project.  But the name that should have SDN and NFV vendors quaking is Huawei.  While they’re not a big SDN/NFV name in a PR sense, they’ve been working hard to make themselves into a functional leader, not just a price leader.

And IO Visor might just be the way to do that.  I think IO Visor is disruptive, revolutionary.  I think it brings literally unparalleled agility to the Linux kernel, taking classic OSs forward into a dynamic age.  It opens entirely new models for distributed network services, for NFV, for SDN, for control and management plane interactions.  It could even become a framework for making Linux into the first OS that’s truly virtualized, the optimum platform for cloud computing and NFV.  You probably won’t see much about this in the media, and what you see probably won’t do it justice.  Do yourself a favor, especially if you’re on the leading edge of SDN, NFV, or the cloud.  Look into this.