Let’s Stop Thinking Small About Network Virtualization

Somebody told me last week that network virtualization was well underway, which surprised me because I don’t think it’s really even begun.  The difference in view comes down to two questions: is “acceptance” equivalent to deployment, and is a different way of doing the same thing really the same as doing something different?

The issue we have with network virtualization, in which category I’ll put both SDN and NFV, is much the same as we have with the cloud.  We presume that a totally different way of doing things can be harnessed only to do what we’ve been able to do all along—presumably cheaper.  If all the cloud can offer is a lower cost because of economies of scale, then most enterprises will get enough scale through simple virtualization not to need public or private cloud services at all.  The cloud will succeed because we’ll build new application architectures to exploit its properties.  Network virtualization will be the same.

Traditional network services created through cooperative, service-specific infrastructures impose a single set of connection rules as the price of sharing the cost among users.  Virtualization, at the network level, should allow us to define service connection rules on a per-user and per-service basis without interfering with the cost sharing.  There are two elements to service connection rules—the address-space or membership element and the forwarding rules.  With today’s networks we have default connectivity, onto which we bolt connection controls and policies, and our vision of forwarding packets is very one-dimensional: a lookup on an arbitrary address.
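To make the two elements concrete, here’s a minimal sketch of a per-service connection rule with a membership element and a forwarding element, and no default connectivity.  The class and names are invented for illustration, not any real product’s API.

```python
from dataclasses import dataclass, field

@dataclass
class ConnectionRules:
    """Hypothetical per-service connection rules: membership plus forwarding."""
    members: set = field(default_factory=set)       # address-space/membership element
    forwarding: dict = field(default_factory=dict)  # address -> handling instruction

    def admit(self, address, handling):
        # Joining the service means getting an address AND a forwarding rule.
        self.members.add(address)
        self.forwarding[address] = handling

    def handle(self, address):
        # No default connectivity: a non-member address simply has no handling.
        return self.forwarding.get(address)

video_service = ConnectionRules()
video_service.admit("user-a", "forward:queue-hi")
print(video_service.handle("user-a"))   # member gets service-specific handling
print(video_service.handle("user-z"))   # non-member gets nothing by default
```

The point of the sketch is that membership and handling are defined per service, not inherited from a single shared connectivity model.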

Virtual networking should relax these constraints because it should allow us to impose any convenient addressing or connection model on a common infrastructure framework.  That means networks would work according to the utility drivers associated with each user and application, with no compromises made simply to be part of the team or to secure economy of scale.

One of the most important prerequisites for this is breaking down the one-user-one-network rule.  We tend to think of networks today in a static and exclusive membership sense.  I have an address on a network, associated with a network service access point.  Send something to it and I get that something.  We already know from Amazon’s and Google’s experience in the cloud that you need to change that simple approach.  In virtual networking, a user is a vertical stack of NSAP/addresses, one for each connection network they’re a member of.  Google represents this well in their Andromeda documents, but Andromeda is still all about IP, and there’s no reason to presume that NSAPs all have the same protocol, or any of the protocols that are in use today.
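Here’s a toy illustration of that vertical stack: one user holding one NSAP per connection network, where the networks need not share a protocol.  All the network names, protocols, and addresses are hypothetical.

```python
# A user as a "vertical stack" of NSAP/addresses, one per connection network.
# Nothing forces the networks to share a protocol or an address format.
user_stack = [
    {"network": "corporate-vpn", "protocol": "IPv4",   "nsap": "10.1.2.3"},
    {"network": "iot-mesh",      "protocol": "custom", "nsap": "node:47"},
    {"network": "public",        "protocol": "IPv6",   "nsap": "2001:db8::1"},
]

def nsap_on(stack, network):
    """Return this user's address on a given connection network, if a member."""
    for entry in stack:
        if entry["network"] == network:
            return entry["nsap"]
    return None   # not a member of that network at all

print(nsap_on(user_stack, "iot-mesh"))   # node:47
print(nsap_on(user_stack, "legacy"))     # None (no membership, no address)
```

Membership in a network and possession of an address on it are the same thing here, which is the inversion of the default-connectivity model.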

Multi-networkism (to coin a phrase) like this is critical if elastic networking is to be useful because we have to presume that the intersection of user and application/need will be multifaceted.  You need to be able to be a member of different networks if you want networking to be different.

The next step is getting traffic to users.  Forwarding rules define how a packet is examined by nodes to determine how to route it onward.  They associate an address and handling instructions, so they are linked to the address/membership side of the picture by the address concept.  The address is your “name” in a connection network.  The forwarding rules define how the address is interpreted to guide handling and delivery.

OpenFlow’s real advance (which sadly isn’t completely realized for reasons we’ll get to) is that it defines a more elastic model of describing packet-handling by nodes.  Ideally what you’d like to have is a kind of mask-and-match or template structure that lets you pick what an “address” is from the range of stuff you’re presented with in the packet header.  Ideally, you’d also like to be able to transform the stuff you find, even to the extent of doing some high-speed local lookup and using the result.  The architecture might not work for all applications, but we should not constrain virtualization at the network level by the limits of current technology.  We have to accommodate those limits, but not perpetually.
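A mask-and-match rule is easy to sketch: the mask selects which header bits count as the “address,” so different rules can carve different fields out of the same packet.  This toy version treats the header as a single integer; the masks, values, and actions are invented for illustration.

```python
def matches(header: int, mask: int, value: int) -> bool:
    """True if the header bits selected by the mask equal the rule's value."""
    return (header & mask) == value

# (mask, value, action): each rule defines its own notion of "address"
rules = [
    (0xFF000000, 0x0A000000, "route-to-vpn"),     # top byte is the address here
    (0x0000FFFF, 0x00001234, "hand-to-optical"),  # low 16 bits are the address here
]

def forward(header: int) -> str:
    """Apply the first matching rule; unmatched traffic is dropped."""
    for mask, value, action in rules:
        if matches(header, mask, value):
            return action
    return "drop"

print(forward(0x0A0B1234))  # top byte 0x0A matches the first rule
print(forward(0x22001234))  # low bits 0x1234 match the second rule
print(forward(0x22005678))  # matches neither rule
```

The elasticity is that “address” is whatever a rule’s mask says it is, rather than a fixed field baked into the forwarding model.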

An example of transformation-driven handling is the optical routing issue.  Optical is really a special case of non-packet traffic segmentation; TDM is another.  The point is that if there is any characteristic that separates traffic flows (and there’d better be, or routing is kind of moot), we should be able to abstract that characteristic and then convert the abstraction back to the form needed for the next hop.  A flow that’s incoming on Lambda A might be outgoing as a TDM slot; as long as we know the association we should be able to describe the handling.
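The abstract-then-rebind idea can be sketched in a few lines: whatever segments traffic on the inbound side (a lambda, a TDM slot, a label) is mapped to an abstract flow identity, and that identity is then bound to whatever form the next hop needs.  The flow names and bindings below are invented.

```python
# Inbound binding: what segments traffic on this hop, mapped to an abstract flow.
inbound = {("lambda", "A"): "flow-17"}     # Lambda A carries flow-17

# Outbound binding: the form the next hop needs for that same flow.
outbound = {"flow-17": ("tdm-slot", 4)}    # flow-17 leaves in TDM slot 4

def cross_connect(kind, ident):
    """Abstract the inbound segmentation, then rebind it for the next hop."""
    flow = inbound.get((kind, ident))
    if flow is None:
        return None                        # no known flow on that segment
    return outbound.get(flow)

print(cross_connect("lambda", "A"))        # the lambda becomes a TDM slot
```

Nothing in the handling logic cares that one side is optical and the other is TDM; only the bindings do.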

Forwarding rules also unite the vertical stack of NSAP/addresses and the user who represents the logical top of that stack.  Every virtual network in the stack associates that user with an NSAP and the rules needed to get packets to and from the user.  How exactly that would work, and how complicated it would be, depends on how homogeneous you think the networks are.

If we presume (as is the case in cloud computing today) that the virtual networks are all IP networks, then what we have is multi-addressed users.  The presumption is that every virtual network has its own address space and that a packet sent by the user is linked to a virtual network by the address the user presents or by the destination address.  When a packet is received, it’s sent to the user and it can be presumed that the virtual-network affiliation of the origin doesn’t really matter.  This is consistent with the one-address-space Internet IP model.
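In the all-IP model, steering a sent packet to the right virtual network can be as simple as matching the address the user presents against each network’s address space.  A minimal sketch, with invented network names and illustrative address ranges:

```python
import ipaddress

# Each virtual network owns its own address space (ranges are illustrative).
virtual_networks = {
    "corp-vpn":  ipaddress.ip_network("10.1.0.0/16"),
    "dev-cloud": ipaddress.ip_network("10.2.0.0/16"),
}

def network_for(source_ip: str):
    """Link a packet to a virtual network by the source address presented."""
    addr = ipaddress.ip_address(source_ip)
    for name, space in virtual_networks.items():
        if addr in space:
            return name
    return None   # address belongs to no virtual network in the stack

print(network_for("10.1.2.3"))    # lands in corp-vpn
print(network_for("10.2.9.9"))    # lands in dev-cloud
```

On the receive side, as the text notes, no such classification is needed; the packet is just delivered to the user.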

This is the cloud-inspired virtual network model today, the one Amazon and Google have deployed.  This model offers considerable advantages for future application-specific VPNs.  Imagine as-a-service apps presented with their own address space, connected outward via VPN into the virtual-network stacks of users.  Access to an application now depends on having a virtual-network-forwarding connection from that app’s NSAP to your vertical “stack”.

If we have different network memberships with different protocols in each, then network software in the user’s space would have to provide a means of separating the traffic.  You could assign multiple logical software ports, put a network ID in the packet, or use any other mechanism handy.  This shows that for virtual networking to reach its full potential we’ll need to examine how software accesses network connections.  Otherwise usage practices of the present will tie down our options for the future.
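One of the mechanisms mentioned above, tagging each packet with a network ID and dispatching to a per-network handler in the user’s software, can be sketched as follows.  The packet layout and handler registration are hypothetical.

```python
# Per-network handlers registered by software in the user's space.
handlers = {}

def register(net_id, handler):
    handlers[net_id] = handler

received = []
register("vpn",  lambda payload: received.append(("vpn", payload)))
register("mesh", lambda payload: received.append(("mesh", payload)))

def deliver(packet: bytes):
    """Demultiplex by network ID.  Packet layout (invented): b'<net-id>|<payload>'."""
    net_id, _, payload = packet.partition(b"|")
    handler = handlers.get(net_id.decode())
    if handler:
        handler(payload)   # traffic from each network goes to its own handler

deliver(b"vpn|hello")
deliver(b"mesh|ping")
print(received)
```

Multiple logical software ports would accomplish the same separation; the point is that the demultiplexing lives in how software accesses the network, not in the network itself.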

I’m not saying that this is the only approach to virtual networking that realizes its potential; obviously the benefit of virtual networking lies in large part in its agility, its ability to present many connection and forwarding models.  I do think this approach represents an advance from today, one that’s already being used by cloud giants, and so it’s the kind of thing that could start discussions that might break many out of excessive “IP-and-Ethernet-think”.  We need that to advance networking as far as it can be taken.