In my last blog, I mentioned that we might be on the verge of “application networking”. Some of my readers who know me sent emails asking if I could define exactly what I mean by that. I can do that, but perhaps better still I can offer the definition that enterprises themselves have evolved over the time I’ve surveyed them. I can also offer my view on where application networking is likely to go.
To users, “application networking” is the process of creating and sustaining connections among users and application components. That, perhaps, isn’t totally helpful, because in theory you could do that with a good shipping service. When you dig down a bit, users first add the notion that the connections have to support the quality of experience the users/workers require, offer the availability they need, and support the range of traffic that application/user loads create.
Functionality plus service-level agreement, right? That’s a fair start toward defining application networking as an intent model for connectivity. The big question, though, is the nature of the inputs and outputs, meaning how the users and application components connect and how traffic is routed among them. We have inherited our current model of connecting/routing from the Internet and even earlier service models, but is that what the users really want?
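To make that intent-model framing a bit more concrete, here is a minimal sketch in Python. The field names and numbers are my own illustration, not anything drawn from a standard: the idea is simply that the user declares who must be connected and what service level must be sustained, and leaves addressing and routing to the implementation.

```python
from dataclasses import dataclass

@dataclass
class ConnectivityIntent:
    """A hypothetical connectivity intent: who must be connected and what
    service level must be sustained, with no addresses or routes exposed."""
    endpoints: list                    # logical names only, e.g. "order-entry-ui"
    max_latency_ms: float = 100.0      # quality-of-experience target
    min_availability: float = 0.999    # availability the workers need
    peak_bandwidth_mbps: float = 50.0  # traffic range the application/user load creates

# The "outputs" -- which NSAPs get connected and how traffic is routed among
# them -- are the implementation's problem, not part of the intent itself.
intent = ConnectivityIntent(endpoints=["order-entry-ui", "billing-service"])
print(intent)
```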
Not according to the surveys I’ve done. Almost from the start (1989, for those interested in the specific history of the view), users have favored the notion of “logical networking”. A logical network is one where users and application components (resources, if you like) are known by logical names. The connections made are therefore logical connections, and things like URLs and IP addresses should never rear their ugly heads above the users’ horizon.
The notion of URLs (or URIs, to adopt the current terminology) has been around a long time and has been used as a means of linking a logical name to a physical network address. The problem of our age is that the physical network address of something is really the address of the “network service access point” or NSAP where that “something” is currently attached. That was fine when users were chained to desks and applications chained to mainframes. Today, it’s not so great. Virtualization means that resources move dynamically, and mobile devices and mobile workers mean users do too.
The resolution of URLs, which comes through the Domain Name System (DNS), is a potential pathway to logical networking. If every network user and network resource had a logical name registered in DNS, and the registration were updated whenever the user or resource moved from one NSAP to another, you’d have a kind of proto-logical-network model. The problem is that updating the DNS could take time and introduce an interruption in communication. This is less an issue for message/transaction activity than for real-time streams, but it’s still a factor. So is DNS security.
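As a toy sketch of that proto-logical-network idea (my own illustration, not a real DNS implementation), imagine a registry that maps logical names to current NSAPs and is re-registered whenever something moves; the window between a move and the update is exactly the interruption risk described above.

```python
import time

class LogicalNameRegistry:
    """Toy stand-in for a DNS-like registry: logical name -> current NSAP."""
    def __init__(self, ttl_seconds=30):
        self.ttl = ttl_seconds
        self.records = {}                    # name -> (nsap, time registered)

    def register(self, name, nsap):
        """Called whenever a user or resource attaches at a new NSAP."""
        self.records[name] = (nsap, time.time())

    def resolve(self, name):
        """Return the last-known NSAP; a record older than the TTL models the
        update lag that can interrupt communication."""
        nsap, registered_at = self.records[name]
        possibly_stale = (time.time() - registered_at) > self.ttl
        return nsap, possibly_stale

registry = LogicalNameRegistry()
registry.register("worker-jane", "nsap://branch-12/port-7")
registry.register("worker-jane", "nsap://mobile-gw-3/session-41")  # Jane moves
print(registry.resolve("worker-jane"))
```

Real-time streams suffer more than transactions here because they can’t simply retry against a fresh lookup; they have to survive the gap.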
If the specific implementation of DNS isn’t the best answer, then the general strategy of a registration-and-reference database that translates between logical name and NSAP probably is. Mobility management in today’s cellular networks is a form of this: a phone is associated with an IP address, but the traffic to the phone is piped through a tunnel that moves as the phone does. The process can be handled to minimize the impact on the conversation, and moving tunnels is surely more time-consuming than other modern options for traffic management, so those options should be able to do at least as well.
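In the same hedged spirit, here is a rough sketch of the registration-and-reference pattern the cellular analogy suggests (generic, not any specific mobility standard): the logical identity stays stable at an anchor point, and moving the endpoint just re-points the tunnel from the anchor to the new NSAP, so the other party never sees the change.

```python
class MobilityAnchor:
    """Toy version of cellular-style mobility management: traffic addressed to a
    stable logical identity is tunneled to whatever NSAP the endpoint is at now."""
    def __init__(self):
        self.tunnel_endpoint = {}   # logical identity -> current NSAP

    def attach(self, identity, nsap):
        """Re-anchor the tunnel when the device or worker moves; peers keep
        using the same logical identity and never see the new NSAP."""
        self.tunnel_endpoint[identity] = nsap

    def forward(self, identity, packet):
        nsap = self.tunnel_endpoint[identity]
        return f"tunnel {packet!r} to {nsap}"

anchor = MobilityAnchor()
anchor.attach("phone-555-0100", "nsap://cell-site-17")
anchor.attach("phone-555-0100", "nsap://cell-site-18")   # handover
print(anchor.forward("phone-555-0100", "voice-frame-001"))
```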
Users also think that load-balancing should be an attribute of an application network. This issue is a lot more complicated than it seems, because virtualization can move a resource a considerable distance, both in geographic and NSAP terms, and the optimum position of a load-balancer depends on where the scaled resources are hosted. In today’s enterprise services, this complication is exacerbated by the fact that wherever that optimum location is, it’s likely inside a VPN whose structure is opaque and where user processes can’t be directly hosted. What you really need in a VPN world is a distributed load-balancer that operates at the NSAPs near the users, so that each connection point can pick a resource instance based on any number of relevant factors, including traffic and the QoS impact of the selection.
I worked out a distributed load-balancer at one point in my network integration career. It issued a “bid” to resources and picked the one that responded with the best performance/load factor. That approach worked for the kind of application I was dealing with at the time, which generated a small number of requests separated by considerable think time. There are other approaches that are more generally useful, but keep in mind that complex scheduling doesn’t work well in distributed load-balancing; you can use either round-robin or random-selection approaches. You do have to ensure that the user connection points are kept up to date on the available resource points to balance among.
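Here is a rough, hypothetical reconstruction of that idea (the names and numbers are mine, not the original implementation): a balancer living at a user-side connection point that can pick by “bid”, by round-robin, or at random, using whatever list of resource instances it has been kept current on.

```python
import itertools
import random

class ConnectionPointBalancer:
    """Toy distributed load-balancer living at a user-side NSAP."""
    def __init__(self, resource_instances):
        # This candidate list must be kept current, e.g. pushed from the
        # registry whenever instances scale out, scale in, or move.
        self.resources = list(resource_instances)
        self._rr = itertools.cycle(self.resources)

    def pick_by_bid(self, probe):
        """'Bid' mode: ask each instance for a performance/load score via the
        caller-supplied probe function and take the best (lowest) one."""
        return min(self.resources, key=probe)

    def pick_round_robin(self):
        """Simple rotation; no coordination needed between connection points."""
        return next(self._rr)

    def pick_random(self):
        return random.choice(self.resources)

balancer = ConnectionPointBalancer(["app-instance-a", "app-instance-b", "app-instance-c"])
fake_load = {"app-instance-a": 0.7, "app-instance-b": 0.2, "app-instance-c": 0.5}
print(balancer.pick_by_bid(lambda r: fake_load[r]))   # -> app-instance-b
print(balancer.pick_round_robin())                    # -> app-instance-a
```

The bid mode suits low-rate, think-time-heavy workloads because the probe cost is amortized over long pauses; the round-robin and random modes suit everything else precisely because they need no coordination.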
All of this is fairly easily accomplished with at least the extended versions of SD-WAN, and it may also be available through broader data-center-and-user-office forms of SDN. The two technologies started from different places (SDN in data center networking, SD-WAN in branch networking), but both seem to be broadening their scope to envelop a little of the other’s turf.
Where SD-WAN has the advantage, in my view, is that it has always been considered a service-overlay solution to enterprise connectivity. That means it’s fundamentally a logical-networking mechanism. If logical networking is a fundamental enterprise requirement, as my surveys show it is, then SD-WAN, at least in some implementations, has already accepted it. SDN is still largely focused on “networking”, meaning on IP-addressed entities, even in overlay form.
We have work today in groups like the IETF on location-independent addressing, meaning some means of separating the logical “who” from the network “where”, but most of these efforts are still hung up on IP addresses of some type. The SD-WAN community could take the concept where it needs to go, at least for some of the “entities” being connected. Do they have the answer to full resource-side logical networking, including load-balancing? Not yet, and I think that’s going to become a major differentiating point very quickly, particularly as MSPs and network operators vie for differentiation in the evolving service market.
We should have realized, from the dawn of virtualization and the emergence of Nicira and multi-tenant data centers, that this sort of thing was going to be important. A smart approach to logical or application networking could have been devised when we had plenty of time. Now, the market is going to demand a solution long before any realistic open standards process could be expected to do more than decide on its own name and governance policies. There are now three camps in play to create ad hoc strategies: SD-WAN vendors, MSPs, and network operators. Whoever wins the race might gain an enormous competitive advantage, and set the tone of networking for the foreseeable future.