SDN World Congress Day One

I haven’t been to trade conferences in a while, and I’d forgotten how exhausting they can be!  Yesterday I had a chance to peruse the booths at SDN World Congress, and I met with Charlie Ashton of 6WIND for a demo of their technology.  I’ve blogged a bit before about the importance of data path acceleration in NFV, but this was an opportunity to see exactly what difference it makes.

The demo is simple: you set up a server with an OVS instance connected to a load generator, pump over a dozen ports full of traffic, and measure the packet throughput per port, noting the variation among the ports and the “jitter” of a given port over time.  You then repeat the test with 6WIND’s technology installed.  The difference is astonishing.  Packet throughput increased about 15 times, and both the port-to-port variation in performance and the per-port jitter were cut to less than a third of their original values.
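
Out of curiosity, here’s a rough sketch of how the per-port metrics in a test like this could be computed; the port names and throughput samples below are invented, not the demo’s actual numbers or tooling.

```python
# Sketch only: invented throughput samples (Mpps) per port over time.
from statistics import mean, pstdev

samples = {
    "port0": [1.02, 0.95, 1.10, 0.98],
    "port1": [0.83, 0.91, 0.88, 0.79],
    "port2": [1.15, 1.08, 1.21, 1.12],
}

per_port_mean = {port: mean(vals) for port, vals in samples.items()}
overall_mean = mean(per_port_mean.values())

# Variation among ports: spread of per-port means relative to the overall mean.
variation_pct = 100 * (max(per_port_mean.values()) - min(per_port_mean.values())) / overall_mean

# "Jitter" of a given port: how much that one port's throughput swings over time.
jitter_pct = {port: 100 * pstdev(vals) / mean(vals) for port, vals in samples.items()}

print(f"variation among ports: {variation_pct:.1f}%")
for port, jitter in jitter_pct.items():
    print(f"{port} jitter: {jitter:.1f}%")
```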

If you’re a network operator trying to get the most bang from your cloud service or deploying NFV (6WIND is a CloudNFV founding member), a 15x throughput increase is in itself a significant benefit.  It translates to the ability to load a server with more network activity, meaning that network connections wouldn’t be the limiting factor in VM density.  It would also clearly improve the performance and response time of the application.
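
As a back-of-envelope illustration of the density point, with figures that are assumptions rather than demo data (the baseline packet rate and per-VNF requirement are invented):

```python
# Invented numbers: how a ~15x packet-rate uplift moves the point where
# networking, rather than CPU or memory, caps how many VNFs a server can host.
baseline_mpps = 1.5                    # assumed OVS packet rate for the whole server
accelerated_mpps = baseline_mpps * 15  # roughly the uplift seen in the demo
per_vnf_mpps = 0.25                    # assumed packet rate each hosted VNF needs

print("network-limited VNFs before:", int(baseline_mpps // per_vnf_mpps))     # 6
print("network-limited VNFs after: ", int(accelerated_mpps // per_vnf_mpps))  # 90
```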

With respect to the variability, the most fundamental presumption of a cloud is resource equivalence, meaning that when you need to assign an application or VNF to a pool of VMs you can pick any VM that’s available.  If you have significant variation (and I observed almost 20% variation among ports without 6WIND’s acceleration), then you don’t get equivalent performance from each VM, and you have to try to match apps to the VMs that give you the best connectivity.  The problem is that this varies over time, as the per-port jitter of about 13% showed.

When you don’t get consistent performance out of a system, you have to engineer your SLAs conservatively, which again means under-loading the servers and increasing your cost per user.  You also have to expect that performance issues will increase customer complaints, and if the variability is extreme enough you may find you can’t write a business-grade SLA for the service at all.  That would significantly reduce the potential revenue yield too.
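
A simplified way to see why that conservatism eats capacity: if you can only commit to what the worst port delivers at its worst moment, the observed variation and jitter come straight off the top.  The multiplication below is my simplification, not an operator’s actual SLA math.

```python
# Simplified SLA headroom calculation using the figures observed above.
nominal = 1.0          # normalized per-port throughput
port_variation = 0.20  # ~20% spread between best and worst ports
port_jitter = 0.13     # ~13% swing on a given port over time

committable = nominal * (1 - port_variation) * (1 - port_jitter)
print(f"committable fraction of nominal capacity: {committable:.2f}")  # ~0.70
```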

All this adds up to a totally different picture of first cost, the dreaded period when carrier investment in a service drives cash flow negative, where it stays until you can sell enough, collect your fees, and bring things back to normal.  Cash flow starts off like the classic sine wave, dipping before it rises.  Anything you do to reduce yield and increase costs makes first cost worse, and that makes it harder to get approval for a service rollout.
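
A toy model of that curve, with invented figures, shows the dip-and-recover shape:

```python
# Toy first-cost curve: upfront investment plus running costs, then revenue
# ramps; cumulative cash flow dips before it climbs back. All numbers invented.
upfront_investment = 100
monthly_cost = 10
monthly_revenue = [0, 5, 10, 20, 30, 40, 50, 50, 50, 50, 50, 50]

cumulative = -upfront_investment
for month, revenue in enumerate(monthly_revenue, start=1):
    cumulative += revenue - monthly_cost
    print(f"month {month:2d}: cumulative cash flow {cumulative}")
```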

Another announcement at the show came from Mellanox, which has joined the CloudNFV initiative as an Integration Partner, the first firm to complete that process since the project launched in August (Metaswitch, whose Clearwater IMS was announced as the demo use case, was a partner at the time of the launch).  Initially, Mellanox will supply 40G NICs to allow high-throughput testing of the IMS use case for the public traffic-carrying demonstration in December, and the first cards are already on their way to the Dell lab for integration.

Mellanox is also the first partner in the newly announced Active Data Center component of CloudNFV.  The importance of getting a complete platform integrated and running is something all operators will appreciate, and Mellanox has a line of fabric switches and storage networking solutions that will enhance the “horizontal” or inter-VNF connectivity that virtually all NFV applications generate.  With a fabric, the relative locations of the VMs selected to host a set of virtual functions won’t produce different performance levels, as they can in classic tiered data center switch configurations where varying hop counts mean varying packet transit delays.
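
A trivial sketch of the hop-count point, with an invented per-hop delay and placements (not measurements from any real fabric):

```python
# Invented per-hop transit delay; in a flat fabric every VM pair is effectively
# one hop apart, while a tiered topology adds hops for cross-tier placements.
per_hop_us = 2.0

vm_pairs = [("vnf-a", "vnf-b"), ("vnf-a", "vnf-c")]
fabric_hops = {("vnf-a", "vnf-b"): 1, ("vnf-a", "vnf-c"): 1}
tiered_hops = {("vnf-a", "vnf-b"): 1, ("vnf-a", "vnf-c"): 3}

for pair in vm_pairs:
    print(pair, f"fabric: {fabric_hops[pair] * per_hop_us} us,",
          f"tiered: {tiered_hops[pair] * per_hop_us} us")
```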

With the Mellanox addition, CloudNFV’s foundation platform for execution consists of Dell switches and servers, Mellanox NICs (and, in the future, switches), Red Hat Enterprise Linux, 6WIND data path acceleration, Overture carrier Ethernet switching, and Qosmos traffic probes (Qosmos also supplies DPI-as-a-service for packet-inspection-based use cases and the evolving Testing-and-Monitoring-as-a-Service framework).

Active Data Center would host the VMs for CloudNFV, and it could also be used to run the software that deploys and manages services that include NFV.  EnterpriseWeb provides the foundation software platform for the optimization and management tasks, and Overture provides the orchestration and OpenStack interface.  Both of these can run on Active Data Center, but the specific requirements for hosting them vary with the configuration and are generally less than what’s needed to run the VMs hosting the virtual functions themselves.

There are a number of big network operators at the show, speaking or captaining discussion tables, and in conversations I found some interesting common threads.  One was that operator interest in and commitment to both SDN and NFV remains very high, but operators are also very aware of the work still to be done to get the most bang for their buck from both technologies.  One issue raised by almost everyone I talked to was multi-vendor support and open interfaces.  Another was management, which is so far only vaguely addressed.

I think we’re going to see progress in both these areas, in no small part because of the operators’ emphasis on them.  For SDN, the biggest issue may be settling on a controller framework, and there I think OpenDaylight is the leading contender.  I’ve always liked the inclusion of the Service Abstraction Layer in the software, but as I heard from talking with the folks in their booth, OpenDaylight is like NFV in that it’s a project and not a product.  Somebody is going to have to run with it in a formalized way to make it suitable for carrier or even enterprise use, and Red Hat may need to make a more formal commitment to that to ensure that SDN shines.

For NFV, early development is (according to operators who have seen the presentations) largely proceeding in vendor-specific projects.  CloudNFV is an open approach (data model and optimization demos are being offered to operators at the SDN World Congress event), but the OpenStack Neutron interface is a limiting factor in what can be done to build connections, both across vendor boundaries and in terms of general service/connection models.  We’ll probably see more NFV options as currently “secret” vendor NFV projects become public, and hopefully some of the offerings will have strong support for multiple vendors and open interfaces.  Operators need to transition gracefully to either SDN or NFV, and single-vendor support won’t make that easy.
