Since I’ve blogged recently about the progress (or lack of it!) from proof-of-concept to field trials for SDN and NFV, I’ve gotten some emails from readers asking just what a “field trial” is about. I took a look at operator project practices in 2013 as a part of my survey, and there was some interesting input on how operators took a new technology from consideration to deployment. Given that’s what’s likely to start for SDN and NFV in 2015, this may be a good time to look at that flow.
The first thing I found interesting in my survey was that operators didn’t have a consistent approach to transitioning to deployment for new technologies. While almost three-quarters of them said that they followed specific procedures in all their test-and-trial phases, a more detailed look at their recent or ongoing projects seemed to show otherwise.
Whatever you call the various steps in test-and-trial, there are really three phases that operators will generally recognize. The first is the lab trial, the second the field trial, and the final one the pilot deployment/test. What is in each of these phases, or supposed to be in them, sets the framework for proving out new approaches to services, operations, and infrastructure.
Operators were fairly consistent in describing the first of their goals for a lab trial. A new technology has to work, meaning that it has to perform as expected when deployed as recommended. Most operators said that their lab trials weren’t necessarily done in a lab; the first step was typically to do a limited installation of the new technology, the second to set up what could be called a “minimalist network” in which the new stuff should operate, and then to validate the technology itself.
If we cast this process in SDN and NFV terms, what we’d be saying is that the first goal in a lab trial is to see if you can actually build a network of the technical elements and have it pass traffic in a stable way. The framework in which this validation is run is typically selected from a set of possible applications of that technology. Operators say that they don’t necessarily pick the application that makes the most sense in the long term, but rather try to balance the difficulties in doing the test against the useful information that can be gained.
One operator made a telling comment about the outcome of a lab trial: “A properly conducted lab trial is always successful.” That meant that the goal of such a trial is to find the truth about the basic technology, not to prove the technology is worthy of deployment. In other words, it’s perfectly fine for a “proof of concept” to fail to prove the concept. Operators say that somewhere between one in eight and one in six actually do prove the concept; the rest of the trials don’t result in deployment.
The next phase of the technology evolution validation process is the field trial, which two operators out of three say has to prove the business case. The biggest inconsistencies in practices come to light in the transition between lab and field trials, and the specific differences come from how much the first is expected to prepare for the second.
Operators who have good track records with technology evaluation almost uniformly make preparation for a field trial the second goal of the lab trial (after basic technology validation). That preparation is where the operators’ business case for the technology enters into the process. A lab trial, says this group, has to establish just what steps have to be proved in order to make a business case. You advance from lab trial to field trial because you can establish that those steps can be taken, that there is at least one business case to make. Your primary goal for the field trial is then to validate that business case.
More than half the operators in my survey didn’t formally work this way, though nearly all said that was the right approach. The majority said that in most cases, their lab trials ended with a “technology case”, and that some formal sponsorship of the next step was necessary to establish a field trial. Operators who worked this way sometimes stranded 90% of their lab trials in the lab because they didn’t get that next-step sponsorship, and they also had a field trial success rate significantly lower than operators who made field-trial goal and design management a final step in their lab trials.
Most of the “enlightened” operators also said that a field trial should inherit technical issues from the lab trial, if there were issues that couldn’t be proved out in the lab. When I asked for examples of the sort of issue a lab trial couldn’t prove, operations integration was the number one point. The operators agreed that you had to introduce operations integration in the lab trial phase, but also that the lab trials were almost never large enough to expose you to a reasonable set of the issues. One operator called the issue-determination goal of a lab trial the “sensitivity analysis”: under what conditions does this work, and can we sustain those conditions in a live service?
One of the reasons for all the drama in the lab-to-field transition is that most operators say this is a political shift as well as a shift in procedures and goals. A good lab trial is likely run by the office of the CTO, while field trials are best run by operations, in liaison with the CTO lead on the lab trial portion. The most successful operators have established cross-organizational teams, reporting directly to the CEO or executive committee, to control new technology assessments from day one to deployment. That avoids the political transition.
A specific issue operators report in the lab-to-field transition is the framework of the test. Remember that operators said you’d pick a lab trial with the goal of balancing the expense and difficulty of the trial with the insights you could expect to gain. Most operators said that their lab-trial framework wasn’t supposed to be the ideal framework in which to make a business case, and yet most operators said they tended to take their lab-trial framework into a field trial without considering whether they actually had a business case to make.
The transition from field trial to pilot deployment illustrates why blundering forward with a technical proof of concept isn’t the right answer. Nearly every operator said that their pilot deployment would be based on their field-trial framework. If that, in turn, was inherited from a lab trial or PoC that wasn’t designed to prove a business case, then there’s a good chance no business case has been, or could be, proven.
This all explains the view expressed by operators a year later, in my survey in the spring of 2014. Remember that they said that they could not, at that point, make a business case for NFV and had no trials or PoCs in process that could do that. With respect to NFV, the operators also indicated they had less business-case injection into their lab trial or PoC processes than usual, and less involvement or liaison with operations. The reason was that NFV had an unusually strong tie to the CTO organization, which they said was because NFV was an evolving standard and standards were traditionally handled out of the CTO’s organization.
For NFV, and for SDN, this is all very important for operators and vendors alike. Right now, past history suggests that there will be a delay in field trials where a proper foundation has not been laid in the lab, and I think it’s clear that’s been happening. Past history also suggests that the same conditions will generate an unusually high rate of project failure when field trials are launched, and a longer trial period than usual.
This is why I’m actually kind of glad that the TMF and the NFV ISG haven’t addressed the operations side of NFV properly, and that SDN operations is similarly under-thought. What we probably need most now is a group of ambitious vendors who are prepared to take some bold steps to test their own notions of the right answer. One successful trial will generate enormous momentum for the concept that succeeds, and quickly realign the efforts of other operators and vendors alike. That’s what I think we can expect to see in 2015.