Surveys and Research: How Deep is the Ocean, How High is the Sky?

Why is it so difficult to find accurate predictions of, or even current statistics on, technology evolution?  One of the biggest complaints CIOs have raised with me in the last two months is that they don’t get as much useful planning information as they need, and most think they get less of it than they used to.  It’s not that we don’t have plenty of numbers, after all.

The biggest problem is what we might call utility bias.  My own work in this space over decades generates an interesting statistic: among sellers, virtually every research purchase is made to buttress a marketing position.  Among buyers, the primary goal (for a whopping 73%) is to validate a position or course of action they’re advocating for their company.  Only 15% look for a truly objective assessment of the space involved; the rest have a mixture of motives with no dominant one.

Years ago, I got an RFP from a big research firm, and the mission (paraphrasing) was “Develop a report validating the one-billion-dollar annual market for widgets.”  An analyst in the firm told me the number came from research they’d done, showing that was the optimum market size for selling reports (I didn’t respond to the RFP, by the way).

Suppose that you want to figure out what’s actually going on.  Are there any steps you can take to weed out the chaff?  There are no guarantees, but there are things you can do and signs you can look for.

The practical step is to place the research in the broader context of the tech market.  For example, if a report says that there will be a billion dollars’ worth of IoT sensors sold, you need to know two things.  First, what exactly does the report consider an “IoT sensor”?  Second, what is current spending on the macro-tech market the target space is part of (for IoT sensors, that might be industrial control)?  If the forecast for a subsection of a market is a very big piece of current total-market spending, you should be suspicious: a massive business case would be needed to drive that kind of change.
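
To make that sanity check concrete, here’s a minimal sketch in Python; all the figures and the suspicion threshold are hypothetical illustrations, not numbers from any real report:

```python
# Sanity-check a market forecast against the macro market it sits inside.
# All figures and the threshold below are hypothetical, purely for illustration.

def forecast_sanity_check(forecast_usd, macro_market_usd, threshold=0.10):
    """Return the forecast's share of current macro-market spending and
    whether that share is big enough to demand a serious business case."""
    share = forecast_usd / macro_market_usd
    return share, share > threshold

# Hypothetical: a $1B sensor forecast vs. $4B of current macro-market spending
share, suspicious = forecast_sanity_check(1e9, 4e9)
print(f"Forecast is {share:.0%} of current macro spending; suspicious: {suspicious}")
```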

The next signal is reliance on predictions or reports of buyer adoption of a technology.  We always see citations like “over 50% of buyers plan to adopt….”  There are even more direct claims: “A third of operators report they’re deploying….”  These kinds of statistics are susceptible to three distorting factors.

Factor one is that the surveys talk to the wrong people.  I spent an enormous amount of time and effort when I started doing buyer surveys just identifying the right kind of person to talk with, and then finding people in that job who would actually talk with me.  Sustaining a survey base over time is very difficult, and what tends to happen in the real world is that research firms have a stable of respondents they can count on and tend to use repeatedly.  I was once asked by a company to assess, confidentially, a research report created for them by one of the top analyst firms.  They were doubtful of the conclusions, and when I audited the survey process, it was clear that only about a quarter of the people surveyed had even a loose job connection with the technology.  Most worked at companies that could never have consumed it, so the survey base was wrong for the survey.  Be sure you know who’s providing the information behind research; require a profile of the base.
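
If you do get that profile, the audit itself is simple arithmetic.  Here’s a sketch under assumed data; the respondent roles and what counts as “relevant” are hypothetical:

```python
# Audit a survey base: what fraction of respondents plausibly touch the tech?
# Respondent roles and the "relevant" set are hypothetical illustrations.

respondents = [
    {"id": 1, "role": "network architect"},
    {"id": 2, "role": "HR manager"},
    {"id": 3, "role": "data center operations"},
    {"id": 4, "role": "marketing"},
]
relevant_roles = {"network architect", "data center operations", "CIO"}

qualified = [r for r in respondents if r["role"] in relevant_roles]
print(f"{len(qualified)} of {len(respondents)} respondents "
      f"({len(qualified) / len(respondents):.0%}) have a relevant role")
```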

Factor two is that nobody wants to look like a Luddite.  Call somebody up and ask whether they’re using the hottest technology in the IT or networking space, and see how many will say “Yes” even though they’ve never even considered it.  I remember a survey done by Bellcore on ATM (remember that?) which asked whether people were using it.  They were surprised when the trial-audit run of the questions revealed almost 70% penetration of ATM, a technology then in its infancy.  I asked to see the transcripts, and in almost every case the respondents asked what ATM stood for.  That’s a bad enough sign, of course, but when the acronym was decoded, the most popular comment was “Yes, I use 9600 bps ATM.”  Obviously, they got no further than “asynchronous” and were talking about modem technology.

A more recent example in the Ethernet space shows this issue is still common.  A publication wanted to see who was using 40G Ethernet in their data centers, and came back with the astounding finding that 40% of their users reported they were.  At the time, no 40G products were even on the market.  Hey, ask me anything, their users seemed to be saying; I’m on the leading edge of tech!

Factor three is that even the savviest enterprises are unable to project future spending trends.  Over a ten-year period, I correlated what my survey base told me about their plans for technology deployment with what was actually happening three years later.  By the halfway point of that decade, there had ceased to be any statistical correlation between what they said they would do and what they actually did.
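
In sketch form, that check is just a correlation between stated plans and later reality.  The data below is fabricated for illustration, and statistics.correlation requires Python 3.10 or later:

```python
# Correlate stated deployment plans with actual outcomes three years later.
# Figures are fabricated; statistics.correlation needs Python 3.10+.
from statistics import correlation

planned = [100, 250, 80, 400, 150, 300]  # hypothetical planned spend per firm
actual  = [ 20, 310, 75,  60, 140,  90]  # hypothetical spend three years on

r = correlation(planned, actual)  # Pearson's r
print(f"Pearson r = {r:.2f}")  # near zero means plans don't predict outcomes
```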

Any enterprise CIO will tell you that budgeting focuses on the coming quarter, then the current year; beyond the year-ahead budget planning done early in a given year, they don’t look ahead much.  The business can do little with far-future forecasts, so it doesn’t produce them.  Might a CIO or other senior IT type speculate on the future?  Sure, but based on my assessments, with little success.

Network operators have much longer capital cycles, but they’re not much better at forecasting.  A good metric to prove that point: for the last 30 years, the percentage of lab trials that turned into actual deployments of new technologies has hovered around 15%.  I’ve seen many technologies that were put into trials, even field trials and limited production deployments, and still failed to gain any significant market traction.  Adoption doesn’t mean success; the scale of adoption determines that.

Which brings me to the final point: even where statistics are correct, they can be highly misleading.  The best example of this is the tendency of research reports to talk about the number of customers who have adopted something rather than how much was actually consumed.

First, if every prospective buyer purchased a single widget, would that constitute 100% adoption?  I’d bet at least some research would claim that, even though the addressable widget market might be thousands of units per buyer.  The best information on adoption would compare current purchases or usage with a projection of either the expected current market or the total addressable market.
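
A quick worked example, with hypothetical numbers, shows how far “adoption” and actual consumption can diverge:

```python
# "Adoption" vs. consumption: two views of the same hypothetical market.

buyers = 1_000
buyers_who_bought_any = 1_000        # every buyer bought at least one widget
units_sold = 5_000
addressable_units_per_buyer = 2_000  # assumed total addressable market

adoption_rate = buyers_who_bought_any / buyers
unit_penetration = units_sold / (buyers * addressable_units_per_buyer)

print(f"Adoption: {adoption_rate:.0%}")             # 100%, the headline number
print(f"Unit penetration: {unit_penetration:.2%}")  # 0.25%, the real story
```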

Second, the market tends to jump on a technology concept that got good ink, widening the definition of a term or product scope far beyond the original.  IoT is a good example.  When the concept was introduced, it was about “public sensors on the Internet” available for general use.  I just saw an ad for a home automation product introduced at CES, and it fit a niche that was already served when the IoT term was introduced.  Clearly that niche wasn’t, at the time, considered “IoT”, yet the product was ballyhooed as an IoT advance.  Bracket creep strikes again!

Everyone loves data, data that can be presented as a factual foundation for business decisions.  Reports, surveys, and predictions are likeable data.  They’re just not particularly accurate, which means that if your goal is to get the right answer, you may have to spend more time looking for it than you’d like, if you can get it at all.

CIMI Corporation used to produce syndicated research reports, based on a survey base of 277 users and 77 network operators, plus a computer model designed to predict the way buyers made purchase decisions.  The numbers were pretty good in accuracy terms, but who wants a forecast of a million dollars in sales in Year One when you can get one ten or a hundred times that?  I got out of that business in the ‘80s.

It’s still possible to get reliable data these days, and some of the published research aligns with my own model, which I still use, along with data people send me, to generate my commentary on market trends.  It’s not easy, though, and there are enough factors muddying the waters that you should take every number you see with a grain of salt, mine included.  Only predictions that turn out to be true, or surveys proven to reflect real conditions, will give you a true picture, so you have to gravitate to sources that give them to you… if you want the truth.