Why Polling and Surveys Often Fail

Probably the only election topic most people agree on is that the polls got it wrong.  For two presidential elections in a row, we’ve had major failures of polls to predict outcomes, so it’s no surprise that people are disenchanted with the process.  What’s more surprising is that many people don’t realize that all surveys are as inherently flawed as polling is.  If getting a view of the future is important to you, it should be just as important to understand why it usually doesn’t work out well.

I started doing surveys back in 1982; a big survey was in fact the first source of revenue for CIMI Corporation.  I had to identify 300 large users of digital data services (TDM DDS and T1, in those days) and get information on their use and issues.  The project worked fine, and because it had been so difficult to find people to survey, I kept in touch with the survey base to keep them engaged for future projects.  To do that, I sent them questions twice a year on various network topics, and by 1997 I had a nice baseline of predictions on future network decisions.

Nice, but increasingly wrong.  In fact, the data showed that by 1997 there was essentially no statistical correlation between what my surveyed users said they were planning to do with their networks and equipment and what they actually did.  Obviously, that was a major disappointment for me, but it did get me involved in modeling buyer decisions based on more objective metrics rather than asking buyers to guess.
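To show what I mean by “no statistical correlation”, here’s a minimal Python sketch of the kind of check involved; the numbers and column names are invented for illustration, not taken from my surveys.  You line up what each respondent said they planned to do against what they actually did, and test whether the relationship is distinguishable from noise.

```python
# Hypothetical sketch: do stated plans predict later actions?
# The data and column names are illustrative, not the original survey data.
import pandas as pd
from scipy.stats import pearsonr

# Each row is one surveyed organization: what they said they planned to spend
# on a technology, and what they actually spent when we followed up later.
responses = pd.DataFrame({
    "planned_spend": [100, 250, 0, 80, 300, 50, 120, 40],
    "actual_spend":  [20, 40, 150, 10, 60, 200, 30, 90],
})

r, p = pearsonr(responses["planned_spend"], responses["actual_spend"])
print(f"correlation r = {r:.2f}, p-value = {p:.2f}")
# An r near zero with a large p-value means the stated plans carried
# essentially no information about what was actually done.
```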

That’s the first key truth about surveys.  You can ask somebody something objective and observable that’s within their own scope of responsibility, but don’t ask them about planned future actions or about things outside their own areas.  Few people or organizations have any skill at figuring out what they’ll do much more than a year out, and most have their heads down, focused on their own jobs.

The next key truth emerges from another set of experiences.  First, I was asked by Bellcore to look over a survey they’d run a trial of, because there were some extremely troubling responses.  I did, and what I told them was that the survey presumed the person being surveyed was a specialist in the technology being asked about, but it never qualified them as one.  I suggested that, instead of asking whether their company “used ATM”, they ask what speed the ATM connection was running at.  When they did, the great majority said they were using “9600 bps ATM”, obviously mistaking asynchronous modem operation for Asynchronous Transfer Mode, which was what Bellcore actually wanted to know about.

Second, a major network publication approached me about a survey they’d done, and as in my first example they were concerned by some of the results.  I looked things over, and sure enough, 33 percent of the survey respondents reported they were using gigabit Ethernet.  At the time there were no gigabit Ethernet products on offer; the standard wasn’t even fully defined.  What happened here?  What I’ve found in my own survey experience is that people want to sound smart and involved when they’re surveyed.  Ask them about a new technology and about a third will give you the answer they think makes them look good, even if the situation they describe is impossible.

The third key truth is what I’ll call the “diamond in the rough” paradox.  A big organization has tens of thousands of employees, and in the IT part of the company alone there are hundreds of people.  What do you suppose the chances are that all of these people know everything the company is doing or planning?  Zero.  And what percentage of surveys effectively target their questions to the people who are likely to know the answers?  Also zero.

A big network equipment vendor had a well-known analyst firm do a survey and report for them, one that cost a boatload of dollars.  As in other cases, they were troubled by what they saw in the results and asked me to audit the survey and results against my own data and suggest what might have gone wrong.  The problem was that the survey did in fact ask the right companies, but made no attempt to engage the right people.  As a result, fully three-quarters of those who responded had nothing to do with the technology at issue, and their answers to the questions were totally irrelevant.

Sometimes the survey people even know that.  While I was auditing a survey of LAN use in the early days, I happened to hear one of the survey people talking to a target.  “Do you use Ethernet or Token Ring?” they asked, and apparently the party they had called had no idea.  The survey person helpfully said, “Well, feel around the back of your computer.  Do you feel a fat cord?”  Apparently the response was affirmative, and so the user was put down as an Ethernet user (that’s what Ethernet was delivered on at the time).  In fact, the cord was most likely the power cord.  When I asked the person doing the survey about it, they said that almost nobody knew the answer, so they’d been given that follow-up to use.  Guess how good those results were!

Then there’s the “bulls**t bidding war” problem.  People like getting their names in print.  A reporter calls an analyst firm and says, “What do you think the 2021 market for 5G is?  XYZ Corp says it’s a billion dollars.”  The firm being called knows the reporter wouldn’t be making the call unless they were interested in a bigger number, so they say, “We have it as two-point-two billion.”  This goes on until nobody will raise the bid, and the highest estimate gets into the article.  The same problem happens when reporters call users; “When do you expect to be 100% cloud?” or “What percentage of your computing is going to the cloud?” will generate shorter timelines and higher percentages.

I know there are a lot of people who believe in surveys, and I’m among them, as long as the survey is designed and conducted properly.  I only survey people I know well enough to know their skills and roles, and with whom I’ve had a relationship for some time.  I try to use a stable base of people, and if somebody leaves, I get them to introduce me to their replacement.  But even with that, most of the questions answered in survey articles and reports are questions I’d never ask, because even the best survey connections couldn’t get a good answer to them.  Most businesses respond to market trends that are largely unpredictable and most often tactical, so why ask them what they plan to be doing in three or five years?

This is why I’m a fan of using data from the past to forecast the future.  If you’re careful about what data you collect and how it’s used, past measurements and the trends they expose are more authoritative than anything you could get by asking somebody for their plans.  They can also tell you how the mass of companies in the area you’re interested in have responded to changes, and that’s a better measure of the future than asking people to guess.

The data on IT spending growth cycles I blogged about HERE is an example.  I found a cyclical pattern in the growth of IT spending, both year-over-year and smoothed over five years, and both on its own and compared with GDP growth over the same period.  I found notable new technology advances in the early part of each cycle, and my own surveys showed that in those cycles, companies reported that a larger portion of their IT spending came from new projects rather than budgeted modernization.  All of that suggests a productivity driver for each wave.  It was particularly significant that after the last wave, there was never a time when project spending (justified by new benefits) contributed significantly to IT spending overall.
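For readers who want to see the mechanics, here’s a small Python sketch of the kind of analysis that paragraph describes, using synthetic numbers in place of the real IT-spending and GDP series: it computes year-over-year growth, a five-year smoothed version, and growth relative to GDP.

```python
# Sketch of the cycle analysis described above, on synthetic data; the real
# work used actual IT spending and GDP series, which are not reproduced here.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
years = pd.Index(range(1980, 2021), name="year")

# Synthetic series: GDP grows steadily; IT spending grows in waves
# (the sine term stands in for the cyclical "project" component).
gdp_growth = rng.normal(0.05, 0.01, len(years))
it_growth = (0.05 + 0.03 * np.sin(np.linspace(0, 4 * np.pi, len(years)))
             + rng.normal(0, 0.01, len(years)))

df = pd.DataFrame({
    "gdp": 100 * np.cumprod(1 + gdp_growth),
    "it":  10 * np.cumprod(1 + it_growth),
}, index=years)

growth = df.pct_change()                 # year-over-year growth
smoothed = growth.rolling(5).mean()      # five-year smoothed growth
relative = growth["it"] - growth["gdp"]  # IT growth versus GDP growth

print(smoothed.tail())
print("Years with IT growth above GDP:", int((relative > 0).sum()),
      "of", int(relative.notna().sum()))
```

The real series are what make the pattern meaningful, of course; the sketch only shows the shape of the calculation.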

You can get government data on an amazing number of things.  Want to know what industries are most likely to use cloud computing, adopt IoT, or rely on mobile workers and WFH?  Massage the government data and you can find out.  Where might empowerment of workers focus, either by industry or by geography?  It’s there.  Where will high-speed broadband be profitable, and so available?  That’s there too.  Best of all, careful use of the data generates results that tend to hold up.  My demand-density work and IT-cycles work started fifteen years ago, and the predictions made from that work in its early days have come true most of the time.

The final issue with surveys and polls is that they tend to show what whoever is paying for them wants to see.  I used to get RFPs from some of the big “market report” firms, and they’d be in a form like “Develop a report validating the two-billion-dollar-per-year market for ‘x’”, which is hardly a request for unbiased analysis.  You could try asking whether a given survey is really unbiased, and whether all those who responded were fully qualified.  Good luck with that.  A better test might be to get a quote on a report of your own.  If your company can go to a firm and request research to back up a position, you can be sure others can do the same, and you should therefore distrust the information that comes from these sources.  I stopped selling syndicated research because people didn’t want to pay for the truth.

No survey is, or can be, perfect.  No model will always get things right.  There is no absolutely reliable way to know what a given company or person or group thereof will do, even months in the future.  There are just ways that are worse than others, and the best you can do is latch on to one of the less-bad methods, and if that’s not good enough, opt for flexibility in your planning.  So the moral here is: don’t tell me what your survey showed, or what someone else’s survey showed, or what you project to be the truth, because I don’t believe it.  And, of course, you’re free not to believe me either.