Posts Tagged ‘Sampling’

Some business managers and marketing executives mistakenly believe that “big data” will deliver better insight because of the sheer volume of data now at our disposal. Now we just need the statisticians, the computing power, and the analytics software to sift through it all, right? Not so. The truth is, for most purposes you don’t need a lot of data. You need a small random sample of data. (more…)
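Here is a minimal simulation sketch of that claim, using an invented population of one million people with a known 37% incidence rate (the numbers are illustrative, not from any study). A random sample of 1,000 pins down the population value nearly as precisely as a sample one hundred times larger:

```python
import random

# Toy population of one million people, 37% of whom hold the attitude of
# interest. The "true" rate is known only because we constructed it.
random.seed(42)
TRUE_RATE = 0.37
population = [1 if random.random() < TRUE_RATE else 0 for _ in range(1_000_000)]

for n in (1_000, 10_000, 100_000):
    sample = random.sample(population, n)
    estimate = sum(sample) / n
    # Conventional 95% margin of error for a proportion: 1.96 * sqrt(p(1-p)/n)
    moe = 1.96 * (estimate * (1 - estimate) / n) ** 0.5
    print(f"n = {n:>7,}: estimate = {estimate:.3f} +/- {moe:.3f}")
```

Collecting one hundred times more data shrinks the margin of error only tenfold, from about ±3 points to about ±0.3, which is why a modest random sample is usually enough.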
With all our excitement over the last few months about the accuracy of online polling during the election season—substantially outperforming “gold standard” telephone research—there was no time to share ESOMAR’s September 2012 updated guide to purchasing online sample. The guide consists of 28 questions that every purveyor of online sample should answer, publish, and make available to every buyer of its products and services. The guide has been updated to reflect rapid changes in online sampling over the last couple of years, including the use of routers, real-time sampling, and blended sample from multiple sources.
Before purchasing online sample for your next research survey, be sure that you know the answers to these 28 questions: (more…)
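One of the newer issues the guide covers is blended sample. As a hypothetical sketch of why blending demands scrutiny, the snippet below deduplicates respondent records arriving from two panel sources; the field names are invented, and real vendors rely on techniques such as digital fingerprinting rather than a simple hashed identifier:

```python
def dedupe_blended_sample(sources: dict[str, list[dict]]) -> list[dict]:
    """Merge respondent lists from several panel sources, keeping the first
    occurrence of each hashed identifier and tagging its origin."""
    seen = set()
    blended = []
    for source_name, records in sources.items():
        for record in records:
            key = record["email_hash"]  # hypothetical identifier field
            if key in seen:
                continue  # drop the cross-panel duplicate
            seen.add(key)
            blended.append({**record, "source": source_name})
    return blended

panels = {
    "panel_a": [{"email_hash": "h1"}, {"email_hash": "h2"}],
    "panel_b": [{"email_hash": "h2"}, {"email_hash": "h3"}],  # h2 joined both panels
}
print(dedupe_blended_sample(panels))  # h2 is kept once, attributed to panel_a
```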
By elephants, we mean Republicans. Or maybe you have too many Democrats. Maybe it keeps going back and forth, which is the problem that Gallup sometimes has. In the spirit of learning all we can from election season polling, this week we focus on whom to include (or exclude) in your research, analysis, and market projections.
The issue is showcased right now as political polls attempt to measure voter preference and predict the election outcome. Is voter preference really as volatile and open to persuasion as the polls sometimes suggest? Probably not. A 2004 research article in Public Opinion Quarterly carefully documented that much of the volatility in Gallup’s polls results from how they screen respondents and weight their data. (more…)
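To see how screening and weighting alone can create apparent volatility, consider the illustrative sketch below. It is not the article’s actual method, and every number is invented: the same raw interviews, weighted to two different assumptions about the party mix of the electorate, produce toplines more than four points apart.

```python
# Hypothetical unweighted support for candidate A within each party group.
raw = {
    "rep": {"n": 280, "support_a": 0.92},
    "dem": {"n": 300, "support_a": 0.07},
    "ind": {"n": 220, "support_a": 0.48},
}

def weighted_topline(party_mix: dict[str, float]) -> float:
    """Reweight each party group to an assumed electorate mix."""
    return sum(party_mix[group] * raw[group]["support_a"] for group in raw)

# Two different assumptions about the electorate, one week apart.
mix_week1 = {"rep": 0.36, "dem": 0.34, "ind": 0.30}
mix_week2 = {"rep": 0.31, "dem": 0.39, "ind": 0.30}

print(f"Week 1 topline: {weighted_topline(mix_week1):.1%}")  # 49.9%
print(f"Week 2 topline: {weighted_topline(mix_week2):.1%}")  # 45.7%
# A four-point swing driven entirely by the weighting assumption, with no
# respondent changing their mind.
```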
A mistake often made by both professional and do-it-yourself researchers is letting a survey sit in the field without actively monitoring it. Once we design a survey and put it out there for people to respond, we just wait patiently (or get busy on another project) until we have data for analysis, right? But collecting data is never straightforward. It nearly always requires daily adjustments and decisions from the most senior members of a research team.
So at Versta Research, all fieldwork we conduct or oversee requires a daily and detailed fieldwork report that gives us visibility into all kinds of technical and conceptual issues that might affect the quality and outcomes of research. Figure 1 shows an example of a report; nothing fancy, but full of crucial data. As we review these reports, we watch for several warning signs and intervene where needed:
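For illustration only, here is a minimal sketch of how a few such warning-sign checks might be automated; the field names and thresholds are hypothetical, not taken from Versta’s actual reports:

```python
from dataclasses import dataclass

@dataclass
class DailyField:
    day: int
    invites_sent: int
    completes: int
    median_minutes: float  # median time to complete the survey
    dropout_rate: float    # share who started but abandoned

def warning_signs(report: list[DailyField]) -> list[str]:
    """Scan a daily fieldwork report and flag days that need intervention."""
    flags = []
    for d in report:
        if d.invites_sent and d.completes / d.invites_sent < 0.02:
            flags.append(f"Day {d.day}: completion rate below 2% of invites")
        if d.median_minutes < 5.0:
            flags.append(f"Day {d.day}: suspiciously fast completes (speeders?)")
        if d.dropout_rate > 0.30:
            flags.append(f"Day {d.day}: high abandonment; check length and logic")
    return flags

report = [
    DailyField(day=1, invites_sent=5000, completes=180, median_minutes=12.4, dropout_rate=0.18),
    DailyField(day=2, invites_sent=5000, completes=60, median_minutes=4.2, dropout_rate=0.35),
]
for flag in warning_signs(report):
    print(flag)  # day 2 trips all three checks
```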
During a presidential election year there is no escaping the flurry of public opinion polling and the intense scrutiny that surveys get from the media. But love it or hate it, there are excellent reasons to pay close attention to this year’s political polling.
Way back in 1944, W. Edwards Deming published an article in the American Sociological Review that could be required reading for anybody who does research today. He outlined all the potential (and, unfortunately, common) sources of error in survey research.
Apparently our contemporary obsession with sample sizes, random samples, response rates, and margins of error is not so new. In outlining all sources of error, Deming wanted to emphasize that “sampling errors, even for small samples, are often the least of the errors present.”
So despite some old-fashioned language and defunct technologies (Versta Research has never fielded a survey via telegraph!), we feel it is worth reproducing here what Deming called the thirteen factors “affecting the ultimate usefulness of a survey,” as all of them apply as much today as they did 68 years ago:
A couple of weeks ago we presented new data showing that response rates continue to decline. You can now expect that a typical, rigorously executed phone survey will yield a response rate in the single digits.
Scientific evidence over the last decade has shown that high response rates do not necessarily yield more accurate surveys. In fact, it turns out that high response rates can actually hurt the accuracy of surveys.
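To see how that can happen, consider a stylized simulation; it is not the design of the published studies, and every number is invented. If engaged customers are both more satisfied and easier to convert with extra follow-up effort, then pushing the response rate from 9% to 20% moves the estimate further from the truth:

```python
# Hypothetical population: 30% "engaged" customers with 80% satisfaction,
# 70% others with 40% satisfaction.
ENGAGED_SHARE = 0.30
SAT_ENGAGED, SAT_OTHERS = 0.80, 0.40
true_satisfaction = ENGAGED_SHARE * SAT_ENGAGED + (1 - ENGAGED_SHARE) * SAT_OTHERS

def survey_estimate(rr_engaged: float, rr_others: float) -> tuple[float, float]:
    """Return (overall response rate, estimated satisfaction) given
    group-specific response rates."""
    resp_engaged = ENGAGED_SHARE * rr_engaged
    resp_others = (1 - ENGAGED_SHARE) * rr_others
    overall_rr = resp_engaged + resp_others
    estimate = (resp_engaged * SAT_ENGAGED + resp_others * SAT_OTHERS) / overall_rr
    return overall_rr, estimate

print(f"True satisfaction: {true_satisfaction:.1%}")  # 52.0%
# Extra effort converts mostly the eager, engaged group.
for label, rates in [("Low effort", (0.10, 0.08)), ("High effort", (0.40, 0.12))]:
    rr, est = survey_estimate(*rates)
    print(f"{label}: response rate {rr:.0%}, estimate {est:.1%}")
# Low effort:  response rate  9%, estimate 54.0%
# High effort: response rate 20%, estimate 63.5% -- a bigger bias
```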
Survey response rates are now staggeringly low—in the single digits. A typical response rate for a relatively high-budget, carefully executed phone survey is merely 9%, down from 36% just fifteen years ago. Here are the numbers from research conducted earlier this year by the Pew Research Center:
If you want to throw money at a survey and try really hard to boost your response rate (the high-effort survey shown in the chart above), you can likely get it up to 20% or 25%. But you will need to:
How big a sample do you really need? A recent article in the New York Times cited the following statistics:
- A small Voice of the Customer (VoC) research company called Mindshare Technologies collects satisfaction data from 175,000 respondents every day. That’s more than 60 million in a year.
- ForeSee, a small customer experience analytics firm, fielded 15 million surveys in 2011.
These numbers are believable. I get a pop-up survey from ForeSee at least two or three times a week.
And it is absurd. Granted, these companies (and hundreds of other similar firms) are collecting surveys for multiple clients. But almost certainly, nobody needs to collect that much survey data from that many survey respondents. Why not? (more…)
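Here is the back-of-envelope arithmetic behind that claim. The margin of error for a proportion shrinks only with the square root of the sample size, so 15 million respondents buy almost nothing that a few thousand do not already deliver:

```python
from math import sqrt

def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
    """Worst-case (p = 0.5) 95% margin of error for a simple random sample."""
    return z * sqrt(p * (1 - p) / n)

for n in (400, 1_000, 10_000, 15_000_000):
    print(f"n = {n:>10,}: +/-{margin_of_error(n):.2%}")
# n =        400: +/-4.90%
# n =      1,000: +/-3.10%
# n =     10,000: +/-0.98%
# n = 15,000,000: +/-0.03%
```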
Many of us in marketing research have been deploying web surveys for more than ten years, and they are now, by far, the dominant mode of data collection in our industry. But our techniques and methods are an amalgam of practices adapted from other data collection modes, learned in part through trial and error, and taught to others through channels more akin to oral traditions. So it is helpful when our academic colleagues manage to document and codify the art and science of what we do. (more…)