Archive for the ‘Sampling’ Category

Some business managers and marketing executives mistakenly believe that “big data” will deliver better insight because of the sheer volume of data now at our disposal. Now we just need the statisticians, the computing power, and the analytics software to sift through it all, right? Not so. The truth is, for most purposes you don’t need a lot of data. You need a small random sample of data. (more…)
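The claim above that a small random sample beats sheer volume is easy to check for yourself. Here is a minimal Python sketch with an invented population (one million customers, 30% of whom are "satisfied" — made-up numbers, not from any real study) showing that a simple random sample of just 400 recovers the true proportion to within a few points:

```python
import random

random.seed(42)

# Hypothetical population of 1,000,000 customers; exactly 30% are "satisfied".
population = [1] * 300_000 + [0] * 700_000

# A small simple random sample of 400...
sample = random.sample(population, 400)
estimate = sum(sample) / len(sample)

print(f"True proportion:  0.300")
print(f"Sample estimate:  {estimate:.3f}")  # typically within ~4.5 points of 0.300
```

The other 999,600 records add almost nothing: the precision of the estimate depends on the sample size and the randomness of the draw, not on what fraction of the population you surveyed.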
I was the kid who skipped recess to help grade quizzes; the graduate student who delayed getting a degree because it meant the end of school; the professor who told students that he was now in the 33rd grade and still loved school.
Even now I can’t resist reviewing great opportunities for more coursework and learning that can help Versta and its clients do smarter work.
A number of top universities offer condensed summer coursework and seminars on topics critical to market research. Knowledge and innovation in these areas advance quickly, so staying on top of this learning is essential. Here are some that we highly recommend: (more…)
With all our excitement over the last few months about the accuracy of online polling during the election season—substantially outperforming “gold standard” telephone research—there was no time to share ESOMAR’s September 2012 updated guide to purchasing online sample. The guide consists of 28 questions that all purveyors of online sample should answer, publish, and make available to every buyer of their products and services. It has been updated to reflect rapid changes in online sampling over the last couple of years, including the use of routers, real-time sampling, and blended sample from multiple sources.
Before purchasing online sample for your next research survey, be sure that you know the answers to these 28 questions: (more…)
By elephants, we mean Republicans. Or maybe you have too many Democrats. Maybe it keeps going back and forth, which is the problem that Gallup sometimes has. In the spirit of learning all we can from election season polling, this week we focus on whom to include (or exclude) in your research, analysis, and market projections.
The issue is showcased right now as political polls attempt to measure voter preference and predict the election outcome. Is voter preference really as volatile and open to persuasion as the polls sometimes suggest? Probably not. A 2004 research article in Public Opinion Quarterly carefully documented that much of the volatility in Gallup’s polls results from how they screen respondents and weight their data. (more…)
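To see how screening and weighting alone can move a poll, here is a minimal sketch with invented numbers (not taken from the POQ article or from any actual Gallup data). The underlying opinions never change; only the assumed party mix does, and the topline swings by four points:

```python
# Hypothetical poll of 1,000 respondents who got through the likely-voter screen.
# All counts and percentages below are invented for illustration.
sample_counts = {"Dem": 500, "Rep": 400, "Ind": 100}     # raw sample mix
support_for_A = {"Dem": 0.90, "Rep": 0.10, "Ind": 0.50}  # support within each group

# Unweighted estimate: just take the sample as it fell.
n = sum(sample_counts.values())
unweighted = sum(sample_counts[g] * support_for_A[g] for g in sample_counts) / n

# Weighted estimate: rebalance to an assumed electorate of 35% D / 35% R / 30% Ind.
targets = {"Dem": 0.35, "Rep": 0.35, "Ind": 0.30}
weighted = sum(targets[g] * support_for_A[g] for g in targets)

print(f"Unweighted support for A: {unweighted:.3f}")  # 0.540
print(f"Weighted support for A:   {weighted:.3f}")    # 0.500
```

If the screen or the weighting targets shift from one wave of polling to the next, the topline number shifts with them, even when no voter has changed his or her mind.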
During a presidential election year there is no escaping the flurry of public opinion polling and the intense scrutiny that surveys get from the media. But love it or hate it, there are excellent reasons to pay close attention to this year’s political polling.
Way back in 1944, W. Edwards Deming published an article in the American Sociological Review that could be required reading for anybody who does research today. He outlined all the potential (and unfortunately, common) sources of error in survey research.
Apparently our contemporary obsession with sample sizes, random samples, response rates, and margins of error is not so new. In outlining all sources of error, Deming wanted to emphasize that “sampling errors, even for small samples, are often the least of the errors present.”
So despite some old-fashioned language and defunct technologies (Versta Research has never fielded a survey via telegraph!), we feel it is worth reproducing here what Deming called the thirteen factors “affecting the ultimate usefulness of a survey,” as all of them apply as much today as they did 68 years ago:
Survey response rates are now staggeringly low—in the single digits. A typical response rate for a relatively high-budget, carefully executed phone survey is merely 9%, down from 36% just fifteen years ago. Here are the numbers from research conducted earlier this year by the Pew Research Center:
If you want to throw money at a survey and try really hard to boost your response rate (the high-effort survey shown in the chart above), you can likely get up to 20% to 25%. But you will need to:
Cost matters when you choose a sample or panel provider for your survey because there are good panels and bad panels. Bad panels provide survey respondents at cheap prices. But they do a lousy job managing and screening their members. Not surprisingly, a good portion of the data you get from bad panels will likely be lousy.
A recent study entitled “Dirty Little Secrets of Online Panel Research” by one of our industry colleagues described and documented lousy panel management practices of some companies. Mystery shoppers joined and participated in online surveys offered by nearly all of the leading panel companies that most of us rely on. Here are some of the “worst practices” they uncovered: (more…)
How big of a sample size do you really need? A recent article in the New York Times cited the following statistics:
- A small Voice of the Customer (VoC) research company called Mindshare Technologies collects satisfaction data from 175,000 respondents every day. That’s over 60 million a year.
- ForeSee, a small customer experience analytics firm, fielded 15 million surveys in 2011.
These numbers are believable. I get a pop-up survey from ForeSee at least two or three times a week.
And it is absurd. Granted, these companies (and hundreds of other similar firms) are collecting surveys for multiple clients. But almost certainly, nobody needs to collect that much survey data from that many survey respondents. Why not? (more…)
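Part of the answer lies in the arithmetic of sampling error. The standard 95% margin of error for a proportion shrinks with the square root of the sample size, so precision gains vanish quickly. A short sketch (textbook formula, not any particular firm's method):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a proportion from a simple random sample,
    using the worst case p = 0.5 by default."""
    return z * math.sqrt(p * (1 - p) / n)

for n in [400, 1_000, 10_000, 1_000_000, 60_000_000]:
    print(f"n = {n:>10,}: \u00b1{margin_of_error(n) * 100:.3f} points")
# n =        400: ±4.900 points
# n =      1,000: ±3.099 points
# n =     10,000: ±0.980 points
# n =  1,000,000: ±0.098 points
# n = 60,000,000: ±0.013 points
```

Going from 1,000 respondents to 60 million tightens the margin of error by about three points, a gain that is meaningless for nearly any business decision, while multiplying the cost and respondent burden sixty-thousand-fold.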
Many of us in marketing research have been deploying web surveys for over ten years, and web surveys are, by far, the dominant mode of data collection in our industry nowadays. But our techniques and methods are an amalgam of practices adapted from other data collection modes, learned in part through trial and error, and taught to others through channels more akin to oral traditions. So it is helpful when our academic colleagues manage to document and codify the art and science of what we do. (more…)