February 1st, 2017
Suppose your colleague left her job tomorrow and you needed to recreate, and double-check, all of the work she produced for a research study she just completed. You probably couldn’t do it.
Think of every step along the way: exactly how the survey was programmed, how data were cleaned, why certain respondents and not others were cut, how data were tabulated and bucketed into groups, how analyses were generated, how a segmentation algorithm was finalized, how charts were populated, and how conclusions were formulated.
Ideally, all of this should be documented. Indeed, the most unsexy, and therefore neglected, but critically important trend in marketing and research analytics today is this: reproducible research.
“The term reproducible research refers to the idea that the ultimate product of research is the paper along with the laboratory notebooks and full computational environment used to produce the results in the paper such as the code, data, etc. that can be used to reproduce the results and create new work based on the research.”
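In practice, this means every step, from raw data to final numbers, lives in code that anyone can rerun, rather than in someone’s memory. Here is a minimal sketch of the idea in Python (the data, column names, and exclusion rules are hypothetical illustrations, not rules from any actual study):

```python
import statistics

def clean(responses):
    """Document every exclusion rule in code, not in someone's memory."""
    kept = []
    for r in responses:
        if not r.get("rating"):        # drop blank or missing answers
            continue
        rating = int(r["rating"])
        if not 1 <= rating <= 5:       # drop out-of-range values
            continue
        kept.append(rating)
    return kept

def summarize(ratings):
    """Compute the final numbers from the cleaned data."""
    return {"n": len(ratings), "mean": statistics.mean(ratings)}

raw = [{"rating": "4"}, {"rating": ""}, {"rating": "9"}, {"rating": "2"}]
print(summarize(clean(raw)))
```

Because the cleaning rules are code, a colleague can rerun the script on the same raw file and get the same result, which is the whole point.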
How does one do it? Here is a handy list of ten simple rules to follow for reproducible research, recently published by a team of collaborating academic researchers from Norway and the U.S.: Read the rest of this entry »
January 18th, 2017
Reflecting a shift in how applied research is being organized and used in businesses today, 2017 marks the end of two important market research industry groups, and the beginning of a new one. Gone is the Council of American Survey Research Organizations (CASRO) founded 37 years ago. Gone is the Marketing Research Association (MRA) founded 60 years ago. The two have merged into a new group called the Insights Association.
At Versta Research we were one of many who voted in favor of the merger. But I’m a wee bit down on the name “Insights Association.” Read the rest of this entry »
January 11th, 2017
Almost certainly more than you think. If you are purchasing access to survey respondents from panel providers, or from survey software providers, or from any other source beyond your own list (and even then, how carefully are you controlling access to your survey?) you are probably getting fraudulent data from automated bots or from survey-taker farms.
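Screening for this kind of fraud usually starts with simple programmatic checks. Two common first-pass flags are “speeders” (respondents who finish implausibly fast) and “straightliners” (respondents who give the same answer to every item in a grid). A hedged sketch, with illustrative cutoffs that are assumptions, not a standard:

```python
def is_speeder(duration_seconds, median_seconds, cutoff=0.3):
    """Flag respondents who finished in under 30% of the median survey time."""
    return duration_seconds < cutoff * median_seconds

def is_straightliner(grid_answers):
    """Flag respondents who chose the identical answer for every grid item."""
    return len(grid_answers) > 1 and len(set(grid_answers)) == 1
```

Neither flag proves fraud on its own; respondents tripping several such flags at once are the ones worth removing.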
We experienced this problem just last week, and here is what we learned: Read the rest of this entry »
January 4th, 2017
I hope the recent presidential election didn’t squash your belief that smart researchers can discern patterns in our collective social lives, and that we can make decent (if limited) predictions about what’s to come. Because here we go—Versta Research is going to make a few modest predictions about where market research is headed in 2017.
The overarching theme of our predictions this year is that none of them sound like research at all. Instead, they are about bringing other modes of thinking into our enterprise. Or, putting it the other way around: they are about extending our skills and influence into functional areas that need what we offer. Read the rest of this entry »
December 28th, 2016
Google Analytics is a great example of how powerful, or how utterly useless, data can be. If you let your analytics lead you around, searching and searching, trying every variation and data cut in hopes of landing on the “actionable insight” that Google always promises, then trust me: you will fail. But if you start with a specific question and then go to your analytics, you will succeed.
That’s what we do at the end of every year to learn more about what our customers and prospects have been focusing on in the preceding 12 months. We have published roughly 400 articles over the last several years. Which ones got the most attention and readership in 2016?
Of all 400 articles, here were the top six: Read the rest of this entry »
December 21st, 2016
What does it take to be a great market research vendor? That is the animating question behind Quirk’s top-read article in 2016, written by yours truly, your friends at Versta Research. We thought about all the annoying things our vendors do to us, and turned them into lessons for what we at Versta believe all researchers on the supplier side should be doing for their clients.
In case you missed it, here is a short summary of what we think it takes, laid out as Nine Habits of Great Market Research Vendors:
Read the rest of this entry »
December 14th, 2016
Data wonks have more fun than you may think. If you have not yet begun working with the R statistical program (which is mesmerizing, extremely powerful, hard to learn, but weirdly intuitive, and FREE) then here is a fun and relatively easy way to give it a test drive.
It is becoming a tradition that, at the end of every year, data wonks share holiday greetings with R code. They write elegant little snippets of programming syntax that generate pictures and greetings in R, like Christmas trees. It helps demonstrate some of R’s amazing capabilities in fun, engaging ways. And it gives others a way to jump in and start manipulating code for themselves.
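The shared greetings are written in R, so I won’t reproduce one here, but the spirit of the exercise, a few lines of code that draw a holiday picture, looks something like this hypothetical Python analogue:

```python
def tree(height=5):
    """Build an ASCII Christmas tree: a triangle of stars plus a trunk."""
    rows = [" " * (height - i) + "*" * (2 * i - 1) for i in range(1, height + 1)]
    rows.append(" " * (height - 1) + "|")  # the trunk
    return "\n".join(rows)

print(tree())
```

The R versions do the same thing with real plotting functions, which is what makes them worth trying as a first project.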
Ready to try it? Here is what you need to do: Read the rest of this entry »
December 7th, 2016
The thing I love about strategists is that they know exactly what they want. By the time they come to us, they have analyzed their business needs. They have figured out what kinds of data will help them make decisions. They know why they want research, how they are going to use it, and how they want to report it.
The problem is when strategists write questionnaires. Read the rest of this entry »
November 30th, 2016
Using surveys to “quiz” people about their knowledge (versus asking them about their attitudes or behaviors) is tricky. First, online surveys make it easy for people to look up answers they may not otherwise know. Second, people want to get quiz answers right, and so they are motivated to cheat.
This is the reason I usually advocate against quizzing respondents on factual questions. But newly published research gives me confidence that the problems with online cheating can be overcome. Read the rest of this entry »
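One simple countermeasure (a hypothetical illustration, not necessarily the method from the published research) is response timing: a correct answer that arrives only after an unusually long pause can be flagged as a possible web lookup.

```python
def flag_possible_lookup(is_correct, response_seconds, typical_seconds=15.0):
    """Hypothetical heuristic: a correct answer given after more than 4x the
    typical answering time may indicate the respondent looked it up online."""
    return is_correct and response_seconds > 4 * typical_seconds
```

A wrong answer is never flagged, since there is nothing to gain from looking up an answer and then entering the wrong one.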
November 23rd, 2016
One of the “best practices” for survey research I learned in graduate school and then early in my career is this: Survey measurement scales should have at least five points. Presumably that provides enough room for variation, and it provides a neutral midpoint as well.
But it is not true. Or, like all best practices, it is not always true. In market research, and especially in research designed for PR or other types of communications, four-point scales often work better, and here’s why.
Read the rest of this entry »