Newsletters

October 2017


Dear Reader,

Two days after we returned from an industry conference this month, a survey asking for our feedback arrived. It was broken: an incorrect skip pattern meant that half of the conference attendees were unable to evaluate two keynote sessions.

When even our top industry trade group falls on its face executing fieldwork, it sets a miserable example and highlights the urgent need for researchers to devote more effort to flawless fieldwork. In this newsletter, we offer Ten Tips for Flawless Fieldwork, in hopes that more of our colleagues will join us in tackling the ever-present risk of bad data in market research.

Other items of interest in this newsletter include:

   Grab Attention with a Research Infographic
   Reasons Customers Blow Off Your Surveys
   “Just Let Go” for Great Qualitative Interviews
   Responsive Surveys Go Way Beyond Mobile
   Avoid This Adherence Scale for Health Surveys
   Your Best Segmentation Tool Is a Salesperson
   AAPOR Says Trump-Clinton Polls Mostly Right
   Why Bigger Is Better for Numeric Rating Scales
   Dilbert's Boss: Focus Groups Are Not Reliable
   Don't Believe This Best Practice from Google Surveys
   Why Segmentation Is Sometimes Useless
   Your Margin of Error Is Bigger than You Think

We are also delighted to share with you:

   Versta Research in the News

… which highlights some of our recently presented work, conducted in partnership with IBM.

As always, feel free to reach out with an inquiry or with questions you may have. We would be pleased to consult with you on your next research effort.

Happy fall,

The Versta Team

 Ten Tips for Flawless Fieldwork

For many quantitative researchers, data collection is the last thing they want to do. Design, strategy, and analysis—that's where we bring high-level thinking and add value for business partners, right? The problem is that design, strategy, and analysis are worthless without good data. Data is foundational. Ensuring that a data foundation is rock-solid is one of the highest-value contributions we can make. But data collection is probably the weakest element of what most research groups do.

Versta Research recommends getting as close to data collection as you possibly can. Don't push it to the lowest tier of staff, or outsource it to the lowest bidder. Don't rely on technology solutions that promise simple and near-instantaneous results. Pretend you're the U.S. Census Bureau or the Bureau of Labor Statistics. Nearly their entire budget is devoted to rock-solid data collection, and they hire super smart people to oversee it.

 

Ensuring that a data foundation is rock-solid is one of the highest-value contributions we can make


As you get closer to your data collection, you will discover how devilishly difficult, and important, great fieldwork is. You will begin to appreciate (and begin to worry about!) how profoundly it affects your analysis and conclusions.

Ideally, as you get closer and more appropriately involved in the details of your fieldwork, you will begin to compile a list of best practices, tips, and tricks for flawless fieldwork, just like those we have developed and that we share with you here:

1. Hover over vendors. Wouldn't it be great if you could rely on data collection vendors—panels, call centers, technology providers, etc.—to take the reins on a project and not bother you until they deliver final data at the end? Don't do it. Most are not trained in the protocols of research, sampling, and data validation, so you have to watch and document everything they do. Ask for screenshots, and log every setting and every decision they make: sample sources, device restrictions, time-of-day restrictions, and so on. Even if you have a top-notch vendor you trust implicitly, hovering over them will ensure that their decisions align with exactly what you need.

2. Set sample quotas. It sounds obvious, but think beyond the obvious, and think beyond what you have specified as quotas in your research design. For example, even if gender and age seem tangential to the focus of your B2B study, you probably ought to set gender and age quotas. Why? Because data collection vendors will fill up quotas in lazy ways that are easiest for them. They turn on the hose and start filling your study with women in their 50s. If you don't set quotas to ensure balanced sampling on secondary characteristics, you will get crazy skews that mess up your study.

3. Build in validation. In the old days of phone and in-person data collection, a supervisor would reach out randomly to 10% of survey respondents to confirm they really participated, and that interviewers were not falsifying data. These days it is the respondents, robots, or panels themselves that offer up bad data. So you need to build in data quality and consistency checks to catch them. Scour open ends for key-smashing and weird responses. Cross tabulate data while in field to identify random and unlikely combinations of answers. Avoid the temptation to p-hack (also known as cherry picking) by doing this while data collection is in field, not when you are analyzing your data.
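To make those consistency checks concrete, here is a minimal Python sketch of the kind of open-end scan described above. The heuristics (a low vowel ratio and long runs of one repeated character) and the function name are our own illustrations, not a standard; tune the thresholds to your own data.

```python
import re

def flag_open_end(text, min_len=3):
    """Heuristic gibberish check for open-ended survey responses.

    Flags likely key-smashing: alphabetic content with almost no
    vowels, or a long run of one repeated character.
    """
    t = re.sub(r"[^a-z]", "", text.lower())
    if len(t) < min_len:
        return False  # too short to judge automatically
    vowel_ratio = sum(c in "aeiou" for c in t) / len(t)
    if vowel_ratio < 0.15:            # e.g. "asdfghjkl"
        return True
    if re.search(r"(.)\1{4,}", t):    # e.g. "aaaaaa"
        return True
    return False

responses = ["asdfghjkl", "The sessions ran too long", "zzzzzzzz"]
flags = [flag_open_end(r) for r in responses]  # [True, False, True]
```

A pass like this is a screen, not a verdict: review flagged records by hand before cutting them.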

4. Optimize for everyone. Making a survey responsive means making it easy to complete without undue barriers for all the respondents you invite. Mobile-responsive is one crucial piece of that. But having a mobile-responsive platform is not enough. Have you looked at how your questions actually render on mobile? Many of them adapt in ways that you should never allow. Have you tested on multiple devices via multiple browsers? If not, you should be using a tool like BrowserStack. Is your survey responsive to assistive devices that people with disabilities use? If not, read How to Make a Survey ADA Accessible. Think beyond mobile to optimize for everyone.

5. Punish test surveys. Testing should involve more than running through a survey a few times after it is programmed. You should subject it to an exceptionally thorough review, with a checklist, on multiple devices and with multiple browsers. Confirm that all displayed text matches your questionnaire exactly. Test every skip pattern and programming instruction, including randomizations. Use a random data generator and review the data structure to identify programming mistakes. Deliberately enter bad data to test input constraints and error messages. Assume that your survey is broken (it probably is) and that your goal is to find out where. Read A Quick Puzzle for Market Research Brains to understand what we mean by this, and why “punish testing” is an excellent idea.
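The random-data test described above can be sketched in a few lines of Python. The questionnaire structure and function name here are hypothetical, for illustration only; the point is to generate many random completes and then inspect the resulting data file for skip-pattern and layout mistakes.

```python
import random

def random_responses(questionnaire, n=200):
    """Generate random test completes for a survey.

    `questionnaire` maps question ids to their allowed answer codes
    (a hypothetical structure). Reviewing the output data file helps
    surface skip-pattern and data-structure mistakes before launch.
    """
    return [{q: random.choice(codes) for q, codes in questionnaire.items()}
            for _ in range(n)]

qnr = {"Q1": [1, 2], "Q2": [1, 2, 3, 4], "Q3": [0, 1]}
test_data = random_responses(qnr, n=50)
```

Most survey platforms have a built-in random data generator; a stand-alone sketch like this just shows what such a tool is doing for you.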

6. Write great invitations. A survey invitation is the first thing your audience sees. Despite response rates being miserably low these days, trust that most of your potential respondents will see an invitation. So it is your opportunity to make an easy and simple pitch. Keep it very short. Tell them in simple language what the survey is about. Tell them how long it takes, and how it will benefit them. Include a big beautiful button that says “Take Survey” as a call to action. Read A Snazzy Revamp of Survey Invitations for more ideas on how to make your invitation sing.

7. Don't go fast. Surveys can be executed faster and more efficiently than in years gone by, but if you want good data, resist the pressure for speed. Yes, you can get thousands of “respondents” answering surveys overnight. But I promise you, these are not the respondents you want. Go slow so you can see who is coming in. Validate who they are. Tweak your quotas and sampling protocols to avoid the inevitable skews that outbound efforts will create. Please believe us when we say that fast data will be bad data (see our next tip!).

8. Review for quality. And cut ruthlessly. Unless survey respondents are coming from your own list and you have rigorous ways to validate them, a big chunk of your data may be fraudulent. There are real humans who lie, or race through panel surveys to earn cash or rewards. There are now thousands of mechanical bots likewise deployed. Finding and eliminating fraud requires several hours of excruciating data review, flagging records for speed, inconsistency, duplication, suspicious ISPs, odd numeric and open-end entries, random data, and so on. These days we cut 10% to 25% of “respondents” as probably fraudulent even from the highest quality panels available.
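Here is a minimal Python sketch of this kind of flagging pass. The record fields ('id', 'seconds', 'ip') and the 180-second speed cutoff are illustrative assumptions; a real review would layer in the other checks mentioned above (inconsistency, duplication across more fields, odd open-ends, suspicious ISPs).

```python
from collections import Counter

def flag_records(records, min_seconds=180):
    """Flag survey records for speeding and duplicate IP addresses.

    Each record is a dict with (hypothetical) keys 'id',
    'seconds' (completion time), and 'ip'. Returns the set of
    ids to hold out for manual review.
    """
    ip_counts = Counter(r["ip"] for r in records)
    flagged = set()
    for r in records:
        if r["seconds"] < min_seconds:   # speeder
            flagged.add(r["id"])
        if ip_counts[r["ip"]] > 1:       # duplicate IP address
            flagged.add(r["id"])
    return flagged

data = [
    {"id": 1, "seconds": 95,  "ip": "10.0.0.1"},
    {"id": 2, "seconds": 420, "ip": "10.0.0.2"},
    {"id": 3, "seconds": 400, "ip": "10.0.0.1"},
]
suspects = flag_records(data)  # {1, 3}
```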

9. Solicit feedback. When you field a survey, well-meaning respondents invest five, ten, or fifteen minutes of their time to help you. So at the end of your survey, why not give them a chance to share something that may be important to them? We always ask an open-end: “Do you have any comments about this survey?” A majority will leave it blank. Others use it as an opportunity to share invaluable information, such as issues overlooked, glitches in the survey, or nuances to answers that should affect our data cleaning and analysis. In addition, poor-quality vendor panelists or robots often type in junk that you can use to flag and cut fraudulent respondents.

10. Keep a watchlist. One of the best ways to ensure commitment to all these best practices for fieldwork is to develop a fieldwork watchlist that you tailor for every project. Do not rely on your platform's dashboard. You need a deeper look. Download your data every day, tabulate, and monitor your progress. Look at your quotas. Look at the skew of demographics within each quota. Look at the length of interview, not just overall, but for multiple paths within the survey. Look at open-ends, device usage, ISPs, break-offs, and questions in your survey that can serve as benchmarks.
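A daily watchlist tabulation can be as simple as this Python sketch. The quota-cell labels and target counts are hypothetical; the idea is to recompute fill rates from a fresh data download each day rather than trusting the platform dashboard.

```python
from collections import Counter

def quota_watchlist(completes, targets):
    """Tabulate completes per quota cell against targets.

    `completes` is a list of quota-cell labels, one per respondent;
    `targets` maps each cell to its target count. Returns
    {cell: (count so far, target, fill rate)}.
    """
    counts = Counter(completes)
    return {cell: (counts.get(cell, 0), target, counts.get(cell, 0) / target)
            for cell, target in targets.items()}

today = ["women 35-54", "women 35-54", "men 35-54", "women 55+"]
targets = {"women 35-54": 100, "men 35-54": 100, "women 55+": 50}
status = quota_watchlist(today, targets)
```

Extend the same daily pass to interview length by path, open-ends, device mix, and break-offs, as the tip above describes.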

 

“In God we trust. All others must bring data.”
—W. Edwards Deming


“In God we trust. All others must bring data.” Those words by W. Edwards Deming, a statistician and engineer of the 20th century, are words to live by among those who make a living doing research. But let's add to Deming's words. It must be good data, gathered and documented through a fieldwork process that is systematic, careful, rigorous, validated, tested, diligent, detailed, and thorough.

It is no accident that Deming was also the founder of TQM (Total Quality Management) and its various offshoots, like Six Sigma. Take this to heart as you think about fieldwork and data on your next research project. You must bring data, and you must bring good data. It won't happen without your close and diligent involvement in fieldwork.

 Stories from the Versta Blog

Here are several recent posts from the Versta Research Blog. Click on any headline to read more.

Grab Attention with a Research Infographic
Effective infographics for market research provide just enough information to get the reader engaged and asking for more. Here are 8 tips to make that happen.



Reasons Customers Blow Off Your Surveys
Here's a life rule worth keeping: Never ask questions to which you already know the answers. Apply it to your surveys, too, and customers will more likely respond.



“Just Let Go” for Great Qualitative Interviews
Improvisational comedians offer great advice for qualitative interviewing: Let go of your protocols, listen carefully, and respond to what you hear in skillful ways.



Responsive Surveys Go Way Beyond Mobile
Mobile-optimized is just one way of making surveys responsive and adaptive. Here are other important ways—both basic and complex—that surveys should adapt.



Avoid This Adherence Scale for Health Surveys
A professor at UCLA takes aggressive legal action against researchers who use his super obvious and simple set of questions to measure medication adherence.



Your Best Segmentation Tool Is a Salesperson
Good salespeople intuitively ask customers the smart, strategically useful questions that a good segmentation tool asks. So deploy them instead of a new survey!



AAPOR Says Trump-Clinton Polls Mostly Right
A blue-ribbon panel weighs in on last year's election polling, explaining why the public felt misled even though the national polls were remarkably accurate.



Why Bigger Is Better for Numeric Rating Scales
New research highlights an important cultural bias in how respondents use numeric rating scales. Working “against” that bias will dampen respondent sensitivity.



Dilbert's Boss: Focus Groups Are Not Reliable
Nor should they be! From a research perspective, focus groups should provide rich, new, and surprising depths of insight, not necessarily “reliable” data.



Don't Believe This Best Practice from Google Surveys
Google Surveys tells you to always randomize your answer options “for best results.” But there are lots of times when randomizing answer options is foolish.



Why Segmentation Is Sometimes Useless
Statistical segmentation (like clustering) can find strong but trivial differences that are wrongly interpreted as “insight,” as this NYT example demonstrates.



Your Margin of Error Is Bigger than You Think
Election polls provide a test of whether calculated margins of error capture true values the way they are supposed to. Guess what? They don't.


 Versta Research in the News

Versta Research and IBM Explore E-Signatures
Wells Fargo commissioned Versta Research and IBM to explore how consumers behave when offered loan documents for review and signature on mobile devices.



A “How-To” on Infographics for PR Research
Versta Research presented at the CRC conference in Chicago highlighting work with Fidelity Investments to communicate PR research through infographics.
