Grab Attention with a Research Infographic

October 18th, 2017

Infographic handout from Versta Research’s CRC presentation

Yesterday I was delighted to share the stage with Kate Morris at the Corporate Researchers Conference in Chicago, talking about the power, the potential, and the “how-to” of spectacular infographics for market research. Kate spent many years in the research group at Fidelity Investments, and it was for Kate that Versta Research first tried its hand at infographics.

We didn’t tell her we were creating an infographic—we just did it. And when we delivered it at the very end of the project, asking if it was something she might want to use, Kate’s response was:

Did you jump into my brain this morning? This is fabulous and I just emailed my PR contact about infographics…. I LOVE this infographic.

Hence the focus of our talk: How to Create Spectacular Infographics for Market Research. We learned how to create infographics for ourselves and for our clients and internal business partners. So we laid out eight tips and tricks to share with you: things to keep in mind as you go down the winding path (a great image for infographics!) of learning how to do it yourself.

In case you missed our talk in Chicago, here are two tips from our full presentation of eight: Read the rest of this entry »

Reasons Customers Blow Off Your Surveys

October 11th, 2017

If you deploy your own surveys trying to solicit feedback from your customers, you know how hard it is to get them to respond. There are many good reasons why customers ignore surveys nowadays. Here is one potential culprit: You may be asking them for information you already know. If you are, they know that you know. Asking them to provide you with what you already know is irritating, and it makes you look lazy and incompetent.

Here is an example. My friend Paul got an oil change. Within a day, he received an e-mail inviting him to complete a customer satisfaction survey. It looked like this: Read the rest of this entry »

“Just Let Go” for Great Qualitative Interviews

October 4th, 2017

A really good qualitative researcher knows how hard (and how frustrating) it can be to get respondents to tell us things we want to know. We have our interview guide, with all our questions laid out in a nice logical progression. But often when we ask questions, respondents have less to say than we had hoped. Or we find that our questions are too weird or too abstract or too embedded in the context of our thinking instead of in the context of their experience.

But have you ever experienced the thrill of finishing a truly superb in-depth interview? If you have, chances are you let go of your interview guide. And you will recognize this sentiment from Billy Eichner, a comedian and television interviewer, being interviewed by Ana Marie Cox of the New York Times: Read the rest of this entry »

Responsive Surveys Go Way Beyond Mobile

September 27th, 2017

If you design surveys that adapt well to mobile devices, you can feel proud. Current estimates are that only about half of all market research surveys are mobile-friendly. According to Research Now, an online panel that fields thousands of surveys from research vendors like Versta, just 15% are fully optimized for mobile use.

But now it is time to think bigger and broader. Responsive and adaptive design goes far beyond mobile usage. It means any design feature that makes a survey tailored and responsive to the specific person taking the survey. Read the rest of this entry »
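To make that idea concrete, here is a minimal sketch (in Python, with hypothetical question wording and names, not any particular survey platform's API) of one such feature: piping an earlier answer into a follow-up question so the wording reflects that respondent's own situation.

```python
# Hypothetical sketch of one responsive-design feature: tailoring a follow-up
# question to a respondent's earlier answers. Names and wording are illustrative,
# not from any particular survey platform.

def follow_up_question(brand: str, satisfaction: int) -> str:
    """Build a follow-up question whose wording reflects this respondent's answers."""
    if satisfaction <= 2:
        # Dissatisfied respondents get a probe about what went wrong.
        return f"You rated {brand} a {satisfaction} out of 5. What went wrong?"
    # Satisfied respondents get a probe about what is working well.
    return f"You rated {brand} a {satisfaction} out of 5. What does {brand} do especially well?"

# Example: a respondent who named "Acme Widgets" and gave a satisfaction rating of 2
print(follow_up_question("Acme Widgets", 2))
```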

Avoid This Adherence Scale for Health Surveys

September 20th, 2017

Here’s an unfortunate reality in the world of survey research: Survey questions can sometimes be copyrighted. When certain questions (no matter how obvious and common) are asked together as a set, and then scaled into a single measure (no matter how obvious and simple), and then validated in a study as being a solid tool … that scale can be considered intellectual property. And then the owners of that property can stop you from using it in your research.

This issue has been in the news recently, as a professor at UCLA has been demanding that other researchers (usually academic) who used versions of the Morisky Medication Adherence Scales without permission either pay or retract their publications.

Here are the exceedingly simple questions on the four-item scale: Read the rest of this entry »

Your Best Segmentation Tool Is a Salesperson

September 13th, 2017

A cool deliverable of many segmentation studies is a “typing tool” that allows you to input data on just a few dimensions (usually six to twelve survey questions) in order to predict which segment any customer belongs to. It works because even though segmentation algorithms sort through tons of data to find the best clusters, ultimately just a few of those data points drive the differentiation of one segment from another.
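As a rough illustration of how such a typing tool often works under the hood (a hypothetical sketch in Python; the segment names, questions, and centroid values are invented, not from any actual study), you can score a customer's answers on those few key questions against each segment's profile and assign the closest match:

```python
# Minimal sketch of a segmentation "typing tool": assign a customer to the
# nearest segment centroid based on answers to a handful of survey questions.
# Segment names, questions, and centroid values are hypothetical.

SEGMENT_CENTROIDS = {
    # Average answers (1-5 scale) on six key typing questions for each segment
    "Price Hunters":     [4.6, 1.8, 2.1, 3.0, 1.5, 2.2],
    "Brand Loyalists":   [2.0, 4.4, 4.1, 3.5, 4.2, 3.8],
    "Convenience First": [3.1, 2.5, 3.9, 4.5, 2.8, 4.3],
}

def assign_segment(answers: list[float]) -> str:
    """Return the segment whose centroid is closest (squared Euclidean distance)."""
    def distance(centroid: list[float]) -> float:
        return sum((a - c) ** 2 for a, c in zip(answers, centroid))
    return min(SEGMENT_CENTROIDS, key=lambda name: distance(SEGMENT_CENTROIDS[name]))

# Example: one customer's answers to the six typing questions
print(assign_segment([4.5, 2.0, 2.3, 2.8, 1.7, 2.5]))  # -> "Price Hunters"
```

In practice the scoring rules come out of the segmentation algorithm itself, but the principle is the same: a handful of answers in, one segment label out.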

The problem is that often the most interesting, strategically relevant segmentation schemes (or personas, as we often call them) are based on attitudinal or behavioral data you do not have easy access to. You gained visibility into these data from carefully designed qualitative research and quantitative survey research. With your cool typing tool in hand, everything seemed awesome until you realized … wait, how are you going to get answers to those survey questions from all of your customers?

Well, you probably can't. If you try surveying your entire customer base, and assuming you haven't already alienated customers with too many surveys, you're likely to get response rates of just 3% to 4%.

Here’s another idea, though. Read the rest of this entry »

AAPOR Says Trump-Clinton Polls Mostly Right

September 6th, 2017

Here is an update on what went wrong with the Clinton vs. Trump polling debacle last fall, according to AAPOR’s official Evaluation of 2016 Election Polls in the U.S. that was released several weeks ago.

The conclusions are based on an analysis and assessment of publicly available polling data plus additional data supplied by top polling organizations committed to AAPOR’s efforts to build ongoing transparency and trust into the polling process.

Overall, the findings are consistent with Versta Research's conclusions published in January (Survey Says…Trump Won? Research Lessons from the Polling Mess), and they add insight into some of the weaknesses and dangers of state-level polling.

In brief, key AAPOR findings are: Read the rest of this entry »

Why Bigger Is Better for Numeric Rating Scales

August 30th, 2017

If you grew up in the United States, you probably think big numbers are better when it comes to rating things. Higher scores on school exams are better. Higher scores in games and sports are usually better. Higher credit scores are better. Five-star restaurants are definitely better than one-star restaurants. In Germany it is often the reverse. Lower grade point averages, for example, are superior to higher ones.

This has important implications for how you should design research if you use numeric rating scales. People can use rating scales that are reversed from what they are used to. But their “sensitivity” as measured by a reversed scale will be dampened.
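A small practical corollary (a generic data-handling sketch, not something reported in the study): if you end up with ratings collected on scales that run in opposite directions, you have to reverse-code one set before comparing them, keeping in mind the caveat above that reversed-scale ratings may still be dampened.

```python
# Generic data-handling sketch (not from the cited study): reverse-code ratings
# collected on a "lower is better" scale so that bigger is better everywhere.

def reverse_code(rating: int, low: int = 1, high: int = 10) -> int:
    """Map a rating onto the opposite direction of the same numeric scale."""
    return (low + high) - rating  # on a 1-10 scale: 1 -> 10, 2 -> 9, ..., 10 -> 1

# Example: a "2" given on a scale where lower means better becomes a 9
print(reverse_code(2))  # -> 9
```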

Here is what a new study just published in the Journal of Consumer Research found: Read the rest of this entry »

Dilbert’s Boss: Focus Groups Are Not Reliable

August 23rd, 2017

The pointy-haired boss has a point here, even though he does not realize it. From a research perspective, focus groups should provide rich, new, and surprising depths of insight, not necessarily “reliable” data. In fact, that’s why we typically suggest doing multiple focus groups, in different locations, with different types of participants. We want each group telling us different things from a variety of perspectives. We want each person shedding new light on our research questions, not repeating what others have just said.

William Trochim, a professor at Cornell University and an expert on methodology and evaluation research, has this to say about reliability: Read the rest of this entry »

Don’t Believe This Best Practice from Google Surveys

August 16th, 2017

We use Google Surveys for quick, cheap incidence tests, or to test question wording or answer scales. Every time I use it, however, I am startled by how foolish Google Surveys can be. Here is an example we noticed from our most recent use of the tool.

Start constructing your Google Survey. Add a “single answer” question. Type in your question, and then your answer options. At the bottom, you will notice advice from Google Surveys: “Randomization produces best quality results.”

If you construct a question similar to the one we were constructing, with answers that represent a scale from negative to positive, this “best practice” is terrible. Take a look at what a respondent is likely to see: Read the rest of this entry »
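To see why, here is a trivial sketch of what that advice does to an ordered set of answer options (the labels are mine, for illustration, not Google Surveys' actual rendering):

```python
# Hypothetical sketch of what randomizing answer options does to an ordered scale.
# The labels are illustrative, not Google Surveys' actual rendering.
import random

scale = [
    "Very dissatisfied",
    "Somewhat dissatisfied",
    "Neither satisfied nor dissatisfied",
    "Somewhat satisfied",
    "Very satisfied",
]

random.shuffle(scale)    # the effect of "Randomization produces best quality results"
print("\n".join(scale))  # the respondent sees the scale points in scrambled order
```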