In market research, there are many ways to approach survey design. Each approach has pros and cons, and there usually isn't one right answer. In fact, the survey design process is almost an analysis before the analysis: you have an objective in mind, and you need to design a question or series of questions to answer that objective. You will probably even take a look at past surveys and come to a conclusion about which design works best for your unique project.

Here at Research & Marketing Strategies (RMS), our team thoroughly reviews the pros and cons of each method before implementation and chooses the best approach for our client. Output from a survey is only as good as the input and thought that go into the survey design.

A common objective of our market research clients is to assess customers' feelings toward their business – a simple image and awareness or image and usage study. In this type of study, at some point in the survey script, you will ask respondents how important certain features or aspects of a product or service are to them, or simply how satisfied they are with each of those features. The most commonly used technique for gathering this information is a rating scale, such as the Likert scale, typically seen as a five-point, seven-point, or ten-point scale. This is referred to as "traditional scaling."

Knowing how customers rate the importance of certain characteristics is vital to the success of a product or service. Scaling an importance rating allows for advanced statistical analysis (including correlation and regression); the data can be charted, linked with open-ended questions, or cross-tabbed with other data from the survey. The cross-tabulations are often compared against respondent demographics to identify how different groups rate each characteristic.
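To make the cross-tab idea concrete, here is a minimal sketch in Python that averages a 1-10 importance rating within each demographic group. The field names and ratings are hypothetical, illustrative data only, not RMS's actual analysis pipeline:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical survey responses: a demographic field plus
# 1-10 importance ratings for two characteristics.
responses = [
    {"age_group": "18-34", "price": 9, "quality": 7},
    {"age_group": "18-34", "price": 8, "quality": 8},
    {"age_group": "35-54", "price": 6, "quality": 9},
    {"age_group": "35-54", "price": 7, "quality": 10},
    {"age_group": "55+",   "price": 5, "quality": 10},
]

def crosstab_mean(rows, group_key, rating_key):
    """Average one rating within each demographic group."""
    groups = defaultdict(list)
    for row in rows:
        groups[row[group_key]].append(row[rating_key])
    return {group: mean(vals) for group, vals in groups.items()}

print(crosstab_mean(responses, "age_group", "price"))
# → {'18-34': 8.5, '35-54': 6.5, '55+': 5}
```

The same pattern extends to any demographic field or rated characteristic in the dataset; in practice this kind of grouping is usually done in a stats package or a library such as pandas.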

Conversely, one problem that continually shows up in traditional scaling is seen in the graphic above. This particular respondent chose 10s for all four of the factors tested, rating every characteristic as "very important." So put your research analyst hat on and tell me: which of those four is most important to this respondent?

Coming up with anything…? 

…I didn’t think so.  It’s impossible to tell which of those four characteristics is most important and which is least important. How is a client supposed to make actionable changes based on this data? The top choices may all be important to the customer, but which one truly matters the most? 

One way to avoid this problem is to engage respondents in a way that forces them to choose between options. Forcing a choice gives the researcher a better understanding of each respondent's decision-making process. Take a look at the graphic below:

This is your basic Maximum Difference (MaxDiff) question in a survey. It is a simple yet useful tool for forcing a decision onto the respondent. Through MaxDiff scaling (best-worst, most-least), the respondent is asked to choose between extreme opposites: what is least important, and what is most important? Typically, four factors are shown at a time, and from those the respondent must choose which one they think is best and which is worst. This approach is an all-around better way to engage the respondent and get a true sense of the level of importance. It prevents respondents from "opting out" by rating every factor "very important" – all "10s" on the 1-10 scale.
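The simplest way to score MaxDiff responses is best-worst counting: for each factor, count how often it was chosen "most important," subtract how often it was chosen "least important," and normalize by how often it was shown. A minimal sketch with hypothetical factors and tasks follows (production MaxDiff studies typically use more rigorous models, such as hierarchical Bayes or multinomial logit):

```python
from collections import Counter

# Hypothetical MaxDiff tasks for one respondent: each task shows four
# factors, and the respondent picks one "best" and one "worst".
tasks = [
    {"shown": ["price", "quality", "service", "location"],
     "best": "quality", "worst": "location"},
    {"shown": ["price", "quality", "speed", "location"],
     "best": "price", "worst": "location"},
    {"shown": ["service", "quality", "speed", "price"],
     "best": "quality", "worst": "speed"},
]

def maxdiff_counts(tasks):
    """Best-minus-worst count per factor, normalized by exposures."""
    shown, best, worst = Counter(), Counter(), Counter()
    for task in tasks:
        shown.update(task["shown"])
        best[task["best"]] += 1
        worst[task["worst"]] += 1
    return {item: (best[item] - worst[item]) / shown[item]
            for item in shown}

scores = maxdiff_counts(tasks)
# "quality" scores highest, "location" lowest -- unlike the all-10s
# Likert response, the trade-offs force a clear ranking to emerge.
```

Note how the forced trade-offs produce a ranking even for a respondent who would have rated every factor a "10" on a traditional scale.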

MaxDiff is just one of the techniques to consider when designing a questionnaire or survey. As previously stated, each method has pros and cons, and oftentimes there is no single right answer. Even the MaxDiff setup comes with cautions involving:

  • Respondent time – MaxDiff forces the respondent to stop and truly consider levels of importance, which is what you want as a researcher, but it can turn a 15-minute survey into 20 or 25 minutes, depending on how many MaxDiff series you use.
  • Longitudinal consistency – traditional scales have been around market research forever, and many clients have years upon years of 1-10 scale ratings from their customers. Changing the format will disrupt that trending.
  • Survey mode – online versus telephone methodologies. Reading off a list of 10 MaxDiff questions over the phone can create respondent fatigue, compared with walking a respondent through a list of 1-10 scales only once using CATI (computer-assisted telephone interviewing).

At Research & Marketing Strategies (RMS), we work with our clients to create a survey that will deliver the most relevant and actionable results. Oftentimes, that means weighing the pros and cons of different survey designs. When creating survey questions, it is always important to consider all of the available techniques. Whether it is MaxDiff or traditional scaling – no matter which survey design you implement, what truly matters is that you are using market research to listen to your customers.