Most people know that the wording of a survey question can affect the way respondents answer it. A less obvious, but potentially significant, source of bias is the point in the survey instrument at which a question is asked. A good example of this issue is the problem of when to ask about overall satisfaction in a survey that seeks to measure several aspects of the customer experience in some detail.

Let’s say we want to measure customer satisfaction with Vance’s Greasy Spoon Diner, a fine eating establishment that is looking to evaluate its place in the market after a series of Health Department-ordered shutdowns. Our client wants to know the customers’ overall satisfaction with the restaurant, as well as how they rate several sub-components. The survey could take the form of a simple rating scale with a series of items. It might look like this:

Please rate your satisfaction with the following aspects of Vance’s Greasy Spoon Diner on a 5-point scale (5 = very satisfied, 1 = very dissatisfied).

  1. Overall Experience
  2. Speed of Service
  3. Friendliness of Wait Staff
  4. Menu Selection
  5. Food Quality
  6. Atmosphere of the Dining Area
  7. Value

In this example, the survey captures the respondent’s opinion of the overall experience first, and then delves into the components that make up that experience. But not all survey instruments are structured that way. There is a school of thought that the question order should look like this:

  1. Speed of Service
  2. Friendliness of Wait Staff
  3. Menu Selection
  4. Food Quality
  5. Atmosphere of the Dining Area
  6. Value
  7. Overall Experience

In this case, the respondent is asked to think about a variety of specific issues first, and then rate their overall satisfaction. It may seem like a subtle difference, but this is a case where the order of the items could influence the overall satisfaction ratings. The first example would capture the respondent’s “gut” reaction (pun not originally intended, but it is very fitting for a restaurant survey, yes?) before they have given the matter much thought. In the second example, walking respondents through the various aspects of their experience at the restaurant first can sway the overall rating they assign. For example, if a person rated every aspect leading up to the overall experience item as a 4 or a 5, they might feel it would be irrational to give the overall experience a 3, even if that was their initial thought.

This is an issue that can be argued either way, and we have had this debate a number of times in the Bunker when writing survey scripts for various projects. My own feeling is that, in most cases, it makes more sense to ask the overall satisfaction question first. I believe a person’s initial gut reaction usually comes closest to being the one that determines their behavior as a consumer. It may be irrational, but who ever said human behavior was entirely rational?

Another reason I favor asking the overall question first is that the alternative assumes that ALL the factors that might go into a person’s overall experience have been covered in the survey instrument. That is always a dangerous assumption to make. For example, what if one of the key drivers of dissatisfaction with Vance’s Greasy Spoon is the fact that the diner sits between Chris’s Junkyard and George’s Horse Stable? Our survey asked about the atmosphere of the dining area, but not about the broader surroundings. Or perhaps the wait staff is friendly but has a tendency to get orders mixed up, and we never asked about order accuracy. Both question orderings suffer from those omissions, but I believe the omissions matter more in the second one, where, as survey writers, we have tacitly told the respondent, “These are the only factors you should be rating us on,” before asking for their overall rating. The first ordering does not introduce that level of bias. So at the very least, if the overall satisfaction scores seem at odds with everything else, the analysis will suggest that some other key driver is out there to be explored in further research.
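
For the analytically inclined, one way to spot a missing key driver is to check how much of the variance in the overall score the component items actually explain. Below is a minimal sketch in Python, using entirely made-up ratings, that fits a simple least-squares model of the overall rating on the six component items and reports the R-squared; a low value would hint that something we never asked about is driving the overall score.

```python
import numpy as np

# Hypothetical 1-5 ratings for the six component items (columns),
# one row per respondent. These numbers are made up for illustration.
components = np.array([
    [4, 5, 4, 3, 4, 4],
    [2, 3, 2, 2, 3, 2],
    [5, 5, 4, 5, 4, 5],
    [3, 4, 3, 3, 3, 3],
    [4, 4, 5, 4, 5, 4],
    [1, 2, 2, 1, 2, 1],
    [5, 4, 5, 5, 5, 4],
    [3, 3, 4, 3, 3, 3],
    [2, 2, 3, 2, 2, 2],
    [4, 5, 5, 4, 4, 5],
])
overall = np.array([4, 2, 5, 2, 4, 1, 5, 3, 3, 4])

# Ordinary least squares: overall ~ intercept + six component ratings.
X = np.column_stack([np.ones(len(overall)), components])
coef, *_ = np.linalg.lstsq(X, overall, rcond=None)

# R-squared: the share of variance in the overall score explained by
# the component items. A low value suggests an unmeasured driver.
predicted = X @ coef
ss_res = np.sum((overall - predicted) ** 2)
ss_tot = np.sum((overall - overall.mean()) ** 2)
print(f"R-squared: {1 - ss_res / ss_tot:.2f}")
```

In a real key driver analysis you would of course want a proper sample size and a more careful model, but the basic idea is the same: if the pieces don’t add up to the whole, go looking for the junkyard next door.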

There are some cases where it might make sense to ask the overall satisfaction question last. If the survey covers a topic the respondent might struggle to form an opinion on right off the bat, asking about the specific components first can serve as something of a warm-up and/or memory aid to get them thinking about the overall experience. Another case is where the customer experience is highly colored by emotional factors outside the control of the organization being evaluated. An example would be a hospital survey, where a person’s rating of their experience could be influenced by the seriousness of their medical condition and/or the prognosis after their care. In that situation, keeping the respondent focused on the specifics of the stay at the beginning of the survey reminds them of the operational aspects they are being asked to assess (nurse care, doctor communication, pain management), rather than immediately reminding them of how frightened or desperate they might have felt during their time as a patient.

That said, I think in most cases it’s best to get the overall impression first and then drill down into specifics. We in the Bunker would be interested to know how others feel about this issue. If you have any thoughts, please leave us a comment; we’d love to read your take!

If you have any questions about writing customer satisfaction surveys or need to work with a consultant to draft up a customer satisfaction survey script for your business, contact our Director of Business Development, Sandy Baker, at SandyB@RMSresults.com or by calling 315-635-9802.