Posted By Doug Peterson
In part 2 and part 3 of this series, we looked at writing good survey questions. Now it’s time to turn our attention to the response scale.
Response scales come in several flavors. The binary or dichotomous scale is your basic yes/no option. A multiple-choice scale offers three or more discrete selections from which to choose. For example, you might ask “What is the public sector organization in which you work?” and provide a list of organization types to choose from.
Dichotomous and multiple choice scales are typically used for factual answers, not for opinions or ratings. The key to these types of scales is that you must make sure that you offer the respondent an answer they can use. For example, asking a hotel guest “Did you enjoy your stay?” and then giving them the options “yes” and “no” is not a good idea. They may have greatly enjoyed their room, but were very dissatisfied with the fitness center, and this question/response scale pairing does not allow them to differentiate between different aspects of their visit. A better approach might be to ask “Did you use the fitness center?” with a yes/no response, and if they did, have them answer more detailed questions about their fitness center experience.
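The screen-then-drill-down pattern described above can be sketched in a few lines of Python. This is an illustrative sketch only — the `ask` helper, the question wording, and the answer-collection mechanics are all assumptions standing in for whatever your survey tool actually provides:

```python
def ask(question, choices, answer):
    # Stub collector: validate that the supplied answer is one of the
    # offered choices, mimicking a survey tool's input control.
    assert answer in choices, f"{answer!r} is not an offered choice"
    return answer

def fitness_center_section(used_gym, rating=None):
    """Ask a dichotomous screening question, then branch into detail
    questions only for respondents who can actually answer them."""
    answers = {"used": ask("Did you use the fitness center?",
                           ["Yes", "No"], used_gym)}
    if answers["used"] == "Yes":
        # A "No" respondent is never forced to rate something
        # they did not experience.
        answers["rating"] = ask(
            "How satisfied were you with the fitness center?",
            ["Very dissatisfied", "Dissatisfied", "Neutral",
             "Satisfied", "Very satisfied"],
            rating)
    return answers
```

The point of the branch is that every respondent only ever sees questions they have an answer for, which is exactly the “offer an answer they can use” rule above.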
The response scale we typically think about when it comes to surveys is the descriptive scale, where the respondent describes their opinion or experience as a point on a continuum between two extremes, such as “strongly disagree” to “strongly agree”. These are the scales that elicit the most debate among the experts, and I strongly encourage you to Google “survey response scales” and do a little reading. The main points of discussion are number of responses, direction, and labeling.
Number of Responses
We’ve all seen the minimalist approach to rating scales:
Disagree Neutral Agree
There are certainly situations where this scale is valid, but most of the time you will want to provide more options to allow the respondent to provide a more detailed answer:
Strongly Disagree Disagree Neutral Agree Strongly Agree
Five choices is very typical, but I would agree with the point Ken Phillips made during his session on surveys at ASTD 2013: five may not be enough. Think of it this way: I’m pretty much going to either disagree or agree with the statement, so two of the choices are immediately eliminated. What I *really* have, therefore, is a three-point scale – Strongly Disagree, Disagree, and Neutral, or Neutral, Agree, and Strongly Agree (and an argument can be made for taking “Neutral” out of the mix once you agree or disagree at some level). Ken, drawing on the Harvard Business Review article “Getting the Truth into Workplace Surveys” by Palmer Morrel-Samuels, recommends using a minimum of seven choices, and indicates that nine- and eleven-choice scales are even better. However, a number of sources feel that anything over seven puts a cognitive load on the respondent: presented with too many options, they have trouble choosing between them. Personally, I recommend either five or seven choices.
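As a concrete sketch of the five- versus seven-point choice, here is one way to define symmetric agree/disagree scales in Python. The label wording for the seven-point scale (“Somewhat Disagree”, “Somewhat Agree”) is an assumption on my part — label choices are their own debate, covered in Part 5:

```python
# Illustrative label sets; only the 5- and 7-point scales recommended
# above are filled in. The 7-point labels are assumed wording.
LIKERT_LABELS = {
    5: ["Strongly Disagree", "Disagree", "Neutral", "Agree",
        "Strongly Agree"],
    7: ["Strongly Disagree", "Disagree", "Somewhat Disagree", "Neutral",
        "Somewhat Agree", "Agree", "Strongly Agree"],
}

def make_scale(points):
    """Return (label, numeric_code) pairs, coded 1..points,
    so each response maps to a number you can analyze later."""
    labels = LIKERT_LABELS[points]
    return list(zip(labels, range(1, points + 1)))
```

Keeping the label list and the numeric coding together in one place means the analysis side can never drift out of sync with what the respondent actually saw.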
Direction
By “direction” I mean whether the scale runs from “Strongly Disagree” to “Strongly Agree”, or from “Strongly Agree” to “Strongly Disagree”. Many experts will tell you that it doesn’t matter, but others point out that studies have shown respondents tend to want to agree with the statement. If you start with “Strongly Agree”, the respondent may select that choice almost automatically: it is a positive response, and they tend to want to agree anyway. This can skew your results. If the first choice is “Strongly Disagree”, however, the respondent is more likely to read through the choices, because “Strongly Disagree” has a negative feel to it (it’s not an attractive answer) and respondents will shy away from it unless they truly feel that way. The respondent is then more likely to genuinely differentiate between “Agree” and “Strongly Agree”, instead of seeing “Strongly Agree” first, thinking, “Yeah, what the heck, that’s good enough,” and selecting it without much thought.
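One practical consequence of direction: if some questions display the scale in the opposite order, you must reverse-code those responses before analysis so a higher number always means stronger agreement. A minimal sketch, assuming the usual 1-to-n coding convention:

```python
def reverse_code(value, points):
    """Map a response collected on a reversed scale back to standard
    coding, so scores stay comparable across questions.

    On a 5-point scale: 1 <-> 5, 2 <-> 4, 3 stays 3.
    """
    return points + 1 - value
```

Without this step, a question presented “Strongly Agree” first and one presented “Strongly Disagree” first would produce numerically opposite results for the same opinion.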
In Part 5, we’ll finish up this discussion by taking a look at labeling each choice, along with a few other best practices related to response scales.
If you are interested in authoring best practices, be sure to register for the Questionmark 2014 Users Conference in San Antonio, Texas March 4 – 7. See you there!