Introduction
Contingency questions are items meant to be answered by only a subset of respondents (Barrett 2008). We first provide a brief review of the avenues available to survey designers for constructing questionnaires with contingency questions and the arguments supporting each approach. We then present a dramatic example of the inadvisability of relying on one of these methods: general instructions to skip non-applicable items.
Approaches to Handling Non-applicable Questions
Broadly speaking, three options are available to guide potential respondents to the questions applicable to them: (a) provide general instructions to not answer items that do not apply, (b) create a branching (skip) system directing the respondent to the relevant questions based on his or her answers to prior filter items, or (c) include a “does not apply” answer option for every contingency question.
Skip Instructions
The main virtue of skip instructions is that they allow one to shorten the questionnaire and thereby reduce the response burden (Groves, Cialdini, and Couper 1992), which can be a major reason for non-response (Roszkowski and Bean 1990). In turn, the chief problem with skip instructions is that respondents often fail to read them (Barnard, Wright, and Wilcox 1979; Martin and Gerber 2005; McBride and Cantor 2010; C. Redline et al. 1999). If a question looks self-explanatory, respondents tend not to consult the instructions (Frohlich 1986). Dillman, Carley-Baxter, and Jackson (1999) report that respondents have an implicit expectation that they are supposed to answer each and every question in sequence, and skip instructions are therefore counterintuitive. Thus, questions intended to be skipped are more likely to be answered in error if they follow a series of questions that do not involve branching (C. D. Redline et al. 2003). Another problem with branching is that it can become quite complex on a paper-and-pencil survey and can confuse the respondent.
Does Not Apply Answer Option
Incorporating “does not apply” as an answer option can also add to the perceived (if not actual) burden because it increases the physical size of the survey (i.e., an additional response category) and may make completing the form slightly more time consuming. Moreover, having to reply “does not apply” to a large number of questions in sequence could easily frustrate the respondent, leading to premature termination of the survey. However, with this approach, the respondent is more likely to recognize that the other response options are inappropriate and should not be used.
Which Approach is Preferable?
Research by Gendall and Ramsay (2001) and Gendall and Davies (2003) indicates that the inclusion of a “does not apply” option leads to fewer errors than skip instructions. Some indirect evidence supporting the “does not apply” method can also be found in studies comparing what happens when respondents are presented with a list of items and asked either to (a) “select all that apply” or (b) answer “no” or “yes” to each item on the list. Such studies find that more items are endorsed with the latter procedure, which suggests that it forces people to pay greater attention to each item (Rasinski, Mingay, and Bradburn 1994; Smyth et al. 2006; Thomas and Klein 2006).
Method
Context
During their first semester, most first-year students at our university are required to enroll in a one-semester orientation course, which covers topics such as choosing a major, avoiding dangerous situations, and preventing drug and alcohol abuse. In the last class session of that course, an anonymous paper evaluation form is administered that asks how much was learned about each topic. In the fall 2012 semester, 858 first-year students were enrolled at the university, 800 took the first-year experience course, and 551 completed the orientation course evaluation form.
At the time, the university was revising its core curriculum, and questions had been raised about the value of the orientation course. To gauge how this nonacademic course compared with the academic courses, we added to the orientation evaluation form a question asking the student to indicate how much she or he had learned in the academic courses taken during the same first semester, using the same response format as for the non-academic topics addressed in the orientation course. The new question was phrased as follows: “The list below contains the subject matter of courses that you may have taken this semester. For any course that you took, please indicate how much you learned in that course. If you did not take a course in that subject, please leave that item blank.” Seventeen academic courses were listed as rows, followed by four columns containing the 1-through-4 rating scale with the following respective verbal anchors: almost nothing, a little, quite a lot, a great deal. For all questions, the respondents were instructed to “Please circle the number corresponding to your answer.”
Results and Discussion
We were initially delighted to observe that, in terms of how much was learned, the orientation topics fared very well relative to the regular academic courses. However, certain features of the data were puzzling. To begin with, the academic courses that received the higher ratings were those with the larger numbers of respondents. Second, the number of students who provided a rating for many of the courses appeared too large.
We therefore checked the registrar’s records to determine how many of the 800 students enrolled in the orientation course took each of the academic courses on the list. We then calculated the direction and magnitude of the discrepancy between the number of respondents rating a course and the course census by subtracting the census figure from the respondent figure. Positive numbers indicate that the number of respondents was larger than the census, whereas negative numbers indicate that the number of respondents was smaller than the census. Given that only 551 of the 800 students taking the orientation course completed the course evaluation form, one would have expected all the discrepancies to be negative. In actuality, however, the majority of the discrepancies were positive, which means that too many students provided a rating of the academic courses given the number enrolled in those courses. Table 1 provides the details of this pattern. Several features of this table are noteworthy.
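As a rough illustration of this check, the short Python sketch below computes the same respondent-minus-census discrepancy for a few hypothetical courses; the course names and counts are placeholders of ours, not the figures reported in Table 1.

    # Minimal sketch of the discrepancy check described above; the course
    # names and counts are hypothetical stand-ins, not the Table 1 figures.

    # number of evaluation-form respondents who rated each academic course
    respondents = {"Course A": 210, "Course B": 145, "Course C": 95}

    # registrar's census: orientation-course students who actually took
    # each academic course
    census = {"Course A": 180, "Course B": 160, "Course C": 60}

    # discrepancy = respondents - census; a positive value means more
    # ratings were received than there were students who took the course
    for course in respondents:
        discrepancy = respondents[course] - census[course]
        print(f"{course}: discrepancy = {discrepancy:+d}")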
First, as the course census number increases, the discrepancy between the number of respondents and the census number gets smaller; in terms of a Pearson correlation coefficient, the relationship equals r = –0.96. In other words, the number of excess respondents is lower in the courses with large enrollments than in the courses with small enrollments.
Second, inspection of the pattern between the mean rating column and the discrepancy column likewise suggests an inverse relationship. With two exceptions, as the excess number of respondents goes down, the average course rating increases. The Pearson correlation between the average course rating and the discrepancy is r = –0.61.
Third, when the size of the discrepancy is correlated with the percentage of respondents selecting each of the four answer options, the correlation between “almost nothing” and the discrepancy is positive (r = 0.53), whereas the correlations between the discrepancy and the remaining three answer options are negative (a little: r = –0.15; quite a lot: r = –0.13; a great deal: r = –0.71). This suggests that most of the respondents who constituted the excess selected the “learned almost nothing” option.
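For readers who wish to run the same kind of check on their own data, the Python sketch below computes Pearson correlations of the discrepancy with the mean rating and with the percentage choosing “almost nothing.” The per-course numbers are hypothetical placeholders, not the Table 1 values, and are arranged only to mirror the direction of the relationships described above.

    from math import sqrt

    def pearson(x, y):
        """Plain Pearson correlation between two equal-length sequences."""
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
        sx = sqrt(sum((a - mx) ** 2 for a in x))
        sy = sqrt(sum((b - my) ** 2 for b in y))
        return cov / (sx * sy)

    # Hypothetical per-course values, one entry per listed course; the real
    # figures are those summarized in Table 1.
    discrepancy = [52, 40, 31, 18, 5, -3]          # respondents minus census
    mean_rating = [1.4, 1.6, 1.9, 2.3, 2.8, 3.1]   # average 1-4 rating
    pct_almost_nothing = [61, 55, 48, 30, 18, 10]  # % choosing "almost nothing"

    print(pearson(discrepancy, mean_rating))         # expected to be negative
    print(pearson(discrepancy, pct_almost_nothing))  # expected to be positive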
Conclusion
The patterns in Table 1 suggest that many students answered the question about how much they learned in a given course even if they did not take the course. Apparently, the skip instructions were not followed. According to Oppenheimer, Meyvis, and Davidenko (2009), ignoring instructions is a manifestation of the tendency to put minimal effort into answering a questionnaire (“satisficing”). Not having read the instructions, many respondents apparently assumed that an answer was required for each and every course, and if they did not take a course, the most reasonable response was to state that they had learned almost nothing.
Sometimes lessons relearned are quite valuable and worth relating to others, particularly when they become memorable due to their striking nature. Assuming that skip directions at the top of a question will be read by all respondents is a mistake. Our results provide further support for Krosnick’s (1991) caution that the probability of satisficing behavior needs to be anticipated when designing a questionnaire. Less-than-optimal responding was obvious: many students failed to read critical directions about skipping items not applicable to their circumstances, which rendered the data unusable for the intended purpose. Based on this experience, we advise others intending to employ a similar format either to (1) include a “does not apply” option (i.e., “I did not take this course”) or (2) make the skip instructions much more prominent by using attention-drawing features such as bold letters, capital letters, underlining, and arrows (as recommended by C. Redline et al. 1999).
Acknowledgement
The authors wish to thank Michael Dillon, PhD, JD, for first noticing the anomalies in the students’ feedback.