Background
Surveys are widely used in higher education (Kuh and Ikenberry 2009; Porter 2004), and alumni surveys have become an important tool for programmatic and institutional assessment. Unfortunately, alumni surveys often have low response rates because of outdated contact information and other factors such as suspicion that the survey is a solicitation for donations or diminished loyalty after graduation (Smith and Bers 1987). Yet even with relatively few respondents, institutions may be able to glean information on respondents' important concerns from the qualitative data that open-ended survey questions generate (Geer 1991; Krosnick 1999). Although these qualitative data benefit those who collect them, a widely recognized disadvantage of open-ended questions is the heavy burden they place on respondents (Dillman 2007). Existing research suggests that open-ended questions have much higher rates of item nonresponse than other types of survey items (Millar and Dillman 2012). A further concern is how well the open-ended responses that are collected represent the opinions of the entire group: are some types of respondents more likely to complete open-ended questions? Previous research has shown that some personal characteristics, such as language fluency and positive affect (Wallis 2012), can increase the likelihood of responding to open-ended questions. Survey mode can also play a role in nonresponse on open-ended items, and research suggests that for online surveys there may be differences in nonresponse across types of devices (Lambert and Miller 2014; Peytchev and Hill 2009). The purpose of this study is to explore whether respondents with certain demographic and personal characteristics, including gender, age, cohort, number of children, marital status, citizenship, race, current employment status, income, and institutional satisfaction, are more or less likely to respond to open-ended questions placed at the beginning, middle, and end of an online alumni survey.
Method
Participants
The data used for this study were from the 2011 administration of the Strategic National Arts Alumni Project (SNAAP). SNAAP is a multi-institution online alumni survey designed to gather information about arts education. The participants were 33,801 alumni from 57 institutions: arts high schools, undergraduate and graduate arts colleges, and arts programs within larger universities. Participating institutions provided the researchers with population information, including name, email address, phone number, mailing address, degree level, cohort (year of graduation), and major/arts field. All alumni with email addresses were invited to participate. No more than five contact messages (an initial email invitation plus up to four reminder emails) were sent to alumni; the data were collected from September 2011 to November 2011. Of those who responded, 2,606 were high school alumni (8 percent), 23,607 were undergraduate alumni (70 percent), and 7,588 were graduate alumni (22 percent). Of these alumni, 38 percent were male, 62 percent female, and 0.2 percent transgender. The majority of alumni (87 percent) reported their ethnicity as Caucasian. The overall response rate was 18 percent, derived by dividing the total number of respondents by the total number of alumni contacted (minus undeliverable emails). The average institutional response rate was 21 percent, derived by calculating the response rate for each institution and averaging those rates. Because these analyses compared responses to questions at the beginning, middle, and end of the survey, only those who completed the entire survey (i.e., did not drop out before reaching the end) were included, in order to prevent bias from partial respondents. This lowered the eligible number to 27,212. The characteristics of these respondents remained consistent with those of the entire sample. The average completion time for those who finished the survey was 28 minutes.
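As an illustration of the two response-rate definitions described above, the following Python sketch computes both an overall rate and an average institutional rate. The per-institution counts shown are hypothetical, since institution-level contact totals are not reported here.

```python
# Sketch of the two response-rate definitions described above, using
# hypothetical per-institution counts (actual contact totals are not reported here).
respondents = {"A": 500, "B": 300, "C": 120}    # completed surveys per institution
contacted = {"A": 2400, "B": 1800, "C": 700}    # invitations sent, minus undeliverable emails

# Overall response rate: total respondents divided by total contactable alumni
overall_rate = sum(respondents.values()) / sum(contacted.values())

# Average institutional response rate: mean of each institution's own rate
per_institution = [respondents[k] / contacted[k] for k in respondents]
avg_institutional_rate = sum(per_institution) / len(per_institution)

print(f"Overall: {overall_rate:.0%}, average institutional: {avg_institutional_rate:.0%}")
```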
Materials
The measures were questions included in a larger survey administered to participants online. Participants were emailed an invitation including a link to the survey. Participants could log in multiple times, so they were not constrained to complete all questions during a single session. Participants were not required to answer any of the items; therefore, they could advance through the survey even if they did not respond to individual items throughout the instrument.
Three open-ended items were included in the analyses, selected on the basis of their placement within the survey instrument. (SNAAP contains 11 open-ended items overall.) One item was selected from near the beginning of the survey (the 17th of 82 total questions), one from the middle (the 44th of 82), and one from near the end (the 80th of 82). The near-beginning item asked respondents whether there was anything their institution could have done better to prepare them for further education or career; the middle item asked them to describe how their arts training is or is not relevant to their current work; and the near-end item asked them to describe any additional information about their education, life, and/or career that was not adequately covered in the survey. From each of these questions, a binary variable was created based on whether or not the respondent provided an answer. To be classified as providing an answer, the respondent had to enter at least one character in the accompanying text box.
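The coding rule described above, under which any entry of at least one character counts as an answer, can be sketched in Python with pandas as follows; the column names and example responses are hypothetical, not the actual SNAAP export.

```python
import pandas as pd

# Hypothetical raw export: one free-text column per open-ended item
# (column names and responses are assumptions for illustration).
df = pd.DataFrame({
    "q17_prepare_better": ["More business courses", "", None],
    "q44_training_relevance": ["Directly relevant", "Not relevant", ""],
    "q80_anything_else": [None, "", "Loved my time there"],
})

# Coding rule from the text: a response counts if at least one character was entered.
for col in ["q17_prepare_better", "q44_training_relevance", "q80_anything_else"]:
    df[col + "_answered"] = df[col].fillna("").str.len().ge(1).astype(int)

print(df.filter(like="_answered").mean())  # share of respondents answering each item
```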
To compare the characteristics of those who provided responses with those who did not, the demographic and personal variables included gender, age group, graduation cohort, number of children, marital status, citizenship, race/ethnicity, current employment status, income, and institutional satisfaction. Citizenship (i.e., whether or not the respondent was a US citizen) was a binary variable. Age, graduation cohort, and number of children were ordinal variables containing recoded group ranges. Race/ethnicity was a “check all that apply” question and was therefore represented as seven binary race/ethnicity variables. Gender, marital status, and current employment status were categorical variables with three, four, and seven response options, respectively. Income was an ordinal measure recoded to the midpoints of the reported ranges; overall institutional satisfaction was also ordinal, measured on a four-point scale from “Poor” to “Excellent.” For a complete list of items and response options, see the Appendix.
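A minimal sketch of this predictor recoding is given below, again using pandas; the bracket labels, category values, and midpoint values are illustrative assumptions rather than the actual SNAAP codings.

```python
import pandas as pd

# Hypothetical predictor recoding consistent with the variable descriptions above;
# bracket labels, category values, and midpoints are illustrative assumptions.
df = pd.DataFrame({
    "citizen": ["Yes", "No", "Yes"],
    "income_bracket": ["$30,000-$39,999", "$40,000-$49,999", "$30,000-$39,999"],
    "race": ["White;Asian", "Black", "White"],
})

# Citizenship as a binary indicator
df["us_citizen"] = (df["citizen"] == "Yes").astype(int)

# Income recoded to the midpoint of each reported range
midpoints = {"$30,000-$39,999": 35_000, "$40,000-$49,999": 45_000}
df["income_midpoint"] = df["income_bracket"].map(midpoints)

# "Check all that apply" race item split into one binary variable per option
df = df.join(df["race"].str.get_dummies(sep=";").add_prefix("race_"))
```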
Analyses
A series of fourteen chi-squared analyses was conducted for each of the three open-ended question binary variables: one analysis each for gender, age group, graduation cohort, number of children, marital status, citizenship, and current employment status, and one for each of the seven race/ethnicity indicators. Three independent samples t-tests compared institutional satisfaction across each of the open-ended question binary variables. Three nonparametric Mann-Whitney U tests were used for the income comparisons, because this variable was recoded to range midpoints and its skewed distribution violated the assumptions of the independent samples t-test.
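The three types of tests can be illustrated with the following SciPy sketch; the data shown are placeholder values, and the actual analyses were run on the full respondent-level dataset.

```python
import numpy as np
from scipy import stats

# Placeholder data: a 0/1 "answered" indicator plus one categorical and two numeric predictors.
answered = np.array([1, 1, 0, 1, 0, 0, 1, 1])
gender = np.array(["F", "F", "M", "F", "M", "M", "F", "M"])
satisfaction = np.array([3, 4, 2, 4, 3, 2, 4, 1])   # 1 = Poor ... 4 = Excellent
income_midpoint = np.array([35000, 45000, 55000, 25000, 65000, 75000, 35000, 45000])

# Chi-squared test of independence: categorical predictor vs. answered / not answered
table = np.array([[np.sum((gender == g) & (answered == a)) for a in (0, 1)]
                  for g in ("F", "M")])
chi2, p_chi2, dof, expected = stats.chi2_contingency(table)

# Independent-samples t-test: satisfaction of answerers vs. non-answerers
t, p_t = stats.ttest_ind(satisfaction[answered == 1], satisfaction[answered == 0])

# Mann-Whitney U test for the skewed income midpoints
u, p_u = stats.mannwhitneyu(income_midpoint[answered == 1], income_midpoint[answered == 0])
```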
Results
Descriptive Statistics
The percentages of respondents answering the open-ended questions were much higher for the near-beginning and middle questions than for the near-end item, keeping in mind that only those who reached the end of the survey are included in this analysis. For the near-beginning question, 68 percent of respondents provided an answer; for the middle question, 79 percent; and for the near-end question, 24 percent.
Chi-Squared Analyses
When comparing by gender, the results indicated that females were significantly more likely to answer the near-beginning and middle questions, but there were no significant differences for the near-end question (see Table 1 for χ2 values). For age, groups over 50 were significantly more likely than their younger counterparts to answer all three questions. A similar pattern occurred for graduation cohort, with those graduating in or before 1990 being significantly more likely to answer all three questions. For marital status, those who were single were significantly less likely to answer all three items, which also relates to age, as many of those who are single are also younger. For number of children, those with no children under 18 dependent on them for support were more likely to answer all three questions. For current employment status, those who were unemployed and looking for work, were retired, or selected “other” (and had the opportunity to supply an answer in a corresponding “other” text box) were more likely to answer all three open-ended items. Those who reported they were US citizens when attending their institutions were also more likely to answer all three questions.
Some different patterns occur when looking at the binary race variables. White/Caucasian individuals were more likely to answer the middle item, while Black individuals were more likely to answer the near-beginning item. Furthermore, American Indians were more likely to answer the near-beginning and near-end items, but not the middle item. Asian individuals were consistently less likely to answer all three items, while interestingly those who selected the “other” race response option (some of whom also wrote in the “other” text box) were consistently more likely to answer all three items. No significant differences were found for Hispanic or Native Hawaiian respondents.
Means and Other Ordinal Comparisons
The results of the independent samples t-tests showed that those who answered the near-beginning and near-end questions were significantly less satisfied with their overall institutional experience (see Table 2 for test statistics). For income (recoded into range midpoints), the Mann-Whitney U tests indicated that those who answered the open-ended questions had significantly lower income than those who did not, a pattern consistent across all three questions (see Table 3 for test statistics).
Discussion
There are several potential explanations for the patterns found in the results, many of which are consistent with previous research and survey methodology knowledge. Completing open-ended items requires more time and mental effort than most closed-ended questions (Dillman 2007); thus, it is not surprising that those who have no dependents, are retired or unemployed, or are older are more likely to provide open-ended responses. The time burden seems to fall more heavily on some groups than others. Survey designers must therefore choose open-ended questions wisely if they wish for as many types of respondents as possible to complete them. They should also consider placing the most important open-ended questions toward the beginning, as these items received more responses than those near the end, although they should continue to avoid placing an open-ended question as the very first item in the survey, which can result in survey abandonment (Dillman 2007).
Another explanation for the pattern of results may be that those with negative feelings are more likely to voice their opinions as comments in the open-ended items, using them as a platform for their complaints. This negativity bias has been found in research on workplace environments (Poncheri et al. 2007), and it may explain why those who are unemployed and looking for work are more likely to respond to these items. These alumni in particular might be frustrated with their situation and feel that their institution, which granted their degree, should shoulder some of the responsibility. Furthermore, those who provided open-ended responses had significantly lower incomes and were significantly less satisfied with their institutional experience than those who left the questions blank. It seems that disgruntled alumni are more willing to spend the time and effort to provide a response to the open-ended questions. While this might initially seem like an unwanted result, it may be that these alumni have the best insight into improvements to curriculum and programming. However, given the potential negativity bias, survey researchers need to carefully craft the stems of open-ended questions to elicit responses of both valences.
A third and quite interesting pattern concerned the use of the “other” response option: individuals who preferred to describe themselves as “other” were also more likely to respond to open-ended questions throughout the survey. A cursory review of the open-text boxes that accompany the “other” employment and race options shows a considerable number of responses that actually fall into one of the existing categories, but the respondents chose to elaborate on themselves. For instance, some respondents reported their “other” race as “Caucasian/American Indian,” even though the race/ethnicity question was in a check-all-that-apply format and they could simply have checked both of those response options. It seems that respondents who choose to identify themselves as “other,” thinking of themselves as unique individuals, are more likely to provide responses to open-ended questions. Perhaps they are simply more verbose, or they have a disposition that resists the confinement of categorization. It is possible that this “other” preference is a distinct response style, and more qualitative analysis of the “other” text boxes is needed.
Although this study has several strengths, some limitations should be noted. Given the data collection procedures and response rates, the sample may not be representative of all arts alumni, or of alumni in general, and caution should be exercised when generalizing. More sophisticated analyses, such as logistic regression, could shed further light on which combinations of characteristics predict the likelihood of response in larger, more diverse samples. One particular source of bias may be that alumni who are either extremely dissatisfied or extremely satisfied are more likely to respond to the survey itself, although follow-up studies on SNAAP pilot tests indicate that these differences are negligible (Kennedy, Tepper, and Lambert 2010). There may also be differences between partial and full completers, although our sample did not differ substantially on key characteristics.
Overall, the findings suggest that while a great deal of information can be gained from open-ended survey questions, some groups are more likely than others to provide responses, and this should be kept in mind when designing future surveys and interpreting qualitative survey results. More research is also needed to explore the influence of question placement and issues such as type of device on whether particular groups respond to open-ended questions, as well as the personal and environmental influences that may contribute to this “other” response style.