Introduction
When designing a survey incentive strategy, two fundamental questions must be resolved: (1) what type and value of incentive will be offered, and (2) when will the incentive be provided? We conducted a set of experiments varying each of these factors to determine the best approach for a sample of public school principals. These experiments were embedded in the second wave of The National College Ready Survey (NCRS), a survey sponsored by the Bill & Melinda Gates Foundation. The survey was administered by web and took respondents 15 minutes on average to complete.
Research has shown that incentives can increase response rates, though effectiveness can be limited by factors including incentive type, amount, timing, sample composition, mode of administration, and demands of the request (Singer and Ye 2013). The default incentive condition for our experiment used online gift cards delivered as a post-response payment. The experiments compared the relative effectiveness of three alternative conditions: (1) offering a larger gift card amount upon early completion, (2) providing an additional gift card at initial contact prior to completion, and (3) providing an additional gift card to nonrespondents to encourage completion.
Experiment Purpose and Related Work
The NCRS targets a busy, and therefore challenging-to-interview, population of public school principals. Providing meaningful incentives demonstrates that researchers understand the competing demands on their time and value their input. Research suggests that monetary incentives boost response rates to a higher degree than do lotteries, and prepayment of monetary incentives can be particularly advantageous (Gajic, Cameron, and Hurley 2012; Halpern et al. 2011). However, providing incentives in advance of participation introduces the risk of paying sample members who never participate. Another option is offering an “early bird” incentive, where completing the survey early results in a larger payment (LeClere et al. 2012). A final option we considered was the use of nonresponse conversion payments, which have also been found to increase survey participation, though they raise ethical concerns about fairness to participants (Singer and Ye 2013). To assess which option would be most effective for increasing participation, we designed three experimental conditions.
Experiments
We undertook three experiments to test the effect of different incentive conditions relative to a control condition, in which sample members were offered a $50 Amazon gift card code via email to be received after completing the survey.
- Differential incentive experiment for “early response”: Sample members were promised a $100 gift card for completing in the first three weeks, compared to the $50 gift card they would receive for completing the survey after the three-week period (the same amount offered to the control group).
- Pre-paid incentive experiment: This group received a $25 pre-paid gift card code with their email invitation, in addition to the offer of the standard $50 gift card to be received upon completion.
- Nonresponse incentive experiment: The nonresponse follow-up group received a $25 pre-paid gift card code if classified as nonrespondents when follow-up began.
Methods
The experimental sample was randomly split into three equal-sized groups, one of which had two subgroups. Group 1 was assigned the additional $50 early response incentive, and group 2 was assigned the $25 pre-paid incentive. The remainder of the experimental sample was assigned to group 3 and served as the comparison group for groups 1 and 2. In addition, group 3 was randomly split into two groups: group 3a was sent a $25 nonresponse follow-up gift card if applicable, while group 3b was offered only the default $50 post-response incentive and served as the comparison to group 3a. All groups were offered the default $50 post-response incentive to be received upon responding to the survey. Table 1 shows which groups served as the treatment and comparison groups for each incentive test. Group 3 as a whole served as the no-treatment comparison to groups 1 and 2, while group 3b served as the no-treatment comparison to group 3a.
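For concreteness, the assignment scheme can be sketched in a few lines of code. The following Python snippet is purely illustrative (the frame of principal IDs, the seed, and the function name are hypothetical, and the actual assignment was carried out as part of the NCRS sampling process); it simply reproduces the group structure described above.

```python
import random

def assign_groups(principal_ids, group_size=560, seed=2014):
    """Illustrative random assignment into the four incentive groups.

    Group 1: early response incentive; Group 2: pre-paid incentive;
    Group 3a: nonresponse conversion incentive; Group 3b: default control.
    """
    rng = random.Random(seed)
    shuffled = principal_ids[:]
    rng.shuffle(shuffled)

    group1 = shuffled[:group_size]
    group2 = shuffled[group_size:2 * group_size]
    group3 = shuffled[2 * group_size:3 * group_size]
    # Group 3 is split in half for the nonresponse conversion test.
    group3a = group3[:group_size // 2]
    group3b = group3[group_size // 2:]
    return group1, group2, group3a, group3b

# Hypothetical frame of 1,680 principal IDs (3 x 560).
frame = [f"principal_{i:04d}" for i in range(1680)]
g1, g2, g3a, g3b = assign_groups(frame)
print(len(g1), len(g2), len(g3a), len(g3b))  # 560 560 280 280
```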
To test differences in response rates, chi-squared tests of association were used to determine whether a relationship existed between incentive condition and response. When comparing the mean number of days until response, independent-samples t-tests were run, with equal variances assumed.[1] Tests with p-values less than 0.05 were considered statistically significant, while those between 0.05 and 0.10 were considered marginally significant. All analyses were run unweighted using SAS 9.3® software (SAS Institute, Cary, NC). All response rates were calculated using the American Association for Public Opinion Research (AAPOR) Response Rate 2 (RR2) unless otherwise noted.
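All analyses were run in SAS, but the underlying calculations are straightforward. As a minimal sketch in Python, assuming hypothetical disposition counts and response times (the RR2 and RR6 functions paraphrase the AAPOR Standard Definitions and are not the production code used for the NCRS), the response rates and significance tests could be computed as follows.

```python
from scipy import stats

def aapor_rr2(complete, partial, refused, noncontact, other, unknown):
    """AAPOR Response Rate 2: completes plus partials divided by all eligible
    cases plus cases of unknown eligibility (a paraphrase of the AAPOR
    Standard Definitions)."""
    numerator = complete + partial
    return numerator / (numerator + refused + noncontact + other + unknown)

def aapor_rr6(complete, partial, refused, noncontact, other):
    """AAPOR Response Rate 6: as RR2, but cases of unknown eligibility are
    excluded from the denominator (treated as ineligible)."""
    numerator = complete + partial
    return numerator / (numerator + refused + noncontact + other)

# Hypothetical 2 x 2 table of respondents and nonrespondents by condition.
table = [[150, 410],   # treatment group
         [110, 450]]   # comparison group
chi2, p_value, dof, expected = stats.chi2_contingency(table, correction=False)

# Hypothetical days-to-response by condition; equal variances assumed,
# mirroring the pooled-variance t-test described above.
days_treatment = [5, 9, 14, 20, 31]
days_control = [8, 12, 18, 25, 35]
t_stat, p_days = stats.ttest_ind(days_treatment, days_control, equal_var=True)
```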
Results
Early Response Incentive
The purpose of this early response incentive was to increase both response rates early in the field period and final response rates above those obtained by offering only the standard $50 post-response incentive. We hypothesized that the additional incentive would promote early response, increasing the response rate at the beginning of data collection over that of the standard incentive, and that this difference would be maintained after the response incentives in the treatment and control groups became equal at three weeks in the field.
Five hundred and sixty principals were randomly selected into this treatment group (group 1) and were offered the standard $50 post-response incentive plus the additional $50 post-response incentive for early response, and 560 principals were randomly selected into the control group (group 3), which would be offered only the standard $50 post-response incentive.[2] Research boards in some districts did not give approval for principals to participate in the study, or approved participation but did not allow differential incentives to be offered to principals. As a result, 37 principals in the treatment group and 40 principals in the control group could not participate in the experiment. However, if these rejections or late approvals for research activities were related to observed or unobserved district characteristics, removing these principals from the experiment could introduce bias, yielding results that may not represent the actual response to these incentives across all subgroups in the sample. To avoid introducing this bias, these principals were kept in the experiment as if they had received the treatment or control conditions. This “intent-to-treat” approach preserves the unbiased comparison created by random assignment, although treatment effects may be dampened because a portion of principals did not receive the treatment condition or (in some cases) the control condition.
The bar graph in Figure 1 shows the response rates for the treatment and control conditions at the early response cutoff date (three weeks after initial contact) and the response rates at the end of data collection.
After three weeks in the field, the treatment group had a 29.7 percent response rate, while the control group had a 20.1 percent response rate. This difference was statistically significant at the 5 percent level, indicating a clear positive effect of the early response incentive on response rates. This result confirms our hypothesis of an initial boost in response rates due to the additional $50, as compared to the standard incentive.
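As a rough check on this comparison, the respondent counts can be reconstructed from the reported rates and group sizes (29.7 and 20.1 percent of 560 principals each). The counts in the sketch below are therefore approximations rather than the exact study tabulations.

```python
from scipy.stats import chi2_contingency

# Approximate respondent counts reconstructed from the reported rates;
# the exact study counts may differ slightly due to rounding.
table = [[166, 560 - 166],   # early response incentive group at 3 weeks
         [113, 560 - 113]]   # control group at 3 weeks
chi2, p, _, _ = chi2_contingency(table, correction=False)  # Pearson chi-square
print(round(chi2, 1), round(p, 4))  # p falls well below the 0.05 threshold
```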
In contrast to the early response cutoff, there was no effect of the early response incentive at the end of data collection. While the difference was not statistically significant, the control group actually had a slightly higher response rate (56.6 percent) than the treatment group (55.4 percent). This finding does not support the second part of our hypothesis, that the initial boost in response rate in the treatment group would be maintained until the end of data collection. Figure 2 below plots response rates for the two groups across time.
Time-in-field in relation to the early response incentive cutoff is shown in two-week increments. The vertical black line highlights the end of the early response incentive period. The first time point shown is two weeks prior to the early response cutoff (one week in the field), and at this point, there appears to be an effect of the additional incentive, with the treatment group having a response rate about 4 percentage points higher than the control group. This difference increases to a maximum of 9.6 percentage points at the early response cutoff.
Because the response incentives became equal after the early response cutoff, we expected the difference in response rates at that point to hold steady until the end of data collection. However, the line graph shows that this difference starts to diminish immediately following the early response cutoff and continues to decrease until seven weeks after the early response cutoff, when the response rates of the groups become essentially equal.
Thus, while the early response incentive did increase response rates early in the field period, this effect did not hold. It is possible that the early response incentive actually became a disincentive to respond after the cutoff date. Principals in the treatment group who did not respond in time to receive the additional incentive may have been less motivated to respond knowing that they were no longer eligible to receive the additional incentive. These results indicate that while the early response incentive was effective at boosting early response rates, this effect did not persist after the additional incentive was no longer available.
Pre-paid Incentive
We tested a $25 pre-paid incentive, which was provided to randomly selected principals (group 2) in the initial contact materials. If these principals responded to the survey they also received the $50 post-response incentive, bringing their total compensation to $75. We hypothesized that principals offered the pre-paid incentive would respond at higher rates than those offered only the post-response incentive.
We found that the additional pre-paid incentive did not increase response rates above those obtained by offering the post-response incentive alone. The final response rate for the pre-paid incentive group was 54.6 percent, versus 56.6 percent for the control group (p=0.49). Thus, while the difference was not statistically significant, the post-response-incentive-only group actually had the higher response rate. We suspect the use of Amazon.com gift card codes for the incentives, rather than a check or cash, may have reduced the effectiveness of the pre-paid incentive, as principals may have mistakenly thought a response was still required before obtaining both incentives.
Nonresponse Conversion Incentive
The final experiment tested a $25 pre-paid incentive delivered during nonresponse follow-up. This incentive was offered to a random subset of principals (group 3a) who had not yet responded 12 weeks after initial contact. If a principal received this additional incentive and then responded to the survey, he or she received a total compensation of $75. The purpose of this additional incentive was to increase response rates among principals who were not convinced to respond by the standard $50 post-response incentive. For this test, we randomly split the 560 principals in the control group (group 3) into two groups; one was offered the additional $25 pre-paid gift card if they remained nonrespondents after 12 weeks, while the other was not.[3]
There was virtually no difference in the overall response rates between the treatment and control groups. The overall final response rate for principals initially eligible for the additional $25 nonresponse conversion incentive plus the $50 post-response incentive was 56.5 percent, compared to 56.7 percent for the $50 post-response incentive only group (p=0.97). Thus, no evidence exists that the $25 nonresponse conversion incentive increased overall response rates. A null finding for this test is not completely unexpected, as only a subgroup of these cases (nonrespondents as of week 12) actually received the treatment in group 3a. To estimate the impact of this incentive on those who actually received the treatment, we removed all principals who had responded or explicitly refused prior to the nonresponse follow-up emails, principals in districts that did not grant approval for the survey or incentive experimentation, and any cases deemed ineligible due to school closure. The final number of cases included in the analysis was 79 in the treatment group and 62 in the control group. The final response rate for principals offered the additional $25 nonresponse conversion incentive plus the $50 post-response incentive was 24.1 percent, compared to 12.9 percent for the $50 post-response incentive only group.[4] This seemingly large difference was only marginally significant at the 10 percent level due to the small sample size. This provides some evidence that the additional incentive was effective at converting nonrespondents. Thus, while this incentive did not increase overall response rates, it did increase response rates among the principals who were nonrespondents at the start of nonresponse follow-up and received the offer, compared to those who did not receive the offer.
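The same kind of reconstruction can be applied to this small-sample comparison, taking the counts from the reported rates and group sizes (19 of 79 and 8 of 62 respondents; the exact study tabulations may differ slightly). Under these approximate counts, the uncorrected Pearson chi-square gives a p-value just under 0.10, consistent with the marginal significance reported above; Fisher’s exact test provides a more conservative small-sample alternative.

```python
from scipy.stats import chi2_contingency, fisher_exact

# Counts reconstructed from the reported rates: 24.1% of 79 and 12.9% of 62.
table = [[19, 79 - 19],   # offered the $25 nonresponse conversion incentive
         [8, 62 - 8]]     # comparison group (post-response incentive only)
chi2, p, _, _ = chi2_contingency(table, correction=False)  # Pearson chi-square
odds_ratio, p_exact = fisher_exact(table)                  # small-sample check
```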
Discussion
Our findings suggest that the $50 post-response incentive, which served as the control condition in the experiment, is the most effective overall incentive for promoting response. All three experimental conditions increased the total potential incentive payment, yet all three failed to show gains in response rates at the end of data collection. This suggests that a substantial incentive amount, with simple requirements for redemption (i.e., completion at any point during the data collection period), is the most effective overall incentive strategy for this population using this mode.
The early response incentive did significantly increase response rates while it was available, but this effect rapidly decayed after that period. This may be the result of an after-period disincentive effect, where sample members who missed the early response period are disincentivized to respond. The overall effect of this strategy may be highly dependent on the length of time between the end of the early response period and the end of data collection. Thus, rewarding quick response with larger incentives may be effective for surveys with very short fielding periods, but less so for those with longer field periods.
The additional pre-paid incentive with the post-response incentive was no more effective than the post-response incentive alone. Previous findings have shown the effectiveness of pre-paid incentives (Gajic, Cameron, and Hurley 2012; Halpern et al. 2011), and we do not see our findings as evidence against the effectiveness of pre-paid incentives in general. Rather, we suspect that the method we used to deliver incentives, Amazon.com gift card codes, did not effectively communicate the pre-paid nature of the incentive. Given that sample members had to use a computer to receive the pre-paid incentive, the immediate impact of the incentive may have been lessened, compared to more direct pre-paid incentives (e.g., cash included with invitation materials). In addition, some sample members may have thought that response was still required to obtain the incentive, given the need to use the included web address to obtain it. Thus, the pre-paid incentive may have been perceived as a post-response incentive, thereby making it no more effective than the actual post-response incentive.
The nonresponse conversion incentive was effective among those who did not respond by 12 weeks into the field period, although the effect was only marginally significant and there was no effect for all initially selected sample members. This finding suggests the use of this method in an adaptive design strategy, where certain subgroups with low response rates could be targeted with this incentive to boost their response rates. Similarly, other metrics like R-indicators (Schouten, Cobben, and Bethlehem 2009) could be used to identify underrepresented subgroups during data collection, and this incentive could be used to improve their representation in the responding sample. However, given the small sample size for this test, further research would be needed to confirm the effectiveness of this incentive strategy.
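For readers unfamiliar with R-indicators, the basic quantity is simple: estimate a response propensity for each sample member from frame variables and compute R = 1 - 2·S(ρ), where S(ρ) is the standard deviation of the estimated propensities (Schouten, Cobben, and Bethlehem 2009). The Python sketch below, using a hypothetical frame and scikit-learn’s logistic regression, is only a bare-bones illustration of that idea; it omits the weighting and bias adjustments discussed in that literature.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def r_indicator(X, responded):
    """Sample-based R-indicator, R = 1 - 2 * S(rho_hat), where rho_hat are
    response propensities estimated from frame variables X (a sketch of the
    measure proposed by Schouten, Cobben, and Bethlehem 2009)."""
    model = LogisticRegression(max_iter=1000).fit(X, responded)
    propensities = model.predict_proba(X)[:, 1]
    return 1.0 - 2.0 * np.std(propensities, ddof=1)

# Hypothetical frame variables (e.g., school size, urbanicity code) and a
# 0/1 response indicator for a sample of 500 cases.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))
responded = rng.integers(0, 2, size=500)
print(r_indicator(X, responded))  # values near 1 indicate more even response
```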
The effectiveness of the incentives may also have been influenced by the population’s level of interest in the survey topic. Prior research has shown that the effect of an incentive may be smaller when the population is interested in the survey topic (Baumgartner and Rathbun 1997). Groves, Singer, and Corning (2000) attribute this to leverage-salience, whereby individuals weigh the characteristics of a survey differently depending on their personal experiences and beliefs. For this population, public school principals’ interest in the survey topics may have carried more weight in the decision to respond than the incentives did, diminishing the differences in the incentives’ impact.
Limitations
The ability to generalize these findings across survey domains is limited by two factors: (1) the population and (2) the mode of incentive delivery. The population included in this experiment, public school principals, may not have the same likelihood of participating in surveys or responding to incentives as members of the general population. Demands on their time and restrictive district policies may dampen the potential effect of response incentives. Thus, the incentives tested in these experiments may be more effective in general population surveys. In addition, the survey content, particularly the Common Core standards, may have been considered controversial by some principals and may therefore have increased reluctance to respond.
Both the survey and incentives were administered via the Internet, which may have resulted in a mode effect and may have dampened the effect of the incentives, particularly for principals who do not frequently use Amazon.com. In addition, sample members had to read the invitation materials to become aware of the incentives, so anyone reading only the first sentence or two may not have been aware of the incentives, thereby negating any possible effect. Gift card codes from other retailers or the use of physically-delivered incentives may have different effects.
Conclusion
We tested the effectiveness of several incentive strategies for a web survey of public school principals. We examined the effect of an early response incentive, a pre-paid incentive, and a nonresponse conversion incentive compared to the effectiveness of a post-response incentive. Overall, we found no incentive strategy more effective than the post-response incentive. However, we did find that the early response incentive was effective while it was available, making this strategy useful in some situations. In addition, we found some evidence that the nonresponse conversion incentive was effective among initial nonrespondents, and it may therefore be useful as a targeted incentive in adaptive designs for improving the representation of certain subgroups. Further research is needed on the use of incentives in the context of web surveys and electronic delivery of incentives. The findings presented here suggest that the effectiveness of different incentive strategies in this context may differ from that of similar strategies used for traditional mail surveys.
Acknowledgements
We would like to thank the Bill & Melinda Gates Foundation for support for this research.
[1] No variances were determined to differ based on the folded F test. Likewise, significance conclusions were unchanged using either the pooled variance or the Satterthwaite method.
[2] Half of these were also part of the nonresponse conversion incentive, but this offer occurred well after the three-week early incentive cutoff date.
[3] Eighteen principals in the experimental group and 22 in the control group were either in a district that did not allow its principals to be surveyed or did not allow for differential incentives.
[4] Response rate calculated using the American Association for Public Opinion Research RR6.