Effect of a Pre-Paid Incentive on Response Rates to an Address-Based Sampling (ABS) Web-Mail Survey

Z. Tuba Suzer-Gurtekin Institute for Social Research, University of Michigan

Mahmoud Elkasabi ICF International

Mingnan Liu SurveyMonkey

James M. Lepkowski Institute for Social Research, University of Michigan

Richard Curtin Institute for Social Research, University of Michigan

Rebecca McBee Institute for Social Research, University of Michigan


The effects of incentives on response rates, web completion rates, and demographic and socioeconomic characteristics were investigated in a randomized experiment embedded in a general population economic attitude survey. The experiment was conducted in two groups: (1) fresh cases drawn from address-based sampling (ABS) postal addresses and (2) recontact cases that had been interviewed six months earlier. The prepaid cash incentive condition was crossed with a set of Web-Mail survey contact strategies. The fresh cases were contacted by Web-Mail concurrent or web-intensive contact strategies, while the recontact cases were contacted by mail, concurrent, or web-intensive contact strategies. Overall, the findings showed that incentives increased average response propensity without changing the covariance component of the nonresponse bias, based on comparisons of a set of demographic and socioeconomic characteristics across incentive conditions. While the prepaid cash incentives significantly increased web completion rates under the web-intensive contact strategy for the fresh sample, they had the reverse effect under the concurrent strategy for the recontact sample.


The University of Michigan’s Surveys of Consumers (SCA) tested the effect of a set of design variations on response rates in address-based sampling (ABS) mail and Web-Mail surveys in monthly experiments between March 2011 and April 2012. In this paper, we report findings from a selected set of experiments focusing on the effect of prepaid cash incentives in the ABS Web-Mail surveys for a general population. The selected set of findings includes response rates, proportion of completed interviews by web (web completion rates), and demographic and socioeconomic characteristics of respondents by the experimental groups.



Study Design

The SCA is a monthly telephone survey that follows a rotating panel design (Curtin 1982). Each monthly sample is composed of 300 fresh and 200 recontact cases. The SCA targets household heads aged 18 years and older in the contiguous United States. In addition to the random digit dialing (RDD) telephone interviews, postal addresses from the U.S. Postal Service Computerized Delivery Sequence File were contacted in a set of monthly mail survey experiments. Further information on the SCA can be found at https://data.sca.isr.umich.edu/.

In September and October 2011, monthly fresh samples of 1,500 postal addresses were randomized to concurrent and web-intensive contact strategies. In these two Web-Mail contact strategies, respondents were allowed to complete either a web or a paper questionnaire. In addition to the fresh samples, a total of 507 respondents who had completed the interview six months earlier were randomly assigned to be recontacted by one of three strategies: mail, Web-Mail concurrent, or Web-Mail web-intensive. Under the assumption that monthly differences are unrelated to the outcomes of interest and the contact strategies, the prepaid incentive and no incentive conditions were assigned to the September and October samples, respectively (see Figure 1).

Figure 1 Study flow diagram.


In both the mail-only and the Web-Mail designs, the mailing protocol consisted of five mailings sent seven days apart: (1) an advance letter, (2) a paper questionnaire package, (3) a nonresponse follow-up postcard reminder, (4) a nonresponse follow-up replacement paper questionnaire package, and (5) a second nonresponse follow-up postcard reminder. In the Web-Mail design, the invitation package included a letter with a Uniform Resource Locator (URL) and a custom ID that sample units could use to complete the web survey if they chose to do so. In the concurrent design, the web survey invitation was included with the paper questionnaire package; in the web-intensive design, the web response option was included in the advance letter. For the prepaid cash incentive group, a 5 USD cash incentive was included in the first mailing that contained a web survey and/or paper questionnaire invitation. The advance letter in the web-intensive contact strategy mentioned a forthcoming paper questionnaire.

Analysis Plan

Between the incentive and no incentive groups, we compared the following: response rates, web completion rates, and demographic and socioeconomic characteristics. The comparisons included the differences by the contact strategy for both fresh and recontact cases. In addition, we explored the differences between the incentive and no incentive groups, specifically for the recontact cases, in terms of the SCA baseline scores.


Results

Table 1 reports the response rates by experimental group. The incentives yielded higher response rates (Wald chi-square [df=1]=40.1, p<0.0001), controlling for fresh and recontact case status. The difference in the incentive effect between fresh and recontact cases was not significant. We also tested the combined effect of incentives and contact strategy within each group (results not shown here); these effects were not significant.

Table 1 Response rates (RR2) by randomized group, 2011 SCA ABS Web-Mail survey data.

Sample Incentive No incentive
Fresh 34.5% 17.4%
Recontact 70.1% 50.3%
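The response rates in Table 1 are AAPOR RR2 rates, which count partial interviews as responses in the numerator. A minimal sketch of the calculation, using hypothetical disposition counts (the study's actual case dispositions are not reported here):

```python
def rr2(complete, partial, refusal, noncontact, other, unknown):
    """AAPOR Response Rate 2: completes plus partials, divided by all
    eligible cases plus cases of unknown eligibility."""
    numerator = complete + partial
    denominator = complete + partial + refusal + noncontact + other + unknown
    return numerator / denominator

# Hypothetical dispositions for a monthly fresh sample of 1,500 addresses
print(round(rr2(complete=480, partial=38, refusal=120,
                noncontact=700, other=62, unknown=100), 3))
```

In an ABS frame, addresses confirmed undeliverable are typically treated as ineligible and excluded from the denominator, so RR2 is sensitive to how such cases are classified.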

The web completion rates by sample, contact strategy, and incentive group are shown in Table 2. For the fresh cases, incentives significantly increased the web completion rate in the web-intensive contact strategy (Wald chi-square [df=1]=6.8066, p=0.01). That is, when the incentives were offered, respondents in the fresh sample were more likely to respond by web. Although the direction of the difference was the same for the recontact cases, it was not statistically significant, likely due to the small sample size. For the concurrent contact strategy, when the incentives were offered, respondents in the recontact sample were less likely to respond by web (Wald chi-square [df=1]=4.3730, p=0.04).

Table 2 Web completion rates by randomized group, 2011 SCA ABS Web-Mail survey data.

Sample and contact strategy Incentive No incentive
Fresh
 Concurrent 22.8% 21.8%
 Web-intensive 45.5% 31.1%
Recontact
 Concurrent 23.3% 36.8%
 Web-intensive 34.3% 30.3%
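The Wald statistics above come from models fit to the respondent-level data, which are not reproduced here. As a rough stand-in, a Pearson chi-square test on a 2×2 table of web versus paper completions by incentive group makes the same kind of comparison; the counts below are hypothetical, chosen only to mirror the 45.5% versus 31.1% contrast:

```python
def chi2_2x2(a, b, c, d):
    """Pearson chi-square statistic for a 2x2 table [[a, b], [c, d]],
    e.g., web/paper completion counts by incentive group."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Hypothetical counts: (web, paper) for incentive and no incentive groups
stat = chi2_2x2(45, 55, 31, 69)
print(round(stat, 2))
```

Against the chi-square distribution with one degree of freedom, a statistic above 3.84 is significant at the 0.05 level.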

As shown in Tables 3 and 4, the characteristics of the respondents were similar across incentive groups in both the fresh and recontact samples, except for stock ownership in the recontact group. In the recontact sample, the percentage of stock owners was significantly higher in the no incentive group.

Table 3 Characteristics of respondents by randomized group, fresh cases.

Characteristic Incentive No incentive
 Counts % Counts %
Age
 18–24 6 1.4% 3 1.3%
 25–34 54 12.5% 22 9.6%
 35–44 54 12.5% 22 9.6%
 45–54 84 19.4% 41 17.8%
 55–64 97 22.4% 72 31.3%
 65–97 138 31.9% 70 30.4%
 Missing 48 10.0% 11 4.6%
Home ownership
 Owns or is buying 385 82.4% 195 82.3%
Stock ownership
 Yes 305 66.6% 156 69.0%
Region
 West 107 22.2% 39 16.2%
 Midwest 129 26.8% 66 27.4%
 Northeast 88 18.3% 35 14.5%
 South 157 32.6% 101 41.9%
Household income
 Under $10,000 19 4.2% 11 5.0%
 $10,000–$14,999 24 5.3% 10 4.6%
 $15,000–$19,999 15 3.3% 11 5.0%
 $20,000–$29,999 33 7.3% 27 12.3%
 $30,000–$39,999 46 10.2% 18 8.2%
 $40,000–$49,999 44 9.8% 18 8.2%
 $50,000–$59,999 42 9.3% 25 11.4%
 $60,000–$74,999 62 13.7% 26 11.9%
 $75,000–$99,999 56 12.4% 22 10.0%
 $100,000–$124,999 42 9.3% 21 9.6%
 $125,000–$149,999 16 3.5% 8 3.7%
 $150,000–$174,999 20 4.4% 6 2.7%
 $175,000 or more 32 7.1% 16 7.3%
 Missing 30 6.2% 22 9.1%

Table 4 Characteristics of respondents by randomized group, recontact cases.

Characteristic Incentive No incentive
 Counts % Counts %
Age
 18–24 0 0.0% 1 0.4%
 25–34 14 4.2% 11 4.6%
 35–44 41 12.2% 24 10.1%
 45–54 71 21.1% 49 20.6%
 55–64 82 24.4% 68 28.6%
 65–97 128 38.1% 85 35.7%
 Missing 14 4.0% 9 3.6%
Home ownership
 Owns or is buying 296 86.8% 213 87.3%
Stock ownership*
 Yes 209 63.5% 171 72.5%
Region
 West 66 18.9% 50 20.2%
 Midwest 85 24.3% 70 28.3%
 Northeast 78 22.3% 50 20.2%
 South 121 34.6% 77 31.2%
Household income
 Under $10,000 14 4.3% 7 3.0%
 $10,000–$14,999 9 2.8% 7 3.0%
 $15,000–$19,999 23 7.1% 15 6.4%
 $20,000–$29,999 34 10.5% 25 10.7%
 $30,000–$39,999 24 7.4% 26 11.2%
 $40,000–$49,999 37 11.4% 24 10.3%
 $50,000–$59,999 27 8.3% 21 9.0%
 $60,000–$74,999 41 12.6% 27 11.6%
 $75,000–$99,999 39 12.0% 20 8.6%
 $100,000–$124,999 36 11.1% 19 8.2%
 $125,000–$149,999 11 3.4% 22 9.4%
 $150,000–$174,999 8 2.5% 6 2.6%
 $175,000 or more 22 6.8% 14 6.0%
 Missing 25 7.1% 14 5.7%

*Significant at p=0.05.

Table 5 shows the baseline scores from the first interview for the recontact cases. It includes the three leading indices that the SCA reports monthly. The Index of Consumer Sentiment is composed of five survey items: (1) current personal finances, (2) expected personal finances, (3) expected business conditions in 12 months, (4) expected business conditions in 5 years, and (5) current buying conditions. The current index includes items 1 and 5, and the expected index includes items 2, 3, and 4. Pairwise comparisons of the index scores by incentive group were not significant.

Table 5 Baseline measures by randomized group, recontact cases.

Index Incentive No incentive
Index of Consumer Sentiment 69.8 73.9
Current index 76.8 81.5
Expected index 65.3 69.0
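The index construction described above can be sketched as follows: each item is converted to a relative score (percent giving favorable replies minus percent giving unfavorable replies, plus 100), the relative scores are summed, and the sum is rescaled to the 1966 base period. The scaling constants below follow the SCA's published formulas, but should be verified against the SCA documentation before reuse:

```python
def relative_score(pct_favorable, pct_unfavorable):
    """SCA relative score for one item: % favorable - % unfavorable + 100."""
    return pct_favorable - pct_unfavorable + 100

def ics(x1, x2, x3, x4, x5):
    """Index of Consumer Sentiment from the five relative scores;
    the constants rescale to the 1966 base period."""
    return (x1 + x2 + x3 + x4 + x5) / 6.7558 + 2.0

def current_index(x1, x5):
    """Current index: items 1 (personal finances) and 5 (buying conditions)."""
    return (x1 + x5) / 2.6424 + 2.0

def expected_index(x2, x3, x4):
    """Expected index: items 2, 3, and 4 (the expectations items)."""
    return (x2 + x3 + x4) / 4.1134 + 2.0

# All-neutral relative scores of 100 place the index near its base-period level
print(round(ics(100, 100, 100, 100, 100), 1))
```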


Discussion

These experiments allowed the principal investigator to test the mailing protocol and the package for a general population, in addition to testing the effect of the incentives. Overall, the prepaid cash incentive increased the response rates in this study, in line with the previous literature (Church 1993). Additional analyses offered further information on the potential magnitude of the change in nonresponse bias for a general population survey on economic attitudes.

According to the leverage-saliency theory of survey participation, the decision to participate depends on the importance and salience of each survey feature and the direction of its influence (Groves et al. 2000). Moreover, the importance, salience, and direction of influence of a feature may differ across people. That is, incentives could motivate people with certain characteristics to respond at disproportionately higher rates. The higher the covariance between the survey measures and the response propensities, the higher the nonresponse bias (Groves et al. 2004, 2006, 2009). For example, incentives that disproportionately attract poorer people could be a concern in a survey of economic attitudes, as this could imply a higher covariance between the economic attitudes and the response propensities, and consequently a higher nonresponse bias. This study used a set of socioeconomic and demographic measures to examine differences in respondent characteristics by incentive group. For the fresh sample, the socioeconomic and demographic characteristics were comparable. For the recontact cases, the significant difference in stock ownership warrants further research, but the baseline scores did not suggest any differences in nonresponse bias by incentive group. Overall, we concluded that the incentives increased the average response propensity without substantially changing the covariance component of the nonresponse bias.
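The covariance component referenced above is commonly written (e.g., in Groves 2006) as the covariance between each person's response propensity p_i and survey variable y_i, scaled by the mean propensity:

```latex
\mathrm{Bias}(\bar{y}_r) \approx \frac{\sigma_{yp}}{\bar{p}}
  = \frac{\mathrm{Cov}(y_i, p_i)}{\bar{p}}
```

Under this expression, an incentive that raises the average propensity without changing the covariance does not increase, and may reduce, the nonresponse bias.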

For the fresh cases, the incentives did increase the web completion rates in the web-intensive contact strategy. On the other hand, for the recontact cases, the concurrent contact strategy yielded a lower web completion rate in the incentive group. This finding should be interpreted with caution due to the small sample sizes, and future research should investigate its replicability. One possible explanation is that incentives urge participants to respond promptly using the response mode at hand.


References

Church, A.H. 1993. Estimating the effect of incentives on mail survey response rates: a meta-analysis. Public Opinion Quarterly 57(1): 62–79. Available at http://poq.oxfordjournals.org/content/57/1/62.full.pdf+html.
Curtin, R. 1982. Indicators of consumer behavior: the University of Michigan surveys of consumers. Public Opinion Quarterly 46(3): 340–352. Available at http://poq.oxfordjournals.org/content/46/3/340.full.pdf+html.
Groves, R.M., E. Singer and A. Corning. 2000. Leverage-saliency theory of survey participation: description and an illustration. Public Opinion Quarterly 64(3): 299–308. Available at http://poq.oxfordjournals.org/cgi/reprint/64/3/299.pdf.
Groves, R.M., S. Presser and S. Dipko. 2004. The role of topic interest in survey participation decisions. Public Opinion Quarterly 68(1): 2–31. Available at http://poq.oxfordjournals.org/content/68/1/2.full.pdf+html.
Groves, R.M., M.P. Couper, S. Presser, E. Singer, R. Tourangeau, G.P. Acosta and L. Nelson. 2006. Experiments in producing nonresponse bias. Public Opinion Quarterly 70(5): 720–736. Available at http://poq.oxfordjournals.org/content/70/5/720.full.pdf+html.
Groves, R.M., F.J. Fowler, M.P. Couper, J.M. Lepkowski, E. Singer and R. Tourangeau. 2009. Survey methodology (2nd ed.). Wiley Series in Survey Methodology, Hoboken, NJ.
