Our purpose in this study is rather focused. The literature on lotteries indicates that lotteries can enhance response rates. Once a lottery is adopted, the question becomes: Is it better to use multiple smaller rewards or a single larger reward?
There is some literature on the topic of variable incentives for a lottery. Deutskens et al. (2004) noted, as a secondary finding, that adults in the Netherlands responded more often (and with some indication of higher quality) when offered more, but smaller, lottery prizes. On the other hand, Göritz (2004, p. 343) notes "… raffling a few big cash prizes or a few big gifts instead of several smaller ones keeps transaction costs lower, as fewer people need to be contacted and sent their prizes." Finally, Porter and Whitcomb (2003) report that the magnitude of a lottery reward has virtually no impact on response rate for a student survey. It would seem that there is a lack of consensus on the effect of the size and number of lottery incentives.
Method
Data were collected in 2008 from 2,000 members of the American Counseling Association (ACA) as part of a larger research effort. The survey task involved responding to multiple scales related to attitudes toward battered women, for a total of 99 items in the online survey. ACA provided the researcher with two independently drawn simple random samples of its membership. The first sample was assigned to the first incentive group (N1 = 1,000) and was offered the chance to win one of four $25 Visa gift cards; the second sample was assigned to the second incentive group (N2 = 1,000) and was offered the chance to win one $100 Visa gift card. There was no obligation to participate in the survey to enter the lottery, but respondents did have to send a separate email to the researcher with contact information to be entered. Participants were sent a pre-notice email and, shortly after, a second email with an invitation to participate and a link to the survey. One week later, non-respondents were sent a third email as a reminder.
Results
Of the 2,000 email addresses, 153 proved to be invalid. After the third contact, there were 532 respondents; 290 of these were in the first group and 242 were in the second group. The overall response rate was 28.8%. Almost 80% of respondents were female; 10% were students. Gender was not significantly related to lottery condition.
The difference in response rates across incentive conditions was statistically significant by a chi-square goodness-of-fit test, χ²(1) = 4.331, p = .037, indicating a modest but statistically significant advantage for the four $25 Visa gift card condition.
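The reported statistic can be reproduced directly from the respondent counts. The following is a minimal Python sketch (standard library only) of the goodness-of-fit computation under an equal-split expectation; it is an illustration of the calculation, not the authors' original analysis script:

```python
# Sketch: chi-square goodness-of-fit on the 290 vs. 242 respondent split,
# with the null expectation of equal counts in the two incentive groups.
import math

observed = [290, 242]                 # $25-x-4 group, $100-x-1 group
expected = sum(observed) / 2          # 266 per group under the null
stat = sum((o - expected) ** 2 / expected for o in observed)

# For 1 df, the chi-square tail equals a two-sided normal tail:
# P(chi2_1 > x) = 2 * (1 - Phi(sqrt(x))) = erfc(sqrt(x / 2))
p = math.erfc(math.sqrt(stat / 2))
print(round(stat, 3), round(p, 3))  # -> 4.331 0.037
```

The computed values match the χ²(1) = 4.331, p = .037 reported above.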
When the students (and one additional person with no survey responses) were removed, the number of respondents dropped to 478, with 240 in the first group and 238 in the second. That is, of the 53 responding students (532 − 478 − 1 = 53), 51 were in the first group. The corresponding test, χ²(1) = 0.008, p = .927, was non-significant. We do not know the precise proportion of students in each incentive group; however, given that the groups were randomly formed, it is exceedingly unlikely that two or fewer of the 53 student respondents would fall in a single group by chance. This may indicate that the multiple smaller lotteries encouraged more student participation. It is also possible (as one reviewer pointed out) that university spam filters on student email accounts blocked email differentially depending on the lottery amount.
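The "exceedingly unlikely" claim can be checked with a simple binomial tail calculation. This is a sketch under the assumption that each of the 53 responding students was independently equally likely to land in either randomly formed group:

```python
# Sketch: probability that, under 50/50 random assignment, one of the
# two groups receives 2 or fewer of the 53 student respondents.
from math import comb

n = 53
tail = sum(comb(n, k) for k in range(3)) / 2 ** n  # P(X <= 2) for X ~ Bin(53, 0.5)
prob_either_group = 2 * tail                       # either group could be the small one
print(prob_either_group)  # on the order of 1e-13: exceedingly unlikely
```

A split as lopsided as the one observed is therefore essentially impossible under chance assignment alone, which is what motivates the substantive explanations offered above.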
Conclusions
While far from definitive, the results of this randomized study suggest a possible small advantage for multiple smaller incentives with students, but not with professionals. Our conclusion is strengthened somewhat by the fact that the expected monetary values of the two conditions were identical: each condition raffled $100 in total among 1,000 invitees, an expected reward of $0.10 per invitee, assuming all of those invited chose to enter. Interestingly, the study by Deutskens et al. (2004), whose findings align with ours, was conducted in the Netherlands with a sample more representative of the general adult population than our population of professionals and students.
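The equal-expected-value claim is a one-line arithmetic check, sketched here under the stated assumption that all 1,000 invitees in each condition entered the lottery:

```python
# Both conditions raffle $100 in total among 1,000 invitees,
# so the expected payout per invitee is identical.
n_invited = 1000
ev_four_25 = 4 * 25 / n_invited  # four $25 gift cards -> $0.10 per invitee
ev_one_100 = 100 / n_invited     # one $100 gift card  -> $0.10 per invitee
print(ev_four_25, ev_one_100)    # -> 0.1 0.1
```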
When the students were removed from our sample, the slight advantage of multiple small incentives disappeared entirely. That is, when our sample was almost entirely professional, there was no advantage to either condition. It would seem that students were much more motivated by multiple smaller incentives, which is perhaps reasonable given the typical financial difference between professionals and college students: even though the expected values were the same, the more financially pressed students were attracted to the better chance of winning despite the smaller prize. Göritz (2004) commented on the advantages to the researcher of offering fewer, larger prizes, but the context for that statement was different still, being a sample of online adults from the German population. The study by Porter and Whitcomb (2003) indicated that there were no substantial differences in student response rates between incentives of different value, but it was conducted with high school students in the United States. It may be that secondary school students are a somewhat more financially secure group than the graduate students we surveyed.
It is plausible that the results of our study would change if the expected value and/or the nature of the incentive were altered. Additionally, this professional and largely female population is likely to differ considerably in its tendency to respond from other populations. In short, the relative effectiveness of lotteries with a single larger prize versus lotteries with multiple smaller prizes may well depend on a number of factors, one of which is likely the financial need of the participant.
In sum, this study does suggest the importance of knowing an audience when it comes to developing incentive plans, and we hope that others will continue to explore and report on their strategies for enhancing response rates. Given that response rates appear to be declining, and that funding and review agencies (e.g., Institutional Review Boards, the Office of Management and Budget) will often comment on such plans, it can only help researchers to have some straightforward empirical evidence about what has worked in different data collection contexts.