Introduction
The Rutgers Center for Tobacco Studies has surveyed internal medicine physicians (IMPs) since 2018 on various issues related to tobacco use (Delnevo and Singh 2021). Despite often-cited difficulties in obtaining responses from physicians (Cho, Johnson, and VanGeest 2013; Thorpe et al. 2009; VanGeest, Johnson, and Welch 2007), our first two surveys achieved overall response rates of 62.6% (2018) and 59.3% (2019), due in part to the offer of a $50 Starbucks gift card as an upfront incentive.
Wave 3 presented several challenges. First, based on the results of an incentive experiment in Wave 1, in Wave 2 we offered all respondents a $50 Starbucks gift card. However, limited resources prevented us from offering all respondents the $50 incentive in Wave 3. Second, this wave was the first to take place during the COVID-19 pandemic, when physicians confronted a new, quickly evolving public health crisis. Many began seeing patients remotely due to the closure of outpatient practices, and we were unsure whether our sample would receive their invitations to participate. In this context, the salience of tobacco issues was likely dwarfed by other concerns. For us, COVID precautions meant our mail (including returned survey invitations) was not delivered directly to us but to a centralized university mailroom from which we had to retrieve it. All of this led us to expect lower response rates. We wanted to know, however, the independent impacts of the lower incentive and the COVID-19 context. We therefore fielded an experiment to find out.
An extensive literature on the use of incentives in surveys demonstrates that offering incentives, especially upfront, unconditional incentives, boosts response rates in general and among physicians in particular (e.g., Delnevo, Abatemarco, and Steinberg 2004; Singer and Ye 2013; Dillman and Christian 2014). The literature on physician surveys indicates that higher incentives are generally more effective at producing higher response rates (Gunn and Rhodes 1981; Mizes, Fleece, and Roos 1984; Asch, Christakis, and Ubel 1998; Kellerman, Scott, and Herold 2001). Given our limited resources in Wave 3, we could offer the regular $50 incentive to only half of our sample; to the other half, we offered $25. We therefore hypothesized higher response rates in the group receiving the $50 incentive than in the group receiving $25. Previous research has shown that the context in which a survey is fielded can affect response rates (e.g., Johnson et al. 2006) and that COVID-19 specifically has led to lower response rates (Bates and Zamadics 2021). We therefore further hypothesized that, due to the pandemic and the special challenges it presented to physicians, even the $50 group would have response rates lower than in previous waves.
Methods
Five hundred IMPs were randomly selected from the American Medical Association’s Physician Masterfile. With the first mailed invitation to complete the survey anonymously online, physicians were randomly assigned to receive an upfront incentive of either a $50 or a $25 Starbucks gift card. There were no significant differences between incentive conditions in age or sex. Three additional reminders were mailed, each 10 days apart; the final mailing contained a paper version of the survey and a prepaid return envelope.
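To make the design concrete, a 1:1 randomization of this kind can be reproduced in a few lines. The sketch below is illustrative only, not the study’s actual procedure; the physician identifiers and the seed are placeholders.

```python
import random

# Minimal sketch of 1:1 random assignment to incentive conditions.
# `physician_ids` stands in for the 500 IMPs drawn from the AMA Masterfile;
# in practice these would be the sampled record identifiers.
physician_ids = list(range(500))

rng = random.Random(2021)  # fixed seed so the assignment is reproducible
rng.shuffle(physician_ids)

half = len(physician_ids) // 2
assignments = {pid: ("$50" if i < half else "$25")
               for i, pid in enumerate(physician_ids)}
```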
These procedures differed somewhat from previous waves of the study. In Wave 1, most respondents completed the survey via paper and pencil, though some were randomized to a web-push condition. In that wave, we also experimentally evaluated the impact of offering gift cards in differing amounts ($25 or $50) and from different retailers. In Wave 2, all respondents were recruited via a web-push approach and incentivized with a $50 Starbucks gift card; in that wave, 750 IMPs were randomly selected for invitation.
The survey was offered in English and consisted of 40 items. Data were collected between May and July 2021. We calculated the American Association for Public Opinion Research Response Rate 3 (RR3; American Association for Public Opinion Research 2016) to compare the impact of the different incentives.
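For reference, RR3 counts partial interviews in the denominator and includes an estimate, e, of the proportion of unknown-eligibility cases that are in fact eligible (American Association for Public Opinion Research 2016):

```latex
% AAPOR Response Rate 3: I = complete interviews, P = partials,
% R = refusals and break-offs, NC = non-contacts, O = other eligible
% non-respondents, UH/UO = cases of unknown eligibility, e = estimated
% proportion of unknown-eligibility cases that are eligible.
\[
\mathrm{RR3} = \frac{I}{(I + P) + (R + NC + O) + e\,(UH + UO)}
\]
```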
Results
We obtained a response rate of 50.2%, considerably lower than in previous waves (62.6% in 2018 and 59.3% in 2019). In line with our hypotheses, Table 1 suggests this was partially due to the lower incentive: the group offered $25 achieved a response rate of only 48.0%, compared to 52.4% among the $50 incentive group. Even the group offered $50, however, had a response rate substantially lower than in previous years.
We observed striking differences by age and sex, the two demographic variables available in our sampling frame. IMPs under 45 had a 16 percentage point higher response rate when offered $50, and among those 45–54 the difference was nearly 11 percentage points. In contrast, participants 55 and older responded at higher rates to the lower incentive. Although both male and female IMPs responded at higher rates to the larger incentive, the difference was greater for females (over 7 percentage points).
Of course, for survey practitioners fielding physician surveys during COVID-19, considerations beyond response rates are important at the survey design stage. Table 2 therefore offers further information on the impact of the different incentive amounts on survey cost, the number of reminders required to obtain a response, and the mode of completion.
The $50 incentive condition cost significantly more than the $25 condition. Indeed, for a 35% increase in cost per complete and a 62% increase in overall cost, the $50 condition resulted in only a 9% (4 percentage point) increase in response rate. Survey practitioners interested in surveying physicians in the context of COVID-19 should be aware of this trade-off when choosing the incentive amount to offer during recruitment.
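To make the arithmetic explicit: the ~9% relative gain follows directly from the rates reported in Table 1, while the cost-per-complete comparison depends on per-case costs we do not reproduce here. In the sketch below, n_per_arm and other_cost_per_case are hypothetical placeholders, not the study’s actual cost data.

```python
# Relative response-rate gain, computed from the rates reported in Table 1.
rr_25, rr_50 = 0.480, 0.524
pp_gain = rr_50 - rr_25          # ~4.4 percentage points
relative_gain = pp_gain / rr_25  # ~0.092, i.e., the ~9% figure above

# Cost per complete under HYPOTHETICAL per-case costs; the study's actual
# cost figures come from its Table 2 and are not reproduced here.
n_per_arm = 250                  # assumed even split of the 500 sampled IMPs
other_cost_per_case = 10.00      # placeholder for printing/postage per case
for incentive, rr in ((25, rr_25), (50, rr_50)):
    completes = rr * n_per_arm
    total_cost = n_per_arm * (incentive + other_cost_per_case)
    print(f"${incentive} arm: total ${total_cost:,.0f}, "
          f"${total_cost / completes:,.2f} per complete")
```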
Fewer reminders were needed, on average, in the $50 incentive group than in the $25 incentive group (1.8 vs. 2.3). Meanwhile, the percentage of respondents who completed the paper version of the survey rather than the web version did not differ statistically between the two groups.
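A comparison like the mode result would typically be run as a chi-square test on the 2×2 table of incentive condition by completion mode. The counts below are hypothetical placeholders (the actual counts appear in Table 2); the sketch only shows the form of the test.

```python
from scipy.stats import chi2_contingency

# HYPOTHETICAL 2x2 counts (incentive condition x completion mode);
# the study's actual counts appear in its Table 2.
#              web  paper
counts = [[110, 21],   # $50 condition
          [100, 20]]   # $25 condition

chi2, p, dof, expected = chi2_contingency(counts)
print(f"chi-square = {chi2:.2f}, p = {p:.3f}")
# A p-value above .05 would match the reported null result for mode.
```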
Discussion
In line with previous literature and our hypotheses, we found that IMPs responded at higher rates to the higher incentive amount. While the lower incentive depressed response rates, even the group offered the original amount failed to achieve response rates comparable to previous waves. We attribute this shortfall to the pandemic.
This study has some limitations. First, while the samples assigned to the different incentive conditions did not differ from one another in terms of age or sex, the two demographic variables to which we had access in the sampling frame, there may be other variables that we were unable to measure on which the samples were imbalanced. Second, while the internal validity of the study is high given random assignment of respondents to incentive levels, we do not have population benchmarks to which to compare our sample. However, given that only small amounts of variance on potentially moderating variables are necessary to estimate valid, generalizable treatment effects (Druckman and Kam 2011), we are confident in the generalizability of our findings. Third, given that our survey administration procedures were not exactly the same across waves, it is possible that differences unrelated to COVID-19 explain some of the variation in response rates across waves. Finally, even before COVID, response rates to surveys had been declining. We cannot rule out that some of the decline in response rates on which we report here is due to that ongoing trend.
While COVID-19 has made it more difficult to survey physicians, understanding their attitudes and opinions is perhaps more important now than ever. It therefore behooves researchers to continue investigating how to increase physician response rates in this challenging new context.