An Experiment on Improving Response Rates and Its Unintended Impact on Survey Error

Daniel M. Merkle, ABC News

Murray Edelman, Edelman Consulting

Groves (2007) warns that the “[b]lind pursuit of high response rates in probability samples is unwise” (p. 668) because it may have the unintended consequence of actually increasing survey error. This happens when efforts to improve the response rate increase the correlation between the propensity to respond and the survey variable being measured. The findings of a study we conducted in 1997 provide a good illustration of Groves’ point: an experiment designed to test factors hypothesized to increase the response rate instead had the unanticipated effect of increasing survey error.

Many of the studies that explore ways to increase response rates ignore the bigger issue of how methodological changes will impact survey error (see Groves 2007 for a review). Often it’s not possible to compute a measure of error because the population parameters are not known. This experiment was conducted as part of an Election Day exit poll, making it possible to compute a measure of survey error in addition to the response rate.

The key moment in an exit poll occurs when the interviewer approaches the voter. In a matter of just seconds, the request for participation is made and the voter decides whether or not to respond based on the information given. This study tested two factors that were hypothesized to increase response rates and thus decrease survey error. The first was an incentive — a pen which included the logos of the six Voter News Service (VNS) member organizations (ABC, the Associated Press, CBS, CNN, FOX, and NBC).

The second factor tested in this study was the incorporation of a colorful folder over the questionnaire pad to better standardize the interviewer’s approach to the voter and to help the interviewer stress key pieces of information that we hypothesized would lead to better compliance. A folder was designed that fit over the right half of the questionnaire pads. On the top of the folder were the color logos of the six media organizations. Below that were the words “Survey of Voters,” “Short” and “Confidential.” The folder approach was expected to increase response rates for a few reasons:

First, it was expected that stressing key information, that the survey was short and confidential, would make voters more likely to fill it out. As part of a previous evaluation we interviewed voters who refused to fill out the exit poll questionnaire and found that lack of time was the primary reason for refusing, followed by concerns about confidentiality and privacy.

Second, we tried to grab the attention of sampled voters leaving the polling place by making the color folder more eye-catching than the black and white questionnaire that is normally used.

Finally, on the back of the folder we included some directions and helpful hints for the interviewer about how to get voters to fill out the survey, including special instructions detailing how to deal with people who hesitated or refused.

Method

An experiment was conducted as part of the New Jersey and New York City general election exit polls conducted by VNS in November 1997. A total of 80 precincts were randomly selected, 44 from all precincts in New Jersey and 36 from all precincts in New York City. These precincts were then randomly assigned to one of three conditions:

  1. Folder Condition — The interviewers in these precincts were given questionnaire pads with the folders described above. In the folder were the standard VNS questionnaires, which were printed in black-and-white and included the logos of the sponsoring media organizations in the upper left-hand corner. All three conditions used the same questionnaire.
  2. Folder and Pen Condition — The interviewers in these precincts used the folders just described and also offered VNS pens to voters as an incentive for filling out the questionnaire.
  3. Traditional Condition — The interviewers followed the standard VNS interviewing procedures, approaching voters without the folder and without the pen.

Interviewers randomly selected voters leaving the polling place to fill out the exit poll. (1) They also kept track of each sampled voter who refused to fill out the questionnaire or whom they missed. This information was used to compute precinct-level response rates, refusal rates and miss rates.
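The precinct-level rates described above can be sketched as follows. This is an illustrative reconstruction, not VNS code, and it assumes the denominator for each rate is all sampled voters (completes plus refusals plus misses):

```python
def precinct_rates(completes: int, refusals: int, misses: int) -> dict:
    """Compute response, refusal, and miss rates (in percent) for one precinct.

    Assumes the denominator is every sampled voter: those who completed
    the questionnaire, those who refused, and those the interviewer missed.
    """
    sampled = completes + refusals + misses
    return {
        "response_rate": 100.0 * completes / sampled,
        "refusal_rate": 100.0 * refusals / sampled,
        "miss_rate": 100.0 * misses / sampled,
    }

# Hypothetical precinct: 55 completes, 35 refusals, 10 misses.
rates = precinct_rates(55, 35, 10)  # 55.0 / 35.0 / 10.0 percent
```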

Two measures of survey error were also computed at the precinct level, using the vote question from the exit poll as the survey estimate and the official precinct votes as the population values. First, the signed error was computed by taking the Democratic percentage minus the Republican percentage from the exit poll and subtracting from it the Democratic percentage minus the Republican percentage from the official vote. (2) Second, a measure of the absolute error was computed by taking the absolute value of the signed error.
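The two error measures can be written out explicitly. The function names below are our own; the definitions follow the text, with a positive signed error indicating a Democratic overstatement in the exit poll:

```python
def signed_error(exit_dem: float, exit_rep: float,
                 official_dem: float, official_rep: float) -> float:
    """Democratic-minus-Republican margin in the exit poll, minus the same
    margin in the official precinct vote (in percentage points).
    Positive values indicate a Democratic overstatement."""
    return (exit_dem - exit_rep) - (official_dem - official_rep)

def absolute_error(exit_dem: float, exit_rep: float,
                   official_dem: float, official_rep: float) -> float:
    """Absolute value of the signed error."""
    return abs(signed_error(exit_dem, exit_rep, official_dem, official_rep))

# Hypothetical precinct: exit poll 54D-46R, official vote 50D-50R.
e = signed_error(54, 46, 50, 50)  # +8 points, a Democratic overstatement
```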

Results

a. Pens

Contrary to our expectations, the hypothesis that the pen would increase the response rate was not supported. The average response rate was similar in the Folder/Pen Condition (55.4 percent) and the Folder Condition (54.2 percent) (see Table 1). The same was true of refusal rates: 34.7 percent in the Folder/Pen Condition and 34.3 percent in the Folder Condition.

Table 1. Comparison of Means: Folder/Pen vs. Folder Only.

                 Folder/Pen   Folder    SE of
                 (n=27)       (n=26)    Diff.   t-value
Response rate    55.4         54.2      4.8     0.24
Refusal rate     34.7         34.3      3.6     0.11
Miss rate         9.9         11.5      2.2     0.71
Signed error      7.0          8.3      3.8     0.36
Absolute error   11.0         12.6      2.2     0.74

*p<0.05

The hypothesis that the pen would decrease survey error was also not supported. The Folder/Pen Condition and the Folder Condition did not differ significantly in terms of the signed error or the absolute error (see Table 1).
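The t-values reported in Tables 1 and 2 are consistent with a simple difference-of-means test: the difference between the two condition means divided by the standard error of the difference. As a check on the published figures (small discrepancies reflect rounding of the reported means):

```python
def t_value(mean_a: float, mean_b: float, se_diff: float) -> float:
    """t-statistic for a difference of means, given the SE of the difference."""
    return (mean_a - mean_b) / se_diff

# Table 1, response-rate row: (55.4 - 54.2) / 4.8 ≈ 0.25 (reported as 0.24)
t1 = t_value(55.4, 54.2, 4.8)

# Table 2, signed-error row: (7.6 - (-2.0)) / 4.1 ≈ 2.34 (reported as 2.33*)
t2 = t_value(7.6, -2.0, 4.1)
```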

b. Folders

Because the two Folder Conditions were not significantly different from each other, we combined them to test the impact of the Folder by comparing it to the Traditional method. The data suggest that the Folder Condition had a small, although not quite statistically significant, impact on response rates in the hypothesized direction. The response rate was about five percentage points higher using the Folder compared with the Traditional method (t=1.22, p=0.11, one-tailed test) and the refusal rate was 4 percentage points lower in the Folder Condition (t=1.25, p=0.11, one-tailed test) (3) (see Table 2).

Table 2. Comparison of Means: Folder vs. Traditional Method.

                 Folder    Traditional   SE of
                 (n=53)    (n=27)        Diff.   t-value
Response rate    54.8      49.9          4.0     1.22
Refusal rate     34.5      38.4          3.1     1.25
Miss rate        10.7      11.6          2.0     0.48
Signed error      7.6      –2.0          4.1     2.33*
Absolute error   12.5      15.5          3.0     1.00

*p<0.05

Although these differences do not quite reach the traditional level of statistical significance, an argument could have been made to implement the Folder based on this finding. The cost of implementing the Folder was relatively small compared with the cost of other procedures that might be used to increase response rates, such as incentives or hiring multiple interviewers per precinct.

Based on the response rate results, conventional wisdom might have suggested that survey error would be lower in the Folder Condition than in the Traditional Condition, if only slightly. In fact, the opposite was the case. Contrary to what was hypothesized, the signed error was actually larger in the Folder Condition. The Folder Condition had a fairly large, statistically significant overstatement of the Democratic candidate (7.6 percentage points), whereas the Traditional Condition had a small, nonsignificant Republican overstatement (2.0 percentage points).

Conclusion

Studies that explore ways to increase response rates often ignore the more important issue of how methodological changes will impact survey error. This study is unique because it investigated how a pen incentive and a change in the interviewer’s approach in the Folder Condition affected not only response rates but the more critical measure of survey error.

The pen incentive did not have an impact on response rates or survey error. In the Folder Condition there was a slight, although not quite statistically significant, five-point increase in response rates. Had this study used the response rate as the only measure of survey data quality, this might have seemed sufficient justification for implementing this low-cost procedure in future exit polls. However, that would have been a mistake, because the experiment also found that the Folder Condition significantly increased the bias in the vote estimates.

The procedures used in the Folder Condition were more appealing to Democrats than to Republicans. The data do not allow us to determine what specific aspect of the Folder Condition is responsible for this. After conducting this study we initially hypothesized that the color logos of the national news organizations on the folder may have been perceived more positively by Democrats, leading to a greater propensity to respond among these voters. However, an experiment to test the effect of the logos by the second author using the 2000 exit polls did not find an impact of the logos on response rates or error.

While this doesn’t rule out the logos as having an effect in this study, there are other possible explanations for the Democratic overstatement. The Folder Condition emphasized that the survey was short and confidential and also included more interviewer training on refusal conversion. These are standard methods for improving response rates but perhaps the message of “short and confidential” appealed more to Democrats than Republicans, and it may have been the case that Democratic refusals were more easily converted than Republican refusals.

But whatever the reason for the effect, the lesson for survey researchers is clear. When studying ways to improve survey data quality it’s important to look beyond the impact on response rates and also consider the impact on survey error. As Groves (2007) states, “nonresponse bias is a phenomenon much more complex than mere nonresponse rates.” The quest for higher response rates for the sake of higher response rates is misplaced. Surveys with higher response rates are not necessarily more accurate (e.g., Merkle and Edelman 2002), and manipulations designed to increase response rates can increase survey error if they are differentially attractive to subgroups of the population.

************************************

An earlier version of this paper was presented at the annual conference of the American Association for Public Opinion Research, St. Louis, MO, May 14–17, 1998. The authors would like to thank Kathy Dykeman and Chris Brogan for their help fielding this study.

(1) See Merkle and Edelman (2000, 2002) for details on the VNS exit poll methodology.
(2) Other operationalizations of exit poll error (e.g., Lindeman et al. 2005) produced results similar to those reported below.
(3) There was an extreme outlier in this analysis, a precinct with a very low response rate of six percent. The reason for this outlier was that the interviewer experienced significant legal problems at the precinct and had to stand over 100 feet away. Dropping this precinct from the analysis strengthens the observed effect on response rate (t=1.53, p=0.07, one-tailed test) and refusal rate (t=1.39, p=0.08, one-tailed test).

References

Groves, R.M. 2007. Nonresponse rates and nonresponse bias in household surveys. Public Opinion Quarterly 70: 646–675.
Lindeman, M., E. Liddle and R. Brady. 2005. Investigating causes of within-precinct error in exit polls: confounds and controversies. 2005 Proceedings of the American Statistical Association. American Statistical Association, Alexandria, VA.
Merkle, D.M. and M. Edelman. 2000. A review of the 1996 Voter News Service exit polls from a total survey error perspective. In (P.J. Lavrakas and M. Traugott, eds.) Election Polls, the News Media, and Democracy. Chatham House, New York.
Merkle, D.M. and M. Edelman. 2002. Nonresponse in exit polls: a comprehensive analysis. In (R.M. Groves, D.A. Dillman, J.L. Eltinge and R.J.A. Little, eds.) Survey Nonresponse. Wiley, New York.
