The Effectiveness of Mailed Invitations for Web Surveys and the Representativeness of Mixed-Mode versus Internet-only Samples

Wolfgang Bandilla GESIS Leibniz Institute for the Social Sciences

Mick P. Couper University of Michigan

Lars Kaczmirek GESIS Leibniz Institute for the Social Sciences


E-mail is a common invitation mode for Web surveys. However, collecting e-mail addresses in another mode may raise privacy concerns among respondents. In our previous study, fewer than half the respondents provided an e-mail address. In this paper, we report on an experiment to test the efficacy of asking for e-mail addresses. Respondents to the 2012 German General Social Survey (ALLBUS) who reported having Internet access at home were randomized to two groups: the first was asked for their e-mail address, and the other was not. Using a mailed invitation to a follow-up Web survey, we explore the effect of this request on the subsequent response rate. We also followed up all cases (including those who reported not having Internet access at home) with a mail survey to explore the effect of adding mail in a sequential mixed-mode design. We find that asking for an e-mail address does not appear to have a negative effect on subsequent response. We also find that a mixed-mode design substantially increases response rates and brings the follow-up sample more in line with the ALLBUS in terms of selected demographic and attitudinal variables.


E-mail is the most common invitation mode for Web surveys, because it is cheap, easy to automate and personalize, and provides easy access to the survey (delivery of a unique clickable URL). But there are some concerns with this invitation mode, including low legitimacy, concerns about spam, and issues of churn (maintaining e-mail addresses over time) [see Couper (2008) for a discussion of e-mail versus alternative invitation modes]. Moreover, many sample frames do not contain e-mail addresses, and soliciting such addresses in another mode may raise privacy concerns. For this reason, some surveys (e.g., the Health and Retirement Study) do not collect e-mail addresses and instead send mailed invitations to supplemental Web surveys.1

The literature on mail versus e-mail invitations to Web surveys is both limited and mixed. Birnholtz et al. (2004) found that a mailed invitation (with an incentive) to faculty yielded a higher response rate than an e-mailed invitation with incentive (40 percent vs. 32 percent), though the difference was not statistically significant. In a survey of faculty, staff, and students, Kaplowitz et al. (2012) tested a postcard versus e-mail invitation, and found a significantly higher response rate among students for the postcard (22 percent vs. 19 percent), but a significantly lower response rate among faculty (33 percent vs. 40 percent) and no difference for staff (43 percent for each group). Millar and Dillman (2011) compared a mailed and e-mailed invitation in a student survey. In the no-incentive condition, the letter did not significantly affect response rates (21.2 percent vs. 20.5 percent). In another faculty survey, Dykema et al. (2013) found a mailed invitation significantly improved response rates over an e-mailed invitation (30.1 percent vs. 19.4 percent), with no incentive in either condition.

In our two previous studies in 2008 and 2010 (Bandilla et al. 2012; Bosnjak et al. 2013), both conducted among general population samples in Germany, we found that a mailed letter (pre-notice or invitation) to a Web survey was more effective than e-mail alone (2008: 57 percent vs. 43 percent; 2010: 51 percent vs. 40 percent). Another finding from our 2010 study was that only 45 percent of in-person respondents who reported using the Internet provided their e-mail addresses. For cost reasons, invitations to the follow-up survey were sent by e-mail only. This meant that more than half (55 percent) of eligible Internet users were not invited to the Web survey. This raises the question whether it is worth trying to solicit e-mail addresses for Web surveys where they do not already appear on the frame. The 2010 Web follow-up survey also excluded those without Internet access, potentially further increasing selection bias.

Given all this, in our 2012 study, we experimentally tested the effect of asking for e-mail addresses in the main survey. We also explored the effect on sample representation of including a mail survey for those without Internet access or as a follow-up to nonrespondents, relative to a Web-only survey. We thus have two research objectives:

  1. To test the effectiveness of asking (or not) for an e-mail address and then sending a mailed invitation to a Web survey
  2. To test the effectiveness and representativeness of a mixed-mode procedure comparing Internet users and non-users, using a mailed invitation to a Web survey and a mailed reminder with a paper questionnaire


The study was designed as a follow-up to the 2012 German General Social Survey (ALLBUS).2 The ALLBUS is based on a random sample of German-speaking adults, with persons randomly selected from the community registers. Data collection is by in-person interviews, using computer-assisted personal interviewing (CAPI). The 2012 ALLBUS achieved a response rate of 37.6 percent.

The Internet access status of all ALLBUS respondents was ascertained. All respondents were asked if they were willing to participate in a follow-up survey. (This is a requirement of the German data protection laws.) Among those with Internet access and who reported willingness to do a follow-up survey, a random 1/3 was asked for their e-mail address, while the remaining 2/3 was not asked this question. This process (outlined in Figure 1) yields four groups of willing respondents:

Figure 1 Overview of process.


  1. Those with Internet access who were not asked for an e-mail address
  2. Those with Internet access who were asked for an e-mail address and provided it
  3. Those with Internet access who were asked for an e-mail address and did not provide it
  4. Those without Internet access

For cost reasons, we could not follow up all willing ALLBUS respondents. We thus drew random subsamples from each of the four groups, in two stages. Our analysis focuses on a random subset of 250 cases from each of the four groups. All four groups were sent an invitation by mail to a web survey; this was followed by a mailed reminder which included a paper questionnaire. The Web/mail survey included a subset of items from the ALLBUS to compare differences in key measures. For the combined analyses of the change in key variables with the addition of mail, we weighted cases by the inverse of the selection probabilities into these subsets.


Overall, 73 percent of the ALLBUS respondents reported having Internet access (see Figure 1). While 71 percent of ALLBUS respondents expressed willingness to do a follow-up survey, those with Internet access were more willing to do so (85 percent) than those without (62 percent). This may reflect socioeconomic and demographic differences between these two groups. While the overall level of Internet access has increased over prior waves of the ALLBUS (54 percent in 2008 and 66 percent in 2010), the level of willingness to do a follow-up survey has declined somewhat (80 percent of Internet users in 2010 were willing).

Among those with Internet access and randomly assigned to be asked for an e-mail address, 42.4 percent provided one. This is a similar level to that found in 2010, suggesting that fewer than half of willing Internet users in Germany are willing to provide an e-mail address to a survey interviewer. Given this, is it worth asking for e-mail addresses? Are those who decline to provide such information less willing to participate in a follow-up survey? This is the focus of our first research question.

Table 1 shows the Internet and overall response rate for each of the four groups. Focusing on the first column (online only), we see that those who were asked and provided an e-mail address (group B) responded at a slightly (but not significantly) higher rate than those who were asked and did not provide an e-mail address (group C), when sent a mailed invitation to a Web survey (27.2 percent vs. 22.0 percent, χ2 (1)=1.82, n.s.). However, we see that those not asked for their e-mail address (group A) had a response rate of 19.2 percent to the online survey. This is significantly different from group B (19.2 percent vs. 27.2 percent, χ2 (1)=4.49, p=0.034) but not from group C (19.2 percent vs. 22.0 percent, χ2 (1)=0.59, n.s.). While the weighted response rate for groups B and C combined (24.2 percent) is higher than that for group A (19.2 percent), this difference is also not significant (χ2 (1)=2.56, n.s.). This suggests, contrary to our initial concerns, that there is no harm in asking for e-mail addresses, even if not everyone is willing to provide one. Also note from Table 1 that 3.2 percent of those who reported not having Internet access responded to the online survey (group D).
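These test statistics can be reproduced directly from the counts in Table 1. The sketch below uses the Pearson chi-square for a 2×2 table without continuity correction, which matches the reported values; the function name is ours, for illustration only:

```python
from math import erfc, sqrt

def chi2_2x2(r1, n1, r2, n2):
    """Pearson chi-square (no continuity correction) comparing two
    independent proportions: r1 respondents of n1 vs. r2 of n2."""
    a, b = r1, n1 - r1  # group 1: respondents / nonrespondents
    c, d = r2, n2 - r2  # group 2: respondents / nonrespondents
    n = n1 + n2
    chi2 = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    # With df=1 the statistic is a squared standard normal, so the
    # upper-tail p-value is the complementary error function of sqrt(chi2/2).
    p = erfc(sqrt(chi2 / 2))
    return chi2, p

# Online-only counts from Table 1 (48, 68, and 55 respondents of 250 each)
print(chi2_2x2(48, 250, 68, 250))  # A vs. B: chi2 ~ 4.49, p ~ 0.034
print(chi2_2x2(68, 250, 55, 250))  # B vs. C: chi2 ~ 1.82
print(chi2_2x2(48, 250, 55, 250))  # A vs. C: chi2 ~ 0.60
```

The same results can be obtained with standard statistical software (e.g., a chi-square test on the 2×2 table of respondents by experimental group), provided the continuity correction is turned off.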

Table 1 Counts and response rates (AAPOR RR2) by group.

Group  Internet  Asked for e-mail address         Invited   Online only       Mixed-mode (online+paper)
       access                                     (n)       n       %         n       %
A      Yes       Not asked                        250       48      19.2      137     54.8
B      Yes       Asked, gave e-mail address       250       68      27.2      140     56.0
C      Yes       Asked, did not give address      250       55      22.0      143     57.2
D      No        Not asked                        250        8       3.2      148     59.2
Unweighted total                                  1,000     179     17.9      568     56.8
Weighted rates                                                      17.0              56.2
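The weighted rates in the last row of Table 1 follow from reweighting the four equal-size subsamples by their (unequal) population shares among willing ALLBUS respondents. The sketch below reconstructs those shares from the rounded percentages reported in the text (73 percent Internet access, 85 percent vs. 62 percent willingness, the 1/3 vs. 2/3 split, 42.4 percent providing an address); the shares are therefore approximations, so the results only closely, not exactly, match the published figures:

```python
# Approximate population shares of the four willing-respondent groups,
# reconstructed from rounded figures in the text (hypothetical values):
share = {
    "A": 0.73 * 0.85 * (2 / 3),          # Internet access, not asked for e-mail
    "B": 0.73 * 0.85 * (1 / 3) * 0.424,  # Internet access, asked, gave address
    "C": 0.73 * 0.85 * (1 / 3) * 0.576,  # Internet access, asked, did not give
    "D": 0.27 * 0.62,                    # no Internet access
}

# Each group contributed 250 cases, so the design weight is proportional
# to the group's population share (the inverse selection probability).
rate_online = {"A": 19.2, "B": 27.2, "C": 22.0, "D": 3.2}
rate_mixed = {"A": 54.8, "B": 56.0, "C": 57.2, "D": 59.2}

def weighted_rate(rates):
    """Share-weighted average of the group-level response rates."""
    total = sum(share.values())
    return sum(share[g] * rates[g] for g in share) / total

print(round(weighted_rate(rate_online), 1))  # close to the reported 17.0
print(round(weighted_rate(rate_mixed), 1))   # close to the reported 56.2
```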

Our second research question focuses on the value of including a mail option, in contrast to a Web-only survey. While doing so increases costs, it is likely

  1. to increase response rates, and
  2. to reduce coverage biases associated with only surveying those with Internet access.

The answer to the first of these questions is clear from Table 1: the overall weighted response rate increases from 17 percent to 56 percent. In terms of case count, the number of completed surveys more than tripled, increasing from 179 to 568. Further, offering the mail follow-up also served to reduce the disparities in response rates across the four groups. This is especially noticeable for group D (those without Internet access). Although they made up 27 percent of the ALLBUS population, not surprisingly, they comprise only 4.1 percent of the total weighted number of online respondents. But once the mail follow-up is offered, this group comprises 21.7 percent of the final weighted set of respondents, more in line with the ALLBUS proportion.

Of course, it is not just the increased response (yielding more cases for analysis) that is important, but whether and to what extent these additional respondents bring the sample more in line with the population than an online-only survey, i.e., reduce the selection bias that may arise through noncoverage or nonresponse. To examine this, we repeated selected questions from the ALLBUS in our Web and mail follow-up survey, permitting us to compare both demographic and attitudinal measures to the ALLBUS population. The full set of comparisons is presented in Table 2. We highlight a few of the findings here for illustrative purposes. Note that we do not test for significant differences, given the overlapping nature of the samples. The estimates are weighted to reflect the differential selection probabilities into the four groups, and the standard errors reflect the design, using the pweight option in Stata (StataCorp LP, College Station, TX, USA).
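The design-based standard errors can also be computed outside Stata. The following is a minimal sketch of the Taylor-linearized variance of a weighted mean that a pweight specification implies for a single-stage design without strata or clusters; the data are made up for illustration, since the survey microdata are not reproduced here:

```python
from math import sqrt

def weighted_mean_se(x, w):
    """Weighted mean and its linearized (robust) standard error,
    analogous to [pweight=w] in a simple single-stage design."""
    n = len(x)
    sw = sum(w)
    mean = sum(wi * xi for wi, xi in zip(w, x)) / sw
    # Taylor-linearized variance with the usual n/(n-1) correction;
    # with equal weights this reduces to the familiar s/sqrt(n).
    var = (n / (n - 1)) * sum((wi * (xi - mean)) ** 2
                              for wi, xi in zip(w, x)) / sw ** 2
    return mean, sqrt(var)

# Illustrative (made-up) data: a 0/100 agreement indicator with design weights
x = [100, 0, 100, 100, 0, 0, 100, 0]
w = [1.2, 0.8, 1.2, 0.8, 1.2, 0.8, 1.2, 0.8]
mean, se = weighted_mean_se(x, w)
```

Percentages expressed on a 0–100 scale, as in Table 2, can be treated this way directly; the standard error then comes out on the same scale.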

Table 2 Comparison of online and mixed-mode (online+paper) responses versus ALLBUS.

                                           Follow-up (random subsample)
                                           Online only       Mixed-mode            ALLBUS
                                                             (online+paper)
Demographics:                              Mean   Std. err.  Mean   Std. err.      Mean   Std. err.
Age                                        48.39  1.569      52.01  0.841          51.65  0.781

                                           %      Std. err.  %      Std. err.      %      Std. err.
Higher qualification, entitling holders
to study at a university                   43.35  0.047      27.23  0.024          24.18  0.007
Is male                                    62.56  0.045      48.72  0.026          49.57  0.008

Other variables:
Please tell me after each one whether you have the same or a different opinion (percentage "same opinion"):
No matter what some people say, life for ordinary people is getting worse rather than better.
                                           77.11  0.039      75.55  0.022          78.34  0.007
With the future looking as it does, it's almost irresponsible to bring children into the world.
                                           22.90  0.042      30.68  0.024          37.45  0.008
Most politicians are not really interested at all in the problems of ordinary people.
                                           73.67  0.043      77.57  0.022          77.58  0.007
Most people don't really care in the slightest what happens to others.
                                           52.99  0.048      58.49  0.026          71.76  0.008

Please tell me whether in your opinion a woman should be legally permitted to have an abortion or not (percentage "yes, should be permitted"):
...if there is a strong chance of the baby being born with a severe defect?
                                           95.15  0.020      94.31  0.011          88.87  0.007
...if the woman is married and doesn't want any more children?
                                           61.64  0.046      59.22  0.026          52.25  0.012
...if the pregnancy would seriously endanger the woman's health?
                                           99.42  0.004      97.73  0.007          95.20  0.005
...if the family has a very low income and can't afford more children?
                                           55.95  0.048      55.62  0.026          44.52  0.012
...if the pregnancy is the result of rape?
                                           99.51  0.003      97.24  0.007          90.82  0.007
...if the woman is unmarried and doesn't want to marry the child's father?
                                           55.82  0.047      44.85  0.026          31.95  0.011
...if that is what the woman wants, regardless of her reasons?
                                           60.53  0.045      56.06  0.026          44.19  0.012

Some people think that most people can be trusted. Others think that one can't be careful enough when dealing with other people. What do you think?
One can't be careful enough                22.98  0.044      31.96  0.024          42.00  0.008

How interested in politics are you?
Very strongly/strongly                     42.59  0.048      33.21  0.024          28.28  0.007

Should there be Islamic religious instruction in state schools, should there only be Christian religious instruction, or should there be no religious instruction at all in state schools? In state schools in Germany, there should be...
Islamic religious instruction too          19.79  0.034      33.63  0.024          36.22  0.008

Many people use the terms "left" and "right" when they want to describe different political views. Here we have a scale which runs from left to right. Thinking of your own political views, where would you place these on this scale?
10-point scale (1=left, 10=right)          4.97   0.194      4.84   0.096          4.99   0.029

Would you describe yourself as more religious or less religious? Here we have a scale which runs from not religious to religious. Where would you place yourself on this scale?
10-point scale (1=not religious,
10=religious)                              4.12   0.314      4.61   0.162          4.98   0.029

In terms of demographic variables, the addition of the mail cases brings the follow-up sample more in line with the ALLBUS distributions than the Web-only group. For example, 43 percent of the Web-only respondents report a university entrance qualification, compared to 27 percent of the Web+mail respondents and 24 percent of the ALLBUS respondents. Similarly, the age and gender distributions of the follow-up sample including mail are closer to the ALLBUS distributions.

When comparing the responses to attitude and opinion measures, we note that mode effects could account for some of the differences, given that the follow-up survey was self-administered while the ALLBUS was interviewer administered. Nonetheless, we see a clear gradient, with the addition of the mail responses bringing the distributions more in line with the ALLBUS responses. For instance, 23 percent of Web respondents agree with the statement that with the future looking as it does, it's almost irresponsible to bring children into the world, compared with 31 percent of Web+mail respondents and 37 percent of ALLBUS respondents. Similarly, 53 percent of Web respondents endorse the statement that most people don't really care in the slightest what happens to others, while 58 percent of Web+mail respondents and 72 percent of ALLBUS respondents do so. We similarly see increasing levels of distrust in others and decreasing levels of interest in politics as we move from the Web respondents only to Web+mail, and finally to ALLBUS respondents. In terms of abortion attitudes, ALLBUS respondents interviewed face-to-face show lower levels of support for abortion under a variety of circumstances, while Web-only respondents show the highest levels of support.


We set out to address two research questions. The first focused on the efficacy of collecting e-mail addresses for a Web follow-up survey. Although fewer than half of the respondents were willing to provide an e-mail address, asking for one does not appear to harm subsequent response rates. That is, those who were asked for an e-mail address but declined to provide one did not respond at a lower rate than those not asked for an e-mail address. Overall, those asked (whether or not they provided an e-mail address) responded at a slightly higher rate to a mailed invitation, whether looking at Web-only responses or at Web+mail responses.

Our second research question addressed the utility of using a sequential Web+mail design rather than a Web-only follow-up survey. We find evidence that doing so not only significantly increases response rates (from 17 percent to 56 percent overall) but also brings response distributions more in line with the ALLBUS responses. Not only does this bring in those without Internet access, but it also brings in substantial numbers of respondents who reported having Internet access but who did not respond to the Web survey invitation. While not suggesting that the ALLBUS is a gold standard, our results do suggest that restricting the follow-up survey to those with Internet access may produce distributions that deviate substantially from the core ALLBUS results. Again, this points to the value of including a mail component to a Web survey.


Bandilla, W., M.P. Couper and L. Kaczmirek. 2012. The mode of invitation for web surveys. Survey Practice 5(3): 1–5.
Birnholtz, J.P., D.B. Horn, T.A. Finholt and S.J. Bae. 2004. The effects of cash, electronic, and paper gift certificates as respondent incentives for a web-based survey of technologically sophisticated respondents. Social Science Computer Review 22(3): 355–362.
Bosnjak, M., I. Haas, M. Galesic, L. Kaczmirek, W. Bandilla and M.P. Couper. 2013. Sample composition discrepancies in different stages of a probability-based online panel. Field Methods 25(4): 339–360.
Couper, M.P. 2008. Designing effective web surveys. Cambridge University Press, New York.
Dykema, J., J. Stevenson, L. Klein, Y. Kim and B. Day. 2013. Effects of e-mailed versus mailed invitations and incentives on response rates, data quality, and costs in a web survey of university faculty. Social Science Computer Review 31(3): 359–370.
Kaplowitz, M.D., F. Lupi, M.P. Couper and L. Thorp. 2012. The effect of invitation design on web survey response rates. Social Science Computer Review 30(3): 339–349.
Millar, M.M. and D.A. Dillman. 2011. Improving response to web and mixed-mode surveys. Public Opinion Quarterly 75(2): 249–269.
1 In contrast, the U.K. Household Longitudinal Survey (or Understanding Society) sends mailed invitations to everyone and e-mail to those who provided e-mail addresses.
2 See the ALLBUS website for further information.
