Mobile-Only Web Survey Respondents
Survey researchers have long been concerned about “mobile-only” survey respondents. In the 2000s, random digit dialing surveys included landline phones only. The steady decline of landline coverage in both the United States and Europe, together with the rise of mobile phone subscriptions, led survey researchers about a decade ago to start using dual-frame samples containing both landline and mobile phone numbers (Brick et al. 2007; Link et al. 2007). Nowadays, cell phones often constitute the large majority of sampled telephone numbers (Pew Research 2016).
Just as cell phones have largely replaced landline phones as the medium of communication in telephone polling, they now feature enough capabilities to potentially replace the (desktop) computer in web surveys as well. As of July 2015, 73 percent of Americans owned a computer, 68 percent a smartphone, and 45 percent a tablet computer (Pew Research 2015).
In the last couple of years, mobile phones have evolved from a medium for voice communication into smart multimedia devices. Most smartphones feature a high-speed mobile Internet connection, which enables the use of all kinds of new information and communication technologies.
Earlier studies have documented that an ever-increasing number of Internet panel survey respondents in Europe complete surveys on tablets and smartphones. We found that in 2014, 15 percent of respondents in the Dutch Longitudinal Internet Studies for the Social sciences (LISS) panel completed surveys on a tablet and 5 percent on a mobile phone. Others have found similar rates in different panels in different countries (Lugtig and Toepoel 2016; Mavletova 2013; Struminskaya, Weylandt, and Bosnjak 2015).
This paper aims to study what defines the group of mobile-only web survey respondents in the United States. By mobile-only we mean, in practice, both tablet and cellphone respondents, although we will break down our analyses to show differences between the devices. The remainder of this article is structured as follows:
We will first describe data from the American Life Panel (ALP), which we use to illustrate device use in one large U.S.-based panel. We describe the proportion of respondents using different devices over time, both at each of seven measurement occasions within the panel and longitudinally. We then study the characteristics of the different types of respondents, focusing on differences in voting behavior between the groups to see whether including or excluding the mobile-only group in web surveys would lead to biases in political polls.
Data
The data in this study stem from the RAND American Life Panel (ALP). The ALP is a nationally representative panel survey consisting of 6,000 respondents. The panel started in 2003 with a probability-based survey conducted in the context of the Health and Retirement Study, carried out at the University of Michigan, and is now maintained by the RAND Corporation. Regular refreshment samples of panel respondents are added to ensure that the ALP remains representative of the whole U.S. population. All ALP respondents are recruited offline. More details about the recruitment and representativeness of the panel can be found on the ALP website (https://alpdata.rand.org/).
In this paper, we use data from seven waves that all asked respondents questions related to the midterm congressional elections held on November 4, 2014. The first wave was conducted in May 2014; later waves started on September 28 and October 5, 12, 19, and 26. Data in each wave were collected over a one-week period. Finally, a post-election survey was held on November 5, 2014. In total, 2,925 respondents from the ALP participated in at least three of the seven questionnaires and are used in our analysis. Respondents who participated in fewer than three waves were dropped from the analyses (2,355 cases in total, which included respondents newly recruited to the panel in October 2015). A separate wave-by-wave analysis including all respondents did not show differences in device use between the respondents included in and excluded from our analysis.
In each wave, we coded the device respondents used to complete the survey using the user agent strings (UAS). UAS contain information on the device, operating system, and browser being used (Callegaro 2013). The strings were coded with a hand-written script in R 3.3.0 (R Core Development Team 2016), which is available from the authors.
We defined a regular PC or laptop as a computer with a fixed keyboard and a screen larger than 6 inches. A tablet is defined as a device with a screen larger than 6 inches but without a fixed keyboard. A phone is a device with a screen smaller than 6 inches that can also be used to make calls over a cellular network.
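To illustrate how such a classification could work in practice, the R sketch below maps user agent strings onto the three device categories. The classify_device() helper and its regular expressions are simplified assumptions for illustration, not the authors' actual coding script.

# Illustrative sketch: classify a user agent string (UAS) into the three
# device categories defined above. The regular expressions are simplified
# assumptions; the authors' actual R script may use different rules.
classify_device <- function(uas) {
  uas <- tolower(uas)
  if (grepl("ipad|tablet|kindle|silk", uas)) {
    "tablet"   # screen larger than 6 inches, no fixed keyboard
  } else if (grepl("iphone|windows phone|android.*mobile", uas)) {
    "phone"    # screen smaller than 6 inches, cellular calling capability
  } else {
    "pc"       # default: fixed keyboard, screen larger than 6 inches
  }
}

# Example usage on a few made-up user agent strings
uas_examples <- c(
  "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
  "Mozilla/5.0 (iPhone; CPU iPhone OS 9_0 like Mac OS X) Mobile/13A344",
  "Mozilla/5.0 (iPad; CPU OS 9_0 like Mac OS X)"
)
sapply(uas_examples, classify_device)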
As covariates, we use variables that we deem to be relevant in explaining differences in the use of devices. We expect male and younger respondents, as well as respondents in paid work, in larger households, with higher incomes, and of non-white ethnicity to use a mobile device more often (Struminskaya, Weylandt, and Bosnjak 2015; Toepoel and Lugtig 2014).
Results
Table 1 shows that in each wave of the ALP study, about 80 percent of all respondents use a regular personal computer (PC) or laptop to complete the survey. Slightly less than 10 percent of respondents use a tablet, and slightly more than 10 percent a smartphone. Nonresponse in the ALP panel amounts to about 15 percent of all invited respondents in each month.
For our purposes, it is more relevant to study whether respondents use the same device consistently across waves. We find that about 80 percent of respondents use the same device at all times (see Table 2): 68 percent always use a PC or laptop to complete questionnaires, 5 percent always use a tablet, and 7 percent always use a mobile phone. About 20 percent of respondents use multiple devices over the course of the seven waves we study: 11 percent use both their PC and phone, 7 percent use a PC and a tablet, and 1 percent use all three types of devices over the seven-wave period.
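As an illustration of how such longitudinal device-use patterns can be derived from the wave-level device codes, the R sketch below collapses a small, hypothetical long-format data set to one pattern per respondent; the data and variable names (resp_id, wave, device) are assumptions, not the actual ALP data structure.

# Illustrative sketch: collapse wave-level device codes into one
# longitudinal device-use pattern per respondent (hypothetical data).
devices_long <- data.frame(
  resp_id = rep(1:3, each = 3),
  wave    = rep(1:3, times = 3),
  device  = c("pc", "pc", "pc",
              "pc", "phone", "pc",
              "tablet", "tablet", "tablet")
)

pattern_per_resp <- tapply(devices_long$device, devices_long$resp_id, function(d) {
  u <- sort(unique(d))
  if (length(u) == 1) paste("always", u)            # e.g. "always pc"
  else if (length(u) == 3) "mix all three"
  else paste("mix", paste(u, collapse = "-"))       # e.g. "mix pc-phone"
})

table(pattern_per_resp)   # distribution over longitudinal device-use groups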
In line with earlier findings (Lugtig and Toepoel 2016; Struminskaya, Weylandt, and Bosnjak 2015), we again find that mobile respondents (respondents who use their tablets or phones to complete the surveys) are less likely to participate in any given wave. Out of a maximum of seven waves, the ‘always PC’ respondents participate on average in slightly more than six waves, whereas tablet-only respondents respond in 5.58 waves. Although this difference may seem small, recall that we constrained our sample to respondents who participated at least three times.
We hypothesize that mobile-only respondents in the ALP study differ markedly from PC respondents on a range of demographic variables. Cook (2014), using data from another panel in the United States, indeed finds that younger and female respondents are more likely to complete surveys on tablets. Smartphone respondents are more likely to have only a high school degree and have a lower income than PC and tablet respondents. Both tablet and smartphone users are more likely to identify as Hispanic or African American (Cook 2014).
Table 3 shows the results of a multinomial logistic regression analysis explaining longitudinal device use, categorized into the six longitudinal device-use patterns shown in Table 2. The ‘always PC’ group serves as the reference group, which means that the coefficients shown in Table 3 express differences relative to that group. We report average marginal effects (ME) (Mood 2009), which represent the multivariate effects on the predicted probability of being in the ‘always PC’ group as compared to each of the other groups. For example, the coefficient for age/10 in the ‘Mix PC-phone’ group means that for every decade a respondent is younger, his or her predicted probability of being in the ‘always PC’ group rather than the ‘Mix PC-phone’ group is 5 percentage points lower. Although this difference appears small, keep in mind that a person selected at random from either the ‘always PC’ or the ‘Mix PC-phone’ group has a probability of 0.86 of being in the ‘always PC’ group, as that group is much larger than the ‘Mix PC-phone’ group. A shift of 0.05 in this probability for every decade a respondent is younger should therefore be considered a large effect. If an imaginary respondent aged 60 has a predicted probability of 95 percent of being in the ‘always PC’ group as opposed to the ‘Mix PC-phone’ group, this probability is about 75 percent for someone aged 20, controlling for the effects of the other covariates.
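A minimal sketch of how such a multinomial logit and a numerically approximated average marginal effect for age could be computed in R is given below. The simulated data frame and all variable names (panel_data, device_pattern, age, female, hh_size) are hypothetical stand-ins for the ALP variables, and the marginal effect is approximated by shifting age by one decade rather than computed analytically.

# Illustrative sketch (simulated data): multinomial logit for the
# longitudinal device-use pattern, with the average marginal effect of age
# approximated numerically by shifting age by one decade.
library(nnet)

set.seed(1)
n <- 500
panel_data <- data.frame(
  age     = sample(18:80, n, replace = TRUE),
  female  = rbinom(n, 1, 0.5),
  hh_size = sample(1:6, n, replace = TRUE)
)
# Simulate a device-use pattern loosely related to age (younger -> more phone use)
u <- runif(n)
p_phone <- 0.5 * plogis((40 - panel_data$age) / 15)
panel_data$device_pattern <- factor(ifelse(
  u < p_phone, "mix PC-phone",
  ifelse(u < p_phone + 0.1, "always tablet", "always PC")
))

# 'always PC' is the first factor level and thus the reference group
fit <- multinom(device_pattern ~ I(age / 10) + female + hh_size,
                data = panel_data, trace = FALSE)

# Average marginal effect of a one-decade increase in age on the predicted
# probability of each device-use pattern
p0 <- predict(fit, newdata = panel_data, type = "probs")
p1 <- predict(fit, newdata = transform(panel_data, age = age + 10), type = "probs")
round(colMeans(p1 - p0), 3)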
Of all the predictors tested, we find that only a few are predictive of device use in the ALP. Small marginal effects of age are found for the ‘always phone’ and ‘always tablet’ groups in comparison to the ‘always PC’ group, while there is no significant difference in age between the ‘always PC’ and ‘mix PC-tablet’ groups.
Panel respondents who complete surveys on their phone are not only younger than the ‘always PC’ group. ‘Always phone’ respondents are also less likely to have a higher education (Bachelor’s degree or higher), more likely to be married, and more likely to be of Hispanic or African American ethnicity. These characteristics coincide with some of the most important characteristics of hard-to-recruit populations (Tourangeau, Edwards, and Johnson 2014).
However, while we find that the types of respondents who are hard to recruit into surveys are more likely to answer surveys on their phone, we do not find any differences in reported voting behavior between respondents. ‘Always phone’ respondents are no more or less likely to vote Republican or Democratic than other respondents.
Discussion
This paper studied the device use of respondents in the American Life Panel. ALP respondents are not necessarily representative of all potential survey respondents in the United States. Although the ALP aims to be representative of the U.S. population, nonresponse errors may be introduced at the panel recruitment phase, as well as in every wave of the panel. Still, this paper shows that a large proportion of respondents are ‘mobile-only’ or mix different devices over time when participating in the panel. This result is likely to extend to other Internet panel studies, as it is in line with earlier findings from the United States and Europe.
We find differences in the sociodemographic background of device use, but not in voting behavior. This means that not offering a mobile-optimized web survey would exclude specific sociodemographic strata of respondents, but not specific voters. It is important to note, however, that the results presented here come from a multivariate analysis. A univariate analysis with only voting behavior in the 2014 midterm elections as a predictor would have yielded small but significant differences: the group that always participates by phone is less likely (ME −0.04) to vote Republican. Because we also include age and ethnicity as predictors, this univariate effect disappears in the multivariate analysis. ‘Always phone’ respondents are younger and more often come from ethnic minority groups, strata of the population that are generally also less likely to vote Republican. This underlines the importance of recruiting and retaining hard-to-recruit respondents as panel members. As these respondents are often mobile-only, it is important to offer them a good user experience. RAND, which coordinates the ALP, implemented a mobile-compatible interface in 2014, but as mobile technology changes constantly, it remains important to study how the web survey experience on mobile devices can be improved.