Computers, Tablets, and Smart Phones: The Truth About Web-based Surveys

Patrick Merle Florida State University

Sherice Gearhart University of Nebraska at Omaha

Clay Craig Coastal Carolina University

Matthew Vandyke Texas Tech University

Mary Elizabeth Brooks Susquehanna University

Mehrnaz Rahimi Miami University of Ohio

Abstract

The exponential increase in smartphone, tablet, and laptop use places web-based surveys at the center of survey methodology discussions. Because individuals now have a variety of options for taking online surveys, researchers must understand who completes their surveys on which device, as device choice may affect completion rates and data quality. The analysis of two national online surveys (n=487 and n=1,046) revealed that individuals using smartphones to complete the studies were significantly younger than those accessing the surveys through computers, while only one study indicated a significant gender difference, with females using smartphones more than males. Additionally, the data showed that respondents' level of education did not differ significantly by the device used to take the surveys.

Introduction

This study presents the results of an investigation conducted to determine the characteristics of respondents who use different platforms to complete web-based surveys. To date, limited research has evaluated differences between surveys completed on different devices (Couper 2013). Thus, this work presents an analysis of two sets of nationally representative panel data compiled from online surveys administered between December 2012 and February 2013.

The increased use of smartphones and tablets to go online has propelled web-based surveys to the forefront of public opinion research. Scholars initially drew comparisons between web-based surveys and other survey modes (e.g., face-to-face, telephone) to understand the implications of adopting online survey tools. Given the panoply of existing studies with often diverging conclusions, further research is needed to better understand the effect of survey mode on completion rates and data quality.

Given the exponential penetration and usage of mobile devices such as smartphones, netbooks, and tablets, research investigating surveys administered on mobile devices is of increasing interest in the field (Buskirk and Andres 2013). Data indicate that a growing share of survey participants now completes surveys using mobile devices (e.g., smartphones and tablets), and almost 60 percent of adult Americans use cell phones or laptops to go online (Smith 2010). This shift creates both opportunities and challenges for conducting self-administered surveys more efficiently (Peytchev and Hill 2010).

Empirical evidence indicates that surveys completed on mobile devices and personal computers elicit similar response quality, yet mobile devices yield lower response rates and longer completion times (de Bruijne and Wijnant 2013; Guidry 2012). Additionally, less than 1 percent of the population completes web-based surveys on a tablet (Guidry 2012; McClain et al. 2012). Surveys completed on smartphones show a higher proportion of breakoffs than those completed on tablets (Guidry 2012), and both smartphones and tablets show higher breakoff rates than personal computers.

Research Questions

To contribute to the existing discussion on such a pressing issue in survey research and to provide a clearer picture of potential differences among respondents who utilize different devices to complete surveys, the present study focused on three overarching research questions (RQs):

  1. RQ1: Do participants vary by age in their choice of device used to complete web-based surveys?
  2. RQ2: Are there significant differences by gender in the device used to complete web-based surveys?
  3. RQ3: Are there significant differences by education level in the device used to complete web-based surveys?

Method

Overview

Two web-based surveys, partially funded by university grants, were administered to nationally representative panels in the United States between December 2012 and February 2013. Each survey focused on different outcome variables: Study 1 examined levels of processing of political information, while Study 2 focused on the use of social media to voice opinions.

Sample and Procedure

Study 1. This study, supported by a university grant, relied on a national panel recruited by The Sample Network (Cherry Hill, NJ, USA), a private sample company. Each participant received nominal compensation ($3.00) to complete the survey. An initial total of 550 questionnaires was completed, but the elimination of incomplete and partial responses reduced the sample size to 487. The pool of respondents, representative of the US population, consisted of 48.5 percent females and 51.3 percent males (0.2 percent preferred not to answer), with a mean age of 48 (SD=14.07). The survey was fielded for one week, from December 5 to December 12, 2012.

Study 2. Participants were recruited by Toluna (Wilton, CT, USA), a professional survey company contracted to collect a sample of US adults (n=1,046). Potential respondents were contacted by the company and asked to participate voluntarily in exchange for credit in the company's internal reward system. A total of 1,871 people responded to the survey solicitation; after eliminating incomplete questionnaires, the final data set yielded 1,046 responses. The pool of respondents consisted of 50.8 percent females and 49.2 percent males, with a mean age of 44 (SD=15.81). The survey was fielded for one week, from February 19 to February 26, 2013.

Measures. Participants in both panels were asked to report which device they used to complete the survey, selecting among desktop computer, laptop, tablet, or smartphone. Age was measured with an open-ended question, and education level required participants to select the option corresponding to the highest degree obtained (some high school, high school/GED, some college, a 2-year degree, a 4-year degree, some graduate school, a graduate degree). Participants also indicated their gender by selecting male, female, or prefer not to answer (see Table 1).

Table 1 Education level.

Education level          Study 1 (n)   Study 1 (%)   Study 2 (n)   Study 2 (%)
Less than high school    7             1.4           19            1.8
High school/GED          115           23.6          225           21.5
Some college             136           27.9          281           26.9
2-year college degree    66            13.6          129           12.3
4-year college degree    116           23.8          286           27.3
Graduate degree          47            9.7           105           10.1
Total                    487           100.0         1,045         99.9

Note: One Study 2 respondent did not report education level; Study 2 percentages do not sum to 100 due to rounding.

Results

RQ1 focused on potential differences in age by the device used to complete the survey. Study 1 indicated a significant main effect of device on age, F(3, 483)=4.13, p<0.01. A Student-Newman-Keuls (SNK) post-hoc analysis showed that respondents who completed the survey on a smartphone (M=37.55, SD=12.40) were significantly younger than those who used a desktop computer (M=49.50, SD=14.30). The data revealed no significant age differences among respondents who used the remaining devices.

Study 2 also revealed a significant main effect of device on age, F(3, 1042)=13.10, p<0.001. An SNK post-hoc analysis showed that respondents who completed the survey using a smartphone (M=37.68, SD=12.59) were significantly younger than those who completed it on a desktop (M=47.70, SD=15.91). The data showed no significant age differences among respondents who used the remaining devices.
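
For readers who wish to reproduce this type of analysis, the sketch below shows how a comparable one-way ANOVA of age by device could be run in Python on hypothetical respondent-level data; the data frame, variable names, and values are illustrative and are not the study data. Because the Student-Newman-Keuls procedure is not available in the common Python libraries used here, Tukey's HSD is substituted for the post-hoc comparisons.

    # Minimal sketch on hypothetical data: omnibus F test of age across
    # device groups, then pairwise post-hoc comparisons (Tukey HSD as a
    # stand-in for the SNK procedure reported above).
    import pandas as pd
    from scipy import stats
    from statsmodels.stats.multicomp import pairwise_tukeyhsd

    # Hypothetical respondent-level data: one row per respondent.
    ages = stats.norm.rvs(loc=45, scale=15, size=400, random_state=42).round()
    df = pd.DataFrame({
        "device": ["desktop", "laptop", "smartphone", "tablet"] * 100,
        "age": ages,
    })

    # Omnibus test: does mean age differ across the four device groups?
    groups = [g["age"].to_numpy() for _, g in df.groupby("device")]
    f_stat, p_value = stats.f_oneway(*groups)
    print(f"F(3, {len(df) - 4}) = {f_stat:.2f}, p = {p_value:.3f}")

    # Pairwise comparisons between device groups.
    print(pairwise_tukeyhsd(endog=df["age"], groups=df["device"], alpha=0.05))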

RQ2 pertained to differences in the selection of a device by gender. Using a crosstab analysis, Study 1 found no significant difference in device selection by gender, χ2(3, N=486)=3.87, p=0.27. Conversely, Study 2 found that device use differed significantly by gender, χ2(3, N=1,045)=24.44, p<0.001. Although males were more likely to use a desktop (57.4 percent), females were more likely to use laptops (55.3 percent), smartphones (66.7 percent), and tablets (72.7 percent).
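
The gender comparison can be sketched in the same way. The fragment below, again on hypothetical data rather than the study data, builds a gender-by-device contingency table with pandas and runs a chi-square test of independence, mirroring the crosstab analysis described above.

    # Minimal sketch on hypothetical data: chi-square test of independence
    # between gender and device used to complete the survey.
    import pandas as pd
    from scipy.stats import chi2_contingency

    # Hypothetical respondent-level data.
    pairs = [("male", "desktop"), ("female", "laptop"), ("female", "smartphone"),
             ("male", "desktop"), ("female", "tablet"), ("male", "laptop")] * 80
    df = pd.DataFrame(pairs, columns=["gender", "device"])

    # Contingency table of observed counts, then the test itself.
    table = pd.crosstab(df["gender"], df["device"])
    chi2, p, dof, expected = chi2_contingency(table)
    print(table)
    print(f"chi2({dof}, N={len(df)}) = {chi2:.2f}, p = {p:.3f}")

    # Column percentages show which gender dominates each device category.
    print((pd.crosstab(df["gender"], df["device"], normalize="columns") * 100).round(1))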

RQ3 focused on differences in the selection of a device by education level. Study 1 showed no significant difference, F(3, 483)=0.50, p=0.70. Study 2 revealed a similar pattern, with no significant main effect of device used on level of education, F(3, 1041)=0.56, p=0.64. (See Table 2 for a summary of age differences.)

Table 2 Mean values of age by device.

                   Desktop           Laptop             Smartphone        Tablet
Age of respondent
  Study 1: Age     59.12 (13.12)a    45.53 (13.97)a     37.55 (12.40)a    45.30 (10.63)a
  Study 2: Age     47.70 (15.91)a    41.76 (15.39)a,b   37.68 (12.59)b    43.61 (17.80)a,b

Note: SD in parentheses. Within each row, means with different letter superscripts (a, b) differ significantly at p<0.001; in Study 2, the smartphone mean differs significantly from the desktop mean.

Discussion

In this analysis of two national panels, differences surfaced between survey respondents who participated using different devices. These results provide valuable insights for practitioners eager to advance web-based surveys and to understand specific patterns of participation.

Recently published reports reveal that younger populations are twice as likely to use a smartphone to access the Internet (Brenner 2013). Moreover, the majority of tablet owners are 18 to 49 years of age (Pew 2012), a trend discerned in both panels reviewed here. This study confirms that if younger participants are of interest to researchers, smartphone compatibility should be a primary concern during the survey design phase. This finding is also relevant for public opinion scholars eager to further elucidate questions associated with screen sizes, visual features, and item-missing data. Closer inquiry into the survey methodological issues associated with smartphones may, in fact, require an experimental design.

Although both studies revealed a similar pattern of device usage by gender, the absence of significant differences in Study 1 points to the plausible influence of sample size on the examination of this topic. The contrast between Study 1, with nearly 500 respondents, and Study 2, with more than 1,000, indicates the importance of larger samples when examining the relationship between device selection and survey completion. The discussion over sample sizes is hardly new in public opinion scholarship; nonetheless, this finding reinforces the need to consider this facet carefully when reviewing survey behaviors on multiple devices.

Finally, the present analysis underscored the absence of any significant differences by level of education, as the majority of participants in both panels reported at least some college experience. This information opens the discussion of the relationship between socioeconomic level and device usage. Importantly, if this finding were replicated in additional studies, it would give public opinion scholars a better sense of which outcome variables may be more pertinent to test in web-based surveys.

Limitations and Further Research

The present analysis examined the plausible influences and differences that may surface from taking surveys on a smartphone, tablet, laptop, or desktop computer.

Several limitations must be documented. The first drawback originates from the difference in sample sizes: while both studies reported here relied on national panels, the disparity in sample sizes may be a point of contention. The conclusions presented here also need to be placed in the overall context of low tablet usage. As noted, less than 1 percent of survey respondents complete surveys on such devices, a pattern also observed in this study (Guidry 2012; McClain et al. 2012). Further inquiries consequently need to be developed to delineate more accurately the survey behaviors associated with each device. In summary, additional examinations must continue developing a research agenda on differences between surveys taken on different devices to advance scholarship.

References

Brenner, J. 2013. Pew Internet: Mobile. Available at: http://pewinternet.org/Commentary/2012/February/Pew-Internet-Mobile.aspx.
Buskirk, T.D. and C. Andres. 2013. Smart surveys for smart phones: exploring various approaches for conducting online mobile surveys via smartphones. Survey Practice 5(1). Available at: http://www.surveypractice.org/index.php/SurveyPractice/article/view/63/html.
Couper, M.P. 2013. "Surveys on mobile devices: opportunities and challenges." Presentation at Web Surveys for the General Population: How, Why and When? London, UK.
de Bruijne, M. and A. Wijnant. 2013. Comparing survey results obtained via mobile devices and computers: an experiment with a mobile web survey on a heterogeneous group of mobile devices versus a computer-assisted web survey. Social Science Computer Review 31(4): 482–504.
Guidry, K.R. 2012. "Response quality and demographic characteristics of respondents using a mobile device on a web-based survey." Presentation at the annual meeting of the American Association for Public Opinion Research, Orlando, FL.
McClain, C.A., S.D. Crawford and J.P. Dugan. 2012. "Use of mobile devices to access computer-optimized web instruments: implications for respondent behavior and data quality." Presentation at the annual meeting of the American Association for Public Opinion Research, Orlando, FL, May 16–19, 2012.
Pew. 2012. A snapshot of e-reader and tablet owners (infographic). Available at: http://libraries.pewinternet.org/2012/01/27/a-snapshot-of-ereader-and-tablet-owners.
Peytchev, A. and C.A. Hill. 2010. Experiments in mobile web survey design: similarities to other modes and unique considerations. Social Science Computer Review 28(3): 319–335.
Smith, A. 2010. Mobile access. Available at: http://www.pewinternet.org/Reports/2010/Mobile-Access-2010.aspx.

