Since 1971, the National Survey on Drug Use and Health (NSDUH), previously known as the National Household Survey on Drug Abuse, has collected critical information regarding tobacco, alcohol, and drug use, among a myriad of other health indicators. Approximately 70,000 completes are obtained annually in the 50 United States and the District of Columbia. In the two decades prior to 2020, NSDUH was administered exclusively in person, using a combination of computer-assisted personal interviewing (CAPI) and audio computer-assisted self-interviewing (ACASI). Following challenges brought on by the COVID-19 pandemic, NSDUH transitioned to a multimode data collection protocol at the beginning of 2021 in which individuals are first offered the opportunity to respond by web. Interviewers then visit the bulk of sample dwelling units that do not complete the web screener or any selected interviews by web, completing household screening on a tablet and conducting any selected interviews via CAPI and ACASI. The new self-administered web mode has quickly become a popular way to respond to the survey: during the first two years of the multimode design, 68,368 (48.4%) of the 141,219 completes were obtained by web.
NSDUH’s conversion to a multimode survey has led to concern about measurement error attributable to the survey mode (Groves 1989). That is, certain individuals may be inclined to answer the same question differently when asked in person by an interviewer than when answering in a self-administered, web-based setting. For example, self-administered data collection modes have been shown to reduce social desirability biases relative to interviewer-administered modes (Tourangeau and Smith 1996). Pertinent to NSDUH’s key outcome variables, prior research has shown that self-administration yields higher reports of alcohol consumption (Aquilino 1994) and illicit drug use (Schober et al. 1992) than interviewer administration.
Methods to quantify measurement error attributable to the survey mode include fielding concurrent surveys on separate samples (e.g., Link et al. 2008), randomizing the mode offered for a single sample (e.g., Chang and Krosnick 2010), or advanced statistical approaches (e.g., Vannieuwenhuyze, Loosveldt, and Molenberghs 2010; Ollerenshaw 2023). Another method is to reinterview respondents in two (or more) modes at two separate points in time. Reinterviews have long been utilized to assess the simple response variance component of measurement error (e.g., Hansen, Hurwitz, and Pritzker 1964; Biemer and Wiesen 2002; Harrison et al. 2007), but they can also be used to assess systematic differences that may be linked to the mode of survey administration. We take that approach here by evaluating reinterview data on a (nonrandom) subset of 776 NSDUH respondents between 2021 and 2022 who began the survey in a self-administered web setting, broke off prior to completion, and then later completed the survey during the in-person CAPI follow-up phase of the quarterly data collection cycle; we refer to these individuals as “second chance respondents.” The in-person reinterview started the survey from the beginning, enabling comparison of individual-level responses across the two modes.
We focus on three binary yes/no outcome variables: whether the respondent had (1) ever smoked cigarettes; (2) ever consumed alcohol; or (3) ever used marijuana. These indicator variables are based on source questions that occur early in the questionnaire and have high response rates. They are also minimally time-dependent, although we removed one respondent whose data indicated initiating one of these three behaviors for the first time between the two survey administrations.
Results presented in Table 1 show that the three outcome variables have inconsistency percentages ranging from 14.1% to 17.8%. By comparison, a reinterview analysis of a random sample of NSDUH 2006 respondents (both interviews in CAPI mode) found inconsistency percentages for these three outcomes ranging from 3.2% to 4.2% (SAMHSA 2010). Perhaps even more surprising is that second chance respondents are far more likely to change their answer from “no” to “yes” in the presence of an interviewer than they are to change their answer from “yes” to “no.” For instance, while 14.0% of individuals initially saying they had used marijuana claimed never to have used the drug when asked by an interviewer, 22.4% of respondents who initially said “no” on the web said “yes” during the in-person interview. This finding is somewhat puzzling, as it seems at odds with the notion that abstaining from these three behaviors would be the more socially desirable response.
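To make the quantities in Table 1 concrete, the following is a minimal sketch of how the overall inconsistency percentage and the two directional switch rates can be computed from paired reinterview responses. The data frame and column names (web_ans, capi_ans) are hypothetical stand-ins, not actual NSDUH variables.

```python
import pandas as pd

# Hypothetical paired reinterview data: one row per second chance respondent,
# holding the initial web answer and the subsequent CAPI answer to the same
# yes/no item. Values here are illustrative only.
df = pd.DataFrame({
    "web_ans":  ["yes", "no", "yes", "no", "no", "yes"],
    "capi_ans": ["yes", "yes", "no", "no", "yes", "yes"],
})

# Overall inconsistency: share of respondents whose answers disagree across modes.
inconsistent_pct = (df["web_ans"] != df["capi_ans"]).mean() * 100

# Directional switch rates, conditioned on the initial web answer.
yes_to_no_pct = (df.loc[df["web_ans"] == "yes", "capi_ans"] == "no").mean() * 100
no_to_yes_pct = (df.loc[df["web_ans"] == "no", "capi_ans"] == "yes").mean() * 100

print(f"Inconsistent: {inconsistent_pct:.1f}%")
print(f"Switched yes -> no: {yes_to_no_pct:.1f}%")
print(f"Switched no -> yes: {no_to_yes_pct:.1f}%")
```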
A natural follow-up question is whether the second chance respondents provide substantively different data than other web respondents. If so, this would help justify the additional effort expended to convert these breakoff cases into completes. To help answer this question, Table 2 contrasts unweighted percentages of the three outcomes for second chance respondents against CAPI-only, web-only, and all respondents. As suggested by the analyses in Table 1, in aggregate, second chance respondents are more likely to have engaged in these behaviors. They are over 15 percentage points more likely than web-only respondents to have ever smoked cigarettes or used marijuana, and roughly 3 percentage points more likely to have ever consumed alcohol.
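The group contrasts in Table 2 amount to unweighted means of 0/1 indicators within each respondent group. Below is a minimal sketch, assuming a respondent-level file with a hypothetical group label and outcome indicators (not actual NSDUH variable names).

```python
import pandas as pd

# Hypothetical respondent-level file: a mode-group label plus 0/1 lifetime-use
# indicators. Rows and values are illustrative only.
df = pd.DataFrame({
    "group":    ["second_chance", "web_only", "capi_only", "web_only", "capi_only"],
    "ever_cig": [1, 0, 1, 0, 1],
    "ever_alc": [1, 1, 1, 0, 1],
    "ever_mj":  [1, 0, 0, 0, 1],
})

# Unweighted percentage reporting each behavior, by respondent group,
# mirroring the structure of Table 2.
table2 = df.groupby("group")[["ever_cig", "ever_alc", "ever_mj"]].mean() * 100
print(table2.round(1))
```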
Because our analysis data set was nonrandom, the result of a natural experiment occurring in response to data collection challenges introduced by the COVID-19 pandemic, we can only speculate on the reasons for these findings. One reason may be that the stigma of cigarette, alcohol, and marijuana use in the United States is diminishing over time, muting the effects previously found in studies such as Aquilino (1994) and Schober et al. (1992). Granted, it is also possible that usage outcomes for other substances would exhibit the expected directional change (lower reports in CAPI), unlike the three examined here. Future research could examine those outcomes, as well as (re)analyze outcomes over time as additional second chance respondents accumulate under the new NSDUH multimode design. Also, the order of modes offered could be influencing results. A randomized sequential mode experiment, such as the one described in McMorris et al. (2009), could be pursued in the future to quantify the impact of mode order. Lastly, models could be fit to assess the relationships between sociodemographic variables (or other survey outcomes) and the likelihood of being a second chance respondent and/or responding inconsistently; a sketch of one such model follows.
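As an illustration of the modeling idea above, the following is a minimal sketch of an unweighted logistic regression relating the probability of responding inconsistently to a couple of covariates. The variable names (inconsistent, age, sex) and data values are hypothetical, and a design-consistent analysis of actual NSDUH data would need to account for the complex sample design and weights.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical analysis file: a 0/1 flag for answering inconsistently across
# modes plus two illustrative covariates (not actual NSDUH variables).
df = pd.DataFrame({
    "inconsistent": [1, 0, 0, 1, 0, 1, 0, 0],
    "age":          [19, 34, 52, 23, 41, 30, 66, 27],
    "sex":          ["F", "M", "F", "M", "F", "M", "F", "M"],
})

# Unweighted logistic regression; the same specification could instead model
# the probability of being a second chance respondent.
fit = smf.logit("inconsistent ~ age + C(sex)", data=df).fit(disp=False)
print(fit.summary())
```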