Does Voice Matter for Youth Reports of Tobacco Use? An Interactive Voice Response Experiment

Niki Mayo RTI International

Brenna Muldavin RTI International

Douglas B. Currivan RTI International

Introduction

Survey mode has been associated with differing reports of smoking behavior among youth, with household telephone surveys generally yielding lower estimates of youth smoking rates than school-based surveys. Researchers assume the lower estimates from telephone surveys reflect underreporting due to youths’ concerns about parents or others overhearing their responses. School surveys lessen concerns about parents overhearing youths’ responses but exclude youth who have dropped out of school and underrepresent those who attend infrequently. As a result, the best methods for accurately measuring youth smoking behavior continue to be investigated (Fowler and Stringfellow 2001; Gfroerer et al. 1997).

For household telephone surveys, using interactive voice response (IVR) to allow youth to self-report has been shown to increase youth reports of smoking compared to interviewer administration (Currivan et al. 2004; Moskowitz 2004). Nevertheless, this research shows that a significant gap remains between youth smoking estimates from IVR household surveys and school surveys for the same population (Currivan et al. 2004).

The observed differences in estimates of youth smoking between household telephone surveys and school surveys raise the question of how disclosure risk might influence how youth answer smoking questions. If youth respondents are concerned about disclosure risk, can we manipulate the household telephone survey protocol through IVR to influence how youth think about the potential audience for their responses to smoking questions? The standard adult female voice used in many IVR applications may encourage youth to think about the risk of disclosure to adults and therefore discourage reporting smoking behavior or intentions. However, a youth voice may encourage respondents to think about disclosure to an audience of their peers and perhaps lead to increased reporting of smoking behavior or intentions. If youth respondents are not sufficiently concerned about disclosure or are not influenced by the voice type, no differences in reporting smoking behavior or intentions might be observed. In this paper, we present the results of an IVR experiment where youth respondents were randomly assigned to an adult or youth female voice to assess whether their reports of smoking behavior varied by voice type.

Background and Research Questions

Only a few research studies have assessed whether different IVR voices influence respondents. This research has focused on adults and asked a variety of sensitive questions, though none involved tobacco use.1 Overall, the literature gives little reason to suspect that voice type has a large or consistent effect on adults’ responses to sensitive questions (Couper et al. 2004; Evans and Kortum 2010; Tourangeau et al. 2003).

We did not find any published studies that directly assessed how IVR voices might influence youths’ survey responses in telephone interviews, or how audio computer-assisted self-interviewing (ACASI) voices might do so in in-person interviews. Youth could plausibly respond to alternative IVR voices more strongly than adults do if they perceive a higher risk of revealing sensitive information.

In our experiment, we assumed that youth respondents were thinking about disclosure risk in terms of potential audience. Thus, we expected youth receiving the adult female IVR voice to be thinking about the risk of parents or guardians learning their smoking behavior. Likewise, we expected youth receiving the youth female IVR voice to be thinking about the risk of siblings or friends learning their smoking behavior. These expectations are consistent with the “computers as social actors” viewpoint whereby youth would have to imagine the person behind the voice and think about how that person would react to their responses to questions on smoking behavior and intentions (Reeves and Nass 1997). The contrasting view, as discussed in Couper et al. (2004), would lead one to expect youth participants to respond to the two voices as similar computer applications that did not vary significantly in disclosure risk.

We were also interested to see if results varied by demographic subgroup, as they did for Currivan et al. (2004). Even though Currivan et al. (2004) studied the effect of IVR versus computer assisted telephone interviewing (CATI) by live interviewer and did not examine differences between IVR voices, their study focused on youth responses to sensitive smoking items. The study found that some demographic subgroups of youth responded to the experimental design differences more than others.

Methods

The data used for this study come from the Florida Youth Cohort Tobacco Study (FL YCS), sponsored by the Florida Department of Health. The FL YCS is a longitudinal telephone survey designed to track tobacco-related beliefs, attitudes and experiences of Florida youth aged 12–16. Baseline interviews were conducted in 2009 in English and are the basis for this research.

Florida households were sampled using list-assisted landline random digit dial (RDD) numbers, supplemented by directory-listed numbers to increase efficiency in reaching households with at least one eligible youth. Interviews were completed with 1,546 youth, though we present results for the 1,444 youth who provided sufficient data to be included in the final analyses. Survey data were weighted to be representative of Florida youth age 12–16 who lived in households covered by the sampling frame.

Gaining cooperation to conduct interviews with eligible youth involved obtaining both parental consent and youth assent. Consent from the parent/guardian was acquired before speaking to the eligible youth and obtaining assent. Youth were selected using the most recent birthday method when more than one eligible youth was identified in a household.
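The most recent birthday method selects, among the eligible youth in a household, the one whose birthday occurred most recently. A minimal sketch of that selection rule follows; the roster structure and the helper name are hypothetical, for illustration only:

```python
from datetime import date

def most_recent_birthday(eligible, today=None):
    """Select the eligible youth whose birthday most recently occurred.
    `eligible` maps a name to a (birth_month, birth_day) pair (hypothetical format)."""
    today = today or date.today()

    def days_since(md):
        month, day = md
        bday = date(today.year, month, day)
        if bday > today:  # birthday hasn't happened yet this year: use last year's
            bday = date(today.year - 1, month, day)
        return (today - bday).days

    return min(eligible, key=lambda name: days_since(eligible[name]))

# On 2009-06-15, a May 1 birthday (45 days ago) beats a September 20 one (last year)
pick = most_recent_birthday({"A": (5, 1), "B": (9, 20)}, today=date(2009, 6, 15))
```

The rule is a common quasi-random within-household selection: it requires no full household roster, only birthdays of the eligible youth.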

Youth were asked a series of demographic questions before being told they would be asked questions about their experiences with tobacco products through an automated phone system. Instructions were given on how to use the IVR system before youth were switched to it. Within the system, youth were randomly assigned to hear pre-recorded questions from either the adult or youth female voice. Respondents entered answers using the telephone keypad. Upon completion of the IVR module, youth were reconnected with a live interviewer to finish the study.

The overall response rate was 15.4 percent using American Association for Public Opinion Research (AAPOR) RR4. This rate was negatively impacted by our screening procedures, which required affirmative parental consent.
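AAPOR RR4 counts partial interviews as respondents and applies an estimated eligibility factor e to cases of unknown eligibility. A minimal sketch of the formula, with purely hypothetical disposition counts (not the FL YCS dispositions):

```python
def aapor_rr4(I, P, R, NC, O, UH, UO, e):
    """AAPOR Response Rate 4.
    I: complete interviews; P: partial interviews; R: refusals and break-offs;
    NC: non-contacts; O: other eligible non-interviews;
    UH/UO: unknown-eligibility households/other unknowns;
    e: estimated proportion of unknown-eligibility cases that are eligible."""
    return (I + P) / ((I + P) + (R + NC + O) + e * (UH + UO))

# Hypothetical counts for illustration only (not the study's actual dispositions)
rate = aapor_rr4(I=1546, P=0, R=3000, NC=2500, O=500, UH=8000, UO=0, e=0.4)
```

Because e discounts unknown-eligibility cases, stringent screening (such as requiring affirmative parental consent before reaching the youth) inflates the denominator and depresses the rate, consistent with the 15.4 percent reported here.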

Results

Table 1 presents a voice type comparison for lifetime and recent tobacco use behaviors among respondents in the IVR mode. Youth receiving the adult female voice were slightly more likely than those receiving the youth female voice to report that they had ever tried cigarette smoking. Conversely, youth receiving the youth female voice were slightly more likely to report that they had smoked one day or more in the past 30 days, the most sensitive question in the instrument. Neither difference was statistically significant at the conventional p<0.05 level, though both p-values equaled 0.10, suggesting marginal significance. While these might represent meaningful differences between the two voices, we cannot be confident that true differences exist.

Table 1 Tobacco use behaviors among all respondents by IVR voice.

Survey item Adult female voice Youth female voice
Have you ever tried cigarette smoking, even 1 or 2 puffs?*
  Yes 13.4% 10.6%
  No 86.6% 89.4%
  (n) (734) (687)
During the past 30 days, on how many days did you smoke cigarettes, even 1 or 2 puffs?*^
  Didn’t Smoke (0 days) 52.0% 55.6%
  Smoked (1 day or more) 42.0% 44.4%
  (n) (77) (76)

*Differences in response patterns due to voice were not statistically significant at p<0.05.

^Respondents routed to these questions must have answered “yes” to “Have you ever tried cigarette smoking, even 1 or 2 puffs?”
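As a rough check on the marginal p-value for the lifetime-smoking item, a two-proportion z-test can be run on the percentages and sample sizes shown in Table 1. This is only an unweighted approximation, since the published estimates come from weighted data:

```python
from math import sqrt, erfc

def two_prop_z(p1, n1, p2, n2):
    """Two-sided two-proportion z-test using pooled variance (unweighted counts)."""
    x1, x2 = p1 * n1, p2 * n2                      # implied affirmative counts
    p_pool = (x1 + x2) / (n1 + n2)                 # pooled proportion
    se = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    return z, erfc(abs(z) / sqrt(2))               # two-sided normal p-value

# "Ever tried cigarette smoking": 13.4% of 734 (adult voice) vs. 10.6% of 687 (youth voice)
z, p = two_prop_z(0.134, 734, 0.106, 687)
```

On these unweighted figures the test yields a p-value of roughly 0.10, in line with the marginal significance reported in the text.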

Table 2 shows outcomes for intentions to smoke cigarettes by voice type. Youth respondents receiving the adult female voice were significantly more likely than those receiving the youth female voice to report that they would probably smoke anytime during the next year and that they would smoke if a best friend offered a cigarette (p<0.05).

Table 2 Intentions to smoke cigarettes among all respondents by IVR voice.

Survey item Adult female voice Youth female voice
Do you think you will smoke a cigarette anytime during the next year?*
  Definitely/probably yes 8.1% 4.8%
  Definitely/probably no 91.9% 95.2%
  (n) (721) (687)
If one of your best friends offered you a cigarette, would you smoke it?*
  Definitely/probably yes 7.6% 4.3%
  Definitely/probably no 92.4% 95.7%
  (n) (721) (692)

*Differences in response patterns due to voice showed statistical significance at p<0.05.

We were also interested in whether factors such as age or gender were associated with the observed differences in smoking behavior and intentions by voice type. Younger youth aged 12–13 (8.9 percent) and female respondents (15.4 percent) were significantly more likely to report that they had ever tried cigarette smoking when asked by the adult female voice rather than the youth voice (p<0.05).

We found similar results for intentions to smoke anytime during the next year: younger youth aged 12–13 (7.1 percent) and female respondents (8.8 percent) again reported significantly more affirmative intentions to the adult female voice (p<0.05). Likewise, younger youth and females reported significantly more affirmative intentions to the adult female voice (7.6 percent and 7.1 percent, respectively; p<0.05) when asked whether they would smoke if a best friend offered them a cigarette.

Discussion

Following the “computers as social actors” paradigm (Reeves and Nass 1997), we assumed the perceived “audience” for youth responses could be salient and result in greater reporting of smoking behavior and intentions with the youth female voice. Instead, all differences in youth reports of smoking behavior or intentions that were statistically significant at the conventional p<0.05 level involved higher reports with the adult female voice. Voice type did matter for youth smoking reports, but not in the direction expected. The adult female voice – not the youth voice – elicited significantly higher reports of smoking intentions for the sample as a whole, and elicited higher reports for both intentions and past use for youths aged 12–13 and females.

Because voice type produced some differences in youths’ responses in this study, we recommend survey practitioners pretest and experiment with different voices when budget and time allow. In our study, we found youths who were either younger or female were most likely to report differently based on voice type. Currivan et al. (2004) found that female respondents were more likely than males to report smoking behavior in IVR mode compared to CATI mode, particularly those girls who believed their parents would strongly disapprove of their smoking. Combined with our current study, these findings indicate some youth subpopulations might be more sensitive to protocol differences when survey questions focus on sensitive topics. For youth surveys that cover sensitive topics such as illicit or illegal substance use, evaluating IVR voice types before administering the primary data collection might be useful to avoid this kind of bias.

If survey-specific experimentation with voice types is not feasible, we suggest practitioners continue to use the “standard” adult female voice typical of most IVR applications. Although our study could not determine whether this standard voice increases or decreases reporting bias, the measurement bias associated with this standard voice would be consistent with data from most existing surveys using this kind of IVR voice. The common bias produced by the standard voice could then be ignored as a source of differences in estimates when comparing results across these surveys.

References

Couper, M.P., E. Singer and R. Tourangeau. 2004. Does voice matter? An interactive voice response experiment. Journal of Official Statistics 20(3): 551–570. Available at: http://www.jos.nu/Articles/abstract.asp?article=203551.
Currivan, D.B., A.L. Nyman, C.F. Turner and L. Biener. 2004. Does telephone audio computer-assisted self-interviewing improve the accuracy of prevalence estimates of youth smoking? Evidence from the UMass Tobacco Study. Public Opinion Quarterly 68(4): 542–564. Available at: http://poq.oxfordjournals.org/content/68/4/542.full.
Evans, R.E. and P. Kortum. 2010. The impact of voice characteristics on user response in an interactive voice response system. Interacting with Computers 22(6): 606–614. Available at: http://www.sciencedirect.com/science/article/pii/S0953543810000639.
Fowler, Jr., F.J. and V.L. Stringfellow. 2001. Learning from experience: estimating teen use of alcohol, cigarettes, and marijuana from three survey protocols. Journal of Drug Issues 31(3): 643–664.
Gfroerer, J.C., D. Wright and A. Kopstein. 1997. Prevalence of youth substance use: the impact of methodological differences between two national surveys. Drug and Alcohol Dependence 47(1): 19–30. Available at: http://www.sciencedirect.com/science/article/pii/S037687169700063X.
Moskowitz, J.M. 2004. Assessment of cigarette smoking and smoking susceptibility among youth. Telephone computer-assisted self-interviews versus computer-assisted telephone interviews. Public Opinion Quarterly 68(4): 565–587. Available at: http://poq.oxfordjournals.org/content/68/4/565.full.
Reeves, B. and C. Nass. 1997. The media equation: how people treat computers, television, and new media like real people and places. CSLI and Cambridge University Press, Cambridge.
Tourangeau, R., M.P. Couper and D.M. Steiger. 2003. Humanizing self-administered surveys: experiments on social presence in web and IVR surveys. Computers in Human Behavior 19(1): 1–24. Available at: http://www.sciencedirect.com/science/article/pii/S0747563202000328.
Footnotes
1 Couper et al. (2004) investigated IVR voice type in relation to illicit substance use, but only with adults.

