A Collection of Caller ID Experiments

David Dutwin, Melissa Herrmann, Eran Ben Porath and Susan Sherr
Social Science Research Solutions

With the well-documented drop in response rates over the past quarter century, survey researchers continue to explore new and innovative ways to boost cooperation. Past research suggests that strategic use of caller ID can help foster participation. This paper reports on five experiments manipulating caller ID text and phone number in a variety of contexts.

Customized caller IDs are increasingly becoming a prerequisite for high quality research. A host of scholars have argued that caller ID can (and will) be used against us as survey researchers, as a method of screening out calls (see Battaglia et al. 2007; Couper 2005; Lepkowski 1999; Tuckel 2001). However, recent work suggests that it can also be used strategically for, rather than against, survey research, as a method of fostering participation (see Callegaro et al. 2010; Trussell and Lavrakas 2005). Still others find no conclusive effect in either direction (Curtin et al. 2005; Link and Oldendick 1999).

There have been, to date, a number of caller ID experiments noted in prior literature. Trussell and Lavrakas (2005) found about a 1.5 percent increase in response rates when using “Nielsen Ratings” compared to “Unknown” or “Out of area.” Okon et al. (2008) found increased response rates using “Census Bureau” compared to “Unknown Caller.” For the National Immunization Survey, Barron and Khare (2008) found about a 3 percent jump in response rate when using “NORC U Chicago” instead of “Toll Free.” And Callegaro et al. (2010) found the use of “Gallup” to help by 3 percent. Not all experiments yielded positive effects, however. Callegaro et al. also found that some of their experimental conditions (varying by target population) actually depressed response rate, and Fernandez and Hannah (2007) found no difference in a number of BRFSS-related experiments.

It is important to note that all of the above research leaves two major questions largely untouched with regard to caller ID. First, can a specific caller ID help if you are not a “brand name”? That is, how effective is a caller ID when it does not include a commonly recognizable name such as Gallup, Census, Nielsen, or a prominent university? Second, and more important, the above experiments deal with only half the message, and arguably the less important half: they manipulate the caller ID text, but not the number itself. What if some numbers foster greater participation than others?

This paper details five experiments regarding both text and number. The first largely replicates prior work by comparing the use of “SSRS” (for the survey vendor, Social Science Research Solutions) with “Univ MN” in a study within the state of Minnesota. A second experiment looks at the potential efficacy of a Spanish caller ID compared to an English ID in a national survey of Latinos. Third, we explore the use of the term “Jewish” in surveys of Jews. Fourth, we compare the use of a local 617 number to a 1-800 number in a study conducted in the state of Massachusetts. And finally, we similarly compare a “local” (484) number to a 1-877 number in a national omnibus survey context.

Before delving into these experiments, it is important to provide a brief primer on the technicalities of caller ID. Not every household has caller ID. The most recent published estimates are about five years old and find that just under 60 percent of landline households have caller ID (Glaser 2006); since caller ID now comes standard with many telephone contracts, this figure could be somewhat higher. It is important to remember, however, that a quarter of households do not have a landline telephone at all (Blumberg and Luke 2010). Cell phone owners all have caller ID, but they will not see any text unless the caller is already in their contact list. Presumably this will not be the case for survey organizations, and as such, cell phone recipients will see an incoming phone number but no text. Overall, then, if 60 percent of landline owners have caller ID and 25 percent of households are cell phone only, then only about 45 percent of all households have a caller ID that will show both number and text.
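
To make that back-of-envelope arithmetic explicit, the short sketch below reproduces it in Python. The inputs are simply the figures cited above, and the landline share is taken as the complement of the cell-phone-only rate, a simplification that ignores phoneless households:

    # Back-of-envelope share of households that will see both number and text.
    # Inputs are the figures cited above (Glaser 2006; Blumberg and Luke 2010).
    caller_id_given_landline = 0.60       # landline households with caller ID
    cell_only = 0.25                      # households with no landline at all
    landline = 1 - cell_only              # simplification: ignores phoneless households
    sees_number_and_text = landline * caller_id_given_landline
    print(f"{sees_number_and_text:.0%}")  # prints 45%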

It is also critical to note that caller ID is dependent upon the network service providers of both the sender and the receiver. To obtain customized caller ID text, survey organizations must request that their telephone provider change the text on record. Changing the number itself requires purchasing the number of choice, which must route to a physical location in the local area associated with that number. When calls go through digital providers, the receiver’s telephone provider pulls the number and text from the sender’s provider’s database. This is not guaranteed, however: the receiver’s provider may not be digital, and it is not required to perform the lookup. The receiver therefore may not see the ID you are trying to display. By and large, though, it is assumed that the text does “get through” and is displayed correctly.

The Experiments

Because three of our five tests concerned caller ID text, those tests utilized landline sample only, since, again, text will not display on cell phones. The Massachusetts survey utilized an address-based design and therefore did not have outbound cell phone dialing. The final experiment, however, comparing a 484 number to a 1-877 number, was explored on both landlines and cell phones.

Our first experiment was associated with the Minnesota Health Access Survey of 2009. While 90 percent of the landline sample was routed through a local number with the text “UNIV MN” displayed, a random 10 percent used the ID “SSRS SURVEY RSRC.” Overall, the response rate (RR3) for the SSRS text was in fact significantly higher than for UNIV MN (48.9% versus 47.7%, p = 0.03). We had been concerned that the use of a local university text might be less effective if respondents assumed the call would solicit donations, but of course we have no way to determine whether this was the case.
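
For readers interested in how such a comparison might be tested, the sketch below applies a standard two-proportion z-test. The counts are hypothetical placeholders, not the study’s actual dispositions (which we do not reproduce here), and because RR3 involves estimated eligibility, a simple proportion test is only an approximation of the test behind the reported p-value:

    # Hypothetical example of a two-proportion z-test on response rates.
    # Counts below are placeholders chosen to mirror the 10%/90% split and the
    # 48.9% vs. 47.7% rates reported above; they are NOT the study's real data.
    import numpy as np
    from statsmodels.stats.proportion import proportions_ztest

    completes = np.array([489, 4293])   # SSRS text vs. UNIV MN text (hypothetical)
    eligible = np.array([1000, 9000])   # estimated eligible cases (hypothetical)
    stat, pval = proportions_ztest(completes, eligible)
    print(f"z = {stat:.2f}, p = {pval:.3f}")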

Our second experiment utilized two texts, with near identical 1-800 numbers, during the 2009 Pew Hispanic Center’s National Survey of Latinos: “Pew Survey” and “Estudio de Pew.” It was thought that the Spanish text would be more efficacious in areas of high Hispanic household incidence. Overall, the English text secured a 31.4 percent response rate, compared to 31.6 percent for the Spanish text. Language of the text had no effect on response rates, cooperation rates, or other measures, either overall or when interacted with Hispanic density (see Table 1). Though not statistically significant, the Spanish text actually attained a higher response rate than the English text in areas of low Hispanic household incidence (36.1 versus 32.9 percent).

The third experiment yielded quite interesting results. We randomly assigned one of two texts to the Baltimore Jewish Population Survey: “BALTJEWISHSURV” and “BALTSURVSSRS.” Our initial hypothesis was that the term “Jewish” would foster participation in a listed sample source of likely Jewish households (a compilation of synagogue and other lists), but depress participation in the RDD sample, where the vast majority of respondents (over 95 percent) would be non-Jewish. In fact, we found no significant effect in the listed sample, and a significantly positive effect of the term “Jewish” in the RDD sample. Not only did the response rate in the RDD sample increase by 6.1 percentage points when using the term “Jewish,” but the source of this increase was both a lower refusal rate and a lower answering machine rate (p < 0.01; see Table 2). In short, we surmise that the use of “Jewish” suppressed the tendency to outwardly refuse and increased the tendency to pick up the phone rather than let the call be screened by an answering machine. The results are all the more interesting when one considers that 19 times out of 20, the person deciding whether to refuse, or whether to pick up rather than screen, is not Jewish. Importantly, the difference in ID text did not affect the incidence attained in the data, which matters because one goal of the study was Jewish population estimation.

Table 1 Hispanic survey response rates (RR3).

Strata                 English   Spanish
Surname sample         18.6%     19.3%
Very high incidence    32.4%     31.9%
High incidence         29.3%     28.0%
Medium incidence       21.1%     23.7%
Low incidence          32.9%     36.1%

Table 2 Jewish survey results, RDD sample.

                     Jewish   SSRS
Response rate 3      46.3%    40.2%
Percent refused       4.7%     8.7%
Answering machine     6.7%     9.9%
Incidence             4.7%     4.7%

The final two experiments underscore our most important finding, as they concern the efficacy of the telephone number itself. Again, we consider the telephone number the most important part of the caller ID, since it is the only identification we can reliably assume will be seen by the majority of respondents on both landlines and cell phones. In the first of these two tests, we randomly assigned either a 617 (Boston area code) number or a 1-800 number during the 2010 Massachusetts Health Insurance Survey (1). We found, as expected, that the 617 number attained a higher response rate than the 1-800 number (23.0% versus 19.5%, p < 0.01). The source of the difference was not a lower refusal rate or a lower answering machine rate, as these figures were nearly identical across the two numbers. Rather, the only difference was in the percent of sample that completed the interview (see Table 3).

A final comparison replicated the Massachusetts experiment on a national scale. This is an interesting test, since any local number will in fact be “local” only for those who live in its service area. Californians will not, and should not, consider a “484” number local; they will nevertheless recognize it as not being a 1-800/888/877 number, and may therefore be more willing to pick up the phone. In the end, we found a significant effect in the landline sample, but not in the cell phone sample. Response rate increased from 9 to 10 percent on landlines (p < 0.01). It is possible that this difference would grow with effort: a one-point gain on a 9 percent base is roughly a 10 percent relative lift, so a study with response rates in the 30s or 40s could realize a 3 to 4 percentage point difference overall. The non-finding on cell phones is interesting in that we found slightly lower rates of voicemail using the local number, but then a higher percentage of non-completed callbacks. One possibility is that the local number prompted people to answer the phone, but upon realizing that the call was a request for a survey, they asked to be called at a later time. The net result was a near identical response rate for both conditions.
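
The extrapolation above rests on an assumption of a constant relative lift, which the sketch below makes explicit; this is our own projection, not something the experiment demonstrates:

    # Projecting the landline effect (9% -> 10% response rate) to
    # higher-effort studies, under the assumption that the *relative*
    # lift stays constant. This is an illustrative assumption only.
    relative_lift = 0.10 / 0.09          # ~11% relative increase observed above
    for base in (0.30, 0.40):
        print(f"base {base:.0%} -> {base * relative_lift:.1%}")
    # base 30% -> 33.3%, base 40% -> 44.4% (i.e., a 3-4 point gain)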

Overall, we note that consistent with past research, the efficacy of caller ID varied by context, and even where differences were significant, the effects were modest at best. It is likely that in many instances a particular caller ID has a slightly positive impact on some segments of the population but a negative effect on others. Our key findings suggest that avoiding 1-800 numbers is the most promising caller ID manipulation for increasing cooperation and response rates. We believe further research on number variation is paramount, given the promise that local numbers can foster significantly greater response than 1-800 numbers, and the fact that cell phone respondents see only the caller ID number, not the text.

Table 3 Telephone number survey results.

                                        617     1-800
Response rate 3                         23.0%   19.5%
Percent of sample completed interview   19.6%   15.3%
Cooperation rate                        49.1%   43.8%

References

Battaglia, M., M. Khare, L.R. Frankel, M. Cay Murray, P. Buckley and S. Peritz. 2007. Response rates: how have they changed and where are they headed? In: (J. Lepkowski, C. Tucker, M. Brick, E. De Leeuw, L. Japec, P.J. Lavrakas, M. Link and R. Sangster, eds.) Advances in telephone survey methodology. John Wiley, Hoboken, NJ, pp. 529–560.
Blumberg, S.J. and J.V. Luke. 2010. Wireless substitution: early release of estimates from the National Health Interview Survey, January–June 2010. National Center for Health Statistics. December 2010. Available from: http://www.cdc.gov/nchs/nhis.htm.
Callegaro, M., A.L. McCutcheon and J. Ludwig. 2010. Who’s calling? The impact of caller ID on telephone survey response. Field Methods 22(2): 175–191.
Couper, M.P. 2005. Technology trends in survey data collection. Social Science Computer Review 23(4): 486–501.
Curtin, R., S. Presser and E. Singer. 2005. Changes in telephone survey nonresponse over the past quarter century. Public Opinion Quarterly 69(1): 87–98.
Fernandez, B.M. and K.M. Hannah. 2007. The impacts of caller ID on response and refusal rates for the BRFSS. Paper presented at the 62nd annual conference of the American Association for Public Opinion Research, Anaheim, CA, May 17–20.
Glaser, P. 2006. CMOR industry image study 2006. Paper presented at the respondent cooperation workshop, San Antonio, TX, September 13–15.
Lepkowski, J.M. 1999. More about telephone surveys. ASA series “What is a survey?” American Statistical Association, Section on Survey Research Methods, Alexandria, VA.
Link, M.W. and R.W. Oldendick. 1999. Call screening: is it really a problem for survey research? Public Opinion Quarterly 63(4): 577–589.
Okon, A.A., J.C. Moore and N.A. Bates. 2008. “Census Bureau” vs. “Unknown Caller” – caller-ID displays and survey cooperation. Research Report Series (Survey Methodology #2008-05). U.S. Census Bureau, Washington, DC. Available at: http://www.census.gov/srd/papers/pdf/rsm2008-05.pdf.
Trussell, N. and P.J. Lavrakas. 2005. Testing the impact of caller ID technology on response rates in a mixed mode survey. Paper presented at the 60th annual conference of the American Association for Public Opinion Research, Miami Beach, FL, May 12–15.
Tuckel, P. 2001. The vanishing respondent in telephone surveys. In: Proceedings of the Section on Survey Research Methods [CD-ROM]. American Statistical Association, Alexandria, VA.
Footnote
(1) The MA HIS final response rates approached 50%. The data here, however, are based on dispositions after a maximum of six call attempts, since the caller IDs were varied starting at the seventh attempt in an effort to maximize response. The sample is approximately half phone-listed address-based sample and half landline RDD.

