A Direct Comparison of ABS and Telephone Sampling in a Pilot Study of Children’s Health

Mary E. Losch University of Northern Iowa

Peter Damiano University of Iowa

Jean Willard University of Iowa

Anne Bonsall Hoekstra University of Northern Iowa

Ki H. Park University of Iowa

Duoc Nguyen University of Northern Iowa

This pilot study was designed to determine the relative strengths and weaknesses of ABS (mixed-mode web/telephone) as a sampling alternative to list-assisted RDD in a statewide, targeted-population study of children’s health. Additionally, the pilot served as a feasibility study aimed at determining whether ABS can be used effectively for small studies (relative to national-scale data collection) that do not rely on substantial budgets. The results of this pilot study support the contention that ABS sampling improves coverage (i.e., reduces coverage error) relative to traditional RDD sampling.

Background

In the early 2000s, articles examining addressed-based alternatives to traditional approaches to sampling began to appear in the literature. Early uses of the United States Postal Service (USPS) Delivery Sequence File (DSF) were focused on the search for more cost-effective approaches to on-site enumeration for in-person surveys (e.g., Iannacchione et al. 2003). Survey scientists at RTI (Iannacchione and colleagues) and at NORC conducted several extensive comparisons of address-based samples and household unit (HU) field enumeration (see O’Muircheartaigh et al. 2002; O’Muircheartaigh et al. 2005). These findings supported the value of the DSF and address-based samples as a less expensive sampling alternative.

Falling response rates and, more recently, growing concerns about coverage bias and the complexity of combining cell phone samples with landline samples have prompted several alternatives to random digit dialing (RDD) sampling for telephone surveys. Although not the only sampling response to concerns about coverage bias in telephone surveys (e.g., Guterbock et al. 2008; Lambert et al. 2010), address-based sampling (ABS) has received the most attention as a possible alternative to traditional RDD sampling for population-based surveys.

Extending the early work at RTI and NORC, Michael Link and his colleagues have conducted several large studies that expand the knowledge base about the feasibility and strengths of ABS for probability population surveys (e.g., Link et al. 2005, 2008). In addition, a number of large national studies are either now testing or have included ABS in their sample designs for some or all of their data collection. Examples include the CDC’s REACH U.S. survey, conducted by NORC (see Barron 2009, for detailed information), the American National Election Studies, and the National Cancer Institute’s Health Information National Trends Study.

Comparing ABS and RDD in a Statewide Study of Children’s Health

The findings to date suggest that ABS holds promise as a valuable alternative to RDD sampling. However, there are aspects of ABS sampling that may limit its utility, especially in cases where resources and field periods are limited. ABS requires at least one mail contact per case, and unless a relatively low-cost data collection mode is offered (e.g., web or self-administered mail-back), the costs of added programming, materials, mailing-preparation labor, and postage can be prohibitive. In addition, if a mixed- or multi-mode approach is adopted, additional field time is needed to allow for the completion of web or self-administered questionnaires before beginning the telephone data collection. An additional question that requires examination is the practicality of ABS designs for studies with modest budgets in which only a subset of the population is targeted and the group cannot be identified geographically (e.g., through geocoding).

Methods

In April 2010, the University of Northern Iowa Center for Social and Behavioral Research (CSBR) conducted a pilot study of children’s health using both list-assisted RDD and ABS sampling. The population of interest was households with children under the age of 18. The pilot was completed prior to launching the 2010 statewide study, which was the third survey of children’s health needs conducted by the University of Iowa Public Policy Center in collaboration with the Iowa Department of Public Health. The goal of the pilot was to provide approximately 300 completed interviews for each sampling design. To reduce anticipated costs, a web mode was added to the ABS sampling. Based on the estimate that 25% of households have a child under 18 and an anticipated response rate of 40%, an ABS sample frame of 3,000 addresses was purchased from Marketing Systems Group (MSG). A list-assisted RDD sample of approximately 7,000 numbers, including a portion targeted for households with children under 18, was also purchased from MSG and used for the telephone group. This approach was consistent with sampling designs used in earlier data collection efforts in this study series. In order to test the actual yield of the ABS sample, no differential targeting of households was included.
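The frame-size arithmetic above can be sketched as follows; this is a minimal illustration (the function name and Python framing are ours), using the 25% eligibility and 40% response-rate figures from the text:

```python
import math

def required_addresses(target_completes, eligibility_rate, response_rate):
    """Addresses needed so that eligible, responding households
    yield roughly the target number of completed interviews."""
    return math.ceil(target_completes / (eligibility_rate * response_rate))

# Pilot figures: 300 completes, ~25% of households with a child
# under 18, ~40% anticipated response rate.
print(required_addresses(300, 0.25, 0.40))  # -> 3000
```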

ABS Sample. In the ABS group, a packet was mailed to each of the 3,000 addresses on April 6, 2010. A unique 6-character alphanumeric code was assigned to each household address for tracking. The packet included an information letter with instructions for completing a web questionnaire (including a unique web access code) and information indicating that we would call if the web mode was not accessed within the next week or so. A contact information card (with access code) was also included to indicate eligibility (child under 18 in home), phone number preference, and preferred call times. A business reply envelope was included for return of the contact/eligibility card. A reminder postcard was mailed to those who did not return the contact sheet or complete the web option within 10 days.
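A unique tracking-code scheme like the one described can be generated in a few lines. This is an illustrative sketch only; the uppercase-plus-digits alphabet and the seed are our assumptions, not details reported by the study:

```python
import random
import string

def unique_codes(n, length=6, seed=42):
    """Draw n distinct alphanumeric codes of a given length.

    With 36**6 (about 2.2 billion) possible codes, collisions for a
    3,000-address frame are unlikely, but the set guards against them.
    """
    rng = random.Random(seed)  # fixed seed so the assignment is reproducible
    alphabet = string.ascii_uppercase + string.digits
    codes = set()
    while len(codes) < n:
        codes.add("".join(rng.choice(alphabet) for _ in range(length)))
    return sorted(codes)
```

Each address would then be paired with one code so that mail, web, and CATI contacts for a household could be linked.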

The ABS sample yielded the following profile with regard to phone numbers appended and provided:

30% — No number (no phone number appended to the sample and none provided on the contact card)

69% — No preferred number (the number appended to the sample was the only number available)

1% — Same number (the number appended to the sample matched the preferred number provided on the contact card)

0.5% — Provided number (a number, or a different preferred number, was provided on the contact card)

About 10 days following the initial mailing, ABS cases were moved to the CATI facility for phone follow-up if no web response was provided. Because residential addresses were the unit of analysis, and because generic “[CITY] Household” was used for the labels, undeliverable packets (n=163) were removed from further contact and considered ineligible. Phone numbers received up to 15 attempts to reach a final disposition for cases that returned contact/eligibility cards and up to 10 attempts for all others. The mean number of attempts was 4.3, and the average interview length was 17.6 minutes. The ABS data collection ended when all available phone numbers had received the maximum number of attempts.

List-Assisted RDD Sample. Data collection for the RDD group began on April 19, 2010. Sample was released in replicates, and less than a third of the RDD sample numbers were required to reach the goal of 300 completed interviews. All phone numbers received up to 10 attempts to reach a final disposition. The mean number of attempts was 3.6, and the average interview length was 16.9 minutes.

Results

Sample sizes, costs, response rates, and field periods. Table 1 summarizes the comparisons between the ABS and RDD designs. The RDD sample yielded more interviews overall; the ABS design fell slightly short of the goal of 300 completed interviews. The costs of the ABS approach were much higher than those of RDD in this pilot. In addition to the costs associated with the mailing packet (materials, postage, mailing preparation) and web programming, substantial administrative costs were incurred in the day-to-day processing of contact cards, tracking web completes in the database, updating phone numbers, and scheduling follow-up calls. The bulk of the roughly 75% cost difference was in staff salaries/wages.

Table 1 Comparison of sample sizes, costs, response rates, and field periods.

Comparison Category RDD ABS
Completed Interviews 339 279
Cost Per Interview $38 $67
RR3 / COOP3 (%) 44 / 84 46 / 86
Field Period 4/19 – 5/12 (24 days) 4/6 – 5/13 (38 days)

Using the AAPOR RR3 and COOP3 calculations, the ABS design yielded slightly higher response and cooperation rates overall. At 38 days in the field, the ABS field period was roughly 58% longer than the 24-day RDD field period.
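The AAPOR rate definitions used above can be written out directly. The formulas below follow the AAPOR Standard Definitions; the disposition counts in the example are hypothetical and are not the pilot’s actual counts:

```python
def rr3(I, P, R, NC, O, UH, UO, e):
    """AAPOR Response Rate 3: completed interviews (I) over known
    eligible cases plus an estimated share e of cases of unknown
    eligibility. P = partials, R = refusals, NC = non-contacts,
    O = other, UH = unknown-household, UO = unknown-other."""
    return I / ((I + P) + (R + NC + O) + e * (UH + UO))

def coop3(I, P, R):
    """AAPOR Cooperation Rate 3: completes over contacted eligible
    cases, excluding those incapable of cooperating."""
    return I / ((I + P) + R)

# Hypothetical dispositions for illustration only.
print(round(rr3(I=300, P=10, R=150, NC=100, O=20, UH=200, UO=0, e=0.5), 2))  # -> 0.44
print(round(coop3(I=300, P=10, R=150), 2))  # -> 0.65
```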

Coverage. As found in other studies, the ABS design did provide greater coverage of the population. Table 2 summarizes key coverage comparisons in child’s race/ethnicity, household income, education, and absence of a landline telephone. All results are unweighted values. The ABS sample yielded greater racial and ethnic diversity, a greater percentage of lower income respondents, more respondents with lower educational attainment, and more cell-phone-only households. Notable differences were seen in race (the percentage of white respondents was 5 points lower in ABS than in RDD), percent reporting incomes of $80,000 or more (13 points lower in ABS), percent reporting a high school education or less, and absence of a landline (6% of ABS respondents reported no landline).

Table 2 Coverage comparisons: Ethnicity, race, income, education, & absence of landline.

Coverage RDD ABS
Hispanic Ethnicity 3% 5%
White Race 97% 92%
Household Income $80,000+ 52% 39%
Household Income <$25,000 6% 11%
≤ HS Education 15% 20%
Absence of Landline 0% 6%

Response Comparisons. A number of variables were selected for design comparison. The variables are functional health status, chronic condition diagnosis, activity level, services used, access to care, preventive care in previous year, and prescription medications in previous year. All values reflect unweighted data.

Table 3 provides a summary of the RDD-ABS comparison for the selected variables.

Table 3 Survey response comparisons.

Variable RDD ABS
Health Status (Excellent) 67% 65%
Chronic Condition (Yes) 12% 11%
Mean hours of TV/video per day 1.5 1.6
Does Child Need/Use More Medical Care than Others? (Yes) 11% 14%
Does Child Have Doctor/Nurse? (Yes) 91% 92%
Any Time the Child Could Not Get Care in the Last 12 Months? (Yes) 2% 2%
Last Preventive Care Visit within 12 Months (Yes) 88% 85%
Prescription Medication Needed in the Last 12 Months (Yes) 35% 41%

With the exception of prescription medication needed in the last year, no variables showed marked differences that fell outside the expected sampling variability for each group.
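Whether a difference such as the prescription-medication gap exceeds expected sampling variability can be checked with a standard two-proportion z test. This sketch is ours and was not part of the authors’ analysis:

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """z statistic for the difference between two independent sample
    proportions, using the pooled proportion under H0: p1 == p2."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se
```

Values of |z| above about 1.96 fall outside the range expected from sampling variability alone at the 95% level.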

Mode Comparison. Because the ABS design incorporated mixed modes (phone and web), both the coverage and the response were also compared by mode. As shown in Tables 4 and 5, mode effects are suggested for some coverage dimensions and some response variables but not others.

Table 4 Coverage comparisons × mode: Ethnicity, race, income, education, & absence of landline.

Coverage ABS Web ABS Phone
Hispanic Ethnicity 5% 5%
White Race 92% 92%
Household Income $80,000+ 45% 37%
Household Income <$25,000 7% 14%
≤ HS Education 10% 25%
Absence of Landline 14% 3%

Table 5 Survey response comparisons × mode.

Variable ABS Web ABS Phone
Health Status (Excellent) 67% 64%
Chronic Condition (Yes) 11% 11%
Mean hours of TV/video per day 1.7 1.6
Does Child Need/Use More Medical Care than Others? (Yes) 10% 16%
Does Child Have Doctor/Nurse? (Yes) 91% 93%
Any Time the Child Could Not Get Care in the Last 12 Months? (Yes) 3% 2%
Last Preventive Care Visit within 12 Months (Yes) 79% 88%
Prescription Medication Needed in the Last 12 Months (Yes) 42% 40%

Those in the ABS phone group were more likely than those completing on the web to report that their child needed or used more medical care than other children and were much more likely to report that the child had received a preventive care check-up or immunizations in the last 12 months.

Conclusions

Although requiring more financial resources and time, ABS did prove to be a workable alternative for a statewide survey. Moreover, ABS also proved to be a feasible approach for a study targeting only a subset of households. In this case, the target was estimated at about 25% of all households. Screening via contact card, web and phone did not result in prohibitive costs given the benefit of decreased coverage error. The increase in time required for data collection is a drawback that must be considered when choosing sampling approaches. If data must be collected quickly, then ABS may be problematic.

Mode comparisons yielded comparable estimates overall. Because there was no random assignment to mode in this study, it was impossible to fully separate mode effects from population selection effects, and no attempt was made to systematically evaluate the mixed-mode effect (e.g., Vannieuwenhuyze et al. 2010). However, because there were notable differences in income and education between the web and phone respondents, it is likely that at least some of the contrast evident in some items was a function of respondent self-selection into mode rather than measurement error (mode effects) per se.

In summary, the ABS design is a workable approach for modest statewide projects and can be used successfully even for projects targeting a subset of the population. ABS sampling yielded benefit in the form of reduced coverage error. However, the increased costs incurred by the approach included materials, staff salaries/wages, and increased time in the field. Importantly, the addition of one or more modes within the ABS context adds a layer of complexity that may introduce new measurement error in the form of mode effects. Although not unique to ABS, potential mode effects must be a consideration when survey design alternatives are weighed for each project. Future investigations should examine the feasibility of ABS for other subpopulations, with larger sample sizes to provide increased power for comparisons.

References

Barron, M. 2009. Multi-mode surveys using address-based sampling: The design of the REACH U.S. risk factor survey. Proceedings of the American Statistical Association [CD-ROM]. American Statistical Association, Alexandria, VA. (http://www.amstat.org/Sections/Srms/Proceedings/y2009/Files/400055.pdf)
Guterbock, T.M., J. Ellis, A. Diop, K. Le and J.L. Holmes. 2008. Who needs RDD: combining directory listings with cell phone exchanges for an alternative sampling frame. Paper presented at the Annual Meetings of the American Association for Public Opinion Research, New Orleans, May 2008.
Iannacchione, V., J. Staab and D. Redden. 2003. Evaluating the use of residential mailing addresses in a metropolitan household survey. Public Opinion Quarterly 67: 202–210.
Lambert, D., G. Langer and M. McMenemy. 2010. Cell-phone sampling: an alternative approach. Paper presented at the annual conference of the American Association for Public Opinion Research, Chicago, IL, May 14, 2010.
Link, M., M. Battaglia, M. Frankel, L. Osborn and A. Mokdad. 2005. Address-based versus random-digit dial sampling: comparison of data quality from BRFSS mail and telephone surveys. Proceedings of the 2005 Federal Committee on Statistical Methodology Research Conference [CD-ROM]. Federal Committee on Statistical Methodology, Arlington, VA.
Link, M., M. Battaglia, M. Frankel, L. Osborn and A. Mokdad. 2008. Comparison of address-based sampling versus random-digit dialing for general population surveys. Public Opinion Quarterly 72(1): 6–27. doi: 10.1093/poq/nfn003.
O’Muircheartaigh, C., S. Eckman and C. Weiss. 2002. Traditional and enhanced field listing for probability sampling. Proceedings of the American Statistical Association [CD-ROM]. American Statistical Association, Alexandria, VA.
O’Muircheartaigh, C., S. Eckman, N. English, J. Lepkowski and S. Heeringa. 2005. Comparison of traditional listings and USPS address database as a frame for national area probability samples. Presented at the American Association for Public Opinion Research Conference, Miami Beach, FL, May 2005.
