Introduction
Population-based health surveys are often plagued by low response rates (Asch, Jedrziewski, and Christakis 1997). Results from surveys with low response rates may be at greater risk of nonresponse bias (Federal Judicial Center 2010; Office of Management and Budget 2006), limiting the generalizability of the findings to the target population. Research has shown that nonrespondents can differ from respondents in terms of demographics, as well as their underlying health conditions (Etter and Perneger 1997; Grotzinger, Stuart, and Ahern 1994; Macera et al. 1990; Norton et al. 1994; Richiardi, Boffetta, and Merletti 2002). Additionally, nonrespondents may have a less favorable perception of their care (Eisen and Grob 1979; Ley et al. 1976).
The relation between response rate and nonresponse bias, however, may not be so clear-cut. A 2008 meta-analysis of 59 surveys failed to demonstrate an association between the two: surveys with response rates from 20 percent to 70 percent had similar levels of nonresponse bias (Groves and Peytcheva 2008). This finding suggests that response rate may not be the ideal indicator of nonresponse bias, and that an adequate sampling frame may yield a truly representative sample regardless of response rate. Benefits of such a strategy would include reduced survey administration costs and manpower, compared with efforts to maximize response rates.
In Alberta, Canada, Alberta Health Services (AHS) is the sole provider of healthcare services for the province’s approximately 4 million residents. Inpatient hospital experience is one of 16 publicly reported performance measures (Alberta Health Services 2014). The necessary data are captured by a team of trained health research interviewers, who administer a telephone survey comprised primarily of the Hospital Consumer Assessment of Healthcare Providers and Systems (HCAHPS) instrument (Centers for Medicare and Medicaid Services 2014b).
Since 2011, the inpatient hospital experience survey has covered all 94 of the province’s acute care inpatient facilities. With three years of complete data, an evaluation of the representativeness of survey respondents is timely and will strengthen the conclusions derived from the results. Given this, the purpose of the present project was to compare selected demographic and clinical attributes of survey respondents with those of all eligible inpatient discharges over the same time period. Organization-specific information regarding sampling methodology, survey administration, and preliminary results is also provided.
Data and Methods
Survey Instrument
Our organization’s inpatient hospital experience survey contains 51 questions: 32 core HCAHPS items and 19 that address organization-specific policies and procedures. Of the core HCAHPS items, 21 encompass nine key topics: communication with doctors, communication with nurses, responsiveness of hospital staff, pain management, communication about medicines, discharge information, cleanliness of the hospital environment, quietness of the hospital environment, and transition of care. The remaining core questions comprise four screener questions and seven demographic items, which are used for patient-mix adjustment and sub-analyses (Centers for Medicare and Medicaid Services 2014b). Organization-specific questions represent domains not included in HCAHPS, such as pharmacy care and patient complaints. Each survey requires 10 to 20 minutes to complete, using a standard script, a list of standard prompts, and responses to frequently asked questions. Surveys are administered using computer-assisted telephone interview (CATI) software (Voxco; Montreal, Canada). Ten percent of calls are monitored for quality assurance and training purposes.
Responses to survey questions use Likert-type scales. Certain questions ask the respondent to rate aspects of their care on a scale of 0 (worst) to 10 (best), while others employ categorical responses (e.g., always; usually; sometimes; never). Details about the development, validity, and American results of HCAHPS are publicly available at www.hcahpsonline.org (Centers for Medicare and Medicaid Services 2014b, 2014a). At the end of the survey, open-ended questions provide an opportunity for respondents to give detailed feedback about their experience, including any complaints they may have. Patients wishing to report a concern, complaint, or compliment are provided with contact information for the Patient Relations department.
Sample Derivation and Dialing Protocol
Across our province, acute care admission, discharge, and transfer information is captured in four clinical databases. A biweekly data extract of eligible discharges is obtained using a standard script. Survey exclusion criteria include: age under 18 years; inpatient stay of less than 24 hours; death during the hospital stay; any psychiatric unit or physician service on record; any dilation and curettage, day surgery, or ambulatory procedure; and visits related to stillbirths or to a baby with a length of stay greater than 6 days (e.g., complication/NICU stay).
The list of eligible discharges is imported into CATI software and stratified at the site level. Random dialing is performed until a quota of 5 percent of eligible discharges is met at each site. Patients are contacted up to 42 days post-discharge, Monday to Friday from 10 AM to 9 PM, and on Saturdays from 9 AM to 4 PM. To increase the potential for survey completion, each number is dialed up to nine times, on varying days and at varying times.
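The stratified quota draw described above can be sketched in Python. This is a hypothetical illustration only; the actual workflow runs inside the CATI software, and the column names and record counts below are invented:

```python
import pandas as pd

# Hypothetical biweekly extract of eligible discharges; the "site"
# column and record counts are invented for illustration.
eligible = pd.DataFrame({
    "record_id": range(1000),
    "site": ["A"] * 600 + ["B"] * 400,
})

# Stratify at the site level and draw a random 5 percent quota per site,
# mirroring the dialing target described in the text.
quota = eligible.groupby("site").sample(frac=0.05, random_state=42)
```

In practice the quota is filled by random dialing until 5 percent of each site's discharges yield completed surveys, rather than by a single up-front draw.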
Data Linkage and Analysis
All biweekly data extracts were merged into a single file. Through cross-referencing with our list of completed surveys, each eligible case was classified as a completed survey or a nonrespondent case (e.g., indeterminate, disqualified, refused). Cases were then linked, based on personal health number (PHN), facility code, and service dates, to the corresponding inpatient discharge record in the Discharge Abstract Database (DAD), a database of all inpatient hospital discharges. The national version of the DAD is maintained by the Canadian Institute for Health Information (CIHI), while a provincial copy is retained within our organization. Information regarding data elements, coverage, and data quality of the DAD is publicly available (Canadian Institute for Health Information 2010, 2014).
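As a rough sketch, this deterministic linkage step amounts to an inner join on the three key fields. The field names and records below are illustrative, not the DAD's actual schema:

```python
import pandas as pd

# Invented minimal records for illustration only.
surveys = pd.DataFrame({
    "phn": [101, 102, 104],
    "facility": ["F1", "F2", "F3"],
    "discharge_date": pd.to_datetime(["2013-05-01", "2013-06-15", "2013-07-20"]),
})
dad = pd.DataFrame({
    "phn": [101, 102, 103],
    "facility": ["F1", "F2", "F1"],
    "discharge_date": pd.to_datetime(["2013-05-01", "2013-06-15", "2013-07-01"]),
    "length_of_stay": [4, 12, 2],
})

# Link each completed survey to its inpatient discharge record on PHN,
# facility code, and service date; surveys without a matching discharge
# record (here, PHN 104) fall out of the inner join.
linked = surveys.merge(dad, on=["phn", "facility", "discharge_date"], how="inner")
```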
Study Variables
To assess the representativeness of survey respondents, we examined a variety of demographic (age, gender) and clinical (admission type, mean length of stay, mean number of comorbidities, ICU stay, discharge to home) variables. Admission type was classified as elective or urgent. Mean length of stay was recorded in days. A validated list of ICD-10-CA codes (Quan et al. 2005) was used to generate a comorbidity profile for each record using the Elixhauser Comorbidity Index (Elixhauser et al. 1998). Diagnosis types “M” (most responsible diagnosis) and “2” (post-admission comorbidity) were excluded. ICU stay was classified as yes or no. Discharge to home was classified using the discharge disposition field in the DAD (codes “04” and “05”) (Canadian Institute for Health Information 2012).
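The comorbidity counting logic can be sketched as follows. Only three of the roughly 30 Elixhauser categories are shown, with illustrative ICD-10 prefixes; the validated Quan et al. (2005) mappings are far more extensive:

```python
# Toy subset of Elixhauser category-to-code mappings (illustrative only;
# not the full validated Quan et al. 2005 list).
ELIXHAUSER_PREFIXES = {
    "congestive_heart_failure": ("I50",),
    "obesity": ("E66",),
    "hypothyroidism": ("E00", "E01", "E02", "E03"),
}

def comorbidity_count(diagnoses):
    """Count distinct comorbidity categories in a record's diagnosis list.

    `diagnoses` is a list of (icd10_code, diagnosis_type) pairs; types
    "M" (most responsible) and "2" (post-admission) are excluded, as in
    the study's methodology.
    """
    codes = [code for code, dx_type in diagnoses if dx_type not in ("M", "2")]
    present = {
        category
        for category, prefixes in ELIXHAUSER_PREFIXES.items()
        if any(code.startswith(prefixes) for code in codes)
    }
    return len(present)

# CHF as a pre-admission secondary diagnosis counts toward the total;
# obesity coded as a post-admission comorbidity (type "2") does not.
n = comorbidity_count([("J189", "M"), ("I500", "1"), ("E660", "2")])
```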
Differences between inpatient experience survey respondents and nonrespondents were assessed using Student’s t-tests for continuous variables and chi-square analyses for binary ones. All analyses were performed using SAS Version 9.3 for Windows (SAS Institute, Cary, NC, USA). P-values less than 0.05 were deemed statistically significant.
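For readers without SAS, the two tests can be illustrated with their SciPy equivalents. The data below are simulated, not the study's: the length-of-stay samples only echo the reported group means, and the contingency table counts are invented:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated length-of-stay samples (days); scale parameters echo the
# reported group means, but the data themselves are invented.
los_respondents = rng.exponential(scale=5.4, size=2000)
los_nonrespondents = rng.exponential(scale=7.0, size=2000)
t_stat, t_p = stats.ttest_ind(los_respondents, los_nonrespondents)

# Chi-square test on a 2x2 contingency table (group x ICU stay);
# the counts are illustrative, not the study's actual figures.
table = np.array([[550, 25750],      # respondents: ICU yes / ICU no
                  [14000, 452000]])  # nonrespondents: ICU yes / ICU no
chi2, chi_p, dof, expected = stats.chi2_contingency(table)
```

With samples this large, even modest absolute differences reach significance, which is the pattern the Discussion attributes to sample size rather than clinical importance.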
Results
Over the three-year study period (April 1, 2011 to March 31, 2014), 27,493 inpatient experience surveys were completed. Over this period, 493,527 eligible inpatient discharges took place, representing a 5.6 percent survey completion rate. Of completed surveys, 26,295 were matched with the inpatient hospital record (95.6 percent). Respondents had a mean age of 53.8±20.0 years, were predominantly female (65.0 percent), and had a mean length of stay of 5.4±9.4 days (Table 1). Compared with eligible nonrespondents (n=466,034), the sample had similar mean age (53.8±20.0 years vs. 54.4±21.3 years), sex (35.0 percent vs. 38.8 percent male), admission type (60.7 percent urgent in both groups), and mean number of comorbidities (0.8±1.2 vs. 1.0±1.3). However, compared to nonrespondents, respondents had a shorter mean length of stay (5.4 vs. 7.0 days), required less ICU care (2.1 percent vs. 3.0 percent), and were more likely to be discharged home (95.2 percent vs. 91.9 percent) (p<0.0001 in all cases).
Table 2 displays the results of survey respondents versus nonrespondents for each Elixhauser comorbidity. Twenty-three of the 30 comorbidities were more prevalent in the nonrespondent group. The percentage of individuals with documented complicated hypertension, peptic ulcer disease excluding bleeding, AIDS/HIV, lymphoma, rheumatoid arthritis/collagen diseases, and obesity was similar between groups. Only uncomplicated diabetes was more prevalent in the survey respondent group (6.9 percent vs. 6.0 percent).
Discussion
Our main finding was that inpatient experience survey respondents were similar in age and sex to eligible nonrespondents. The present study is novel in that it sheds new light on the relation between response rate and nonresponse bias in health survey research. To our knowledge, it is the first report to examine this in a Canadian provincial context, one where healthcare services are universally provided. Perhaps more importantly, our findings may dispel the myth that a low response rate will, by default, result in nonresponse bias. Although the majority of our comparisons reached statistical significance, we believe this is more a product of our extremely large sample size (over 26,000 surveys) than of any clinically meaningful difference between respondents and nonrespondents. We observed that survey respondents may be marginally healthier than nonrespondents, as shown by the slightly shorter mean length of stay, fewer ICU stays, and lower mean number of documented comorbidities, as well as the higher proportion of patients discharged home.
There are several key strengths to the present study. First, it uses data linkage to compare several demographic and clinical factors of our inpatient experience respondents with those of nonrespondents. Lee et al. (2009) cite the absence of data from nonrespondents as a major difficulty in examining nonresponse bias in health survey research. Our data contain discharge information for all eligible patients; hence, we were able to make direct comparisons between respondents and nonrespondents, overcoming this critical limitation. This greater availability of data and data linkage provides opportunities for future research.
Second, because we used HCAHPS methodology, inpatient experience was assessed with a validated tool using a standard script and prompts. Traditionally, patient satisfaction/experience has been measured with instruments developed on an ad hoc basis, which may not be valid or reliable. Waljee et al. (2014) reviewed the findings of 36 studies examining the relationship between patient expectations and satisfaction; the majority used ad hoc questionnaires and none used the HCAHPS survey. One of the inherent strengths of HCAHPS is that valid, measurable comparisons may be made between institutions and jurisdictions, which is rarely possible with ad hoc questionnaires. Given the heterogeneity of clinical populations between institutions, research has examined the effects of patient mix upon HCAHPS scores (Centers for Medicare and Medicaid Services 2014c). More information regarding patient-mix adjustment is available for consultation (Centers for Medicare and Medicaid Services 2014b).
Third, perhaps most important, is our sampling strategy. Given that we obtain all eligible inpatient discharges, each potential participant has an equal chance of participation. Our abstracted data includes up to two telephone numbers provided at hospital registration. These contact numbers do not discriminate between landlines or cellular phones and are presumed to be the most accurate way of contacting patients. Additionally, our interviewers attempt to call patients up to nine times at varying times on varying days, including one weekend day when one would presume most people are available. Patients who are not able to speak freely are provided with the opportunity to book a convenient callback time. Time is set aside each day for interviewers to complete callbacks in order to reduce nonresponse (Goyder 1985; Heberlein and Baumgartner 1978). Anecdotally, these strategies have helped ensure that survey quotas are met. Further, the sample is stratified for each of the 94 inpatient facilities, ensuring that each has an equal probability of representation within the final data set. This quota sampling approach has been applied elsewhere, with similar success (O’Cathain, Knowles, and Nicholl 2010).
There are limitations which warrant discussion. First, despite having obtained a fairly representative sample, it is impossible to assess the responses that nonrespondents would have given. This is important, as research has shown that survey respondents tend to have more favorable opinions of the care received than their nonrespondent counterparts (Eisen and Grob 1979; Ley et al. 1976; French 1981). Despite this, there may be no need to define an acceptable a priori response rate, provided that potential differences between survey respondents and nonrespondents are assessed (Kelley et al. 2003). Our findings support this assumption.
Second, because our survey is administered by telephone, results may not apply to other modalities such as mail or face-to-face administration. An organizational pilot study (performed in 2004) highlighted differences in response rates and respondent demographics between mail and telephone surveys (Cooke et al. 2004). With respect to HCAHPS specifically, de Vries et al. (2005) found that telephone administration elicited more positive responses on more than half of the survey items, particularly in domains relating to nursing care and the physical environment of the hospital. These findings are consistent with previous health survey studies (Burroughs et al. 2001; Fowler, Roman, and Di 1998; Fowler, Gallagher, and Nederend 1999).
A third potential limitation is the use of an administrative comorbidity algorithm. As outlined by Quan et al. (2005), the specificity and sensitivity of these coding algorithms, relative to a gold standard (e.g., chart review), remain undetermined.
In conclusion, this investigation provides novel information regarding the use of an HCAHPS-derived inpatient experience survey within our organization. It represents a key piece of evidence supporting the validity of conclusions drawn from the data. Our results suggest that our sampling strategy can yield a representative sample despite a 5 percent survey completion rate. This is an important finding, as further capital and manpower investments may not be necessary to bolster response rates; such activities may not provide any measurable benefit. Future research will examine our survey methodology in greater detail.
Acknowledgements
The authors wish to recognize and thank the team of health research interviewers from Primary Data Support, Analytics (Data Integration, Measurement and Reporting, Alberta Health Services), as well as the patients who participated in the survey.