Background
Health insurance in the United States is obtained through a mix of public and private sources. The most prevalent types of private coverage are employer-sponsored insurance (ESI) and insurance purchased directly from an insurance company by individuals or families (i.e., direct-purchase coverage). Private coverage usually requires a monthly payment, or premium. Public coverage is dominated by Medicaid (primarily for low-income people), which generally does not require a premium, and Medicare (primarily for those aged 65 and older).
In 2014, the Affordable Care Act (ACA) introduced the “marketplace”, an online portal through which people can shop for and enroll in both public and private coverage. The ACA also established premium subsidies to facilitate the purchase of private coverage through the marketplace for individuals within certain income guidelines.
Prior to full implementation of the ACA in 2014, researchers explored ways of adapting surveys to capture both enrollment in private marketplace plans and the receipt of premium subsidies. In 2011, a study was conducted in Massachusetts (which in 2006 passed state-level legislation much like the ACA) with individuals who were known to be enrolled in subsidized marketplace plans. Results indicated these enrollees accurately reported their subsidy receipt when asked, “Is the cost of the premium for your health insurance plan subsidized based on your family income?” (Pascale 2008). The Current Population Survey (CPS) and the Medical Expenditure Panel Survey (MEPS) adopted this question wording in their post-ACA surveys. The National Health Interview Survey (NHIS) introduced similar but not identical wording after the ACA: “Is the premium paid for this plan based on income?” In the meantime, researchers at the Urban Institute had launched the Health Reform Monitoring Survey (HRMS), a quarterly survey whose purpose is “…to provide timely information on implementation issues under the Affordable Care Act (ACA) and changes in health insurance coverage…” (Urban Institute 2017). Given its purpose and frequency of administration, the HRMS was an ideal vehicle for testing the CPS/MEPS and the NHIS wording. In addition, for experimentation purposes, expert review was conducted to develop a third alternative that hinged on the phrase “…qualify for and receive financial help with the cost of the premium.”
Differences in the wording of the subsidy question could affect estimates of subsidy receipt and marketplace coverage, which are important measures for policymakers studying the ACA. For example, policymakers are interested in examining the effect of subsidies on access to insurance coverage and health care. This research provides the first findings on whether subsidy questions currently used in different surveys are equivalent. More specifically, we measure whether the questions yield different estimates of subsidy receipt and levels of “don’t know” or “refused” responses (hereafter referred to as “missingness”).
Data and Methods
Data come from experiments on subsidy question wording embedded in the March and June 2014 HRMS. The HRMS draws its sample from the GfK KnowledgePanel, a nationally representative probability-based internet panel. Sample size is approximately 7,500 adults aged 18–64 per quarter, and laptops and Internet access are provided to panel members who need them. The American Association for Public Opinion Research cumulative response rate for the HRMS is the product of the panel household recruitment rate, the panel household profile rate, and the HRMS completion rate and is roughly 5% each quarter (Long et al. 2014).
The content of the HRMS March and June surveys covered a range of topics including health status, access and barriers to care, and health care affordability. In terms of health insurance, both surveys began with a question on type of coverage (Q1) shown in Figure 1. In the March HRMS, if any coverage was reported (that is, “a” through “g” was selected in Q1 or coverage was reported in a follow-up health insurance verification question) respondents were asked if the premium was subsidized. But, in the June HRMS, if any coverage was reported in Q1, a follow-up question was asked to determine whether there was a premium (Q2). Only if the answer was “Yes” were respondents asked if the premium was subsidized. In both the March and June HRMS, respondents who were asked the subsidy question were randomly assigned to one of two versions as shown in Figure 2.
The first experiment was conducted before most of the surge in enrollment occurred at the end of the first ACA open enrollment period. The second experiment was conducted after the first enrollment period ended but when the health insurance marketplace was still quite new. In addition, there was the change to the instrument noted above, which introduced a lack of comparability across experiments. In March/Experiment 1, where all respondents who reported any coverage were asked if their premium was subsidized, the question presupposes people have a premium. For coverage types such as ESI and direct-purchase, both of which almost universally require a premium, this presupposition is nonproblematic. But, for Medicaid and other public programs, most of which do not carry a premium, the question could render ambiguous answers (Schoua-Glusberg et al. 2012)[1]. Due to this change in the universe of respondents being asked the subsidy question, we cannot directly compare results from Experiments 1 and 2.
Because our focus was on the marketplace subsidy question, ideally we would subset the sample to respondents with private coverage obtained through the marketplace and analyze question wording effects among only this subset for whom the question was relevant. However, even before the ACA, the literature on self-reports of coverage type suggested that respondents often did not accurately categorize their coverage (Davern et al. 2008; Pascale 2008, 2016), and the ACA only complicated this reporting task. For example, research from Massachusetts indicated that respondents conflate Medicaid and other public coverage with marketplace coverage (Pascale et al. 2013). Furthermore, the marketplace was new at the time of data collection, and there were unique and multiple pathways to enrollment, including community-based organizations, “navigators,” and brokers, which may make it harder for respondents to map their experience to the survey response options for coverage type. Finally, because the term “marketplace” has come to denote not only the health insurance marketplace itself but also private, direct-purchase coverage obtained through the marketplace portal, being asked if a plan was from the marketplace renders an ambiguous answer. The question could refer to a public plan obtained on the marketplace/portal, or it could refer to an actual marketplace plan (for example private, direct-purchase). For all these reasons, to define our universe of marketplace enrollees, rather than select only those respondents with direct-purchase/marketplace (“b” from Figure 1), we cast a wider net. We began with all respondents identified as having coverage[2] and then removed only those we think are very unlikely to be marketplace enrollees. Specifically, we removed those with ESI (“a” from Figure 1) or TRICARE/military (“e” from Figure 1, which is essentially ESI for active duty or retired military members and their dependents). 
ESI and military plans are not available on federal or state marketplaces, and the literature suggests that self-reports of ESI and uninsurance are fairly accurate, especially compared to self-reports of Medicaid (Call et al. 2012; Davern et al. 2008; Hill 2007). Thus, our core analytic sample is those with direct-purchase; Medicare; Medicaid/other public; or nonspecified coverage (“b”, “c”, “d”, or “g” in Figure 1).
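The sample definition above can be sketched in code. This is a hedged illustration only: the coverage-code letters follow Figure 1, but the record layout, field names, and example respondents are hypothetical and do not reflect the actual HRMS data.

```python
# Hypothetical sketch of the core analytic sample definition.
# Keep respondents reporting any of the retained coverage types (Figure 1
# codes "b", "c", "d", "g"); drop anyone reporting ESI ("a") or
# TRICARE/military ("e"), which are not sold on the marketplaces.
# Indian Health Service ("f") is not counted as coverage (see footnote 2).

KEEP = {"b", "c", "d", "g"}  # direct-purchase; Medicare; Medicaid/other public; nonspecified
DROP = {"a", "e"}            # ESI; TRICARE/military

def in_analytic_sample(coverage_codes):
    """True if any retained coverage type is reported and no ESI/military."""
    codes = set(coverage_codes)
    return bool(codes & KEEP) and not (codes & DROP)

# Invented example records, for illustration only.
respondents = [
    {"id": 1, "q1": ["b"]},       # direct-purchase -> kept
    {"id": 2, "q1": ["a"]},       # ESI -> dropped
    {"id": 3, "q1": ["d", "e"]},  # Medicaid plus military -> dropped
]
sample = [r for r in respondents if in_analytic_sample(r["q1"])]
```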
Furthermore, we subset the sample by income to explore reporting relative to the income range eligible for subsidies.[3] We expect that the sample in the subsidy-eligible income range includes a higher proportion of true marketplace enrollees than the upper and lower income brackets, and true marketplace enrollees are the group researchers will generally want to focus on in real-world research on subsidies and direct purchase from the marketplace. We expect that the lower income sample includes a higher proportion of Medicaid or Medicare enrollees, and that the higher income sample includes a higher proportion of people with direct-purchase coverage not obtained on the marketplace. We approximate subsidy eligibility using the three categories of income available in the HRMS:
- At or below 138% of the federal poverty level (FPL)
- 139–399% of FPL (subsidy-range income)
- 400% of FPL or more.
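The three-way grouping above can be sketched as a simple mapping. This is a hedged illustration: the function name is ours, and it takes income as an exact percentage of FPL, whereas the HRMS collects income in pre-set ranges.

```python
# Hypothetical sketch of the three income groups used to approximate
# subsidy eligibility. Thresholds mirror the categories listed above.

def fpl_group(pct_of_fpl):
    """Map income (as a percentage of FPL) to one of the three analysis groups."""
    if pct_of_fpl <= 138:
        return "at or below 138% FPL"
    elif pct_of_fpl <= 399:
        return "139-399% FPL (subsidy range)"
    else:
        return "400% FPL or more"
```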
We examined differences across treatment groups to ensure effective randomization. Although there were few significant differences across participants assigned to treatment groups (Supplemental Material Table 1), we used regression to adjust for observed differences. Our outcomes of interest were the two-tailed tests of regression-adjusted difference in reported subsidy and level of missingness observed in the different subsidy-question treatments. The regression model includes control variables for age, health status, gender, race/ethnicity, income, marital status, educational attainment, homeownership status, and a rural-urban indicator. We estimated the equation separately for each experiment. Unadjusted results are very similar (Supplemental Material Tables 2 and 3). We account for the complex design of the HRMS and report results from subgroup analyses with at least 250 sample people.
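The core comparison can be sketched as follows. The study itself uses regression adjustment with the listed covariates and accounts for the complex survey design; as a simplified stand-in (the paper notes unadjusted results are very similar), this sketch runs an unweighted two-proportion z-test on invented counts. All numbers here are illustrations, not study data.

```python
# Hedged sketch of the treatment comparison: a two-tailed test of the
# difference in reported-subsidy rates between the two randomly assigned
# question wordings. Counts below are invented for illustration.
import math

def two_prop_ztest(yes1, n1, yes2, n2):
    """Return (difference in proportions, two-tailed p-value) for two groups."""
    p1, p2 = yes1 / n1, yes2 / n2
    pooled = (yes1 + yes2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Standard normal CDF via the error function: Phi(x) = 0.5*(1 + erf(x/sqrt(2)))
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return p1 - p2, p_value

# Invented counts: 95/500 report a subsidy under one wording, 114/500 under the other.
diff, p = two_prop_ztest(95, 500, 114, 500)
```

A difference of this size (19.0% versus 22.8%) is not statistically significant at conventional levels in a sample of this size, which is consistent with the pattern of results reported below.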
Findings
We found no statistically significant differences between the CPS/MEPS and the HRMS or the NHIS in estimates of subsidy receipt or levels of missing data. In Experiment 1, the CPS/MEPS and the HRMS questions yielded similar estimates of subsidy receipt overall and among the income groups examined (see Figures 3 and 4). For example, among people in the subsidy-range income group, reported subsidy was 19.0% and 22.8%, respectively.
Levels of missingness were also not significantly different across treatments overall or for the income groups examined in Experiment 1. For example, among people in the subsidy-range income group, missingness was 41.9% in the CPS/MEPS version and 39.5% in the HRMS version.
In Experiment 2 (which included only those reporting a premium), the CPS/MEPS and the NHIS estimates of subsidy receipt and missingness were also not statistically different overall or among people in the subsidy-range income group (see Figures 5 and 6). Among people in the subsidy-range income group, reported subsidy was 34.2% and 33.9%, respectively, and levels of missingness were 19.9% and 19.6%. We did not have enough sample cases to analyze those in the lower or upper income brackets, but overall the results are similar: there was a roughly 3 percentage point difference in reported subsidy (33.2% in the CPS/MEPS versus 36.5% in the NHIS) and a roughly 1 percentage point difference in level of missingness (20.1% in the CPS/MEPS versus 18.9% in the NHIS).
Limitations
We note caveats around the low response rate and the online panel nature of the survey. By casting a wide net in terms of the analysis universe, we likely include respondents who do not have private marketplace coverage, and for whom the question on subsidies is irrelevant (particularly in Experiment 1 and in the upper and lower income brackets). This renders our estimates of subsidy receipt suspect as point estimates. However, whatever noise is introduced by including these respondents should affect both treatments equally. Hence, we suggest that relative comparisons across treatments are still relevant.
Discussion
The ACA was passed in 2010, and private, subsidized coverage through the marketplace was launched in 2014. On paper, this may have given surveys ample opportunity to adapt to the coming changes, but in reality there was a circularity problem: until the marketplace was up and running, there were no real-world enrollees with whom to conduct research on how to ask questions. As a proxy, research was conducted in 2011–2012 with real-world enrollees in Massachusetts's then-operating state-level marketplace, and further research was conducted within months of full implementation of the ACA. Given these circumstances and the novelty of the marketplace and subsidies, finding that the CPS/MEPS version of the subsidy question generates equivalent estimates compared to the HRMS and to the NHIS comes as a pleasant surprise.[4] And while this research does not speak to the validity of the questions, subsequent research examined the agreement between enrollment records and self-reports of subsidies using the CPS/MEPS version of the question. Results showed that those with unsubsidized private coverage through the marketplace (according to enrollment records) correctly answered “No” to the premium subsidy question 85.7% of the time, and those with subsidized private coverage correctly answered “Yes” 90.4% of the time (Pascale et al. 2017).
In terms of missingness, the high levels in Experiment 1 could indicate confusion about subsidies and/or the question wording as a whole. However, interpreting this finding is not straightforward because of the broad universe asked about subsidies. Recall that there was no question on whether respondents’ plans carried a premium, and all respondents were asked if they received a subsidy on the premium. Among those who answered “don’t know” or refused (DK/REF) on the subsidy question in Experiment 1, about 60% were reported to be in the 0–138% of FPL income range. Thus, it is likely that a sizable portion of these respondents were Medicaid enrollees who pay no premium. Such respondents may have found that the subsidy question did not apply to them and, finding no appropriate answer choice, answered DK/REF. Indeed, in Experiment 2, where the universe was restricted to those who confirmed they had a premium, the level of missingness fell to about 20%. There was also a corresponding increase in the overall level of subsidy reporting in Experiment 2 compared to Experiment 1 (around 40% versus 20%), implying that the Experiment 2 universe contained fewer people whose plans carried no premium and, hence, fewer respondents who found the question inapplicable. Finally, the novelty of the marketplace and subsidies could explain some of the missingness: a later survey, carried out in the spring of 2015 with known marketplace enrollees, found that the level of missingness on the CPS/MEPS subsidy question was only 2.2% (Pascale et al. 2017).
Conclusions
We find no evidence that the slight wording differences in the subsidy questions affect levels of subsidy receipt or missing data. To the extent that researchers may want to compare estimates from the CPS/MEPS to the NHIS or to the HRMS, they can have some confidence that any observed differences are not attributable to the minor difference in wording of the subsidy question. Any differences would likely stem from other differences in how the survey is conducted and the estimates are produced, including how the marketplace universe is defined. A second implication has to do with defining individuals with private coverage through the marketplace. Given the novelty and complexities of the marketplace (only some of which are noted previously), researchers continue to struggle with how to use survey data to identify this group. One strategy proposed in the Massachusetts research involves essentially an algorithm using responses to questions on type of coverage, premium, and subsidy (Pascale et al. 2013). To the extent that researchers working with these surveys may rely on the subsidy question in such an algorithm, the findings here suggest that the minor wording difference in the subsidy question would not be a detrimental factor in those algorithms.
Acknowledgements
We gratefully acknowledge the contributions of Sharon K. Long, Genevieve M. Kenney, Lisa Clemans-Cope, Yvette Odu, and Caroline Au-Yeung.
[1] For example, Schoua-Glusberg et al. (2012) found in tests in Massachusetts that some Medicaid enrollees answered “Yes,” reasoning that “Yes my premium is subsidized, and it’s subsidized down to $0 because I don’t pay anything for it.” Others without a premium answered “No,” reasoning, “No I don’t receive a subsidy on my premium because I don’t have a premium.”
[2] Indian Health Service (“f”) is not comprehensive enough to be considered coverage.
[3] In states that expanded Medicaid eligibility under the ACA (25 states at the time of data collection), the subsidy range is 139 to 400% of FPL, and in nonexpansion states the range is 100 to 400% of FPL. However, we were limited by the income ranges asked about in the survey, so our “subsidy range” group excludes those in the 100–138% FPL range in nonexpansion states.
[4] We did not conduct an experiment to compare the NHIS subsidy question to the HRMS subsidy question.