Finding Based on Extremely Outdated Estimate Contradicted by More Recent Evidence. First, they were relying on the oldest available study showing that being uninsured increases mortality risk by about 25% compared to having private insurance. True, they applied this estimate to very recent population data, but the conclusion that mortality risk increases among the uninsured came from the Franks et al. study, which observed what happened to people who were uninsured in 1971-1975 and then followed them for 11 to 16 years to see how many died.
Policy wonks will recognize that most of this follow-up period occurred prior to the implementation of EMTALA which guarantees everyone access to emergency treatment regardless of ability to pay. EMTALA was enacted in 1986, but its implementing regulations were not issued until 1994. It would not be at all unreasonable to suppose that being uninsured posed a far greater risk to health back then than today.
Admittedly, a parallel study of the same type was done by Harvard researchers in 2009 using more recent data (Wilper et al. 2009). EMTALA notwithstanding, the elevated risk of death associated with being uninsured had purportedly increased to 40%.
Yet that very same year, UC San Diego’s Richard Kronick conducted a study that was far larger (a sample nearly 22 times as large) and arguably methodologically superior. He found no statistically significant difference in the mortality risks faced by people who were uninsured compared to those with private coverage.
Moreover, in 2011 Kim and Milyo replicated the Harvard study using 6 additional years of follow-up data and likewise found no statistically significant effect of being uninsured on mortality.
In short, FamiliesUSA did its calculations by cherry-picking the most outdated and least informative estimate available.
Studies Based on Comparisons of Having No Coverage vs. Private Coverage, Not Medicaid.
The second inconvenient truth is that all these studies compare being uninsured to having private health insurance coverage. It is completely inappropriate to use these studies to draw any inferences about whether giving uninsured people Medicaid coverage would reduce their risk of death.
How do I know this? Because a study published just one year after Franks showed that the death rate for white males on Medicaid was more than twice that of statistically equivalent individuals with no insurance coverage.
To be fair, one might also dismiss this study as being outdated, except that Kim and Milyo--using the most recent data of all four of these studies--also found that Medicaid coverage is associated with increased mortality risk; the adjusted hazard ratio for Medicaid compared to no insurance is 1.32. In plain English, this means that after accounting for the most important ways in which people on Medicaid and those who are uninsured differ, people on Medicaid are 32% more likely to die in a given time period than their uninsured counterparts. In short, it is highly problematic to infer that Medicaid will save lives simply because private insurance possibly may do so.
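As a minimal sketch of the arithmetic: the 1.32 hazard ratio is the Kim and Milyo figure cited above, but the baseline death rate below is a hypothetical number I chose purely for illustration, not a figure from any of these studies.

```python
# Back-of-the-envelope illustration of what a hazard ratio of 1.32 implies.
# The hazard ratio is from Kim and Milyo (2011); the baseline death rate
# below is hypothetical, chosen only to make the arithmetic concrete.
hazard_ratio = 1.32

# A hazard ratio of 1.32 corresponds to a 32% higher risk of death
# in a given time period, relative to the uninsured comparison group.
excess_risk_pct = (hazard_ratio - 1) * 100
print(f"Implied excess mortality risk, Medicaid vs. uninsured: {excess_risk_pct:.0f}%")

# Hypothetical baseline: 300 deaths per 100,000 person-years among the uninsured.
uninsured_rate = 300  # per 100,000 person-years (assumed, not from the study)
medicaid_rate = uninsured_rate * hazard_ratio
print(f"Implied Medicaid rate at that baseline: {medicaid_rate:.0f} per 100,000 person-years")
```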
Observational Studies Are Generally a Weak Form of Evidence. The third inconvenient truth is that all of these studies are inherently problematic in that they are observational studies rather than randomized controlled trials. Consequently, no matter how many sophisticated statistical adjustments we make, we can never entirely rule out the possibility that observed differences in the rate of death between insured people versus their uninsured “statistical twins” are due to unmeasured differences between the 2 groups rather than to their insurance status.
For example, other evidence gives us good reason to believe that uninsured people are more willing to make risky life choices that are not captured in the observed data:
The best observational studies control for obesity, lack of exercise, smoking, and drinking behavior, but it is not unreasonable to infer from the existing literature that the uninsured may also engage in other risky activities (e.g., driving without seatbelts or driving drunk) not taken into account. Thus, it may be these choices, rather than their lack of insurance, that drive most or all of the observed mortality differentials.
- Wilper et al.'s study showed that on every metric of health risk they tracked (obesity, lack of exercise, smoking, and drinking) the uninsured led riskier lives (Table 1).
- Similarly, a very recent study showed both the uninsured and those on Medicaid are more likely to smoke; an earlier study had similarly demonstrated higher rates of substance abuse among both groups relative to the privately insured.
Sommers Study of Medicaid Expansions (AZ, ME, NY). In light of the limitations of observational studies, ACA supporters leapt on the quasi-experimental findings from a 3-state study conducted by a team led by Harvard’s Benjamin Sommers and published in September 2012. This study found a weighted average mortality decrease of 19.6 per 100,000 non-elderly adults in 3 states that had substantially expanded Medicaid eligibility for adults starting in the year 2000: NY, AZ and ME. This is a methodologically sophisticated study, but it is not without flaws. I have critiqued this study in much more detail elsewhere, so here I will just highlight the biggest concerns:
Aggregate Mortality Risk. First, unlike the Oregon Health Insurance Experiment, the Sommers study didn’t directly measure mortality risk among those with Medicaid. It compared changes in all-cause county-level mortality for adults 20-64 in 3 states that expanded Medicaid to states presumed to be good comparison states.
To be clear, the Sommers team did as sophisticated a job as possible with the data they had, but they could not control for everything. Thus, for example, if the Medicaid expansion states experienced a relative reduction in deaths due to automobile accidents (e.g., due to more aggressive enforcement of drunk driving laws), all of those averted deaths would have been chalked up to Medicaid expansion even though Medicaid obviously would have had nothing to do with such mortality reductions. Two examples illustrate my concern:
External Causes of Death Reduced by 100%? Consider that roughly one-quarter of the estimated mortality reduction was due to "external causes" (injuries, suicide, homicide, complications of medical treatment, and substance abuse). If one assumes that Medicaid was responsible for the entire observed reduction in external causes, that would imply a 100% reduction in such deaths among those gaining Medicaid coverage! That seems quite implausible to me, yet that's exactly what people are implicitly assuming when they naively use this study’s results to calculate the potential number of lives saved under Medicaid expansion.
Even if we jigger the assumptions to conclude that such deaths were reduced by, say, 25%, what exactly is the theory here? In a later Massachusetts study and re-analysis of the 3-state data, Prof. Sommers made a point of separately analyzing "amenable deaths," which are deaths that are most likely to be affected by timely access to medical care. What's notable is that major causes excluded from this definition include accidental deaths, suicides, and homicides! So why would anyone imagine that expanding Medicaid should somehow affect such causes of death? In a re-analysis of these data 5 years later, Prof. Sommers himself questions the plausibility of a connection: "There has been suggestive evidence that insurance status can reduce mortality even from some of the latter conditions; for instance, in-hospital mortality may be as much as 40 percent higher for trauma victims without health insurance compared with insured patients (Doyle 2005). However, the majority of trauma deaths occur before hospital admission, as is the case for homicides, suggesting that health insurance has limited ability to impact these conditions at the population level compared with other causes listed above. Meanwhile, the OHIE showed that the acquisition of Medicaid significantly reduces depression rates (Baicker et al. 2013), raising the possibility that coverage might also reduce suicide rates. While some trials of specific cognitive interventions have been shown to reduce recurrent suicide attempts (Brown et al. 2005), the most common medical intervention for depression (antidepressant use) has not been shown to reduce suicides in randomized trials (Fergusson et al. 2005), and the overall evidence base is mixed as to whether medical care reduces the risk of self-harm (Hawton et al. 2000)."
Total Mortality Reduced by 69%? Even the overall result should have been viewed with a great deal of skepticism. As recounted in a Health Affairs piece put out in January 2014 [recall that at that point the Obamacare rollout was a disaster, so it was imperative for supporters to bolster support by proving it would save many lives], the study's findings imply that 1 death was averted for every 455 people newly covered under Medicaid (Exhibit 2). This is equivalent to a reduction of 219 deaths per 100,000 in the Medicaid expansion group. But the baseline death risk for the Medicaid expansion group was 318 per 100,000. If the study authors really were correct in attributing all of the observed mortality reduction to the Medicaid expansion population, this would imply that death rates among this group dropped by 69%! Do Obamacare enthusiasts really believe that Medicaid is this potent a weapon against premature mortality?
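The implied percentages can be checked with simple arithmetic, using only the figures cited above (1 death averted per 455 people newly covered; a baseline death rate of 318 per 100,000 in the expansion group):

```python
# Sanity-check the implied mortality reduction in the 3-state study,
# using the figures cited in the text.
newly_covered_per_death_averted = 455
baseline_deaths_per_100k = 318  # baseline death rate, Medicaid expansion group

# 1 death averted per 455 covered = roughly 220 averted per 100,000 covered
implied_reduction_per_100k = 100_000 / newly_covered_per_death_averted

# ...which, against a baseline of 318 per 100,000, is roughly a 69% drop
implied_pct_drop = 100 * implied_reduction_per_100k / baseline_deaths_per_100k

print(f"Implied reduction: {implied_reduction_per_100k:.0f} per 100,000")
print(f"Implied drop in the death rate: {implied_pct_drop:.0f}%")
```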
As I will show in Part 2, just this year Prof. Sommers has addressed this question in a re-analysis of these 3 states, using a pyramid of assumptions to show that the implied reduction in mortality risk is 38%--a figure he views as plausible. But my point is that back in 2012 and in the years that followed, Obamacare supporters were so eager to prove that Obamacare was saving tens of thousands of lives that they embraced the 3-state study findings enthusiastically and uncritically, without noticing that the results implied a hugely implausible reduction in the risk of death.
Mixed Results. Second, in reality, this study reported mixed results. While the authors focus on the weighted average findings across all 3 states, NY was actually the only state among the three to experience a statistically significant decline in mortality. In contrast, an apparent increase in mortality in Maine and an apparent mortality decline in AZ were not statistically significant.
It should be obvious even to a non-researcher that if Medicaid truly reduced mortality risk, we would not expect it to have a demonstrable beneficial effect on mortality in only 1 of the 3 states studied. Thus, the aggregate result of a significant reduction in mortality is driven largely by New York. Indeed, had NY not been in the study and a state similar to ME or AZ been substituted, the authors would have had to report that Medicaid had no significant effect on mortality.
Statistical Artifact? A third concern is that the New York results may be a statistical artifact. Pennsylvania was the comparison control state for NY, but for reasons explained in great detail by Avik Roy here, this is an extremely flawed comparison in light of the substantial differences between New York and Pennsylvania in terms of poverty rates (14.1 percent vs. 11.5 percent) and presence of ethnic or racial minorities (38 percent vs. 16 percent). Both factors have well-established connections to mortality risk. Had a different, more appropriate comparison state been selected, the estimated beneficial effect of Medicaid expansion might well have disappeared entirely.
Limited Generalizability. In terms of generalizability, the evidence suggests that more states are like Maine and Arizona than New York in terms of generosity of eligibility, benefits, etc. Indeed, Public Citizen ranks New York’s Medicaid program 8th in the nation, compared to Maine’s ranking of 13th and Arizona's of 24th (Table 6). Thus, most states would be likely to exhibit the pattern seen in Maine and Arizona--i.e., no statistically significant reduction in mortality.
So even if one believed NY's Medicaid program actually reduced mortality, one cannot cherry-pick the Sommers results. If people are willing to overlook the study's clear methodological limitations to claim it “proves” Medicaid saved lives in NY, then they must be prepared to concede that Maine's and Arizona's Medicaid programs evidently had no impact on mortality.
Oregon Health Insurance Experiment. In truth, the best available evidence regarding Medicaid’s actual impact on health and mortality risk comes from the Oregon Health Insurance Experiment (OHIE), which is as close to a randomized controlled trial as we might ever get on this question. In that study, reported in May 2013, Medicaid “generated no significant improvement in measured physical health outcomes,” nor did it result in a statistically significant reduction in mortality risk.
Specifically, the authors found no significant improvement in (1) elevated blood pressure; (2) high cholesterol; (3) elevated glycated hemoglobin levels; or (4) long-term cardiovascular risk, as measured by the Framingham risk score [The Framingham Risk Score predicts 10-year risk of cardiovascular disease based on age, cholesterol levels, blood pressure, blood sugar, use of medication for high blood pressure, and smoking].
Mortality Risk. As for mortality, OHIE examined mortality at one year, with no statistically significant changes detected (these results were reported in 2012, although a working paper version was out in 2011). More precisely, their point estimate showed a 16% relative mortality reduction. However, the confidence interval was extremely wide and could not rule out very large individual-level mortality changes (the 95 percent confidence interval ranged from −82% to +50%).
Some have argued that the study didn't last long enough or didn't have a large enough sample size to produce detectable effects on health outcomes. They take encouragement from the fact that, while statistically insignificant, the direction of the effects was what they had hoped for. That is, Medicaid lowered high blood pressure by 1.3%, high cholesterol by 2.4%, elevated glycated hemoglobin levels by 0.9%, and Framingham Risk Scores by 0.2%. However, Jim Manzi points out (courtesy of Megan McArdle) that if one is reduced to comparing statistically insignificant results, two others were equally important.
- First, Medicaid evidently increased smoking rates by 5.6 percentage points (which clearly will have an adverse effect on mortality in the long run, even if it might take decades to manifest itself).
- Second, Medicaid "coverage increased the Framingham Risk Score for those who were sick. That is, it made overall cardiovascular health of sick people worse. And this estimated effect was far closer to statistical significance (p = 0.24) than the estimated effects of coverage on any measurements of elevated blood pressure (p = 0.65), elevated blood sugar (p = 0.61) or high total cholesterol (p = 0.37)."
Study Biased in Favor of Medicaid. These results shocked many health services researchers, especially since, as Avik Roy has recounted in more detail, the study had some biasing factors working in Medicaid’s favor: most notably, the fact that Oregon’s Medicaid program pays doctors relatively more; that the Medicaid enrollees were sicker, and therefore more likely to benefit from medical care, than those in the uninsured control arm; and that the study was not blinded (which might have differentially affected how those newly eligible for Medicaid were treated). As well, Oregon has a better-than-average Medicaid program: Public Citizen ranks Oregon’s Medicaid program 12th in the nation (Table 6). If a program in the top quartile of performance cannot improve the health of its recipients, it is hard to see why lower-ranking programs would be expected to do any better. Put another way, even if Oregon's Medicaid program had managed to achieve laudable improvements in health, it would be inappropriate to extrapolate that rosy result to the entire nation.
Depression Outcomes Improved. That said, the authors did find a statistically significant reduction (9.2 percentage points) in the rate of depression. But if depression reduction is the only health benefit attainable under Medicaid, we obviously could achieve the identical improvements (or greater) far less expensively through some sort of targeted mental health intervention. That is, it makes no sense to preemptively cover 100% of low income adults (at ~$4,000 apiece) in order to find and treat the 30% with depression (Table 2).
Was OHIE Underpowered? Austin Frakt and others have forcefully argued here and here that OHIE was underpowered to detect changes in the 3 clinical measures, since only a small number of participants had elevated blood pressure, cholesterol, or glucose at the outset. One review noted that 18% of participants who ended up in the control group reported that they had hypertension pre-lottery, 13% reported high cholesterol, 7% diabetes, and 2% a heart attack. The highest-prevalence diagnosis was depression, with 35% of controls reporting it prior to the lottery.
Avik Roy had a pithy response to these claims: "Balderdash. The study examined the health outcomes of 12,229 Oregonians. In trials of drugs for high cholesterol, high blood pressure, and diabetes, such sample sizes are nearly always adequate for demonstrating whether or not the treatment is beneficial. If Medicaid were a new medicine applying for approval from the Food and Drug Administration, it would be summarily rejected."
In parallel fashion, I did a deep dive comparing the OHIE to the RAND Health Insurance Experiment, pointing out that the OHIE effectively had a Medicaid sample size roughly 2-1/2 times the size of the RAND HIE's, yet the latter, smaller study was able to find some statistically significant results regarding lowered blood pressure:
- For those who began the experiment with high blood pressure (the 20% having the highest diastolic blood pressure), free care plan participants had a clinically significant decline in blood pressure compared to their counterparts in cost-sharing plans.
- Epidemiologic data imply that a reduction of this magnitude would lower mortality about 10% a year in the free care group (the sample size was too small to actually measure this mortality reduction among HIE participants).
For the foregoing analysis, I was counting only the 1,903 who actually enrolled in Medicaid in the OHIE and comparing this to the estimated 850 person-years of coverage in the RAND HIE for low-income adults. But Richardson et al. have pointed out that the OHIE study’s reported effective sample sizes, given survey weighting, were 5,406 treatments (i.e., Medicaid) and 4,786 controls. So if OHIE was underpowered, it is truly puzzling how the RAND HIE was able to obtain significant results with a sample size that was many times smaller.
Conclusion. As should be clear, as of 2013 the evidence was not exactly overwhelming that the Medicaid expansion to able-bodied adults was likely to save lives. To be fair, CAP relied on a 2014 study of the Massachusetts health reform to make its estimates of how many people purportedly will die if the Senate health bill were passed. However, in Part 2, I will show how that study, as well as a 2017 Sommers re-analysis of the 3-state Medicaid expansion, likewise fails to provide convincing evidence that repeal of the Medicaid expansion will increase mortality risk on balance.
Monday, July 3, 2017
Reality Check: The Obamacare Medicaid Expansion Is Not Saving Lives, Part 1
By Chris Conover. Excerpt: