Thursday, May 2, 2019

Derek Bonett of Cato suggests that Medicaid expansion does not lower health care costs

See Statistical Ambiguity Should Never Be Glossed Over.
"The article presents us with the following:
A new study published by in-house researchers at the Department of Health and Human Services compared places that have expanded their Medicaid programs as part of Obamacare with neighboring places that have not. They found that, in 2015, insurance in the marketplace for middle-income people cost less in the places that had expanded Medicaid.
By comparing counties across state borders, and adjusting for several differences between them, the researchers calculated that expanding Medicaid meant marketplace premiums that were 7 percent lower.
Red Flag #1: This study was conducted, as noted, by HHS researchers. This is certainly not disqualifying per se, but it should cause us to readjust our priors. While peer review has many, many problems as a vetting mechanism for quality, at least it’s a vetting mechanism. Moreover, generally speaking, more prestigious journals do tend to demand more scrupulous econometric work.

It’s worth noting that before we even dive into the methodology, the study itself admits that whatever causal effect is at play here, the likely mechanism is that newly eligible Medicaid enrollees in states that expanded Medicaid are lower-income and therefore in poorer health than individuals earning in excess of 133% of the poverty line. This means that sicker-than-average individuals transfer from private insurance to Medicaid, which improves the risk profile of the remaining private pool and thereby lowers premiums. Katz’ summary is correct when she carefully writes: “Expansion of Medicaid could lower insurance prices for everyone else.” Indeed, it does lower insurance prices. But Medicaid doesn’t dragoon healthcare providers into working for free. Someone still pays for these new enrollees, and that someone turns out to be taxpayers. This article does not demonstrate that there’s a free lunch from Medicaid expansion. Conversely, “Because the states that didn’t expand had more sick people in their middle-class insurance pool, prices went up for everyone, the paper argues.” But, ceteris paribus, these states (and/or the federal government, depending on the fiscal cost-sharing) would need to charge lower taxes (a price, after all) for the same remaining mix of transfers and services.
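
To make the no-free-lunch point concrete, here is a toy calculation with invented numbers (nothing below comes from the study itself): moving the costliest enrollees out of the private pool mechanically lowers the average premium of those who remain, while the cost of covering them simply reappears on the public ledger.

    # Toy illustration with made-up numbers (not from the HHS study):
    # moving the costliest enrollees from the private pool to Medicaid
    # lowers the private average premium but does not eliminate their cost.
    private_pool = [200, 250, 300, 900, 1100]   # hypothetical monthly claims costs

    def average(costs):
        return sum(costs) / len(costs)

    before = average(private_pool)                    # everyone in the private pool
    remaining = [c for c in private_pool if c < 800]  # sickest enrollees move to Medicaid
    shifted = [c for c in private_pool if c >= 800]

    after = average(remaining)
    public_cost = sum(shifted)                        # now borne by taxpayers

    print(f"Private average before: ${before:.0f}")   # $550
    print(f"Private average after:  ${after:.0f}")    # $250
    print(f"Cost shifted to the public budget: ${public_cost:.0f}/month")  # $2000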

Yet the foregoing discussion assumes that the causal mechanism can be defensibly inferred from the HHS’ statistical analysis. Let’s take a gander.

While the study provides 13 different specifications, the primary model of interest is #12 in Table A3. Between matched counties on either side of a border dividing an expansion state from a non-expansion state, the average age-adjusted premium for a “silver-rated” private insurance plan was $15 lower on the expansion side, or roughly 7% of the average premium. Moreover, this effect is significant at the 0.01 level.
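
Taking the reported $15 and 7% figures at face value, a quick back-of-envelope check (my arithmetic, not a number from the paper) implies an average benchmark silver premium somewhere around $214:

    # Back-of-envelope only; the $15 and 7% inputs are the study's, the rest is mine.
    premium_gap = 15.0      # reported dollar difference between matched counties
    relative_gap = 0.07     # reported effect size of roughly 7%
    print(f"Implied average premium: ${premium_gap / relative_gap:.0f}")  # ~$214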

But this finding hinges crucially on the many discrete methodological choices made by the researchers, any one of which could fundamentally change the results. Taking a quick glance at the other control variables:
  • The coefficients are not standardized, and nowhere in the study is the unit of measurement for any of these variables revealed. For instance, the effect size of Percentage of Adults Who Are Current Smokers, 2015 is 129.7. This means it cannot be measured in percentage terms, so what is the unit of measurement? A number per 1,000 residents? Similarly, Hospital Beds Per Capita 2012 has a whopping -977.8 coefficient, Primary Care Physicians Per Capita 2013 clocks in at -4,001, but Physicians Per Capita 2013 has a positive effect of 860.7. This would all be much easier to suss out if the study ever explained what units we’re dealing with (a standardization sketch follows this list).
  • There is almost certainly a multicollinearity issue; the last two variables I cited alone should suffice to raise that worry, and a variance-inflation-factor check (sketched after this list) is the standard diagnostic.
  • This wouldn’t be as big of an issue if the paper actually varied the control variables across the 13 models. Instead, it varies the observations but applies the same battery of controls to each subset. How do we know whether this effect is robust to different control variables? (A leave-one-out sketch also follows the list.)
  • While the border-matching design does a good job of controlling for regional variation in relevant demographics, it does not account for interstate differences other than Medicaid expansion that could affect premiums. For instance, private health insurance companies in different states face potentially very different state-level taxes, minimum wages, mandated fringe benefits for their employees, etc. Different operating costs driven by state-level variables will undoubtedly affect premiums, and such interstate differences need to be comprehensively accounted for in any non-experimental or quasi-experimental research design.
  • The very same left-leaning economists and public health experts who sing the praises of the ACA’s Medicaid expansion would be very discomfited by some of the un-touted findings in this very same study. The much-lamented rise in concentration among U.S. firms, including hospitals, shows no effect on private-sector premiums here. Presumably, in states where the hospital sector was excessively concentrated, hospitals would be able to wield this market power and extract larger reimbursements from health insurers, who would pass these costs along to beneficiaries. Yet the Model 12 specification shows no significant effect of hospital HHI, and the only significant effect appears in Model 9, where the dummy Hospital HHI < 2500 (that is, less concentrated) has a positive effect on premiums three times larger than the negative effect of Medicaid.
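
For what it’s worth, the standardization and multicollinearity complaints above are cheap to check if one has the county-level data in hand. The sketch below is mine, not the study’s; the file name and column names are hypothetical stand-ins, and the idea is simply to standardize the regressors before fitting and then report variance inflation factors.

    # Sketch only: the file name and column names are hypothetical stand-ins
    # for the study's county-level data, which I do not have.
    import pandas as pd
    import statsmodels.api as sm
    from statsmodels.stats.outliers_influence import variance_inflation_factor

    df = pd.read_csv("county_premiums.csv")  # hypothetical county-level dataset

    controls = ["smoker_share", "hospital_beds_pc", "primary_care_docs_pc", "docs_pc"]

    # Standardize the regressors so coefficient magnitudes are comparable
    # regardless of each variable's (unreported) unit of measurement.
    X = (df[controls] - df[controls].mean()) / df[controls].std()
    X = sm.add_constant(X)
    y = df["silver_premium"]

    print(sm.OLS(y, X).fit().summary())

    # Variance inflation factors: values far above ~10 flag collinear controls,
    # e.g. the two physicians-per-capita measures.
    for i, name in enumerate(X.columns):
        print(name, variance_inflation_factor(X.values, i))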
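
The robustness concern from the third bullet could likewise be probed by re-estimating the preferred specification while dropping one control at a time and watching whether the Medicaid coefficient moves. Again, every name below is a placeholder, not the study’s actual variable list.

    # Leave-one-out robustness sketch; 'df' is the same hypothetical dataset as above.
    import pandas as pd
    import statsmodels.api as sm

    df = pd.read_csv("county_premiums.csv")  # hypothetical

    full_controls = ["smoker_share", "hospital_beds_pc", "primary_care_docs_pc",
                     "docs_pc", "hospital_hhi"]

    for dropped in full_controls:
        kept = [c for c in full_controls if c != dropped]
        X = sm.add_constant(df[["medicaid_expansion"] + kept])
        fit = sm.OLS(df["silver_premium"], X).fit()
        coef = fit.params["medicaid_expansion"]
        print(f"dropping {dropped:>22}: Medicaid coefficient = {coef:.1f}")

    # If the estimated Medicaid effect swings around depending on which control
    # is omitted, the headline 7 percent figure is fragile.
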
All in all, Katz’ boilerplate “adjusting for several differences between them” is insufficiently nuanced, especially if we’re meant to actually make public policy changes on the basis of the evidence presented."  
