Friday, March 30, 2018

The Highly Positive Impacts of Vouchers

By Corey A. DeAngelis of Cato.
"It looks like we have another terrible case of cherry-picking the evidence. But this time it’s shockingly misleading. Instead of simply pretending that the evidence on school choice is “mixed,” the Center for American Progress took it a step further by saying that the voucher evidence is “highly negative.” They are absolutely wrong. Here’s why.

The Four Evaluations

Their review of the research relies on only four voucher studies – Indiana, Ohio, Louisiana, and D.C. Two of these studies – Indiana and Ohio – are non-experimental, meaning that the researchers could not establish definitive causal relationships. But let’s go ahead and entertain them anyway.

The Ohio study used an econometric technique called regression discontinuity design, which can approximate experimental results only when a large number of students fall close to a treatment cutoff point. The intuition behind the method is that it is essentially random chance whether a student lands just above or just below the cut point, so students near the cutoff are, in effect, randomly assigned to the voucher treatment or not.

The Ohio program used a cutoff variable, the performance of the child’s public school, to determine program eligibility. However, the researchers used student observations that were not close to the cut point and even removed the observations nearest the discontinuity. In other words, the authors could not establish causality, and the children eligible for the voucher program were most likely less advantaged than those who were ineligible. After all, only students in lower-performing public schools were eligible for the choice program.
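To make that intuition concrete, here is a minimal sketch of a sharp regression discontinuity estimate on simulated data. The variable names, cutoff, bandwidth, and effect size are all hypothetical and are not taken from the Ohio evaluation; the point is simply that only observations within a narrow bandwidth of the cutoff support the as-good-as-random comparison the method relies on.

```python
# Minimal sketch of a sharp regression discontinuity (RD) estimate.
# Hypothetical data: 'score' is the running variable (e.g., a public school's
# performance index, centered at the cutoff), with voucher eligibility
# assigned below the cutoff. Only observations within a narrow bandwidth of
# the cutoff identify the local causal effect; using distant observations, or
# dropping the closest ones, breaks that logic.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 10_000
score = rng.uniform(-50, 50, n)          # running variable, cutoff at 0
eligible = (score < 0).astype(float)     # treatment assigned below the cutoff
outcome = 0.02 * score + 0.5 * eligible + rng.normal(0, 1, n)  # true local effect = 0.5

bandwidth = 10                           # keep only observations near the cutoff
near = np.abs(score) <= bandwidth
X = sm.add_constant(np.column_stack([
    eligible[near],                      # treatment indicator
    score[near],                         # running variable
    score[near] * eligible[near],        # allow different slopes on each side
]))
fit = sm.OLS(outcome[near], X).fit()
print(f"Estimated effect at the cutoff: {fit.params[1]:.3f}")
```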

Even so, the Ohio model with the largest sample size actually found that eligibility for the program led to positive test score impacts. But the CAP authors never mentioned that.

The Indiana study was also non-experimental, as it compared voucher students to those remaining in traditional public schools. But let’s look at it anyway. While the authors did find small negative effects of the program on test scores initially, voucher students caught up to public school students in math and performed better in reading after four years. How in the world can a positive result like this be “highly negative?” Weird.

The Louisiana experiment did find large negative effects on test scores in the first two years. However, voucher students caught up to their public school peers in both math and reading after three years. The CAP authors argue that the main model – although clearly preferred by the Louisiana research team – is less “accurate” because of the “restricted sample size.” That is odd, as using more control variables (and a consistent sample) usually makes econometric models more accurate – not less. Another thing that is odd: the CAP authors chose not to report the positive Ohio results – which came from their larger sample of students – and instead chose to report the negative results – which came from a sample that was less than a tenth of the size. Why the change in criteria?

The CAP review heavily relies on the most recent experimental evaluation of the D.C. voucher program. It just so happens to be one of the only two voucher experiments in the world to find negative effects on student test scores.

The first-year evaluation of the D.C. voucher program found a 7.3-point loss in math scores and no effect on reading scores. However, the CAP authors overstated this loss by saying that the effect was “the same as missing 68 days of school.” But that framing suggests voucher students lost ground in all subjects, while the D.C. experiment concluded that voucher students did not lose any learning in reading.

What’s more, prior research has found that switching schools, for whatever reason, reduces student math achievement by at least a tenth of a standard deviation. After all, students and schools need to adjust to their new environments. That the average voucher student lost only 7.3 points from switching schools suggests that the private schools in D.C. may actually have had positive effects on academic outcomes, net of the temporary negative effect of a one-time school switch.
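A back-of-the-envelope calculation shows the structure of that argument. The 7.3-point loss and the 0.1 standard deviation switching cost are the figures cited above; whether the D.C. test’s actual point scale clears the resulting threshold is not something this sketch verifies.

```python
# Back-of-the-envelope version of the "net of switching costs" argument.
# Prior research attributes at least 0.10 SD of math loss to any school
# switch; this computes how large the test's SD (in points) would have to be
# for a 7.3-point first-year loss to be no worse than that switching cost.
observed_loss_points = 7.3    # first-year D.C. math loss cited above
switching_cost_sd = 0.10      # lower bound on the cost of switching schools, in SD units

threshold_sd_points = observed_loss_points / switching_cost_sd
print(f"If the test's SD is at least {threshold_sd_points:.0f} points, "
      f"a {observed_loss_points}-point loss is within the expected switching cost.")
```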

Further, the recent D.C. evaluation looks at students only after one year – when they are still adjusting to their new schools. And a meta-analysis of 19 voucher experiments shows that voucher programs’ effects on test scores improve over time. In fact, that positive trend was found in both Louisiana and Indiana. In addition, about half of the students in the control group in the D.C. experiment attended schools of choice. In other words, the first-year loss in math scores was measured relative to a mix of students in traditional public schools and public charter schools.

And we cannot forget about the unequal playing field in our nation’s capital. D.C. voucher students receive only around $9,600 per year, while children in charter schools receive about 46 percent more resources and students in traditional public schools receive around three times as many education dollars. It’s amazing that D.C. voucher students are doing as well as they are with such a huge funding disadvantage.
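For perspective, here is the simple arithmetic implied by those figures; the dollar amounts are the approximate values cited above, not an independent fiscal analysis.

```python
# Per-pupil funding implied by the figures cited in the paragraph above.
voucher_per_pupil = 9_600
charter_per_pupil = voucher_per_pupil * 1.46   # "46 percent more resources"
district_per_pupil = voucher_per_pupil * 3     # "around 3 times the amount"

print(f"Voucher:            ${voucher_per_pupil:,.0f}")
print(f"Charter (approx.):  ${charter_per_pupil:,.0f}")   # roughly $14,000
print(f"District (approx.): ${district_per_pupil:,.0f}")  # roughly $28,800
```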

The True State of the Evidence

So what does the evidence actually say?

When synthesizing any body of research, we ought to rely on the most rigorous studies – the experiments. We should also look at all of the studies so we are sure not to fall prey to cherry-picking.

Eleven of the 17 existing voucher experiments in the United States find positive effects on test scores for some or all students, and a recent meta-analysis of 19 voucher experiments around the world finds positive effects overall. Only 2 of the 17 experimental evaluations find any negative effects on student test scores – and those are also the only two evaluations that look solely at first-year effects.

But what about the students that are left behind in public schools? It turns out that competition benefits those students as well. At least 24 studies exist on this topic. And 23 of the 24 studies find positive effects on student achievement for kids in public schools. None of these studies find negative effects.

But we shouldn’t only look at test scores. After all, families do not care all that much about test scores, especially since test scores are weak predictors of long-term outcomes. It just so happens that private school choice programs have much more positive effects on non-test score outcomes.

My review of the most rigorous research found 11 studies linking private school choice programs to civic outcomes such as student tolerance and political participation. The majority of those studies found large positive effects. For instance, researchers from Harvard University and the University of Arkansas found that children who won a random lottery to use the D.C. voucher program were about 90 percent more likely to permit individuals from groups they oppose to give a speech in their community. No studies found negative effects. And another review, by Patrick J. Wolf, similarly found that private school choice largely improves civic outcomes.

Only one experiment – in D.C. – links a voucher program to high school graduation. And it finds that winning the lottery to use a voucher increases the likelihood that a student will graduate from high school by 21 percentage points. That is huge.

Another systematic review of the evidence finds that voucher programs lead to racial integration. In fact, 7 of the 8 rigorous studies that exist on the topic find positive effects. None of the studies find negative effects. Unsurprisingly, when vouchers allow disadvantaged children to leave their segregated neighborhood schools, society becomes more integrated.

It’s time we set the record straight. The preponderance of the evidence suggests that private school choice improves test scores, high school graduation rates, tolerance, civic engagement, racial integration, and public school performance, and reduces criminality. And, of course, all of these benefits come at a lower cost to the taxpayer.

With the substantial body of scientific evidence suggesting precisely the opposite, claiming that voucher impacts are “highly negative” is almost as absurd as saying that the Earth is flat. Anyone making such a claim needs to seriously reevaluate their position."
