Studies Bring Bad News for Vouchers… With Lots of Caveats

We’ve covered quite a bit of positive research regarding private school choice in recent months. Back in May, I wrote about a meta-study by researchers at the University of Arkansas that found positive effects from vouchers in the U.S. and a couple of other countries. The following month, we dug into the Friedman Foundation’s latest review of random-assignment studies on private school choice programs in the United States. Fourteen of the 18 studies included in that review found positive effects for at least some groups of students. Two found no visible effects, and two more—both from Louisiana—found significant negative effects.

As I’ve said before, there are good reasons to believe that program design and implementation issues played a role in the negative findings in Louisiana. Now, though, I’m sorry to report that I’ve become aware of less easily explained bad news on voucher programs in Ohio and Indiana. But don’t fret just yet; there are some major caveats that need to be considered before we start jumping to broad conclusions. Buckle up; today’s post will be a long and nerdy one.

Before we hop into the new research, I feel compelled to note that the infamously anti-choice, union-funded National Education Policy Center in Boulder, Colorado, has continued its crusade against free-market thought in education policy by publishing a rebuttal of both the University of Arkansas meta-study and the Friedman Foundation literature review since I wrote my posts on those publications.

I read through NEPC’s complaints, and although there are a few potentially valid points raised, the rebuttal mostly comes across as a half-baked attempt to swat aside findings that the Center would rather not acknowledge. No doubt a fairer, more balanced literature review or meta-analysis of the research from NEPC would find that vouchers cause children to fail out of school, join religious cults, and/or go blind. Thus, my longstanding and vehement hatred of meta-studies and literature reviews. Feel free to read the NEPC rebuttal at your leisure and draw your own conclusions. For now, we’re going to get nerdy and dive into today’s studies. Well, one of them, anyway.

I’d love to detail the methodology and findings of the paper about Indiana’s voucher program, but I can’t. It was presented as a “panel paper” at a November 2015 conference in Miami by two researchers from the University of Notre Dame. It hasn’t been officially published at this stage, or at least not anywhere I can access it—though I should note that it is briefly cited in one of the Louisiana studies I mentioned earlier. The study is entitled “Vouchers in the Crossroads: Heterogeneous Impacts on Student Achievement and Attendance Across Private Schools in Indiana.” Hopefully the conference also included a panel about how to write interesting titles…

Anyway, the available abstract summarizes the study’s methodology and findings as follows:

We analyze longitudinal, student-level data by using a student fixed effects model to estimate the impacts on student achievement of using a voucher and switching to a private school. Similarly, we use a conditional logistic regression model to estimate impacts on student attendance. We disaggregate the impacts by various characteristics of private schools that voucher students attend, including schools’ racial/ethnic composition, proportion of English-language learners enrolled, and average academic achievement.

Overall, we find that voucher students who transfer to private schools experience significant losses in mathematics achievement, with null gains in English/language arts in comparison to their achievement gains in their previous public schools. We also find that voucher students improved attendance once enrolled in private schools, by means of a substantially decreased likelihood of having an unexcused absence. For both achievement and attendance outcomes, we find some degree of variation in impacts based on the characteristics of the private schools that students attend. These findings suggest that the choice of private school may be important for students choosing to use a voucher to transfer from a public school.
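If “student fixed effects” sounds like jargon, the underlying idea is simply to compare each student against his or her own performance in other years, so that anything fixed about the student washes out. Here is a minimal sketch of that idea, with made-up numbers and column names; it is purely illustrative of the method the abstract names, not the Notre Dame authors’ code, and it omits the year effects and other controls a real analysis would include:

```python
# Toy illustration of a student fixed effects estimate. Hypothetical data and
# column names; not the actual Indiana data or the authors' specification.
import numpy as np
import pandas as pd

# One row per student per year: a standardized math score and a 0/1 flag for
# whether the student used a voucher to attend a private school that year.
df = pd.DataFrame({
    "student_id": [1, 1, 1, 2, 2, 2, 3, 3, 3],
    "math_score": [0.10, 0.05, -0.20, -0.30, -0.25, -0.40, 0.50, 0.55, 0.45],
    "voucher":    [0,    0,    1,     0,     0,     1,     0,    0,    0],
})

# "Student fixed effects" amounts to demeaning both the outcome and the
# treatment within each student, so students are only compared to themselves.
within = df.groupby("student_id")[["math_score", "voucher"]].transform(
    lambda col: col - col.mean()
)

# Regress the demeaned score on the demeaned voucher indicator.
x = within["voucher"].to_numpy().reshape(-1, 1)
y = within["math_score"].to_numpy()
effect = np.linalg.lstsq(x, y, rcond=None)[0][0]
print(f"Within-student voucher effect on math: {effect:+.3f}")  # negative in this toy data
```

The conditional logistic regression the abstract mentions for attendance follows the same within-student logic, just applied to a yes/no outcome like whether a student had an unexcused absence.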

The Louisiana study’s citation of the paper adds a little further clarity by saying:

… recent non-experimental yet rigorous analysis of a statewide voucher program in Indiana also reports significant negative achievement effects in the short-run that decrease in size over time.

In other words, the researchers did some complicated math, discovered that Indiana voucher students tended to do worse in math and no better in ELA, and then went on to find that the negative effects decreased over time. I wish I could tell you more, but I can’t. In place of further analysis of this paper’s findings, please enjoy this picture of the city where the conference occurred:

That was nice. Fortunately (unfortunately?) for you, the second study, conducted on behalf of the Thomas B. Fordham Institute by well-known education researcher David Figlio and his co-author Krzysztof Karbownik, is publicly available. Fordham is unashamedly supportive of private school choice, and the organization commissioned this study in the hopes of demonstrating significant academic improvements for the roughly 20,000 voucher students participating in Ohio’s Educational Choice Scholarship Program (EdChoice). As you may have guessed, the findings were not what they had hoped.

The Fordham study’s main findings were threefold, and we’ll take each in turn.

“There appears to be positive selection, as measured by prior academic performance and family advantage, among voucher-eligible students into private schools as part of the EdChoice program.”

While this initially comes across as evidence of the “cream-skimming” behavior so often alleged by choice opponents, it needs to be interpreted with some caution. The report does not find that eligible students who opted to take a voucher were wealthy folks looking for an opportunity to capitalize on free private schooling. Eligible students in Ohio predominantly come from economically disadvantaged backgrounds. However, roughly 85 percent of eligible students who took vouchers “had a history of economic disadvantage” versus 95 percent of those who did not take vouchers. In other words, voucher students were still overwhelmingly poor, but not quite as overwhelmingly poor as voucher-eligible students not participating in the program.

Voucher students also tended to be comparatively higher performing in terms of prior test scores. The differences between voucher and non-voucher students here were considerable. I find this gap somewhat fascinating, as it stands in near-direct contradiction to a number of studies on other school choice programs, and is far more pronounced than similar findings in other analyses. (You can find a list of studies to reference on this subject in endnote five.) Given these differences, the most plausible explanation for Figlio’s findings is program design. Students in the EdChoice program have to apply to and be accepted by a private school before they can apply for a voucher, and that unusual structure may have played a role here.

We should also note that girls were more likely to take vouchers than boys. I hypothesize that this is because girls are, in general, smarter than we boys. No, seriously. They really are. Your mom wasn’t lying when she said that to your dad. See here and here if you need more compelling evidence.

But I digress. The big takeaway here is that we definitely need more research in this area (not about the girl thing; that’s settled science). It’s far too soon to hop on the cream-skimming bandwagon given that the bulk of research on private school choice programs has produced different findings.

“Although the estimates are sensitive to the specific assumptions made, and some assumptions lead to zero rather than positive findings, the evidence in general suggests that the EdChoice program improved the performance of students eligible to participate—most of whom remained in the public schools.”

Put into simpler terms, this finding indicates (again) that all the silliness about private school choice “destroying” or “dismantling” or some-other-scary-verb-ing public education is off base. In fact, it has repeatedly been found that the competition provided by private school choice programs leads to improvements in the public school system. There was little reason to believe things would be different in Ohio. As the Fordham report puts it:

… the voucher program has worked as intended when it comes to competitive effects. Importantly, this finding helps to address the concern that such programs may hurt students who remain in their public schools, either as a result of funds lost by those schools or the exodus of higher-performing peers.

I will point out that this finding seems to somewhat counteract the real concern underlying theories of “cream-skimming”: that taking higher-performing students out of schools will lead to worse outcomes for those left behind (“peer effects” to academic nerds). Here, despite the fact that voucher students tended to be higher-performing, the students “left behind” in public schools tended to do better.

There are a few caveats here. First, this particular voucher program requires that students attend a low-rated public school in order to be eligible for a voucher. The Fordham report exploits this design component to produce more statistically compelling results, but in the process it ends up comparing only schools on the cusp of voucher eligibility: those that barely avoided making their students voucher-eligible and those that barely crossed that threshold (I’ll sketch what this kind of cusp comparison looks like after these caveats). That methodology means the researchers were really examining competitive effects on public schools that are higher performing than most other voucher-eligible schools, which makes it tough to say what competitive effects might look like for lower-performing eligible schools. It’s fair to guess that those effects would be even more pronounced.

Second, the findings in this area varied significantly by the assumptions used, with some assumptions resulting in findings of zero improvement. Then again, none of the assumptions produced negative results. And, once again, including lower-performing schools in the analysis may have altered the results.

Finally, there is some subgroup variation buried inside the overall finding. The positive effects tended to be strongest for white students, students who were comparatively more advantaged economically, and male students. Hispanic students also saw large effects under some assumptions.
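Before moving on, here is a rough sketch of the cusp comparison described in the first caveat above. The data, cutoff, and bandwidth are entirely made up, and the Fordham report’s actual analysis is far more sophisticated, but the basic logic of comparing schools just above and just below the eligibility threshold looks something like this:

```python
# Toy illustration of comparing public schools just above and just below the
# voucher-eligibility cutoff. Hypothetical data, cutoff, and bandwidth.
import pandas as pd

# One row per public school: its performance rating and its subsequent
# test-score growth. Schools rated below the cutoff make their students
# voucher-eligible and therefore face competitive pressure.
CUTOFF = 80.0
BANDWIDTH = 2.0
schools = pd.DataFrame({
    "rating": [78.0, 79.2, 79.8, 80.1, 80.6, 81.5, 95.0, 60.0],
    "growth": [0.06, 0.05, 0.07, 0.02, 0.01, 0.03, 0.04, -0.02],
})

# Keep only schools near the cutoff; schools far away (the 95.0 and 60.0
# ratings) tell us nothing about this particular comparison.
near = schools[(schools["rating"] - CUTOFF).abs() <= BANDWIDTH]
exposed = near[near["rating"] < CUTOFF]    # barely became voucher-eligible
escaped = near[near["rating"] >= CUTOFF]   # barely avoided eligibility

competitive_effect = exposed["growth"].mean() - escaped["growth"].mean()
print(f"Competitive effect near the cutoff: {competitive_effect:+.3f}")
```

The narrow band around the cutoff is exactly why the findings are credible for schools near the line but hard to generalize to schools far below it.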

“We can only credibly study the performance effects of moving to private schools under the EdChoice program for those students leaving comparatively high-achieving public schools. Those students, on average, who move to private schools under the EdChoice program tend to perform considerably worse than observationally similar students who remained in public schools.”

This is the big-ticket finding, the one that has my friend Diane Ravitch cooing happily. And sure enough, the above statement is a fair summary of the report’s findings. It’s hard to get away from the fact that something is likely wrong in Ohio. But unlike Diane, I like to actually read studies before I slap up a blog post touting a specific finding as a broadly generalizable truth. As is usually the case, actually reading the study instead of just skimming the executive summary raises some important points.

As we’ve already noted, the study’s design imposes pretty significant limitations on how its findings should be interpreted. The undisputed king of education research is the random-assignment study, in which the question of who gets a voucher is decided purely by chance rather than some other factor or combination of factors. This usually requires a lottery. We should note right off the bat that, NEPC’s protestations notwithstanding, the vast majority of random-assignment studies on private school choice find positive impacts for at least some students.

Unfortunately, the design of the EdChoice program makes random-assignment research impossible. As a result, Figlio and his co-author had to rely on a different method of analysis called “propensity score matching.” This research approach compares voucher students to non-voucher students who are as observably similar as possible. In practice, this design meant that the Fordham report had to once again restrict its attention to schools on the very cusp of voucher eligibility under Ohio’s performance-rating system. Put differently, the researchers had to compare voucher students to students in comparatively higher-performing schools that managed to avoid voucher eligibility, if only barely.

This methodology does two things. First, it makes it difficult to generalize the findings to other public schools. It’s very possible that the effects of receiving a voucher and attending a private school look significantly different for students coming from far lower-performing schools that fell well below a level that would have allowed them to avoid the voucher threat. Those effects could very well be positive, but we have no way to know. Second, it muddies the waters surrounding the magnitude of the negative effects. In other words, it’s hard to say exactly how much worse students did in private schools than they would have done in voucher-eligible public schools overall. That said, we should note that the negative findings are consistent enough that it would be tough to argue that there is nothing wrong in Ohio.
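For readers who want to see the bones of propensity score matching, here is a bare-bones sketch. Everything in it is simulated and simplified; the Fordham report’s actual matching procedure uses far richer data and more careful diagnostics:

```python
# Bare-bones illustration of propensity score matching on simulated data.
# Not the Fordham report's actual procedure or variables.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
n = 500

# Observable characteristics (think prior test score, economic disadvantage)
# and a simulated decision to take a voucher that depends on them.
X = rng.normal(size=(n, 2))
take_prob = 1 / (1 + np.exp(-(0.8 * X[:, 0] - 0.3 * X[:, 1])))
voucher = (rng.random(n) < take_prob).astype(int)
# Simulated outcome with a built-in negative voucher effect of -0.15.
outcome = 0.5 * X[:, 0] + 0.2 * X[:, 1] - 0.15 * voucher + rng.normal(scale=0.5, size=n)

# Step 1: estimate each student's propensity to take a voucher given observables.
pscore = LogisticRegression().fit(X, voucher).predict_proba(X)[:, 1]

# Step 2: match each voucher student to the non-voucher student with the
# closest propensity score.
treated = np.flatnonzero(voucher == 1)
control = np.flatnonzero(voucher == 0)
matcher = NearestNeighbors(n_neighbors=1).fit(pscore[control].reshape(-1, 1))
_, nearest = matcher.kneighbors(pscore[treated].reshape(-1, 1))
matched_control = control[nearest.ravel()]

# Step 3: compare outcomes between voucher students and their matched peers.
estimate = (outcome[treated] - outcome[matched_control]).mean()
print(f"Estimated voucher effect from matching: {estimate:+.3f}")
```

The obvious limitation, and the reason researchers prefer lotteries, is that matching can only account for characteristics we can observe; anything unobserved that differs between voucher takers and non-takers can still bias the estimate.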

Finally, unlike the Louisiana studies, there is little solid evidence in the Fordham study that could help us make educated guesses about what is going on in the EdChoice program. The evidence of overregulation observed in Louisiana is tougher to pin on Ohio, as the state’s private schools were heavily regulated long before the voucher program came into being.

But it could be that private school curricula don’t align cleanly with Ohio’s state academic standards, or that those schools spend less time preparing for (or care less about) the required state assessments, or that the public schools are getting better and catching up to private schools, or any one of a variety of other things. We can’t say for sure without further research. Similarly, this study can’t tell us whether other outcomes (graduation rates, attainment levels, attendance rates, crime rates, and so on) are positively or negatively affected by the voucher program. Nor does this study tell us much about parental satisfaction, which is one of the most important measures of any school choice program. There are myriad reasons parents choose schools, and many of those reasons are not academic. We should always remember to take that into account.

One way to start fleshing out potential causes of the negative findings in the EdChoice program would be to conduct an evaluation of the longstanding Cleveland Scholarship Program, which also offers vouchers to students who wish to attend private schools. If the findings differ, we would have some place to start when it comes to thinking about what’s going on at a programmatic level. If they’re the same, we would have to acknowledge that there’s a more systemic issue in play.

Right now, the best we can do is chalk this study up to one more piece of the school choice puzzle. I’ve said for a long time that there is no such thing as a silver bullet in education, and we all know that design decisions in individual programs can have huge effects on outcomes. For the time being, we can take comfort in the fact that the highest-quality research still strongly supports private school choice—as does the commonsense notion that parents ought to be empowered to choose whichever educational path they deem appropriate for their children.

See you next time for a (hopefully) much shorter post.