SIG Program: "The Greatest Failure in the History of the U.S. Department of Education"

Almost a year ago, I highlighted a Denver Post analysis detailing the general failure of millions of dollars in federal grant money—given out in the form of School Improvement Grants—to produce the kinds of results we might expect in many underperforming Colorado schools. It now turns out that the overall results of this $7 billion federal turnaround endeavor are worse than we might have thought.

Education policy maven Andy Smarick has been a staunch critic of the SIG program since its inception, and made a compelling case against the program as early as 2010. As he says in the Denver Post story above:

If you funnel a whole lot of money to the same dysfunctional districts that have been running the dysfunctional schools, these are the results you should expect. What’s mystifying to me is that people thought the school improvement grant program was going to get dramatically different results than the dozens of other similar efforts at school turnaround in the past.

It turns out Smarick was right, not only in Denver, but in the nation overall. His latest blog post for Education Next is a scathing indictment of $7 billion spent on the SIG program, which he now brands as potentially the greatest failure in the history of the United States Department of Education. Yikes.

From the blog post:

Despite its gargantuan price tag, SIG generated no academic gains for the students it was meant to help. Failing schools that received multi-year grants from the program to “turn around” ended up with results no better than similar schools that received zero dollars from the program. To be clear: Billions spent had no effect.

When Washington spends billions of dollars on something, it’s reasonable to assume it will do some good, especially when the Secretary of Education promises “transformation not tinkering.” But not with SIG.

No matter how the researchers crunched the numbers, the abysmal results were the same. SIG didn’t improve math scores. Or reading scores. Or high school graduation rates. Or college enrollment. SIG didn’t improve elementary or secondary schools. It didn’t help schools in Race-to-the-Top states or non-Race-to-the-Top states.

The results are almost too much to believe. How in the world do you spend billions and billions of dollars and get no results—especially after Secretary Duncan promised it would turn around 5,000 failing schools and hailed it as the biggest bet of his tenure?

The findings to which Smarick is referring come from a major Institute of Education Sciences report on the SIG program that came out just this month. In case you think he's embellishing, here's the direct language about the study's findings from its executive summary:

Overall, across all grades, we found that implementing any SIG-funded model had no significant impacts on math or reading test scores, high school graduation, or college enrollment.

That’s pretty tough to argue with, but I’m sure some will contend that these findings mask positive changes at individual schools. The Denver Post analysis notes that some schools did make improvements, and there is some earlier evidence showing modest gains at some SIG schools. But this new report casts serious doubt on the assertion that any observed gains can be fairly attributed to the SIG program rather than to other factors. In fact, it’s quite explicit that the program produced no significant impacts. Given this new research, some cynical political observers (certainly not me) might view previous reports and statements highlighting small gains at some schools as an effort by the Obama Administration to spin unflattering information. But that never happens, right?

So, where does this leave us? First of all, we now know that the $7 billion—that’s $7,000,000,000 for anyone keeping score—spent on the SIG program was not effective. In fact, it could be fairly argued that all those billions of dollars were squandered entirely. With that in mind, we ought to seriously reconsider approaches that involve, as Smarick puts it, “funneling money to the same dysfunctional districts that have been running the dysfunctional schools.” Instead, we should be pushing any future intervention or turnaround plans toward spending money where it actually can be maximally effective. And then we should be monitoring the use of the money and holding folks accountable for producing the promised results. I think we’ll hear more on this topic in the coming ESSA conversations here in Colorado, so stay tuned.

Second, we now have yet another piece of evidence illustrating the broader truth that simply throwing money at public education is probably not an effective way to improve it. I feel like we’ve heard that somewhere before.

Lastly, I should mention that those of us who doubt the power of buckets of money to produce positive change in education should not be celebrating these results. Yes, they strengthen our policy positions. But they also indicate that our system has once again failed thousands of children and families—families who were promised that things would improve. Those broken promises are, without question, financially and politically unacceptable. More importantly, they are morally unacceptable. We must do better for these students. And sometimes that means taking a long, sober look into the face of previous failures.