Colorado and Washington, DC: A Tale of Two School Principal Evaluation Systems
Crafting policy is often much more art than science. Several years back, research showed us that educator evaluation systems were not making meaningful distinctions, with 98 or 99 percent of teachers rated effective on a two-tier scale. As a result of such findings, updating evaluations has been a big agenda item in many states, with Colorado among the pioneers.
You know what I’m talking about… SB 191? Right. A core piece of the legislation required that at least 50 percent of the evaluation must be tied to measures of student academic growth (including multiple measures beyond the state assessment regime). School districts could use their own systems that abide by the standard. But most districts adopted the state’s model plan, which clearly defines the other 50 percent of the evaluation.
One of the great strengths of SB 191 was that it focused on upgrading evaluations for school principals, parallel with teachers. Union officials thrive off the fear that building leaders might subjectively and unfairly target instructors. That (real or apparent) threat is greatly diminished if a principal is rated on the same standard.
Colorado spent the last two years building out a statewide pilot of the principal evaluation system. The results, reported at Chalkbeat Colorado, caught my eye. It’s important to note that during the pilot — measuring about one in seven of the state’s principals and assistant principals — they were only testing 50 percent of the evaluation system:
Some 94 percent of participating Colorado principals and assistant principals were ranked as proficient or higher in a pilot test of the state’s evaluation system, according to a new report from the Department of Education.
But the results may not be indicative of how school leaders will perform when the full evaluation system, which will eventually incorporate measures of student growth on standardized tests, rolls out….
Indeed, the rate of effective principals may well change once student testing is factored in. But that 94 percent, not much lower than the 98 or 99 percent I mentioned earlier, contrasts sharply with the latest results from the District of Columbia's IMPACT evaluation system:
D.C. Public Schools officials have changed how they evaluate principals in response to complaints that the previous system — which rated more than half of the city’s principals below “effective” — was unfair and too tightly hitched to student test scores.
Interestingly, then, the nation's capital offers the case of a principal evaluation system too closely tied to student test scores, resulting in more than half of school leaders receiving ineffective marks. Contrast that with Colorado, where a system that had not yet incorporated student test scores found only 6 percent of principals to be ineffective.
Guess I'm saying that's hardly a coincidence. The question is: What's the right balance? Student test scores aren't the be-all and end-all of education performance, but they do tell us something significant. A 50 percent ratio seems like a good place to start. I'll be keeping an eye on how the evaluations shake out for the coming year.