What the Heck is Academic Growth, Anyway?
Growth is exciting. I love watching my mom and dad mark another notch on the wall every year, and it’s been crazy to watch my favorite little puppy grow into a full-size dog almost as big as me. Education wonks get excited about growth too, although the growth you often hear policy nerds talking about has nothing to do with how tall someone is and everything to do with how much academic progress he or she is making.
Academic growth sparked a wave of nerdy jubilation yesterday when the Colorado Department of Education (finally) released growth data for our viewing pleasure after the switch to the PARCC assessment. All those juicy numbers are just waiting for you to explore them—assuming, of course, you can successfully navigate the department’s notoriously terrible SchoolView site. For those of you who would rather peruse curated information presented in a more digestible way, Chalkbeat Colorado’s Nic Garcia put together a helpful story that includes some interactive spreadsheets and charts. You should definitely head over there and see how your school and/or district stacked up.
Those of you expecting me to do a deep dive into the growth scores of various schools and districts are about to be disappointed. We’ll have to save that for another time. Those of you who have absolutely no idea what “academic growth” even means, on the other hand, are in for an educational treat.
I have noticed during my edu-wanderings that very few people understand academic growth in Colorado. Even fewer understand why they should care about it. So rather than throwing a bunch of numbers at you today, I think it makes some sense to explain what academic growth is, why it’s different from simple proficiency rates, and why we edu-nerds care so much about it. Hold on to your hats; this is going to get pretty geeky.
Growth scores in our state are calculated under the Colorado Growth Model, which uses a complicated math thingy (technical term) called quantile regression to compare students’ progress to that of their “academic peers,” or students with similar state test score histories. So, a student with consistently low test scores will have his progress compared to other students with similar academic backgrounds. Similarly, a high-performing student will be compared to other high-performing students. This system gets us as close as possible to an apples-to-apples comparison of the academic progress made by students around the state. It also allows us to focus on progress for all students, not just those who have fallen behind.
Student growth is calculated in the form of student growth percentiles (SGPs). A student growth percentile below 50 typically indicates that a student is making less progress than his or her academic peers, while an SGP above 50 indicates that he or she is making more progress.
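If code is your thing, here’s a tiny Python sketch of that intuition. Fair warning: this is not the actual Colorado Growth Model, which uses quantile regression over students’ full score histories. All the scores below are made up; the point is just to show what it means to rank a student against his or her academic peers.

```python
from bisect import bisect_left

def simple_growth_percentile(student_score, peer_scores):
    """Toy stand-in for a student growth percentile: the percentile
    rank of a student's current score among academic peers (students
    with similar prior test scores). The real Colorado Growth Model
    uses quantile regression; this just illustrates the idea."""
    ordered = sorted(peer_scores)
    below = bisect_left(ordered, student_score)  # peers scoring lower
    return round(100 * below / len(ordered))

# Hypothetical current-year scores for a group of academic peers.
peers = [710, 725, 731, 740, 748, 752, 760, 769, 771, 780]

print(simple_growth_percentile(748, peers))  # 40 -> less progress than peers
print(simple_growth_percentile(771, peers))  # 80 -> more progress than peers
```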
Importantly, student growth percentiles are not perfectly precise because they necessarily include some measurement error. This error makes it tough to read too much into really granular differences. For instance, it would be hard to say that a student with an SGP of 60 saw significantly more growth than a student with an SGP of, say, 57. However, we can comfortably say that a student with an SGP of 20 made significantly less growth than a student with an SGP of 50. For this reason, kiddos are broadly categorized into larger growth groups indicating whether they made low, typical, or high academic growth.
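Here’s what that bucketing might look like in code. One big caveat: the 35/65 cut points below are my own illustrative assumptions, not official CDE thresholds.

```python
def growth_category(sgp, low_cut=35, high_cut=65):
    """Bucket an SGP into a broad growth category. The 35/65 cut
    points are illustrative assumptions, not official CDE values."""
    if sgp < low_cut:
        return "low"
    if sgp > high_cut:
        return "high"
    return "typical"

print(growth_category(20))  # low
print(growth_category(57))  # typical
print(growth_category(75))  # high
```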
When enough data are available, we can also figure out whether students are on track to “catch up” if they’re behind, “keep up” if they’re already proficient, or “move up” to more advanced levels within three years. The amount of growth students need to make in order to catch up or keep up is called adequate growth, and this growth is used to hold schools and districts accountable under the performance framework system.
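For the extra curious, here’s a very loose sketch of the catch-up/keep-up logic. In the real model, the three years of projected scores come out of the quantile regression machinery and the proficiency cut score is set by the state; both are made up here purely for illustration.

```python
PROFICIENCY_CUT = 750  # hypothetical proficiency cut score, made up

def three_year_outlook(current_score, projected_scores):
    """Toy version of the catch-up/keep-up idea: given hypothetical
    projected scores for the next three years, report which target
    a student is on track to hit. The real model produces these
    projections itself; here they're just handed in as a list."""
    reaches_cut = projected_scores[-1] >= PROFICIENCY_CUT
    if current_score >= PROFICIENCY_CUT:
        return "on track to keep up" if reaches_cut else "at risk of falling behind"
    return "on track to catch up" if reaches_cut else "not on track to catch up"

print(three_year_outlook(720, [735, 748, 756]))  # on track to catch up
print(three_year_outlook(760, [762, 765, 770]))  # on track to keep up
```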
For reporting purposes, SGPs are aggregated at various levels—grade level, school level, district level. They can also be aggregated at the classroom level, or at other levels of interest to school leaders. Regardless of which group is being aggregated, the SGP in the middle of the ordered distribution (importantly not the average, which doesn’t mesh well with percentiles) is reported to the public as a median growth percentile, or MGP.
For example, if you had a classroom of five students, each of those students would be assigned individual SGPs. Let’s say those SGPs are 25, 35, 45, 60, and 75. The group’s MGP would be the middle number, which is 45 in this case. The same would be true of larger student groups at the school or district levels, except those groups would include more SGPs. So when you see a school’s median growth percentile, you are looking at the individual student growth percentile that falls exactly in the middle of that school’s ordered distribution.
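If you want to check that math yourself, Python’s built-in statistics module does the trick. The numbers below are the same five SGPs from the example above; the even-sized group at the end shows what happens when there’s no single middle number, in which case the median splits the difference between the two middle SGPs.

```python
import statistics

# The five SGPs from the classroom example above.
classroom_sgps = [25, 35, 45, 60, 75]

# The MGP is the median SGP, not the mean.
print(statistics.median(classroom_sgps))  # 45
print(statistics.mean(classroom_sgps))    # 48, which is NOT what gets reported

# With an even number of students, the median splits the difference
# between the two middle SGPs.
print(statistics.median([25, 35, 60, 75]))  # 47.5
```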
Easy enough, right? But here’s the big question: Why should you care?
Growth percentiles are important because they are distinct from the point-in-time test scores used to calculate proficiency rates in two ways. First, they look at the same students over time rather than different cohorts of students. A proficiency rate change year over year in, say, sixth grade in a single school is not terribly informative because two different classes of students took the test. We aren’t looking at performance changes among the same group of students; we’re looking at differences in performance between two different groups of students in different years. And because we’re looking at two different populations of kiddos, it would be hard to say what exactly caused the change in this case.
Second, growth offers a less demographically biased way of looking at student progress because it is detached from absolute performance levels. It is very possible—and, indeed, common—for a school with very low test scores to post high growth scores. Similarly, a school with very high test scores may be very mediocre when it comes to producing academic growth among its students. Thus, we can look at the progress students are making without worrying as much about the fact that certain subgroups of students (remember that achievement gap we’re always talking about?) tend to do worse on point-in-time test scores than others. And we can more accurately see how well a school is doing at actually moving students academically without that motion being masked by high or low scores linked to demography.
So yeah, growth is pretty great. But there’s an important caveat: Just as you should never rely solely upon proficiency rates and point-in-time test scores to judge a school, you shouldn’t rely solely upon growth. That’s because there may be situations where a school is showing fantastic growth but still posting scores indicating its students can’t read, write, or do math at grade level. The school may eventually reach higher rates of proficiency in those subjects, and we should certainly recognize and reward progress toward those goals, but it could take many years for the school to reach a point at which its students are proficient academically. In the meantime, the fact that a given school will be excellent in 75 years offers little comfort to parents whose students will only be there for a few years—especially if those students may leave the school without the skills and tools they need to succeed.
Hopefully all that information helps clarify why growth scores have been so eagerly anticipated by the education community in Colorado since the shift to PARCC, as well as why geeks like me refer to them so often. Growth percentiles can’t tell us everything we’d like to know about schools and districts, but taken together with other academic indicators, they can help round out the picture of how well our public schools are serving students.
See you next time!