MCA Results: What Do They Mean?
Yesterday, the Minnesota Department of Education released the results of the spring 2010 MCA tests (Minnesota Comprehensive Assessments). The MDE's official press release begins with the teaser: "Data reveals success of strong high stakes graduation requirement." The press release goes on to tout improvements in test scores over previous years.
Northfield schools continued to score higher than the state average, while schools in Faribault generally fell below the state average at each grade level, despite improvements over last year. Unsurprisingly, Faribault's superintendent, Bob Stepaniak, is "frustrated," and Northfield's superintendent, Chris Richardson, is "encouraged."
What does this mean? What does it mean that Northfield outperformed a school district only fifteen miles away?
Part of the answer surely is in demographics. Faribault has a higher percentage of "free and reduced price lunch" students—a measure of poverty—than Northfield. In Northfield, the median family income is $61,000; in Faribault, it's not quite $50,000. Socioeconomic factors certainly have an impact on standardized test scores. These factors are largely outside the control of the school districts, and are not taken into consideration in determining whether districts are making "adequate yearly progress" toward a goal of 100% proficiency in 2014.
Behind the aggregate numbers are individual students, each with his or her own strengths and weaknesses, each with individual needs. As Northfield superintendent Richardson told the Northfield News, "The best information...lies below the surface of the data. Parents and educators...will learn much more by looking at individual students' data, considering the areas in which they performed well and struggled."
Let's imagine an average 4th grade classroom. The state average in reading is 72.5% proficient. As you can imagine, there are all sorts of scenarios by which a classroom might be "average." With twenty students per classroom, the numbers might look like this: 10 students at 90% proficiency; 5 students at 70% proficiency; 5 students at 40% proficiency. Half the class is very high performing (90%), a quarter of the class is about average, a quarter of the class is well below average. The significant information here is not the aggregate score (72.5%, the state average), it's the performance of the individual students. What can the teacher do to continue to challenge the students at 90% and to improve the outcomes of the students at 70% or 40%? This, not the aggregating of numbers and the making of graphs, is the real challenge for educators. How do you help real students who are more complex than mere numbers?
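To make the arithmetic concrete: the classroom average above is just a weighted average of individual scores, and very different distributions can collapse to the same aggregate number. Here is a minimal sketch; the first distribution is the one described above, and the second is an invented alternative that happens to produce an identical average.

```python
# Hypothetical 20-student classroom from the example above:
# 10 students at 90, 5 at 70, 5 at 40.
classroom_a = [90] * 10 + [70] * 5 + [40] * 5

# A very different (invented) distribution with the same average:
# 10 students at 100, 10 students at 45.
classroom_b = [100] * 10 + [45] * 10

avg_a = sum(classroom_a) / len(classroom_a)
avg_b = sum(classroom_b) / len(classroom_b)
print(avg_a, avg_b)  # both 72.5
```

Both classrooms report the same "average" score, yet they would call for very different instruction, which is exactly why the aggregate number alone says so little.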
I've plugged simple numbers into a small sample. Imagine the situation in a classroom of 30 or more in the Faribault Public Schools. The challenge becomes even more daunting.
The aggregate data fails to see students as individuals, and the determination of whether schools are "failing" or "succeeding" fails to see that data in context. I would much rather see the data put in context, and used to provide targeted instruction for individual students, than used in aggregate as a simplistic and decontextualized method of determining whether schools are "passing" or "failing."
To me, the aggregate numbers mean little. More important is how the school works with each individual student to assess and address his or her strengths and weaknesses. Assessment results should be part of a feedback loop that provides information to teachers to help them target their instruction. Those results should not be part of a system of punishment and reward.