- The American Statistical Association (ASA) says the widely popular “value-added method” (VAM) for conducting teacher evaluations is unreliable.
- VAM claims to take a student’s standardized test score and, through a series of complicated formulas, isolate the “value” a teacher added to that student’s learning. The technique has been adopted by various states to evaluate teachers, with the results affecting everything from teacher pay and tenure to the survival of schools.
- ASA, however, says the VAM formula is too focused on standardized test scores and therefore fails to account for a teacher’s contribution to other student outcomes. Additionally, the association notes, “VAMs typically measure correlation, not causation: Effects – positive or negative – attributed to a teacher may actually be caused by other factors that are not captured in the model.”
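The correlation-versus-causation problem is easy to see in a toy simulation. The sketch below is a hypothetical illustration, not the actual formula any state uses: two teachers add exactly zero “value,” but students are sorted into their classrooms by an unmeasured factor (here labeled `resources`) that also drives test-score gains. A naive value-added estimate then credits one teacher and penalizes the other for an effect neither caused.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Unmeasured confounder (e.g., out-of-school resources) that
# influences both classroom assignment and score gains.
resources = rng.normal(0.0, 1.0, n)
teacher = (resources > 0).astype(int)   # nonrandom assignment to teacher 0 or 1

# Both teachers have an identical true effect: zero.
true_teacher_effect = np.array([0.0, 0.0])

pre = rng.normal(50.0, 10.0, n)
gain = 2.0 * resources + true_teacher_effect[teacher] + rng.normal(0.0, 1.0, n)
post = pre + gain

# Naive "value-added" estimate: mean score gain per teacher.
vam = [float(np.mean(post[teacher == t] - pre[teacher == t])) for t in (0, 1)]
print(vam)  # teacher 1 appears far "better" despite identical true effects
```

Because the confounder is not in the model, the gap between the two estimates is attributed entirely to the teachers, which is exactly the failure mode the ASA describes.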
By streamlining the impact of a teacher into a simple, repeatable formula, VAM oversimplifies the entire act and purpose of teaching, and this statement by statisticians shows exactly why that oversimplification is detrimental. A student’s achievement or struggle is the product of many factors beyond the teacher alone. And standardized tests should not be the only form of achievement we want our students to succeed in. VAM is limiting.
Worse, VAM not only narrows what we look at (standardized tests) but also makes broad-brush judgments based on those simplifications. For example, the formula measures only math and literacy standardized test scores, which means history and science teachers can end up being evaluated on subjects they don’t even teach.
While teacher evaluations are important (as are evaluations, critiques, and growth opportunities in any field), the idea that we can boil down the accomplishments and value of a teacher to a simple test score is difficult to buy.
People — especially students — are complex and multi-faceted; our evaluations of what is being done in the classroom should represent that.