In Tuesday's NYTimes, an article called "Formula to Grade Teachers' Skills Gains Acceptance, and Critics" calls attention to a new trend in teacher accountability taking place in school districts across the US. The trend involves using test data to determine how much a group of students has grown or improved between two standardized tests, and attributing some or all of that growth to the teacher who taught them that year. This data can give supervisors, districts, and parents a new perspective on teacher effectiveness.
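To make the idea concrete, here is a minimal sketch of the arithmetic at the heart of a value-added score: take each student's growth between two tests and average it across the students a teacher taught. The data and function below are hypothetical illustrations, not the districts' actual formulas, which adjust for many more factors.

```python
# Hypothetical sketch of the core value-added idea: average each student's
# year-over-year score growth for the students a teacher taught.
# Real value-added models are far more sophisticated (they adjust for prior
# achievement, demographics, measurement error, etc.).

def value_added(students):
    """Average score growth across a teacher's students.

    `students` is a list of (last_year_score, this_year_score) pairs.
    """
    if not students:
        return 0.0
    gains = [this_year - last_year for last_year, this_year in students]
    return sum(gains) / len(gains)

# Example: three students' scores on last year's and this year's state test.
example_students = [(410, 455), (520, 530), (380, 420)]
print(round(value_added(example_students), 2))  # 31.67 points of average growth
```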
The biggest improvement this trend makes is that this "method can be more accurate for rating schools than the system now required by federal law, which compares test scores of succeeding classes, for instance this year’s fifth graders with last year’s fifth graders." This certainly seems like a more thoughtful approach to using test data to evaluate teachers, but there are still variables that are difficult to control, as the article describes:
"Millions of students change classes or schools each year, so teachers can be evaluated on the performance of students they have taught only briefly, after students’ records were linked to them in the fall.
"In many schools, students receive instruction from multiple teachers, or from after-school tutors, making it difficult to attribute learning gains to a specific instructor. Another problem is known as the ceiling effect. Advanced students can score so highly one year that standardized state tests are not sensitive enough to measure their learning gains a year later."
Nevertheless, these difficulties should not blind us to the fact that value-added teacher evaluation makes a great deal of sense; we just need to control for these variables and make sure that evaluation involves a few other measures besides standardized test score data alone. What if we applied the value-added approach to student portfolios and school-designed interim assessments, two of the student-centered, results-oriented pieces that anti-data types lobby to include in teacher evaluation?
Perhaps this calls up the bigger question of "How should teachers be evaluated?" Almost without question, teachers should have a transparent set of criteria they need to meet to be seen as "successful." What should the criteria include? Perhaps it ought to look something like this:
-Value-added student scores: 20%
-Value-added student portfolios: 20%
-Absolute student scores: 10%
-Student and parent survey data: 10%
-Teacher professionalism: 10%
-Teacher curriculum, unit, and lesson plan development: 10%
-Other contributions to school community, environment and programs: 20%
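As a sanity check on how such a rubric would roll up into a single rating, here is a rough sketch, assuming each component is scored on a common 0-100 scale. The component names and example scores are hypothetical; only the weights come from the list above.

```python
# Hypothetical sketch: combine the proposed rubric components into one
# composite rating. Assumes every component is already scored 0-100.

WEIGHTS = {
    "value_added_scores": 0.20,
    "value_added_portfolios": 0.20,
    "absolute_scores": 0.10,
    "survey_data": 0.10,
    "professionalism": 0.10,
    "curriculum_development": 0.10,
    "school_community": 0.20,
}
assert abs(sum(WEIGHTS.values()) - 1.0) < 1e-9  # weights must total 100%

def composite_rating(component_scores):
    """Weighted average of a teacher's component scores (each 0-100)."""
    return sum(WEIGHTS[name] * score for name, score in component_scores.items())

# Example with made-up scores for one teacher:
example = {
    "value_added_scores": 72,
    "value_added_portfolios": 85,
    "absolute_scores": 60,
    "survey_data": 90,
    "professionalism": 95,
    "curriculum_development": 80,
    "school_community": 75,
}
print(round(composite_rating(example), 1))  # 78.9
```

One design question such a rubric raises is whether a weighted average is even the right combiner: a district could instead require a minimum score on each component, so a teacher cannot offset weak value-added results with strong survey data.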