There is a real need to overhaul teacher evaluation and make the process more effective across the country. This article offers insight into the problem and possible solutions we can all learn from.
Rethinking Teacher Evaluation: Leaders Advocate for More Meaningful Measures
Across the United States, school districts are making good on the teacher evaluation promises laid out in Race to the Top applications, and states are tackling the challenge of designing evaluation systems that provide both formative and summative information on teacher performance. In Chicago, public schools were empty for days as teachers protested proposed evaluation policies that would tie 35 percent of a teacher's rating to student test scores.
Far from the Chicago picket line, the Carnegie Institute in Washington, D.C., hosted a forum of practitioners, policymakers, and other leaders in education who discussed the controversies and challenges surrounding teacher evaluations and offered insights on how using multiple measures of evaluation can strengthen teacher performance. The Revisiting Teacher Evaluation Forum was an initiative of the Carnegie Foundation's Assessing-Teaching Improving-Learning program, which seeks to help policymakers and practitioners learn from emerging teacher evaluation practices and build more effective information systems to advance teacher quality.
What's the Problem?
Heather Peske, vice president of programs at Teach Plus, a nonprofit organization that develops teacher leaders in select urban districts in the United States, said that she sees common challenges in the teacher evaluation systems that have been rolling out in U.S. states and districts. One, according to Peske, is that states and districts lack the capacity to conduct effective evaluations because they are trying to roll out multiple-measures systems quickly. In one district, Peske said, administrators were expected to conduct 12,568 teacher observations in three months.
Inadequate communication between teachers and their evaluators is another huge stumbling block. When expectations and processes are not clearly communicated, both parties feel frustrated. And, on top of that, some evaluation policies are just plain confusing. An evaluation rubric with five rows and six columns leaves teachers wondering where to focus their energy, said Peske.
Most important, noted Peske, is that observers and evaluators need training and should have some understanding of what they are evaluating. The sentiment among teachers is, "These observers know nothing about what I do or [about] my students; how can I use this to improve my practice?" said Peske.
To alleviate capacity issues and address the relevance factor, Peske suggested that schools create more differentiated leadership roles for teachers. "Principals are overwhelmed, and they might not have the credibility or content knowledge, so why don't we draw on teachers' expertise?" Peske asked.
In a video created by Teach Plus, teachers discussed their frustrations with their teacher evaluation experiences. "We're not afraid of being evaluated; we just want it to be a fair process that we are partners in," one teacher explained. Another educator complained that the feedback provided wasn't especially useful: "We had information on how the classroom should look but not how our teaching should look, what student scores should be but not how to get there." Teachers were also left wondering what next steps to take and how to access professional development that aligns with improvement measures. These comments reflect that, in practice, evaluations are often severed from professional collaboration toward improvement goals.
Give Teachers Useful Feedback
Many teacher evaluation systems lack the substance needed to help teachers improve their performance and inspire greater achievement from their students. "We should be using evaluations to identify strengths and breakthrough areas that can be leveraged across the school," Robin Gelinas, senior policy advisor at Education Counsel, said at the Carnegie Institute forum. "We need to think about how we can manage teachers not just as individuals but as a team."
Steve Cantrell, who is the head of research and evaluation at the Bill and Melinda Gates Foundation, said evaluation practices should be fine-tuned to deliver the most usable feedback to teachers. "If we can communicate to teachers, 'Here's what these measures enable,' and if we are critical of the measures [and ask], 'Do they truly correlate to effectiveness?' then teachers will see that 'it's not about me—it's about improving the whole system,'" said Cantrell. Evaluators must be trained to provide feedback that is specific and grounded in evidence, but not too prescriptive, Cantrell added. "Being clear about expectations, and giving teachers a way to mark their progress [toward those expectations], is revolutionary," he said.
Peske agreed. "Teachers are not desperate for the [evaluation] measure; they're desperate for what to do with the information once they get feedback on their practice," she said.
In a system of support, not judgment, well-trained evaluators would provide critical observation data. To get that 360-degree assessment, teachers need feedback from people other than just the principal. "If you're a special education teacher, you need other special education teachers in the mix of who's evaluating you," said Peske. Students should be part of that process too, said Cantrell, although he hopes it will be in more meaningful ways than websites such as RateMyProfessor.com.
Department of Education Chief of Staff Joanne Weiss championed using observation data, but she stressed the importance of relevancy. Observers need to know what effective teaching looks like, and that teacher practices aligned with Common Core expectations may look different from what we've expected in the past, Weiss said.
Ronald Ferguson, senior lecturer at Harvard Graduate School of Education and Harvard Kennedy School of Government, said that going forward, capacity, communication, and credibility challenges could be resolved in a system defined by "multiple measures, multiple times, over multiple years." Weiss agreed, saying she would like to see multiple measures, combined with some professional judgment, guide new approaches to teacher evaluation. She identified Delaware, Colorado, Massachusetts, and Rhode Island as thinking about evaluation in these ways.
Beyond Buy-In: Rebuilding Accountability
"Teacher 'buy-in' on evaluation practices is the wrong [phrase]," said Weiss. "It should be 'co-constructing.' What does this system need to look like? Help us design it."
"We need to drive fear out of the system," Cantrell said. "Then we will get teachers participating and leading the processes," he said. And what about union support? "It's not a hard sell to unions that current evaluation systems are broken and need improvement. I'm optimistic," said Cantrell.
Teacher preparation programs have a role to play in aligning candidate competencies with the educational priorities and performance outcomes outlined in newly adopted evaluation systems. In most programs, said Weiss, education students don't learn about professional learning communities, using data to inform instruction, teaching in teams, or integrating technology in meaningful ways.
"How often are student teachers paired with mentor teachers based on who's available versus who's really worth learning from?" Weiss asked.
Programs such as STEP at Stanford and Columbia's Teachers College are doing a good job of preparing high-quality teacher candidates, but, unfortunately, these exemplary programs prepare the smallest number of teachers, while mediocre schools turn out thousands, said Weiss. Accrediting systems and labor market demands (i.e., school districts) can be important levers for changing the status quo at education schools, she said.
Peaks and Tweaks in Tennessee
Tennessee has been in the vanguard with its whole-system approach to improving teacher evaluation. In 2011–12, Tennessee adopted a new evaluation system that uses a model from the Teacher Advancement Program (TAP). Teacher effectiveness scores are weighted half on observations—six observations for new teachers and four for experienced teachers—and half on student test data. Of the 50 percent of the evaluation that is test-data dependent, 35 percent of the data comes from value-added calculations of student growth on the Tennessee Comprehensive Assessment Program (TCAP). A challenge to this calculus is that TCAP is administered from 4th to 12th grade, so K–3 and specialist teachers are evaluated according to the average gains for 4th graders in their schools.
In the November 2012 Educational Leadership article "Weighing the Pros and Cons of TAP," Michelle Pieczura, a 4th grade teacher in Tennessee, says that it may not be valid to use 4th grade growth scores to evaluate kindergarten or physical education teachers and that it also places an unfair burden on 4th grade teachers, while giving K–3 teachers little to help them improve. Pieczura says that the 4th grade teacher can study test data subcategories to identify student weaknesses and target instruction to pull up those skills, but K–3 teachers are handed a score with no guidance for how they might make it relevant to their students or how to improve their instruction.
The multipoint rubric Tennessee is rolling out for teacher observations generally gets praise from Pieczura, other teachers, and policymakers. Pieczura notes that there are areas for improvement—in some cases, the rubric matches neither the type of lesson nor the level of complexity of the content being taught, and more abstract topics may take several lessons to bring students to mastery—but she also knows that a low score on one indicator will not sink her whole rating.
Time may tell that what's most remarkable about Tennessee's evaluation system is that it is informed and refined by an ongoing feedback loop between architects and users of the system. Weiss believes teacher improvement will come from systems with "a mix of formative data, bringing student work to the table, team teaching and learning best practices from each other, and a continuous feedback loop from principals and peers."
Full article available at: http://www.ascd.org/publications/newsletters/education-update/dec12/vol54/num12/Rethinking-Teacher-Evaluation@-Leaders-Advocate-for-More-Meaningful-Measures.aspx