What to Think About the DC IMPACT Study
Oct. 23, 2013
Few teacher evaluation reforms have been as contentious as the IMPACT system in D.C. Public Schools. But a new study by Thomas Dee and James Wyckoff provides the first empirical evidence that the controversial policy could be encouraging effective teachers to stay in the classroom – and to improve their practice.
Dee and Wyckoff examined teachers who scored on the cusp of various IMPACT performance levels – namely, teachers just above and just below the cutoff for effective and highly effective (HE) ratings. The idea is that teachers near the cut points share similar characteristics, regardless of their final rating. By examining these teachers’ outcomes in subsequent years, researchers can isolate the effect of IMPACT’s incentives on teacher behavior. Do teachers who barely receive an HE rating fare differently than those who just missed the distinction? And do minimally effective (ME) teachers close to the effective cut point respond differently than teachers who barely cleared the effective hurdle?
Turns out, they do. The incentive structure within IMPACT had significant effects on retention and performance, particularly after the second year of implementation (2010-11), when IMPACT gained credibility. At that point, teachers with two ME ratings became eligible for termination, and those with two HE ratings earned permanent salary increases, not just bonuses. Teachers who received their first ME rating after the 2010-11 year were significantly more likely to leave DCPS (by more than 10 percentage points) than teachers who scored just above the cut point. Further, the threat of dismissal improved the performance of ME teachers who chose to stay for the 2011-12 year – their scores improved by 12.6 IMPACT points compared to teachers who had just cleared the effective cut point, an increase of five percentile points. Similar effects appeared for teachers who could become eligible for increases in their base pay if they remained HE – their 2011-12 IMPACT scores improved by nearly 11 points compared to teachers who just missed the HE cutoff, an increase of seven percentile points.
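(A note for the methodologically curious: the comparison described above is, in effect, a regression discontinuity design. Below is a minimal sketch in Python of how such an estimate works, on entirely synthetic data. The cutoff, score range, bandwidth, and noise level are invented for illustration; only the simulated 12.6-point jump borrows a number reported in the study. This is not the authors’ code or data.)

```python
# Illustrative regression discontinuity (RD) sketch on synthetic data.
# Nothing here comes from Dee and Wyckoff's paper: the 250-point cutoff,
# the score range, and the 10-point bandwidth are all invented.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 2000

# Hypothetical year-1 IMPACT scores around an assumed ME/effective cutoff.
score_y1 = rng.uniform(200, 300, n)
cutoff = 250.0
below = (score_y1 < cutoff).astype(float)  # 1 = rated ME, 0 = rated effective

# Simulate year-2 scores: teachers just below the cutoff improve more,
# mimicking the dismissal-threat response the study reports (~12.6 points).
score_y2 = score_y1 + 12.6 * below + rng.normal(0, 10, n)

# Local linear RD: keep only teachers within a narrow bandwidth of the
# cutoff, then regress the outcome on the rating indicator, the centered
# running variable, and their interaction (separate slopes on each side).
bandwidth = 10.0
window = np.abs(score_y1 - cutoff) <= bandwidth
centered = score_y1 - cutoff
X = sm.add_constant(np.column_stack([below, centered, below * centered]))
fit = sm.OLS(score_y2[window], X[window]).fit()

# The coefficient on `below` estimates the jump in year-2 scores at the
# cutoff: the incentive effect, if teachers on either side are comparable.
print(f"Estimated effect at the cutoff: {fit.params[1]:.1f} IMPACT points")
```

The logic mirrors the study’s identification strategy: because teachers just above and just below a cut point are assumed to be otherwise similar, any jump in their subsequent outcomes at the threshold can be attributed to the differing incentives their ratings trigger.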
So what do these results tell us about IMPACT and teacher evaluation reform overall? Is this a moment for cautious – or all-out – optimism?
1. Evaluation systems like IMPACT don’t necessarily improve the performance of teachers across the effectiveness spectrum. That’s because Dee and Wyckoff examined only a narrow band of DCPS teachers: those scoring right at the cut points between ratings. These teachers are the most likely to be influenced by the incentives built into IMPACT – say, when the ratings affect job security. The research thus demonstrates the effect of certain incentives on a certain group of teachers. Those incentives worked – and worked well – but we still don’t know how the performance of most teachers changed in response to the new evaluation system.
2. That said, the research is rigorous, and the results are encouraging. There is evidence that the district’s teacher workforce improved overall. Some ME teachers voluntarily chose to leave DCPS, and the newly hired teachers who replaced them in the 2011-12 year had higher IMPACT scores, on average. And there is no evidence that highly effective teachers were pushed out of the system by IMPACT. Further, ME and HE teachers who remained with DCPS tended to improve their IMPACT scores.
However, more research is needed to determine which interventions were most effective in helping these teachers improve – and whether other teachers (not just those near the cut points) saw similar outcomes. Evaluation systems must not only define what effective teaching is, but also provide the knowledge and support teachers need to meet those expectations. We know far more about identifying effective teachers than we know about what to do next.
Of course, that brings up another important caveat: improvements in performance here are measured by changes in IMPACT scores. The authors don’t explicitly link these results to student learning – another area for future research.
3. Finally, while the results are positive and provide some of the best evidence to date on the success of IMPACT, the research may not be widely applicable to other districts and states. IMPACT and DCPS remain outliers in many respects:
- IMPACT uses value-added data to measure an individual teacher’s contribution to student learning, which many evaluation systems have eschewed.
- IMPACT includes not one, not two, but five observations of classroom practice over the course of the year. Further, two of these observations are conducted by master educators rather than school principals. Hiring and training objective observers takes time, capacity, and resources that many states and districts do not have – or are unwilling to dedicate – to evaluation.
- IMPACT’s improvement and incentive structures are also well-developed and supported. DCPS has made a concerted effort to improve the quality of its coaching and professional development and to link them to IMPACT. Further, the bonuses and salary increases for highly effective teachers are substantial, thanks in part to foundation funding. While this external support may raise questions of sustainability, these incentives have been institutionalized in the district’s contract with the Washington Teachers Union.
- In a way, IMPACT operates at both the state and district level. Some of the lessons learned from IMPACT may not apply in states, which face additional layers of governance and greater heterogeneity. On the flip side, IMPACT may not be a model for other districts, whose administrators could have less autonomy to develop, implement, and revise evaluation systems.