Extra Credit: Incentivizing Teachers

A RAND report shows New York City’s bonus program for teachers did not lead to improved student achievement. Why?

Earlier this month, the RAND Corporation released a study proclaiming New York City’s Performance Bonus program a failure. Implemented in a subset of schools for three years starting in 2007, the program was not associated with any improvement in student achievement. That is, trends in student achievement were fairly similar in schools that participated and in those that didn’t.

Both foes and proponents of incentive pay for teachers were quick to spin the results. Coverage in the New York Times emphasized that the New York results echoed other recent evaluations showing no impact of financial incentives for teachers. Other commentators noted that New York’s program, which awarded bonuses to schools rather than to individual teachers, might not tell us much about the effects of rewarding teachers for their own performance rather than for the average performance of their schools.

These arguments are all wrong. The New York City experience should not be taken to mean that incentives, or at least group-level incentives, don’t work. The most important lesson is an obvious one: an incentive that nobody understands is worse than no incentive at all.

Here’s how the ideal incentive scheme works. You tell a person, or a group, what they need to do, and how quickly, in order to earn a reward. You make it possible for them to monitor their own progress, so they can make adjustments before time is up. And when time is up, you reward them if they accomplished what you told them to and don’t if they didn’t.
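To make that structure concrete, here is a minimal sketch of such a scheme in Python. The target and reward amount are hypothetical illustrations, not details of any actual program; the point is that the rule is announced once, progress is observable along the way, and the final payout follows exactly the announced rule.

```python
# Minimal sketch of the ideal incentive scheme described above.
# TARGET and REWARD are hypothetical numbers, not real program values.

TARGET = 100.0   # announced up front, and never changed mid-game
REWARD = 3000.0  # paid if and only if the target is met

def progress_report(current_score: float) -> str:
    """Participants can monitor their own progress before time is up."""
    remaining = TARGET - current_score
    if remaining <= 0:
        return "Target met; stay the course."
    return f"{remaining:.1f} points to go; adjust now."

def payout(final_score: float) -> float:
    """When time is up, reward exactly what was promised: all or nothing."""
    return REWARD if final_score >= TARGET else 0.0
```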

Here’s how things worked in New York, where the scheme was the product of a complicated negotiation between the city’s Department of Education and the United Federation of Teachers, a union. Schools were instructed to reach a target score on an index that weighed three things: student proficiency in math and reading (or graduation rates in high schools), improvements in test scores from the previous year (or course performance in high schools), and “school environment.” In practice, “school environment” consisted of attendance plus student, parent, and teacher surveys, so the index really weighed six things. Moreover, schools could earn bonus points for adequately serving three different categories of high-risk students. So that makes nine things.
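The report does not publish the actual formula, so the sketch below only illustrates the shape of the problem: a single index assembled from nine inputs, most of which a school would not observe until the end of the year. Every weight and field name here is a hypothetical placeholder, not the real New York City calculation.

```python
from dataclasses import dataclass

@dataclass
class SchoolYear:
    proficiency: float    # math/reading proficiency (graduation rate in high schools)
    growth: float         # year-over-year score gains (course performance in high schools)
    attendance: float     # the four "school environment" components
    student_survey: float
    parent_survey: float
    teacher_survey: float
    high_risk_bonus: tuple[float, float, float]  # points for three high-risk categories

def index_score(year: SchoolYear) -> float:
    """Hypothetical stand-in for the nine-input index schools had to push
    above a threshold set from their past performance."""
    environment = 0.15 * year.attendance + 0.10 * (
        year.student_survey + year.parent_survey + year.teacher_survey
    ) / 3
    core = 0.45 * year.proficiency + 0.30 * year.growth + environment
    return core + sum(year.high_risk_bonus)

def earns_bonus(year: SchoolYear, threshold: float) -> bool:
    """The rule as announced: push the index above the school's threshold."""
    return index_score(year) >= threshold
```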

So all a school had to do was figure out how to push this index score—a function of nine things, most of which they wouldn’t actually observe until the end of the school year—above a certain threshold, set on the basis of their past performance. That’s how the system was introduced to participating schools in 2007. Before the school year was up, the system had been modified—favorably, from teachers’ perspective—to permit schools to earn a partial bonus if they came reasonably close to their predetermined threshold. Although the Department of Education had good intentions, changing the rules in the middle of the game set a disastrous precedent.
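The mid-year revision amounted to swapping an all-or-nothing payout rule for one with partial credit near the threshold. Here is a sketch of the before and after; the five-point margin and the 50 percent partial rate are hypothetical illustrations.

```python
def payout_as_announced(score: float, threshold: float, reward: float) -> float:
    """Year-one rule as introduced: full bonus at or above the threshold."""
    return reward if score >= threshold else 0.0

def payout_as_revised(score: float, threshold: float, reward: float,
                      margin: float = 5.0) -> float:
    """Mid-year revision: schools 'reasonably close' to the threshold earn
    a partial bonus. Margin and partial rate are hypothetical."""
    if score >= threshold:
        return reward
    if score >= threshold - margin:
        return 0.5 * reward
    return 0.0
```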

The rules changed again for the second year of the program, to make it easier for high-performing schools to earn bonuses, but harder for lower-performing schools. In the third year, according to the RAND report, there was much squabbling about whether to change the rules yet again, implying that nobody could be exactly sure what rules would be in effect until the game was actually over. To make matters worse, New York State changed its own proficiency standards, which trickled down to the city’s bonus rules.

As it turns out, the year one rules worked pretty well. Just over half of participating schools received bonuses, which established a precedent that the bonus was neither a sure thing nor impossible to get—exactly what you want if your goal is to incentivize people. The precedent was weakened in the second year. The bonus became a virtual certainty in some schools—95 percent of all middle schools were rewarded—but not others. In the third year, the state’s tightened proficiency standards effectively shut down the program. Only 13 percent of schools—and exactly zero middle schools—earned a bonus.

So, while the New York City program ostensibly rewarded schools for strong performance, in practice it rewarded schools erratically, using complicated rules that changed every year, and consequently bestowed very different outcomes upon schools that were probably behaving consistently over time. Psychologically, the anticipated effect of such a system is learned helplessness, not increased motivation.

Here’s a conclusion that follows straight from common sense: Don’t make your incentive scheme the subject of a complex negotiation with a bunch of stakeholders. Keep it simple. Hold people accountable for things they actually have control over, and give them a chance to correct their course along the way. If you discover your rules are lousy—that you made things too easy, or too hard—it’s OK to change them. But never do it in the middle of a game.

Jacob L. Vigdor is professor of public policy and economics at Duke University.