Spotlight Research: Assessing the Value of Value-Added Models

John Mooney | January 20, 2011 | Education
Judge teachers on how students do on standardized tests? That's the job of the VAM

It’s better known by its acronym these days, and it’s not easy to explain in any case, but get used to the term “value-added model” (VAM) when it comes to how New Jersey and other states determine what makes a good teacher.

VAM, in brief, is a method of judging teachers’ performance by how much their students improve (or not) on standardized tests. And it has been embraced by a number of states, not to mention the federal government, as a central component in any valid teacher evaluation system.
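The basic mechanics can be illustrated with a toy calculation. Real value-added models are far more elaborate (multiple years of data, student and classroom covariates, shrinkage estimators), but the core idea is: predict each student's current score from prior performance, then credit or debit the teacher with the average amount their students beat or missed that prediction. The sketch below, with entirely hypothetical scores and teacher labels, is a minimal illustration of that idea, not any state's actual model.

```python
# Minimal, illustrative value-added sketch. All data are hypothetical.
# Step 1: fit a simple regression predicting this year's score from
#         last year's score, pooled across all students.
# Step 2: a teacher's "value added" is the average residual (actual
#         minus predicted score) among that teacher's students.

from statistics import mean

# Hypothetical (prior_score, current_score, teacher) records
records = [
    (60, 65, "A"), (70, 74, "A"), (80, 83, "A"),
    (60, 61, "B"), (70, 69, "B"), (80, 78, "B"),
]

prior = [r[0] for r in records]
current = [r[1] for r in records]

# Ordinary least squares with a single predictor
xbar, ybar = mean(prior), mean(current)
slope = (sum((x - xbar) * (y - ybar) for x, y in zip(prior, current))
         / sum((x - xbar) ** 2 for x in prior))
intercept = ybar - slope * xbar

def value_added(teacher):
    """Average residual for a teacher's students."""
    resids = [y - (intercept + slope * x)
              for x, y, t in records if t == teacher]
    return mean(resids)

for t in ("A", "B"):
    print(t, round(value_added(t), 2))
```

With these made-up numbers, Teacher A's students beat their predicted scores on average and Teacher B's fall short, even though both teachers' students gained points in absolute terms; that distinction between raw gains and gains relative to prediction is what puts the "value-added" in VAM.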

Needless to say, it’s also very controversial. And while New Jersey is not there yet, its leaders are now weighing VAM pros and cons as part of Gov. Chris Christie’s proposals to revamp the teacher evaluation system and determine how educators are promoted, protected and paid.

To help advance or at least illuminate some of the debate, NJ Spotlight was among several sponsors of a research symposium held yesterday that heard from some top researchers and experts on the issue of teacher evaluation and VAMs.

The event was co-sponsored and hosted by the Educational Testing Service (ETS) in Princeton. Other sponsors included the New Jersey Education Association (NJEA), the Education Law Center (ELC), Rutgers Graduate School of Education, Garden State Coalition of Schools, Newark Teachers Union, New Jersey Policy Perspective, and the New Jersey school boards, school administrators, and principals and supervisors associations.

More than 100 educators and leaders from many of the top education groups attended the four-hour symposium, as did the chairman of Christie’s task force.

The panelists were largely critics of VAM, or at least of the overreliance on such measures when judging a teacher’s impact. But some also spoke to the merits of these tools when used properly and cautiously.

A video of the presentation will be available soon, but in the meantime, we let the panelists speak for themselves — and readers judge for themselves — in a series of reports and studies discussed yesterday at the symposium.

Richard Rothstein is a research associate of the Economic Policy Institute (EPI). He is the author of Grading Education: Getting Accountability Right (Teachers College Press and EPI, 2008), among many publications. He contributed to the following report on the challenges of using student scores in teacher evaluation:

“Teachers, Performance Pay, and Accountability”

Sean P. Corcoran is an assistant professor of educational economics at New York University’s Steinhardt School of Culture, Education, and Human Development:

“Can Teachers Be Evaluated by their Students’ Test Scores? Should They Be?”

Henry Braun, a former assistant professor of statistics at Princeton University, was ETS’s vice president for research management from 1990 to 1999 and held the title of distinguished presidential appointee from 1999 to 2006. In 2007, he took the position of Boisi Professor of Education and Public Policy in the Lynch School of Education at Boston College:

“Using Student Progress to Evaluate Teachers: A Primer on Value-Added Models”