The Christie administration last week released the latest data on how New Jersey’s public school teachers have fared under the much-vaunted reforms to teacher evaluations — a joint effort undertaken with the Legislature.
The data indicates how many teachers at each school were rated “effective” or better, or rated “ineffective,” under the state’s new standardized system for the 2014-2015 school year.
The big headline: More than 98 percent of New Jersey’s public school teachers were rated “effective” or “highly effective.” But it’s a bit more complicated than that when looking at an individual school or district, according to the state’s data.
Here are a few things to keep in mind:
The data shows that New Jersey teachers overall do pretty well under the new system.
“Clearly, it shows a level of effectiveness for all of our teachers,” said Patricia Wright, executive director of the New Jersey Principals and Supervisors Association.
That’s a good thing, of course, and falls in line with the fact that New Jersey schools generally do well by their students when compared with their peers across the country.
But the stated intention of the new system was to weed out teachers who were falling below par. Instead, it generally confirms the status quo.
“We spent millions of dollars and tens of thousands of hours to confirm what we could have empirically shown you without it,” said Michael Cohan, a senior officer with the New Jersey Education Association.
Nevertheless, the Christie administration points out that 1,600 teachers have been found to be lacking, and the system is in place to either support them or ultimately remove them from the classroom.
Still, the state’s numbers are limited, at best, due in part to the administration’s conservative approach to releasing the data.
The state withheld, or suppressed, the numbers for any school where fewer than 10 teachers fell into a given rating category. So a school might show 11 teachers rated highly effective, or even ineffective, but leave out the majority of its staff whose ratings fall elsewhere. That makes the actual situation hard to gauge, and comparisons between schools or districts equally difficult to draw.
“When you have that kind of variability, you can’t compare one school to another,” Cohan said.
Still, more variability — and debate — could be coming soon. The administration this fall increased the weight that student test scores carry in teachers’ ratings, to 30 percent of the evaluation from a previous 10 percent.
Some state officials have said that changing the weighting will have a discernible effect, but some advocates are skeptical.
“I don’t see [big changes] much occurring, but we’ll have to wait and see,” Wright said.