The Christie administration’s report on the inaugural year of New Jersey’s teacher evaluation system made headlines mostly because of the overwhelming share of teachers rated “effective” or better -- and what that said about both the system and the educators working in it.
But the report on the 2013-14 rollout, released last week, raised a number of other points worth noting for administrators and teachers.
Much of the debate over how teachers are evaluated centered on the use of student test scores, which accounted for 30 percent of the evaluation for certain teachers. And while those scores ended up mattering less than expected to most teachers’ ratings, another means of gauging student progress may prove more important in years to come: “student growth objectives.”
SGOs are agreed-upon benchmarks that are not part of the state testing system. They could be anything from classroom assessments that look at specific skills to attendance figures.
According to state officials, these numbers -- like the overall teacher evaluations -- skewed high, with three-quarters of teachers scoring 3.5 or higher on a four-point scale.
State officials said they want to take a closer look at the development of SGOs, saying they may not be rigorous enough. For example, the report found that nine in 10 of the SGOs were based on a single data point, usually a pretest. “Districts are still adjusting to using all aspects of their instructional practice instruments, and educator-set goals often focused more on the ‘achievable’ than the ‘ambitious’ last year,” read the report.
The majority of teachers’ ratings are still based on classroom observations, the time-tested practice of a supervisor watching a teacher in action and judging whether he or she is meeting specific expectations.
As in previous years, supervisors typically gave teachers positive ratings. For instance, 87 percent of supervisors awarded teachers a 3.0 or better on the four-point scale.
The report also looked at the frequency of classroom observations, and found teachers getting more supervisor visits and more feedback than ever. Meanwhile, supervisors are finding observations less time-consuming than first feared.
But the state’s report also pointed out inconsistencies, such as when supervisors weren’t using the full measurement tool for teachers, only scoring them on some of the required fields.
Administrators fared just as well as teachers in this first year of evaluation: 97 percent were found “effective” or better. In their case, it was 62 percent “effective,” and 35 percent “highly effective.”
Like teachers, principals were evaluated on a range of measures, including student performance and observations of their daily work. But they were also measured on how well they implemented the evaluations of teachers, and the report said that 92 percent were found to be “effective” or better.
State officials said last week that they hope the next years will focus less on implementing the evaluation system and more on developing principals and school leaders.
“We need a more laser-like focus on the principals,” said assistant commissioner Peter Shulman last week. “Teachers have grabbed much of the spotlight, but how do we next focus on principals and how they strengthen evaluations and strengthen their support of teachers?”