With New Jersey several months into the second year of a new teacher-evaluation system, the Christie administration has released its first report on how the first year went for tens of thousands of affected teachers.
The news was encouraging in some areas, less so in others.
The interim report, sent to school districts by the state Department of Education last week, included both qualitative information on how the evaluations were received and hard data on what changes resulted.
For instance, the report said that teachers saw a jump in the number of classroom observations conducted by supervisors, with each tenured teacher seeing an average of one additional visit above the required three.
But it also found that there was little variance in the scores resulting from those observations, raising questions about the rigor of the process.
The report also quantified the use of student performance measures known as “student growth objectives” (SGOs), which were based on classroom assessments rather than statewide standardized tests.
Of those SGOs, the report said, 70 percent were deemed to be “high quality,” based on detailed and specific growth measures, but most of the rest lacked specificity. It said a vast majority of the SGOs used a single assessment and data point, while the state aimed for multiple measures.
Still, the report was limited to only a few key areas and left a host of unanswered questions. For one, just 53 of more than 500 districts were represented in the most in-depth sections of the report, making for a limited sample pool.
In addition, the report does not divulge how affected teachers of math and language arts fared in test-based measures, the most controversial piece of the new evaluation system.
And the report does not include an overall breakdown of how many teachers fell into each of the four evaluation categories mandated under the new system: highly effective, effective, partially effective, or ineffective. Teachers ranked in the two lowest categories for two consecutive years are subject to losing their tenure protection and, potentially, their jobs.
The report said those key details will be provided in coming months. Still, the report is full of interesting information about both the districts it sampled and the state’s schools as a whole.
With 25 classroom observation models to choose from, 58 percent of all New Jersey districts chose the model developed by Charlotte Danielson of the Danielson Group in Princeton. Of the rest, virtually all chose one of four other prominent models (Stronge, McREL, Marzano and Marshall).
In reviewing a sample of SGOs, the report found only about one-third had separate targets for distinct categories of students, rather than one target for the whole class.
Three-quarters of sampled districts said administrators conducted between 40 and 80 classroom observations on average, with only one reporting fewer than 40 and three reporting more than 80.
There was not a broad differentiation in the sampled districts, with the vast majority of observations coming in between 2.5 and 3.5 out of a maximum score of 4. The report said it was rare to see either exemplary observations at a score of 4 or poor ones at a score of 1.