On March 6, the New Jersey Department of Education submitted to the State Board new teacher and principal evaluation regulations, which will be required to be fully implemented beginning in September 2013. Before we begin a spirited debate on the details of the department's prescriptive approach in the code, we should take a moment to focus on the goals of educator effectiveness and the realities of implementing school reforms, at the ground level, in New Jersey schools.
Ready or not, all schools will begin to conduct evaluations using new models, new criteria, and new processes — and the results of these new evaluations will generate new consequences for those who underperform. Clearly, change is coming, and while NJPSA supports many school-reform initiatives, we must take advantage of the opportunity we have in front of us to do it right and ensure that change equals improvement in student learning.
The DOE states that, “by implementing robust and meaningful teacher and principal evaluations, we aim to improve teacher quality and thus student outcomes. A meaningful evaluation system is critical for helping New Jersey educators improve education for all New Jersey students.”
It is important to remember, however, that educator evaluation is not, in and of itself, reform. It is intended to be a driver of school reform. Real reform can only begin when we deepen the conversation about teacher and leader practice from a focus on evaluation checklists and labels to what is needed to effect change: time and resources to focus on what truly matters — higher levels of student achievement.
Ensuring schools have the tools, structures, and resources to support a deep dialogue related to learning is the real reform. This will allow the new tenure reform legislation to be implemented fairly and personnel decisions to be made based on reliable and valid data.
As Michael Fullan points out in his article "Choosing the Wrong Drivers for Whole System Reform," real reform takes place when schools have the time and resources to focus on student learning, instruction, and assessment. As full implementation of both the teacher and principal evaluation systems looms for September 2013, it is imperative that boards of education, district leaders, and the DOE ensure that principals and teachers have a viable curriculum based on the Common Core Standards; valid and reliable assessment tools to measure growth in every subject area (tested and nontested); and time to work in professional teams to set growth targets, analyze data, and provide the appropriate instructional interventions for every student.
Schools will also need enough supervisory personnel to fulfill the number of required observations and conferences so that principals can attend to the myriad other duties that go with managing a school and providing strong instructional leadership. Developing these foundational structures that underpin a meaningful evaluation process will not happen in all schools by September 2013. New Jersey schools that have been involved in a two-year pilot of the evaluation process will attest to the fact that they are still creating the building blocks to support meaningful implementation.
Educators also need high-quality professional learning opportunities beyond “being trained” in an evaluation model. NJPSA has been working collaboratively with the Department of Education on ways to support education leaders so that they can learn more about the Common Core, assessment design, instructional models, and the leadership and school culture necessary to foster shared accountability for student learning.
The Widget Effect, a major 2009 research project by The New Teacher Project, was one impetus for reforming the evaluation system. This report focused on the inability of existing evaluation tools to distinguish between different levels of educator performance, finding, among other things, that nearly all teachers were rated as "good" or "great," even in schools where students failed to meet basic academic standards.
A recent Education Week article points to the experience of several states that have implemented new teacher evaluation models. In short, the Widget Effect is still alive and well, as there remains little variation in educator ratings. If the focus remains on the compliance components of evaluation systems rather than on the practices and processes that create true education reform, then New Jersey is likely to see similar results.
We agree that a meaningful evaluation system that includes measures of student growth is essential for helping educators improve practice. We also believe, however, that evaluation alone will never accomplish this task without the basic foundational components upon which a meaningful evaluation system rests. Without the time and resources that truly allow that system to be used to support both student and educator learning, the outcome we seek will never be realized: every student college- or career-ready.