I don’t do well with fads. Mood rings, pet rocks, and boy bands tend to rush by without drawing much of my thought or interest. So too, the flavor of the day in public policy. I am long past the stage where the “new, new” thing to save the world appears, only to disappoint and recede with more silence than the noise with which it arrived.
So I find it somewhat surprising that a new fad promising nothing but hard work, but with the potential to address public policy challenges in a unique way, has piqued my interest. “Collective impact” is now the rage. Many national funders are calling for its use as a framework for action, and many communities have adopted it as the base for addressing education reform, juvenile justice reform, and workforce development.
So what is collective impact? The term encompasses a great deal, but at base it is a data- and measurement-driven approach to tackling policy challenges. The best-known version of this approach, Strive, was launched in 2006.
Institutional and community leaders in Cincinnati and northern Kentucky lamented the limited results from fragmented efforts to boost student achievement. They recognized that they needed to improve educational outcomes not only as a civic obligation but also to make their region economically competitive. Higher education leaders, business leaders, local government officials, and nonprofit leaders all agreed to work collectively on improving educational outcomes from cradle to career. More than 300 organizations that affect student achievement participate in the Strive partnership.
The partners set targets for different portions of the pipeline and defined data needs based on accountability measures to which all had agreed. The partnership achieves collective impact through several strategies described in its mission statement: “collaborative action around shared priorities and outcomes; building a culture of continuous improvement by using data effectively to drive improved results for children; and aligning our community’s leadership capacity, volunteer resources, and funding to what works for children.”
The Strive partnership terms this its Roadmap to Success. What is key here is that as the process unfolded, Strive partners were able to engage the superintendents of schools in three districts: Cincinnati, Covington, and Newport. This was no small accomplishment given the blame district leaders are given, rightly or wrongly, for lagging student achievement across the country.
The Strive partnership addressed the issue of blaming school systems by asking members of the executive committee to sign a memorandum of understanding (MOU) laying out the rules of engagement and limiting finger pointing. The MOU encouraged shared accountability for the problems, however defined, and ways to address them. The group agreed on five major goals relating to either raising test scores or increasing participation rates: Every child is prepared for school; every child is supported inside and outside of school; every child succeeds academically; every child enrolls in some form of postsecondary education; every child graduates and enters a career.
Apart from engaging the three school systems, the Strive partnership developed strategies for coordinating organizations that provide support services to youth and families in the pipeline. So the overall thrust of the effort rests not only on accountability measures but also on strategies for building a coordinated system to assist the children and families who pass through it. Existing public, private, and philanthropic resources are then aligned to help the efforts come to fruition. The phrase “antipoverty strategy” is associated with Strive. Indeed, it has been compared to the highly regarded Harlem Children’s Zone in its breadth and comprehensiveness.
Has an evaluation been done? Strive has gone through a process evaluation, which simply means that the evaluators observed the process and talked to stakeholders. No evaluation using tests of statistical impact has been done so far, but the Strive partnership might argue that evaluation comes at the many points where the partners assess indicators for each of the five priorities. These are constantly assessed for progress, and major and minor changes are made when a target is not achieved. So the goal is not a static evaluation, but continual improvement. Cincinnati, Covington, and Newport still embrace the continual-improvement process even as key leaders have left the stage. This is always the test of the stable as opposed to the idiosyncratic.
So why my concern that collective impact may be a fad, especially in the face of replications and admiring copies all over the place? Experience suggests that this is an important method for improving outcomes in education and other policy areas. It reduces what economists might call the friction costs involved in public policy reform. But what we often do in replication is go straight for the easy part, forgetting that preparation, struggle, conflict, and failure are essential parts of a process like this. Most important, this type of process cannot be accomplished in a two-year grant cycle. It takes strong leaders and partners who understand that this is a long-term commitment. Is it what we need here in New Jersey? Maybe, but if communities and others consider its use as a tool, we should not treat it like a fad.