Deans for Impact Policy Agenda Calls for Better Data Access
Navigating the opportunities and challenges that new data sources and reporting requirements present was a frequent theme at this year’s AACTE Annual Meeting. In one well-attended session, representatives of the group Deans for Impact (DFI) released their latest policy paper, From Chaos to Coherence: A Policy Agenda for Accessing and Using Outcomes Data in Educator Preparation, also described here on the DFI blog. (You may recall that DFI, founded in 2015 by Benjamin Riley after he left the NewSchools Venture Fund, shares AACTE’s commitment to using outcomes-focused data to inform and improve educator preparation. Its 22 member deans include 15 from current AACTE member institutions, many of whom serve or have served on AACTE committees and in other leadership roles.)
The brief calls on policy makers to make better data on graduates’ performance in the field available to programs—an important priority that resonates across the educator preparation profession. As the report notes, despite widespread calls for connecting evidence of new teachers’ effectiveness back to their preparation programs, “there has been no coordinated effort to provide these programs with valid, reliable, timely, and comparable data about the [educators] they prepare” (p. 2). Individual institutions, state university systems, AACTE state chapters and their leadership group, and our accreditor have all called attention to this persistent problem.
To illustrate the point, DFI conducted case studies of its members’ programs (specifically, the 23 institutions that made up its membership last year) and found that only six had access to student achievement data connected to the teachers they had prepared. The accompanying chart shows the patchwork availability of seven postgraduation data points across the 23 providers.
Source: Deans for Impact, From Chaos to Coherence, p. 7
Even a quick glance at the data display validates DFI’s claim that state-level policy makes a difference: The two institutions with the most complete data sets are both in North Carolina, and that is no accident, given the UNC system’s Educator Quality Dashboard. The value of multiple sources of evidence, which together offer a comprehensive picture and multiple perspectives, can’t be overstated. And while locally developed instruments will always be part of the picture, the lack of common instruments precludes benchmarking that would be useful to institutions. Readers familiar with the Key Effectiveness Indicators framework developed by Teacher Preparation Analytics for CAEP will recognize it in the detailed breakout of data coverage (Appendixes A and B); the case studies are essentially a local application of that framework.
In pursuit of solutions, some institutions and coalitions have managed to build their own data systems, often with educator preparation providers (EPPs) rather than policy makers taking the lead. For example, a 2011 accreditation self-study from New York University’s Steinhardt School reported graduate follow-up data, including surveys and value-added data from graduates’ students in NYC schools (that report is available in full here as a PDF). In a statewide effort over a decade ago, a group of State University of New York deans was awarded a federal grant to develop a system for tracking new teachers into the state’s schools (although the flow of data ended with the grant; not until Race to the Top, a decade later, was the effort revived). AACTE has worked on the issue at the national level as well: We lobbied successfully for a provision in the Higher Education Opportunity Act of 2008 requiring states to provide EPPs with the data on their graduates contained in state data systems, and we subsequently worked with the Data Quality Campaign on a policy template to guide decisions about what teacher data states should collect in the first place.
So while policy makers have a vital role to play in building and maintaining data systems that provide reliable data at a useful grain size, the field has a vital role as well. The DFI policy brief rightly points out that systems have to be developed in conversation with providers; consider how the first reports in states like Louisiana returned data to providers in aggregated forms too coarse to use. Reporting practices have, happily, become more useful in response to guidance from the field.
The DFI brief concludes with a more speculative idea: Using the “GREAT Act” provisions of the Every Student Succeeds Act (ESSA), states could develop new processes to recognize and reward programs that voluntarily embrace an outcomes-based performance system, roughly analogous to the building industry’s LEED certification. This part of the new law allows federal dollars to support teacher and school leader preparation “academies” with a major clinical component; ongoing approval and funding depend on evidence of program completers’ effectiveness in raising student achievement. The act gives providers much latitude but limits states’ ability to set boundaries: Authorizers may not impose requirements for facilities, faculty credentials, content preparation (a content test must be allowed to suffice), course structure, or accreditation.
While the language in the act leaves so much room for mischief that AACTE and many others opposed it in draft, in the right hands—as Sharon Robinson noted in a recent column—it may prove a useful tool for addressing shortages or other local needs. Indeed, states and institutions should consider all allowable uses of ESSA funds. It is vital that preparation providers be at the table as states and districts work out how they will use the funds.
The DFI brief provides a welcome shot in the arm for the field’s ongoing calls for better-quality data, and its proposal to use professionally established frameworks to audit local data systems may be particularly worth a look in states seeking to address certain personnel shortages. Give it a read here.
Tags: advocacy, Annual Meeting, data, federal issues, teacher quality