Research Fellows: Data-Collection Challenges Hold Implications for Accountability Measures

Editor’s Note: AACTE’s two Research Fellowship teams will present a joint session at the Association’s Annual Meeting, Saturday, February 28, at 1:30 p.m. in Room A704 of the Atlanta Marriott Marquis. This post provides background on the fellowship at the University of Southern Maine.

The recent release of proposed federal reporting requirements for educator preparation programs has stirred intense interest in the methods and metrics used to evaluate programs. As many commenters noted in their letters to the U.S. Department of Education earlier this month, several of the proposed measures are unprecedented and would require significant investments of time and money to collect, analyze, and report data annually.

With funding from AACTE’s Research Fellowship program, our research team at the University of Southern Maine is investigating the potential use of one of these proposed measures, job placement rates, as an indicator of program quality. That research is still in progress and will be reported in detail at the 2016 AACTE Annual Meeting; this year, we’ll share the challenges we have encountered in securing data to inform measures such as those proposed in the federal regulations.

In the initial stage of the project, we surveyed more than 1,400 graduates of teacher preparation programs, using contact information provided by multiple institutions to reach program completers. This process essentially modeled the data collection that state agencies would carry out themselves, and the challenges we experienced foreshadow those that state agencies (or programs) are likely to encounter in the future.

Our upcoming presentation in Atlanta will focus on our process findings and their implications for the proposed Title II reporting measures. We will share our survey’s response rates and patterns (spoiler alert: underwhelming) across student subgroups and institution types. We have also compiled historical survey response rates from selected states that already make systematic use of survey data from teacher preparation program graduates. We will discuss the implications for accountability measures that rely on survey data and suggest where such measures may need standards for minimum response rates and minimum respondent counts, minimums that would affect institutions differently depending on their size and student demographics. A simple illustration of that differential impact follows.
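For readers who want to see the arithmetic, here is a minimal sketch of how a combined minimum-rate and minimum-count standard can play out across programs of different sizes. The program names, counts, and the 50-percent and ten-respondent thresholds are all hypothetical, chosen only for illustration; they are not figures from our study or from the proposed regulations.

```python
# Illustrative only: hypothetical programs and thresholds, not study data.

MIN_RATE = 0.50  # hypothetical minimum response-rate standard
MIN_N = 10       # hypothetical minimum number of respondents

programs = [
    # (program, completers contacted, survey responses)
    ("Small rural program", 12, 7),
    ("Mid-size program", 80, 30),
    ("Large urban program", 400, 220),
]

for name, contacted, responses in programs:
    rate = responses / contacted
    # A program's results count only if BOTH thresholds are met.
    reportable = rate >= MIN_RATE and responses >= MIN_N
    print(f"{name}: {responses}/{contacted} responded ({rate:.0%}) "
          f"-> {'reportable' if reportable else 'suppressed'}")
```

In this toy example, the small program actually posts the highest response rate (58%) yet is suppressed by the minimum-count rule, while the large program clears both bars at 55%. That asymmetry is precisely the kind of differential impact on institutions of varying size that we will discuss.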

Of course, mandatory reporting requirements can also create opportunity. Programs sometimes lack data they would find useful and informative because they cannot access the information from their state education agency or do not have the institutional resources (time, money, or know-how) to collect and analyze it themselves. Compelling states to supply such data could provide the leverage needed to make that collection and analysis possible.

However, we must not lose sight of the goal: providing feedback that informs program improvement. If programs cannot obtain accurate information about their graduates, the opportunity for helpful empirical feedback is lost. Thus, when deciding which measures to incorporate into accountability systems, it is critical to consider whether the underlying data can be collected reliably and validly.

We hope you will join us in Atlanta to bring your perspectives to this conversation!



Amy Johnson

University of Southern Maine

Cathie Fallona

University of Southern Maine