Member Voices: Improving Teacher Preparation: Right Destination, Hazardous Route
This post originally appeared in The Chronicle of Higher Education and is reposted with permission.
With high rates of retirement by an aging teaching force and continuing growth in school enrollments, we as a nation need more than ever to focus on how, where, and how well we prepare our future educators. Fortunately, the U.S. Department of Education has recognized the need to move on those issues. But one of its proposed solutions, in the form of regulations for evaluating the quality of higher-education programs that prepare elementary and secondary school teachers, could take us down a hazardous track.
The federal government spends close to $3 billion a year on a variety of programs aimed at improving the quality of elementary and secondary teachers. About $100 million of that sum goes toward TEACH grants of up to $4,000 per year to students who agree to serve as full-time teachers in critical subject areas at high-need schools for at least four years. Those well-intentioned investments in teaching—a profession that most Americans rightly believe contributes considerably to society’s well-being—come with a legitimate demand for evidence that taxpayers’ dollars are being well spent.
Unfortunately, the metrics now in place have not produced very believable or reassuring answers. According to the Department of Education, “over the last dozen years, 34 states have never identified a single low-performing or at-risk teacher-preparation program” in traditional institutions of higher education.
And that, sadly, leads to the commonly heard but woefully inaccurate conclusion that the teaching force is brimming with incompetents who were trained in subpar programs. Extravagant rhetoric notwithstanding, there is good reason to support the development and adoption of an accountability system that more forcefully separates good teacher-preparation programs from lousy ones and that provides usable information to the hundreds of institutions that are working hard to do better.
In response, the department has announced plans for a new approach to the rules for assessing quality, one the secretary of education hopes will ensure that “the measures by which states judge the quality of teacher-preparation programs reflect the true quality of these programs and provide information that facilitates program self-improvement and, by extension, student achievement.”
Evaluation is important, but it needs to be done sensibly. Without more attention to the subtleties of statistics and scientific evidence on appropriate uses and interpretations of evaluation tools, this laudable effort may yield a host of unintended negative consequences. Three elements of the proposed rules stand out as especially sensitive and have garnered the most intense reactions from educators, faculty members, deans, and experts in the uses of data for accountability.
First, a reliance on student test-score growth and so-called value-added measures to evaluate the quality of teacher-preparation programs generally, and determine eligibility for TEACH grants specifically, is not supported by sufficient evidence. Attributing test-score gains of individual students to their teachers raises formidable methodological questions in itself, but among experts there is even less confidence in the idea of further linking teachers’ ratings to the programs that prepared them.
Second, and particularly worrisome, is the likelihood that the plan would undermine the basic purposes of the TEACH program—to increase the number of teachers from low-income and minority backgrounds and to encourage new teachers to work in high-need schools. The regulations would establish federal quality standards by requiring states and institutions to collect data on specific indicators, such as employment pathways of recent graduates. But that requirement could backfire if it prompts teacher-preparation programs to discourage their graduates from working in troubled schools with potentially high turnover, for fear that the turnover might erroneously be attributed to the quality of the preparation program itself.
Third, requiring separate evaluations of each preparation program offered by an institution could undermine the intent and efficacy of innovative accreditation systems. The proposed regulations seem to require the equivalent of a program-specific accreditation system, which might collide with alternative approaches that have been in the works for several years. What the system does not need now is more confusion about the sources, purposes, and uses of multiple and potentially conflicting accrediting strategies.
Evaluating teacher-preparation programs is a necessary—and necessarily complex—undertaking. And it is an area in which the federal government has a completely legitimate role. But before moving ahead with a risky national-ratings system, policy makers and the institutions to be evaluated should understand more about the pros and cons, the evidence, and the likely effects of the regulations being proposed.
With the public-comment period for the proposed regulations now closed, the department is poised to continue the journey toward better evaluation, sensible accountability, and improved preparation of future teachers. It is clear that the destination is worthy—now it’s just a matter of choosing the smartest and safest route.
Tags: data, federal issues, program evaluation