Posts Tagged ‘program evaluation’
Sometimes the story is as good as the headlines, and sometimes it’s even better. The New York Times op-ed “Teachers Aren’t Dumb” (Sept. 8) by psychologist Daniel T. Willingham is a case in point. As Willingham notes, contrary to popular belief, new teachers are solid academic performers. And as his article asserts, they can benefit from the research on effective teaching that is being conducted in the schools of education that prepare them. Willingham also points out—with rhetorical hyperbole—that not all preparation programs are using the latest research. While program quality varies, the excellent preparation provided by the universities whose researchers he cites illustrates that teacher education has strong exemplars. Unfortunately, Willingham does not acknowledge the widespread change within the educator preparation community.
The direction of today’s preparation programs is truly good news. Willingham accurately identifies two guiding principles for improving teacher preparation and program accountability: evaluate programs based on graduates’ performance on a rigorous, credible culminating assessment, and base that assessment (and programs’ content) on evidence of what works best for student learning.
I am delighted to announce AACTE’s new Quality Support Initiative, which is designed to provide resources and support to educators interested in assessment and accreditation. Starting next month, we will offer Online Professional Seminars (OPSs) for faculty at AACTE member and nonmember institutions, undergraduate and graduate students, PK-12 teachers—or anyone involved in educator preparation.
As part of our mission to advocate and build capacity for high-quality educator preparation, AACTE has established this initiative to support the profession’s work in continuous improvement and accreditation. The OPSs provide professional development for individuals and promote organizational development for institutions in a convenient, flexible format.
This post originally appeared in Dean Feuer’s blog, “Feuer Consideration,” and is reposted with permission. The views expressed in this post do not necessarily reflect the views of AACTE.
The dean of the Curry School of Education at the University of Virginia recently wrote an op-ed for The Washington Post that was well-meaning but misleading. It was surprising and disappointing to see a distinguished educator miss an opportunity to dispel conventional myths and clarify for the general public what is really going on in the world of teacher preparation and its evaluation.
For those who may have missed Robert Pianta’s short article, here is a summary and rebuttal.
The following letter to the editor was published in The Washington Post on February 23, in response to the February 20 commentary by the University of Virginia’s Robert C. Pianta, “Teacher Prep Programs Need to Be Accountable, Too.”
Robert C. Pianta vastly oversimplified the narrative about accountability among those who prepare educators.
Educator preparation programs should indeed be accountable, and the profession has been busy creating data tools and processes for accountability. States such as Louisiana, California, and Georgia are working to determine the best ways to use data collected through existing assessments and surveys to document program impact. These systems rely on access to K-12 student achievement data as one indicator.
This post originally appeared in The Chronicle of Higher Education and is reposted with permission.
With high rates of retirement by an aging teaching force and continuing growth in school enrollments, we as a nation need more than ever to focus on how, where, and how well we prepare our future educators. Fortunately, the U.S. Department of Education has recognized the need to move on those issues. But one of its proposed solutions, in the form of regulations for evaluating the quality of higher-education programs that prepare elementary and secondary school teachers, could take us down a hazardous track.
Editor’s Note: AACTE’s two Research Fellowship teams will present a joint session at the Association’s Annual Meeting, Saturday, February 28, at 1:30 p.m. in Room A704 of the Atlanta Marriott Marquis. This post provides background on the fellowship based in New Jersey at Kean University, Rowan University, and William Paterson University.
Is there a difference in teacher persistence in urban districts attributable to specific pathways? Why do teachers say they persist in urban districts? Researchers from Kean University, Rowan University, and William Paterson University came together to explore these and other related questions as part of the AACTE Research Fellowship.
Editor’s Note: AACTE’s two Research Fellowship teams will present a joint session at the Association’s Annual Meeting, Saturday, February 28, at 1:30 p.m. in Room A704 of the Atlanta Marriott Marquis. This post provides background on the fellowship at the University of Southern Maine.
The recent release of proposed federal reporting requirements for educator preparation programs stirred up intense interest in the methods and metrics used to evaluate programs. As many people noted in their letters of comment to the U.S. Department of Education earlier this month, several of the proposed new measures are unprecedented and would require investment of significant time and money to collect, analyze, and report data on an annual basis.
’Twixt Scylla and Charybdis: Navigating the Paradoxes of Data Use, Accountability, and Program Improvement
Academic leaders in teacher education currently face unprecedented policy pressures to collect, report, and act on an intensifying array of program outcome measures. Moreover, many of the state and federal policies driving these pressures are saturated with paradox, attempting to address multiple and often contradictory goals. Perhaps the most fundamental is the essential tension between policy goals of identifying and eliminating “low-performing programs” and those of “program improvement.” Coping with contradictory discourses and policies around accountability, program improvement, and “data use” has become a fact of life for virtually all contemporary teacher educators.
A study of 30 teacher residency programs funded through the federal Teacher Quality Partnership (TQP) Program finds that graduates of the residencies feel more prepared at the start of their careers and more supported during their time in the classroom than their same-district peers from other pathways.
As teacher educators wait to see the U.S. government’s latest proposal for rating their programs, a new report commissioned by the Council for the Accreditation of Educator Preparation (CAEP) attempts to lay out a useful framework of “key effectiveness indicators” to answer the fundamental question: How do we identify high-performing preparation programs that routinely produce effective teachers (as well as programs that do not)?