Posts Tagged ‘program evaluation’
AACTE and Westat are partnering with state chapters and education agencies this spring to pilot new surveys of beginning teachers and their supervisors. By developing common instruments to be used across states that can also be customized with state-specific questions, the partners aim to meet two needs at once: national benchmarks for preparation programs (as called for in accreditation standards) and state-determined priorities.
AACTE staff conducted exploratory work last year, collecting and studying state-level instruments currently used for surveying program completers in 13 states that were willing to share both their instruments and their most recent survey results. We found that all of the instruments align with the InTASC model standards for beginning teachers, although their length and emphasis areas vary. Meanwhile, we began talking with state education agencies (SEAs) and AACTE state chapters and member institutions to gauge their interest in consolidating these state and institution data collection efforts in a national-level instrument.
A new study finds that using observational ratings of beginning teachers may be a viable alternative—or a useful complement—to relying solely on controversial “value-added” modeling (VAM) in evaluation of educator preparation providers (EPPs).
An article about the study by Matthew Ronfeldt and Shanyce Campbell of the University of Michigan School of Education, published in the journal Educational Evaluation and Policy Analysis, is now available online.
The authors describe it as the first study to investigate the use of teachers’ observational ratings to evaluate their preparation programs and institutions, and the results are compelling.
“The demands for teacher preparation accountability continue to grow, from the proposed federal regulations to new accreditation standards,” said Ronfeldt, who was also the 2016 recipient of AACTE’s Outstanding Journal of Teacher Education Article Award. “We sorely need better ways to assess program quality. Although VAM makes an important contribution to our understanding of program outcomes, we likely need multiple measures to capture something as complex as preparation quality. We are excited to find that teacher observational ratings could be a viable supplement.”
It’s axiomatic that experts in a field are better equipped than outsiders to design interventions that will work. Yet in education, we face a constant barrage of external reform efforts that fail to incorporate professional knowledge and expertise—and they just don’t work.
This point is reinforced in recent research out of the National Education Policy Center. In this study, Marilyn Cochran-Smith and her colleagues at Boston College (MA) examine the evidentiary base underlying four national initiatives for teacher preparation program accountability and improvement. They find that only one of the initiatives—the beginning-teacher performance assessment edTPA, designed and managed by the profession—is founded on claims supported by research. With a measure that is valid, scoring that is reliable, and therefore results that are accurate, we have a serious tool for program improvement.
A new policy brief out of the National Education Policy Center (NEPC) reviews the evidentiary base underlying four national initiatives for teacher preparation program accountability and finds that only one of them—the beginning-teacher performance assessment edTPA—is founded on claims supported by research. The other three mechanisms included in the study are the state and institutional reporting requirements under the Higher Education Act (HEA), the Council for the Accreditation of Educator Preparation (CAEP) standards and system, and the National Council on Teacher Quality (NCTQ) Teacher Prep Review.
Holding Teacher Preparation Accountable: A Review of Claims and Evidence, conducted by Marilyn Cochran-Smith and colleagues at Boston College (MA), investigated two primary questions: What claims does each initiative make about how it contributes to the preparation of high-quality teachers? And is there evidence that supports these claims? In addition, researchers looked at the initiatives’ potential to meet their shared goal of reducing educational inequity.
Accreditation work involves considerable project management to track logistics and the activities of stakeholders. Resource management is standard business practice in academic units, but the usual tools are not well suited to tracking projects with due dates and multiple actors. Tune in to AACTE’s upcoming Online Professional Seminars (OPSs) to learn about specialized software and methods for managing assessment cycles, quality assurance systems, and accreditation submissions.
In a session starting January 25, OPS #6: Leveraging Accreditation for Quality Improvement will cover topics such as ethical considerations, tools, checklists, site visits, mock visits, and walk-throughs. Or join us starting February 8 for OPS #5: Preparing for Accreditation, where we’ll cover teamwork, readiness, calendar planning, document control, best practices, and more.
On December 2, 2015, the members of the Tennessee Association of Colleges for Teacher Education (TACTE) held their collective breath as the Tennessee State Board of Education released the 2015 Report Card on the Effectiveness of Teacher Training Programs. After 5 years of publicity nightmares as programs’ ratings and rankings received widespread media attention, would this year’s report be any better?
Back in 2007, the Tennessee General Assembly passed legislation requiring the publication of a report on the effectiveness of educator preparation programs (EPPs) throughout the state. The report was to provide the following information on program graduates: placement and retention rates, Praxis II scores, and teacher effect data based on the Tennessee Value-Added Assessment System (TVAAS). Meghan Curran, director of Tennessee’s First to the Top programs, noted, “It is our intent that the report cards will help institutions identify both what they do well and where there is room for growth based on the outputs of their graduates.”
Did you know that AACTE’s six Online Professional Seminars (OPSs) can be taken in any order? In fact, the seminars have no prerequisites, meaning you can skip what you already know and jump right into the professional learning you need most.
Or are you looking for a well-rounded understanding of assessment and accreditation issues for educator development, program improvement, and quality assurance systems? Then start from the beginning and run through the complete sequence of courses.
Offered through AACTE’s Quality Support Initiative, the seminars are designed to be both flexible and convenient. Each course is completed asynchronously over a 3- to 4-week period, and multiple session options let you work around your schedule. We’ll be starting several course sections this month, including some that run over the holidays, if that suits your needs—see the current schedule of available dates.
The immediate value of taking part in AACTE’s Online Professional Seminars is obvious: You get to enhance your peer network while gaining knowledge on crucial issues in the field, from assessment and data use to quality assurance systems and the nuts and bolts of preparing for national or regional accreditation. But there are other, long-term advantages to participating in the seminars offered through AACTE’s Quality Support Initiative.
The OPSs provide a framework that allows you and your institution to focus on your faculty. The professional development offered through the seminars strengthens your performance in your current position and prepares you for future ones. By developing participants’ skills regarding assessment and accreditation, the OPS series builds individuals’ confidence and enhances their competence.
Data are ubiquitous in this day and age, and making sense of all the numbers and trends can be overwhelming. Yet using data wisely is critical to be able to learn from experience and determine strategic directions for improving what we do. So where do we start—how do we identify what information we need and appropriate sources to use? How do we recognize patterns in the data and their lessons for our work? And how do we put it all together to improve our programs and demonstrate our accountability?
Sometimes the story is as good as the headlines, and sometimes it’s even better. The New York Times op-ed “Teachers Aren’t Dumb” (Sept. 8) by psychologist Daniel T. Willingham is a case in point. As Willingham notes, contrary to popular belief, new teachers are solid academic performers. And as his article asserts, they can benefit from the research on effective teaching being conducted in the schools of education that prepare them. Willingham also points out—with rhetorical hyperbole—that not all preparation programs are using the latest research. While program quality varies, the excellent preparation provided by the universities whose researchers he cites shows that teacher education has strong exemplars. Unfortunately, Willingham does not acknowledge the widespread change within the educator preparation community.
The direction of today’s preparation programs is truly good news. Willingham accurately identifies two guiding principles for improving teacher preparation and program accountability: evaluate programs based on graduates’ performance on a rigorous, credible culminating assessment, and base that assessment (and programs’ content) on evidence of what works best for student learning.