Being Accountable for Program Effectiveness
The colleges and universities that prepare our nation’s educators are deeply committed to program quality, innovation, and accountability, and important progress is under way in each of these areas at the institutional, state, and national levels. While our priorities do not depend on the presence or absence of federal regulations, the regulations Congress voted down last week would have impeded this progress by redirecting already-tight resources toward an onerous new reporting and rating system for teacher preparation programs. Now, thanks to the field’s robust advocacy, our professional commitments can proceed unhampered by burdensome mandates and prescriptive yet unproven methods.
Even absent these regulations, educator preparation providers (EPPs) participate in numerous public reporting and quality assurance systems. Title II of the Higher Education Act requires both EPPs and states to submit annual reports to the U.S. Department of Education, and states must report programs identified as at-risk or low-performing. Programs must also meet state review standards, and several states have developed data dashboards that display information for all providers, helping the public compare program quality. Many EPPs also undergo national accreditation through the Council for the Accreditation of Educator Preparation (CAEP) – a professional peer-review process built on standards developed by the field and grounded in research.
But there’s more to the story than these external accountability mechanisms. EPPs operate at the critical intersection of PK-12 and higher education and are deeply invested in improving the lives and learning of students throughout their communities, including the candidates in their own programs. To inform continuous improvement, EPP leaders want and need data on their programs and graduates, including how those graduates perform on the job. In many states, this information is difficult to collect and maddeningly hard to access. While better access to impact data is a laudable goal, the federal regulations mandated a system that was simply not feasible.
On the bright side, some states are beginning to solve this problem. They have developed data dashboards and new instruments to collect and report information on program impact; Georgia and Louisiana are two leaders whose early experiences are informing efforts elsewhere. Coalitions of EPPs, district leaders, standards boards, and state education agencies are now developing common graduate and employer surveys, for example, to ease the reporting burden and improve response rates. AACTE, along with our state affiliates, is working with Westat and the education departments in Hawaii, Iowa, Kansas, New York, and Pennsylvania to support these initiatives; others are eager to join the next stage of the project.
AACTE is also exploring how EPPs successfully construct functioning quality assurance systems from multiple sources of evidence and data, and we aim to study how programs can routinely and effectively engage graduates, employers, and other stakeholders in evidence-based program improvement. Many of our members are clamoring for better program impact information and engagement metrics.
The field itself has developed one essential category of evidence: measures of teacher candidates’ knowledge, skills, and performance before they even graduate. Thanks to the collaborative efforts of teacher educators, measurement experts, policy leaders, and others over just the past few years, a remarkable 75% of the students enrolled in teacher preparation programs nationwide this fall will take a performance assessment as a program or licensure requirement. The results of these assessments provide valuable feedback about what graduates can do and where programs need to improve.
Teacher educators’ commitment to accountability goes hand in hand with their work to innovate and improve their programs. Their mutually beneficial collaboration with PK-12 schools – which are both the consumers of EPPs’ “product” and suppliers of future students – leads to new and better clinical partnerships, featuring locally determined variations that range from preservice residencies to co-teaching arrangements and even induction programs for novice educators. The AACTE Clinical Practice Commission is helping to advance high-quality clinical programs by articulating more explicit expectations, defining common terms, and assembling the latest research and models for clinical preparation and partnerships so they can be operationalized more broadly across the country.
Two federal investments in teacher preparation have contributed significantly to these successes. First, many clinical partnerships have thrived with the support of federal Teacher Quality Partnership grants. Second, with the support of federal TEACH grants, programs are diversifying their candidate pools and recruiting students committed to teaching in high-need fields and districts. AACTE and its members would love to see these beneficial programs continue.
Eliminating the federal regulations for teacher preparation programs, however, was a good call. Proponents of the rule assumed that the teacher preparation field is unwilling to change or to be held accountable for its impact, and is content with the status quo. In fact, EPPs and their partners are hard at work improving programs, developing better measures of effectiveness, conducting needed research, and advocating for effective policies. Now let’s keep it up.