Member Voices: Bring It On: Teacher Education Ready for Sensible Evaluation

This post originally appeared in Dean Feuer’s blog, “Feuer Consideration,” and is reposted with permission. The views expressed in this post do not necessarily reflect the views of AACTE.

The dean of the Curry School of Education at the University of Virginia recently wrote an op-ed for The Washington Post that was well-meaning but misleading. It was surprising and disappointing to see a distinguished educator miss an opportunity to dispel conventional myths and clarify for the general public what is really going on in the world of teacher preparation and its evaluation.

For those who may have missed Robert Pianta’s short article, here is a summary and rebuttal.

With reference to reactions that have been voiced to proposed new federal regulations for the evaluation of teacher preparation programs, Dean Pianta says he is embarrassed that “professionals responsible for the preparation of teachers seem to oppose so adamantly efforts to evaluate the competence of the workforce they produce . . . .”

He needn’t be so embarrassed. Although generalizations are always a bit dangerous, I would argue that the educator community, with all its diversity, has a shared understanding of concepts of accountability and quality control and is not afraid of evaluation. The majority of teacher educators recognize that our teaching force needs improvement, and that with significant retirements expected along with growth in the elementary and secondary school age population, now is the time for action to assure that all our kids have excellent teachers in their classrooms.

But like their colleagues in law, medicine, engineering, and business, teacher educators rightly insist on sensible evaluation, a modicum of basic trust, and a fair application of the results. They want to get the right data—and to get the data right. They rightly do not have much confidence in the current system of evaluating teacher preparation, which has done a poor job of identifying low-performing preparation programs or helping them to improve. But when a treatment is not effective, the rational scientific response is not to substitute an alternative that has been shown to be worse; it is rather to work toward the development of better treatments.

The fact is that there are many remarkable efforts under way, at leading institutions, to evaluate and upgrade teacher preparation and provide models for wider adoption. Two examples Pianta surely knows about but chose not to mention are TeachingWorks at the University of Michigan and edTPA, created at Stanford with the help of other educators from across the country. Pianta also neglected the multiyear effort by the Council for the Accreditation of Educator Preparation (CAEP), which has involved literally dozens (if not hundreds) of teacher educators, teachers, and assessment professionals and has led to a new set of standards, including quantitative metrics of student achievement growth. Like many major reforms, the CAEP process has encountered criticism and some opposition, but its goals and principles have also garnered tremendous support from professionals in teaching, assessment, and accreditation.

Are these innovations perfect? Certainly not. But they reflect a commitment to balance, reasonableness, and utility—themes that were central in a recent report of the National Academy of Education—which many professionals find either missing or unclear in the proposed federal regulations. That is why the same community that devotes countless hours to the evaluation and enhancement of teacher preparation has taken the time to respond so vocally to the federal proposal.

Chief among the concerns articulated by many faculty and deans (including me, in a longer essay that appeared in the Chronicle of Higher Education) is the misapplication of so-called "value-added" or growth models, along with the possibility that the new data requirements will undermine the goals of TEACH grants that aim to increase the quality and supply of teachers working in poor schools.

Probably the saddest thing about Pianta’s analysis is the phrase “teacher wars,” which in his formulation seems to emphasize (and blame) only one side—those overly defensive teacher educators fighting against the virtuous forces of assessment and accountability. The even more painful truth, though, is that many teachers do find themselves embattled in a punitive and hostile system that distracts attention from their classroom challenges and undermines real professional improvement. Instead of warring with teachers, we should be working with them toward the shared goal of student progress in all aspects of learning—while not shying away from encouraging the bad teachers to find other work.

Whether the new federal system will advance us toward this goal or pour more fuel on the flames of discontent and anger is the question that troubles so many conscientious teacher educators. The U.S. Department of Education should be commended for its willingness to intervene in this extremely important issue. And though I don’t speak for them, I believe strongly that the teacher educator and measurement communities are ready, willing, and eager to cooperate and provide their best technical and experiential judgment toward the design and implementation of sensible evaluations.


Michael Feuer

Graduate School of Education and Human Development, George Washington University, and President, National Academy of Education

Comments (1)

  • Frank Murray

In the discussions TEAC had with the 200+ programs it accredited, we never heard anyone try to lower our standards. What we found instead were programs proud of their graduates' accomplishments and eager to display the evidence they truly relied on to support their beliefs. What looked like complaining of the sort that embarrasses Bob Pianta and others was irritation with requirements by states and others to be held accountable by measures in which they had no confidence. They were simply unwilling to be defined by measures that were patently invalid for the purposes to which they were being put by others. With regard to the evidence that mattered to them, however, they were more than willing to let the accreditation process inform them about its value in supporting their claims about their graduates — and to have it lead them to make needed modifications when the data weren't fully supportive. One other feature of our accreditation process, now embedded in CAEP, was that our site visitors invariably found better evidence for the program faculty's claims than the faculty had advanced in the first place.
