TAG Session Explores Fairness, Evidence in EPP Accreditation

The authors are members and leaders of the AACTE topical action group called “All Things Accreditation.” The views expressed in this post do not necessarily reflect the views of AACTE.

At the most recent AACTE Annual Meeting, we hosted a session on behalf of the All Things Accreditation Topical Action Group (TAG) to explore the expectations of Standard 4 of the Council for the Accreditation of Educator Preparation (CAEP). The session, “A Courageous Conversation About Fairness, Justice, and Accountability in EPP Assessment and Impact on P-12 Student Learning,” aimed to evaluate current practices specific to CAEP Standard 4 as well as the merit of using standardized or criterion-referenced state tests designed to evaluate PK-12 student learning as a metric to judge the viability of educator preparation providers (EPPs). We discussed complications around value-added measures (VAM) and the fairness of judging EPPs by their graduates’ impact on student learning.

We designed the session to be interactive, offering brief introductory comments before asking participants to respond to seven guiding questions, first in table conversations and later as a whole group:

  1. Which assessments are currently being used in your state to evaluate your EPP's effectiveness in impacting P-12 student learning?
  2. How are these tests chosen?
  3. What data are shared with your EPP?
  4. How is sharing of data handled/managed?
  5. How much do the tests reveal about the quality of your program?
  6. What is the level of cultural competence in the assessments that are being used?
  7. What is your wish list of assessments that you believe would indicate how well your completers do in their classrooms?

After discussing each question, participants used a shared Google document to report their answers by state; represented states included Arkansas, Ohio, Oklahoma, Minnesota, and Tennessee. According to the survey, a majority of these states are using or planning to use a VAM metric to provide EPPs feedback on completer impact on PK-12 student learning. However, survey respondents viewed VAM as having limited utility due to confounding variables, particularly variables commingled with cultural competence, those involving school context, and how data are presented to EPPs.

States currently appear to be using a variety of measures to provide feedback to EPPs, including not only VAM but also teacher retention rates, teacher evaluations, and student achievement results. Few respondents felt EPPs could be fairly evaluated based on PK-12 student assessments or that completer impact is an accurate reflection of EPP quality.

All five states reportedly use measures including Praxis and edTPA testing for vetting completers as well as end-of-program, graduate, and employer surveys. Session participants saw these measures as more relevant and appropriate for judging EPP effectiveness.

We hope to continue this conversation at the 2019 AACTE Annual Meeting and have proposed a session in which sources of evidence for meeting accreditation expectations can be examined across EPP contexts to determine common ("best") practices and recommendations for supporting EPPs in demonstrating preparation program quality. CAEP has also been invited to be part of the session leadership.

Although we've already submitted the session proposal for peer review, we are seeking collaborators to serve as table leaders, who would monitor table discussions and record them in another shared Google document. If you would like to be involved in this session (if it is accepted), or for more information about the TAG, please contact Donna Wake at dwake@uca.edu.


Donna Wake

University of Central Arkansas

Sue Corbin

Notre Dame College

Sean Bedard-Parker

University of Minnesota Duluth