Tennessee EPPs Optimistic About Changes to State Report Card
On December 2, 2015, the members of the Tennessee Association of Colleges for Teacher Education (TACTE) held their collective breath as the Tennessee State Board of Education released the 2015 Report Card on the Effectiveness of Teacher Training Programs. After 5 years of publicity nightmares as programs’ ratings and rankings received widespread media attention, would this year’s report be any better?
Back in 2007, the Tennessee General Assembly passed legislation requiring the publication of a report on the effectiveness of educator preparation programs (EPPs) throughout the state. The report was to provide the following information on program graduates: placement and retention rates, Praxis II scores, and teacher effect data based on the Tennessee Value-Added Assessment System (TVAAS). Meghan Curran, director of Tennessee’s First to the Top programs, noted, “It is our intent that the report cards will help institutions identify both what they do well and where there is room for growth based on the outputs of their graduates.”
While that might have been the state’s goal, it has not been the reality. The reports provided too little information for anyone to make informed judgments about graduates or licensure programs. Instead, since the release of the first report card in 2010, faculty and administrators of Tennessee EPPs have met each annual release with fear and trepidation.
Until last year, the Tennessee Higher Education Commission produced the report card with support from Race to the Top funds. When that funding ended, responsibility for the report shifted to the Tennessee State Board of Education, beginning with the 2015 edition.
So how did it turn out? The good news for Tennessee educators was that the 2015 report placed less emphasis on ranking programs and more on helping them improve, with the promise that future reports would provide more information for continuous improvement. While the members of TACTE hope future reports will be helpful, they still have concerns about many aspects of the current and past reports. For example:
- While the General Assembly required data on placement and retention rates and Praxis II scores, judgments about programs have been based solely on TVAAS teacher effect data.
- Of the 43 EPPs in Tennessee, six cannot be evaluated because of low numbers of completers. Therefore, the report is not useful to these institutions for program approval, accreditation, or continuous growth.
- A large percentage of graduates are not included in the report, which provides data only on graduates in their first 3 years of teaching. Between 2011 and 2014, 13,833 candidates completed a teacher education program in Tennessee, yet only 2,272 (16%) were included in the report. Why? Graduates who taught outside Tennessee, taught at a private school, or taught a nontested subject were excluded.
- The report provides no way to determine which licensure programs were weak and which were strong. For example, TVAAS data for students in grades 4-8 mathematics do not indicate whether those students’ teachers held an endorsement in elementary education, middle grades education, or high school mathematics. Although institutions began receiving individual-level data in 2013 for teachers who completed their licensure programs, hours of research are required to make program-level judgments from this information.
- No trend data are provided in the report, yet cited areas of strength and weakness shift from year to year. One year, data might show a weakness in a program for those graduating with a teaching license in Algebra I; the following year, that weakness might not appear. Without trend data, it is impossible to know whether an actual problem exists or the result was a one-year anomaly.
- The report includes individuals who were teaching outside their endorsement area or teaching on a waiver. Many schools are allowed to have faculty teach one subject outside their endorsement area. Unfortunately, when a graduate endorsed in history has a low TVAAS score in English I, the institution’s English licensure program appears weak when in reality the graduate was never prepared to teach English.
- Teachers may be linked to a particular institution because they graduated from it, even though it may not be where they received their teacher preparation. This is especially true for those earning their teaching credentials at the postbaccalaureate or master’s level. Including their data skews the evaluation of the degree-granting institution.
- In evaluating an institution, those hired on an alternative license are given the same weight as those who earned a traditional license. It makes no sense to weight a candidate who completed a 120-hour traditional program the same as a teacher on an alternative license who took no courses from the institution.
- Until the most recent report, the reports highlighted the programs viewed as most successful, but the criteria for that determination were unclear. For example, in 2014, the following explanation was provided for the ranking: “Programs with 3 years of available TVAAS data were analyzed using the percent of results available compared to the percent of positive and negative statistically significant results for their combined Apprentice and Transitional completers.” Such explanations were useless and provided insufficient evidence for program change.
Nonetheless, members of TACTE are optimistic about the positive changes in the 2015 report. It is our hope that the original goal—to “help institutions identify both what they do well and where there is room for growth based on the outputs of their graduates”—will be realized in future reports.
Carlette Jackson Hardin is dean of the Martha Dickerson Eriksson College of Education at Austin Peay State University (TN) and president of the Tennessee Association of Colleges for Teacher Education.
Tags: program evaluation, research, state affiliate, state policy