Toward a Valid and Reliable Student Teacher Observation Tool
The North Dakota Association of Colleges for Teacher Education (NDACTE) received a 2017-2018 AACTE State Chapter Support Grant for work on supervisor training modules to enhance the reliability and utility of the state’s new student teacher observation tool. Other AACTE chapters have also recently pursued collaborative work around assessment instruments, including the chapters in Kansas and Ohio.
In 2016, the 12 member institutions of the North Dakota state chapter of AACTE collaborated to develop a student teacher observation tool (STOT). We were seeking a high-quality instrument to facilitate program improvement through meaningful, valid, and reliable data. We also knew that working together would decrease the workload for every institution and leverage resources and expertise across campuses. Finally, we were interested in adding to the common metrics used statewide to enable continued collaboration to improve teacher preparation in North Dakota.
The development of the STOT spanned 2 years and included multiple revisions that resulted in a tool with high internal validity and reliability. (The analysis report can be found here.) To continue this work, NDACTE members applied for and received a second AACTE state chapter grant to create training modules for university supervisors and cooperating teachers who evaluate student teachers using the STOT. The training modules will support interrater reliability and accuracy in scoring, which in turn will provide meaningful and useful data for candidate growth and program improvement.
Module development began with expert panels to establish “expert panel scores” for the training materials. The panels included at least five faculty members with experience observing student teachers and knowledge of the InTASC standards.
The scoring process involved several steps. First, the expert panel was given a rating sheet with the description of an InTASC standard and one indicator for that standard from the STOT. Next, the panelists studied the scoring instructions and rubric descriptions for each performance level. They then watched a 3- to 4-minute video clip of a teacher in a real classroom, focusing on the performance described in the indicator. Each panelist independently rated the teacher’s performance, documenting evidence to support the rating. A facilitator recorded the ratings, and the group discussed their ratings and evidence. Following the discussion, the panelists re-rated the teacher’s performance, and the mode of those re-ratings was used as the consensus score.
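The consensus step above, taking the mode of the panelists' re-ratings, can be sketched in a few lines. This is an illustrative helper, not part of the actual STOT tooling; the function name and the assumption of integer rubric levels are ours.

```python
from collections import Counter

def consensus_score(ratings):
    """Return the mode of the panel's re-ratings as the consensus score.

    `ratings` is a list of integer rubric levels; with a tie,
    the level re-rated earliest in the list wins here, though the
    panel process itself would resolve ties through discussion.
    """
    counts = Counter(ratings)
    score, _ = counts.most_common(1)[0]
    return score

# Example: five panelists re-rate a clip after discussion
print(consensus_score([3, 3, 2, 3, 4]))  # -> 3
```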
There are modules for early childhood, elementary, and secondary levels. Each follows the same format, beginning with a short lesson about reducing bias and increasing accuracy in scoring. This lesson is followed by two short, video-based tasks for which the evaluators rate the teachers’ performances for one indicator each time. A third task focuses on a case study using a written entry from a preservice teacher targeting an indicator in the category of Professionalism, InTASC Standards 9 and 10.
For each task, the modules are programmed to identify whether the evaluators’ ratings are within an acceptable range of the expert panels’ ratings. For the purposes of this training, one point is considered within acceptable range. If the rating is more than one point off from the expert panel’s rating, the evaluator is taken back through the lesson on evidence-based scoring and reducing bias. The modules are each designed to be completed in less than 30 minutes.
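The routing rule the modules apply, accept a rating within one point of the expert panel's score, otherwise send the evaluator back through the bias lesson, might be sketched as below. The function names and return values are hypothetical; the actual check is programmed in Qualtrics.

```python
def within_acceptable_range(evaluator_rating, expert_rating, tolerance=1):
    """True if the evaluator's rating is within `tolerance` points of the
    expert panel's rating (one point for this training)."""
    return abs(evaluator_rating - expert_rating) <= tolerance

def next_step(evaluator_rating, expert_rating):
    # A rating more than one point off sends the evaluator back through
    # the lesson on evidence-based scoring and reducing bias.
    if within_acceptable_range(evaluator_rating, expert_rating):
        return "proceed to next task"
    return "repeat bias lesson"

print(next_step(3, 4))  # -> proceed to next task
print(next_step(1, 4))  # -> repeat bias lesson
```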
At the end of each training module, evaluators can enter e-mail addresses for themselves and their institutional representative, and the system sends these recipients a certificate of completion for that module.
The modules are being hosted in Qualtrics and are in final development, to be available September 2018 at the NDACTE website: http://ndacte.org/.
Tags: assessment, funding, program improvement, state affiliate