Authors Discuss Research on ‘Opportunity to Learn’ in Teacher Preparation

This Journal of Teacher Education (JTE) interview features insights on the article “What Constitutes an ‘Opportunity to Learn’ in Teacher Preparation?” by Julie Cohen and Rebekah Berlin. The article was published in the September/October 2020 issue of the JTE. AACTE members have free access to the articles in the JTE online archives—just log in with your AACTE profile.

What motivated you to pursue this particular research topic?

The goal of the paper was to surface issues around measuring what happens in teacher preparation, in particular, the construct of “opportunity to learn” or OTL. Much of the prior research on OTL has relied on survey data, and scholars have often treated the idea of OTL as an objective reality, contingent on features of coursework or fieldwork made available in a given program. However, self-reports of “opportunities” are often divergent from other measures—like observations—of the same events. Rather than assume that self-reports tell us something conclusive about a particular program, we wanted to use the rich, multifaceted data we had from a longitudinal study of teacher preparation to analyze whether program features and candidate characteristics explain variation in reported OTL.

Were there any specific external events (political, social, economic) that influenced your decision to engage in this research study?

The specific impetus for the paper was evidence that we often see more variation among graduates of the same teacher education program than we do between programs, and this was also the case with our data. Because graduates of the same program end up performing very differently as teachers of record, it is hard to isolate particularly “effective programs” and then replicate their corresponding features. This large within-program variation is also used in political debates to question the value of university-based teacher preparation, writ large. If, the logic goes, a particular teacher preparation program were providing sufficiently robust training, then this would be reflected in conclusively stronger performance across program graduates.

As a teacher educator, the logic of this argument—that teacher preparation “must not matter” because we struggle to consistently identify stronger or weaker programs—always struck me as faulty. There are so many reasons why we might see more variation within a teacher education program than between programs. Candidates take methods courses with different instructors who may be differentially experienced and skilled. So, too, do they work with mentor teachers who provide different kinds of learning experiences in K-12 classrooms.

Beyond actual differences in coursework and fieldwork, candidates in the same program are also different from each other in terms of content knowledge, beliefs, pedagogical skills, and personality traits. Anyone who has taught a methods course before can attest that candidates learn different things over the course of the semester, despite our best efforts to reach and support everyone. Moreover, when we receive course evaluations, we are confronted with the fact that candidates also assign distinct types of value to the learning experiences we provide.

Our goal with this paper was to empirically explore between- and within-program variation to identify factors that do and do not contribute to observed variation. Ultimately, our data suggest that an OTL may well be an individually experienced phenomenon rather than a uniform feature of a teacher preparation program. We argue that we would be well-served to attend to individual characteristics to better understand how candidates experience and potentially benefit from different learning opportunities in teacher preparation.

What were some difficulties you encountered with the research?

There is always 20/20 hindsight when you end up pursuing research questions that were not central to your original study design, but we wished we had more information about how candidates made sense of the experiences they had in methods courses. We had not observed the content methods courses that we analyzed in the paper, nor had we interviewed the candidates during teacher preparation about how they perceived the opportunities to learn in these courses. We had to rely on surveys, interviews with methods instructors, and course documents—like syllabi—to begin to understand what kinds of opportunities to learn teacher candidates might have experienced in these programs. There were pragmatic reasons for this. We studied five relatively large programs, each with many sections and instructors for any given methods course, making observational data collection logistically challenging and resource-intensive. We interviewed program graduates multiple times when they were teachers of record, but, like many other research teams, we had assumed that surveys would give us a relatively reliable measure of OTL during the preparation phase. Ultimately, we made the most of the data we had, but we certainly would have loved to have more data to speak to these individual experiences.

What current areas of research are you pursuing?

Several of my current research projects focus on the use of mixed reality simulations as a practice space and assessment platform for pre-service teachers. This work has been very exciting because we are able to observe how candidates develop classroom practices along distinct trajectories. Unlike resource-intensive, classroom-based observation research, the simulation platform provides a tremendous amount of standardized data across large numbers of candidates in different programs. We also collect a lot of quantitative and qualitative data about individual candidates, so we can begin to develop more nuanced theories about the relationship between individual characteristics and the development of teaching skills.

What new challenges do you see for the field of teacher education?

This is not a new challenge, but teacher education would benefit from more systematic and large-scale systems for tracking both what happens during preparation and after, as candidates become teachers of record. We were only able to conduct many of the analyses we share in the paper because one of our participating universities, Oriole, collected a tremendous amount of data about their candidates. Ideally, all five universities would have collected comparable data, so we could have looked at these relationships across a larger and more varied group. Moving forward, it would be great if universities could work together to build more comprehensive data systems, and then partner with states and local districts to track growth over time. There are so many questions we want to answer about teacher preparation that are largely unanswerable due to data issues. The second author, Rebekah Berlin, is now engaged in such work at Deans for Impact, but we need many more data sharing systems among the many institutions that prepare teachers. Our research will be better able to capture the myriad ways that candidates develop when we coordinate our efforts.

What advice would you give to new scholars in teacher education?

I would encourage new scholars to think creatively about how to measure the complex interactions that occur during teacher preparation. Technologies and corresponding methods are advancing rapidly and could provide a whole new world of insight into what is happening during the pre-service period. For instance, we might be able to use natural language methods to analyze the linguistic features of “triad conferences” between clinical supervisors, mentors, and candidates. Some of these features might be impossible for human raters to reliably code, but they may well have strong relationships with candidate outcomes. Working with methodologists and others outside our immediate field might help provide novel insight into longstanding questions in teacher education, including what constitutes an “opportunity to learn” and how we can measure those opportunities well.

Second, in the paper we argue that there is a real need for more mixed-methods work that allows us both to examine broad patterns among large numbers of candidates and to delve deeply into the nuances of individual differences. Again, such efforts are more likely when scholars with deep knowledge about the particulars of teacher preparation work collaboratively with those who have expertise working with large-scale data.

 

