Using value-added data from the Los Angeles Unified School District, the researchers determine individual teacher effect estimates and investigate their stability across models. The study also investigates the instructional practices of a subsample of 30 highly effective and 30 less effective sixth-grade mathematics teachers. Five classroom lessons are videotaped per teacher, and the videotapes are coded and analyzed by researchers who are blind to the teachers' value-added effectiveness. The reliability of the measured teaching practice is investigated using hierarchical linear models, and the practices of highly effective and less effective teachers are compared through analyses of variance and regression models.

Project Report

Understanding what constitutes effective mathematics teaching, and devising reliable and valid ways to measure it, are key prerequisites for improving instruction and increasing student learning. In this study we explored value-added models as one potential approach to measuring teacher performance, and then studied the relationship between teachers' value-added scores and observational indicators of instructional quality. Working in a large urban school district, we started by analyzing value-added data over a four-year period for a large sample of fifth-grade teachers. A subsample of teachers was videotaped teaching fractions, and these videos were analyzed using observational rubrics we developed to assess instructional quality. Our goal was to investigate the stability of value-added scores over time and to see whether teachers with higher value-added scores also had higher instructional quality as measured by our observational rubrics. As a secondary goal, we also explored the number of classroom lessons one would need to videotape to obtain accurate information about teaching quality for individual teachers.

Teacher value-added scores reflect the degree to which students of a given teacher learned more or less than expected, as measured by standardized test scores over a given school year. If students learned more than would be expected based on their prior learning, their teacher is said to have added value; if they learned less, their teacher did not add value or added less value.

Our methodological investigations revealed that fifth-grade mathematics teachers' value-added scores were reasonably stable over three consecutive cohorts of students. Further, the number of student test scores available was the largest factor affecting teacher rank-ordering and performance group designation over time. We also learned that teacher value-added scores estimated from multiple cohorts, which reflect a teacher's weighted average performance, provided the most robust and precise estimates of teacher performance.

To examine how instructional quality relates to teacher value-added scores, we recruited 57 fifth-grade mathematics teachers in the district and videotaped them five times as they taught a unit on fractions. To control for any systematic relationships between teacher value-added scores and student characteristics (e.g., the possibility that more effective teachers might be assigned to teach higher-achieving students), we videotaped only classrooms of teachers who taught in average-performing schools of average size and who had at least two years of teaching experience in the district. We reasoned that this design would allow us to find out whether teachers teaching broadly comparable students produced differential learning outcomes as measured by value-added scores on standardized tests. We observed notable variation in value-added scores within our subsample, with some teachers scoring considerably above, and some considerably below, the district average.
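To make the value-added logic described above concrete, the following is a minimal sketch using synthetic data: it regresses current-year test scores on prior-year scores and averages the residuals by teacher. The value-added models actually used in the study control for more student characteristics and rely on more sophisticated multilevel estimators, so all names and numbers here are illustrative assumptions, not the study's method.

import numpy as np

# Synthetic data: 20 teachers with 25 students each (illustrative only).
rng = np.random.default_rng(0)
n_teachers, n_students = 20, 25
teacher_ids = np.repeat(np.arange(n_teachers), n_students)

prior = rng.normal(0.0, 1.0, n_teachers * n_students)   # prior-year scores
true_effect = rng.normal(0.0, 0.3, n_teachers)          # simulated teacher effects
current = 0.7 * prior + true_effect[teacher_ids] \
          + rng.normal(0.0, 0.5, prior.size)            # current-year scores

# Step 1: predict current scores from prior scores (simple OLS).
X = np.column_stack([np.ones_like(prior), prior])
beta, *_ = np.linalg.lstsq(X, current, rcond=None)
residuals = current - X @ beta

# Step 2: a teacher's value-added estimate is the mean residual of that
# teacher's students; positive values mean the students scored higher
# than their prior achievement predicted.
value_added = np.array([residuals[teacher_ids == t].mean()
                        for t in range(n_teachers)])

print("Value-added estimates:", np.round(value_added, 2))
print("Correlation with simulated effects:",
      round(float(np.corrcoef(value_added, true_effect)[0, 1]), 2))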
The observational rubrics we developed to measure instructional quality were organized around two broad dimensions: the degree to which the underlying mathematics was made visible for students, and the kind and amount of mathematical work students engaged in during the lesson. Both dimensions had been found in the literature, across different studies, to enhance student learning. We applied the instructional quality codes reliably to the entire set of lesson videos. All lesson videos were coded blind, meaning that none of the raters had any information on teachers' value-added scores.

We aggregated each teacher's instructional quality information across the five videotaped lessons and obtained a reliable estimate of overall instructional quality for each teacher. We also learned that rubrics that assess highly frequent events, such as the quality of teacher-student interactions, can be measured reliably based on only two classroom lessons (see the sketch at the end of this report). This finding is useful for the future development of instructional quality measures that aim to be practical and efficient.

We found that teachers' value-added scores, when estimated from multiple student cohorts, were positively and moderately related to our observational measure of instructional quality. In other words, on average, teachers with higher value-added scores had higher instructional quality, as measured by our video codes, than teachers with lower value-added scores. This relationship was somewhat weaker, but still statistically significant, when we used teachers' value-added scores estimated from a single cohort of students.

The results confirm that teaching practices that help students understand the underlying mathematics, and that allow them to engage in meaningful mathematical work, have an impact on student learning, and that teachers who are able to implement these strategies are more effective teachers. Our results also suggest that value-added measures can be tied to actual differences in instructional quality. While our results suggest that helping teachers develop the skills to implement high-leverage teaching strategies in their classrooms may produce higher student learning, it is less clear how the observed relationships between teacher value-added scores and instructional quality, especially when the scores are estimated from a single student cohort, should inform their use and weight in high-stakes accountability and teacher evaluation systems.
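The finding that averaging over five lessons yields a reliable teacher-level score, and that two lessons can suffice for frequently occurring events, can be illustrated with a standard variance-components projection (a Spearman-Brown-style calculation). This is a minimal sketch assuming a simple one-facet teachers-by-lessons design with synthetic ratings; the hierarchical linear models used in the study are more elaborate, and the variance figures below are assumptions chosen for illustration.

import numpy as np

# Synthetic ratings: 57 teachers observed over 5 lessons (illustrative only).
rng = np.random.default_rng(1)
n_teachers, n_lessons = 57, 5
teacher_quality = rng.normal(3.0, 0.5, n_teachers)            # stable component
ratings = teacher_quality[:, None] \
          + rng.normal(0.0, 0.7, (n_teachers, n_lessons))     # lesson-to-lesson noise

# Decompose variance into between-teacher signal and within-teacher noise.
var_lesson = ratings.var(axis=1, ddof=1).mean()               # noise variance
var_teacher = ratings.mean(axis=1).var(ddof=1) - var_lesson / n_lessons

# Reliability of a k-lesson average: signal / (signal + noise / k).
for k in (1, 2, 5):
    rel = var_teacher / (var_teacher + var_lesson / k)
    print(f"Projected reliability of a {k}-lesson average: {rel:.2f}")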

Budget Start: 2009-07-01
Budget End: 2014-09-30
Fiscal Year: 2009
Total Cost: $1,262,024
Name: University of Arizona
City: Tucson
State: AZ
Country: United States
Zip Code: 85721