There is widespread agreement that, for teachers to teach their students effectively, having extensive knowledge is important but not sufficient. To benefit instruction and student learning, teachers must be able to access that knowledge and use it flexibly in actual classroom situations and teaching tasks. Yet measures of teachers' usable knowledge remain scarce. We still know little about how the knowledge teachers acquire in teacher preparation courses and professional development becomes usable, how it develops over time, and how teachers draw on it in the process of teaching. To address these assessment needs, the project will develop a set of scalable, classroom-focused measures of usable mathematics teaching knowledge that are aligned with state standards. The new measures will extend the classroom video analysis approach, which assesses teachers' ability to analyze and respond to teaching episodes shown in short video clips of authentic classroom instruction, by aligning video clips and assessment tasks to standards. The new measures, which will be made available online, will be a valuable tool for researchers, policy makers, and school districts to monitor teacher knowledge over time and to gauge teacher preparedness for implementing state standards in mathematics. The measures will also provide new insights into usable knowledge and knowledge use and advance a much-needed theory of teacher knowledge. Finally, the project extends and refines a promising assessment methodology that can be adapted to future content frameworks or standards and can also be used for instrument development in other practice-based knowledge domains.

The Discovery Research K-12 program (DRK-12) seeks to significantly enhance the learning and teaching of science, technology, engineering and mathematics (STEM) by preK-12 students and teachers, through research and development of innovative resources, models and tools (RMTs).
Projects in the DRK-12 program build on fundamental research in STEM education and prior research and development efforts that provide theoretical and empirical justification for proposed projects. This project is also supported by NSF's EHR Core Research (ECR) program. The ECR program emphasizes fundamental STEM education research that generates foundational knowledge in the field.
The project will develop a scalable, classroom-focused measure of usable mathematics teaching knowledge that is aligned with state standards, the classroom video analysis measure (CVA-M), in three content areas: (a) fractions for grades 4 and 5; (b) ratio and proportion for grades 6 and 7; and (c) variables, expressions, and equations for grades 6 and 7. The project will examine the psychometric properties of the new items and scales, including the reliability of scores, and collect evidence on content, substantive, structural, and external aspects of validity to evaluate the overall construct validity of the CVA-M. The project builds on an innovative and promising assessment methodology that elicits teachers' usable knowledge by asking them to view and analyze video clips of authentic classroom instruction. Teachers analyze the teaching episodes shown in the video clips through different assessment tasks that reflect authentic teaching tasks, such as diagnosing student thinking, generating mathematically targeted teacher questions, or relating specific content and mathematical practices to the teaching episodes shown in the clips. To develop each of the three scales, video clips will be mapped to state-level content and mathematical practice standards; assessment tasks and rubrics will also be aligned with these standards. To create items, video clips will be combined with analysis prompts that call for a written answer, a multiple-choice selection, or a rating-scale response. To make the constructed-response items, which need to be scored by trained raters, easier to use at scale, computational approaches will be employed to develop classifiers that automate scoring. Using responses from large samples of teachers, the psychometric properties of the new CVA-M items and scales will be analyzed using factor analysis, classical test theory, and item response theory, and a series of validity investigations will be conducted.
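The automated-scoring step described above could, in its simplest form, work as a text classifier trained on rater-scored responses. The sketch below is a minimal, purely illustrative example using a naive Bayes bag-of-words classifier; the responses, rubric labels, and model choice are assumptions for illustration, not the project's actual scoring approach.

```python
from collections import Counter
import math

def tokenize(text):
    return text.lower().split()

class NaiveBayesScorer:
    """Multinomial naive Bayes over bag-of-words features."""

    def fit(self, responses, labels):
        self.labels = sorted(set(labels))
        self.prior = Counter(labels)
        self.word_counts = {lab: Counter() for lab in self.labels}
        self.total = {lab: 0 for lab in self.labels}
        self.vocab = set()
        for text, lab in zip(responses, labels):
            for tok in tokenize(text):
                self.word_counts[lab][tok] += 1
                self.total[lab] += 1
                self.vocab.add(tok)
        return self

    def predict(self, text):
        best_lab, best_lp = None, float("-inf")
        v = len(self.vocab)
        n = sum(self.prior.values())
        for lab in self.labels:
            lp = math.log(self.prior[lab] / n)
            for tok in tokenize(text):
                # Laplace smoothing so unseen words don't zero out a class.
                lp += math.log((self.word_counts[lab][tok] + 1)
                               / (self.total[lab] + v))
            if lp > best_lp:
                best_lab, best_lp = lab, lp
        return best_lab

# Hypothetical rater-scored responses to a clip-analysis prompt:
# 1 = identifies the student's misconception, 0 = does not.
train = [
    ("the student treats the fraction parts as whole numbers", 1),
    ("the student adds numerators and denominators separately", 1),
    ("the teacher should go faster", 0),
    ("the lesson was fine", 0),
]
scorer = NaiveBayesScorer().fit([t for t, _ in train], [y for _, y in train])
print(scorer.predict("the student adds the denominators"))  # → 1
```

In practice such a classifier would be trained on large samples of rater-scored constructed responses and evaluated for agreement with human raters before being used at scale.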
Teachers' scores on the new CVA-M scales will be compared to their scores on another measure of teacher knowledge, the Mathematics Knowledge for Teaching (MKT) instrument, and each scale's predictive validity will be explored by relating teachers' CVA-M scores to their students' learning, as measured by a pre-post quiz and by students' standardized test scores.
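One simple form of the predictive-validity check described above is a correlation between teachers' CVA-M scores and their students' mean pre-to-post gains. The sketch below illustrates that computation with a hand-rolled Pearson correlation; all numbers are invented for illustration and do not come from the project.

```python
import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical data: one CVA-M score per teacher, paired with the
# mean pre-to-post quiz gain of that teacher's students.
cva_scores = [12.0, 15.5, 9.0, 18.0, 14.0]
student_gains = [0.21, 0.30, 0.15, 0.34, 0.26]

r = pearson_r(cva_scores, student_gains)
print(round(r, 3))
```

A full predictive-validity analysis would of course control for student background and prior achievement (e.g., via multilevel regression) rather than rely on a raw correlation.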