This EAGER project proposes to review existing methods and to create a design framework for developing new and innovative approaches to evaluating teacher education programs.

The George Washington University School of Education and Human Development and the National Academy of Education will lead the effort, and the project will result in the following products: (1) a synthesis of existing research and experiential approaches to evaluating teacher education program quality and effectiveness; (2) an exploration of new and innovative approaches to evaluation in teacher education; and (3) a roadmap of issues and recommendations for new approaches. Teacher education currently sits at the forefront of the education policy agenda, and an EAGER award is an appropriate vehicle for exploring alternatives to current evaluation practices. Specific products will include commissioned papers, a synthesis report, policy briefs, and other materials for dissemination.

Teacher education is at a crossroads, with calls for increasing accountability and efforts to tie teachers' performance to student outcomes and to the institutions and programs where they were prepared. The synthesis and roadmap provided by this project will link traditional preparation programs with alternatives of various kinds and shed light on new ways to evaluate programs of all types.

Project Report

Graduate School of Education and Human Development of the George Washington University, in partnership with the National Academy of Education

Project Title: Evaluation of Teacher Education Programs: Toward a Framework for Innovation
NSF Award No.: 1153848

In recent years the improvement of teaching and teacher preparation has become a high political priority. Numerous approaches are currently used or are being proposed to evaluate teacher preparation programs (TPPs), particularly at a time when traditional programs are being called upon to prove their utility and when new teacher pathways are emerging with greater frequency. However, these evaluation mechanisms (e.g., national accreditation systems, state program approval, media and independent rating systems, and program self-studies) are works in progress, and the design of new and better methods necessitates a willingness to invest in exploration. Simply put, there is at present no obvious blueprint to guide the development of new methods, but rather the beginnings of an evidentiary base to support disciplined (and multi-disciplinary) exploration, coupled with a commitment to assessment, revision, and continued experimentation in the design of evaluation systems for improving teacher preparation. With generous support from the National Science Foundation, the George Washington University (prime sponsor) and the National Academy of Education (NAEd) conducted a study to examine existing methods of evaluating the quality of teacher preparation programs and to develop a framework for designing and implementing innovative evaluation systems. A steering committee composed of the following members was assembled: Michael Feuer (chair), Deborah Ball, Jeanne Burns, Robert Floden, Susan Fuhrman (ex officio), Lionel Howard, and Brian Rowan.
The steering committee organized two workshops (the first in 2012, the second in 2013) that brought together leaders with diverse expertise to deliberate and identify emerging issues in the current landscape of evaluating preparation programs. To inform the study process, the committee commissioned four papers that address various aspects of evaluation systems:

Recent Developments in STEM Education Relevant to the Qualities of Teacher Preparation Programs (Suzanne Wilson, University of Connecticut)

Protecting the Public: Ensuring Nursing Education Quality (Jean Johnson & Christine Pintz, School of Nursing, George Washington University)

Inspecting Initial Teacher Education in England – the Work of Ofsted (John Furlong, University of Oxford)

Variations in Teacher Preparation Evaluation Systems: International Perspectives (Maria Teresa Tatto, Joseph Krajcik, & James Pippin, Michigan State University)

A final report, Evaluation of Teacher Preparation Programs: Purposes, Methods, and Policy Options, provides (1) a clear overview of the current landscape of teacher preparation program evaluation and (2) an analysis of key issues in designing and implementing evaluation mechanisms. The report synthesizes existing knowledge about the purposes, contexts, and principles of evaluation systems, and presents the concept of mapping, i.e., linking characteristics of evaluation systems with various purposes and intended uses. Comparative information about how selected professions (e.g., nursing education) and other countries evaluate their pre-service education and training programs, as well as issues in evaluating science and mathematics teacher preparation, is interspersed throughout the report. It concludes with a decision framework that can be used by policymakers, evaluation practitioners, researchers, and administrators for designing, using, and interpreting evaluation mechanisms.

Decision Framework
The environment for research leading to policy reforms and practical alternatives can be risky. In addition to potential risks associated with the "political surround" of developing and using evaluation mechanisms, the actual use of evaluation metrics may create incentives for unintended and/or undesired behavior. The report presents a decision framework of questions that designers and users of TPP evaluation systems ought to address.[1] A coherent evaluation system that serves its intended purposes and leads to valid interpretations about program quality should account for the following set of questions:

What is the primary purpose of the teacher preparation program evaluation system?

Which aspects of teacher preparation matter most?

What sources of evidence will provide the most accurate and useful information about the aspects of teacher preparation that are of primary interest?

How will the measures be analyzed and combined to make a judgment about program quality?

What are the intended and potentially unintended consequences of the evaluation system for programs and for education more broadly?

How will transparency be achieved? What steps will be taken to help users understand how to interpret the results and use them appropriately?

How will the evaluation system be monitored?

[1] Feuer, M.J., Floden, R.E., Chudowsky, N., and Ahn, J. (2013). Evaluation of teacher preparation programs: Purposes, methods, and policy options. Washington, DC: National Academy of Education. Available at: http://naeducation.org/cs/groups/naedsite/documents/webpage/naed_085581.pdf

Budget Start: 2011-10-01
Budget End: 2014-09-30
Fiscal Year: 2011
Total Cost: $299,996
Name: George Washington University
City: Washington
State: DC
Country: United States
Zip Code: 20052