The primary goal of this project is to disseminate nationally an innovative assessment instrument and to encourage its use for improving students' critical thinking skills. The CAT instrument (Critical thinking Assessment Test) was refined with previous NSF support in collaboration with six other institutions across the country (University of Texas, University of Colorado, University of Washington, University of Hawaii, University of Southern Maine, and Howard University). This support allowed refinement of the CAT instrument so that it has (1) high face validity when evaluated by a broad spectrum of faculty across the country in STEM and non-STEM disciplines; (2) good criterion validity when compared to other instruments that measure critical thinking and intellectual performance; (3) good construct validity informed by expert input from the learning sciences; (4) good reliability; and (5) demonstrated cultural fairness. The current project focuses on disseminating this instrument to institutions across the country. Project activities address three interrelated goals: (1) designing and conducting CAT workshops to train trainers from universities and community colleges across the country; (2) expanding institutional use of the CAT instrument for assessment; and (3) collecting national user norms.

In an increasingly technological and information-driven society, the ability to think critically has become a cornerstone of both workplace development and effective educational programs. Critical thinking is central to the National Science Standards (Forawi, 2001) and the National Educational Technology Standards (International Society for Technology in Education, 2003). According to Derek Bok (2006), president of Harvard University, over ninety percent of faculty across the nation consider critical thinking the most important goal of an undergraduate education. Despite the central importance of critical thinking in the workplace and in education, existing assessment tools are plagued by problems related to validity, reliability, and cultural fairness (U.S. Department of Education, 2000). According to Bransford et al. (2000), a challenge for the learning sciences is to provide a theoretical framework that links assessment practices to learning theory. One feature of the CAT instrument that makes it particularly well suited for quality improvement initiatives is that it is scored by an institution's own faculty. Using faculty scorers increases faculty understanding of student weaknesses and, in turn, their willingness to pursue program improvements.

The intellectual merit of this project lies in providing national access to an innovative assessment tool that is grounded in contemporary theory in the learning sciences and that has high face validity for a broad spectrum of faculty across the country in many different disciplines. The unique characteristics of the instrument greatly increase faculty buy-in and provide strong motivation to link assessment activities to educational improvement initiatives.

The broader impacts of this project include the fact that the CAT instrument does not exhibit racial, ethnic, or gender bias; thus, improvement initiatives directed at the underlying skills it assesses will benefit women and minorities.
Dissemination of the instrument is being achieved through broad partnerships with all types of educational institutions, including community colleges and minority institutions, as well as with accrediting agencies.