Diagnostic classification models are an important statistical tool in cognitive diagnosis and are employed in a number of disciplines, including educational assessment and clinical psychology. This project focuses on the statistical analysis of cognitive assessments, addressing fundamental issues of statistical inference and experimental design and aiming at both theoretical development and application. First, the project will focus on statistical inference for the item-attribute relationship, which in this context is encoded by the so-called Q-matrix; topics include point estimation of the Q-matrix, hypothesis testing, dimension reduction, and model diagnosis. Second, the project will focus on the individualized adaptive selection of items, so as to measure attribute profiles more accurately with fewer items. In particular, a criterion based on large deviations theory is first proposed to measure the efficiency of an item-selection rule; adaptive item-selection schemes are then proposed to approach the optimal design.
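To make the objects above concrete, the sketch below illustrates a Q-matrix, a simple diagnostic classification model, and a greedy adaptive item-selection rule. All specifics here are standard textbook constructions chosen for illustration, not the estimators or optimality criteria developed in this project: the response model is the DINA model with hypothetical slip and guess parameters, and the selection index is an average Kullback-Leibler divergence between the current profile estimate and rival profiles.

```python
from itertools import product
from math import log

# Q-matrix: rows index items, columns index attributes; Q[j][k] = 1
# means item j requires mastery of attribute k. Entries are illustrative.
Q = [
    [1, 0, 0],
    [0, 1, 0],
    [1, 1, 0],
    [0, 0, 1],
    [1, 0, 1],
]

SLIP, GUESS = 0.1, 0.2  # hypothetical DINA slip/guess parameters


def correct_prob(alpha, q):
    """P(correct answer) under the DINA model: 1 - SLIP if the examinee's
    attribute profile alpha masters every attribute the item requires,
    else GUESS."""
    masters_all = all(a >= r for a, r in zip(alpha, q))
    return 1 - SLIP if masters_all else GUESS


def bernoulli_kl(p, q):
    """KL divergence between Bernoulli(p) and Bernoulli(q)."""
    return p * log(p / q) + (1 - p) * log((1 - p) / (1 - q))


def select_item(alpha_hat, Q, administered):
    """Greedy adaptive rule: among items not yet administered, pick the
    one with the largest average KL divergence between the response
    distribution at the current profile estimate and those at all rival
    profiles -- i.e., the item expected to discriminate best."""
    n_attr = len(Q[0])
    rivals = [a for a in product((0, 1), repeat=n_attr)
              if list(a) != list(alpha_hat)]
    best_j, best_val = None, float("-inf")
    for j, q in enumerate(Q):
        if j in administered:
            continue
        p0 = correct_prob(alpha_hat, q)
        val = sum(bernoulli_kl(p0, correct_prob(a, q))
                  for a in rivals) / len(rivals)
        if val > best_val:
            best_j, best_val = j, val
    return best_j


# Example: next item for an examinee currently estimated to master
# attributes 1 and 3 but not attribute 2.
print(select_item((1, 0, 1), Q, administered=set()))  # prints 4
```

The rule selects item 4, whose requirement vector matches the estimated profile exactly, so its response distribution differs most, on average, from those of rival profiles. In an operational design this greedy step would be repeated after each response, with the profile estimate updated along the way.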
This research is motivated by applications in educational assessment, psychiatric evaluation, and other disciplines that use classification models. In educational applications, the research will help to obtain a data-driven calibration of the skill requirements of exam problems and to validate subjective beliefs about such requirements, leading to more accurate assessment of students' knowledge status and skill mastery. In psychiatric assessment, this study will help to improve evidence-based diagnosis by more accurately identifying the symptom-disorder relationship. Adaptive item selection helps by shortening exams (in educational testing) and interviews (in psychiatric evaluation) while maintaining assessment and diagnostic accuracy. Project results have the potential to positively impact these and other areas of study.