This research project will develop statistical methods to monitor and control the quality of low-stakes assessment and questionnaire data. Many tests and surveys administered by researchers in the social and behavioral sciences are perceived as low stakes by participants, yet researchers rely heavily on such data to address their research questions. Data from low-stakes assessments may contain a substantial proportion of inattentive responses, arising from participant carelessness or fatigue, or sometimes from malicious survey-bots. The increasing popularity of online platforms for participant recruitment and data collection further exacerbates the problem. The investigators will develop statistical methods to detect inattentive responses, benchmark the performance of these methods, and identify the best method for different types of inattentiveness. Guidelines, protocols, and user-friendly software for researchers will be created as part of the project. Students participating in the project will receive training and research experience in statistical quality control. The methods and tools to be developed will impact multiple disciplines, including psychology, education, marketing, and public health.

This project will focus on the detection of inattentive responses in multidimensional assessments and questionnaires that use polytomous items, the item type most widely used in low-stakes contexts. Statistical methods will be developed based on the multidimensional graded response model, an item response theory (IRT) model for multidimensional polytomous response data. The methods to be developed include IRT-based person-fit statistics and quickest change detection procedures, such as change point analysis and cumulative sum (CUSUM) control charts. When many aberrant response patterns exist in the data, outlying cases become difficult to detect, a phenomenon known as the "masking effect." To counteract this effect, robust versions of these methods will be developed that iteratively down-weight outlying cases. The performance of the methods will be evaluated through extensive Monte Carlo simulations that mimic a variety of real-life scenarios.
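To make the combination of these ingredients concrete, the sketch below applies a one-sided CUSUM chart to a single examinee's standardized item log-likelihood contributions under Samejima's graded response model. It is a minimal, unidimensional illustration only, not the project's actual method: the function names (grm_category_probs, cusum_flag), the standardization used for the chart statistic, and the reference value k and decision threshold h are illustrative assumptions.

```python
import numpy as np

def grm_category_probs(theta, a, b):
    """Category probabilities under Samejima's graded response model.

    theta : latent trait (scalar; unidimensional for simplicity)
    a     : item discrimination (scalar)
    b     : ordered category thresholds, shape (m,)
    Returns m + 1 category probabilities that sum to 1.
    """
    # Cumulative probabilities P(X >= k), padded with the boundary
    # values P(X >= 0) = 1 and P(X >= m + 1) = 0.
    p_star = 1.0 / (1.0 + np.exp(-a * (theta - np.asarray(b, dtype=float))))
    p_star = np.concatenate(([1.0], p_star, [0.0]))
    return p_star[:-1] - p_star[1:]

def cusum_flag(responses, theta, items, k=0.5, h=3.0):
    """One-sided CUSUM over standardized item log-likelihood contributions.

    items is a list of (a, b) parameter pairs, one per administered item.
    Returns the index of the first item at which the chart signals a
    possible onset of inattentive responding, or None if it never does.
    k (reference value) and h (threshold) are illustrative choices.
    """
    c = 0.0
    for t, (x, (a, b)) in enumerate(zip(responses, items)):
        probs = grm_category_probs(theta, a, b)
        log_p = np.log(probs)
        mean = np.sum(probs * log_p)                   # E[log P(X)] under the model
        sd = np.sqrt(np.sum(probs * log_p**2) - mean**2)
        z = (mean - log_p[x]) / sd                     # large when the response is unlikely
        c = max(0.0, c + z - k)                        # one-sided CUSUM recursion
        if c > h:
            return t
    return None

# Toy usage: five identical 5-category items; for theta = 0 the last two
# responses fall in an extreme category, so the chart climbs and signals.
item = (1.5, np.array([-1.5, -0.5, 0.5, 1.5]))
print(cusum_flag([2, 2, 1, 4, 4], theta=0.0, items=[item] * 5, h=2.5))  # -> 4
```

Because the statistic resets at zero, the chart responds to a sustained run of unlikely responses rather than a single lapse, which matches the quickest-change-detection framing; a robust variant in the spirit described above would additionally down-weight flagged cases when estimating theta and the item parameters.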

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.

Agency: National Science Foundation (NSF)
Institute: Division of Social and Economic Sciences (SES)
Type: Standard Grant
Application #: 1853166
Program Officer: Cheryl Eavey
Budget Start: 2019-09-01
Budget End: 2022-08-31
Fiscal Year: 2018
Total Cost: $330,000
Institution: University of Notre Dame
City: Notre Dame
State: IN
Country: United States
Zip Code: 46556