The long-term goal of this program of research is to improve scientific inference in psychological science. The topic is investigated in the context of computational models of cognition, which can be extremely difficult to distinguish experimentally because of their complexity and the extent to which they mimic each other. Statistical methods such as goodness-of-fit measures and the Akaike Information Criterion have been the dominant means of model evaluation and selection, and they are applied after data have been collected in an experiment. The current project explores a new approach to improving inference by developing complementary statistical methods that are applied on the front end of an experiment, while the experiment is being designed. In this approach, dubbed adaptive design optimization (ADO), an experiment is divided into a series of mini-experiments, and the design of each mini-experiment is updated based on performance in the preceding one. The choice of design values is dictated by a search algorithm that continually pressures the models of interest to fit increasingly challenging data points until one model emerges as superior. The adaptive nature of the methodology keeps the design optimal throughout the testing session, thereby maximizing the informativeness of the experimental results. Furthermore, the focus on optimizing the design simultaneously makes the experiment highly efficient (e.g., fewer trials and participants). The three specific aims of the proposal are to (1) develop ADO so that it is applicable to a broad range of problems in the discipline (e.g., various experimental designs, different modeling goals); (2) improve the ADO algorithm so that it can be used in real-time experiments; and (3) develop web-based resources that enable researchers to learn about and take advantage of the methodology. Achieving these three aims is intended to provide researchers with a new technology that can accelerate scientific discovery.
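The ADO loop described above can be sketched with a toy model-discrimination problem. Everything in the sketch is illustrative rather than drawn from the proposal: the two candidate models, the design grid, and the trial count are assumptions, and the design criterion used here, the mutual information between the model indicator and the next outcome, is one common utility in the adaptive design optimization literature.

```python
import math
import random

random.seed(0)

# Two hypothetical candidate models of a binary response (illustrative,
# not from the proposal): each maps a design value x to P(success).
def model_a(x):
    # Steep logistic psychometric curve
    return 1.0 / (1.0 + math.exp(-6.0 * (x - 0.5)))

def model_b(x):
    # Shallow linear curve, clipped away from 0 and 1
    return min(0.95, max(0.05, 0.3 + 0.4 * x))

models = [model_a, model_b]
prior = [0.5, 0.5]                         # posterior model probabilities
designs = [i / 10.0 for i in range(11)]    # candidate design (stimulus) values

def expected_utility(x, prior):
    """Mutual information between the model indicator and the outcome at x."""
    p_succ = sum(p * m(x) for p, m in zip(prior, models))  # marginal P(success)
    u = 0.0
    for p, m in zip(prior, models):
        for outcome_p, marg in ((m(x), p_succ), (1.0 - m(x), 1.0 - p_succ)):
            if p > 0.0 and outcome_p > 0.0:
                u += p * outcome_p * math.log(outcome_p / marg)
    return u

def update(prior, x, success):
    """Bayesian update of the model posterior after observing one outcome."""
    like = [m(x) if success else 1.0 - m(x) for m in models]
    post = [p * l for p, l in zip(prior, like)]
    z = sum(post)
    return [p / z for p in post]

# A short adaptive session: each "trial" picks the currently most
# informative design, observes a simulated outcome (model_a generates
# the data), and updates the posterior over models.
for trial in range(40):
    x = max(designs, key=lambda d: expected_utility(d, prior))
    success = random.random() < model_a(x)
    prior = update(prior, x, success)

print(prior)  # posterior probabilities for [model_a, model_b]
```

Note that at designs where the two models predict identical response probabilities the mutual information is zero, so the algorithm is driven toward designs where their predictions diverge, which is the sense in which ADO "pressures" the models with challenging data points. With data generated from one of the models, the posterior will typically concentrate on it.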
Experimentation is a backbone of public health research. The research methodology to be developed in this application has the potential to simultaneously increase the efficiency of experimentation (thereby reducing the cost of doing science) and the informativeness of what is learned (thereby accelerating the advancement of science).
Aranovich, Gabriel J; Cavagnaro, Daniel R; Pitt, Mark A et al. (2017) A model-based analysis of decision making under risk in obsessive-compulsive and hoarding disorders. J Psychiatr Res 90:126-132
Gu, Hairong; Kim, Woojae; Hou, Fang et al. (2016) A hierarchical Bayesian approach to adaptive vision testing: A case study with the contrast sensitivity function. J Vis 16:15
Kim, Woojae; Pitt, Mark A; Lu, Zhong-Lin et al. (2016) Planning beyond the next trial in adaptive experiments: A dynamic programming approach. Cogn Sci :
Cavagnaro, Daniel R; Aranovich, Gabriel J; McClure, Samuel M et al. (2016) On the functional form of temporal discounting: An optimized adaptive test. J Risk Uncertain 52:233-254
Hou, Fang; Lesmes, Luis Andres; Kim, Woojae et al. (2016) Evaluating the performance of the quick CSF method in detecting contrast sensitivity function changes. J Vis 16:18
Kim, Woojae; Pitt, Mark A; Lu, Zhong-Lin et al. (2014) A hierarchical adaptive approach to optimal experimental design. Neural Comput 26:2465-92
Montenegro, Maximiliano; Myung, Jay I; Pitt, Mark A (2014) Analytical expressions for the REM model of recognition memory. J Math Psychol 60:23-28
Pitt, Mark A; Tang, Yun (2013) What should be the data sharing policy of cognitive science? Top Cogn Sci 5:214-21
Myung, Jay I; Cavagnaro, Daniel R; Pitt, Mark A (2013) A tutorial on adaptive design optimization. J Math Psychol 57:53-67
Kim, Woojae; Pitt, Mark A; Myung, Jay I (2013) How do PDP models learn quasiregularity? Psychol Rev 120:903-16
Showing the most recent 10 out of 13 publications