Many of our everyday tasks, such as spotting a friend in a crowd or picking a bottle of soda from the refrigerator, require us to perform visual search. Because visual search is so ubiquitous, its study promises to shed light on the fundamental processes that control our visual attention so efficiently in natural tasks. To quantitatively assess search behavior, previous research using simple, artificial displays has employed eye-movement recording to analyze saccadic selectivity, that is, the bias of saccadic endpoints ("landing points" of eye movements) towards display items that share certain features with the search target. Recently, saccadic selectivity in natural, complex displays has been examined as well (Pomplun, 2006), giving a first insight into eye-movement control as it is performed during everyday tasks.
The aim of this project is to devise, implement, and evaluate a general, computational model of saccadic selectivity in visual search tasks. Due to its quantitative nature, absence of freely adjustable parameters, and support from empirical research results, the Area Activation Model (Pomplun, Shen, & Reingold, 2003) is a promising starting point for developing such a model. Its basic assumption is that eye movements in visual search tasks tend to target display areas that provide a maximum amount of task-relevant information for processing. To advance this model towards a general model of saccadic selectivity in visual search, additional eye-movement studies are performed to provide detailed information on the influence of color and target size on saccadic selectivity. Based on the data obtained, various aspects of the influence of display and target features on eye-movement patterns are quantified. These data are used to devise the advanced version of the Area Activation Model. The crucial improvements include the elimination of required empirical a priori information, the consideration of bottom-up activation, and the applicability of the model to search displays beyond artificial images with discrete items and features. Ideally, the resulting model will be straightforward, consistent with natural principles, and free of freely adjustable model parameters, qualifying it as a streamlined and general approach to eye-movement control in visual search. Such a model will be important for understanding the functionality of the visual system and will also have significant impact on the fields of computer vision, human-computer interaction, and cognitive modeling. The project will deepen our understanding of visual attention in general and the visual factors underlying saccade programming in particular. Such understanding will advance the possibilities for surgical and therapeutic treatment of visual illnesses.
Moreover, the results of the study can be directly applied to improve current human-computer interfaces for computer-assisted surgery and x-ray image analysis.
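The core assumption of the Area Activation Model described above can be illustrated with a short sketch: each display item contributes activation weighted by its feature similarity to the target, spread over the surrounding area, and predicted fixation locations are peaks of the summed activation map. This is a hypothetical, simplified illustration of the general idea, not the authors' implementation; the function name, the Gaussian spread, and the `sigma` parameter are assumptions introduced here.

```python
import numpy as np

def area_activation(item_xy, similarity, shape=(100, 100), sigma=8.0):
    """Illustrative sketch (not the published model): sum, over all
    display items, a Gaussian activation patch centered on each item
    and weighted by that item's similarity to the search target.
    Predicted fixation locations are maxima of the resulting map."""
    ys, xs = np.mgrid[0:shape[0], 0:shape[1]]
    act = np.zeros(shape)
    for (x, y), w in zip(item_xy, similarity):
        act += w * np.exp(-((xs - x) ** 2 + (ys - y) ** 2) / (2 * sigma ** 2))
    return act

# Two nearby target-similar items yield a single activation peak
# between them, i.e. a predicted fixation at the area's "center of
# gravity" rather than on either item individually.
act = area_activation([(30, 50), (40, 50)], [1.0, 1.0])
peak_y, peak_x = np.unravel_index(act.argmax(), act.shape)
```

In this toy display, the two items at x = 30 and x = 40 (closer together than twice the assumed spread) produce one shared peak at the midpoint, which captures the model's characteristic prediction that saccades target informative areas rather than individual items.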

Agency
National Institutes of Health (NIH)
Institute
National Eye Institute (NEI)
Type
Academic Research Enhancement Awards (AREA) (R15)
Project #
1R15EY017988-01A1
Application #
7305151
Study Section
Cognition and Perception Study Section (CP)
Program Officer
Oberdorfer, Michael
Project Start
2007-08-01
Project End
2009-07-31
Budget Start
2007-08-01
Budget End
2009-07-31
Support Year
1
Fiscal Year
2007
Total Cost
$209,463
Indirect Cost
Name
University of Massachusetts Boston
Department
Biostatistics & Other Math Sci
Type
Schools of Arts and Sciences
DUNS #
808008122
City
Boston
State
MA
Country
United States
Zip Code
02125
Pomplun, Marc; Garaas, Tyler W; Carrasco, Marisa (2013) The effects of task difficulty on visual search strategy in virtual 3D displays. J Vis 13:
Hwang, Alex D; Wang, Hsueh-Cheng; Pomplun, Marc (2011) Semantic guidance of eye movements in real-world scenes. Vision Res 51:1192-205
Spering, Miriam; Pomplun, Marc; Carrasco, Marisa (2011) Tracking without perceiving: a dissociation between eye movements and motion perception. Psychol Sci 22:216-25
Hwang, Alex D; Higgins, Emily C; Pomplun, Marc (2009) A model of top-down attentional control during visual search in complex scenes. J Vis 9:25.1-18