Our visual world consists not only of low-level features such as color or contrast, but also of high-level features such as the meaning of objects and the semantic relations among them. While low-level features in real-world scenes have been shown to guide eye movements during scene inspection and search, the influence of semantic similarity among scene objects on eye movements in such situations has not been investigated. The proposed project will study the guidance of eye movements by semantic similarity between objects during real-world scene inspection and search, using a novel methodology developed in preliminary studies. By selecting scenes from the LabelMe object-annotated image database and applying Latent Semantic Analysis (LSA) to the object labels, we generated semantic saliency maps of real-world scenes based on the semantic similarity of scene objects to the currently fixated object or the search target. An ROC analysis of these maps as predictors of subjects' gaze transitions between objects during scene inspection revealed a preference for transitions to objects that were semantically similar to the currently inspected one (transitional semantic guidance). Furthermore, during the course of a scene search, subjects' eye movements were progressively guided toward objects that were semantically similar to the search target (target-induced semantic guidance). These preliminary findings demonstrate substantial semantic guidance of eye movements in real-world scenes and show its importance for understanding real-world attentional control. The proposed project will build on these results to establish a new field of research and a broader model of attentional control in real-world scenes. First, it will investigate two potential factors that control semantic guidance: the observer's individual semantic space (Study 1) and the semantic consistency of the visual scene (Study 2).
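The core of the LSA step described above can be sketched as follows. This is a minimal illustration only: the object labels, the co-occurrence counts, and the number of retained dimensions are toy values chosen for the example, not the project's actual corpus or parameters.

```python
import numpy as np

def lsa_similarity(counts, k):
    """Reduce a label-by-context count matrix with truncated SVD (the core
    of LSA) and return cosine similarities between the label vectors."""
    u, s, _ = np.linalg.svd(counts, full_matrices=False)
    vecs = u[:, :k] * s[:k]                      # each row: a label's LSA vector
    norms = np.linalg.norm(vecs, axis=1, keepdims=True)
    unit = vecs / np.clip(norms, 1e-12, None)    # unit-length rows
    return unit @ unit.T                         # cosine-similarity matrix

# Hypothetical label-by-context co-occurrence counts:
# rows = object labels, columns = text contexts they appeared in.
labels = ["mug", "kettle", "keyboard", "monitor"]
counts = np.array([
    [4.0, 3.0, 0.0, 0.0],   # mug
    [3.0, 4.0, 0.0, 1.0],   # kettle
    [0.0, 0.0, 5.0, 4.0],   # keyboard
    [0.0, 1.0, 4.0, 5.0],   # monitor
])

sim = lsa_similarity(counts, k=2)

def semantic_salience(fixated_idx, sim):
    """Semantic salience of every object relative to the fixated object
    (or search target): its LSA cosine similarity to that object."""
    return sim[fixated_idx]

salience = semantic_salience(labels.index("mug"), sim)
```

In such a map, "kettle" receives higher semantic salience than "keyboard" when "mug" is fixated, which is the kind of graded, object-level prediction the ROC analysis evaluates against observed gaze transitions.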
Second, the project will examine the ecological function of two aspects of semantic guidance: the finding of transitional semantic guidance during scene inspection (Study 3) and the observation of a gradual increase in target-induced semantic guidance during the course of a search process (Study 4). The results of Studies 1 to 4 will direct the course of future behavioral and neurophysiological investigations of semantic guidance. Moreover, the initial understanding of semantic guidance developed in these studies will be used in Study 5 to devise a computational model of semantic guidance and combine it with traditional models of attentional control, which are limited to the influence of low-level visual features. The resulting two-level attentional control model will advance the field by helping researchers form a more comprehensive view of visual attention in real-world scenes.
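One simple way to picture the two-level model envisioned for Study 5 is as a weighted combination of a normalized low-level saliency map and a semantic saliency map. The combination rule, the weight, and the example values below are assumptions for illustration only, not the proposed model's actual form:

```python
import numpy as np

def combine_maps(low_level, semantic, w=0.5):
    """Hypothetical two-level activation map: a convex combination of a
    normalized low-level saliency map and a semantic saliency map.
    w is the (assumed) weight given to the semantic level."""
    def norm(m):
        # Rescale a map to [0, 1] so the two levels are comparable.
        m = m - m.min()
        rng = m.max()
        return m / rng if rng > 0 else np.zeros_like(m)
    return (1 - w) * norm(low_level) + w * norm(semantic)

# Toy 2x2 maps (illustrative values only):
low = np.array([[0.1, 0.9],
                [0.2, 0.3]])    # e.g., contrast-based salience
sem = np.array([[0.8, 0.1],
                [0.7, 0.2]])    # e.g., similarity to the search target

combined = combine_maps(low, sem, w=0.6)
peak = np.unravel_index(np.argmax(combined), combined.shape)
```

With the semantic level weighted more heavily, the predicted next fixation (the peak of the combined map) shifts toward the semantically salient region rather than the low-level one, which is the kind of trade-off such a two-level model would let researchers quantify.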
The proposed work will examine the influence of high-level, semantic information in everyday visual scenes on eye movements and visual attention. While previous models of visual attention focused on low-level visual features such as color or contrast, the current project aims to obtain a more comprehensive understanding of attentional control in real-world situations. Such knowledge would be useful, for example, for diagnosing and studying various attention disorders or semantic impairment in Alzheimer's disease.