We are studying the integration of multiple, parallel sources of information in the human visual system. Early stages of visual processing consist of a set of semi-autonomous modules processing different basic features of the input in parallel across the visual field (e.g. color, orientation, size, motion). Later stages allow for more sophisticated processing, but only in a restricted region of the field at any moment in time (serial processing). In this proposal, the integration of information from multiple parallel modules is studied within the visual search paradigm. Visual search tasks involve the search for a target item among distractor items (e.g. find the red spot among spots of other colors). Of particular interest are conjunction searches, where the target is defined by two or more simple features. In these searches, our data show that parallel modules are able to guide subsequent processing. For example, in a search for a red vertical item among green vertical and red horizontal items, a module processing color in parallel across the visual field can guide attention toward red items while an orientation module can guide attention toward vertical items. The combination of these two sources of guidance leads to an efficient search for a conjunction of color and orientation even though no single module processes this conjunction in parallel. This concept of parallel guidance of serial processing is at the heart of our "Guided Search" model (Wolfe, Cave, and Franzel, 1989; Cave and Wolfe, in press).
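The guidance scheme described above can be sketched computationally: each parallel feature module votes for items matching one target feature, the votes are summed into a single guidance map, and serial attention visits items in order of decreasing activation. The following toy simulation is an illustrative sketch of that idea only; the function names, the equal module weights, and the noiseless activations are assumptions for clarity, not parameters of the actual Guided Search model.

```python
import random

def guidance(item, target_features):
    """Summed top-down signal from independent feature modules.

    Each module (color, orientation, ...) contributes 1.0 when the item
    matches the target's value on that feature; weights are illustrative.
    """
    return sum(1.0 for feature, wanted in target_features.items()
               if item[feature] == wanted)

def search(items, target_features):
    """Serial attention visits items in order of decreasing guidance.

    Returns the number of items attended before the target is found,
    or None if the target is absent.
    """
    ranked = sorted(items, key=lambda it: guidance(it, target_features),
                    reverse=True)
    for visits, item in enumerate(ranked, start=1):
        if all(item[f] == v for f, v in target_features.items()):
            return visits
    return None

# Conjunction search from the text: a red vertical target among
# green vertical and red horizontal distractors.
items = ([{"color": "green", "orientation": "vertical"}] * 10
         + [{"color": "red", "orientation": "horizontal"}] * 10
         + [{"color": "red", "orientation": "vertical"}])   # the target
random.shuffle(items)

# Every distractor matches exactly one module (score 1.0); only the
# target matches both (score 2.0), so it is attended first even though
# no single module computes the conjunction.
print(search(items, {"color": "red", "orientation": "vertical"}))  # -> 1
```

The key property the sketch captures is that guidance makes conjunction search efficient: the target is ranked above all distractors by the combined map, so search cost stays flat as distractors are added, whereas an unguided (random-order) scan would grow with display size.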
Our aims for the next grant period are: 1) to study guidance using a new method that allows some tracking of the movements of attention during visual search, 2) to examine the role of size in visual search, 3) to investigate the division of complex scenes into the "items" used in visual search, and 4) to determine whether a single parallel module can be queried about the presence of more than one attribute at one time (e.g. can the color module be asked simultaneously about "red" and "green" items?).