The human visual system can solve the complex task of detecting objects in natural scenes within a fraction of a second. Computational simulations along with EEG and intracranial recording studies have indicated that such "rapid" object recognition can be accomplished based on a single pass through the visual hierarchy from primary visual cortex to task circuits in prefrontal cortex, in about 150-180 ms. Within this computational model, it is generally assumed that there is a progression from relatively simple features such as edges at the first cortical stages, to combinations of these simple features at intermediate levels, to "objects" at the top of the system. However, this "Standard Model" was recently challenged by behavioral demonstrations that reliable saccades to images containing animals were initiated as early as 120-130 ms after image onset, with even faster saccades to faces - within 100 ms. Given that saccadic programming and execution presumably need at least 20 ms, the underlying visual processing must have completed within 80-100 ms. These ultra-rapid detection times thus pose major problems for the current "Standard Model" of visual processing. The proposed project aims to test the hypothesis that the visual system can increase its processing speed on particular tasks by basing task-relevant decisions on signals that originate from intermediate processing levels, rather than requiring that stimuli be processed by the entire visual hierarchy. This hypothesis will be tested using a tightly integrated multidisciplinary approach consisting of behavioral studies using eye tracking to determine the capabilities of human ultra-rapid object detection, EEG and fMRI studies to determine when and where in the brain object-selective responses occur, and computational modeling studies to determine whether such multilevel object mechanisms are plausible and can account for human performance levels.
Instead of the classic hierarchical model, in which objects can only be coded at the very top of the system, this project will show how "objects" can be detected by neurons located in early visual areas - especially when those objects are behaviorally very important and need to be localized accurately - with fundamental implications for our understanding of the role of early and intermediate visual areas in object detection.
The visual system's ability to rapidly localize complex objects is foundational to daily life. It is currently thought that objects can only be coded at the very top of the visual system. In contrast, this project aims to show how behaviorally important objects can be detected by neurons located in early visual brain areas, potentially rewriting the book on how the brain detects objects and how early brain areas are involved in complex visual functions. Project results will be leveraged to help build improved visual aids for the blind.