The goal of this proposal is to arrive at an accurate description of the multiple roles that visual areas in the brain play when we are looking for a specific visual object. Locating a specific object requires that we 1) activate a remembered, neural representation of our sought target, 2) develop a neural representation of each item we encounter as we search, and 3) compare these two representations to determine whether the object currently in view matches the object that we seek. We know that the process of remembering an object activates neurons in classically defined 'visual' areas of the brain, suggesting that these visual areas may play a role beyond statically processing the visual scene when searching for an object. This notion is supported by other lines of evidence, including our own preliminary data.
In Specific Aim 1, we propose to characterize this role by measuring the degree to which the visual representation of a currently viewed stimulus also depends on the identity of a sought target at different levels of visual processing (i.e., areas V4 and IT). Upon finding such dependencies, we will seek to determine what they accomplish. For example, the visual system might adjust its detectors to increase the discriminability of a target. Alternatively, the responses of these visual neurons may represent the outcome of a comparison between the target and the scene currently held in view.
In Specific Aim 2, we are motivated by the notion that real-world object search involves finding an object across changes in position, size, background and pose, and we propose to characterize the role of the visual system in searching for a target across a subset of these identity-preserving object transformations. Finally, in Specific Aim 3, we propose to characterize the activity of visual neurons when a visual stimulus is held in memory but no visual stimulus is physically present ("persistent activity") to better understand the neural signature of the remembered and sought target in the visual system.
To understand how visual information is processed in the brain, it is fundamental to understand how the external world is represented by neural signals in visual cortex and how memories and goals influence and interact with that representation. This proposal seeks to understand the fundamental principles by which the visual system transforms the neural representation of images when looking for a specific visual object. Understanding the neural coding of objects is critical to a deep understanding of human visual perception and memory, and is needed to meaningfully repair disruptions of these brain processes or to create prosthetics that can stand in for the disrupted processes.