It is now possible to design computers that perform millions of calculations per second, carry out useful tasks, and recognize objects. Despite these advances, however, computers still fall short of the brain in the domain of flexibility. One of the remarkable aspects of the brain is that it interprets stimuli and organizes its actions in a highly situation-dependent and flexible manner. With regard to visual stimuli, the brain is able to learn the structure and significance of a large number of stimulus categories. For some categories, such as faces, its performance is utterly remarkable: we can readily discriminate among and recognize thousands of different individuals based on very subtle differences in facial components and their geometric configuration. This is all the more impressive given that each time we see a given face, its image on our retina differs from the last time. Indeed, two different individuals seen at the same distance and under the same lighting conditions may cast more similar retinal images, at least at a coarse level, than the same individual seen twice under different conditions. Nonetheless, we are able to fluidly and effortlessly recognize people, objects, landmarks, and scenes at a single glance. How does this ability come about? One answer to this question relates to the manner in which complex visual stimuli are encoded in the brain. This topic has been a central focus of our research.

In the past year, we have published papers related to the neural representation of stimuli in the object-encoding regions V4 and TE of the visual cortex. We have previously shown that individual faces are systematically encoded based on their distinctiveness relative to an average face, so-called norm-based encoding. In other words, the brain encodes a given face according to how its structure differs from that of a mean, prototypical face. We first provided evidence for this encoding scheme by conducting human behavioral experiments involving visual adaptation. In those experiments, the presentation of one face for a few seconds altered the way a subsequently presented face was perceived, and the misperceptions closely matched the predictions of norm-based encoding. This evidence was strengthened by more recent neurophysiological recordings in nonhuman primates showing that neurons in the inferotemporal cortex adjust their firing rates according to how much a face differs from the average of many faces. Both lines of research point to the conclusion that the brain encodes face identity systematically and relative to a prototypical average.

A second important feature of face perception is the ability to learn and remember new faces, a process that undoubtedly involves changes in the brain. Unlike some skills (e.g., language acquisition), our capacity to learn new faces remains strong into adulthood. This implies that the neural machinery underlying face recognition remains, in a sense, plastic. How does experience modify neural responses? During the past year we have begun to approach this problem by monitoring the tuning functions of individual neurons over periods of days and weeks. While recording from single cells is routine, monitoring the same cells for extended periods poses enormous challenges. We have overcome these challenges by developing, with the help of an outside collaborator, a novel inertialess microelectrode bundle array, which maintains close proximity to individual neurons by moving with the small movements of the brain.
The advantage of this approach is that, by recording the responses of isolated neurons over many days, the effects of visual learning on neural selectivity can be assessed. In the laboratory, nonhuman primates are presently being trained to learn new categories of stimuli, including novel human and simian faces, while their neural responses are monitored. The results of this study will shed light on how changes in the selectivity of visual neurons allow us to learn new stimuli.
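The norm-based encoding scheme described above can be summarized in a short sketch. The code below is purely illustrative and is not the project's analysis code: it assumes that faces can be summarized as numerical feature vectors, and all names and parameters (model_neuron_response, gain, the preferred identity axis) are hypothetical.

```python
import numpy as np

# Minimal sketch of norm-based face encoding (illustrative only).
# Assumption: each face is summarized as a feature vector, e.g. measurements
# of face components and their geometric configuration.

rng = np.random.default_rng(0)
n_features = 50

# A population of previously seen faces defines the norm (prototypical face).
seen_faces = rng.normal(size=(1000, n_features))
norm_face = seen_faces.mean(axis=0)

def identity_vector(face, norm=norm_face):
    """A face's identity is its deviation from the prototypical (average) face."""
    return face - norm

def model_neuron_response(face, preferred_axis, baseline=5.0, gain=20.0):
    """Firing rate of a hypothetical face-selective neuron that increases
    monotonically with how far a face lies from the norm along the neuron's
    preferred identity axis (norm-based tuning)."""
    deviation = identity_vector(face)
    # Signed projection onto the preferred axis; only deviations in the
    # preferred direction drive the rate above baseline.
    drive = max(0.0, float(deviation @ preferred_axis))
    return baseline + gain * drive

# Example: the average face evokes a near-baseline response, while a more
# distinctive face (exaggerated along the preferred axis) evokes more spikes.
preferred_axis = rng.normal(size=n_features)
preferred_axis /= np.linalg.norm(preferred_axis)

typical_face = norm_face.copy()
distinctive_face = norm_face + 2.0 * preferred_axis  # caricatured identity

print(model_neuron_response(typical_face, preferred_axis))      # ~baseline rate
print(model_neuron_response(distinctive_face, preferred_axis))  # elevated rate
```

The key property of this toy model is that the average face produces only a baseline response, while responses grow with a face's distinctiveness relative to the norm, which is the qualitative signature reported in the behavioral adaptation and inferotemporal recording work summarized above.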

Support Year: 4
Fiscal Year: 2010
Total Cost: $443,160
Name: U.S. National Institute of Mental Health
Mundinano, Inaki-Carril; Fox, Dylan M; Kwan, William C et al. (2018) Transient visual pathway critical for normal development of primate grasping behavior. Proc Natl Acad Sci U S A 115:1364-1369
Dougherty, Kacie; Cox, Michele A; Ninomiya, Taihei et al. (2017) Ongoing Alpha Activity in V1 Regulates Visually Driven Spiking Responses. Cereb Cortex 27:1113-1124
Toarmino, Camille R; Yen, Cecil C C; Papoti, Daniel et al. (2017) Functional magnetic resonance imaging of auditory cortical fields in awake marmosets. Neuroimage 162:86-92
Taubert, Jessica; Wardle, Susan G; Flessert, Molly et al. (2017) Face Pareidolia in the Rhesus Monkey. Curr Biol 27:2505-2509.e2
Park, Soo Hyun; Russ, Brian E; McMahon, David B T et al. (2017) Functional Subpopulations of Neurons in a Macaque Face Patch Revealed by Single-Unit fMRI Mapping. Neuron 95:971-981.e5
Leopold, David A; Russ, Brian E (2017) Human Neurophysiology: Sampling the Perceptual World. Curr Biol 27:R71-R73
Russ, Brian E; Kaneko, Takaaki; Saleem, Kadharbatcha S et al. (2016) Distinct fMRI Responses to Self-Induced versus Stimulus Motion during Free Viewing in the Macaque. J Neurosci 36:9580-9
Murphy, Aidan P; Leopold, David A; Humphreys, Glyn W et al. (2016) Lesions to right posterior parietal cortex impair visual depth perception from disparity but not motion cues. Philos Trans R Soc Lond B Biol Sci 371:
Kaskan, P M; Costa, V D; Eaton, H P et al. (2016) Learned Value Shapes Responses to Objects in Frontal and Ventral Stream Networks in Macaque Monkeys. Cereb Cortex :
Miller, Cory T; Freiwald, Winrich A; Leopold, David A et al. (2016) Marmosets: A Neuroscientific Model of Human Social Behavior. Neuron 90:219-33

Showing the most recent 10 out of 27 publications