There is a fundamental gap in our understanding of face recognition: why are there multiple neuronal codes for faces in the brain, and how are they transformed through a network of interconnected face-processing areas to support face recognition? The continued existence of this gap constitutes an important problem because of the social importance of faces and because, until it is filled, the neural mechanisms of object recognition remain largely incomprehensible. The long-term goal of the proposed research is to gain a mechanistic understanding of face recognition in a highly evolved face-processing system similar to that of humans. The overall objective of this application is to identify the rules and mechanisms of informational neuronal transformations between the cortical nodes of the face-processing network. The proposed work will test the central hypothesis that this network is organized as an information-processing hierarchy serving robust face recognition, in which each processing level performs a face-specific transformation. It takes advantage of the functional organization of the model system, which consists of spatially distinct but interconnected nodes with unique functional specializations, and of the fact that these nodes are readily identifiable with brain imaging due to their selectivity for a known visual object category, faces. The rationale of this proposal is that, upon completion of this research, we will understand core operations of high-level object recognition at a computational, representational, and mechanistic level. Guided by strong preliminary data, the central hypothesis will be tested by pursuing three specific aims: 1) What visual features do face cells use to represent complex facial information? 2) How do face areas interact to generate high-dimensional facial codes? 3) What is the causal role of face areas in facial coding and face detection? Under the first aim, we will combine single-unit electrophysiological recordings in three brain-imaging-identified face areas with parametric visual stimulation to reveal the computational mechanisms single cells use to code facial information. Under the second aim, joint electrophysiological recordings from multiple areas will be analyzed to reveal how inter-areal interactions generate face representations that differ across areas and change over time.
Under the third aim, targeted inactivation will be used to reveal the causal role that different face-processing areas play in informational transformations and in face detection. The proposed research is significant because it is expected to show directly how an information-processing network is organized to transform visual representations of a high-level object category and to utilize them for visual behavior. In doing so, it will lift our understanding of visual object recognition to a new level. The proposed research is conceptually and methodologically innovative because it takes a systems perspective on the problem of object recognition, tracing transformations of information through a multi-node network and integrating, through a novel combination of methodologies, analyses of single-cell mechanisms and population codes with causal interrogation of network function.
The neural mechanisms that underlie face processing are essential to human social life, and altered face recognition is characteristic of disorders such as autism spectrum disorder, face blindness, Fragile X syndrome, and Williams syndrome. The studies proposed in this application will examine how the brain makes sense of the many different features of faces and will thus help us understand the neuronal mechanisms of face recognition that may be impaired in these disorders.