People form impressions of other people from surprisingly minimal information, and faces are an especially rich source of social information. Despite the saying "don't judge a book by its cover," people automatically evaluate faces on multiple social dimensions such as competence and trustworthiness, and these evaluations predict important social outcomes ranging from electoral success to judicial sentencing decisions. To understand the functional and neural basis of face evaluation, it is necessary to identify the basic dimensions of face evaluation, introduce tools for formally modeling how faces vary on these dimensions, and probe the neural responses to faces that vary on these dimensions. With support from the National Science Foundation, Dr. Alexander Todorov and colleagues at Princeton University will address these questions by combining computer modeling of how faces vary on social dimensions with behavioral studies, Virtual Reality (VR) studies, and brain imaging (functional Magnetic Resonance Imaging) studies. The studies involve not only participants with typical face perception abilities but also prosopagnosics, individuals who are unable to recognize others by face alone. One current hypothesis is that faces are evaluated on two fundamental dimensions--valence and dominance--that are sensitive to different types of facial information. Valence evaluation tracks expressions signaling whether a person should be avoided (angry expression) or approached (happy expression), whereas dominance evaluation is sensitive to features signaling physical strength (masculinity and facial maturity). The current studies will investigate how behavioral and brain responses to faces change as a function of the faces' perceived valence and dominance.

The findings will be central to understanding the neural mechanisms underlying face perception and social cognition. Characterizing the processes of face evaluation is essential for building comprehensive models of person perception and social cognition and, ultimately, for understanding the social brain. The findings of this research will be important for social psychologists, cognitive neuroscientists, political scientists, and behavioral economists, and will be of interest to vision and computer scientists. The project also provides research-experience opportunities for undergraduate and graduate students in Social Neuroscience.

Project Report

Faces are among the most important social stimuli, conveying information about mental states, behavioral intentions, and social categories. Research shows that people automatically evaluate other people based on their appearance. Moreover, social judgments from facial appearance predict important social outcomes ranging from sentencing decisions to electoral success, even though these judgments are not necessarily accurate. The primary objective of this project was to understand the cognitive and neural mechanisms underlying the evaluation of faces on social dimensions. The main specific aims were to develop and validate data-driven computational models of social perception of faces, characterize the cognitive processes of face evaluation, and map the network of neural regions involved in evaluating faces on social dimensions. The experiments combined data-driven computational modeling of social perception of faces with behavioral and functional Magnetic Resonance Imaging (fMRI) studies. As a result of the NSF funding, we were able to complete multiple modeling, behavioral, and fMRI studies.

A core part of the project was the use of statistical tools for building computational models of social judgments of faces. With these models, we can precisely visualize which changes in a face lead to changes in a specific judgment such as trustworthiness or competence. More importantly, the models can be used as a discovery tool: by exaggerating the facial features that define the variation of a face on a specific dimension, one can discover the facial cues that are critical for judgments on that dimension. For example, whereas exaggerating faces in the negative direction of the trustworthiness dimension produces angry faces, exaggerating them in the positive direction produces happy faces. We have built models of many social judgments. These models show that social judgments of novel faces are grounded in the faces' similarity to emotional expressions, masculinity/femininity, facial maturity, and resemblance to familiar faces. They also make it possible to precisely manipulate the social perception of faces and to explore the neural basis of this perception.

Using faces generated by our computational models, we studied both the automaticity of social perception and its underlying neural substrate. Our behavioral findings show that people cannot help but evaluate others from minimal information. Our functional neuroimaging studies show that this evaluation relies on the coordinated activity of multiple brain regions, including the amygdala, inferior temporal cortex, and lateral prefrontal cortex.
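As a rough illustration of the data-driven modeling logic described above (not the project's actual models or face-generation software), the sketch below uses synthetic data: faces are represented as coordinates in a statistical face space, a linear model recovers the direction that best predicts a social judgment, and a face is then "exaggerated" along that direction. The variable names, the use of scikit-learn, and the random data are all illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical setup: each face is a point in a statistical "face space"
# (e.g., coordinates on shape/reflectance components of a face model).
rng = np.random.default_rng(0)
n_faces, n_dims = 300, 50
faces = rng.standard_normal((n_faces, n_dims))   # synthetic face coordinates
ratings = rng.normal(size=n_faces)               # stand-in trustworthiness ratings

# Fit a linear model: which direction in face space best predicts the judgment?
model = LinearRegression().fit(faces, ratings)
judgment_axis = model.coef_ / np.linalg.norm(model.coef_)

# "Exaggerate" a face along the judgment dimension to reveal the cues that
# drive the rating (positive steps look more trustworthy, negative steps less).
def exaggerate(face, axis, sd_units):
    return face + sd_units * axis

average_face = faces.mean(axis=0)
more_trustworthy = exaggerate(average_face, judgment_axis, +3.0)
less_trustworthy = exaggerate(average_face, judgment_axis, -3.0)
```

In practice, the exaggerated coordinates would be rendered back into face images so that the cues driving the judgment (e.g., expression-like or maturity-related features) can be inspected directly.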

Agency: National Science Foundation (NSF)
Institute: Division of Behavioral and Cognitive Sciences (BCS)
Application #: 0823749
Program Officer: Akaysha Tang
Project Start:
Project End:
Budget Start: 2008-09-01
Budget End: 2012-08-31
Support Year:
Fiscal Year: 2008
Total Cost: $549,992
Indirect Cost:
Name: Princeton University
Department:
Type:
DUNS #:
City: Princeton
State: NJ
Country: United States
Zip Code: 08540