Past research has been very successful in defining how facial expressions of emotion are produced, including which muscle movements create the most commonly seen expressions. These expressions are then interpreted by our visual system, yet little is known about how they are recognized. The overarching goal of this proposal is to define the form and dimensions of the cognitive (computational) space used in this visual recognition. In particular, the proposal will study three hypotheses. Although facial expressions are produced by a complex set of muscle movements, they are generally easy to identify across a range of spatial and temporal resolutions; however, it is not known what the limits of this robustness are. Our first hypothesis (H1) is that recognition of facial expressions of emotion can be achieved at low spatial resolutions and after short exposure times.
In Aim 1, we define experiments to determine how many pixels and milliseconds (ms) of exposure are needed to successfully identify different emotions. The fact that expressions of emotion can be recognized quickly and at low resolution indicates that the visual system employs simple features that are robust to image degradation. Our second hypothesis (H2) is that the recognition of facial expressions of emotion is partially accomplished by an analysis of configural features. Configural cues are known to play an important role in other face recognition tasks, but their role in the processing of expressions of emotion is not yet well understood.
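The spatial-resolution manipulation in Aim 1 can be illustrated with a minimal sketch. The code below assumes face images are available as files; the file name, resolutions, and exposure times are hypothetical placeholders, not the values proposed in the aims.

```python
# Minimal sketch of an Aim 1-style stimulus manipulation (illustrative values only).
from PIL import Image

RESOLUTIONS = [160, 80, 40, 20, 10]    # stimulus sizes in pixels (square); hypothetical
EXPOSURES_MS = [10, 25, 50, 100, 500]  # presentation times in ms; hypothetical

def make_low_res_stimulus(path, size):
    """Downsample a face image to size x size pixels, then scale it back up
    so every stimulus subtends the same visual angle on screen."""
    img = Image.open(path).convert("L")
    small = img.resize((size, size), Image.BILINEAR)   # discards fine image detail
    return small.resize(img.size, Image.NEAREST)       # same display size, less information

# "face_happy.png" is a hypothetical stimulus file name.
stimuli = {r: make_low_res_stimulus("face_happy.png", r) for r in RESOLUTIONS}
```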
Aim 2 will identify a number of these configural cues, using real face images, manipulated versions of those images, and schematic drawings. It is also known that shape features play a role in facial expressions (e.g., the curvature of the mouth in happiness).
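As one way to picture what a configural cue is, the sketch below computes two illustrative second-order measurements from facial landmark coordinates. The landmark names, the example coordinates, and the two cues are hypothetical; they are not the cues the aims will identify.

```python
# Minimal sketch: quantifying configural cues as normalized distances between landmarks.
import numpy as np

def configural_cues(landmarks):
    """landmarks: dict mapping point names to (x, y) image coordinates."""
    lm = {k: np.asarray(v, dtype=float) for k, v in landmarks.items()}
    brow_to_eye = np.linalg.norm(lm["left_brow"] - lm["left_eye"])        # brow-eye distance
    mouth_width = np.linalg.norm(lm["mouth_left"] - lm["mouth_right"])
    inter_eye = np.linalg.norm(lm["left_eye"] - lm["right_eye"])
    # Express distances relative to inter-eye distance so the cues are scale invariant.
    return {"brow_eye_ratio": brow_to_eye / inter_eye,
            "mouth_width_ratio": mouth_width / inter_eye}

# Hypothetical landmark positions for a single face image.
cues = configural_cues({"left_brow": (80, 60), "left_eye": (82, 85),
                        "right_eye": (150, 85), "mouth_left": (95, 160),
                        "mouth_right": (140, 160)})
```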
In Aim 3, we define a shape-based computational model. Our third hypothesis (H3) is that the configural and shape features are encoded as deviations from a mean (or norm) face rather than as a set of independent exemplars (Gnostic neurons). The importance of this computational space is not only to further justify the results of the previous aims, but also to make new predictions that can be verified in additional experiments with human subjects.
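The contrast in H3 can be sketched in a few lines of code. In the toy example below, the feature vectors, their dimensionality, and the random "stored faces" are made up purely for illustration: norm-based coding represents a face by its deviation from the mean face, whereas exemplar coding compares it directly against independently stored examples.

```python
# Minimal sketch contrasting norm-based and exemplar-based face coding (toy data).
import numpy as np

def norm_based_code(face, mean_face):
    """Represent a face as its deviation from the mean (norm) face."""
    return face - mean_face

def nearest_exemplar(face, exemplars):
    """Exemplar (Gnostic-neuron-like) coding: index of the closest stored face."""
    dists = np.linalg.norm(exemplars - face, axis=1)
    return int(np.argmin(dists))

rng = np.random.default_rng(0)
exemplars = rng.normal(size=(6, 4))        # six stored faces, four features each (made up)
mean_face = exemplars.mean(axis=0)
probe = rng.normal(size=4)

deviation = norm_based_code(probe, mean_face)  # direction and magnitude from the norm
closest = nearest_exemplar(probe, exemplars)   # identity of the nearest stored exemplar
```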

Public Health Relevance

Understanding how facial expressions of emotion are processed by our cognitive system will be important for studies of abnormal face and emotion processing in schizophrenia, autism, and Huntington's disease. In addition, abused children are more acute at recognizing some emotions, suggesting heightened sensitivity to particular image features; identifying which features the cognitive system uses will help develop protocols for reducing these unwanted effects. Understanding the limits of spatial and temporal resolution will also be important for studies of low vision (reduced acuity), a common problem in several eye diseases and in normal aging.

National Institutes of Health (NIH)
National Eye Institute (NEI)
Research Project (R01)
Study Section: Cognition and Perception Study Section (CP)
Program Officer: Wiggs, Cheri
Ohio State University
Engineering (All Types)
Schools of Engineering
United States
Du, Shichuan; Tao, Yong; Martinez, Aleix M (2014) Compound facial expressions of emotion. Proc Natl Acad Sci U S A 111:E1454-62
Bian, Wei; Zhou, Tianyi; Martinez, Aleix M et al. (2014) Minimizing nearest neighbor classification error for nonparametric dimension reduction. IEEE Trans Neural Netw Learn Syst 25:1588-94
Benitez-Quiroz, C Fabian; Rivera, Samuel; Gotardo, Paulo F U et al. (2014) Salient and Non-Salient Fiducial Detection using a Probabilistic Graphical Model. Pattern Recognit 47:
Benitez-Quiroz, C Fabian; Gökgöz, Kadir; Wilbur, Ronnie B et al. (2014) Discriminant features and temporal structure of nonmanuals in American Sign Language. PLoS One 9:e86268
You, Di; Benitez-Quiroz, Carlos Fabian; Martinez, Aleix M (2014) Multiobjective optimization for model selection in kernel methods in regression. IEEE Trans Neural Netw Learn Syst 25:1879-93
Du, Shichuan; Martinez, Aleix M (2013) Wait, are you sad or angry? Large exposure time differences required for the categorization of facial expressions of emotion. J Vis 13:13
Rivera, Samuel; Martinez, Aleix (2012) Learning Deformable Shape Manifolds. Pattern Recognit 45:1792-1801
Du, Shichuan; Martinez, Aleix M (2011) The resolution of facial expressions of emotion. J Vis 11:24