Past research has been very successful in defining how facial expressions of emotion are produced, including which muscle movements create the most commonly seen expressions. These facial expressions are then interpreted by our visual system. Yet, little is known about how they are recognized. The overarching goal of this proposal is to define the form and dimensions of the cognitive (computational) space used in this visual recognition. In particular, this proposal will study the following three hypotheses. Although facial expressions are produced by a complex set of muscle movements, expressions are generally easy to identify at different spatial and temporal resolutions; however, it is not known what these limits are. Our first hypothesis (H1) is that recognition of facial expressions of emotion can be achieved at low resolutions and after short exposure times.
In Aim 1, we define experiments to determine how many pixels and milliseconds (ms) are needed to successfully identify different emotions. The fact that expressions of emotion can be recognized quickly at low resolution indicates that simple features robust to image manipulation are employed. Our second hypothesis (H2) is that the recognition of facial expressions of emotion is partially accomplished by an analysis of configural features. Configural cues are known to play an important role in other face recognition tasks, but their role in the processing of expressions of emotion is not yet well understood.
Aim 2 will identify a number of these configural cues. We will use real images of faces, manipulated versions of these face images, and schematic drawings. It is also known that shape features play a role in facial expressions (e.g., the curvature of the mouth in happiness).
In Aim 3, we define a shape-based computational model. Our third hypothesis (H3) is that the configural and shape features are defined as deviations from a mean (or norm) face, as opposed to being described as a set of independent exemplars (Gnostic neurons). The importance of this computational space is not only to further justify the results of the previous aims, but also to make new predictions that can be verified in additional experiments with human subjects.
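The norm-based coding idea in H3 can be illustrated with a minimal sketch: a face is represented not by matching it against stored exemplars, but by its deviation vector from a mean (norm) face. The landmark coordinates and the `norm_based_code` helper below are hypothetical, chosen only to make the contrast with exemplar coding concrete; they are not the proposal's actual model.

```python
import numpy as np

def norm_based_code(face_landmarks, mean_face):
    """Represent a face as its deviation from the norm (mean) face.

    Under norm-based coding, this deviation vector -- not the raw
    configuration -- is what the recognition system operates on.
    """
    return face_landmarks - mean_face

# Toy data (hypothetical): 3 landmarks (x, y), e.g. two eyes and a mouth corner
mean_face = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, -1.0]])
test_face = np.array([[0.0, 0.1], [1.0, 0.1], [0.5, -1.3]])

deviation = norm_based_code(test_face, mean_face)

# The magnitude of the deviation gives a simple distinctiveness measure:
# faces closer to the norm are less distinctive.
distinctiveness = float(np.linalg.norm(deviation))
print(deviation)
print(round(distinctiveness, 3))  # sqrt(0.01 + 0.01 + 0.09) ~ 0.332
```

In an exemplar (Gnostic-neuron) scheme, by contrast, recognition would compare `test_face` against each stored face independently; the norm-based scheme needs only the single shared reference `mean_face`.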

Public Health Relevance

Understanding how facial expressions of emotion are processed by our cognitive system will be important for studies of abnormal face and emotion processing in schizophrenia, autism and Huntington's disease. Also, abused children are more acute at recognizing emotions, suggesting a higher degree of expertise with some image features. Identifying which features are used by the cognitive system will help in developing protocols to reduce these unwanted effects. Understanding the limits of spatial and temporal resolution will also be important for studies of low vision (acuity), a typical problem in several eye diseases and in the normal process of aging.

Agency
National Institutes of Health (NIH)
Institute
National Eye Institute (NEI)
Type
Research Project (R01)
Project #
5R01EY020834-05
Application #
8669977
Study Section
Cognition and Perception Study Section (CP)
Program Officer
Wiggs, Cheri
Project Start
2010-09-30
Project End
2015-05-31
Budget Start
2014-06-01
Budget End
2015-05-31
Support Year
5
Fiscal Year
2014
Total Cost
Indirect Cost
Name
Ohio State University
Department
Engineering (All Types)
Type
Biomed Engr/Col Engr/Engr Sta
DUNS #
City
Columbus
State
OH
Country
United States
Zip Code
43210
Pumarola, Albert; Agudo, Antonio; Martinez, Aleix M et al. (2018) GANimation: Anatomically-aware Facial Animation from a Single Image. Comput Vis ECCV 11214:835-851
Zhao, Ruiqi; Wang, Yan; Martinez, Aleix M (2018) A Simple, Fast and Highly-Accurate Algorithm to Recover 3D Shape from 2D Landmarks on a Single Image. IEEE Trans Pattern Anal Mach Intell 40:3059-3066
Martinez, Aleix M (2017) Computational Models of Face Perception. Curr Dir Psychol Sci 26:263-269
Martinez, Aleix M (2017) Visual perception of facial expressions of emotion. Curr Opin Psychol 17:27-33
Zhao, Ruiqi; Martinez, Aleix M (2016) Labeled Graph Kernel for Behavior Analysis. IEEE Trans Pattern Anal Mach Intell 38:1640-50
Hamsici, Onur C; Martinez, Aleix M (2016) Multiple Ordinal Regression by Maximizing the Sum of Margins. IEEE Trans Neural Netw Learn Syst 27:2072-83
Benitez-Quiroz, C Fabian; Wilbur, Ronnie B; Martinez, Aleix M (2016) The not face: A grammaticalization of facial expressions of emotion. Cognition 150:77-84
Srinivasan, Ramprakash; Golomb, Julie D; Martinez, Aleix M (2016) A Neural Basis of Facial Action Recognition in Humans. J Neurosci 36:4434-42
Du, Shichuan; Martinez, Aleix M (2015) Compound facial expressions of emotion: from basic research to clinical applications. Dialogues Clin Neurosci 17:443-55
Du, Shichuan; Tao, Yong; Martinez, Aleix M (2014) Compound facial expressions of emotion. Proc Natl Acad Sci U S A 111:E1454-62

Showing the most recent 10 out of 25 publications