Past research has been very successful in defining how facial expressions of emotion are produced, including which muscle movements create the most commonly seen expressions. These facial expressions are then interpreted by our visual system, yet little is known about how they are recognized. The overarching goal of this proposal is to define the form and dimensions of the cognitive (computational) space used in this visual recognition. In particular, this proposal will study the following three hypotheses. Although facial expressions are produced by a complex set of muscle movements, expressions are generally easily identified at different spatial and temporal resolutions; however, it is not known what these limits are. Our first hypothesis (H1) is that recognition of facial expressions of emotion can be achieved at low resolutions and after short exposure times.
In Aim 1, we define experiments to determine how many pixels and milliseconds (ms) are needed to successfully identify different emotions. If expressions of emotion can be recognized quickly and at low resolution, this indicates that simple features, robust to image manipulation, are employed. Our second hypothesis (H2) is that the recognition of facial expressions of emotion is partially accomplished by an analysis of configural features. Configural cues are known to play an important role in other face recognition tasks, but their role in the processing of expressions of emotion is not yet well understood.
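The resolution manipulation described in Aim 1 can be illustrated with a minimal sketch. This is a hypothetical illustration, not the proposal's actual stimuli or code: it block-averages a synthetic grayscale "face" image down to the coarse pixel resolutions a recognition-threshold experiment might present.

```python
import numpy as np

def downsample(img, factor):
    """Block-average an image to a coarser spatial resolution.

    Crops so that the dimensions divide evenly, then averages each
    factor x factor block of pixels into a single pixel.
    """
    h = img.shape[0] - img.shape[0] % factor
    w = img.shape[1] - img.shape[1] % factor
    img = img[:h, :w]
    return img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

# Synthetic 240x240 grayscale stand-in for a face photograph.
rng = np.random.default_rng(0)
face = rng.random((240, 240))

# Coarse versions of the same stimulus, e.g. 30x30 and 15x15 pixel renderings,
# to probe at which resolution each emotion can still be identified.
coarse_30 = downsample(face, 8)    # 30 x 30 pixels
coarse_15 = downsample(face, 16)   # 15 x 15 pixels
print(coarse_30.shape, coarse_15.shape)
```

Block averaging preserves the mean luminance of the stimulus, so only spatial detail is removed as resolution decreases.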
Aim 2 will identify a number of these configural cues, using real images of faces, manipulated versions of these images, and schematic drawings. Shape features are also known to play a role in facial expressions (e.g., the curvature of the mouth in happiness).
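The distinction between configural cues and the shape of individual parts can be made concrete with a small sketch. The landmark positions and cue names below are hypothetical, chosen only for illustration: configural measurements are second-order relations (distances between face parts) computed from schematic landmark layouts.

```python
import numpy as np

# Hypothetical 2D landmark positions (x, y) for two schematic faces.
# Landmarks: left eye, right eye, nose tip, left mouth corner, right mouth corner.
neutral = np.array([[-1.0, 1.0], [1.0, 1.0], [0.0, 0.0], [-0.8, -1.0], [0.8, -1.0]])
surprise = np.array([[-1.0, 1.2], [1.0, 1.2], [0.0, 0.0], [-0.6, -1.4], [0.6, -1.4]])

def configural_cues(lm):
    """Configural (second-order) measurements: distances between parts,
    rather than the shape of any single part."""
    eye_l, eye_r, _nose, mouth_l, mouth_r = lm
    return {
        "interocular": np.linalg.norm(eye_r - eye_l),
        "eye_to_mouth": np.linalg.norm((eye_l + eye_r) / 2 - (mouth_l + mouth_r) / 2),
        "mouth_width": np.linalg.norm(mouth_r - mouth_l),
    }

cues_n = configural_cues(neutral)
cues_s = configural_cues(surprise)

# How each configural measurement changes between the two expressions;
# such differences are the candidate cues Aim 2 would test.
deltas = {k: cues_s[k] - cues_n[k] for k in cues_n}
print(deltas)
```

In this toy example the eye-to-mouth distance grows and the mouth narrows between the two schematic faces while the interocular distance is unchanged, illustrating how an expression can alter some configural measurements and not others.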
In Aim 3, we define a shape-based computational model. Our third hypothesis (H3) is that the configural and shape features are defined as deviations from a mean (or norm) face rather than as a set of independent exemplars (Gnostic neurons). The importance of this computational space is not only that it further justifies the results of the previous aims, but that it makes new predictions that can be verified in additional experiments with human subjects.
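Norm-based coding, the representation H3 proposes, can be sketched in a few lines. The feature vectors below are synthetic placeholders, not the proposal's model: a face is coded as its deviation from the population mean, so an exaggerated version of a face (a caricature) keeps the same direction in the space and only grows in magnitude.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical feature vectors (e.g., configural and shape measurements)
# for a small population of faces; the "norm" is their mean.
population = rng.normal(size=(50, 4))
norm_face = population.mean(axis=0)

def norm_based_code(face_vec):
    """Represent a face by its deviation from the mean (norm) face,
    rather than by matching it against stored exemplars."""
    return face_vec - norm_face

face = population[0]
code = norm_based_code(face)

# Doubling the deviation from the norm yields a caricature: same direction
# in the feature space, larger magnitude.
caricature = norm_face + 2.0 * code
cari_code = norm_based_code(caricature)

cos = np.dot(code, cari_code) / (np.linalg.norm(code) * np.linalg.norm(cari_code))
print(round(cos, 6))
```

A prediction of this kind of coding, testable with human subjects, is that caricatured faces remain recognizable (their deviation direction is preserved), whereas an exemplar-based account makes no such commitment.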
Understanding how facial expressions of emotion are processed by our cognitive system will be important for studies of abnormal face and emotion processing in schizophrenia, autism, and Huntington's disease. Abused children are also more acute at recognizing emotions, suggesting heightened expertise with some image features; identifying which features the cognitive system uses will help develop protocols for reducing these unwanted effects. Understanding the limits of spatial and temporal resolution will also be important for studies of low vision (reduced acuity), a typical problem in several eye diseases and in the normal process of aging.