Autism spectrum disorders (ASD) are a range of pervasive developmental disorders that affect approximately 1 in 100-150 children. One hallmark feature of individuals with ASD is difficulty with emotional and communicative facial expressions. Although there is a growing literature on the ability of individuals with ASD to process faces and facial expressions, virtually nothing is known about their ability to produce communicative facial expressions. Our preliminary work shows that facial expressions produced by adolescents with ASD are consistently perceived by typically developing (TD) individuals as unnatural or awkward. The purpose of the proposed project is to combine such subjective coding of facial expression production in adolescents with ASD with cutting-edge infrared motion-tracking of facial features to determine which feature movements drive the perception of awkwardness. We hypothesize strong correlations between motion-tracking data and the subjective coding of intensity and awkwardness for facial expressions. We expect group differences between the ASD and TD participants to be strongest in elicited narratives, weaker in conversations, and weakest in spontaneous reactions to emotion-inducing images.
Our specific aims are: 1a. To characterize the spontaneous facial communicative behavior of adolescents with ASD through subjective coding. 1b. To compare facial communication in spontaneous conversations with expressions produced during elicited narratives. 2a. To specify the underlying facial feature movements creating expressions perceived as unnatural or awkward. 2b. To collect preliminary data to inform development of interventions aimed at improving facial expression production.
For Aims 1a and 1b, we will apply our previously developed coding system for capturing facial intensity and naturalness to existing videos of spontaneous conversations between adolescents with ASD and TD peers. We will then compare this dataset with coding data from elicited narratives collected during a pilot study.
For Aims 2a and 2b, we will use a motion-tracker to record movements of 33 reflective markers on each participant's face and head with high-speed infrared cameras that are frame-synched to a digital video camera. These two sets of cameras will allow us to correlate subjective, perceptual coding of the face with precisely time-synched data of discrete facial feature movements. We expect specific feature movements to be correlated with the perception of awkwardness. We will analyze facial expressions in elicited narratives, guided conversations, and spontaneous affective reactions to emotion-inducing images. These three tasks will allow us to compare spontaneous and elicited facial expressions, as well as contrast natural affective responses with communicative emotional expressions. Our ultimate goal is to determine the underlying mechanics of facial expression production in individuals with ASD and begin developing teaching strategies for the production of more naturalistic expressions that may lead to greater social acceptance.
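The core analysis described above, relating motion-tracked marker movement to subjective perceptual ratings, can be illustrated with a minimal sketch. All data, function names, and the choice of summary statistic (total marker displacement per clip, Pearson correlation with awkwardness ratings) are hypothetical assumptions for illustration, not the project's actual analysis pipeline.

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def marker_motion(frames):
    """Total displacement of one marker across frame-synched 3-D positions."""
    return sum(math.dist(a, b) for a, b in zip(frames, frames[1:]))

# Hypothetical per-clip data: one motion summary per video clip (e.g., mm of
# total marker travel) and one coder-assigned awkwardness rating per clip.
motion = [12.4, 8.1, 15.9, 6.3, 14.2]
awkwardness = [4.0, 2.5, 4.5, 2.0, 3.8]

r = pearson(motion, awkwardness)  # correlation between movement and ratings
```

In practice each clip would contribute 33 marker trajectories rather than a single scalar, and a per-marker correlation analysis of this kind is one way to identify which feature movements drive perceived awkwardness.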

Public Health Relevance

Individuals with autism spectrum disorders (ASD), even those with seemingly intact language skills, rarely succeed at face-to-face social interaction, with one of their most noticeable deficits being the production of atypical and often jarring facial expressions. The proposed project will provide a significant contribution to the field by using cutting-edge motion-tracking technology combined with subjective coding to establish a first cohesive picture of the communicative facial expression productions of adolescents with ASD. These data will ultimately allow us to derive new intervention approaches to improve facial communication and thereby social acceptance of individuals with ASD.

Agency
National Institutes of Health (NIH)
Institute
National Institute on Deafness and Other Communication Disorders (NIDCD)
Type
Exploratory/Developmental Grants (R21)
Project #
5R21DC010867-02
Application #
8029540
Study Section
Child Psychopathology and Developmental Disabilities Study Section (CPDD)
Program Officer
Cooper, Judith
Project Start
2010-04-01
Project End
2013-08-31
Budget Start
2011-04-01
Budget End
2013-08-31
Support Year
2
Fiscal Year
2011
Total Cost
$171,215
Indirect Cost
Name
University of Massachusetts Medical School Worcester
Department
Pediatrics
Type
Schools of Medicine
DUNS #
603847393
City
Worcester
State
MA
Country
United States
Zip Code
01655
Grossman, Ruth B; Steinhart, Erin; Mitchell, Teresa et al. (2015) "Look who's talking!" Gaze patterns for implicit and explicit audio-visual speech synchrony detection in children with high-functioning autism. Autism Res 8:307-16
Guha, Tanaya; Yang, Zhaojun; Ramakrishna, Anil et al. (2015) On quantifying facial expression-related atypicality of children with autism spectrum disorder. Proc IEEE Int Conf Acoust Speech Signal Process 2015:803-807
Grossman, Ruth B (2015) Judgments of social awkwardness from brief exposure to children with and without high-functioning autism. Autism 19:580-7
Grossman, Ruth B; Edelson, Lisa R; Tager-Flusberg, Helen (2013) Emotional facial and vocal expressions during story retelling by children and adolescents with high-functioning autism. J Speech Lang Hear Res 56:1035-44
Metallinou, Angeliki; Grossman, Ruth B; Narayanan, Shrikanth (2013) Quantifying atypicality in affective facial expressions of children with autism spectrum disorders. Proc (IEEE Int Conf Multimed Expo) 2013:1-6
Grossman, Ruth B; Tager-Flusberg, Helen (2012) Quality matters! Differences between expressive and receptive non-verbal communication skills in adolescents with ASD. Res Autism Spectr Disord 6:1150-1155
Grossman, Ruth B; Tager-Flusberg, Helen (2012) ""Who said that?"" Matching of low- and high-intensity emotional prosody to facial expressions by adolescents with ASD. J Autism Dev Disord 42:2546-57
Grossman, Ruth B; Bemis, Rhyannon H; Plesa Skwerer, Daniela et al. (2010) Lexical and affective prosody in children with high-functioning autism. J Speech Lang Hear Res 53:778-93