We will develop novel computer vision tools to reliably and precisely measure nonverbal social communication by quantifying communicative facial and bodily expressions. Our tools will be designed to maximize usability by non-engineer behavioral scientists, bridging the large gap between engineering advances and their clinical accessibility.

Significance: Social interaction inherently relies on the perception and production of coordinated face and body expressions. Indeed, atypical face and body movements are observed in many disorders, impacting social interaction and communication. Traditional systems for quantifying nonverbal communication (e.g., FACS, BAP) require extensive training and coding time, and their tedious coding requirements drastically limit their scalability and reproducibility. Although an extensive literature exists on advanced computer vision and machine learning techniques for face and body analysis, no well-established method is commonly used in the mental health community to quantify the production of facial and bodily expressions or to efficiently capture individual differences in nonverbal communication more generally. As part of this proposal, we will develop a computer vision toolbox containing tools that are both highly granular and highly scalable, allowing measurement of complex social behavior in large and heterogeneous populations.

Approach: Our team will develop tools that provide granular metrics of nonverbal social behavior, including localized face and body kinematics, characteristics of elicited expressions, and imitation performance. Our tools will facilitate measurement of social communication both within a person and between people, allowing assessment of individual social communication cues as well as those that occur within bidirectional social contexts.

Preliminary Data: We have developed and applied novel computer vision tools to assess: (1) diversity of mouth motion during conversational speech (effect size d = 1.0 in differentiating young adults with and without autism during a brief natural conversation), (2) interpersonal facial coordination (91% accuracy in classifying autism diagnosis in young adults during a brief natural conversation, replicated in an independent child sample), and (3) body action imitation (85% accuracy in classifying autism diagnosis based on body imitation performance). As part of the current proposal, we will develop more generic methods that can be used in normative and clinical samples.
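To make the second preliminary metric concrete, the minimal Python sketch below computes a simple interpersonal facial coordination score from two participants' facial landmark time series. It is an illustrative example only, not the proposed toolbox: the 68-point landmark convention, the mouth-opening definition, and the window and lag parameters are assumptions chosen for clarity.

    import numpy as np

    def mouth_opening(landmarks):
        # Per-frame mouth opening from a (T, 68, 2) facial landmark array
        # (68-point convention assumed): inner upper- vs. lower-lip distance.
        return np.linalg.norm(landmarks[:, 62, :] - landmarks[:, 66, :], axis=1)

    def _zscore(x):
        return (x - x.mean()) / (x.std() + 1e-8)

    def windowed_coordination(sig_a, sig_b, win=90, step=45, max_lag=15):
        # Mean peak absolute lagged correlation between two signals, computed in
        # sliding windows; higher values indicate tighter interpersonal coordination.
        peaks = []
        stop = min(len(sig_a), len(sig_b)) - win - max_lag
        for start in range(max_lag, stop, step):
            a = _zscore(sig_a[start:start + win])
            best = max(
                abs(float(np.mean(a * _zscore(sig_b[start + lag:start + lag + win]))))
                for lag in range(-max_lag, max_lag + 1)
            )
            peaks.append(best)
        return float(np.mean(peaks)) if peaks else float("nan")

In this sketch, coordination between two conversational partners would be summarized as windowed_coordination(mouth_opening(landmarks_a), mouth_opening(landmarks_b)); such a score is one example of a feature that could enter a downstream classifier.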
Aims: In Aim 1, we will develop tools to automatically quantify fine-grained face movements and their coordination during facial expression production;
in Aim 2, we will develop tools to quantify body joint kinematics and their coordination during bodily expression production;
in Aim 3, we will demonstrate the tools' ability to yield dimensional metrics using machine learning (a brief illustrative sketch of such kinematic and dimensional metrics appears below). Impact: Our approach is designed for fast and rigorous assessment of nonverbal social communication, providing a scalable solution for measuring individual variability within a dimensional and transdiagnostic framework.
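To make the kinematic and dimensional metrics referenced in Aims 2 and 3 concrete, the following minimal sketch derives per-joint speed and a simple feature vector from a pose time series. The (T, J, 2) keypoint array, frame rate, and feature choices are illustrative assumptions, not the proposed method: any off-the-shelf pose estimator could supply the keypoints, and the resulting features would feed a downstream machine learning model.

    import numpy as np

    def joint_kinematics(keypoints, fps=30.0):
        # Per-joint kinematics from a (T, J, 2) array of 2D keypoints:
        # frame-to-frame velocity magnitude (speed) and change in speed.
        velocity = np.diff(keypoints, axis=0) * fps      # (T-1, J, 2)
        speed = np.linalg.norm(velocity, axis=-1)        # (T-1, J)
        accel = np.diff(speed, axis=0) * fps             # (T-2, J), speed change
        return speed, accel

    def summary_features(keypoints, fps=30.0):
        # Simple dimensional summary per joint (mean and variability of speed),
        # suitable as input features to a machine learning model (Aim 3).
        speed, _ = joint_kinematics(keypoints, fps)
        return np.concatenate([speed.mean(axis=0), speed.std(axis=0)])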
This project develops novel tools for measuring nonverbal social communication as manifested through facial and bodily expressions. Using advanced computer vision and machine learning methodologies, we will quantify humans' communicative social behavior. The results of this project will impact public health by facilitating a rich characterization of the normative development of social functioning, by providing access to precise phenotypic information for neuroscience and genetics studies, and by measuring subtle individual differences to determine whether some interventions or treatments work better than others.