When humans converse, semantic verbal content is accompanied by vocal prosody (the emphasis and timing of speech), head nods, eye movements, eyebrow raises, and mouth expressions such as smiles. Coordination between conversants' movements and/or facial expressions can be observed when an action generated by one individual predicts a symmetric movement by another: symmetry formation. The interplay between such symmetry formation and subsequent symmetry breaking in nonverbal behavior is integral to the process of communication and is diagnostic of the dynamics of human social interaction.

The PIs propose a model in which low-level contributions from audition, vision, and proprioception (the perception of the angles of our joints) are combined in a mirror system that supports affective and semantic communication through the formation and breaking of symmetry between conversants' movements, facial expressions, and vocal prosody. In the current project, naive participants will engage in dyadic (one-on-one) conversations with trained laboratory assistants over a closed-circuit video system that displays a computer-reconstructed version of the lab assistant's head and face. Both the naive participant's and the lab assistant's motions, facial expressions, and vocalizations will be recorded. The visual and auditory stimuli available to the naive participant will be manipulated to provide specific tests of hypotheses about the strength and timing of the effects of head movement, facial expression, and vocal prosody. The visual manipulation will be provided by a photorealistic reconstructed avatar head (i.e., a computer animation) driven partly by tracking of the lab assistant's head and face and partly by manipulation of the timing and amplitude of the avatar's movements and facial expressions. A combined differential-equations and computational model of the dynamics of head movements and facial expression will be constructed and tested by substituting it in real time for the lab assistant's head motion or facial expression as realized by the avatar.

The broader impact of this project falls into three main areas: enabling technology for the study of human and social dynamics, applications to the treatment of psychopathology, and applications to human-computer interface design and educational technology. (1) These experiments will advance methods for testing a wide variety of hypotheses in social interaction research in which the research question involves a manipulation of perceived social roles. (2) Automated analysis of facial expression enables on-line analysis of social interactions in small-group, high-stress settings in which emotion regulation is critical, such as residential psychiatric treatment centers and psychotherapist-client interactions. The reliability, validity, and utility of psychiatric diagnosis, assessment of symptom severity, and assessment of response to treatment could be improved by efficient measurement of facial expression and related nonverbal behavior, such as head gesture and gaze. (3) Successful outcomes from the computational models may lead to automated computer interfaces and tutoring systems that respond to students' facial displays of confusion or understanding and thereby guide more efficient instruction and learning.
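As one illustration of how the differential-equations component of such a model could be formalized, the minimal Python sketch below simulates symmetry formation and breaking between two conversants' head-movement rhythms using a coupled-phase-oscillator formulation. This is an assumed, illustrative formulation only: the coupling parameter k, the natural frequencies, and the function simulate_dyad are hypothetical choices and are not specified in the proposal.

import numpy as np

def simulate_dyad(k, omega_a=1.0, omega_b=1.2, dt=0.01, steps=5000):
    """Integrate two coupled phase oscillators standing in for the head-movement
    rhythms of two conversants (an assumed formulation, not the PIs' model).

    Euler integration of:
        d(theta_a)/dt = omega_a + k * sin(theta_b - theta_a)
        d(theta_b)/dt = omega_b + k * sin(theta_a - theta_b)

    Strong coupling (large k) pulls the phases together (symmetry formation);
    weak coupling lets the relative phase drift (symmetry breaking).
    Returns the wrapped relative phase at each time step.
    """
    theta_a, theta_b = 0.0, np.pi / 2
    rel_phase = np.empty(steps)
    for t in range(steps):
        d_a = omega_a + k * np.sin(theta_b - theta_a)
        d_b = omega_b + k * np.sin(theta_a - theta_b)
        theta_a += d_a * dt
        theta_b += d_b * dt
        # Wrap the relative phase into (-pi, pi] for easy inspection.
        rel_phase[t] = np.angle(np.exp(1j * (theta_a - theta_b)))
    return rel_phase

if __name__ == "__main__":
    for k in (0.05, 0.5):  # weak vs. strong interpersonal coupling
        drift = np.abs(np.diff(simulate_dyad(k))).mean()
        print(f"coupling k={k}: mean relative-phase drift per step = {drift:.4f}")

Running the sketch shows near-zero relative-phase drift under strong coupling (phase locking, i.e., symmetry formation) and sustained drift under weak coupling (symmetry breaking); in the proposed experiments, an analogous fitted model would be substituted in real time for the lab assistant's tracked head motion.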

Agency: National Science Foundation (NSF)
Institute: Division of Behavioral and Cognitive Sciences (BCS)
Type: Standard Grant (Standard)
Application #: 0527444
Program Officer: Amber L. Story
Budget Start: 2006-01-01
Budget End: 2009-12-31
Fiscal Year: 2005
Total Cost: $174,345
Name: Carnegie-Mellon University
City: Pittsburgh
State: PA
Country: United States
Zip Code: 15213