It is known that there exist important perceptual differences between deaf native users of American Sign Language (ASL) and hearing people with no prior exposure to ASL. This project will systematically investigate the differences between these two groups as they observe and classify images of faces with regard to the displayed emotion. These perceptual differences may have their roots in the distinct manner in which native users of ASL and non-users code and analyze 2D and 3D motion patterns. We will thus study how these differences relate to the perception of movement. Finally, we will develop a face avatar that can emulate the facial movements of users and non-users of ASL. To achieve this goal, we will develop a set of computer vision algorithms that can be used to study the differences in the production of facial expressions of emotion between native users of ASL and non-signers. A necessary step toward this goal is to collect a database of facial expressions of emotion as produced by users of ASL. This database will reveal differences at the production level and will allow for the study of perceptual differences.

The research described above addresses several critical issues. First, these studies are fundamental to fully understanding the underlying mechanisms used by the brain to analyze, code, and recognize facial expressions of emotion. Although research on facial expressions of emotion has proven extremely challenging to date, most studies have targeted only hearing individuals. This proposal will study the underlying mechanisms by which native users of ASL code, produce, and interpret facial expressions of emotion. Unfortunately, the computer vision algorithms necessary to carry out these studies are not yet available. The research in this project is set to remedy this shortcoming.

The facial analysis studies that will be conducted during the course of this proposal can be used in a large number of applications, ranging from human-computer interaction systems in which the computer interprets the expressions of its user, to studying the role that each facial feature plays in the grammar of ASL. Furthermore, the study of emotional gestures will be valuable to anthropologists attempting to understand and model the evolution of emotions, and could be used to develop mechanisms to detect lies and deceit. The database of facial expressions collected during the course of this project will be made available to the research community and to educators of ASL. We will open collaborations with the School for the Deaf and encourage deaf students to pursue careers in computing and engineering.

URL: http://cbcsl.ece.ohio-state.edu/research/

Agency: National Science Foundation (NSF)
Institute: Division of Information and Intelligent Systems (IIS)
Application #: 0713055
Program Officer: Jie Yang
Budget Start: 2007-08-15
Budget End: 2011-07-31
Fiscal Year: 2007
Total Cost: $366,171
Name: Ohio State University
City: Columbus
State: OH
Country: United States
Zip Code: 43210