American Sign Language (ASL) is a full natural language, with a linguistic structure distinct from English, used as the primary means of communication by approximately one-half million deaf people in the United States. Because they are unable to hear spoken English during the critical language-acquisition years of childhood, the majority of deaf high school graduates in the U.S. read English at only a fourth-grade level. Because of this low English literacy rate, and because English and ASL have such different linguistic structures, many deaf people in the United States could benefit from technology that translates English text into animations of ASL performed by a virtual human character on a computer screen.

Previous English-to-ASL machine translation projects, however, have made only limited progress. Rather than producing actual ASL animations, these projects have handled only restricted subsets of the language, allowing them to sidestep many important linguistic and animation issues, in particular the ubiquitous ASL constructions called "classifier predicates" that are required to translate many English input sentences. Classifier predicates are an ASL phenomenon in which the signer uses the space around his or her body to position invisible objects representing the entities or concepts under discussion; the signer's hands show the movement and location of these objects in space. Classifier predicates are the ASL phenomenon most unlike elements of spoken or written languages, and they are therefore especially difficult for machine translation software to produce.

In this research, the PIs and their graduate students will build on prior research in ASL linguistics, machine translation and artificial intelligence, and 3D graphics simulation and human animation to design and implement a prototype software system capable of producing animations of classifier predicates from English text. In doing so, they will address some of the most challenging issues in English-to-ASL translation, with the goal of producing a software design that can serve as a robust framework for the future implementation of a complete English-to-ASL machine translation system. The prototype implementation will have sufficient visual quality and linguistic breadth to enable a pilot evaluation of the design, and of the quality of the output animations, by deaf native ASL signers.

Broader Impacts: This research will lead to significant advances in the state of the art of English-to-ASL machine translation software, eventually enabling new applications that provide improved access to information, media, and services for the hundreds of thousands of deaf Americans with low English literacy. Instead of displaying English text, devices such as computers, closed-captioned televisions, and wireless pagers could show deaf users an animation of a virtual human character performing ASL. Novel educational reading software to promote English literacy skills in deaf children could also be developed. The project will also expose the participating graduate students to research issues relating to ASL and animation, and will support a summer ASL language training program for them at Gallaudet University.

Agency: National Science Foundation (NSF)
Institute: Division of Information and Intelligent Systems (IIS)
Type: Standard Grant (Standard)
Application #: 0520798
Program Officer: Ephraim P. Glinert
Budget Start: 2005-06-01
Budget End: 2006-11-30
Fiscal Year: 2005
Total Cost: $77,830
Name: University of Pennsylvania
City: Philadelphia
State: PA
Country: United States
Zip Code: 19104