Linguistic research on ASL has been held back by the lack of precise tools for measuring, across large corpora, the non-manual articulations (i.e., facial expressions and head gestures) that carry key grammatical information in sign languages. The same limitations have, until now, also held back computer science research on sign language recognition and generation.

Through prior NSF support, the PIs have created valuable resources to serve the research, education, and sign language communities, including: computational techniques for the analysis of American Sign Language (ASL) videos and the SignStream software for linguistic annotation of sign language data; large linguistically annotated and computationally analyzed corpora with videos from native signers; and an online Data Access Interface (DAI) that enables intuitive and flexible searching, browsing, and downloading, providing easy access to these publicly shared corpora. They have also exploited these corpora for research on the linguistic structure of ASL and on computer-based sign language recognition from video. Recently, they have developed new versions of SignStream and the DAI with many new features, now ready for public release. Both represent major improvements over earlier versions of these applications and, combined with the public release of large, richly annotated, and readily searchable data sets, constitute resources of great value to researchers, educators, and students in linguistics and computer science: they open up whole new avenues of research and enable dramatic improvements in computer-based sign language recognition and generation.

The resulting wide-ranging research advances will also contribute to future computer-based applications that enhance communication for and with deaf individuals, as well as applications that offer educational benefits and broadly improve the lives of those who are deaf and hard-of-hearing. The part-time effort funded for the two key software developers will also enable them to provide the limited technical support that is essential during the first year of the public release of SignStream 3 and DAI 2.

The goal of this project is to further improve the existing applications by incorporating several powerful enhancements and additional functionalities, so that the shared tools and data can support new kinds of research in both linguistics (analysis of the linguistic properties of ASL and other signed languages) and computer science (sign language recognition and generation). Specifically, the PIs will incorporate graphical representations of computer-generated analyses of ASL videos into the displays of both the annotation software and the Web interface, so that users can visualize the distribution and characteristics of key aspects of the facial expressions and head movements that carry critical linguistic information in sign languages (e.g., head nods and shakes, eyebrow height, and eye aperture). The most challenging aspect of sign language generation has been the production of natural-looking, appropriately timed facial expressions and head movements. The sophisticated approach to tracking and 3D modeling of such expressions recently developed by Metaxas et al. makes it possible to derive precise information about these facial expressions and head gestures for large sets of video files.
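To make the visualization idea concrete, here is a minimal sketch of how per-frame non-manual feature tracks (eyebrow height, eye aperture, head pitch) extracted from an ASL video might be plotted as aligned time series. This is purely illustrative: the SignStream 3 and DAI 2 displays are not described at the API level in this abstract, and all data, variable names, and values below are hypothetical placeholders rather than the project's actual formats.

```python
# Illustrative sketch only: feature tracks here are synthetic placeholders,
# not output from the project's actual face-tracking pipeline.
import numpy as np
import matplotlib.pyplot as plt

frames = np.arange(300)                       # ~10 s of video at 30 fps
eyebrow = 0.5 + 0.3 * np.sin(frames / 25)     # hypothetical eyebrow-height track
aperture = 0.6 + 0.2 * np.cos(frames / 40)    # hypothetical eye-aperture track
pitch = 5.0 * np.sin(frames / 15)             # hypothetical head-pitch track (nods)

# Plot each track on its own axis, sharing the frame axis so that
# grammatically relevant events can be compared across channels.
fig, axes = plt.subplots(3, 1, sharex=True, figsize=(8, 5))
tracks = [(eyebrow, "eyebrow height"),
          (aperture, "eye aperture"),
          (pitch, "head pitch (deg)")]
for ax, (track, label) in zip(axes, tracks):
    ax.plot(frames, track)
    ax.set_ylabel(label)
axes[-1].set_xlabel("video frame")
fig.suptitle("Per-frame non-manual feature tracks (hypothetical data)")
plt.tight_layout()
plt.show()
```

In an annotation tool, such tracks would presumably be rendered in register with the gloss and non-manual annotation tiers, so that, for example, a raised-eyebrow region could be inspected against the span of a marked yes/no question.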

Agency: National Science Foundation (NSF)
Institute: Division of Information and Intelligent Systems (IIS)
Type: Standard Grant (Standard)
Application #: 1748022
Program Officer: Ephraim Glinert
Budget Start: 2017-08-01
Budget End: 2018-07-31
Fiscal Year: 2017
Total Cost: $54,999
Name: Rutgers University
City: Piscataway
State: NJ
Country: United States
Zip Code: 08854