American Sign Language (ASL) is used by as many as two million people in the United States, with additional users elsewhere in North America. The purpose of this "planning grant" is to enable the PI and her multi-institutional team to explore the case for a possible future NSF investment in an annotated, publicly available, and easily searchable corpus consisting of terabytes of ASL video data (deriving in part from prior work by the PI and her colleagues), including diverse types of content such as dialogues, narratives, elicited sentences illustrating specific grammatical constructions, and isolated signs. The PI contends that such a resource would constitute important research infrastructure, one that a broad research community could exploit to advance linguistics (the structure of ASL), computer vision (machine recognition of gestures), indexing of visual information (through the expansion of markup vocabularies), and education. The PI notes that the potential value of the existing corpora remains largely untapped, notwithstanding their extensive and productive use by her team and others, because hardware and software limitations make it cumbersome to search, identify, and share data of interest.
Broader Impacts: The new resource would be easily accessible to the research community and the broader public via a user-friendly Web-based interface. Availability of the resource online would allow ASL teachers, ASL users, and others to access the data directly. Users would be able to look up an unknown sign by submitting a video example of that sign. Students of ASL would be able to retrieve video showing examples of a specific sign used in actual sentences, or examples of a grammatical construction. ASL instructors and teachers of the Deaf would have ready access to video examples of lexical items and grammatical constructions as used by a variety of native signers, for use in language instruction and evaluation.
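To make the lookup-by-example idea concrete, one plausible realization (not a description of the project's actual method) is a nearest-neighbor search over feature vectors extracted from the videos. The Python sketch below assumes such features already exist; the function names, the 128-dimensional vectors, and the (gloss, feature vector) index are all hypothetical placeholders, and feature extraction itself (hand shape, motion trajectory, etc.) is outside the scope of the sketch.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def lookup_sign(query_features, index, top_k=5):
    """Rank indexed signs by similarity to the query video's features.

    index: list of (gloss, feature_vector) pairs, assumed precomputed
    offline from an annotated corpus. This is an illustrative schema,
    not the actual organization of the project's datasets.
    """
    scored = [(gloss, cosine_similarity(query_features, vec))
              for gloss, vec in index]
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return scored[:top_k]

# Toy usage: random vectors stand in for real video features.
rng = np.random.default_rng(0)
index = [("BOOK", rng.normal(size=128)), ("HOUSE", rng.normal(size=128))]
query = rng.normal(size=128)
print(lookup_sign(query, index))
```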
This project is part of a multi-year effort by Boston University, Rutgers University, and the University of Texas at Arlington to build large, annotated datasets of American Sign Language (ASL) video for use in linguistic and computer science research, as well as in education. As a result of this effort, we now have datasets containing terabytes of annotated ASL video. These datasets are publicly available and have been a valuable resource both for researchers studying the linguistics of ASL and for researchers designing methods for computer-based recognition of ASL. However, the full potential of these assets has not been realized, because it is difficult to browse the datasets for materials relevant to specific types of research and to download the appropriate subsets of materials.

The NSF grant awarded for this project was a "planning grant", allocating a total of $100,000 across the three collaborating institutions (Boston University, Rutgers, and the University of Texas at Arlington). The components of the project carried out at Rutgers and UT Arlington have been completed (April 1, 2010-March 31, 2011); the Boston University component continues, with an end date of March 31, 2012. Thus the collaborative project as a whole has not yet reached completion.

The main goals were (1) to develop a proposal for a larger grant intended to establish the infrastructure needed to make these rich datasets maximally useful as a community resource, and (2) to lay the foundation for making the data accessible by developing a prototype Web interface to these large datasets, so that researchers can easily locate and download available content of interest. For example, a computer vision researcher may be looking for a video dataset of signs for training and testing a sign recognition algorithm, and might deem it useful to consider only signs that have at least 5 examples from each of at least 3 different subjects. Before the onset of this project, such a researcher would have had to spend considerable time browsing through our datasets, manually selecting and downloading the data of interest. This is the type of search that we aim eventually to automate via a new interface to our datasets (a sketch of such a query appears at the end of this report).

The three collaborating institutions have used this planning grant in the following ways: (1) to establish interactions with other sign language researchers and to discuss how they have been using our datasets and how they would like to use them in the future, so that we can plan to incorporate the most desired functionalities into the interface; and (2) to design a prototype of the interface, incorporating some of the most basic functionalities that we would like the final interface to include. Substantial progress has been made on the prototype. Full implementation of the Web interface and broad public release of the searchable corpora as a community resource are predicated on the availability of funding for the required infrastructure and for the further enhancements described in our pending CRI proposal.
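The following Python sketch illustrates the kind of query described above: selecting signs with at least 5 examples from each of at least 3 different signers. The record layout (gloss, signer ID, video file) and the function name are assumptions made for illustration only, not the actual annotation schema of our datasets; the planned interface would run this sort of filter over the real metadata automatically.

```python
from collections import defaultdict

def qualifying_signs(records, min_examples=5, min_signers=3):
    """Return glosses having at least `min_signers` distinct signers
    who each contribute at least `min_examples` video examples."""
    counts = defaultdict(lambda: defaultdict(int))  # gloss -> signer -> count
    for gloss, signer, _video_file in records:
        counts[gloss][signer] += 1
    return [gloss for gloss, per_signer in counts.items()
            if sum(1 for n in per_signer.values() if n >= min_examples)
               >= min_signers]

# Toy metadata: (sign gloss, signer ID, video file); purely illustrative.
records = [("BOOK", f"signer{s:02d}", f"book_{s}_{i}.mov")
           for s in range(1, 4) for i in range(5)]
print(qualifying_signs(records))  # -> ['BOOK']
```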