This award is funded under the American Recovery and Reinvestment Act of 2009 (Public Law 111-5).
The proposed work focuses on gestural human-computer interfaces: camera-based interfaces for individuals with severe physical disabilities, communication through sign language and gesture, and motion of the human body and hands. The proposed infrastructure enhancement would make it possible, and practical, to conduct video-based research outside the lab. The infrastructure includes a ruggedized mobile system for on-site capture of synchronized, multi-view digital video, for instance in schools for the Deaf and in the homes of people with disabilities. Software infrastructure will be enhanced to support development of the interfaces for field deployment of these systems and for annotation of the video collected in the field. Capturing video data from a larger and more diverse pool of subjects will yield a more diverse collection of videos for the study of linguistic variation and for the training and testing of computer vision algorithms.
The camera-based assistive technology developed in this project will have a positive impact on the quality of life of adults and children with severe physical disabilities, as well as their friends, families, and caregivers; the software will be disseminated at special-care facilities and will also be available on the Internet via free download. The automated gesture spotting, indexing, matching, and retrieval methods developed in this project would enable sign-based search of ASL literature, lore, poems, performances, and courses in digital video libraries and on DVDs. Such a capability could have far-reaching implications for improving education, opportunities, and access for deaf individuals. The gestural analysis, matching, and retrieval methods developed in this project should also accelerate linguistic and cross-linguistic research on signed languages and on the gestural components of spoken languages.