This Small Business Innovation Research (SBIR) Phase II research project will develop a new authoring tool that allows persons proficient in American Sign Language (ASL) to create animated stories and instructional material in ASL. The goals of this authoring tool are: 1) to support the creation of instructional materials that assist Deaf and Hard-of-Hearing (HH) students in the elementary and middle grades in learning to read; 2) to support the creation of animated ASL stories, including the full range of ASL grammar, that can be enjoyed by Deaf and Hard-of-Hearing students; and 3) to provide a tool that older students, at the secondary and university levels, can use to learn about ASL by creating animated ASL passages. The project includes the development and testing of exemplary reading instruction for Deaf students reading at grade levels K-6.

The research will result in improved, computer-based reading instruction for the more than 50,000 K-12 Deaf/HH students in the U.S. whose first language is ASL, as well as for students taking ASL courses. Currently, Deaf children are delayed in developing language skills, to the extent that the average reading level of a Deaf high school graduate is no greater than fourth grade. Because Deaf children have difficulty developing phonemic awareness, and are often isolated from contextual information available to hearing students, teaching reading to Deaf children requires several specialized methods that go far beyond simply translating English text. By providing educators and developers of educational software with products that allow them to develop personalized signing avatar tutors for Deaf children, this project will make possible instruction that is available 'anytime, anywhere' to assist Deaf children in developing literacy skills.

Project Report

It is well known that, for Deaf children, learning to read English can be extremely challenging. The great majority of these children grow up in homes in which no family member achieves competency in sign language, with the result that their language development is critically delayed. The average high school graduate from a school for the Deaf achieves only about a fourth-grade reading level. Since 1997, Vcom3D has been developing tools and run-time software that allow persons with no previous training in animation to rapidly create animated American Sign Language (ASL). We developed a library of over 4,000 signs and 66 facial expressions, and commercialized the technology in the form of both authoring tools and end-user applications. The most successful of these, SigningAvatar Friends, was licensed to Topics Entertainment, which sold over 120,000 copies of version 1.0. Although the earlier SigningAvatar technology could translate English into Signed English (SE) and ASL, prior to the current project its facial animation was limited to a few emotional expressions, lip-sync, and grammatical markers. For this project, our goal has been to expand the expressiveness of the avatars, with an emphasis on the use of the face. This expressiveness increases both engagement and understanding. Highlights of the intellectual merit and broader impacts of the project include:

- We demonstrated the ability to create ASL animations that include grammatically correct facial expressions. These animations were found to improve engagement and understanding among Deaf elementary school children.
- We compiled, interpreted, and annotated hundreds of images and videos of skilled educators teaching young Deaf/Hard of Hearing children. From these videos we identified critical uses of gesture and facial expression for effectively communicating stories, and analyzed the frequency of their use.
- We developed a new system for animating the face. This system provides life-like animation of the full range of ASL "mouth morphemes", and includes the ability to blend multiple expressions while varying the intensity of each component.
- We adapted Paul Ekman's Facial Action Coding System (FACS) (Ekman and Friesen, 1976) to characterize facial motions of different types, including speech, mouth morphemes, emotion, and grammatical uses, and applied it to encode more than 150 facial expressions identified in images and video recordings.
- We used the enhanced facial animation technology, and our understanding of the uses of facial movement, to develop or enhance several commercial products, including a mobile speech translator, a mobile English-to-ASL translator, and the Vcommunicator Studio authoring tool.
- Most significantly, we developed a popular mobile app called Sign 4 Me, which translates whole English sentences into ASL. For several weeks after its introduction, Sign 4 Me was the second highest-ranked iTunes reference app by sales volume. As of February 2013, 13,655 copies of Sign 4 Me had been downloaded from iTunes.
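The blending of FACS-coded expressions with per-component intensities can be illustrated with a minimal sketch. The action-unit numbers follow Ekman and Friesen's FACS, but the specific expression definitions and the clamped-sum blend rule below are hypothetical examples for illustration, not Vcom3D's actual encoding.

```python
def blend_expressions(components):
    """Blend (expression, weight) pairs into a single facial pose.

    Each expression maps FACS action units (AUs) to intensities in
    [0, 1]. Weighted intensities are summed per AU and clamped to 1.0.
    """
    pose = {}
    for expression, weight in components:
        for au, intensity in expression.items():
            pose[au] = min(1.0, pose.get(au, 0.0) + weight * intensity)
    return pose

# Hypothetical component expressions: raised brows (AU1 + AU2) for a
# yes/no question marker, and a lip-press mouth morpheme (AU24).
question_brows = {"AU1": 0.8, "AU2": 0.7}
lip_press = {"AU24": 0.9}

pose = blend_expressions([(question_brows, 1.0), (lip_press, 0.5)])
print(pose)  # {'AU1': 0.8, 'AU2': 0.7, 'AU24': 0.45}
```

Keeping each expression as an independent AU-to-intensity map lets the animator dial any component up or down without re-authoring the others, which matches the varying-intensity blending described above.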

Project Start:
Project End:
Budget Start: 2008-12-15
Budget End: 2012-09-30
Support Year:
Fiscal Year: 2008
Total Cost: $1,279,584
Indirect Cost:
Name: Vcom3D, Inc.
Department:
Type:
DUNS #:
City: Orlando
State: FL
Country: United States
Zip Code: 32817