This Small Business Innovation Research (SBIR) Phase II project will complete the development of technology to supplement ordinary face-to-face language interaction for the millions of individuals who are deaf or hard of hearing or who face other speech and language challenges. The goal of the project is to enable these individuals to participate fully in the spoken language community. The need for language and speech intelligibility aids is pervasive: millions of individuals live with language and speech challenges (including 36 million Americans with hearing deficits), and these individuals require additional support for communication and language learning. The Phase I research developed and tested the behavioral science and technology for iGlasses. Building on this research, the proposed work will complete and bring to market an innovative intervention that can bring spoken language and culture into the lives of individuals who are currently marginalized because of hearing loss or other speech and language challenges. The proposed research will advance the state of the art in human-machine interaction, speech science, machine learning, and assistive technologies.

The broader/commercial impact of this project will benefit the deaf and hard-of-hearing populations as well as the scientific community by providing a research and theoretical foundation for a speech aid that would be naturally available to almost all individuals at very low cost. It does not require literate users, because no written information is presented as it would be in a captioning system. It is age-independent: it might be used by toddlers, by adolescents, and throughout the lifespan. It is functional across languages, because all languages share the same phonetic features with highly similar corresponding acoustic characteristics. It would provide significant help to people with hearing aids and cochlear implants, and it would benefit many individuals with language challenges and even children learning to read. Finally, regardless of advances or the lack of advances in speech recognition technology, it will always be more accurate and effective to detect the fundamental acoustic features of speech than to recognize entire phonemes, which are more complex combinations of these basic features.

Project Start:
Project End:
Budget Start: 2010-02-01
Budget End: 2012-01-31
Support Year:
Fiscal Year: 2009
Total Cost: $561,843
Indirect Cost:
Name: Animated Speech Corporation
Department:
Type:
DUNS #:
City: Burlingame
State: CA
Country: United States
Zip Code: 94010