We propose the development of Optic-speech, a system that uses real-time animation of 3D models to visualize tongue movement during speech and thereby guide speech therapy for dysarthria. Approximately 3 million Americans are affected by dysarthria each year, and the inability to speak clearly can quickly reduce quality of life. Existing therapies are intensive and laborious, and often ineffective. Optic-speech improves upon them by accurately tracking a patient's tongue position in real time and providing visual feedback during speech production to guide therapy. Preliminary tests suggest that it has the potential to dramatically improve therapeutic outcomes. In addition, the quantitative data captured by Optic-speech could be shared among clinicians and researchers to guide the development of more effective speech therapies. We expect Optic-speech to rapidly become the state of the art in speech therapy for patients with severe dysarthria as well as a broad range of other speech disorders.

Public Health Relevance

We are testing the feasibility of using a three-dimensional electromagnetic articulography (EMA) system to monitor the tongue's shape during speech and to provide an interactive virtual environment that assists with speech therapy. The goal is to give speech therapists a real-time view of the patient's tongue and to display targets for correct tongue shapes during speech.
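
The feedback described here can be pictured as a per-frame comparison of tracked tongue-sensor positions against clinician-defined target positions. The Python sketch below is purely illustrative and is not the proposed system's implementation; the sensor names, target coordinates, and distance tolerance are all assumptions made for the example.

    # Minimal sketch (not the funded system's code) of the kind of feedback
    # described above: compare streamed 3D tongue-sensor positions from an EMA
    # device against clinician-defined targets and report which targets are hit.
    import math
    from typing import Dict, Tuple

    Point3D = Tuple[float, float, float]

    # Hypothetical target tongue-sensor positions (mm) for one speech sound.
    TARGETS: Dict[str, Point3D] = {
        "tongue_tip":    (55.0, 0.0, 20.0),
        "tongue_blade":  (40.0, 0.0, 25.0),
        "tongue_dorsum": (20.0, 0.0, 30.0),
    }

    TOLERANCE_MM = 5.0  # assumed radius of the "hit" region around each target

    def evaluate_frame(sensors: Dict[str, Point3D]) -> Dict[str, bool]:
        """Return True for each sensor that is within tolerance of its target.

        In a real-time system this check would run on every incoming EMA frame
        and drive the on-screen highlighting of the target tongue shapes.
        """
        return {
            name: math.dist(pos, TARGETS[name]) <= TOLERANCE_MM
            for name, pos in sensors.items()
            if name in TARGETS
        }

    if __name__ == "__main__":
        # One made-up frame of tracked sensor positions.
        frame = {
            "tongue_tip":    (54.0, 1.0, 19.0),
            "tongue_blade":  (47.0, 0.0, 25.0),
            "tongue_dorsum": (21.0, -1.0, 31.0),
        }
        print(evaluate_frame(frame))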

Agency: National Institutes of Health (NIH)
Institute: National Institute on Deafness and Other Communication Disorders (NIDCD)
Type: Small Business Innovation Research Grants (SBIR) - Phase I (R43)
Project #: 1R43DC013467-01A1
Application #: 8713831
Study Section: Special Emphasis Panel (ZRG1-RPHB-R (12))
Program Officer: Shekim, Lana O
Project Start: 2014-03-04
Project End: 2015-02-28
Budget Start: 2014-03-04
Budget End: 2015-02-28
Support Year: 1
Fiscal Year: 2014
Total Cost: $209,650
Indirect Cost:
Name: Vulintus, LLC
Department:
Type:
DUNS #: 963247833
City: Dallas
State: TX
Country: United States
Zip Code: 75252
Katz, William F; Mehta, Sonya (2015) Visual Feedback of Tongue Movement for Novel Speech Sound Learning. Front Hum Neurosci 9:612