Movements of the tongue during speech are largely hidden from view, which limits the feedback that speech-language pathologists can provide to patients during speech therapy. Until now, clinicians have relied on verbal cueing and gesturing to indicate the desired tongue placement during sound production, but this type of feedback can be difficult for patients to understand and relate back to their own movements. Vulintus' Opti-Speech system removes these limitations on cueing and biofeedback by providing unprecedented visualization of tongue movements during speech. The system tracks a patient's tongue position accurately in real time using 3D electromagnetic articulography (EMA) sensing, then translates and maps the position data onto an animated avatar that depicts the motions of the head, tongue, and jaw. Customizable targets can be placed in the virtual environment to indicate desired tongue placement, so that when a patient moves their tongue correctly, the targets light up to reinforce proper articulation. Phase I testing for this project showed that patients can readily use the system to guide correct tongue placement for the formation of sounds. In this Phase II project, Vulintus is establishing efficacy, improving efficiency, and producing a clinical prototype of the Opti-Speech system. Opti-Speech has the potential to dramatically improve therapeutic outcomes in speech therapy. In addition, the rich datasets captured by Opti-Speech can be shared among clinicians and researchers to guide the development of more effective speech therapies. We expect that Opti-Speech will rapidly become the state of the art in speech therapy for patients with a broad range of speech disorders.
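The Opti-Speech implementation itself is proprietary and not described here, but the target-feedback concept in the abstract is simple to illustrate: EMA sensors stream 3D tongue coordinates, and a clinician-placed virtual target "lights up" while a sensor is inside it. The sketch below is a minimal, hypothetical rendering of that idea in Python; the spherical target shape, coordinate values, and all names (Target, update_targets) are assumptions, not the actual Opti-Speech API.

```python
"""Minimal sketch of EMA-driven target feedback, under the
assumptions stated above (spherical targets, mm coordinates)."""
from dataclasses import dataclass
import math


@dataclass
class Target:
    # Hypothetical spherical target placed by the clinician
    # in the virtual environment (coordinates in millimeters).
    x: float
    y: float
    z: float
    radius_mm: float
    lit: bool = False


def update_targets(sensor_xyz, targets):
    """Light each target while the tongue sensor is within its radius."""
    for t in targets:
        dist = math.dist(sensor_xyz, (t.x, t.y, t.z))
        t.lit = dist <= t.radius_mm


# Example: one target near the alveolar ridge (values illustrative only).
targets = [Target(x=0.0, y=55.0, z=10.0, radius_mm=5.0)]
update_targets((0.5, 53.0, 9.0), targets)  # one tongue-tip sensor sample
print(targets[0].lit)  # True: the sensor is ~2.3 mm from the target center
```

In a real-time system, a loop like this would run at the EMA sampling rate and drive the avatar's rendering, with the lit state used to give the patient immediate visual reinforcement.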

Public Health Relevance

Vulintus is developing and testing a software system that uses 3D electromagnetic articulography (EMA) to map tongue, lip, and jaw movements to an animated avatar in real time during speech therapy. The interactive virtual environment helps speech-language pathologists guide patients' tongue movements toward customizable targets, providing precise visual feedback to both the therapist and the patient and helping to correct problems that are normally unseen.

Agency
National Institutes of Health (NIH)
Institute
National Institute on Deafness and Other Communication Disorders (NIDCD)
Type
Small Business Innovation Research Grants (SBIR) - Phase II (R44)
Project #
2R44DC013467-02
Application #
9048202
Study Section
Special Emphasis Panel (ZRG1-RPHB-R (12))
Program Officer
Shekim, Lana O
Project Start
2013-06-01
Project End
2017-08-31
Budget Start
2015-09-16
Budget End
2016-08-31
Support Year
2
Fiscal Year
2015
Total Cost
$750,011
Indirect Cost
Name
Vulintus, LLC
Department
Type
DUNS #
963247833
City
Dallas
State
TX
Country
United States
Zip Code
75252
Katz, William F.; Mehta, Sonya (2015). Visual Feedback of Tongue Movement for Novel Speech Sound Learning. Front Hum Neurosci 9:612.