Movements of the tongue during speech are largely hidden from view, which limits the feedback that speech-language pathologists can provide to patients during speech therapy. Until now, clinicians have relied on verbal cueing and gesturing to indicate the desired tongue placement during sound production, but such feedback can be difficult for patients to understand and relate to their own movements. Vulintus' Opti-Speech system overcomes these limitations on cueing and biofeedback by providing unprecedented visualization of tongue movements during speech. The system accurately tracks patients' tongue positions in real time using 3D electromagnetic articulography (EMA) and maps the resulting position data onto an animated avatar that depicts the motions of the head, tongue, and jaw. Customizable targets can be placed in the virtual environment to indicate desired tongue placement, so that when a patient moves their tongue correctly, the targets light up to reinforce correct articulation. Phase I testing showed that patients can readily use the system to guide correct tongue placement for the formation of sounds. In this Phase II project, Vulintus is establishing efficacy, improving efficiency, and producing a clinical prototype of the Opti-Speech system. Opti-Speech has the potential to dramatically improve therapeutic outcomes in speech therapy. In addition, the rich datasets captured by Opti-Speech can be shared among clinicians and researchers to guide the development of more effective speech therapies. We expect that Opti-Speech will rapidly become the state of the art in speech therapy for patients with a broad range of speech disorders.
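To make the target-feedback concept concrete, the following is a minimal sketch of the kind of hit-detection logic such a system might use; it is not Vulintus' implementation. It assumes spherical target regions placed in the avatar's coordinate space and a single tongue-tip EMA sensor reporting (x, y, z) positions each frame; all names, coordinates, and thresholds below are illustrative.

```python
import math
from dataclasses import dataclass

@dataclass
class Target:
    """A spherical placement target in the virtual environment (hypothetical)."""
    name: str
    center: tuple   # (x, y, z) in mm, avatar coordinate space
    radius: float   # hit radius in mm

def distance(a, b):
    """Euclidean distance between two 3D points."""
    return math.sqrt(sum((p - q) ** 2 for p, q in zip(a, b)))

def check_targets(sensor_pos, targets):
    """Return the targets 'lit up' by the current tongue-sensor position."""
    return [t for t in targets if distance(sensor_pos, t.center) <= t.radius]

# Example: one target for alveolar placement, then one frame of sensor data.
targets = [Target("alveolar placement", center=(0.0, 52.0, 10.0), radius=4.0)]
frame = (1.5, 50.5, 9.0)  # hypothetical tongue-tip reading for one EMA frame (mm)
for hit in check_targets(frame, targets):
    print(f"Target hit: {hit.name}")  # here the avatar would light the target
```

In a real-time system, a check like this would run on every incoming EMA frame (EMA systems typically sample at hundreds of Hz), with the hit result driving the visual highlight on the avatar.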
Vulintus is developing and testing a software system that uses 3D electromagnetic articulography (EMA) to map tongue, lip, and jaw movements onto an animated avatar in real time during speech therapy. The interactive virtual environment helps speech-language pathologists guide patients' tongue movements toward customizable targets and provides precise visual feedback to both the therapist and the patient, helping to correct normally unseen problems.
Katz, William F; Mehta, Sonya (2015) Visual Feedback of Tongue Movement for Novel Speech Sound Learning. Front Hum Neurosci 9:612