Speech is a complex, intricately timed task that requires the coordination of numerous muscle groups and physiological systems. While most children acquire speech with relative ease, it is one of the most complex patterned movements accomplished by humans and is thus susceptible to impairment. Approximately 2% of Americans have imprecise speech, either because of mislearning during development (articulation disorder) or as a result of neuromotor conditions such as stroke, brain injury, Parkinson's disease, and cerebral palsy. An equally sizeable group of Americans have difficulty with English pronunciation because it is their second language. Both groups would benefit from tools that provide explicit feedback on the clarity of their speech production. Traditional speech remediation relies on watching a trained clinician's accurate articulation and on repeated practice with visual feedback via a mirror. While these interventions are effective for readily viewable speech sounds (visemes such as /b/, /p/, and /m/), they are largely unsuccessful for sounds produced inside the mouth. The tongue is the primary articulator for these obstructed sounds, and its movements are difficult to capture, so clinicians resort to diagrams and other low-tech means (such as placing edible substances on the palate or physically manipulating the oral articulators) to show clients where to place the tongue. Sophisticated research tools exist for measuring and tracking tongue movements during speech, but they are prohibitively expensive, obtrusive, and impractical for clinical or home use. The PIs' goal in this exploratory project, a collaboration across two institutions, is to lay the groundwork for a Lingual-Kinematic and Acoustic sensor technology (LinKa) that is lightweight, low-cost, wireless, and easy to deploy both clinically and at home for speech remediation.

PI Ghovanloo's lab has developed a low-cost, wireless, wearable magnetic sensing system known as the Tongue Drive System (TDS). An array of electromagnetic sensors embedded in a headset detects the position of a small magnet adhered to the tongue. Clinical trials have demonstrated the feasibility of using the TDS for computer access and wheelchair control by sensing tongue movements at up to six discrete locations within the oral cavity. This research will combine the sensing capabilities of the TDS, PI Patel's expertise in spoken-interaction technologies for individuals with speech impairment, and Co-PI Fu's work on machine learning and multimodal data fusion to develop a prototype of a clinically viable tool that enhances speech clarity by coupling lingual-kinematic and acoustic data. To this end, the team will extend the TDS to track tongue movements during running speech. Such movements are quick, compacted within a small area of the oral cavity, and often overlap across several phonemes, so the challenge will be to accurately classify movements for different sound classes. To complement this effort, pattern recognition of the sensors' spatiotemporal dynamics will be embedded into an interactive game that offers a motivating, personalized context for speech motor (re)learning through audiovisual biofeedback, which is critical for speech modification. To benchmark the feasibility of the approach, the system will be evaluated with six individuals with neuromotor speech impairment and six healthy age-matched controls.
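As a rough illustration of the magnetic-sensing principle behind a system like the TDS, the sketch below recovers a magnetic tracer's position from an array of field measurements by fitting a point-dipole model with nonlinear least squares. The sensor layout, units, and solver settings here are assumptions made for illustration only and do not reflect the actual TDS hardware or algorithms.

```python
# Hypothetical sketch: localize a magnetic tracer from magnetometer
# readings via a point-dipole model. Geometry and values are illustrative.
import numpy as np
from scipy.optimize import least_squares

MU0 = 4e-7 * np.pi  # vacuum permeability (T*m/A)

def dipole_field(pos, moment, sensor_xyz):
    """Point-dipole field B at each sensor for a magnet at `pos` (m)
    with moment vector `moment` (A*m^2)."""
    r = sensor_xyz - pos                      # (N, 3) displacements
    dist = np.linalg.norm(r, axis=1, keepdims=True)
    r_hat = r / dist
    m_dot_r = r_hat @ moment                  # (N,) projections
    # B = mu0 / (4*pi*d^3) * (3*(m . r_hat)*r_hat - m)
    return MU0 / (4 * np.pi * dist**3) * (3 * m_dot_r[:, None] * r_hat - moment)

def residuals(params, sensor_xyz, measured_b):
    pos, moment = params[:3], params[3:]
    return (dipole_field(pos, moment, sensor_xyz) - measured_b).ravel()

# Four sensors at the corners of a hypothetical headset, in meters.
sensors = np.array([[-0.05, 0.02, 0.03], [0.05, 0.02, 0.03],
                    [-0.05, -0.02, 0.03], [0.05, -0.02, 0.03]])

# Simulate a measurement from a known magnet pose, then recover it.
true_pos, true_m = np.array([0.01, 0.0, -0.01]), np.array([0.0, 0.0, 0.02])
b_measured = dipole_field(true_pos, true_m, sensors)

fit = least_squares(residuals,
                    x0=np.array([0.0, 0.0, -0.005, 0.0, 0.0, 0.01]),
                    args=(sensors, b_measured))
print("estimated position (m):", fit.x[:3])
```

With 4 sensors x 3 axes, the 12 field measurements over-determine the 6 pose parameters, which is what makes a least-squares fit of this kind well posed.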

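Similarly, a minimal sketch of the kind of lingual-kinematic/acoustic coupling the abstract describes: per-frame magnetic-sensor features and acoustic features are concatenated (early fusion) and fed to an off-the-shelf classifier. The feature dimensions, number of sound classes, and classifier choice are hypothetical, and synthetic data stands in for real recordings.

```python
# Hypothetical sketch: early fusion of kinematic and acoustic features
# for sound-class recognition. All dimensions and labels are stand-ins.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_frames = 300

kinematic = rng.normal(size=(n_frames, 12))  # e.g., 4 sensors x 3 axes
acoustic = rng.normal(size=(n_frames, 13))   # e.g., 13 MFCC coefficients
labels = rng.integers(0, 4, size=n_frames)   # e.g., 4 sound classes

# Early fusion: concatenate the two modalities into one feature vector.
fused = np.hstack([kinematic, acoustic])

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
scores = cross_val_score(clf, fused, labels, cv=5)
print("cross-validated accuracy:", scores.mean())
```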
Agency: National Science Foundation (NSF)
Institute: Division of Information and Intelligent Systems (IIS)
Type: Standard Grant (Standard)
Application #: 1449211
Program Officer: Ephraim Glinert
Project Start:
Project End:
Budget Start: 2014-09-01
Budget End: 2018-08-31
Support Year:
Fiscal Year: 2014
Total Cost: $150,000
Indirect Cost:
Name: Georgia Tech Research Corporation
Department:
Type:
DUNS #:
City: Atlanta
State: GA
Country: United States
Zip Code: 30332