The trained ear of the speech-language pathologist is the gold standard assessment tool for clinical practice in motor speech disorders. However, perceptual judgments are vulnerable to bias, and their relationship with estimates of listener intelligibility, the final arbiter of speech goodness, is indeterminate. Interpretable, objective, and robust outcome measures that provide targets for treatment are urgently needed to provide more precise care and reliably monitor patient progress. Based on theoretical models of speech perception, in our previous grants we have developed a novel set of outcome measures that provide a multidimensional intelligibility profile (MIP) by using custom speech stimuli and a new coding strategy that allows us to capture the types of errors that listeners make when listening to dysarthric speech. This has led to a more complete intelligibility profile that codifies these errors at different levels of granularity, from global to discrete. Simultaneously, we have also developed a computational model for evaluation of dysarthric speech capable of reliably estimating a limited set of intelligibility measures directly from the speech acoustics. To date, both the outcome measures and the objective model have been evaluated on cross-sectional data only. In this renewal application, our principal goal is to evaluate specific hypotheses regarding expected changes in this multidimensional intelligibility profile as a result of different intervention instruction conditions (loud, clear, slow). A secondary goal of the proposal is to further refine our objective model to predict the complete intelligibility profile and to evaluate its ability to detect intelligibility changes within individual speakers. This is critical for clinicians who currently have no objective ways to assess the value of their interventions.
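To make the idea of coding listener errors at different granularities concrete, the sketch below scores a listener transcript against its target sentence, yielding a global measure (word accuracy) alongside discrete error tallies. This is a minimal illustration only; the alignment scheme and error categories here are hypothetical stand-ins, not the proposal's actual MIP coding strategy.

```python
from difflib import SequenceMatcher

def score_transcription(target, transcript):
    """Align a listener transcript to the target sentence and tally
    word-level error types (hypothetical coding scheme)."""
    t, h = target.lower().split(), transcript.lower().split()
    counts = {"correct": 0, "substitution": 0, "omission": 0, "insertion": 0}
    for tag, i1, i2, j1, j2 in SequenceMatcher(None, t, h).get_opcodes():
        if tag == "equal":
            counts["correct"] += i2 - i1
        elif tag == "replace":
            n_t, n_h = i2 - i1, j2 - j1
            counts["substitution"] += min(n_t, n_h)
            if n_t > n_h:                 # target words left uncovered
                counts["omission"] += n_t - n_h
            else:                         # extra words the listener reported
                counts["insertion"] += n_h - n_t
        elif tag == "delete":
            counts["omission"] += i2 - i1
        elif tag == "insert":
            counts["insertion"] += j2 - j1
    # Global measure: proportion of target words transcribed correctly
    accuracy = counts["correct"] / len(t)
    return accuracy, counts
```

For example, scoring the transcript "the boy swam" against the target "the boy ran home" yields 50% word accuracy with one substitution and one omission.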
With the aim of improving the standard of care through technology, the long-term goal of this proposal is to develop stand-alone objective outcome measures for dysarthria that can provide clinicians with reliable treatment targets. Such applications have the potential to dramatically alter the current standard of care in speech pathology for patients with neurological disease or injury. Furthermore, these applications also have the potential to reduce health disparities by partially automating clinical intervention and providing easier access to these services for those in remote areas or in underdeveloped countries.
There is an urgent need in the field of speech-language pathology for objective outcome measures of speech intelligibility that provide clinicians with actionable information regarding treatment targets. This proposal seeks to leverage theoretical advances in speech intelligibility to evaluate the sensitivity of a novel multidimensional intelligibility profile that quantifies the perceptual effects of speech change. The predictive model uses machine-learning algorithms, trained on listener transcriptions of dysarthric speech together with a suite of automated acoustic metrics, to learn the relationship between speech acoustics and listener percepts. Ultimately, this model will allow clinicians to predict the outcomes of an intervention strategy to assess its utility for a patient. This has the potential to dramatically alter the current standard of care in speech pathology for patients with neurological disease or injury.
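The core modeling step described above, learning a mapping from acoustic features to listener-derived intelligibility scores, can be sketched in miniature as a regression problem. The sketch below uses simulated data: the feature columns, weights, and scores are all fabricated for illustration and do not reflect the proposal's actual feature set or model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Rows = speakers; columns = hypothetical acoustic metrics
# (e.g., articulation rate, vowel-space area, F0 variability).
X = rng.normal(size=(40, 3))
true_w = np.array([0.5, -0.3, 0.2])
# Simulated listener intelligibility scores with small perceptual noise.
y = X @ true_w + rng.normal(scale=0.05, size=40)

# Fit a linear map from acoustic features to intelligibility scores.
X1 = np.column_stack([X, np.ones(len(X))])   # add intercept column
w, *_ = np.linalg.lstsq(X1, y, rcond=None)

# Evaluate how well predicted scores track listener percepts.
predicted = X1 @ w
r = np.corrcoef(predicted, y)[0, 1]
```

A linear least-squares fit is used here only for transparency; the proposal's model could equally be any supervised learner, and in practice held-out speakers would be needed to assess generalization rather than the in-sample correlation computed above.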
Fletcher, Annalise R; Wisler, Alan A; McAuliffe, Megan J et al. (2017) Predicting Intelligibility Gains in Dysarthria Through Automated Speech Feature Analysis. J Speech Lang Hear Res 60:3058-3068
Fletcher, Annalise R; McAuliffe, Megan J; Lansford, Kaitlin L et al. (2017) Assessing Vowel Centralization in Dysarthria: A Comparison of Methods. J Speech Lang Hear Res 60:341-354
Fletcher, Annalise R; McAuliffe, Megan J; Lansford, Kaitlin L et al. (2017) Predicting Intelligibility Gains in Individuals With Dysarthria From Baseline Speech Features. J Speech Lang Hear Res 60:3043-3057
Dorman, Michael F; Liss, Julie; Wang, Shuai et al. (2016) Experiments on Auditory-Visual Perception of Sentences by Users of Unilateral, Bimodal, and Bilateral Cochlear Implants. J Speech Lang Hear Res 59:1505-1519
Berisha, Visar; Wisler, Alan; Hero, Alfred O et al. (2016) Empirically Estimable Classification Bounds Based on a Nonparametric Divergence Measure. IEEE Trans Signal Process 64:580-591
Lansford, Kaitlin L; Berisha, Visar; Utianski, Rene L (2016) Modeling listener perception of speaker similarity in dysarthria. J Acoust Soc Am 139:EL209
Vitela, A Davi; Monson, Brian B; Lotto, Andrew J (2015) Phoneme categorization relying solely on high-frequency energy. J Acoust Soc Am 137:EL65-70
Carbonell, Kathy M; Lester, Rosemary A; Story, Brad H et al. (2015) Discriminating simulated vocal tremor source using amplitude modulation spectra. J Voice 29:140-7
Utianski, Rene L; Caviness, John N; Liss, Julie M (2015) Cortical characterization of the perception of intelligible and unintelligible speech measured via high-density electroencephalography. Brain Lang 140:49-54
Berisha, Visar; Liss, Julie; Sandoval, Steven et al. (2014) Modeling Pathological Speech Perception From Data With Similarity Labels. Proc IEEE Int Conf Acoust Speech Signal Process 2014:915-919
Showing the most recent 10 out of 32 publications