There is currently no functional model of speech perception in dysarthria that captures the critical interface between speech signal characteristics and the cognitive-perceptual processes the listener brings to bear on that signal. Yet such a model is necessary, not only to explain intelligibility deficits, but also to guide and justify treatment decisions in clinical practice. Two series of experiments will be undertaken that focus on the signal-listener interface for lexical segmentation, the perceptual task of parsing the continuous acoustic stream into discrete words. The first series will focus on the nature of the intelligibility deficit by examining speech perception errors across different forms and severity levels of dysarthria. This will define and establish the relationships among segmental and suprasegmental deficit patterns, dysarthria severity levels, and the perceptual consequences of each. The second series will focus on the sources of intelligibility gains by directly manipulating listener constraints in a training paradigm. In both series of experiments, predictions derived from two accounts of lexical segmentation will be tested: the Metrical Segmentation Strategy Hypothesis (MSS; Cutler & Norris, 1988; Cutler & Butterfield, 1992) and the Hierarchical Model of Speech Segmentation (HMSS; Mattys, S.L., BBSRC project, 2003-2006). Lexical boundary error and segmental analyses will be conducted on listeners' transcriptions of phrases produced by speakers with different forms and severities of dysarthria. It is predicted that, for a given pattern of dysarthria (form), there will be evidence of differences in the effectiveness of listeners' cognitive-perceptual strategies that are directly traceable to severity of the speech deficit; conversely, for a given level of severity, there will be evidence of differences directly traceable to dysarthria form. By examining perceptual error patterns elicited in a training paradigm, it will be possible to identify which aspects of the acoustic signal are perceptually salient in a default mode and which features can be elevated in perceptual salience through training. What is learned about differences in the perceptual processing of different forms and severities of dysarthria will be used to develop a model of speech intelligibility deficits in dysarthria and will have direct applicability to management programs in speech rehabilitation.
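
For readers unfamiliar with lexical boundary error (LBE) analysis, the sketch below illustrates the four-way classification commonly used in the MSS literature (insertion or deletion of a word boundary before a strong or weak syllable). This is not the project's actual analysis code; the data structures, function names, and the example phrase are hypothetical, and it assumes target and response syllables have already been aligned one-to-one.

```python
# Illustrative sketch of lexical boundary error (LBE) classification.
# Hypothetical code; assumes syllables of the target and the listener's
# response are already aligned one-to-one (alignment is a separate step).

from dataclasses import dataclass
from typing import Optional

@dataclass
class Syllable:
    text: str
    strong: bool          # True = strong syllable (full vowel), False = weak (reduced)
    word_initial: bool    # True if a lexical boundary precedes this syllable

def classify_lbe(target: Syllable, response: Syllable) -> Optional[str]:
    """Return the LBE type for one aligned syllable pair, or None if no error.

    IS/IW = boundary inserted before a strong/weak syllable;
    DS/DW = boundary deleted before a strong/weak syllable.
    """
    if target.word_initial == response.word_initial:
        return None                       # boundary preserved: no error here
    if response.word_initial:             # listener inserted a boundary
        return "IS" if target.strong else "IW"
    else:                                 # listener deleted a boundary
        return "DS" if target.strong else "DW"

# Hypothetical example: target "behind a fence" heard as "be hinda fence".
target = [Syllable("be", False, True), Syllable("hind", True, False),
          Syllable("a", False, True), Syllable("fence", True, True)]
response = [Syllable("be", False, True), Syllable("hind", True, True),
            Syllable("a", False, False), Syllable("fence", True, True)]

errors = [e for t, r in zip(target, response) if (e := classify_lbe(t, r))]
print(errors)  # ['IS', 'DW'] -- the error types the MSS predicts to predominate
```

Tallying these error types across listeners and speakers is one way the predicted differences between dysarthria forms and severities could be quantified.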

Agency: National Institutes of Health (NIH)
Institute: National Institute on Deafness and Other Communication Disorders (NIDCD)
Type: Research Project (R01)
Project #: 5R01DC006859-03
Application #: 7082759
Study Section: Motor Function, Speech and Rehabilitation Study Section (MFSR)
Program Officer: Shekim, Lana O
Project Start: 2004-07-01
Project End: 2009-06-30
Budget Start: 2006-07-01
Budget End: 2007-06-30
Support Year: 3
Fiscal Year: 2006
Total Cost: $297,812
Indirect Cost:
Name: Arizona State University-Tempe Campus
Department: Other Health Professions
Type: Schools of Arts and Sciences
DUNS #: 943360412
City: Tempe
State: AZ
Country: United States
Zip Code: 85287
Fletcher, Annalise R; Wisler, Alan A; McAuliffe, Megan J et al. (2017) Predicting Intelligibility Gains in Dysarthria Through Automated Speech Feature Analysis. J Speech Lang Hear Res 60:3058-3068
Fletcher, Annalise R; McAuliffe, Megan J; Lansford, Kaitlin L et al. (2017) Assessing Vowel Centralization in Dysarthria: A Comparison of Methods. J Speech Lang Hear Res 60:341-354
Fletcher, Annalise R; McAuliffe, Megan J; Lansford, Kaitlin L et al. (2017) Predicting Intelligibility Gains in Individuals With Dysarthria From Baseline Speech Features. J Speech Lang Hear Res 60:3043-3057
Dorman, Michael F; Liss, Julie; Wang, Shuai et al. (2016) Experiments on Auditory-Visual Perception of Sentences by Users of Unilateral, Bimodal, and Bilateral Cochlear Implants. J Speech Lang Hear Res 59:1505-1519
Berisha, Visar; Wisler, Alan; Hero, Alfred O et al. (2016) Empirically Estimable Classification Bounds Based on a Nonparametric Divergence Measure. IEEE Trans Signal Process 64:580-591
Lansford, Kaitlin L; Berisha, Visar; Utianski, Rene L (2016) Modeling listener perception of speaker similarity in dysarthria. J Acoust Soc Am 139:EL209
Vitela, A Davi; Monson, Brian B; Lotto, Andrew J (2015) Phoneme categorization relying solely on high-frequency energy. J Acoust Soc Am 137:EL65-70
Carbonell, Kathy M; Lester, Rosemary A; Story, Brad H et al. (2015) Discriminating simulated vocal tremor source using amplitude modulation spectra. J Voice 29:140-7
Utianski, Rene L; Caviness, John N; Liss, Julie M (2015) Cortical characterization of the perception of intelligible and unintelligible speech measured via high-density electroencephalography. Brain Lang 140:49-54
Berisha, Visar; Liss, Julie; Sandoval, Steven et al. (2014) Modeling Pathological Speech Perception From Data With Similarity Labels. Proc IEEE Int Conf Acoust Speech Signal Process 2014:915-919

Showing the most recent 10 out of 32 publications