This application requests support for a program of basic and clinical research on speech perception and spoken word recognition. The primary objective is to understand how spoken words are recognized and how acoustic-phonetic and indexical information in the speech signal interacts with other knowledge sources to support robust spoken language understanding. The proposed research will combine behavioral studies of speech perception and spoken word recognition with computational analyses of the sound patterns of word-forms in the mental lexicon, examining the global organization and connectivity patterns of spoken words.
The project has four specific aims: (1) lexical knowledge and organization; (2) perceptual learning and adaptation; (3) speech perception under adverse listening conditions; and (4) individual differences in working memory dynamics (capacity and speed) in hearing-impaired listeners with cochlear implants (CIs). The findings will provide a much stronger conceptual and theoretical basis for explaining the core factors responsible for the variability and individual differences observed in speech and language processing in normal-hearing, typically developing listeners. The results will also have direct clinical implications for understanding individual differences in speech and language outcomes in hearing-impaired children and adults who use CIs.
The objective of this research project is to understand how spoken words are recognized and how acoustic-phonetic and indexical information encoded in the speech signal interacts with other knowledge sources to support robust spoken language processing. The proposed research will involve behavioral studies of speech perception and spoken word recognition as well as computational analyses of the sound patterns of word-forms in the mental lexicon. The results will have direct clinical implications for understanding and explaining the enormous individual differences in speech and language outcomes in hearing-impaired children and adults who use CIs, especially deaf children who may be at high risk for poor outcomes following implantation.
Deocampo, Joanne A; Smith, Gretchen N L; Kronenberger, William G et al. (2018) The Role of Statistical Learning in Understanding and Treating Spoken Language Outcomes in Deaf Children With Cochlear Implants. Lang Speech Hear Serv Sch 49:723-739
Pisoni, David B; Broadstock, Arthur; Wucinich, Taylor et al. (2018) Verbal Learning and Memory After Cochlear Implantation in Postlingually Deaf Adults: Some New Findings with the CVLT-II. Ear Hear 39:720-745
Kronenberger, William G; Henning, Shirley C; Ditmars, Allison M et al. (2018) Verbal learning and memory in prelingually deaf children with cochlear implants. Int J Audiol 57:746-754
Moberly, Aaron C; Harris, Michael S; Boyce, Lauren et al. (2018) Relating quality of life to outcomes and predictors in adult cochlear implant users: Are we measuring the right things? Laryngoscope 128:959-966
Kramer, Scott; Vasil, Kara J; Adunka, Oliver F et al. (2018) Cognitive Functions in Adult Cochlear Implant Users, Cochlear Implant Candidates, and Normal-Hearing Listeners. Laryngoscope Investig Otolaryngol 3:304-310
Casserly, Elizabeth D; Wang, Yeling; Celestin, Nicholas et al. (2018) Supra-Segmental Changes in Speech Production as a Result of Spectral Feedback Degradation: Comparison with Lombard Speech. Lang Speech 61:227-245
Castellanos, Irina; Kronenberger, William G; Pisoni, David B (2018) Psychosocial Outcomes in Long-Term Cochlear Implant Users. Ear Hear 39:527-539
Kronenberger, William G; Castellanos, Irina; Pisoni, David B (2018) Questionnaire-based assessment of executive functioning: Case studies. Appl Neuropsychol Child 7:82-92
Moberly, Aaron C; Castellanos, Irina; Vasil, Kara J et al. (2018) "Product" Versus "Process" Measures in Assessing Speech Recognition Outcomes in Adults With Cochlear Implants. Otol Neurotol 39:e195-e202
Hunter, Cynthia R; Pisoni, David B (2018) Extrinsic Cognitive Load Impairs Spoken Word Recognition in High- and Low-Predictability Sentences. Ear Hear 39:378-389
Showing the most recent 10 out of 91 publications