Speech perception poses two difficult problems for listeners. First, the acoustic signal is variable and context dependent, making phoneme identification difficult. Second, the signal unfolds over time, and at early points in a word there may not be sufficient information to identify it.
This research aims to understand how listeners solve both problems, how these problems relate to each other, and to use this understanding to characterize two groups of impaired listeners: listeners with Language Impairment (LI) and listeners who use Cochlear Implants (CIs).

Project 1 asks how listeners compensate for variation due to talker and phonetic context, and how compensation interacts with the unfolding competition between candidate words that listeners momentarily consider during word recognition. It employs event-related potentials to assess whether compensation occurs at the level of auditory encoding or during later categorical processes. It also uses eye-tracking to examine moment-by-moment activation of lexical competitors (how strongly listeners consider multiple words in parallel), asking when acoustic cues and compensation processes impact lexical processing. Finally, it examines CI users, whose difficulty identifying talkers may inhibit their compensation abilities. This work may lead to better processing strategies, device configurations, and therapies.

Project 2 examines how listeners represent the order of information in a word (e.g., how they distinguish anadromes like cat and tack). Most models use the serial order of the phonemes to exclude anadrome competitors. However, recent data indicate that listeners do not completely rule out anadromes, suggesting that order is not explicitly represented. Project 2 uses eye-tracking and the visual world paradigm with known words and small artificial languages to determine whether listeners use fine-grained acoustic detail (differences in how a phoneme is pronounced in syllable-initial and syllable-final positions) as a proxy for order. It also examines listeners with LI, who may have deficits in both fine-grained auditory detail and serial order, and CI users, who lack access to fine-grained spectral detail. This will assess theories of language impairment that emphasize auditory or sequencing deficits as the source of LI. It will also help us understand the variability in outcomes among CI users and further refine our understanding of what acoustic information must be transmitted by the CI.

Project 3 asks how long lexical competitors remain active during word recognition. The prior grant discovered that listeners with LI do not fully suppress lexical competitors during word recognition. Project 3 develops an eye-tracking paradigm to assess how long competitors remain active and what mechanisms maintain this activation, examining inhibition between words, echoic memory, and phonological short-term memory. It examines listeners with LI and CI users to determine the consequences of this heightened competition, how it relates to other language processes, and the locus of the impairment.

Across all three projects, this proposal aims to better characterize the underlying mechanisms of speech perception in normal listeners, with the goal of using this characterization to better understand the unique problems faced by impaired listeners.
Language impairment affects as many as 8% of children, and cochlear implants have become common as a treatment for hearing impairment, yet we still do not understand the nature of language impairment or the reasons for the substantial variability in outcomes among CI users. By examining the basic mechanisms that underlie listeners' ability to quickly and accurately recognize spoken words from a highly variable and time-dependent auditory signal, this project aims to characterize the deficits of both groups in terms of differences in underlying processing. This should lead to better diagnosis and therapies for both groups and better device configuration and processing strategies for CI users.