It is estimated that 3 out of every 1,000 people in the United States are deaf, and 60,000 Americans have cochlear implants (CIs), prosthetic devices that electrically stimulate the auditory nerve to simulate hearing. Previous research has shown that CI users exhibit a wide range of variability in word recognition and other outcome measures of speech perception. Some work has focused on CI users' auditory deficits, but with recent advances in our understanding of the online processing necessary for speech perception and word recognition, we ask whether CI users' online processing abilities contribute to their differences on these tasks. The proposed research seeks to examine speech processing in the face of a degraded signal and to determine how listeners adapt to that signal. Our goal is to characterize the nature of online processing and adaptation in this population, with three specific aims: to determine the factors that account for differences in lexical dynamics; to examine phonetic category structure and response to degradation in CI users via lexical activation; and to examine the nature and duration of competitor activation in CI users.
These aims will help us understand how CI users adapt to degraded signals and how sounds and words compete for activation in the lexicon. To study online processing, we need a real-time measure of lexical activation, including activation of both the stimulus word and its competitors. We will use eye tracking in the visual world paradigm to obtain this measure. As the listener performs a basic word identification task on a computer, his or her eye movements are monitored as an index of how strongly the listener is considering different words as a match to the auditory stimulus at each moment. This yields a real-time measure of which competitors are being considered throughout processing. The task is straightforward and requires no metalinguistic judgments, and as a result it can be easily used with impaired populations such as children and adult CI users. The proposed research has both clinical and theoretical implications. Clinically, we hope that by understanding the specific ways in which CI users' online processing differs from that of normal hearers, and the ways in which CI users differ from one another in their adaptation strategies, we can inform diagnostic criteria and provide data that will improve the processing strategies of the implants themselves. From a theoretical perspective, no model of speech perception has attempted to account for how degraded speech is perceived, and our preliminary results challenge some of the most basic principles of word recognition. The processing of degraded speech has broad implications: every time we hear speech in a noisy room, or talk on a cell phone, we are processing speech that has been degraded in some way. Understanding and modeling this process may thus inform researchers' accounts of how listeners cope with degraded speech. This set of studies will thus expand our account of the cognitive system as a whole.
The research proposed here has direct relevance to a clinical population: deaf individuals, specifically those with a prosthetic cochlear implant (CI). The incidence of deafness in the US is around 3 cases per 1,000 individuals, and there are approximately 60,000 cochlear implant users; the proposed studies attempt to account for the wide variation in outcome measures for this population by examining how CI users process speech and recognize words in real time. The results are expected to inform diagnostic criteria for this population; moreover, they will enhance our understanding of the processing of a degraded signal in general, which bears on many situations confronted by normal hearers on a daily basis, such as hearing speech in a noisy room or talking on a cell phone.
Farris-Trimble, Ashley; McMurray, Bob; Cigrand, Nicole; et al. (2014) The process of spoken word recognition in the face of signal degradation. J Exp Psychol Hum Percept Perform 40:308-27
Farris-Trimble, Ashley; McMurray, Bob (2013) Test-retest reliability of eye tracking in the visual world paradigm for the study of real-time spoken word recognition. J Speech Lang Hear Res 56:1328-45