Speech perception is the process of mapping continuous detail in the acoustic signal onto discrete units of meaning such as words. Given the variability in the signal and the speed at which it arrives, the system must cope with a great deal of variation in a small amount of time. The long-term objective of this research is to understand how this process works. This proposal tests an implication, for current models of spoken word recognition, of existing work showing that lexical activation is sensitive to continuous detail in the signal: that online lexical-activation processes (which are fast and work in parallel to build stable representations) can actively integrate continuous detail over time to anticipate upcoming material, resolve ambiguity in the past, and organize perceptual processes. This will be tested in four series of behavioral experiments based on the visual world paradigm, in which subjects hear carefully controlled spoken language and manipulate objects in a visual environment while their eye movements are monitored. The probability of fixating each object yields a moment-by-moment estimate of the activation of the corresponding word (how strongly the system is considering that word) as the speech unfolds over time. The first two projects examine these temporal integration processes in unimpaired listeners, in the perception of phonologically modified speech and in compensation for speaking rate. In each we will show that the system can actively anticipate upcoming material and resolve ambiguous material in the past, and that these processes are modulated by lexical factors. The third project explicitly tests this framework by examining situations in which continuous detail could facilitate ambiguity resolution, but only if it can be retained longer than short-term echoic memory stores are known to operate; this would suggest that lexical processes play a unique role in this maintenance.
The fourth project applies this framework to language impairments, testing the hypothesis that perceptual deficits associated with specific language impairment (SLI) originate in lexical, not perceptual, processes. Ultimately, this project will contribute to basic knowledge of speech perception and its relationship to language disorders. Because perceptual and lexical abilities typically develop before higher-level language, diagnostics and therapies based on them may be applied earlier (and, as a result, more successfully) than other techniques. Thus, the basic knowledge acquired here may contribute to earlier detection and treatment of SLI.

National Institutes of Health (NIH)
National Institute on Deafness and Other Communication Disorders (NIDCD)
Research Project (R01)
Study Section: Language and Communication Study Section (LCOM)
Program Officer: Cooper, Judith
University of Iowa, Schools of Arts and Sciences
Iowa City, United States
Smith, Nicholas A; McMurray, Bob (2018) Temporal Responsiveness in Mother-Child Dialogue: A Longitudinal Analysis of Children with Normal Hearing and Hearing Loss. Infancy 23:410-431
McMurray, Bob; Ellis, Tyler P; Apfelbaum, Keith S (2018) How Do You Deal With Uncertainty? Cochlear Implant Users Differ in the Dynamics of Lexical Processing of Noncanonical Inputs. Ear Hear :
McMurray, Bob; Danelz, Ani; Rigler, Hannah et al. (2018) Speech categorization develops slowly through adolescence. Dev Psychol 54:1472-1491
Roembke, Tanja C; Wiggs, Kelsey K; McMurray, Bob (2018) Symbolic flexibility during unsupervised word learning in children and adults. J Exp Child Psychol 175:17-36
Kapnoula, Efthymia C; Winn, Matthew B; Kong, Eun Jong et al. (2017) Evaluating the sources and functions of gradiency in phoneme categorization: An individual differences approach. J Exp Psychol Hum Percept Perform 43:1594-1611
Samuelson, Larissa K; McMurray, Bob (2017) What does it take to learn a word? Wiley Interdiscip Rev Cogn Sci 8:
McMurray, Bob; Farris-Trimble, Ashley; Rigler, Hannah (2017) Waiting for lexical access: Cochlear implants or severely degraded input lead listeners to process speech less incrementally. Cognition 169:147-164
Apfelbaum, Keith S; McMurray, Bob (2017) Learning During Processing: Word Learning Doesn't Wait for Word Recognition to Finish. Cogn Sci 41 Suppl 4:706-747
Oleson, Jacob J; Cavanaugh, Joseph E; McMurray, Bob et al. (2017) Detecting time-specific differences between temporal nonlinear curves: Analyzing data from the visual world paradigm. Stat Methods Med Res 26:2708-2725
McMurray, Bob (2016) Nature, Nurture or Interacting Developmental Systems? Endophenotypes for learning systems bridge genes, language and development. Lang Cogn Neurosci 31:1093-1097

Showing the most recent 10 out of 51 publications