Language deficits have devastating effects on one's ability to function in society. Designing appropriate interventions depends in part on understanding spoken language processing in healthy adults. Indeed, similarity metrics based on spoken word recognition research have allowed the design of more sensitive tests for hearing and language deficits. In this proposal, four projects examine how the temporal distribution of similarity in spoken words, learning, and top-down knowledge affect spoken word recognition. Time course measures are obtained from eye tracking during visually guided tasks performed under spoken instructions. The eye tracking is complemented by more traditional paradigms, allowing direct comparisons of the measures and providing data for items not amenable to eye tracking. Both natural English words and artificial lexicons serve as stimuli. Real words do not fall into conveniently balanced levels on the dimensions of interest, whereas artificial lexicons allow precise control over phonological similarity and frequency, and therefore over competition neighborhoods. Artificial lexicons also provide a paradigm for studying learning, whether of new words or of changes in the relative frequencies of competitors. The results of the projects are used to refine similarity metrics for spoken words and to develop a computational model of spoken word processing and learning.
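
The "competition neighborhood" referred to above is a standard construct in the spoken word recognition literature, commonly operationalized (as in the Neighborhood Activation Model of Luce and Pisoni) as the set of words that differ from a target by a single phoneme substitution, addition, or deletion, weighted by frequency. The Python sketch below illustrates only that conventional definition; the proposal does not specify its metric, and the toy lexicon, frequencies, and function names are invented for illustration.

def one_phoneme_edit(a, b):
    """True if phoneme sequences a and b differ by exactly one
    substitution, addition, or deletion."""
    if a == b or abs(len(a) - len(b)) > 1:
        return False
    if len(a) == len(b):
        # Same length: exactly one substitution.
        return sum(x != y for x, y in zip(a, b)) == 1
    shorter, longer = sorted((a, b), key=len)
    # Lengths differ by one: deleting some phoneme of the longer
    # sequence must yield the shorter one.
    return any(longer[:i] + longer[i + 1:] == shorter for i in range(len(longer)))

def neighborhood(target, lexicon):
    """Return (neighbor, frequency) pairs for all one-edit neighbors of target."""
    return [(w, f) for w, f in lexicon.items() if one_phoneme_edit(w, target)]

# Toy lexicon: phoneme strings (one character per phoneme here) mapped to
# made-up frequencies; real work would use phonemic transcriptions.
lexicon = {"kat": 120, "kap": 40, "bat": 95, "at": 60, "kamp": 15}
print(neighborhood("kat", lexicon))   # [('kap', 40), ('bat', 95), ('at', 60)]

Under a metric of this kind, denser and higher-frequency neighborhoods predict slower recognition, which is the sort of prediction the proposal's control over artificial-lexicon neighborhoods is positioned to test.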

Agency: National Institutes of Health (NIH)
Institute: National Institute on Deafness and Other Communication Disorders (NIDCD)
Type: Research Project (R01)
Project #: 5R01DC005765-05
Application #: 7110357
Study Section: Biobehavioral and Behavioral Processes 3 (BBBP)
Program Officer: Cooper, Judith
Project Start: 2002-09-23
Project End: 2009-01-31
Budget Start: 2006-09-01
Budget End: 2009-01-31
Support Year: 5
Fiscal Year: 2006
Total Cost: $216,783
Indirect Cost:
Name: University of Connecticut
Department: Psychology
Type: Schools of Arts and Sciences
DUNS #: 614209054
City: Storrs-Mansfield
State: CT
Country: United States
Zip Code: 06269
Magnuson, James S; Mirman, Daniel; Luthra, Sahil et al. (2018) Interaction in Spoken Word Recognition Models: Feedback Helps. Front Psychol 9:369
Huang, Jingyuan; Holt, Lori L (2012) Listening for the norm: adaptive coding in speech categorization. Front Psychol 3:10
Kukona, Anuenue; Fang, Shin-Yi; Aicher, Karen A et al. (2011) The time course of anticipatory constraint integration. Cognition 119:23-42
Mirman, Daniel; Yee, Eiling; Blumstein, Sheila E et al. (2011) Theories of spoken word recognition deficits in aphasia: evidence from eye-tracking and computational modeling. Brain Lang 117:53-68
Viswanathan, Navin; Magnuson, James S; Fowler, Carol A (2010) Compensation for coarticulation: disentangling auditory and gestural theories of perception of coarticulatory effects in speech. J Exp Psychol Hum Percept Perform 36:1005-15
Viswanathan, Navin; Fowler, Carol A; Magnuson, James S (2009) A critical examination of the spectral contrast account of compensation for coarticulation. Psychon Bull Rev 16:74-9
Mirman, Daniel; Magnuson, James S (2009) Dynamics of activation of semantically similar concepts during spoken word recognition. Mem Cognit 37:1026-39
Mirman, Daniel; Strauss, Ted J; Dixon, James A et al. (2009) Effect of representational distance between meanings on recognition of ambiguous spoken words. Cogn Sci 34:161-73
Mirman, Daniel; Magnuson, James S (2009) The effect of frequency of shared features on judgments of semantic similarity. Psychon Bull Rev 16:671-7
Mirman, Daniel; Magnuson, James S; Estes, Katharine Graf et al. (2008) The link between statistical segmentation and word learning in adults. Cognition 108:271-80

Showing the most recent 10 out of 18 publications