Temporal features in the speech signal are essential for normal speech perception in most languages. Nevertheless, the neuroanatomical basis for decoding the temporal elements of speech in the human auditory system remains elusive. The primary goal of the proposed work is to test an influential hypothesis describing how the central auditory system decodes two perceptually relevant ranges of temporal modulation in speech: modulations in the range of 150-300 msec and 20-50 msec. To this end, we will employ novel and powerful methods for probing central auditory function using functional magnetic resonance imaging (fMRI). In the first experiment, fMRI will measure brain responses to speech stimuli that vary in these two temporal modulation ranges to identify neuroanatomical "fingerprints" associated with specific temporal features in speech. A second fMRI experiment will measure brain responses to speech sounds that vary according to rapidly changing spectral features to identify the neuroanatomical structures underlying the discrimination of stop-consonant phonemes. Results will provide important knowledge regarding the structure and function of the human auditory system and will further elucidate the biological bases of speech and language. These data will provide an essential foundation for studying populations with auditory temporal deficits that affect speech and language function, including reading-impaired and elderly individuals.

Public Health Relevance

Properly hearing the "timing" of events in speech is critical for speech understanding, and here we seek to understand how the brain efficiently sorts out timing information in speech. This is an important question because auditory timing deficits have been observed in clinical populations with hearing and language impairments, including elderly and reading-impaired individuals. Understanding how the healthy brain sorts out auditory timing information will help us understand brain deficits in these clinical populations.

Agency: National Institutes of Health (NIH)
Institute: National Institute on Deafness and Other Communication Disorders (NIDCD)
Type: Postdoctoral Individual National Research Service Award (F32)
Project #: 1F32DC010322-01A2
Application #: 7999467
Study Section: Communication Disorders Review Committee (CDRC)
Program Officer: Cyr, Janet
Project Start: 2010-07-01
Project End: 2012-06-30
Budget Start: 2010-07-01
Budget End: 2011-06-30
Support Year: 1
Fiscal Year: 2010
Total Cost: $47,606
Indirect Cost:
Name: Stanford University
Department: Psychiatry
Type: Schools of Medicine
DUNS #: 009214214
City: Stanford
State: CA
Country: United States
Zip Code: 94305
Abrams, Daniel A; Ryali, Srikanth; Chen, Tianwen et al. (2013) Multivariate activation and connectivity patterns discriminate speech intelligibility in Wernicke's, Broca's, and Geschwind's areas. Cereb Cortex 23:1703-14
Ashkenazi, Sarit; Black, Jessica M; Abrams, Daniel A et al. (2013) Neurobiological underpinnings of math and reading learning disabilities. J Learn Disabil 46:549-69
Abrams, Daniel A; Ryali, Srikanth; Chen, Tianwen et al. (2013) Inter-subject synchronization of brain responses during natural music listening. Eur J Neurosci 37:1458-69
Abrams, Daniel A; Lynch, Charles J; Cheng, Katherine M et al. (2013) Underconnectivity between voice-selective cortex and reward circuitry in children with autism. Proc Natl Acad Sci U S A 110:12060-5