The goal of this research program is to develop a theoretically motivated and neurobiologically grounded framework for understanding auditory processing in general, and speech perception in particular, in the context of the cerebral lateralization of auditory perceptual processes. One generalization that has emerged about the cortical basis of speech is that left-hemisphere regions, especially in the temporal and frontal lobes, are differentially better at processing information that changes rapidly in time. Because important aspects of the speech signal are characterized by rapid spectro-temporal changes (e.g., the formant transitions associated with consonant-vowel syllables), it has been proposed that what makes the left hemisphere well suited to the analysis of the speech signal is its sensitivity to temporal signal properties. Two concepts derived from psychophysics and neurophysiology are exploited to develop a physiological account of temporal processing asymmetries: the concept of temporal integration windows and the concept of neuronal oscillations. Temporal integration windows provide time-based, logistical constraints on central nervous system processing. Oscillations have been implicated in recent years in a variety of neurophysiological contexts, including as potential mechanisms for binding sensory information to yield coherent percepts. It is hypothesized that oscillations reflect the quantization of processing into appropriately sized temporal windows. The experiments use high-density electroencephalography (EEG) to characterize the auditory evoked responses elicited by complex sounds, including speech, and are designed to explore the idea that the left and right hemispheres differentially analyze sensory information in the time domain.
The overall hypothesis is that the left and right temporal lobes have temporal integration windows of different sizes (25-35 ms and 150-250 ms, respectively), and that this will be reflected in asymmetric oscillatory responses in the gamma versus theta spectral bands. This "asymmetric sampling in time" model will be investigated in the speech domain using continuous spoken language and in the non-speech domain using ripple stimuli. Continuous speech is the most ecologically natural spoken-language stimulus. By comparison, ripples are the auditory analogue of visual gratings and provide well-characterized, dynamic, broadband stimuli in which the relevant temporal parameters can be manipulated. The use of these two types of stimuli permits us to test whether the observed rhythmic activity is conditioned in significant ways by stimulus properties or occurs independently of stimulus-related acoustic variation.
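The correspondence between the hypothesized window sizes and the gamma and theta bands can be illustrated with a short sketch; the one-cycle-per-window mapping used here is an illustrative assumption, not a claim taken from the proposal:

```python
# Illustrative sketch: if one temporal integration window corresponds to one
# oscillatory cycle, a window of length T maps onto a frequency f = 1 / T.
# (The one-to-one window/cycle mapping is assumed here for illustration.)

def window_to_freq_hz(window_ms: float) -> float:
    """Frequency (Hz) of an oscillation whose period equals the window (ms)."""
    return 1000.0 / window_ms

# Left hemisphere, 25-35 ms windows -> roughly 29-40 Hz (gamma band)
print(window_to_freq_hz(35), window_to_freq_hz(25))    # ~28.6 and 40.0 Hz
# Right hemisphere, 150-250 ms windows -> roughly 4-7 Hz (theta band)
print(window_to_freq_hz(250), window_to_freq_hz(150))  # 4.0 and ~6.7 Hz
```

Under this reading, the two hypothesized window sizes fall squarely in the gamma and theta ranges, which is the link the model draws between integration windows and band-specific oscillatory responses.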
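For the non-speech condition, a dynamic ripple can be sketched as a broadband sound whose spectral envelope is sinusoidal along the (logarithmic) frequency axis and drifts over time. The formula and parameter names below (ripple density `omega`, velocity `w`, modulation `depth`) follow common usage in the ripple literature and are an assumption here, not details taken from this proposal:

```python
import math

def ripple_envelope(t: float, x: float, omega: float = 1.0,
                    w: float = 4.0, depth: float = 0.9) -> float:
    """Spectro-temporal envelope of a dynamic ripple at time t (s) and
    spectral position x (octaves above the lowest component).

    omega: ripple density in cycles/octave; w: ripple velocity in Hz;
    depth: modulation depth (0-1). The envelope oscillates around 1.0.
    """
    return 1.0 + depth * math.sin(2.0 * math.pi * (w * t + omega * x))

# Sweeping w changes the stimulus's temporal modulation rate while leaving
# its broadband spectral content intact -- the property that lets temporal
# parameters be manipulated independently of spectral variation.
print(ripple_envelope(0.0, 0.0))     # 1.0 (envelope at its mean)
print(ripple_envelope(0.0625, 0.0))  # 1.9 (peak: w*t = 0.25 cycle)
```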

Agency
National Institutes of Health (NIH)
Institute
National Institute on Deafness and Other Communication Disorders (NIDCD)
Type
Small Research Grants (R03)
Project #
1R03DC004638-01
Application #
6211487
Study Section
Special Emphasis Panel (ZDC1-SRB-O (23))
Program Officer
Luethke, Lynn E
Project Start
2000-08-01
Project End
2003-07-31
Budget Start
2000-08-01
Budget End
2001-07-31
Support Year
1
Fiscal Year
2000
Total Cost
$74,000
Indirect Cost
Name
University of Maryland College Park
Department
Miscellaneous
Type
Schools of Arts and Sciences
DUNS #
City
College Park
State
MD
Country
United States
Zip Code
20742
van Wassenhove, Virginie; Grant, Ken W; Poeppel, David (2007) Temporal window of integration in auditory-visual speech perception. Neuropsychologia 45:598-607
Poeppel, David; Guillemin, Andre; Thompson, Jennifer et al. (2004) Auditory lexical decision, categorical perception, and FM direction discrimination differentially engage left and right auditory cortex. Neuropsychologia 42:183-200
Lakshminarayanan, Kala; Ben Shalom, Dorit; van Wassenhove, Virginie et al. (2003) The effect of spectral manipulations on the identification of affective and linguistic prosody. Brain Lang 84:250-63