For the past 60 years, the peripheral auditory system has been modeled as a bank of bandpass filters, analogous to the graphic equalizer on a high-end stereo system. Some filters respond only to low frequencies, whereas others respond only to high frequencies. In effect, the output of the filterbank represents a rough analysis of the spectral content of the input, which is critical information for discriminating and identifying sounds. However, a number of key experiments in recent years have shown that a single-filterbank model is too simplistic. Acoustic information exists in different forms that require different processing mechanisms. A more sophisticated view of the auditory system, and one that holds much promise, is that of multiple parallel processes. The envelope of the sound waveform, for instance, carries much useful information and requires a temporal analysis rather than a spectral one. This research introduces the theory that temporal and spectral processes have distinct filtering mechanisms that operate in parallel. Some psychophysical data support this idea, but a more thorough investigation is needed to lay the groundwork for a theory complete enough to have practical applications. For example, a full understanding of the filtering properties of the auditory system is crucial for improving the design of cochlear implants and hearing aids.
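The filterbank idea above can be sketched in a few lines. This is a minimal illustration, assuming a NumPy/SciPy environment; the sampling rate, log-spaced channel centers, and Butterworth filter shapes are arbitrary choices for the example, not a physiological model.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

fs = 16000  # sampling rate in Hz (arbitrary for this sketch)

def make_filterbank(center_freqs, fs, q=4.0):
    """One Butterworth bandpass filter per center frequency (constant-Q)."""
    banks = []
    for fc in center_freqs:
        bw = fc / q                          # constant-Q bandwidth
        lo, hi = fc - bw / 2, fc + bw / 2
        banks.append(butter(2, [lo, hi], btype="bandpass", fs=fs, output="sos"))
    return banks

centers = np.geomspace(200, 4000, 8)         # 8 log-spaced channels
bank = make_filterbank(centers, fs)

# A 1 kHz tone should excite mainly the channel tuned near 1 kHz,
# giving the rough spectral analysis described in the text.
t = np.arange(int(0.1 * fs)) / fs
tone = np.sin(2 * np.pi * 1000 * t)
energies = [np.mean(sosfiltfilt(sos, tone) ** 2) for sos in bank]
best = centers[int(np.argmax(energies))]     # center frequency of the winning channel
```

The per-channel energies play the role of the filterbank output: a crude spectral profile of the input sound.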

Project Report

Bruce G. Berg - Principal Investigator
Department of Cognitive Sciences, University of California, Irvine

The term "auditory periphery" is an abstract theoretical construct that refers to the initial stages of any auditory process. It is the point at which acoustic information is transformed into what can be called a neural code. For example, the firing rates of neurons in the auditory nerve increase as the intensity of a sound increases, so it can be said that firing rate is a neural code that conveys loudness information. A greater understanding of the neural code is important for the development of theories of hearing. More important, a sophisticated understanding of the auditory periphery is bound to have a significant impact on the refinement of devices such as hearing aids and cochlear implants. Cochlear implants stimulate the auditory nerve through a pattern of electronic pulses delivered at different locations along the length of the implant, which is inserted into the cochlear duct. The temporal-spatial patterns of pulses are controlled by signal-processing algorithms built into an external processor. This is the point at which theoretical psychoacoustics can contribute the most to the collective enterprise of improving the effectiveness of cochlear implants: a more complete understanding of the peripheral neural code will provide a target to be matched by an "electronic neural code". In theory, the information conveyed by a sound can be coded in different ways. One is the so-called "place-equals-frequency" code, in reference to the fact that the locations of maximum displacement along the length of the basilar membrane correspond to the vibration frequencies of a sound. Neurons at specific locations are "tuned" to specific frequencies, and the end result is a "place code", also referred to as a "spectral code", that conveys information about the spectral content of a complex sound.
Acoustic information can also be coded by the time pattern of neural firings. For frequencies critical to speech perception, neural discharges generally occur only with upward motions of the basilar membrane, whereas the neural response is relatively quiet during downward motions. A complex pattern of on-off cycles in neural firing is thus produced that is synchronized to the sound. This "temporal code" is the primary focus of the project, and the key contribution is the development, testing, and refinement of an algorithm that quantifies the acoustic information carried by the time pattern, or cadence, of neural discharges. In theory, two sounds that can be discriminated should produce different discharge cadences, and quantifying the information carried by the cadence is an important step in testing this assertion. A "systems-level" approach is taken in which simulated data from a model that bases its decisions on the cadence algorithm are compared with the data of listeners in several different listening tasks. Thus far, the model accounts for data from several historically important experiments as well as from new experiments designed to test specific assumptions of the theory. One question of interest is how many neurons are required to transmit the essential information. This is a difficult problem with no current solution, but it can be approached to some extent by estimating the effective bandwidth of the underlying auditory process. Our laboratory uses three different experimental paradigms to measure the bandwidth of temporal processing, and all three yield estimates that are wider than conventional measurements associated with the spectral processing system (i.e., critical bands). Other laboratories have reported similar findings. Collectively, these results suggest that the peripheral stages of spectral and temporal neural processing are different.
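The half-wave firing pattern described above can be illustrated with a short simulation. This is a toy sketch, assuming NumPy; real auditory-nerve responses are stochastic and far more complex, but even this caricature shows how the cadence of "discharges" carries the stimulus frequency.

```python
import numpy as np

# Toy model of the temporal code: "firing" occurs only during upward
# (positive) motion of the basilar membrane, approximated here as
# half-wave rectification of the stimulus waveform.
fs = 16000                                # sampling rate, Hz (arbitrary)
f0 = 250                                  # tone frequency, Hz
t = np.arange(int(0.05 * fs)) / fs
x = np.sin(2 * np.pi * f0 * t)

firing = np.where(x > 1e-9, x, 0.0)       # response to upward motion only

# The on-off cadence is synchronized to the sound: the intervals
# between burst onsets recover the period of the tone.
onsets = np.flatnonzero((firing[:-1] == 0) & (firing[1:] > 0))
estimated_f0 = fs / np.diff(onsets).mean()
```

Here two tones of different frequency would produce different onset cadences, which is the sense in which discriminable sounds should yield different discharge patterns.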
The existence of distinct peripheral processes for the two systems is a new idea that is inconsistent with textbook explanations, but one that is in agreement with a growing body of data. This trend calls for a broad reordering of current theory and forecasts a new era in our understanding of the auditory neural code.

Agency
National Science Foundation (NSF)
Institute
Division of Behavioral and Cognitive Sciences (BCS)
Application #
0746403
Program Officer
Betty H. Tuller
Project Start
Project End
Budget Start
2008-04-15
Budget End
2011-09-30
Support Year
Fiscal Year
2007
Total Cost
$355,086
Indirect Cost
Name
University of California Irvine
Department
Type
DUNS #
City
Irvine
State
CA
Country
United States
Zip Code
92697