In the everyday world, listeners and sound sources move, which presents a challenge for sound source localization. The auditory spatial cues (e.g., interaural differences) used to locate sound sources change with the relative positions of the sound source and the listener. For instance, when a listener moves, the auditory spatial cues change as if the sound source had moved, even when it was stationary. Yet in the everyday world a stationary sound source is not perceived as moving when the listener moves. How does the auditory system determine that a stationary sound source has not moved when listener movement has changed the auditory spatial cues? This is the basic challenge this proposal addresses. A hypothesis, motivated by research in vision, is that sound source localization depends on the interaction of two sets of cues: auditory spatial cues and cues that indicate the position of the head (or body, or perhaps the eyes). Both sets of cues are required for the brain to determine the location of sound sources in the everyday world.
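To make the hypothesized computation concrete, the following is a minimal sketch, assuming a purely additive combination in the horizontal plane: a head-relative azimuth, decoded from an interaural time difference (approximated here with the Woodworth spherical-head formula), is summed with a head-orientation signal to yield a world-centered source azimuth. The function names, parameter values, and the additive rule itself are illustrative assumptions, not the proposal's model.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s
HEAD_RADIUS = 0.0875    # m, a typical adult head radius (assumed value)

def itd_woodworth(azimuth_deg):
    """Interaural time difference (seconds) for a source at the given
    head-relative azimuth, using the Woodworth spherical-head model
    (valid for azimuths within +/-90 degrees)."""
    theta = math.radians(azimuth_deg)
    return (HEAD_RADIUS / SPEED_OF_SOUND) * (math.sin(theta) + theta)

def world_azimuth(head_relative_deg, head_orientation_deg):
    """Hypothesized combination: the world-centered source azimuth is the
    head-relative azimuth (decoded from auditory spatial cues) plus the
    head's orientation (from head-position cues)."""
    return head_relative_deg + head_orientation_deg

# A stationary loudspeaker fixed at 30 degrees in world coordinates.
SOURCE_WORLD_DEG = 30.0

for head_deg in (0.0, 15.0, 30.0, 45.0):
    head_relative = SOURCE_WORLD_DEG - head_deg  # cue changes as the head turns
    itd_us = itd_woodworth(head_relative) * 1e6
    est = world_azimuth(head_relative, head_deg)  # combined estimate is constant
    print(f"head at {head_deg:5.1f} deg -> ITD {itd_us:7.1f} us, "
          f"world estimate {est:5.1f} deg")
```

Under these assumptions, each head rotation changes the head-relative azimuth and hence the interaural cue, yet the combined estimate remains fixed at the source's world position, matching the everyday percept of a stationary source.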
Two Specific Aims investigate this hypothesis:
Aim 1 involves experiments examining the role of auditory spatial cues and their interaction with head-position cues when listeners and/or sound sources move.
Aim 2 is concerned with the cues used to determine the position of the head (or body or eyes) when listeners and/or sound sources move and listeners are asked to make sound source localization judgments. A unique facility is used in which listeners rotate while listening to sounds presented from multiple sources, and new psychophysical procedures have been developed to study this under-researched topic: sound source localization when listeners and/or sound sources move. The hypothesis leads to the conclusion that sound source localization is a multisystem process; consequently, helping people localize sound sources cannot rest solely on understanding the processing of auditory spatial cues.
Locating a sound source based only on sound affords a listener several benefits: listeners can determine the location of objects they cannot see (e.g., objects behind them); they can use sounds from various sources to navigate complex environments; they can maintain balance by localizing sound sources when visual cues are absent; they can process sounds of interest in reflective and reverberant spaces; and they can understand a target sound in a background of interfering sounds when the target and interferers come from spatially separated sources. This proposal investigates sound source localization when listeners and sound sources move, as often occurs in the everyday world, in order to better understand sound source localization processing and ways to improve the benefits such processing can provide for people with sensory disorders.