Spatial hearing is fundamentally multidimensional, in that numerous acoustical features ("cues") vary simultaneously with spatial aspects of the auditory scene. These include the binaural cues (interaural time differences [ITD] and interaural level differences [ILD]) along with monaural spectral cues, temporal cues, and intensity cues that correlate with source and environmental features. In natural listening, the various cues are not redundant; they are often in conflict with one another, and they take on values that change from moment to moment and differ across frequency. Each individual cue × time × frequency combination provides potentially useful information about auditory space, but defines only one of multiple sensory dimensions that must be integrated to support robust perception and behavior in acoustically complex environments. Previous work has studied that process by measuring the relative influence, or perceptual weight, of binaural cues across cue type (e.g., ITD/ILD "trading"; Harris 1960, Stecker 2010), time (Saberi 1996, Stecker 2014), and frequency (Heller & Trahiotis 1996, Bibee & Stecker 2016). The default weighting patterns revealed by that work suggest that cue weighting generally follows the statistics of natural auditory scenes: perceptually dominant cues (e.g., low-frequency ITD and sound-onset cues) tend to be those least affected by distortion due to echoes, reverberation, and competing sounds. Previous work has focused on cue weighting in typical hearing and in neutral contexts, i.e., the "default" weighting patterns. Thus, key questions remain: are these weighting patterns fixed, or do they change with experience? Can they adapt rapidly to the actual statistics of the current scene ("reweighting")? And, if so, is reweighting driven by structural changes to the stimulus at hand, or by information gleaned from recent exposure?
The answers to these questions have important implications for signal-processing algorithms that alter the balance of available cues in aided and augmented listening, and for the habilitation of auditory spatial awareness in diverse listeners. The proposed study addresses these questions by measuring spatial judgments of sounds that carry multiple spatial cues in both normal-hearing and hearing-impaired listeners. Multiple regression will be used to measure listeners' relative sensitivity to those cues when some of the cues have been distorted (for example, by hearing aids or simulated reverberation) and when target sounds are preceded by other sounds with consistent versus inconsistent features. Comparing experienced and inexperienced hearing-aid users will quantify differences in weighting patterns due to hearing-aid use and long-term exposure to altered cues, paving the way for future work on the potential habilitation of spatial hearing.
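The multiple-regression approach to cue weighting can be illustrated with a minimal sketch. Here, trial-by-trial ITD and ILD values are independently roved, a simulated listener's lateralization responses are generated from hypothetical underlying weights, and ordinary least squares recovers the per-cue weights; all values (cue ranges, weights, noise level) are illustrative, not the proposal's actual stimuli or analysis.

```python
# Minimal sketch of perceptual-weight estimation by multiple regression.
# All data here are simulated; cue values are in arbitrary z-scored units.
import numpy as np

rng = np.random.default_rng(0)
n_trials = 500

# Independently roved binaural cues on each trial.
itd = rng.normal(size=n_trials)
ild = rng.normal(size=n_trials)

# Hypothetical listener: weights ITD twice as heavily as ILD, with
# additive response noise.
true_w = np.array([2.0, 1.0])
response = true_w[0] * itd + true_w[1] * ild + rng.normal(scale=0.5, size=n_trials)

# Multiple regression: regress responses onto the cue values
# (plus an intercept) to estimate the per-cue weights.
X = np.column_stack([itd, ild, np.ones(n_trials)])
w_itd, w_ild, intercept = np.linalg.lstsq(X, response, rcond=None)[0]

# Normalized relative weight, as such results are often reported.
rel_itd = w_itd / (w_itd + w_ild)
print(f"ITD weight: {w_itd:.2f}, ILD weight: {w_ild:.2f}, "
      f"relative ITD weight: {rel_itd:.2f}")
```

With independently roved cues, the regression coefficients separate each cue's influence on the response even though both cues vary on every trial; the same logic extends to cue × time × frequency combinations by adding a regressor per combination.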
Many patient populations (aging, hearing impaired, cochlear implant users) exhibit impairment of listening in noisy and reverberant environments. The proposed research characterizes the mechanisms that allow normal-hearing and hearing-impaired listeners to deal with these challenges by optimally combining different forms of auditory spatial information. Ultimately, the results will improve (a) theoretical descriptions of auditory processing deficits, and (b) algorithms for signal processing in hearing aids and cochlear implants, for example by guiding the distribution of digital signal-processing resources to most effectively preserve relevant spatial information and de-emphasize potentially misleading information.