Human and non-human animals alike rely on a set of abilities that allow them to segregate sounds of interest from noisy background sounds, for example, when listening to someone talking on a crowded, noisy bus. These abilities, collectively called 'auditory scene analysis,' have been the focus of several decades of laboratory research. However, recent research has pointed out the need for more empirical and theoretical work to explain the diversity of phenomena that occur during sound segregation. Joel Snyder from the University of Nevada, Las Vegas, focuses on context effects because they show promise for significantly advancing our theoretical understanding of the mechanisms and levels of processing necessary to explain auditory scene analysis. Key issues to be addressed include 1) What features of sound patterns influence context effects? 2) Are sensory or decision levels of processing best suited to explain context effects? 3) Do attention or awareness influence context effects?
The findings from this project may have technical and health applications, such as the design of hearing aids, speech and music recognition devices, and the amelioration of difficulties with sound segregation that occur even in listeners with normal hearing, as well as in hearing impairments, developmental disorders, and schizophrenia.
This project studied how perception of sound patterns is affected by prior context, specifically the characteristics of sound patterns heard in the recent past and how listeners interpreted those prior patterns. The sound patterns studied consisted of two tones of different frequencies (or pitches) that, when presented in alternation (low-high-low-high…), can be heard as a single melody or, when the difference in frequency is relatively large, as splitting into two separate streams of sound (low--low--… and high--high--…), as if a male and a female talker were taking turns saying words.
The experiments consistently showed that when the difference in frequency between tones in the prior pattern is large, the next pattern is actually less likely to be heard as two streams. They also showed that when the prior pattern is heard as two separate streams, the next pattern is likely to be heard the same way. Both of these context effects last for at least several seconds; furthermore, the data showed that a component of the memory for the patterns is stable, with no evidence of loss over time. This is consistent with evidence from a broad range of studies suggesting that auditory memory does not necessarily decay over time, a point that models of sensory memory need to take into account.
Separate experiments found that the rhythm of the prior pattern (but not its frequency range) affects how large the effect of prior frequency separation is on perception of the later pattern. These findings are consistent with the interpretation that the effect of prior frequency separation takes place at relatively late stages of auditory processing, in which sounds are not represented using tonotopic maps and are integrated over long time spans, most likely in secondary auditory cortex regions.
Another set of experiments found that the effect of prior frequency separation is somewhat smaller when people pay little attention to the prior pattern, but that it remains robust under these conditions. This indicates that this context effect is relatively automatic, in contrast to another context effect, called "buildup," that occurs during stream segregation.
Overall, these findings have revealed much new information about how prior sensory and perceptual experience influences interpretation of relatively simple sound patterns, what types of memory representations mediate the effects of context, and the extent to which attention is required for context effects. The studies also raise important new questions, such as which specific parts of the brain mediate context effects, how these brain areas interact with each other, and to what extent context effects matter in everyday situations, such as when trying to perceptually segregate speech in a noisy restaurant or to follow a particular instrument in a musical ensemble.
This project involved undergraduate, graduate, and post-doctoral trainees (including female and minority students) in research, some of whom played a role in high-level aspects of the studies described above, including analyzing data and writing papers. The research has also helped improve the overall capacity for high-level research at the University of Nevada, Las Vegas, particularly in the Department of Psychology.
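To make the alternating-tone paradigm concrete, the following minimal Python sketch synthesizes a low-high-low-high… sequence of the kind described above. The specific frequencies, tone duration, semitone separations, and function name are illustrative assumptions, not the parameters or code used in the project.

```python
# Illustrative sketch only (hypothetical parameters, not the project's stimuli):
# synthesizes an alternating low-high tone sequence of the kind described above.
import numpy as np

def make_alternating_sequence(low_freq=400.0, separation_semitones=7,
                              tone_dur=0.1, n_pairs=10, sample_rate=44100):
    """Return a low-high-low-high... pure-tone sequence as a float array.

    With a small separation_semitones the sequence tends to be heard as one
    melody; with a large separation it tends to split into two streams.
    """
    # A semitone is a factor of 2**(1/12) in frequency.
    high_freq = low_freq * 2 ** (separation_semitones / 12.0)
    t = np.arange(int(tone_dur * sample_rate)) / sample_rate
    # 10 ms linear onset/offset ramps to avoid audible clicks at tone edges.
    ramp = np.minimum(1.0, np.minimum(t, t[::-1]) / 0.01)
    low_tone = np.sin(2 * np.pi * low_freq * t) * ramp
    high_tone = np.sin(2 * np.pi * high_freq * t) * ramp
    pair = np.concatenate([low_tone, high_tone])
    return np.tile(pair, n_pairs)

if __name__ == "__main__":
    # Hypothetical demo values: 2 semitones usually sounds like one melody,
    # 12 semitones usually splits into two streams.
    from scipy.io import wavfile
    seq = make_alternating_sequence(separation_semitones=12)
    wavfile.write("two_streams_demo.wav", 44100, (seq * 32767).astype(np.int16))
```

Listening to a small-separation and a large-separation version of this sequence reproduces the basic perceptual contrast the experiments manipulated: one coherent melody versus two segregated streams.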
Finally, this research promises to have health and technology applications, for example by inspiring speech recognition software or cochlear implant processors that take into account the powerful context effects studied here.