Normal-hearing listeners effortlessly interpret their acoustic surroundings, parsing ongoing sound into distinct sources and tracking those sources through time. To perform this task, the brain holds relevant sensory information in memory for comparison with future inputs, but the nature of this memory representation is not well understood. Typically, this mechanism is studied using predictable patterns in sound sequences, where listeners are asked to detect deviations from an established pattern. Previous work has demonstrated that the brain is sensitive to a wide variety of patterns in sound along multiple acoustic dimensions. Such patterns, however, do not probe how the brain represents sound in natural listening environments, where relevant information is often not predictable and cannot be represented explicitly and with certainty. This project uses stochastic sound sequences, which exhibit statistical properties rather than deterministic patterns, to investigate the extent to which the brain represents statistical information from sequences of sounds in the presence of uncertainty. Our central hypothesis is that the brain collects high-dimensional statistical information (beyond mean and variance) to capture uncertainty across time and across perceptual features in order to interpret ongoing sound. In a series of change-detection experiments, listeners will be asked to detect changes in the entropy of sound sequences varying along multiple perceptual features: pitch, timbre, and spatial location. A computational model of predictive processing will be developed to compare alternative representations of statistical information in the brain. Perceptual constraints in the model will be fit to individual behavior, and the fitted model will be used to predict deviance responses in electroencephalography (EEG) data. Additionally, individual differences in perceptual abilities will be measured in the same listeners using a separate task, and these measures will be compared with the model's findings to aid interpretation and to refine the model. A computational model of how the brain processes complex sounds will open the possibility of investigating more natural, "messy" stimuli in the laboratory, and a better understanding of individual differences in the perception of stochastic sounds could lead to better diagnostic tools for assessing temporal processing abilities.
Healthy, normal-hearing listeners can effortlessly interpret their acoustic surroundings, extracting useful information over time from ongoing sound despite the dynamic, often random nature of everyday listening. In this project, we will investigate the computational mechanisms the brain uses to build representations of stochastic sound sources. Our findings will provide a greater understanding of temporal processing and insight into the individual differences in perception that are revealed under conditions of uncertainty.