Normal-hearing listeners effortlessly interpret their acoustic surroundings, parsing ongoing sound into distinct sources and tracking those sources through time. To perform this task, the brain holds relevant sensory information in memory for comparison with future inputs, but the nature of this memory representation is not well understood. This mechanism is typically studied using predictable patterns in sound sequences, where listeners are asked to detect deviants from established patterns. Previous work has demonstrated that the brain is sensitive to a wide variety of patterns along multiple acoustic dimensions. Such patterns, however, do not probe how the brain represents sound in natural listening environments, where relevant information is often not predictable and cannot be represented explicitly and with certainty. This project uses stochastic sound sequences, which exhibit statistical properties rather than deterministic patterns, to investigate the extent to which the brain represents statistical information from sequences of sounds in the presence of uncertainty. Our central hypothesis is that the brain collects high-dimensional statistical information (beyond mean and variance) to capture uncertainty across time and across perceptual features in order to interpret ongoing sound. In a series of change detection experiments, listeners will be asked to detect changes in the entropy of sound sequences varying along multiple perceptual features: pitch, timbre, and spatial location. A computational model of predictive processing will be developed to compare alternative representations of statistical information in the brain. Perceptual constraints in the model will be fit to individual behavior, and the fitted model will be used to predict deviance responses in electroencephalography (EEG) data. Additionally, individual differences in perceptual abilities will be measured in the same listeners using a separate task, and these measures will be compared with the model's findings to aid interpretation and improve the model. A computational model of how the brain processes complex sounds will open the possibility of investigating more natural, "messy" stimuli in the laboratory, and a better understanding of individual differences in the perception of stochastic sounds could lead to better diagnostic tools for assessing temporal processing abilities.
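The entropy manipulation at the heart of the proposed change detection experiments can be made concrete with a small sketch. The Python snippet below is purely illustrative and is not part of the proposed stimulus or modeling code; the tone pool, segment lengths, and function name are hypothetical. It builds a tone sequence whose pitch entropy increases at a change point and quantifies each segment with the Shannon entropy of the empirical pitch distribution.

```python
import numpy as np

def sequence_entropy(tones):
    """Shannon entropy (bits) of the empirical distribution over tone values."""
    _, counts = np.unique(tones, return_counts=True)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log2(p)))

rng = np.random.default_rng(0)

# Hypothetical tone pool: eight candidate pitches spanning three octaves (Hz).
pitch_pool = np.geomspace(200.0, 1600.0, 8)

# Low-entropy segment: tones drawn from only two pitches (few alternatives).
# High-entropy segment: tones drawn uniformly from all eight pitches.
low_entropy_segment = rng.choice(pitch_pool[:2], size=50)
high_entropy_segment = rng.choice(pitch_pool, size=50)
sequence = np.concatenate([low_entropy_segment, high_entropy_segment])

# A listener's (or model's) task would be to detect the change in entropy
# at the boundary between the two segments of `sequence`.
print(f"low-entropy segment:  {sequence_entropy(low_entropy_segment):.2f} bits")   # ~1 bit
print(f"high-entropy segment: {sequence_entropy(high_entropy_segment):.2f} bits")  # ~3 bits
```

The same construction extends to the other perceptual features named above (timbre, spatial location) by drawing tokens from pools defined along those dimensions rather than pitch.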

Public Health Relevance

Healthy, normal-hearing listeners can effortlessly interpret their acoustic surroundings, extracting useful information over time from ongoing sound despite the dynamic, often random nature of everyday listening. In this project, we will investigate the computational mechanisms the brain uses to build representations of stochastic sound sources. Our findings will provide a greater understanding of temporal processing and insight into individual differences in perception that are revealed under uncertainty.

Agency
National Institutes of Health (NIH)
Institute
National Institute on Deafness and Other Communication Disorders (NIDCD)
Type
Predoctoral Individual National Research Service Award (F31)
Project #
5F31DC017629-02
Application #
9844411
Study Section
Special Emphasis Panel (ZDC1)
Program Officer
Rivera-Rentas, Alberto L
Project Start
2018-12-18
Project End
2020-12-17
Budget Start
2019-12-18
Budget End
2020-12-17
Support Year
2
Fiscal Year
2020
Total Cost
Indirect Cost
Name
Johns Hopkins University
Department
Engineering (All Types)
Type
Biomed Engr/Col Engr/Engr Sta
DUNS #
001910777
City
Baltimore
State
MD
Country
United States
Zip Code
21205