Humans with normal hearing excel at deriving information about the world from sound. Our auditory abilities represent stunning computational feats that only recently have been replicated to any extent in machine systems. And yet our auditory abilities are highly vulnerable, being greatly compromised in listeners with hearing impairment, cochlear implants, and auditory neurodevelopmental disorders, particularly in the presence of noise. Difficulties in recognition often lead to frustration and social isolation, and are not adequately addressed by current hearing aids, implants, and remediation strategies. The long-term goal of the proposed research is to reveal the basis of auditory recognition and to provide insights that will facilitate improved prosthetic devices and therapeutic interventions. The development of more effective devices and therapies is currently limited by an incomplete understanding of the factors that underlie real-world recognition by normal-hearing listeners. In particular, although responses to sound in subcortical auditory pathways are relatively well studied, little is known about the transformations that occur within the auditory cortex to create representations of meaningful sound structure. We propose to enrich the understanding of auditory recognition with three sets of experiments that examine the cortical representation of real-world sounds in human listeners, combining functional magnetic resonance imaging (fMRI) with computational modeling of the underlying representations.
Aim 1 develops artificial neural network models of speech and music processing and compares their representations to those in the auditory cortex, synthesizing sounds that generate the same response in a model, measuring brain responses to those synthesized sounds, and probing the time scale of the auditory analysis of speech and music.
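To make the synthesis approach in Aim 1 concrete, the following is a minimal illustrative sketch, not the proposed implementation: given a trained recognition network (a toy stand-in here) and a reference sound, a noise waveform is optimized by gradient descent until its activations at a chosen model stage match those evoked by the reference. The architecture, layer choice, and optimization settings are assumptions for illustration only.

```python
import torch
import torch.nn as nn

class ToyAudioNet(nn.Module):
    """Stand-in for a trained speech/music recognition network (assumption)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=64, stride=4), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=32, stride=4), nn.ReLU(),
        )
    def forward(self, wav):                   # wav: (batch, 1, samples)
        return self.features(wav)

def synthesize_matched_sound(model, reference_wav, n_steps=500, lr=1e-2):
    """Optimize a noise waveform so its model response matches that of the reference."""
    model.eval()
    with torch.no_grad():
        target = model(reference_wav)         # model response to be matched
    synth = torch.randn_like(reference_wav, requires_grad=True)  # start from noise
    opt = torch.optim.Adam([synth], lr=lr)
    for _ in range(n_steps):
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(synth), target)
        loss.backward()
        opt.step()
    return synth.detach()                     # candidate stimulus for fMRI

if __name__ == "__main__":
    net = ToyAudioNet()
    ref = torch.randn(1, 1, 16000)            # 1 s of audio at 16 kHz (placeholder)
    matched = synthesize_matched_sound(net, ref)
    print(matched.shape)
```

Sounds produced this way are matched to natural sounds only with respect to the model's representation, so measuring brain responses to them tests whether that representation captures what the auditory cortex encodes.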
Aim 2 develops and tests models of pitch perception in noise, exploring the hypothesis that pitch perception is constrained both by the statistics of natural sounds and by the frequency selectivity of the cochlea.
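As a toy illustration of the second constraint in Aim 2, not the proposed model: the sketch below passes a waveform through a bank of bandpass filters (a crude stand-in for cochlear frequency selectivity), half-wave rectifies the channels, and reads out F0 from the summed autocorrelation. In the proposed work the readout would instead be a model optimized on natural sounds; the filter parameters and readout here are assumptions for illustration only.

```python
import numpy as np
from scipy.signal import butter, lfilter, correlate

def cochlear_filterbank(wav, sr, center_freqs):
    """Crude stand-in for cochlear filtering: one bandpass filter per channel."""
    nyq = sr / 2
    channels = []
    for cf in center_freqs:
        b, a = butter(2, [0.8 * cf / nyq, min(1.2 * cf / nyq, 0.99)], btype="band")
        channels.append(lfilter(b, a, wav))
    return np.array(channels)

def estimate_f0(wav, sr, fmin=80.0, fmax=400.0):
    cfs = np.geomspace(100, 4000, 30)                 # 30 log-spaced channels (assumption)
    chans = np.maximum(cochlear_filterbank(wav, sr, cfs), 0.0)  # half-wave rectify
    n = wav.size
    sacf = np.zeros(n)
    for ch in chans:                                  # summary autocorrelation across channels
        sacf += correlate(ch, ch, mode="full")[n - 1:]
    lags = np.arange(int(sr / fmax), int(sr / fmin) + 1)
    return sr / lags[np.argmax(sacf[lags])]

if __name__ == "__main__":
    sr, dur = 16000, 0.128
    t = np.arange(int(sr * dur)) / sr
    tone = sum(np.sin(2 * np.pi * 200 * k * t) for k in range(1, 6))  # 200 Hz complex tone
    print(f"Estimated F0: {estimate_f0(tone, sr):.1f} Hz")            # ~200 Hz
```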
Aim 3 develops and tests models that jointly localize and recognize sounds, and uses fMRI to probe brain representations of sound identity and location. The results will reveal the mechanisms underlying robust sound recognition by the healthy auditory system and will set the stage for investigations of the cortical consequences of hearing impairment and auditory neurodevelopmental disorders, potentially suggesting new strategies for remediation.
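The kind of joint localization-and-recognition model referred to in Aim 3 can be illustrated with the sketch below, again an assumption-laden toy rather than the proposed architecture: a network takes binaural (two-channel) audio and is trained with a joint objective, with one head reporting the sound's category and another its azimuthal location.

```python
import torch
import torch.nn as nn

class JointLocRecogNet(nn.Module):
    """Toy two-headed network for joint recognition ("what") and localization ("where")."""
    def __init__(self, n_classes=50, n_azimuths=72):
        super().__init__()
        self.shared = nn.Sequential(                  # shared binaural front end
            nn.Conv1d(2, 32, kernel_size=64, stride=8), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=32, stride=8), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
        )
        self.what_head = nn.Linear(64, n_classes)     # sound identity
        self.where_head = nn.Linear(64, n_azimuths)   # azimuth in 5-degree bins (assumption)
    def forward(self, wav):                           # wav: (batch, 2, samples)
        h = self.shared(wav)
        return self.what_head(h), self.where_head(h)

if __name__ == "__main__":
    net = JointLocRecogNet()
    wav = torch.randn(4, 2, 32000)                    # batch of 2 s binaural clips (placeholder)
    class_logits, azimuth_logits = net(wav)
    labels = torch.randint(0, 50, (4,))
    azimuths = torch.randint(0, 72, (4,))
    loss = (nn.functional.cross_entropy(class_logits, labels)
            + nn.functional.cross_entropy(azimuth_logits, azimuths))  # joint objective
    loss.backward()
    print(class_logits.shape, azimuth_logits.shape)
```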
People with normal hearing are typically able to recognize, understand, and localize sounds of interest, but this ability is often compromised in listeners with hearing disorders. The proposed research will enrich the understanding of the neural mechanisms underlying auditory recognition and localization in normal listeners. The results will likely provide insight into the brain processes that are altered in hearing disorders and that underlie listening difficulties, potentially leading to improved remediation strategies.