In everyday activity, our sense of space is guided by coordinated analysis of multisensory information from the surrounding environment. Multisensory spatial information is critical for identifying and attending to a target sound in a noisy environment (e.g., bars, restaurants). It is also crucial for detecting heard but unseen dangers, because the spaces encoded by sounds and sights do not always align. For foveal species such as humans and monkeys, the visual field is restricted to frontal space, whereas the auditory field is panoramic, covering the entire frontal and rear space. The rear space, however, has been largely overlooked in multisensory research, and it remains unknown where and how vision directly influences auditory spatial processing in the brain. The long-term objective of this study is to understand the fundamental strategies of multisensory spatial perception and the cortical neural mechanisms that implement these strategies. This proposal will investigate how visual information modulates auditory encoding of 360-degree, panoramic space in auditory cortex using an integrated approach combining neurophysiology, mechanistic computational modeling, and predictive statistical modeling. We hypothesize that visuo-spatial information enhances the auditory representation of frontal space by changing the directional preference of neural network dynamics. Neurophysiological experiments will provide a comprehensive assessment of changes in the 360-degree spatial tuning of auditory cortex neurons after frontal visual stimulation. Computational models will aid in identifying putative cell types and will reveal how the heterogeneous extracellular spike waveforms recorded depend on stimulus conditions and cell type. Predictive statistical modeling will determine the sources of variance in cortical spiking data and will predict the spiking output of different cell types under different conditions, all with laminar specificity. Together, this integrated approach will yield an understanding of visual modulation of auditory spatial processing, with a focus on layer-specific interactions between local rhythm generators and single-unit activity. The impact of this work will be maximized by sharing data in standardized formats, rigorous and transparent model validation, and use of model description standards that allow code generation, so that models can be simulated and re-used across many programming languages and simulation platforms.
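As one illustration of the kind of analysis the neurophysiological aim implies (not part of the proposal itself), the sketch below shows how a 360-degree spatial tuning curve and a preferred direction could be estimated from spike counts. The data, array shapes, and function name are hypothetical assumptions for illustration only.

```python
import numpy as np

def spatial_tuning(spike_counts, azimuths_deg):
    """Estimate a 360-degree spatial tuning curve and preferred direction.

    spike_counts : array, shape (n_trials, n_azimuths) -- spikes per trial
                   at each sound-source azimuth (hypothetical data).
    azimuths_deg : array, shape (n_azimuths,) -- azimuths spanning 0-360 deg.
    """
    tuning = spike_counts.mean(axis=0)          # mean spike count per azimuth
    az_rad = np.deg2rad(azimuths_deg)
    # Circular (vector) mean of the tuning curve: the resultant angle gives the
    # preferred direction, and its length is a 0-1 index of tuning sharpness.
    resultant = np.sum(tuning * np.exp(1j * az_rad)) / np.sum(tuning)
    preferred_deg = np.rad2deg(np.angle(resultant)) % 360
    vector_strength = np.abs(resultant)
    return tuning, preferred_deg, vector_strength

# Toy usage: simulated counts from 12 speaker positions over 50 trials.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    azimuths = np.arange(0, 360, 30)
    rates = 5 + 10 * np.exp(np.cos(np.deg2rad(azimuths - 45)))   # toy tuning
    counts = rng.poisson(rates, size=(50, azimuths.size))
    _, pref, vs = spatial_tuning(counts, azimuths)
    print(f"preferred azimuth ~ {pref:.1f} deg, vector strength = {vs:.2f}")
```

Comparing such tuning estimates between auditory-only and audiovisual conditions, separately by putative cell type and cortical layer, is one simple way the hypothesized frontal shift in directional preference could be quantified.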

Public Health Relevance

The ability of the nervous system to integrate multisensory inputs is essential to communication in complex sensory and social environments. Impairment of this ability is the most noticeable outcome of hearing loss. Identifying how the auditory cortex encodes sound features in a visual environment will improve our understanding of how multisensory perception might be implemented in neural circuits, thereby revealing potential sources of perceptual impairment in real-world conditions.

Agency
National Institutes of Health (NIH)
Institute
National Institute on Deafness and Other Communication Disorders (NIDCD)
Type
Research Project (R01)
Project #
1R01DC019278-01
Application #
10147984
Study Section
Special Emphasis Panel (ZRG1)
Program Officer
Poremba, Amy
Project Start
2020-07-01
Project End
2024-06-30
Budget Start
2020-07-01
Budget End
2021-06-30
Support Year
1
Fiscal Year
2020
Total Cost
Indirect Cost
Name
Arizona State University-Tempe Campus
Department
Biomedical Engineering
Type
Sch Allied Health Professions
DUNS #
943360412
City
Tempe
State
AZ
Country
United States
Zip Code
85287