The proposed project will enhance our knowledge of the neurophysiology of AV integration in spoken language. A neural model, whose key claims are supported by preliminary data, guides the experiments in the two aims, which use EEG, fMRI, and behavioral techniques to elucidate the function and interconnectivity of the brain structures involved in AV integration. Future work will extend the aims to a wider range of listening environments and clinical populations, including individuals with hearing loss and those with cochlear implants (in collaboration with faculty in the Departments of Speech and Hearing and Psychology and in the PI's home department, Otolaryngology).

Public Health Relevance

The proposed research will expand our current knowledge of the brain mechanisms that underlie the integration of auditory (speech) and visual (mouth movement) information during spoken language understanding in challenging environments (e.g., degraded speech, a multi-talker 'cocktail party', or video chatting over a slow internet connection). Such understanding should position us well to study the neural adaptation that individuals with hearing loss, language deficits, and other multisensory disorders (e.g., autism) undergo to maintain intelligibility under adverse AV conditions. In turn, this should contribute to intervention and treatment strategies for these populations.

Agency
National Institutes of Health (NIH)
Institute
National Institute on Deafness and Other Communication Disorders (NIDCD)
Type
Research Project (R01)
Project #
7R01DC013543-05
Application #
9816018
Study Section
Language and Communication Study Section (LCOM)
Program Officer
Poremba, Amy
Project Start
2014-12-04
Project End
2019-11-30
Budget Start
2018-09-01
Budget End
2018-11-30
Support Year
5
Fiscal Year
2018
Total Cost
Indirect Cost
Name
University of California Merced
Department
Type
Schools of Arts and Sciences
DUNS #
113645084
City
Merced
State
CA
Country
United States
Zip Code
95343
Shahin, Antoine J; Backer, Kristina C; Rosenblum, Lawrence D et al. (2018) Neural Mechanisms Underlying Cross-Modal Phonetic Encoding. J Neurosci 38:1835-1849
Shatzer, Hannah; Shen, Stanley; Kerlin, Jess R et al. (2018) Neurophysiology underlying the influence of stimulus reliability on audiovisual integration. Eur J Neurosci 48:2836-2848
Shahin, Antoine J; Shen, Stanley; Kerlin, Jess R (2017) Tolerance for audiovisual asynchrony is enhanced by the spectrotemporal fidelity of the speaker's mouth movements and speech. Lang Cogn Neurosci 32:1102-1118
Moberly, Aaron C; Bhat, Jyoti; Shahin, Antoine J (2016) Acoustic Cue Weighting by Adults with Cochlear Implants: A Mismatch Negativity Study. Ear Hear 37:465-72