The ability to pay attention to sound sources of interest is critical for verbal communication in many situations. When multiple talkers are speaking simultaneously, the listener must direct and redirect attention in order to extract relevant information. However, the neural processing involved in directing auditory attention is not fully understood. Moreover, establishing how top-down control of attention interacts with the bottom-up differentiability of sound sources provides a critical first step toward understanding how hearing deficits can impair effective cognitive control; understanding this issue could inform the design of improved coding algorithms for hearing aids and cochlear implants. Previous neuroimaging studies in both audition and vision have implicated several cortical structures in the voluntary switching of attention based on spatial information, but the mechanisms and dynamics of switching attention based on non-spatial information are unknown. We will study orientation and switching of auditory attention based on non-spatial stimulus features in behavioral and neuroimaging studies of human subjects performing psychoacoustical tasks. To capture cortical dynamics, we will employ a multimodal imaging approach, combining the temporal resolution of magneto- and electroencephalography (M-EEG) with localization of cortical activation co-constrained by anatomical magnetic resonance imaging (MRI) scans.
Our first aim is to perform a set of psychophysical experiments designed to examine the cost of switching attention based on non-spatial stimulus features as a function of the peripheral separability of competing sounds.
Our second aim is to use neuroimaging techniques to examine the cortical dynamics of switching attention based on non-spatial features. By combining psychophysics and neuroimaging, we will determine the temporal dynamics of the cortical network involved in orientation and switching of auditory attention. This research will allow us to better understand how the auditory system enables us to communicate effectively in challenging acoustic environments.
In everyday settings, people can choose to pay attention to particular sound sources despite the presence of many competing noise sources in the environment. However, many listeners with hearing problems struggle with this task, and it is currently unknown what processing the brain uses to accomplish it. We will study this process by using imaging techniques (magneto- and electroencephalography) to capture brain dynamics during challenging auditory tasks, enabling us to better understand how the brain utilizes auditory information to facilitate effective communication.
Hands, Gabrielle L; Larson, Eric; Stepp, Cara E (2014) Effects of augmentative visual training on audio-motor mapping. Hum Mov Sci 35:145-55
Thorp, Elias B; Larson, Eric; Stepp, Cara E (2014) Combined auditory and vibrotactile feedback for human-machine-interface control. IEEE Trans Neural Syst Rehabil Eng 22:62-8
Gramfort, Alexandre; Luessi, Martin; Larson, Eric et al. (2014) MNE software for processing MEG and EEG data. Neuroimage 86:446-60
Lee, Adrian K C; Larson, Eric; Maddox, Ross K et al. (2014) Using neuroimaging to understand the cortical mechanisms of auditory selective attention. Hear Res 307:111-20
Larson, Eric; Lee, Adrian K C (2014) Potential use of MEG to understand abnormalities in auditory function in clinical populations. Front Hum Neurosci 8:151
Larson, Eric; Lee, Adrian K C (2014) Switching auditory attention using spatial and non-spatial features recruits different cortical networks. Neuroimage 84:681-7
Larson, Eric; Lee, Adrian K C (2013) Influence of preparation time and pitch separation in switching of auditory attention between streams. J Acoust Soc Am 134:EL165-71
Larson, Eric; Terry, Howard P; Canevari, Margaux M et al. (2013) Categorical vowel perception enhances the effectiveness and generalization of auditory feedback in human-machine-interfaces. PLoS One 8:e59860
Wronkiewicz, Mark; Larson, Eric; Lee, Adrian K C (2013) Towards a next-generation hearing aid through brain state classification and modeling. Conf Proc IEEE Eng Med Biol Soc 2013:2808-11