Research has revealed much about the mechanisms of the visual system. However, perceptual experience is usually multimodal, with close relationships between the visual and auditory modalities. Auditory signals influence neural activation throughout the visual pathways, including the midbrain and primary visual cortex. It is therefore important to extend rigorous theories of vision to multimodal contexts. Prior research on auditory-visual interactions has focused primarily on the perception of space, timing, duration, motion, and speech, whereas recent research has demonstrated auditory-visual interactions in the perception of objects and faces. The goal of the proposed research is to fill the gap in our understanding of auditory-visual interactions at the level of visual feature processing. We will characterize which acoustic patterns uniquely interact with the processing of low-level (e.g., spatial frequency), intermediate-level (e.g., material texture and 2D shape), and high-level (e.g., common objects, words, face identity, and facial expressions) visual features. To understand these interactions, we will combine psychophysics and computational modeling (AIM 1) to determine how associated sounds influence the basic mechanisms of visual feature processing: those that control image visibility (front-end signal-to-noise ratio and sampling efficiency), those that control signal competition for visual awareness, and those that control the strength and reliability of neural population coding of visual features in the presence of between- and within-receptive-field signal interactions. The results will provide an integrative understanding of how sounds influence visual signals, sampling, competition, and coding in the processing of low-, intermediate-, and high-level visual features.
The proposed research will also allow development of cross-modal methods for assisting visual perception by enhancing specific spatial scales, materials, shapes, objects, and facial expressions. For example, our preliminary results suggest that sounds can be used to boost and tune the perception of facial expressions and to direct attention to specific spatial frequencies. In the translational aim (AIM 2), we will systematically investigate how sounds can be used to aid visual perception: for example, to direct attention to an object, material, word, or facial expression during search, to facilitate object recognition by directing attention to diagnostic spatial-frequency components, and to enrich scene understanding by directing attention to multiple spatial scales. Because feature-specific auditory signals are readily presented over headphones, the proposed research may provide a means, for example, to counter biased perception (e.g., perceiving facial expressions as negative due to social anxiety) and to direct attention to specific objects and spatial scales (e.g., details versus gist) for individuals with visual challenges such as low vision, strokes affecting vision, or attention disorders. Thus, the proposed research will not only systematically integrate auditory influences into current models of visual feature processing, but may also provide a means of aiding visual processing with auditory signals.
Visual signals are often accompanied by related auditory signals, so understanding auditory influences on visual processes is important for understanding how the visual system works in realistic contexts. Recent results suggest that auditory-visual interactions extend to the perception of objects; for example, playing a characteristic sound of a target object (e.g., "meow" for a cat) facilitates visual search even when the sound is spatially uninformative. Understanding the nature of these interactions may provide new insights for alleviating vision problems, such as age-related visual impairments, by using auditory stimulation.