Virtually every visual and visuomotor function hinges first and foremost on the visual system's ability to consistently and accurately localize stimuli, especially in cluttered and dynamic scenes. However, it is unknown what sorts of information are integrated to determine perceived position, how such integration occurs, or what the neural mechanism(s) and loci of this process are. Perceptual localization in typically cluttered and dynamic scenes requires the visual system to both assign and update object positions; to do this, it relies not just on the retinal location of the object of interest, but also principally on four additional factors: visual motion, frames of reference, eccentricity biases, and contrast adaptation. To understand how the visual system assigns and updates object positions, we must approach localization not as an isolated process but as an integrative one, one that depends on contextual information in the scene. Our proposed experiments have two goals. First, we will psychophysically measure perceived position as a function of retinal position, contrast adaptation, visual motion, eccentricity bias, and frames of reference in order to generate a novel multifactorial integration field model; we will then use this model in fMRI experiments to test whether neurotopic organization across visual cortex is heterogeneous (unique position codes in each visual area) or homogeneous (identical across visual areas). Our pilot results suggest that there are unique position codes in different visual areas. The second goal is to test whether these unique position codes (the differences in topographic organization) have perceptual consequences. We will use psychophysics to test predicted double dissociations in perceived location, and then use TMS to test the causal contribution of heterogeneous visual cortical topographic organization to perceived position.
One novelty of our approach lies in developing a new mixed generative and discriminative model of spatial coding that can be applied to psychophysical and fMRI data in tandem, and that further allows us to make predictions from fMRI results about perceptual outcomes in specific situations. The causal relationship between fMRI results and perceptual outcomes will then be tested with TMS. Our experiments will provide novel insight into how cues are integrated to determine perceived position at each stage of visual processing, which is crucial to understanding the fundamental localization deficits that occur in visual and cognitive impairments ranging from amblyopia and macular degeneration to autism. Until we understand how position is assigned in the typical brain, we lack the necessary insight to develop diagnostic tools, predictive markers, and treatment outcome measures for these impairments.
Perceptual localization is arguably the most important function of the visual system, yet many neurological disorders produce specific deficits in visual and visuomotor localization, including amblyopia, optic ataxia, dyslexia, akinetopsia, autism, fragile X syndrome, and schizophrenia, among others. Until we understand how perceived position is determined in the normal brain for objects viewed in cluttered, dynamic scenes, we lack the necessary insight to develop diagnostic tools, predictive markers, and therapies for the spatial component of these impairments.