Our ability to see in the natural world depends on neural representations of objects. Signals sent from the eye to the brain are the basis for what we see, but these signals are transformed at many later stages of vision in order for objects to be perceived. The focus of this research is mid-level vision, a critical intermediate stage after the image-based representation of light in the eye but before knowledge-based object recognition. It is the first stage of representation for the edges of surfaces, the separation of surfaces from each other, and figure-ground segregation. Mid-level vision includes processes vital for object perception but is studied relatively little. Mid-level representations must solve two basic problems. First, they must integrate spatially separated parts of the scene selectively. Not all parts of a scene should be integrated into the representation of each distinct object, of course, so this selectivity is essential. Though normally effortless, the ability to integrate information from separate visual regions is made salient by perceptual deficits in some cases of Alzheimer's disease and schizophrenia. Second, the information available to be integrated is often ambiguous. Normally this implicit ambiguity is resolved without conscious awareness, though bi-stable images that fluctuate between two different percepts are textbook examples demonstrating neural ambiguity. The research here links these two challenges for mid-level vision by determining how spatially separated ambiguous representations become grouped together. This work requires the ability (1) to experimentally create and control ambiguity in separate mid-level representations localized to distinct regions of visual space and (2) to quantify how each of the separate ambiguous representations is perceived.
Both are achieved using an innovative technique called chromatic interocular switch rivalry, which generates ambiguous representations of color at a level beyond monocular stages of visual processing (that is, beyond monocular representations in cortical area V1). Chromatic ambiguity provides a model neural representation for understanding the grouping of mid-level ambiguity. Experiments will test how separated ambiguous representations are grouped and resolved. Initial hypothesis tests will determine the temporal and spatial characteristics, as well as the motion and three-dimensional percepts, that cause mid-level ambiguities in separate areas to become grouped. Further studies will test whether the range covered by the ambiguity itself mediates the grouping that serves to resolve it. Other experiments will extend the research to fully rendered three-dimensional objects to test whether the resolution of ambiguity by grouping follows from color-constant surface percepts that discount the illumination. In sum, this research will discover new knowledge about ambiguous mid-level representations, delivering an important advance for understanding normal vision as well as visual impairments.
Vision depends on the light entering the eye, but what we actually see follows from neural responses in the eye and brain. This research will determine how neural signals from the eye are transformed so that we see complete objects and surfaces. New knowledge will reveal fundamental properties of human vision that are important for understanding normal vision as well as visual impairments.
Coia, Andrew J; Shevell, Steven K (2018) Chromatic induction in space and time. J Opt Soc Am A Opt Image Sci Vis 35:B223-B230
Elliott, Sarah L; Shevell, Steven K (2018) Illusory edges comingle with real edges in the neural representation of objects. Vision Res 144:47-51
Slezak, Emily; Shevell, Steven K (2018) Perceptual resolution of color for multiple chromatically ambiguous objects. J Opt Soc Am A Opt Image Sci Vis 35:B85-B91
Shevell, Steven K; Martin, Paul R (2017) Color opponency: tutorial. J Opt Soc Am A Opt Image Sci Vis 34:1099-1108
Christiansen, Jens H; D'Antona, Anthony D; Shevell, Steven K (2017) Chromatic interocular-switch rivalry. J Vis 17:9