Retinal images are inherently fragmentary and ambiguous because images of separate entities overlap. However, early-level visual mechanisms are not equipped to parse the overlapping 2-D retinal images into distinct 3-D entities. The job of parsing these images falls on the mid-level mechanisms, whose main role is to represent the distinct entities as separate surfaces. The represented surface information then serves as input to the WHAT and WHERE systems that underlie our 3-D perception of objects and space, respectively. As such, the mid-level mechanisms are not simple conduits of information between early- and late-level visual mechanisms but play a crucial role in determining the quality and reliability of the visual information conveyed. Compared to other aspects of visual processing, less is known about the mid-level mechanisms. One of the biggest challenges is to discover how the often fragmentary and ambiguous retinal information is transformed into reliable surface representations, presumably through a spreading-in operation. When the image of a single entity is broken into parts by occlusion, a surface interpolation operation is required to integrate the parts into a global surface. Moreover, the inputs from the two eyes that contribute to these operations can be disparate in content and location. In the face of the myriad complexities of the visual inputs, it is further proposed that the mid-level mechanisms must rely on internal assumptions (perceptual rules) and feedback from higher visual levels for guidance in representing surfaces. How these operations are accomplished, however, remains unclear. To address this gap, this proposal uses a human psychophysical approach to investigate the above issues through three specific aims.
Aim 1 investigates how the spreading-in operation represents surfaces with texture patterns, which is more complex than representing texture-free surfaces. It is proposed that a principle of reducing coding redundancy governs the spreading-in operation, making the global surface representation efficient but prone to poor resolution. The latter could be one basis of the well-known crowding phenomenon.
Aim 2 investigates the texture-surface interpolation operation. Cognizant of the roles of attention and object knowledge, the experiments examine how these top-down factors influence surface integration.
Aim 3 investigates the long-term plasticity of the mid-level mechanisms. Perceptual learning experiments will be conducted to reveal how extensive training modifies the perceptual rules implemented at the mid-level. The long-term goal of this proposal is to advance our knowledge of how visual information is processed and represented by the mid-level mechanisms. This knowledge will help us better understand how humans perceive the visual world and will provide a clinical basis for behavioral diagnosis and treatment of visual dysfunctions related to amblyopia, strabismus, and aging.

Public Health Relevance

Early-level visual information is often fragmentary and ambiguous because images of distinct entities overlap. A role of the mid-level mechanisms is to 'make sense' of this information by representing the distinct entities as separate surfaces. Discovering how this is achieved will lead to a better scientific understanding of how humans perceive the visual world and will provide a clinical basis for non-invasive diagnosis and treatment of visual dysfunctions related to amblyopia and strabismus.

Agency: National Institutes of Health (NIH)
Institute: National Eye Institute (NEI)
Type: Research Project (R01)
Project #: 5R01EY023561-02
Application #: 8889261
Study Section: Cognition and Perception Study Section (CP)
Program Officer: Wiggs, Cheri
Project Start: 2014-08-01
Project End: 2019-07-31
Budget Start: 2015-08-01
Budget End: 2016-07-31
Support Year: 2
Fiscal Year: 2015
Total Cost:
Indirect Cost:
Name: University of Louisville
Department: Psychology
Type: Schools of Arts and Sciences
DUNS #: 057588857
City: Louisville
State: KY
Country: United States
Zip Code: 40208
He, Zijiang J; Ooi, Teng Leng; Su, Yong R (2018) Perceptual mechanisms underlying amodal surface integration of 3-D stereoscopic stimuli. Vision Res 143:66-81
Han, Chao; He, Zijiang J; Ooi, Teng Leng (2018) On Sensory Eye Dominance Revealed by Binocular Integrative and Binocular Competitive Stimuli. Invest Ophthalmol Vis Sci 59:5140-5148
Zhou, Liu; Deng, Chenglong; Ooi, Teng Leng et al. (2016) Attention modulates perception of visual space. Nat Hum Behav 1:
Zhou, Liu; Ooi, Teng Leng; He, Zijiang J (2016) Intrinsic spatial knowledge about terrestrial ecology favors the tall for judging distance. Sci Adv 2:e1501070
Ooi, Teng Leng; He, Zijiang J (2015) Space perception of strabismic observers in the real world environment. Invest Ophthalmol Vis Sci 56:1761-8