The project will study how humans see three-dimensional (3D) objects and 3D scenes under natural viewing conditions. Any laboratory experiment is necessarily a simplification of what actually occurs in real life, for two related reasons: (i) the experimenter must have full control over the experimental conditions, which is not possible if the experiment contains too many parameters, and (ii) the interpretation of the results is easier and more convincing when the experiment is simpler. In most prior research, a number of different criteria were used to decide how best to simplify the experimental conditions. In this project, unlike in all prior work, the criterion is that human 3D vision in the lab will be as veridical as it is in everyday life. Veridical, here, simply means that we see 3D shapes and 3D scenes the way they are "out there"; that is, we see them accurately. This has never been done before because no available theory could explain how veridical vision is possible from a mathematical and computational point of view. Such a theory has finally become available. The project's goals are threefold: (i) provide empirical evidence about the nature of the veridicality of 3D vision, (ii) determine and characterize the limits of veridical vision by specifying the geometry of the stimuli, as well as the viewing conditions, under which vision ceases to be veridical, and (iii) formulate computational models explaining veridical vision and its failures. Achieving these goals will be instrumental in (a) designing machine vision devices that can assist the blind and visually impaired, (b) assessing the implications of visual impairments in everyday life as well as in job-related activities, and (c) explaining the brain mechanisms responsible for 3D vision, which is essential for evaluating the effects of brain injuries and exploring possibilities for compensation by the brain.
There will be four sets of behavioral experiments: half will use real objects in real scenes, and the other half will use 3D models of objects and real scenes rendered by means of virtual reality devices. The first set of experiments will examine the role of the a priori constraints normally operating in our natural environment, such as the symmetry of objects, the presence and direction of gravity, and the orientation of the ground surface. The second set of experiments will evaluate the effect of degrading the visual stimulus by lowering its luminance, contrast, and spatial resolution. The third set of experiments will examine figure-ground organization; specifically, the ability of a human observer to detect and locate objects in naturalistic 3D cluttered scenes using both foveal and peripheral vision. The fourth set of experiments will examine 3D scene recovery; specifically, the ability of a human observer to see the positions, sizes, and orientations of 3D objects, as well as the empty spaces among and behind the objects. The results of these experiments will be used to formulate and test a computational theory of 3D vision. This theory will take the form of regularization or Bayesian inference in which a priori constraints are optimally combined with the visual information.
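To make the regularization idea concrete, the following is a minimal sketch, not the project's actual model: noisy depth measurements are combined with a mirror-symmetry prior (one of the a priori constraints mentioned above) by minimizing a cost of the form E(z) = Σᵢ (zᵢ − dᵢ)² + λ Σᵢ (zᵢ − z_mirror(i))², where d is the data and λ weights the prior. The function name, the gradient-descent scheme, and the specific cost are illustrative assumptions.

```python
def recover_depth(d, lam=1.0, steps=2000, lr=0.05):
    """Illustrative regularized estimate of a depth profile.

    Minimizes, by gradient descent,
        E(z) = sum_i (z[i] - d[i])**2          # fidelity to the data
             + lam * sum_i (z[i] - z[n-1-i])**2  # mirror-symmetry prior
    This is a toy stand-in for combining a priori constraints
    (here, symmetry) with noisy visual measurements.
    """
    n = len(d)
    z = list(d)  # initialize at the raw measurements
    for _ in range(steps):
        # Each mirror pair (i, n-1-i) appears twice in the prior term,
        # so its per-coordinate gradient contribution is 4*lam*(z_i - z_j).
        grad = [2 * (z[i] - d[i]) + 4 * lam * (z[i] - z[n - 1 - i])
                for i in range(n)]
        z = [z[i] - lr * grad[i] for i in range(n)]
    return z


# Asymmetric data is pulled toward symmetry; the pair mean is preserved
# while the pair difference shrinks by a factor of 1/(1 + 4*lam).
print(recover_depth([0.0, 1.0], lam=1.0))  # approximately [0.4, 0.6]
```

With λ = 0 the estimate equals the raw data; as λ grows, the recovered shape becomes increasingly symmetric regardless of the noise, which is the same trade-off a Bayesian formulation expresses through prior versus likelihood weighting.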
Vision is arguably our most important sense because it provides us with veridical (accurate) information about objects and events in the external world. Characterizing veridical natural vision, as well as determining the limits of its operation, is essential for (i) assessing the implications of visual impairments, (ii) designing devices that will assist the blind and visually impaired, and (iii) setting the stage for understanding the brain mechanisms underlying vision.