Dr. Alan Gilchrist at Rutgers University will conduct a series of experiments to discover how the human visual system can accurately determine the gray shade of visible surfaces even when the contrast in the image reaching the eye has been severely reduced. This occurs, for example, when a scene is viewed through a sheet of glass: light reflected off the front surface of the glass (e.g., a reflection of the sky on a car windshield) combines with the pattern of light coming through the glass, reducing the contrast of that pattern. A bright glare source in the visual field can have the same effect, owing to intense light scattered within the eyeball. The experiments will systematically isolate image features that are present in 3D scenes (where humans are known to correct successfully for such reflected light) but not in 2D patterns (where the correction fails). These features include shadows, crevices that receive no light, and certain patterns where overlapping edges intersect. The perception of colored surfaces overlain with colored reflections will also be studied.
The human visual system is able to determine the gray shade of objects despite changes in illumination level, changes in the background behind the object, and changes in the media that intervene between the viewer and the object. This last problem has been almost totally neglected by vision research. A solution will advance our understanding of how the brain computes the gray shade of visible objects. It is widely agreed that, in order to determine the gray shade of a surface, the human visual system relies crucially on the strength of contrast at the edges between that surface and neighboring surfaces. When the entire scene containing that surface is overlain with reflected light (or light scattered from a glare source), the strength of contrast at those edges is dramatically reduced. Nevertheless, when the scene is 3D and somewhat complex, the human eye can automatically disentangle the reflected light from the scene itself and perceive the gray shade of the surface correctly. No machine vision system can do that. Knowledge of how the human visual system achieves this feat will help in programming machines to replicate it.
The ability of the human visual system to perceive the correct shade of gray of objects has not yet been explained scientifically. Although the eye has light-sensitive receptor cells, the light reflected from a gray object and received by the eye does not reveal the gray shade of the object, because illumination level varies: a black object in sunlight can reflect much more light than a white object in shadow. The aspect of this problem investigated in this grant concerns the remarkable ability of the brain to determine the correct shade of gray of a surface even when that surface is seen through what is called a veiling luminance. An example is shown below. A veiling luminance, or veil for short, is essentially a sheet of light superimposed over the image of a scene that reaches the eye. The simplest example of a veiling luminance is fog, but a veil also arises when a sheet of light is reflected off a glass surface that one is looking through. For example, it is often difficult to see who is driving a car because the sky reflected off the windshield dramatically reduces the contrast of the image made by the driver's face. Yet despite this reduction in contrast, the human brain is often able to correctly compute the shade of gray of surfaces seen through a veil. Prior research had shown that the brain succeeds in discounting the veil when it covers a three-dimensional scene but not when it covers a 2D Mondrian-type pattern. This implies that gradients, that is, variations in light intensity along curved surfaces, play an important role. We conducted a series of experiments to determine what information in the image received by the eye the brain uses to detect that a veil is present and to discount the reduction in contrast. We discovered that chromatic information is essential.
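The contrast loss that a veil causes can be illustrated with simple arithmetic. The sketch below is not part of the reported experiments and the luminance values are hypothetical; it computes the Michelson contrast of a light/dark edge before and after a uniform sheet of light is added to both sides of the edge:

```python
# Illustrative sketch: an additive veil reduces edge contrast.
# Michelson contrast C = (L_high - L_low) / (L_high + L_low).

def michelson_contrast(l_high, l_low):
    """Contrast of an edge between two luminances (cd/m^2)."""
    return (l_high - l_low) / (l_high + l_low)

# Hypothetical luminances for a white patch next to a black patch.
white, black = 90.0, 3.0
veil = 100.0  # uniform veiling light added to every point in the image

before = michelson_contrast(white, black)
after = michelson_contrast(white + veil, black + veil)

print(f"contrast without veil: {before:.2f}")  # 0.94
print(f"contrast with veil:    {after:.2f}")   # 0.30
```

The luminance difference across the edge (87 cd/m^2 here) is unchanged by the veil; only the ratio, and hence the contrast, collapses. This is why a gray-shade computation that relied solely on edge contrast would misjudge veiled surfaces.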
The veil is successfully discounted either when the scene viewed through the veil contains at least one colored object, or when the veil is at least somewhat different in color from the light illuminating the scene behind the veil. This implies that the brain is exploiting correlations between variations in intensity within the retinal image and variations in color or saturation. These results will help researchers eventually understand the visual software the brain uses in seeing. Among many potential applications, this will provide a basis for artificial vision systems, which are currently extremely primitive.
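One way to picture such an intensity-saturation correlation: an achromatic veil dilutes the measured saturation of dim patches more than bright ones, so saturation co-varies with luminance across the veiled image. The sketch below is purely illustrative; the saturation measure and the numbers are assumptions for the demonstration, not the authors' analysis:

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

s0 = 0.5                        # every patch has the same intrinsic saturation
lums = [10, 30, 60, 90, 120]    # hypothetical patch luminances (cd/m^2)
veil = 80.0                     # achromatic veiling light

total = [L + veil for L in lums]            # luminance reaching the eye
sat = [s0 * L / (L + veil) for L in lums]   # veil dilutes measured saturation

print(pearson(total, sat))  # strongly positive: brighter patches stay more saturated
```

Without the veil, saturation would be constant across these patches and uncorrelated with luminance; the veil introduces the correlation, which is the kind of image regularity the results suggest the brain could exploit.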