As one views a natural scene, the visual system rapidly and effortlessly transforms the retinal image into a representation of discrete, recognizable objects. This process is referred to as scene segmentation. Although scene segmentation is of crucial importance to visual perception, relatively little is known about the neuronal mechanisms that underlie it. The proposed experiments address three important questions concerning the neural basis of scene segmentation. 1) How do neurons in the visual cortex detect and encode the boundaries of objects in three-dimensional space? Experiment 1 will examine the hypothesis that neurons in extrastriate area MT encode depth discontinuities by virtue of disparity-tuned surround inhibition. 2) How does segmentation information contribute to the ability to discriminate between visual patterns? Experiment 2 will examine the influence of depth segmentation on neuronal and psychophysical performance in a direction discrimination task. 3) How are the spatially distributed image features that belong to a single object linked together? Experiment 3 tests the hypothesis that synchronization of neuronal responses occurs when disparate image features are perceptually bound together. Each of these questions will be investigated by recording from neurons in the extrastriate visual cortex of monkeys trained to perform an appropriate behavioral task. The advantage of this approach is that the responses of cortical neurons can be related directly to the animal's behavior. The results of these experiments will provide a more comprehensive understanding of the neuronal mechanisms underlying scene segmentation.