American Sign Language (ASL) displays complex linguistic structure but, unlike spoken languages, conveys much of that structure by manipulating spatial relations, thus exhibiting properties for which each cerebral hemisphere of hearing people shows a different predominant functioning. The study of deaf signers with left or right hemisphere damage offers a particularly revealing vantage point for understanding the organization of higher cognitive functions in the brain, and how modifiable that organization may be. We propose three major series of experimental studies, each bringing a special property of the visual-gestural modality to bear on the investigation of the brain for language: 1) Brain Organization for a Visuospatial Language. We explore in depth the nature of sign language impairments due to brain damage. We investigate the relative contributions of the cerebral hemispheres to language, with special reference to linguistic functions and the spatial mechanisms that convey them. We focus on three levels of structure unique to a language in a different mode: processing 'phonology' without sound, vertically arrayed morphology, and spatially organized syntax. 2) Brain Organization for Visuospatial Functions and Facial Signals. We examine brain organization for nonlinguistic visuospatial processing in deaf signers, evaluating visuoperceptive, visuoconstructive, and visuospatial nonlanguage functions. Additionally, we investigate the breakdown of facial signals that serve two very different functions in ASL: affective versus specifically linguistic. 3) Neural Mechanisms for Sign Language. We investigate the dissociability of apraxia from sign aphasia through experiments that separately evaluate impairments in linguistic, symbolic, and motor functions, and we analyze the neural correlates of sign language. This research has broad implications for the theoretical understanding of the neural mechanisms underlying the human capacity for language.
Patterns of breakdown of a visual-spatial language in deaf signers allow new perspectives on the nature of cerebral specialization for language, since in sign language there is an interplay between visuospatial and linguistic relations within one and the same system.