Research over the past two decades has shown that signed languages conform to the same grammatical constraints and exhibit the same linguistic principles found in spoken languages. Unlike spoken languages, however, signed languages depend upon high-level visual-spatial processes for interpretation. This project investigates the use of physical space to encode linguistic distinctions within American Sign Language (ASL) and builds upon this lab's previous studies of on-line processing in deaf ASL signers. Specifically, the proposed experiments examine the spatialized encoding of a) co-reference and pragmatic contrasts within a discourse and b) locational contrasts and spatial perspective.
The research will advance our understanding of human language by illuminating the ways in which the sensory modality of a language affects its structure and processing. In addition, the results have clear implications for how ASL is used in educational settings for the deaf, indicating how signing space can be manipulated to foster the comprehension of complex information. Finally, this project will promote the participation of Deaf people in research, providing an environment and training that facilitate entry into scientific fields.