The objective of this research is to develop methods for the automatic recognition of American Sign Language (ASL) utterances, using as input the 3D shape and motion parameters of a subject's face, hands, and arms. These parameters are extracted from the relevant image sequences using computer vision techniques. The novel aspects of this research are: 1) the use of 3D information, namely the three-dimensional shape and motion of the hands, arms, and face, obtained from vision-based analysis of the data; 2) the use of Hidden Markov Models (HMMs) to recognize ASL structure at multiple levels; and 3) the coupling of computer vision and HMMs to go beyond the limitations of each. Novel computer vision methods will be developed, based on deformable models, visual cues, and knowledge of anthropometry, to allow accurate tracking of a subject's face and upper limbs. The 3D output of the vision system will serve as input to the HMMs for ASL recognition. To improve the robustness of the system, research will also be conducted on feedback mechanisms between the computer vision system and the HMMs. The final goal of this research is to demonstrate the feasibility of building an automated, robust system with high recognition accuracy (recovery of sign sequences at the sign level) that handles the inflectional and derivational properties of ASL in a systematic way.
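To make the proposed pipeline concrete, the following is a minimal sketch of the recognition step only: per-frame 3D shape and motion parameters (as would be produced by the vision system) are scored against one HMM per candidate sign, and the sign with the highest sequence likelihood is selected. This is an illustrative assumption about the architecture, not the authors' implementation; the Gaussian-emission, diagonal-covariance model and all names (log_gaussian, forward_loglik, classify_sequence) are hypothetical.

```python
# Sketch: sign-level recognition with per-sign Gaussian-emission HMMs.
# Assumed setup (not from the source): one small HMM per sign over
# (T, D) sequences of 3D shape/motion feature vectors per frame.
import numpy as np


def log_gaussian(x, mean, var):
    """Log density of a diagonal-covariance Gaussian at x."""
    return -0.5 * np.sum(np.log(2.0 * np.pi * var) + (x - mean) ** 2 / var)


def forward_loglik(obs, log_pi, log_A, means, vars_):
    """Forward algorithm in log space.

    obs:    (T, D) sequence of 3D shape/motion feature vectors
    log_pi: (S,)   log initial state probabilities
    log_A:  (S, S) log transition matrix
    means:  (S, D) per-state emission means
    vars_:  (S, D) per-state emission variances (diagonal covariance)
    """
    S = log_pi.shape[0]
    # Initialize with the first frame's emission scores.
    alpha = log_pi + np.array(
        [log_gaussian(obs[0], means[s], vars_[s]) for s in range(S)]
    )
    for t in range(1, obs.shape[0]):
        emit = np.array(
            [log_gaussian(obs[t], means[s], vars_[s]) for s in range(S)]
        )
        # Log-sum-exp over predecessor states, for numerical stability.
        alpha = emit + np.array(
            [np.logaddexp.reduce(alpha + log_A[:, s]) for s in range(S)]
        )
    return np.logaddexp.reduce(alpha)


def classify_sequence(obs, sign_models):
    """Label obs with the sign whose HMM gives the highest log-likelihood.

    sign_models maps a sign label to its (log_pi, log_A, means, vars_).
    """
    scores = {
        sign: forward_loglik(obs, *params)
        for sign, params in sign_models.items()
    }
    return max(scores, key=scores.get)
```

Working in log space avoids underflow on long utterances; in the proposed system, the per-frame feature vectors would come from the deformable-model tracker, and the feedback mechanisms mentioned above could, for example, feed the HMM's state estimates back to constrain the tracker.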