Recent work in automatic speech recognition has benefited from neural network algorithms, which allow not only isolated words but also connected speech to be recognized with some degree of confidence. There is reason to believe that this technique will also be useful for automatically recognizing American Sign Language (ASL), and the proposed research will lay the foundation for that long-range objective. Automatic sign recognition could greatly aid communication between the congenitally, profoundly deaf and the hearing people around them, since 90% of deaf children are born into hearing families, and the rest must also function in a hearing world.

Before automatic sign recognition is possible, researchers must identify the sensing technology, the input coding variables, and the required size of the associative networks. These three objectives form the specific aims of this small research grant request. In the proposed project, the research team will take joint and position sensor data from a DataGlove and its Polhemus position tracker and establish the minimum information necessary to recognize a given set of signs, using the cheremic code proposed by sign language researchers (a sign's handshape, location, and movement parameters). Sensor data fusion techniques based on associative neural networks will be used to build the training and recognition systems.

The goal of the proposed study is to recognize three sets of five signs, grouped as shape-fragile, motion-fragile, and position-fragile; that is, signs whose identity depends most critically on handshape, on movement, or on position, respectively. The efficiency of shape, motion, and position detection for these signs will be measured, and its variability with signing speed will be studied. By the end of the two years, the research team hopes to have established the sensor configuration and the type of network that would be useful in studying automatic sign language recognition.
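To make the input-coding question concrete, the following is a minimal sketch, not the proposal's actual pipeline, of how raw glove samples might be collapsed into a cheremic feature vector. The sensor counts, field names, and the simple statistics used for each chereme are illustrative assumptions: ten finger-joint flex values from the DataGlove and a 6-DOF pose (x, y, z, azimuth, elevation, roll) from the Polhemus tracker.

    from dataclasses import dataclass
    from typing import List
    import numpy as np

    @dataclass
    class GloveSample:
        joints: np.ndarray   # shape (10,), finger-joint flex angles in degrees (assumed)
        pose: np.ndarray     # shape (6,), Polhemus x, y, z, azimuth, elevation, roll (assumed)

    def cheremic_features(frames: List[GloveSample]) -> np.ndarray:
        """Collapse one sign's frame sequence into a single feature vector covering
        the three cheremic parameters: handshape (dez), location (tab), movement (sig)."""
        joints = np.stack([f.joints for f in frames])   # (T, 10)
        poses = np.stack([f.pose for f in frames])      # (T, 6)

        # Handshape (dez): average joint flexion over the sign.
        dez = joints.mean(axis=0)

        # Location (tab): mean hand position over the sign.
        tab = poses[:, :3].mean(axis=0)

        # Movement (sig): net displacement plus mean frame-to-frame speed,
        # a crude stand-in for the path-of-motion chereme.
        displacement = poses[-1, :3] - poses[0, :3]
        speed = np.linalg.norm(np.diff(poses[:, :3], axis=0), axis=1).mean()

        return np.concatenate([dez, tab, displacement, [speed]])

Which of these components can be dropped or coarsened without losing the ability to separate the fifteen signs is exactly the "minimum information" question the project poses.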
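The proposal does not fix a particular associative network; one classical candidate is a Hopfield-style outer-product memory, sketched below under assumed parameters (15 signs encoded as bipolar codes of 256 units, well under the ~0.14N capacity limit for such memories). A quantized cheremic feature vector would serve as the probe pattern; the random codes here are placeholders.

    import numpy as np

    class HopfieldMemory:
        """Outer-product (Hebbian) associative memory. Stored patterns are
        bipolar (+1/-1) codes, one per sign; recall iterates x <- sign(W @ x)."""

        def __init__(self, patterns: np.ndarray):
            # patterns: (num_signs, n) array of +1/-1 codes
            num, n = patterns.shape
            self.patterns = patterns
            self.W = patterns.T @ patterns / n
            np.fill_diagonal(self.W, 0.0)   # no self-connections

        def recall(self, probe: np.ndarray, steps: int = 10) -> int:
            x = probe.copy()
            for _ in range(steps):
                nxt = np.sign(self.W @ x)
                nxt[nxt == 0] = 1            # break ties toward +1
                if np.array_equal(nxt, x):   # state has settled
                    break
                x = nxt
            # report the stored sign whose code best matches the settled state
            return int(np.argmax(self.patterns @ x))

    # Illustrative use: store 15 sign codes, corrupt ~10% of one code, recall it.
    rng = np.random.default_rng(0)
    codes = rng.choice([-1.0, 1.0], size=(15, 256))
    mem = HopfieldMemory(codes)
    noisy = codes[3].copy()
    noisy[rng.choice(256, size=25, replace=False)] *= -1
    print(mem.recall(noisy))   # expected: 3

The required size of such a network, i.e., how many units are needed before fifteen noisy sign codes can be stored and recalled reliably, is one of the three aims the project sets out to measure.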