With National Science Foundation support, linguist Dr. Ronnie B. Wilbur and electrical engineer Dr. Avinash C. Kak will conduct two years of integrated linguistic-computational research on the problem of automatic recognition of American Sign Language (ASL). This novel approach takes advantage of the fact that signs are composed of components (handshape, place of articulation, orientation, movement, and possible nonmanuals, such as face/head position), in much the same way that words are composed of consonants and vowels. Instead of treating sign recognition as a process of matching the input against a stored set of lexical signs, the project treats it as the end result of several separate but interrelated component recognition procedures, of which the current project focuses on handshape recognition. The approach aims to identify an input handshape from among the set of possible ASL handshapes. When coupled with similar procedures that identify place of articulation, orientation, movement, and nonmanuals, the composite identified set of components should yield a single lexical sign in a dictionary look-up, much like the original dictionary created by Stokoe and colleagues in 1965. The project uniquely integrates several areas of basic research: linguistic research on the structure of signs in ASL, psycholinguistic research on human perception of ASL, and advanced techniques from statistical pattern recognition and computer vision for analyzing input from stereo images of signers. The procedure is designed so that linguistic information about ASL handshapes can be used to assist the computer in deciding which handshape it "sees".
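The component-based dictionary look-up described above can be sketched as follows. This is a minimal illustration, not the project's implementation: the component labels and lexical entries are invented for the example, and a real system would tolerate uncertainty in each recognizer's output rather than requiring an exact match.

```python
# Hypothetical sketch of component-based sign look-up: each lexical sign
# is stored under a tuple of its formational components, so independent
# recognizers for handshape, place of articulation, orientation,
# movement, and nonmanuals can jointly retrieve a single sign.
# All component labels and lexicon entries below are illustrative.

from typing import NamedTuple, Optional


class SignComponents(NamedTuple):
    handshape: str
    place: str
    orientation: str
    movement: str
    nonmanual: str


# Toy lexicon keyed by component bundles (invented entries).
LEXICON = {
    SignComponents("B", "chin", "palm-in", "outward-arc", "neutral"): "THANK-YOU",
    SignComponents("1", "temple", "palm-down", "contact", "neutral"): "THINK",
}


def lookup_sign(components: SignComponents) -> Optional[str]:
    """Return the lexical sign whose stored components match, if any."""
    return LEXICON.get(components)
```

For example, once the separate recognizers have identified the handshape "1", place "temple", orientation "palm-down", movement "contact", and a neutral nonmanual, `lookup_sign` returns the single entry "THINK"; an unattested component bundle returns `None`.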
Two scientific questions motivate this research. First, the project contributes to our understanding of the general phonetic and phonological organization of ASL when computers and humans are compared on the formational properties they use to perceive and categorize different handshapes. Second, the project contributes to the development of algorithmic approaches to computer vision and pattern recognition of moving 3-D deformable objects, a theoretical issue in itself, by using as the objects of analysis the set of ASL signs that move through space and change handshapes. In addition to its scientific merit, this project contributes to the long-term development of an ASL-English machine translation device. Such a device would support interactions between signers and speakers in practical settings, including the workplace and classrooms.