The long-term goal of Phase I and Phase II research is to provide an innovative technology for synthesizing Sign Language based on the translation of the written form of a sign into abstract formational components -- handshapes, locations, and movements -- and the compilation of graphic animations from these abstractly encoded elements. The language-processing approaches under investigation are based on SignFont, a phonemic-level orthography for Sign Language previously created by the proposing organization. This system would allow users to enter Sign Language text (including grammatically altered and novel sign forms) from a keyboard or file and see the actual signs rendered as computer animation. The technological innovations inherent in the proposed design include (1) novel approaches to the storage of Sign Language structural data in a microcomputer environment; (2) new algorithms for animating signs; (3) a user interface designed to accept phonemic-level or phonetic-level representations of signs; and (4) implementation of the software on a Macintosh computer, with its excellent graphics capabilities. Phase I will focus on applying the software to the synthesis of signs sharing components from a constrained subset of Sign Language structural parameters.
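
To make the representational approach concrete, the following is a minimal sketch, in Python, of how a phonemic-level token might be decomposed into formational components and then expanded into animation keyframes. The symbol tables, token layout, and all names here are illustrative assumptions for exposition; they do not reproduce the actual SignFont notation, the stored structural data, or the proposed animation algorithms.

    from dataclasses import dataclass

    # Toy symbol tables -- illustrative only; the actual SignFont
    # inventory of handshape, location, and movement symbols differs.
    HANDSHAPES = {"B": "flat-hand", "A": "fist", "5": "spread-hand"}
    LOCATIONS = {"@f": "forehead", "@c": "chin", "@t": "torso"}
    MOVEMENTS = {">": "rightward", "^": "upward", "~": "wiggle"}

    @dataclass
    class SignForm:
        """Abstract formational components of a single sign."""
        handshape: str
        location: str
        movement: str

    def parse_token(token: str) -> SignForm:
        # Assumes a toy token layout: one handshape symbol, a
        # two-character location symbol, and one movement symbol,
        # e.g. "B@f>" = flat hand at the forehead, moving rightward.
        return SignForm(
            handshape=HANDSHAPES[token[0]],
            location=LOCATIONS[token[1:3]],
            movement=MOVEMENTS[token[3]],
        )

    def keyframes(sign: SignForm, steps: int = 4) -> list:
        # Expand one sign into a short sequence of animation keyframes;
        # a real animator would interpolate joint positions instead of
        # carrying component labels through unchanged.
        return [
            {"t": i / (steps - 1), "hand": sign.handshape,
             "at": sign.location, "move": sign.movement}
            for i in range(steps)
        ]

    if __name__ == "__main__":
        for frame in keyframes(parse_token("B@f>")):
            print(frame)

In the proposed design, the parsed components would presumably index into the stored Sign Language structural data of innovation (1) rather than into string tables, and the keyframe expansion would be carried out by the new animation algorithms of innovation (2).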