This project is concerned with the development of theoretical and experimental bases for understanding and modeling the production of speech. The first section focuses on computational modeling of the linguistic and sensorimotor processes that shape articulatory movement patterns according to an intended utterance's phonological content. To these ends, we begin with the Linguistic Gestural Model, which embodies the articulatory phonology of Browman & Goldstein (1986, in press). This model provides an explicit, context-sensitive specification of the gestural parameter sets and intergestural relative timing information for an intended input utterance. The capabilities of this model will be expanded by replacing the current linear rule-interpreter with a multidimensional interpreter. We turn next to the Task Dynamic Model (Saltzman, 1986), which generates patterns of coordinated articulator movements corresponding to the input information specified by the Linguistic Gestural Model. The Task Dynamic Model will be extended to provide more finely differentiated and more accurately constrained control of articulatory behavior. Finally, we will explore a Hybrid Dynamical Model that incorporates a connectionist serial dynamics, in order to provide an intrinsically dynamical account of intergestural relative timing and multigesture cohesion.

The second section deals with the functional geometry of vocal-tract behavior, focusing on the relationships among articulation, vocal-tract shape, and speech acoustics. To these ends, we will employ MRI techniques to obtain static, anatomical information concerning the relationship between articulation and vocal-tract shape. Additionally, we will employ kinesiological techniques to obtain electromyographic and kinematic information on the effective degrees of freedom used dynamically by the tongue in shaping the vocal tract. Data from these MRI and kinesiological studies will be used to refine the articulatory geometry and the transformation from articulation to tract shape in the Vocal Tract Model (articulatory synthesizer; ASY). Finally, we describe computational developments in modeling articulatory-acoustic relationships, focusing on time-domain synthesis methods for modeling sound propagation and the voice source, and on connectionist, associative-network methods for modeling the direct and inverse mappings between articulation and acoustics.
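In the task-dynamic framework, each gesture is modeled as a point attractor, a damped mass-spring system defined over tract variables such as lip aperture or constriction degree. The sketch below illustrates that idea for a single critically damped gesture; the stiffness, target, and initial values are illustrative assumptions chosen for the example, not parameters of the model described above.

```python
# Minimal sketch of a single gesture as a critically damped point attractor,
# the kind of second-order dynamics assumed in task-dynamic models of speech.
# All parameter values here are illustrative, not fitted to data.

import numpy as np

def simulate_gesture(z0, target, k=200.0, dt=0.002, duration=0.4):
    """Integrate  z'' = -b z' - k (z - target)  with critical damping b = 2*sqrt(k).

    z0     : initial tract-variable value (e.g., lip aperture in mm)
    target : gestural target for the tract variable
    k      : stiffness (controls gesture speed); illustrative value
    """
    b = 2.0 * np.sqrt(k)            # critical damping: approach without overshoot
    n = int(duration / dt)
    z, v = z0, 0.0
    trajectory = np.empty(n)
    for i in range(n):
        a = -b * v - k * (z - target)   # point-attractor acceleration
        v += a * dt                     # simple Euler integration
        z += v * dt
        trajectory[i] = z
    return trajectory

# Example: a bilabial closing gesture driving lip aperture from 12 mm toward 0 mm.
traj = simulate_gesture(z0=12.0, target=0.0)
print(traj[::20])  # coarse sample of the closing trajectory
```

Critical damping is the design choice of interest here: the tract variable approaches its target smoothly and without oscillation, which is one reason point-attractor dynamics yield naturalistic-looking movement trajectories.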
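The connectionist, associative-network methods mentioned above learn forward (articulation to acoustics) and inverse (acoustics to articulation) mappings from paired data. The fragment below is a toy illustration of a forward map only: the stand-in synthesis function, network size, and training details are invented for the example and do not reflect the actual ASY-based mappings.

```python
# Toy sketch of an associative network learning a forward map from articulatory
# parameters to acoustic outputs. The "vocal tract" here is a made-up smooth
# function standing in for an articulatory synthesizer; everything is illustrative.

import numpy as np

rng = np.random.default_rng(0)

def toy_forward_map(artic):
    # Stand-in articulation-to-acoustics function (2 articulatory dims -> 2 "formants").
    x, y = artic[:, 0], artic[:, 1]
    f1 = 500.0 + 300.0 * np.sin(x) + 100.0 * y
    f2 = 1500.0 + 400.0 * np.cos(y) - 200.0 * x
    return np.stack([f1, f2], axis=1) / 1000.0   # scale to roughly unit range

# Training data: random articulatory configurations and their acoustic outputs.
X = rng.uniform(-1.0, 1.0, size=(500, 2))
Y = toy_forward_map(X)

# One-hidden-layer network trained by gradient descent on squared error.
W1 = rng.normal(0, 0.5, size=(2, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, size=(16, 2)); b2 = np.zeros(2)
lr = 0.05

for step in range(2000):
    H = np.tanh(X @ W1 + b1)          # hidden layer
    pred = H @ W2 + b2                # predicted acoustics
    err = pred - Y
    # Backpropagate the squared-error gradient.
    gW2 = H.T @ err / len(X);  gb2 = err.mean(axis=0)
    dH = (err @ W2.T) * (1.0 - H ** 2)
    gW1 = X.T @ dH / len(X);   gb1 = dH.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

print("final mse:", float((err ** 2).mean()))
```

The same associative scheme can in principle be trained in the opposite direction, from acoustics back to articulation, which is the sense in which such networks address both the direct and inverse mappings noted above.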