Recent measurements at UC Davis and Lawrence Livermore National Laboratory have shown that very low power, low cost radars can be used to measure the positions and motions of the human speech articulators as speech is produced. These measurements provide separate streams of data on the motion of the vocal folds (i.e., voiced excitation information) and on the soft palate, jaw, tongue, and lips. This project is designed to provide examples of such data, from a representative sample of speakers, to the speech community for analysis. The data include recordings from two or more speech-optimized radars with simultaneous acoustic speech, obtained in several noise environments. In addition, studies of pitch-synchronous processing, using the radar-derived excitation information together with ARMA (autoregressive moving average) methods, will be provided for selected voiced speech segments.
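
As a rough illustration of how the radar excitation channel could support pitch-synchronous ARMA analysis, the sketch below segments the acoustic signal at glottal closure instants estimated from the vocal-fold radar channel and fits an ARMA(p, q) model to each pitch period. This is a minimal sketch under stated assumptions: the signal names, sample rate, GCI detection heuristic, and model orders are illustrative choices, not values taken from the project data.

```python
# Hypothetical pitch-synchronous ARMA sketch: the radar vocal-fold channel
# supplies glottal closure instants (GCIs), and an ARMA(p, q) model is fit to
# each pitch period of the acoustic speech signal. All names and parameters
# here are assumptions for illustration.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

def detect_gcis(radar_vocal_fold, fs, min_f0=60.0):
    """Estimate glottal closure instants from the radar vocal-fold channel.

    GCIs are approximated here as local maxima of the radar signal separated
    by at least one period of the lowest expected pitch (a crude heuristic).
    """
    min_gap = int(fs / min_f0)
    candidates = np.flatnonzero(
        (radar_vocal_fold[1:-1] > radar_vocal_fold[:-2])
        & (radar_vocal_fold[1:-1] >= radar_vocal_fold[2:])
    ) + 1
    gcis = []
    for idx in candidates:
        if not gcis or idx - gcis[-1] >= min_gap:
            gcis.append(idx)
    return np.asarray(gcis)

def pitch_synchronous_arma(speech, gcis, p=10, q=2):
    """Fit one ARMA(p, q) model per pitch period [gci_k, gci_{k+1})."""
    models = []
    for start, stop in zip(gcis[:-1], gcis[1:]):
        frame = speech[start:stop].astype(float)
        if frame.size <= p + q + 2:        # skip periods too short to fit
            continue
        frame = frame - frame.mean()       # remove DC offset per period
        res = ARIMA(frame, order=(p, 0, q), trend="n").fit()
        models.append({"start": int(start),
                       "ar": res.arparams,   # AR (pole) coefficients
                       "ma": res.maparams})  # MA (zero) coefficients
    return models

if __name__ == "__main__":
    fs = 8000                                # assumed sample rate
    t = np.arange(fs) / fs
    speech = np.random.randn(fs)             # placeholder acoustic channel
    radar = np.sin(2 * np.pi * 120 * t)      # placeholder vocal-fold channel
    gcis = detect_gcis(radar, fs)
    print(f"{len(pitch_synchronous_arma(speech, gcis))} pitch periods modeled")
```

The design choice of segmenting at radar-derived GCIs, rather than using fixed-length frames, is what the excitation channel makes possible: each analysis window then covers exactly one period of voiced excitation, so the fitted ARMA coefficients are not smeared across glottal cycles.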