The long-term goal of this project is to wed state-of-the-art technology for imaging the vocal tract with a linguistically informed analysis of vocal tract constriction actions in order to understand the cognitive control and production of the compositional units of spoken language. We have developed the use of real-time MRI to illuminate the inherently dynamic speech production process. Our approach is to observe the time-varying changes in vocal tract shaping and to understand how these emerge lawfully from the combined effects of multiple constriction events distributed over space (subparts of the tract) and over time. An understanding of dynamic vocal tract actions as fundamental to linguistic organization will do much to advance the field's current, essentially static, approach to describing speech. In the previous (and first) funding period, our team developed and refined a novel real-time MRI acquisition capability that has made veridical real-time movies of speech production possible for the first time without X-rays. Data show clear real-time movements of the lips, tongue, and velum, providing exquisite information about the spatiotemporal properties of speech gestures in both the oral and pharyngeal portions of the vocal tract. We also developed novel noise-mitigated, image-synchronized strategies to record speech in situ during imaging, as well as signal processing strategies for deriving linguistically meaningful measures from the data (Bresch and Narayanan, 2009). We have demonstrated the utility of this approach for linguistic studies of speech communication that were hitherto not possible (e.g., Byrd, Tobin, Bresch, and Narayanan, 2009; Bresch et al., 2008). Building on these foundational efforts, we situate the specific research aims of our competing renewal proposal as follows.
The specific aims of this proposal are to further develop the technology and analysis platform of real-time MRI, which provides the scaffolding for the project, while pursuing speech production studies with the overarching theme of examining the decomposition of speech into cognitively controlled action units, or gestures. Specifically, we aim to investigate the compositionality of speech in three domains, each an area of study that is not approachable using exclusively acoustic speech data, because it requires direct access to dynamic information from the entire vocal tract, which can only be supplied by real-time MRI.
Our specific aims examine (i) compositionality in space: deployment of concurrent constriction events distributed spatially, that is, over distinct constriction effectors within the vocal tract; (ii) compositionality in time: deployment of constriction events distributed temporally; and (iii) compositionality in cognition: deployment of constriction events during speech planning that mirror those observed during speech production. We propose to use the real-time MRI approach we have developed to advance our understanding of all three of these aspects of linguistic structuring. Our approach to decomposing speech shaping into multiple discrete events, in space and over time, can be further validated by demonstrating that we can capture the observed data time-functions using a computational model having only discrete gestural input. To do this, we will employ a computational implementation of Articulatory Phonology and Task Dynamics (TaDA). The model is particularly appropriate because it provides a hypothesized ensemble of gestures arrayed over time for any input utterance. The model is biologically plausible and produces as its output explicit time-functions of constriction events in the vocal tract, which is precisely what we measure directly with real-time MRI. We anticipate a highly synergistic relation between model and data that can bootstrap our understanding of the structure of speech. The model has not, to this point, been optimized using real data, as the appropriate data did not exist before real-time MRI. And, in turn, the use of real-time MRI as a tool in understanding speech depends on having an analytical procedure for relating the observed shaping changes to underlying (multiple) controls, which is what the model provides.
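The core idea that discrete gestural input can generate continuous constriction time-functions can be illustrated with a minimal sketch in the spirit of Task Dynamics (this is an illustrative toy, not the TaDA implementation; the parameter values and function names are hypothetical): while a gesture's activation is on, the constriction variable is driven toward its target by a critically damped second-order point attractor.

```python
import numpy as np

def constriction_trajectory(target, activation, x0=10.0, omega=30.0, dt=0.001):
    """Simulate one constriction variable (e.g., lip aperture in mm) as a
    critically damped second-order point attractor gated by a discrete 0/1
    gestural activation time series. Toy sketch with hypothetical values."""
    x, v = x0, 0.0
    traj = []
    for a in activation:
        # While active (a=1), accelerate toward the gestural target;
        # damping (critical: 2*omega) is always on, so the system settles.
        acc = a * omega**2 * (target - x) - 2.0 * omega * v
        v += acc * dt          # semi-implicit Euler step for stability
        x += v * dt
        traj.append(x)
    return np.array(traj)

# One gesture active from 100 ms to 300 ms of a 500 ms stretch of speech
act = np.zeros(500)
act[100:300] = 1.0
traj = constriction_trajectory(target=0.0, activation=act)
# traj is the continuous constriction time-function produced by a
# single discrete gestural activation interval.
```

The point of the sketch is the mapping direction the aims exploit: the model's input is discrete (an ensemble of gestural activations), while its output is exactly the kind of continuous constriction time-function that real-time MRI measures directly.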
The project's final specific aim is to continue to advance our real-time MRI technical approach for investigating the physical realization of phonological structure by: (i) improving image signal-to-noise ratio through a novel custom 16-receiver head-and-neck coil, (ii) doubling the 2D acquisition frame rate through novel pulse sequences in conjunction with new joint acquisition-processing optimization, and (iii) developing fast 3D imaging using more sophisticated pulse sequences to supplement the single-plane fast imaging work. These challenges will be pursued in tandem with the design of data-driven analyses suitable for distilling the high-dimensional information provided by real-time MRI together with the synchronized acoustic speech signal, which is critical for deriving linguistically meaningful measures. Specifically, we pursue robust and faster image segmentation and articulatory tracking, as well as methods for dynamical modeling using the derived constriction time-series data.
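To make the segmentation aim concrete, the following toy sketch shows one common ingredient of air-tissue boundary extraction from a midsagittal MR image: locating, along a single analysis gridline, where the intensity profile crosses from dark airway to bright tissue. This is a hedged illustration only; the function name, the threshold value, and the synthetic profile are our own assumptions, not the project's actual segmentation algorithm.

```python
import numpy as np

def boundary_crossing(profile, threshold=0.5):
    """Locate the air-tissue boundary along one gridline as the first
    sample where normalized intensity reaches a threshold, refined by
    linear interpolation between samples. Toy illustration only."""
    above = np.flatnonzero(profile >= threshold)
    if above.size == 0:
        return None  # gridline lies entirely within the airway
    i = int(above[0])
    if i == 0:
        return 0.0   # tissue begins at the first sample
    # Interpolate between sample i-1 (below threshold) and i (above)
    frac = (threshold - profile[i - 1]) / (profile[i] - profile[i - 1])
    return (i - 1) + frac

# Synthetic normalized intensity profile: dark airway, then bright tissue
profile = np.array([0.1, 0.1, 0.2, 0.4, 0.8, 0.9, 0.9])
edge = boundary_crossing(profile)  # sub-sample boundary location
```

Repeating such a measurement on every gridline of every frame yields exactly the high-dimensional constriction time-series data that the proposed dynamical modeling methods would then operate on.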
The vocal tract is the universal human instrument, played with great dexterity and skill in the production of spoken language. In order to produce the elegant acoustic structure of speech, the linguistically significant actions of the vocal tract must be choreographed with remarkable spatiotemporal precision. The vocal tract airway is also critically involved in functions such as swallowing and breathing. Disruptions to speech and other airway functions can have significant effects on the health, well-being, and overall quality of life of individuals. The proposed effort's theoretical, experimental, and methodological approaches, focusing on the dynamics of vocal tract shaping, are hence significant along several dimensions. The unique capability our team has created for direct imaging of the moving vocal tract with MRI, with reconstruction rates of up to 24 images per second and synchronized audio recording, has made veridical real-time movies of speech production possible for the first time without X-rays. The present proposal aims to further develop the technology and analysis platform of real-time MRI while pursuing speech production studies with the overarching linguistic goal of understanding the composition of speech from cognitively controlled action units, or gestures. Specifically, we aim to investigate the compositionality of speech in three domains: in space, in time, and in cognition. Each is an area of study not approachable using exclusively acoustic speech data, because the question of compositionality requires direct access to dynamic information about articulation along the entire vocal tract, which can only be supplied by real-time MRI. In addition to illuminating details of unimpaired speech production, the proposed work provides both technological and theoretical tools for looking at clinical disorders in a new way.
In disordered speech it is often critical to have direct articulatory data in order to accurately describe the spoken language deficit. Further, the theoretical framework that pursues an understanding of speech as composed of cognitively planned action units creates a scientific foothold for evaluating the dissolution and lack of coherence commonly found in disordered speech articulation. Beyond speech production studies, the work has potential broad impact on clinical applications such as those related to swallowing disorders, sleep apnea, and recovery of speech function after stroke or surgery, e.g., glossectomy. Further, because speech presents the only example of rapid, cognitively controlled, internal movements of the body, the unique challenges of speech production imaging offer the wider biomedical imaging community traction for advances that have already improved temporal and spatial image resolution, advances with potential import for cardiac and other imaging. Scientific knowledge of the orchestration of articulatory activity that creates speech is a necessary element in understanding the human communication process. And we feel it is no exaggeration to say that the advent of real-time MRI for speech has initiated a dramatic change in the way speech production research is conducted.
Bone, Daniel; Li, Ming; Black, Matthew P. et al. (2014) Intoxicated speech detection: a fusion framework with speaker-normalized hierarchical functionals and GMM supervectors. Comput Speech Lang 28:
Narayanan, Shrikanth; Toutios, Asterios; Ramanarayanan, Vikram et al. (2014) Real-time magnetic resonance imaging and electromagnetic articulography database for speech production research (TC). J Acoust Soc Am 136:1307
Ramanarayanan, Vikram; Lammert, Adam; Goldstein, Louis et al. (2014) Are articulatory settings mechanically advantageous for speech motor control? PLoS One 9:e104168
Kim, Jangwon; Lammert, Adam C.; Ghosh, Prasanta Kumar et al. (2014) Co-registration of speech production datasets from electromagnetic articulography and real-time magnetic resonance imaging. J Acoust Soc Am 135:EL115-21
Lammert, Adam; Proctor, Michael; Narayanan, Shrikanth (2013) Morphological variation in the adult hard palate and posterior pharyngeal wall. J Speech Lang Hear Res 56:521-30
Zhu, Yinghua; Kim, Yoon-Chul; Proctor, Michael I. et al. (2013) Dynamic 3-D visualization of vocal tract shaping during speech. IEEE Trans Med Imaging 32:838-48
Lammert, Adam; Goldstein, Louis; Narayanan, Shrikanth et al. (2013) Statistical methods for estimation of direct and differential kinematics of the vocal tract. Speech Commun 55:147-161
Ramanarayanan, Vikram; Goldstein, Louis; Byrd, Dani et al. (2013) An investigation of articulatory setting using real-time magnetic resonance imaging. J Acoust Soc Am 134:510-9
Proctor, Michael; Bresch, Erik; Byrd, Dani et al. (2013) Paralinguistic mechanisms of production in human "beatboxing": a real-time magnetic resonance imaging study. J Acoust Soc Am 133:1043-54
Ghosh, Prasanta K.; Narayanan, Shrikanth S. (2013) On smoothing articulatory trajectories obtained from Gaussian mixture model based acoustic-to-articulatory inversion. J Acoust Soc Am 134:EL258-64
Showing the most recent 10 out of 22 publications