A major aim of Project 2 is to continue development of our theoretical perspective on speech, according to which phonetic gestures of the vocal tract are at once the primitives of a linguistic theory of phonology, the smallest units of speech production, and the smallest perceivable units of a speech message. A view in which the primitive units are the same in these three domains has, we believe, substantial advantages over other conceptualizations. Most notably, it permits a view in which units of a linguistic message are literally produced in the vocal tract of a speaker and therefore structure the acoustic speech signal directly. Because the units of the message are public in this way, rather than wholly covert mental categories, listeners can recover them directly from the signal.

We propose to develop the theory along four lines. In the first, we investigate word-internal organizations of phonetic properties superordinate to the gesture (organizations of syllables into onsets and rhymes, for example) and test how those organizations are realized gesturally. A second research line examines the effects of variation in speaking style on gestural organization, together with their acoustic and perceptual consequences. Whereas the effects of style variation on gestural organization are, we propose, simple, their acoustic consequences are complex. In particular, we will test the view that just two rules govern changes in gestural organization in a shift from formal, read speech to faster, more casual or spontaneous speech: gestures may slide in relative time, or they may be reduced in magnitude (illustrated schematically below). Because the acoustic consequences of these simple gestural changes are complex, we further propose to investigate the extent to which, and the manner in which, phonological contrasts are signaled acoustically to listeners in spontaneous speech, where considerable gestural sliding and reduction may take place, as compared to read speech. A third line of investigation looks specifically at how an acoustic speech signal provides information to a listener. We test the view that the listener "parses" the acoustic signal into acoustic constellations whose components are consequences of a phonetic gesture. In related work, we use our computational gestural model to show how gestures can be recovered from the acoustic signal. In a final research line, we test the hypothesis that, with respect to the recovery of gestures, perception of speech is not special, because in all of auditory perception listeners use acoustic signals as information for their causal sources in the environment.
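The two proposed rules can be made concrete with a minimal sketch. The following Python fragment is illustrative only: the Gesture representation, its field names, and the example timing and magnitude values are assumptions introduced here for exposition, not the project's actual computational gestural model.

```python
# A minimal sketch of the two hypothesized rules of casual-speech
# reorganization: gestures may SLIDE in relative time or be REDUCED in
# magnitude. All representations and parameter values are illustrative.

from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Gesture:
    tract_variable: str   # e.g., "tongue tip closure", "lip aperture"
    onset: float          # activation onset, in normalized time
    duration: float       # length of the activation interval
    magnitude: float      # constriction degree (1.0 = canonical read speech)

def slide(g: Gesture, dt: float) -> Gesture:
    """Rule 1: shift a gesture's relative timing, leaving its form intact.
    Sliding can increase overlap with neighboring gestures."""
    return replace(g, onset=g.onset + dt)

def reduce(g: Gesture, scale: float) -> Gesture:
    """Rule 2: shrink a gesture's magnitude (0 < scale <= 1). The gesture
    is still produced, but its acoustic consequences may be obscured."""
    return replace(g, magnitude=g.magnitude * scale)

# Hypothetical example: in casual speech, a word-final tongue-tip gesture
# slides later in time (overlapping a following gesture) and is reduced,
# so it remains in the gestural score even if it becomes hard to hear.
t_closure = Gesture("tongue tip closure", onset=0.40, duration=0.10, magnitude=1.0)
casual_t = reduce(slide(t_closure, dt=0.05), scale=0.6)
print(casual_t)
```

On this sketch, the shift from read to spontaneous speech changes only two parameters per gesture (relative timing and magnitude), which is what makes the gestural account simple even though the resulting acoustic consequences, produced by overlapping and weakened constrictions, are complex.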