How do listeners accommodate signal variability in connected speech in order to achieve perceptual constancy? What do listeners perceive when they perceive speech: the acoustic signal, or the vocal tract gestures that produce it? These fundamental questions about speech perception remain unresolved despite extensive investigation (Lotto & Kluender, 1998; Fowler, 2006). The proposed project lies at the intersection of these questions and investigates compensation for coarticulation, the phenomenon whereby listeners' perception of a phonetic segment is altered by the characteristics of surrounding segments. In this proposal, we focus on the question of what information listeners use to compensate for coarticulation. Understanding this specific issue would contribute directly to answering the fundamental questions above and bring clarity to the existing debate. To achieve this aim, we adopt a balanced theoretical approach by assembling a team of investigators who have studied this phenomenon from different theoretical perspectives. Through this collaboration, we investigate compensation for coarticulation by combining novel manipulations (e.g., signal-transformed non-native speech contexts, filtered speech) with extensions of established techniques (e.g., audiovisual speech, sinewave speech). This approach is valuable not only for understanding how speech perception works; it will also aid efforts to develop robust automatic speech recognition systems and inform interventions for patients who have trouble producing intact coarticulated speech (e.g., acquired apraxia of speech in adults, developmental apraxia of speech in children; Southwood, 1997; Whiteside et al., 2010).
Furthermore, this program could provide empirical guidance on the objects of speech perception, informing language-deficit interventions that rest on theory-based assumptions that it is best to target either the auditory-sensory level (e.g., Earobics; Cognitive Concepts, Inc., 1998) or the perceptual-gestural level (e.g., Lindamood & Lindamood, 2000).
The proposed project investigates how listeners deal with the effects of coarticulation and is directly relevant to informing interventions for developmental apraxia of speech (in children) and acquired apraxia of speech (in adults), disorders in which the ability to produce intact connected speech is disrupted (e.g., Whiteside et al., 2010). In addition to this clinical application, the research will directly inform efforts to develop automatic speech recognition systems.
Viswanathan, N., & Kelty-Stephen, D. G. (2018). Comparing speech and nonspeech context effects across timescales in coarticulatory contexts. Atten Percept Psychophys, 80, 316-324.
Viswanathan, N., & Stephens, J. D. W. (2016). Compensation for visually specified coarticulation in liquid-stop contexts. Atten Percept Psychophys, 78, 2341-2347.
Viswanathan, N., Kokkinakis, K., & Williams, B. T. (2016). Spatially separating language masker from target results in spatial and linguistic masking release. J Acoust Soc Am, 140, EL465.
Magnuson, J. S. (2015). Phoneme restoration and empirical coverage of interactive activation and adaptive resonance models of human speech processing. J Acoust Soc Am, 137, 1481-1492.
Viswanathan, N., Magnuson, J. S., & Fowler, C. A. (2014). Information for coarticulation: Static signal properties or formant dynamics? J Exp Psychol Hum Percept Perform, 40, 1228-1236.
Viswanathan, N., Dorsi, J., & George, S. (2014). The role of speech-specific properties of the background in the irrelevant sound effect. Q J Exp Psychol (Hove), 67, 581-589.
Viswanathan, N., Magnuson, J. S., & Fowler, C. A. (2013). Similar response patterns do not imply identical origins: An energetic masking account of nonspeech effects in compensation for coarticulation. J Exp Psychol Hum Percept Perform, 39, 1181-1192.