Everyday speech is often produced imperfectly, requiring listeners to navigate a stream of input interrupted by disfluencies, which can affect how spoken language is processed and ultimately interpreted. Disfluencies may impede communication in a wide range of settings, from customer service to medical care to public safety. Some disfluencies are unproductive, while others, including self-corrections, may play a crucial role in managing and maintaining the communication of information. In general, however, little is known about how self-corrections produced by healthy adults and by individuals with a speech disorder such as stuttering are processed. Two contrasting theoretical approaches to this question are the ambiguity resolution and noisy channel models of disfluency processing. The main aim of this project is to investigate disfluency processing with an innovative, mixed-methods approach using utterances spoken by healthy individuals and by people who stutter. The experiments use both naturally generated disfluent speech and constructed examples, with the constructed examples elicited from both people who do and people who do not stutter. The experimental paradigms to be used are the most sophisticated available for examining the online processing of speech: recording eye movements to depicted referents mentioned in spoken utterances (the Visual World Paradigm), and recording event-related potentials (ERPs) in response to specific properties of spoken utterances containing self-corrections. Participants will be healthy college students from the University of South Carolina and University of South Florida undergraduate communities.
The Specific Aims are: (1) to use neural methods and eyetracking to systematically investigate the factors that allow listeners to process self-correction disfluencies in real time; and (2) to further discriminate between the ambiguity resolution and noisy channel models by using speech from speakers with different production systems: people who do and do not stutter. An additional goal is to provide preliminary information about the possibility of assessing therapeutically modified speech using a methodology based on measures of online spoken language processing. The project is highly innovative because of (a) the focus on self-correction disfluencies and the use of online techniques (eyetracking and electrophysiology) to examine how they are processed as they are encountered in real time; (b) the testing of theories which assume that the same tools used by comprehenders to process regular utterances are also used to handle self-corrections, avoiding the need for special-purpose mechanisms; and (c) the examination of how speech produced by people who stutter is processed in real time by healthy adults, which may lead to an implicit, performance-based instrument for assessing the success of therapeutic interventions to treat stuttering.
Disfluencies in everyday speech can affect communication in a wide range of settings, but little is known about how they are processed, and even less about how therapeutically modified speech (e.g., speech produced by people who stutter after undergoing speech therapy) is processed. The goal of this project is to investigate these questions using an innovative, mixed-methods approach that contrasts the ambiguity resolution and noisy channel models of disfluency processing.