Autism spectrum disorders (ASD) refer to a continuum of severe neuropsychiatric disorders characterized by deficits in communication and social reciprocity and by the presence of restricted, repetitive behaviors. For typical listeners, visual information from a speaker's face influences what is heard. Individuals with ASD show reduced social gaze to faces, suggesting that much visual speech information is lost to this population. Consistent with this, children and adolescents with ASD appear to be less influenced by visual speech information. The goal of this project is to examine sensitivity to visual speech information in high-functioning, verbal children with ASD. This application uses innovative visual tracking methodology to (1) evaluate the degree to which children with ASD integrate audiovisual (AV) speech relative to typically developing (TD) controls in clear and noisy listening conditions, (2) examine ASD and TD perceivers' ability to detect asynchrony in AV speech, an ability related to AV integration in typical perceivers, and (3) assess the gaze behavior of ASD and TD perceivers toward a speaker's face. Limitations in productive language and difficulty with social interaction have led to the under-representation of low-functioning, nonverbal children with ASD in perceptual studies of speech. Thus, an additional goal of this application is to develop a training procedure that will allow low-functioning, nonverbal children with ASD and younger TD children to participate in perceptual studies of auditory and AV speech.
The careful evaluation of sensitivity to visual speech information has important practical and theoretical implications for understanding perceptual processing of speech in high-functioning children with ASD, including (1) the identification and characterization of AV integration deficits and (2) the design of targeted interventions. Further, the development of a training procedure for nonverbal, low-functioning children with ASD and for younger TD children will allow for greater understanding of perceptual processing of speech in nonverbal children and serve as a model for future intervention.
Irwin, Julia; Turcios, Jacqueline (2017) Teaching and learning guide for audiovisual speech perception: A new approach and implications for clinical populations. Lang Linguist Compass 11:92-97
Irwin, Julia; Brancazio, Lawrence; Volpe, Nicole (2017) The development of gaze to a speaking face. J Acoust Soc Am 141:3145
Irwin, Julia; Preston, Jonathan; Brancazio, Lawrence et al. (2015) Development of an audiovisual speech perception app for children with autism spectrum disorders. Clin Linguist Phon 29:76-83
Irwin, Julia R; Brancazio, Lawrence (2014) Seeing to hear? Patterns of gaze to speaking faces in children with autism spectrum disorders. Front Psychol 5:397
Irwin, Julia R; Moore, Dina L; Tornatore, Lauren A et al. (2012) Promoting emerging language and literacy during storytime. Child Libr 10:20-23
Irwin, Julia R; Frost, Stephen J; Mencl, W Einar et al. (2011) Functional activation for imitation of seen and heard speech. J Neurolinguistics 24:611-618
Irwin, Julia R; Tornatore, Lauren A; Brancazio, Lawrence et al. (2011) Can children with autism spectrum disorders "hear" a speaking face? Child Dev 82:1397-1403