Autism spectrum disorders (ASD) refer to a continuum of severe neuropsychiatric disorders characterized by deficits in communication and social reciprocity and by the presence of restricted, repetitive behaviors. For typical listeners, visual information from a speaker's face influences what is heard. Individuals with ASD show reduced social gaze to faces, suggesting that much visual speech information is lost to this population. Accordingly, children and adolescents with ASD appear to be less susceptible to visual speech information. The goal of this project is to examine sensitivity to visual speech information in high-functioning, verbal children with ASD. This application uses innovative visual tracking methodology to (1) evaluate the degree to which children with ASD integrate audiovisual (AV) speech relative to typically developing (TD) controls in clear and noisy listening conditions, (2) examine ASD and TD perceivers' ability to detect asynchrony in AV speech, an ability related to AV integration in typical perceivers, and (3) assess the gaze behavior of ASD and TD perceivers toward a speaker's face. Limitations in productive language and difficulty with social interaction have led to the under-representation of low-functioning, non-verbal children with ASD in perceptual studies of speech. Thus, an additional goal of this application is to develop a training procedure for low-functioning, non-verbal children with ASD and for younger TD children that will allow them to participate in perceptual studies of auditory and AV speech.

Public Health Relevance

The careful evaluation of sensitivity to visual information for speech has important practical and theoretical implications for understanding perceptual processing of speech in high-functioning children with ASD, including (1) the identification and characterization of AV integration deficits and (2) the design of targeted interventions. Further, the development of a training procedure for low-functioning, non-verbal children with ASD and for younger TD children will allow for greater understanding of perceptual processing of speech in non-verbal children and serve as a model for future intervention.

Agency
National Institutes of Health (NIH)
Institute
National Institute on Deafness and Other Communication Disorders (NIDCD)
Type
Small Research Grants (R03)
Project #
5R03DC007339-02
Application #
7264553
Study Section
Special Emphasis Panel (ZDC1-SRB-Y (52))
Program Officer
Cooper, Judith
Project Start
2006-09-01
Project End
2009-08-31
Budget Start
2007-09-01
Budget End
2008-08-31
Support Year
2
Fiscal Year
2007
Total Cost
$80,454
Indirect Cost
Name
Haskins Laboratories, Inc.
Department
Type
DUNS #
060010147
City
New Haven
State
CT
Country
United States
Zip Code
06511
Irwin, Julia; Turcios, Jacqueline (2017) Teaching and learning guide for audiovisual speech perception: A new approach and implications for clinical populations. Lang Linguist Compass 11:92-97
Irwin, Julia; Brancazio, Lawrence; Volpe, Nicole (2017) The development of gaze to a speaking face. J Acoust Soc Am 141:3145
Irwin, Julia; Preston, Jonathan; Brancazio, Lawrence et al. (2015) Development of an audiovisual speech perception app for children with autism spectrum disorders. Clin Linguist Phon 29:76-83
Irwin, Julia R; Brancazio, Lawrence (2014) Seeing to hear? Patterns of gaze to speaking faces in children with autism spectrum disorders. Front Psychol 5:397
Irwin, Julia R; Moore, Dina L; Tornatore, Lauren A et al. (2012) Promoting Emerging Language and Literacy During Storytime. Child Libr 10:20-23
Irwin, Julia R; Frost, Stephen J; Mencl, W Einar et al. (2011) Functional activation for imitation of seen and heard speech. J Neurolinguistics 24:611-618
Irwin, Julia R; Tornatore, Lauren A; Brancazio, Lawrence et al. (2011) Can children with autism spectrum disorders "hear" a speaking face? Child Dev 82:1397-403