The proposed research investigates how and under what conditions various aspects of social events become salient, attended, and perceived, and how this changes across development from infancy through early childhood. In particular, this proposal explores the developmental course of infants' perception of faces, voices, and amodal properties of speech (tempo, rhythm, and intensity) in unimodal auditory, unimodal visual, and multimodal (audiovisual) stimulation, using convergent measures of heart rate, eye tracking, and infant-controlled visual habituation. Predictions concerning the role of redundancy across the senses in promoting and organizing the development of attention, perception, and learning about different properties of events in multimodal and unimodal stimulation, generated from our model of selective attention (the intersensory redundancy hypothesis), will be tested.
Five specific aims systematically explore the conditions that facilitate versus attenuate learning about faces, voices, and amodal properties of speech. By investigating multimodal and unimodal perception under a single framework, we will provide a basis for integrating separate literatures and reveal important interactions between modality of stimulation (unimodal, multimodal) and attention to properties of events (redundantly versus nonredundantly specified) that cannot be detected in separate research designs. We use a novel combination of convergent measures: visual habituation and recovery reveal which properties of audiovisual speech events infants detect (faces, voices, amodal properties of speech), heart rate indexes the depth and efficiency of processing, and eye tracking reveals which features of dynamic faces infants selectively attend to under different conditions (redundant vs. nonredundant). By including measures across different levels of analysis, critical controls for amount and type of stimulation, manipulations of task difficulty, and effects of repeated exposure, we will reveal much more about the nature, basis, and processes underlying the attentional salience of social events than can be revealed by separate studies or single measures. Patterns of selective attention and learning about social events that converge with those of our prior studies of nonsocial events will suggest that general perceptual processes govern attention and learning in this domain. Our goals are to advance developmental theory in the area of attention and perception, and to establish norms for infant sensitivity to intersensory and unimodal information about critical aspects of social events that can be used for assessing atypical patterns of development and can serve as a basis for interventions. Assessing multimodal conditions will also enhance ecological validity and foster translation of findings to real-world learning contexts.
This research will reveal new information about the nature, basis, and development of attention to and perception of faces, voices, and amodal aspects of speech (tempo, rhythm, and intensity) in unimodal visual, unimodal auditory, and multimodal audiovisual stimulation in infants and young children. Convergent measures of visual habituation, heart rate, and eye tracking, characterizing selective attention to different properties of social events under different conditions, will provide a comprehensive and integrated picture of processes usually studied separately. Findings will provide a wealth of critical information about typical developmental patterns at a level of detail that is novel and necessary for identifying atypical patterns of development, including the social attention deficits characteristic of autism. Findings are readily translated to real-world settings and can serve as a basis for interventions for developmental delays.