This subproject is one of many research subprojects utilizing the resources provided by a Center grant funded by NIH/NCRR. The subproject and investigator (PI) may have received primary funding from another NIH source, and thus could be represented in other CRISP entries. The institution listed is for the Center, which is not necessarily the institution for the investigator.

This project combines eye tracking and PET imaging with a preferential looking task in rhesus macaques to assess the neural substrates underlying cross-modal integration of socially salient cues, a skill thought to be critical for the self-regulation of social behavior. We have purchased all equipment, acquired all of the monkey face stimuli and sound vocalizations, and set up the equipment for recording visual search patterns while the animals passively view congruent stimuli (vocalizations corresponding to facial emotions) and incongruent stimuli (vocalizations not corresponding to facial emotions). Pilot testing has been completed on two normal adult monkeys, and their data replicate previous findings that normally developing monkeys exhibit an innate ability to integrate the auditory and visual components of complex social cues. With these benchmarks established, we have now begun to test the experimental animals with neonatal lesions of the amygdala and orbital frontal cortex, along with the sham-operated controls. During the current year, we also purchased the eye-tracking system and are currently training 6 monkeys on the paradigm. Initial tests indicate that animals rapidly habituate to stimulus presentation. We therefore made significant adjustments to the schedule of stimulus presentation and included additional stimuli to reduce habituation during the 25-minute testing period required for PET imaging. The first subject will be ready for imaging trials during the coming year.