Localization of objects requires that spatial information be the same regardless of the type of sensory input. For example, if a pedestrian approaches an intersection and a car honks, the pedestrian must identify the source of the sound in space and link it accurately to the corresponding visual cue in order to avoid confusion (there may be many cars) if not disaster (he may be in danger of collision). Thus the two sensory inputs must be in concordance. In the process of orienting toward the car, eye and head movements are generated, thereby providing additional sensory cues of proprioceptive (neck) and vestibular (head) origin. These must be accurately registered internally so that the brain's depiction of the car in space remains correct. Having realized that the light has not changed, the pedestrian might then reorient toward the crosswalk button and guide his hand to press this new target. Though simple in concept, this set of behaviors requires that the brain integrate auditory, visual, proprioceptive, and vestibular inputs accurately and synchronously. All inputs must be spatially concordant; that is, the sense of a particular target location must be the same for the auditory and visual inputs. These must be integrated with internal signals conveying where the eyes are in the head, where the head is in space, where the head is on the body (neck position), and where the body is relative to the ground. Errors in any of these components (sensory or motor) will result in inaccurate localization of external targets, and therefore erroneous behavior.

New methods, such as "virtual reality" technology, will be used to control and shape the visual and auditory world, while new motion control devices will allow manipulation of vestibular and somatic variables. A visual display panel within arm's reach, a virtual auditory stimulus capability, and a head and finger tracking device will be added to our sled/rotator laboratory. The sled/rotator permits precise control of subject motion in space (vestibular stimuli). The addition of virtual auditory stimuli allows us to coordinate and manipulate vestibular and auditory spatial cues smoothly and independently. Visual target presentation capabilities will allow us to independently manipulate visual targets in space as well. These combined features will provide true multisensory stimulus capabilities across three sensory modalities, with which we can assess each influence in isolation or in combination.

This past year, we set up a development lab (light-tight and near-anechoic) in order to instrument and test auditory and visual stimulus methods and tasking procedures. We have begun to assess virtual auditory techniques in direct comparison with real stimuli. We are also testing auditory-visual concordance paradigms and reaching tasks before porting the system to the sled/rotator lab.
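The chain of reference-frame signals described above can be made concrete with a simple worked example. The sketch below is purely illustrative and not part of the project's software: it assumes one-dimensional azimuth angles and purely additive reference-frame offsets, and every function name and number in it is hypothetical.

# Minimal sketch (hypothetical, 1-D azimuth angles in degrees): combining an
# eye-centered visual estimate and a head-centered auditory estimate of the
# same target into a common body-centered frame, then checking concordance.

def visual_to_body(target_re_eye_deg, eye_in_head_deg, head_on_body_deg):
    # Visual targets are sensed relative to the eye (retina); add the
    # eye-in-head and head-on-body (neck) angles to express them in
    # body-centered coordinates.
    return target_re_eye_deg + eye_in_head_deg + head_on_body_deg

def auditory_to_body(target_re_head_deg, head_on_body_deg):
    # Auditory targets are sensed relative to the head (interaural cues);
    # only the head-on-body angle is needed to reach body coordinates.
    return target_re_head_deg + head_on_body_deg

def spatially_concordant(visual_body_deg, auditory_body_deg, tol_deg=2.0):
    # The two modalities refer to the same external location only if their
    # body-centered estimates agree within some tolerance.
    return abs(visual_body_deg - auditory_body_deg) <= tol_deg

# Example: the car sits 30 deg right of the body; the eyes are 10 deg right
# in the head and the head is turned 15 deg right on the body, so the car's
# image falls 5 deg right of the fovea and its sound arrives 15 deg right of
# the head's midline.
visual_body = visual_to_body(target_re_eye_deg=5.0, eye_in_head_deg=10.0, head_on_body_deg=15.0)
auditory_body = auditory_to_body(target_re_head_deg=15.0, head_on_body_deg=15.0)
print(visual_body, auditory_body, spatially_concordant(visual_body, auditory_body))
# 30.0 30.0 True

In this toy framing, an error in any one internal signal (for example, a mis-registered eye-in-head angle) shifts one estimate but not the other, breaking the concordance and mislocalizing the external target, which is the failure mode the abstract describes.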

Agency: National Institutes of Health (NIH)
Institute: National Center for Research Resources (NCRR)
Type: Biotechnology Resource Grants (P41)
Project #: 5P41RR009283-07
Application #: 6339396
Study Section:
Project Start: 2000-08-01
Project End: 2001-07-31
Budget Start: 1998-10-01
Budget End: 1999-09-30
Support Year: 7
Fiscal Year: 2000
Total Cost: $37,685
Indirect Cost:
Name: University of Rochester
Department:
Type:
DUNS #: 208469486
City: Rochester
State: NY
Country: United States
Zip Code: 14627
Rothkopf, Constantin A; Ballard, Dana H (2013) Modular inverse reinforcement learning for visuomotor behavior. Biol Cybern 107:477-90
Fernandez, Roberto; Duffy, Charles J (2012) Early Alzheimer's disease blocks responses to accelerating self-movement. Neurobiol Aging 33:2551-60
Velarde, Carla; Perelstein, Elizabeth; Ressmann, Wendy et al. (2012) Independent deficits of visual word and motion processing in aging and early Alzheimer's disease. J Alzheimers Dis 31:613-21
Rothkopf, Constantin A; Ballard, Dana H (2010) Credit assignment in multiple goal embodied visuomotor behavior. Front Psychol 1:173
Huxlin, Krystel R; Martin, Tim; Kelly, Kristin et al. (2009) Perceptual relearning of complex visual motion after V1 damage in humans. J Neurosci 29:3981-91
Rothkopf, Constantin A; Ballard, Dana H (2009) Image statistics at the point of gaze during human navigation. Vis Neurosci 26:81-92
Jovancevic-Misic, Jelena; Hayhoe, Mary (2009) Adaptive gaze control in natural environments. J Neurosci 29:6234-8
Kavcic, Voyko; Ni, Hongyan; Zhu, Tong et al. (2008) White matter integrity linked to functional impairments in aging and early Alzheimer's disease. Alzheimers Dement 4:381-9
Droll, Jason A; Hayhoe, Mary M; Triesch, Jochen et al. (2005) Task demands control acquisition and storage of visual information. J Exp Psychol Hum Percept Perform 31:1416-38
Bayliss, Jessica D; Inverso, Samuel A; Tentler, Aleksey (2004) Changing the P300 brain computer interface. Cyberpsychol Behav 7:694-704

Showing the most recent 10 out of 28 publications