Humans make decisions and perform actions in situations in which all aspects of the decision or action are potentially stochastic. There are five components to the planning of an action based on sensory information. First, the subject has prior information about the state of the environment, including the current positions and velocities of nearby objects and of the subject's own body, which can be summarized as a probability distribution across possible world states. Second, the subject has sensory input about the current state of the environment, which is uncertain due to physical and neural noise. Third, these two sources of information are combined to decide on an intended action (a button press, an arm or eye movement, or a complex plan that includes responses to potential subsequent sensory inputs). Fourth, the resulting action can differ from the intended one due to motor noise. Finally, the interaction of the resulting action with the current environment leads to a consequence (a loss or gain), and this consequence may be uncertain as well. As a result of all these stochastic components, visual tasks and movement planning require a calculation that is equivalent to decision-making under risk. In our recent work, we have demonstrated that humans are nearly optimal in visuomotor tasks, in that they maximize expected gain, and we have identified other circumstances in which human behavior is suboptimal. We propose experiments to better understand the nature of human behavior in visual and visuomotor tasks. We often use tasks with an experimenter-specified reward/penalty structure; this novel approach allows us to compare behavior with the optimal strategy that maximizes expected gain. We ask the following questions and propose experiments to address each. (1) How is behavior planned in visually guided movements?
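The computation described above, choosing a movement plan that maximizes expected gain given motor noise and an experimenter-specified reward/penalty structure, can be sketched as follows. This is a minimal illustration, not the proposal's actual task: the region locations, payoffs, and noise level are assumed values, and `expected_gain` and `best_aim` are hypothetical names.

```python
import numpy as np

# Minimal sketch of movement planning as decision-making under risk:
# a 1-D pointing task with a reward region, a partially nearby penalty
# region, and Gaussian motor noise around the aimed point.
# All numbers below are illustrative assumptions.
REWARD, PENALTY = 100.0, -500.0
reward_center, penalty_center = 0.0, -1.0  # region centers (arbitrary units)
half_width = 0.5                           # half-width of each region
motor_sd = 0.4                             # SD of Gaussian motor noise

def expected_gain(aim, n=200_000, seed=0):
    """Monte Carlo estimate of expected gain when aiming at `aim`."""
    x = np.random.default_rng(seed).normal(aim, motor_sd, size=n)
    g = REWARD * (np.abs(x - reward_center) <= half_width)
    g = g + PENALTY * (np.abs(x - penalty_center) <= half_width)
    return g.mean()

# The aim point that maximizes expected gain is shifted away from the
# penalty region rather than centered on the reward region.
aims = np.linspace(-0.5, 1.0, 61)
best_aim = max(aims, key=expected_gain)
```

Observed human aim points in such tasks can then be compared with `best_aim`: a performer who maximizes expected gain shifts aim away from the penalty region by an amount that depends on his or her own motor variability.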
We will investigate the coordinate systems used to encode visually guided actions and how the encoding of movements affects the ability of the visuomotor system to adapt to changing conditions. (2) What does human performance in visual search tasks with clearly defined gains and losses tell us about the encoding of visual patterns and visual uncertainty? We will compare human performance in visual search tasks to ideal-observer models that maximize expected gain in situations with asymmetric payoffs. The results of these experiments will enable us to distinguish different hypotheses about the encoding of visual information in the periphery. In both aims we use patterns of visuomotor performance (while performing a reach, saccade, or keypress) to learn about the underlying encoding of visual stimuli, uncertainty, and visually guided movement. These studies will shed light on the way in which visual stimuli and movements are encoded, and on how vision is used to guide action.
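An ideal observer for a detection task with clearly defined gains and losses sets its decision criterion on the likelihood ratio, shifted according to the payoff asymmetry. The sketch below shows this benchmark for a yes/no task with equal-variance Gaussian internal responses; the payoff matrix, `d_prime`, and the function names are illustrative assumptions, not the proposal's specific search displays.

```python
from math import erf, log, sqrt

# Hypothetical yes/no detection task: the internal response is Gaussian
# with mean 0 (target absent) or d' (target present), unit variance.
# Payoff entries below are illustrative assumptions.
d_prime = 1.0
p_present = 0.5
V_hit, V_miss = 1.0, 0.0
V_cr, V_fa = 1.0, -3.0  # false alarms penalized more heavily than misses

# An observer maximizing expected gain responds "present" when the
# likelihood ratio exceeds beta; for equal-variance Gaussians this is
# equivalent to a criterion c on the internal response itself.
beta = ((1 - p_present) / p_present) * (V_cr - V_fa) / (V_hit - V_miss)
criterion = log(beta) / d_prime + d_prime / 2  # here log(4) + 0.5, about 1.89

def Phi(z):
    """Standard normal CDF."""
    return 0.5 * (1 + erf(z / sqrt(2)))

def expected_gain(c):
    """Expected gain of responding 'present' whenever the response exceeds c."""
    p_hit, p_fa = 1 - Phi(c - d_prime), 1 - Phi(c)
    return (p_present * (V_hit * p_hit + V_miss * (1 - p_hit))
            + (1 - p_present) * (V_fa * p_fa + V_cr * (1 - p_fa)))
```

Because false alarms are penalized here, the optimal criterion sits well above the neutral criterion d'/2; comparing human criteria against this benchmark under asymmetric payoffs is one way to probe how visual uncertainty and payoffs are encoded.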
The proposed work benefits public health by characterizing the behavioral and neural mechanisms involved in making perceptual decisions and in using sensory information to control movements. We show how optimal decisions and movement plans must take into account prior knowledge, the uncertainty of visual information, the variability of motor responses, and the consequences of action. A variety of medical conditions can impact both the reliability of visual information (e.g., cataract, amblyopia) and the quality of motor output and response to risk (e.g., Parkinson's disease, Huntington's disease, stroke). The proposed research will improve our understanding of how visual patterns and planned movements are encoded so as to optimize a perceptual decision or movement plan, and thus can help in the design of rehabilitative plans when sensory input or motor output is disrupted (changes in bias, gain, and/or variability) by disease or other health-related conditions.