Humans make decisions and perform actions in situations in which all aspects of the decision or action are potentially stochastic. There are five components to the planning of an action based on sensory information. First, the subject has prior information about the state of the environment, including the current positions and velocities of nearby objects and of the subject's own body, which can be summarized as a probability distribution across possible world states. Second, the subject has sensory input about the current state of the environment, which is uncertain due to physical and neural noise. Third, these two sources of information are combined to decide on an intended action (a button press, an arm or eye movement, or a complex plan that includes responses to potential subsequent sensory inputs). Fourth, the resulting action can differ from the intended one due to motor noise. Finally, the interaction of the resulting action with the current environment leads to a consequence (a loss or gain), and this consequence may be uncertain as well. As a result of all these stochastic components, visual tasks and movement planning require a calculation that is equivalent to decision-making under risk. In our recent work, we have demonstrated that humans are nearly optimal in visuomotor tasks, in that they maximize expected gain, and we have identified other circumstances in which human behavior is suboptimal. We propose experiments to better understand the nature of human behavior in visual and visuomotor tasks. We often use tasks with an experimenter-specified reward/penalty structure; this novel approach allows us to compare behavior with the optimal strategy that maximizes expected gain. We ask the following questions and propose experiments to address each. (1) How is behavior planned in visually guided movements? We will investigate the coordinate systems used to encode visually guided actions and how the encoding of movements affects the ability of the visuomotor system to adapt to changing conditions. (2) What does human performance in visual search tasks with clearly defined gains and losses tell us about the encoding of visual patterns and visual uncertainty? We will compare human performance in visual search tasks to ideal-observer models that maximize expected gain in situations with asymmetric payoffs. The results of these experiments will enable us to distinguish different hypotheses about the encoding of visual information in the periphery. In both aims, we use patterns of visuomotor performance (while performing a reach, saccade, or keypress) to learn about the underlying encoding of visual stimuli, uncertainty, and visually guided movements. These studies will shed light on the way in which visual stimuli and movements are encoded, and on how vision is used to guide action.
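As a purely illustrative companion to this framework, the sketch below implements a one-dimensional version of the expected-gain computation in Python: a Gaussian prior is combined with a noisy measurement into a posterior, motor noise scatters the executed endpoint around the chosen aim point, and the aim point is selected to maximize expected gain under an asymmetric reward/penalty layout. All parameter values, the region geometry, and the function name expected_gain are hypothetical assumptions made for illustration, not specifications from the project.

```python
# Minimal 1-D sketch of movement planning as decision-making under risk.
# All numeric values (noise levels, reward/penalty geometry, payoffs) are
# illustrative assumptions, not values taken from the proposed experiments.
import numpy as np
from scipy.stats import norm

# 1. Prior belief about target position (a possible world state).
prior_mean, prior_sd = 0.0, 2.0

# 2. A single noisy sensory measurement of the target (hypothetical value).
sens_sd = 1.0
measurement = 0.8

# 3. Combine prior and likelihood (conjugate Gaussian update).
post_var = 1.0 / (1.0 / prior_sd**2 + 1.0 / sens_sd**2)
post_mean = post_var * (prior_mean / prior_sd**2 + measurement / sens_sd**2)
post_sd = np.sqrt(post_var)

# 4. Motor noise: the executed endpoint scatters around the aim point.
motor_sd = 0.5

# 5. Consequences: landing in the reward region earns +1; landing in an
#    overlapping penalty region costs -5 (payoffs sum in the overlap).
def expected_gain(aim, target):
    """Expected gain of an aim point given the true target position."""
    reward_lo, reward_hi = target - 0.5, target + 0.5
    penalty_lo, penalty_hi = target + 0.3, target + 1.3
    # Endpoint ~ Normal(aim, motor_sd); integrate gain over that scatter.
    p_reward = norm.cdf(reward_hi, aim, motor_sd) - norm.cdf(reward_lo, aim, motor_sd)
    p_penalty = norm.cdf(penalty_hi, aim, motor_sd) - norm.cdf(penalty_lo, aim, motor_sd)
    return 1.0 * p_reward - 5.0 * p_penalty

# Average over posterior uncertainty about the target, then pick the aim
# point that maximizes expected gain.
targets = np.linspace(post_mean - 4 * post_sd, post_mean + 4 * post_sd, 201)
weights = norm.pdf(targets, post_mean, post_sd)
weights /= weights.sum()
aims = np.linspace(-4.0, 4.0, 401)
eg = [np.dot(weights, [expected_gain(a, t) for t in targets]) for a in aims]
best_aim = aims[int(np.argmax(eg))]
print(f"posterior mean {post_mean:.2f}, optimal aim {best_aim:.2f}")
```

With the penalty region placed to the right of the reward region, the maximum-expected-gain aim point shifts away from the posterior mean rather than pointing at the most probable target location; this sensitivity of the aim point to the payoff structure is the kind of behavior the expected-gain framework described above predicts.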

Public Health Relevance

The proposed work benefits public health by characterizing the behavioral and neural mechanisms involved in making perceptual decisions and in using sensory information to control movements. We show how optimal decisions and movement plans must take into account prior knowledge, the uncertainty of visual information, the variability of motor responses, and the consequences of action. A variety of medical conditions can impact both the reliability of visual information (e.g., cataract, amblyopia) and the quality of motor output and the response to risk (e.g., Parkinson's disease, Huntington's disease, stroke). The proposed research will improve our understanding of how visual patterns and planned movements are encoded so as to optimize a perceptual decision or movement plan, and thus can help in the design of rehabilitative plans when sensory input or motor output is disrupted (changes in bias, gain, and/or variability) by disease or other health-related conditions.

Agency
National Institutes of Health (NIH)
Institute
National Eye Institute (NEI)
Type
Research Project (R01)
Project #
5R01EY008266-24
Application #
8658071
Study Section
(SPC)
Program Officer
Wiggs, Cheri
Project Start
1989-08-01
Project End
2015-04-30
Budget Start
2014-05-01
Budget End
2015-04-30
Support Year
24
Fiscal Year
2014
Total Cost
Indirect Cost
Name
New York University
Department
Psychology
Type
Schools of Arts and Sciences
DUNS #
City
New York
State
NY
Country
United States
Zip Code
10012
Aschner, Amir; Solomon, Samuel G; Landy, Michael S et al. (2018) Temporal Contingencies Determine Whether Adaptation Strengthens or Weakens Normalization. J Neurosci 38:10129-10142
Protonotarios, Emmanouil D; Griffin, Lewis D; Johnston, Alan et al. (2018) A spatial frequency spectral peakedness model predicts discrimination performance of regularity in dot patterns. Vision Res 149:102-114
Locke, Shannon M; Landy, Michael S (2017) Temporal causal inference with stochastic audiovisual sequences. PLoS One 12:e0183776
Rizzo, John-Ross; Hosseini, Maryam; Wong, Eric A et al. (2017) The Intersection between Ocular and Manual Motor Control: Eye-Hand Coordination in Acquired Brain Injury. Front Neurol 8:227
Norton, Elyse H; Fleming, Stephen M; Daw, Nathaniel D et al. (2017) Suboptimal Criterion Learning in Static and Dynamic Environments. PLoS Comput Biol 13:e1005304
Rizzo, John-Ross; Fung, James K; Hosseini, Maryam et al. (2017) Eye Control Deficits Coupled to Hand Control Deficits: Eye-Hand Incoordination in Chronic Cerebral Injury. Front Neurol 8:330
Rizzo, John-Ross; Hudson, Todd E; Abdou, Andrew et al. (2017) Disrupted Saccade Control in Chronic Cerebral Injury: Upper Motor Neuron-Like Disinhibition in the Ocular Motor System. Front Neurol 8:12
Sun, Peng; Landy, Michael S (2016) A Two-Stage Process Model of Sensory Discrimination: An Alternative to Drift-Diffusion. J Neurosci 36:11259-11274
Westrick, Zachary M; Heeger, David J; Landy, Michael S (2016) Pattern Adaptation and Normalization Reweighting. J Neurosci 36:9805-16
Hudson, Todd E; Landy, Michael S (2016) Sinusoidal error perturbation reveals multiple coordinate systems for sensorimotor adaptation. Vision Res 119:82-98

Showing the most recent 10 out of 113 publications