Our long-term goal is to understand how humans perform natural tasks given realistic visual input. Object perception is critical for the everyday tasks of recognition, planning, and motor action. Through vision, we infer intrinsic properties of objects, including their shapes, sizes, materials, and identities. We also infer their depths and their movement relationships to one another and to ourselves, and determine how to use this information. Remarkably, the human visual system provides this high level of functionality despite complex and objectively ambiguous retinal input. Current machine vision systems do not come close to normal human visual competence; in contrast, our daily visual judgments are unambiguous and our actions are reliable. How is this accomplished? Our conceptual approach to this question is motivated by our previous work on object perception as Bayesian statistical inference, and by its implications for how human perception gathers and integrates information about scenes and objects to reduce uncertainty, resolve ambiguity, and achieve action goals. Our experimental approach grows out of our team's past accomplishments in behavioral techniques such as interocular suppression, high-field functional magnetic resonance imaging and analysis, and Bayesian observer analysis of human behavioral performance. We combine these conceptual and experimental approaches to address a new set of questions.
In three series of experiments, we aim to better understand: 1) the relationship between cortical activity and the perceptual organization of image features into unambiguous object properties and structures (Within-object interactions); 2) how visual information about other objects and surfaces reduces uncertainty about the representation of an object's properties and depth relations (Between-object interactions); and 3) whether and how information and uncertainty may be processed differently depending on the viewer-object interactions demanded by a task, as predicted by theory (Viewer-object interactions).

Public Health Relevance

Our research uses behavioral, brain imaging, and computational methods to investigate how the human visual system organizes perceptual information into objects and object relationships, and how that information is used for actions. We expect our results to provide knowledge that will help us understand circuitry patterns within and among visual areas and pathways in the brain and their relationships to visual functions. Deficits in such circuitry are believed to underlie a number of clinical problems, including amblyopia, object agnosia, and schizophrenia.

Agency
National Institutes of Health (NIH)
Institute
National Eye Institute (NEI)
Type
Research Project (R01)
Project #
5R01EY015261-06
Application #
7945289
Study Section
Central Visual Processing Study Section (CVP)
Program Officer
Steinmetz, Michael A
Project Start
2003-12-01
Project End
2012-08-31
Budget Start
2010-09-01
Budget End
2012-08-31
Support Year
6
Fiscal Year
2010
Total Cost
$404,391
Indirect Cost
Name
University of Minnesota Twin Cities
Department
Psychology
Type
Schools of Arts and Sciences
DUNS #
555917996
City
Minneapolis
State
MN
Country
United States
Zip Code
55455
Fulvio, Jacqueline M; Maloney, Laurence T; Schrater, Paul R (2015) Revealing individual differences in strategy selection through visual motion extrapolation. Cogn Neurosci 6:169-79
Lin, Zhicheng (2013) Object-centered representations support flexible exogenous visual attention across translation and reflection. Cognition 129:221-31
Qiu, Cheng; Kersten, Daniel; Olman, Cheryl A (2013) Segmentation decreases the magnitude of the tilt illusion. J Vis 13:19
Hauffen, Karin; Bart, Eugene; Brady, Mark et al. (2012) Creating objects and object categories for studying perception and perceptual learning. J Vis Exp :e3358
Lin, Zhicheng; He, Sheng (2012) Automatic frame-centered object representation and integration revealed by iconic memory, visual priming, and backward masking. J Vis 12:
Olman, Cheryl A; Harel, Noam; Feinberg, David A et al. (2012) Layer-specific fMRI reflects different neuronal computations at different depths in human V1. PLoS One 7:e32536
Zhang, Peng; Jiang, Yi; He, Sheng (2012) Voluntary attention modulates processing of eye-specific visual information. Psychol Sci 23:254-60
Lin, Zhicheng; He, Sheng (2012) Emergent filling in induced by motion integration reveals a high-level mechanism in filling in. Psychol Sci 23:1534-41
Battaglia, Peter W; Kersten, Daniel; Schrater, Paul R (2011) How haptic size sensations improve distance perception. PLoS Comput Biol 7:e1002080
Doerschner, Katja; Fleming, Roland W; Yilmaz, Ozgur et al. (2011) Visual motion and the perception of surface material. Curr Biol 21:2010-6

Showing the most recent 10 out of 45 publications