Our ability to visually interpret the world around us depends on bottom-up computations that extract relevant information from sensory inputs, but it also depends on our accumulated core knowledge about the world, which provides top-down signals based on prior experience. The goal of this proposal is to study the mechanisms by which visual information is integrated spatially and temporally to combine bottom-up and top-down knowledge. Towards this goal, we combine behavioral measurements, invasive neurophysiological recordings, and computational models. The behavioral data will provide critical constraints on human integrative abilities, particularly through eye movements and the dynamics of recognition. The invasive neurophysiological data will provide high spatiotemporal resolution of neural activity along inferior temporal cortex and its interactions with prefrontal cortex, which is hypothesized to be critical for conveying the type of top-down signals required for recognition. Ultimately, a central goal of our proposal is to formalize our understanding of these integrative processes via a quantitative computational model. This model should capture the behavioral and physiological results and provide testable predictions. During the current award, we have made significant progress towards elucidating the mechanisms underlying pattern completion, whereby the visual system infers the identity of objects from partial information. Here we consider a set of images and videos that are "minimal" in the sense that they are recognizable, but any further reduction in the amount of spatial or temporal information renders them unrecognizable. We have strong preliminary evidence suggesting that state-of-the-art purely bottom-up theories of recognition, instantiated by deep convolutional networks, cannot explain human behavior and physiology.
Therefore, these types of stimuli provide an ideal arena to investigate how top-down signals, presumably from prefrontal cortex, modulate responses along ventral visual cortex to orchestrate recognition. Understanding the neural mechanisms by which core knowledge is incorporated into sensory processing is arguably one of the greatest challenges in cognitive science and may have important implications for the many neurological and psychiatric conditions that are characterized by dysfunctional top-down signaling and remain poorly understood.

Public Health Relevance

Interpreting the world around us requires combining current sensory input with prior experience. It has long been known that such prior core knowledge of the world plays a critical role in perception. Yet the mechanisms by which prior experience is merged with sensory stimuli remain poorly understood. Here we combine behavioral measurements, direct recordings of neural activity from inside the human brain, and computational models to investigate the neural circuits that integrate sensory input with prior experience. Multiple neurological and psychiatric conditions are characterized by abnormal top-down signaling; these conditions remain poorly understood, and their successful treatment will require a deep understanding of their mechanistic basis. The current proposal combines state-of-the-art technologies, methods, and models to tackle these questions and begin to shed light on one of the most challenging mysteries of the mind: the interplay between the senses and high-level cognition.

Agency
National Institutes of Health (NIH)
Institute
National Eye Institute (NEI)
Type
Research Project (R01)
Project #
2R01EY026025-04
Application #
9817777
Study Section
Mechanisms of Sensory, Perceptual, and Cognitive Processes Study Section (SPC)
Program Officer
Flanders, Martha C
Project Start
2016-03-01
Project End
2022-08-31
Budget Start
2019-09-30
Budget End
2020-08-31
Support Year
4
Fiscal Year
2019
Total Cost
Indirect Cost
Name
Boston Children's Hospital
Department
Type
DUNS #
076593722
City
Boston
State
MA
Country
United States
Zip Code
02115
Isik, Leyla; Singer, Jedediah; Madsen, Joseph R et al. (2018) What is changing when: Decoding visual information in movies from human intracranial recordings. Neuroimage 180:147-159
Tang, Hanlin; Schrimpf, Martin; Lotter, William et al. (2018) Recurrent computations for visual pattern completion. Proc Natl Acad Sci U S A 115:8835-8840
Zhang, Mengmi; Feng, Jiashi; Ma, Keng Teck et al. (2018) Finding any Waldo with zero-shot invariant and efficient visual search. Nat Commun 9:3730
Kreiman, Gabriel (2017) A null model for cortical representations with grandmothers galore. Lang Cogn Neurosci 32:274-285
Tang, Hanlin; Yu, Hsiang-Yu; Chou, Chien-Chen et al. (2016) Cascade of neural processing orchestrates cognitive control in human frontal cortex. Elife 5:
Miconi, Thomas; Groomes, Laura; Kreiman, Gabriel (2016) There's Waldo! A Normalization Model of Visual Search Predicts Single-Trial Human Fixations in an Object Search Task. Cereb Cortex 26:3064-3082
Gómez-Laberge, Camille; Smolyanskaya, Alexandra; Nassi, Jonathan J et al. (2016) Bottom-Up and Top-Down Input Augment the Variability of Cortical Neurons. Neuron 91:540-547
Tang, Hanlin; Singer, Jed; Ison, Matias J et al. (2016) Predicting episodic memory formation for movie events. Sci Rep 6:30175