Many aspects of cognition extend over time. Hearing a fragment of sound, we perceive it as part of a mockingbird's melody; hearing one word, we understand it as part of a meaningful sentence. Therefore, our brains must possess the ability to integrate information over time. However, information cannot be integrated indiscriminately: the subject of a new sentence is not necessarily related to the verb of the previous sentence, and we may want to keep these items of information separate. The goal of this work is to understand the algorithms by which our brains flexibly integrate related information while separating unrelated information. In addition, we aim to understand how temporal integration and separation are implemented in cortical circuits, and how we can manipulate these brain processes. Our prior work suggests that brains perform this task in a distributed manner: almost all regions of the human cortex can integrate information over time. Early sensory regions integrate over short periods (milliseconds to seconds) and they pass information to higher-order regions, which integrate over longer periods (seconds to minutes). We propose the following algorithm for this process: each cortical region maintains its own local memory, and attempts to form a synthesized joint representation of its local memory and any input that it receives. When this synthesis is successful, a cortical region will pass forward its synthesized representation to the next stage of processing. But if the synthesis is unsuccessful, then its local memory will be reset. For example, an early cortical region may synthesize syllables within a word but then reset its context at the beginning of a new word. To test whether the brain is using this algorithm, we will model fMRI activity in the brains of people who integrate and separate sequences of information as they listen to complex narratives.
To understand how temporal integration and separation are implemented in the activity of cortical circuits, we will also measure ECoG signals, which provide a direct read-out of synchronous and asynchronous activity in cortical neurons. We hypothesize that cortical circuits become less synchronized when they cannot integrate new input with prior context. We will test whether this decrease in low-frequency synchronization allows increased information flow from the world into the cortical hierarchy. Finally, we will develop tools to control the state of temporal integration or separation, using electrical modulation to increase or decrease synchronization in cortical circuits. Altogether, this work provides tools to detect and manipulate temporal integration and separation processes in the human brain, as well as a computational framework for how we parse sequences of information. Our approach is innovative because we develop multi-scale experimental paradigms and combine data across fMRI, ECoG, and modeling. We expect this work to help reveal the hierarchical mechanisms by which human cortical circuits integrate and separate sequences of information.
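The integrate-or-reset algorithm proposed above can be illustrated with a toy simulation. In this sketch, every name, the cosine-similarity compatibility test, and the thresholds are illustrative assumptions, not the proposal's actual model: each region holds a local memory vector, blends compatible input into it, and resets when synthesis fails.

```python
# Toy sketch of the "integrate or reset" idea described in the abstract.
# All names, thresholds, and the cosine-similarity compatibility test are
# illustrative assumptions, not the authors' actual model.

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(x * x for x in b) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

class Region:
    """One cortical region with a local memory (context) vector."""

    def __init__(self, threshold):
        self.threshold = threshold  # minimum input/context compatibility
        self.memory = None          # local context; None = empty
        self.resets = 0             # how often synthesis failed

    def step(self, inp):
        if self.memory is None:
            self.memory = list(inp)  # nothing to integrate with yet
        elif cosine(self.memory, inp) >= self.threshold:
            # Synthesis succeeds: blend the input into local memory and
            # pass the joint representation forward.
            self.memory = [(m + x) / 2 for m, x in zip(self.memory, inp)]
        else:
            # Synthesis fails: reset local memory to the new input alone.
            self.memory = list(inp)
            self.resets += 1
        return self.memory

# A strict early region (resets often, short timescale) feeds a more
# tolerant higher-order region (integrates over longer stretches).
early, higher = Region(threshold=0.9), Region(threshold=0.5)
for inp in [[1.0, 0.0], [0.9, 0.1], [0.6, 0.8]]:
    higher.step(early.step(inp))  # each region's output feeds the next

print(early.resets, higher.resets)  # → 1 0
```

With these toy thresholds, the early region resets its context at the third (mismatched) input while the higher region continues to integrate, mirroring the proposed hierarchy of timescales.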

Public Health Relevance

This project aims to understand how patterns of information, such as sequences of words, are combined over time in the human brain. The insights gained from this work could inform the design of future treatments for schizophrenia and autism, in which the process of combining sequences of information is disrupted. More generally, combining information over time is necessary for comprehending language and for accumulating information to make decisions.

National Institutes of Health (NIH)
National Institute of Mental Health (NIMH)
Research Project (R01)
Study Section
Cognition and Perception Study Section (CP)
Program Officer
Vaziri, Siavash
Johns Hopkins University
Schools of Arts and Sciences
United States