This application describes a 3-year training plan that will enable me, a cognitive neuroscientist with prior training in electroencephalography (EEG), to conduct research on contextual memory representation using neuroimaging (fMRI) and computational modeling. EEG is useful for examining the timing properties of neural activity but cannot localize activity to specific regions of the brain. In this proposal, I will receive training in a high-spatial-resolution neuroimaging technique (fMRI), which will allow me to develop theories of neural function that are constrained by both space and time. I will also build on my prior degree in applied statistics and receive additional training in computational neuroscience, which will enable me to develop computational theories at the macro-circuit level. I will be supervised by Dr. Sharon Thompson-Schill, an expert fMRI experimentalist and theorist of lateral prefrontal cortex function who has extensive experience researching the context-dependent nature of semantic memory. I will be co-supervised by Dr. Anna Schapiro, an expert on statistical learning and computational modeling of the brain. I propose to examine how prefrontal cortex (PFC) represents statistical dependencies among sequentially presented visual and auditory input. I will examine how the temporal extent and level of abstraction of sequential representations change across ventral PFC. This will connect findings from several literatures, ranging from decision-making to emotion processing and language comprehension, within a single unifying framework. In addition, I will explore whether "deep" or "shallow" recurrent neural networks better capture the sensitivity profile of ventral PFC, informing the question of whether the brain conducts "deep" learning.
In Aim 1, I will conduct behavioral piloting and collect data for two neuroimaging experiments on hierarchical sequential processing. Participants will learn the statistical properties of hierarchically organized sequences of abstract visual images (Aims 1a and 1b) and auditory stimuli (Aim 1b). I will then test for neural sensitivity to statistical learning at each hierarchical level using pattern similarity analysis, comparing the neural response to the sequences before and after learning.
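To make the logic of this analysis concrete, the sketch below runs a pattern similarity analysis on simulated data. The arrays, item counts, and unit assignments are illustrative placeholders, not the actual design: the idea is that if learning induces sensitivity to a hierarchical level, multivoxel patterns for items within the same higher-level unit should become more similar, relative to items in different units, after learning.

```python
import numpy as np

# Minimal sketch of a pattern similarity analysis (illustrative only).
# Assumes hypothetical multivoxel patterns, one per sequence item,
# recorded before and after statistical learning. Shape: (items, voxels).
rng = np.random.default_rng(0)
n_items, n_voxels = 16, 200
pre = rng.standard_normal((n_items, n_voxels))   # pre-learning patterns
post = rng.standard_normal((n_items, n_voxels))  # post-learning patterns

# Hypothetical hierarchical structure: consecutive groups of 4 items
# belong to the same higher-level unit (0,0,0,0,1,1,1,1,...).
unit_of = np.repeat(np.arange(n_items // 4), 4)

def pairwise_similarity(patterns):
    """Pearson correlation between every pair of item patterns."""
    return np.corrcoef(patterns)

def within_minus_between(sim, labels):
    """Mean within-unit similarity minus mean between-unit similarity."""
    i, j = np.triu_indices_from(sim, k=1)
    within = labels[i] == labels[j]
    return sim[i, j][within].mean() - sim[i, j][~within].mean()

# Learning-related change: does within-unit similarity increase
# relative to between-unit similarity after learning?
effect = (within_minus_between(pairwise_similarity(post), unit_of)
          - within_minus_between(pairwise_similarity(pre), unit_of))
print(f"Learning-related pattern similarity change: {effect:.3f}")
```

In the actual experiments this contrast would be computed per participant and per region, with a positive learning-related change at a given hierarchical level taken as evidence that the region encodes statistics at that level.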
In Aim 2, I will conduct computational modeling of the neuroimaging data from Aim 1, with held-out data to ensure robustness and reproducibility. I will compare the neuroimaging data to internal model representations derived from single-layer ("shallow") and multi-layer ("deep") recurrent neural networks trained on the same sequences presented to participants in Aim 1. By modeling the neural representation of context itself, the current proposal will help fill a critical gap in our understanding of how the brain predicts upcoming sensory input, enabling rapid processing of the world around us. It will also inform our understanding of several psychiatric disorders that involve prefrontal cortex dysfunction and disturbances of contextual processing, such as schizophrenia, anxiety, and depression.
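The sketch below illustrates the shape of this model comparison, assuming PyTorch; the architecture, layer counts, toy sequences, and dissimilarity metric are stand-ins for the eventual design, not the proposal's actual pipeline. Shallow and deep recurrent networks are trained on next-item prediction, and their hidden-state dissimilarity structure on held-out sequences is extracted for comparison against the neural data.

```python
import torch
import torch.nn as nn

n_symbols, hidden, seq_len, n_seq = 8, 32, 20, 100

class SeqRNN(nn.Module):
    """Recurrent network trained to predict the next item in a sequence."""
    def __init__(self, n_layers):
        super().__init__()
        self.embed = nn.Embedding(n_symbols, hidden)
        self.rnn = nn.GRU(hidden, hidden, num_layers=n_layers, batch_first=True)
        self.out = nn.Linear(hidden, n_symbols)

    def forward(self, x):
        h, _ = self.rnn(self.embed(x))
        return self.out(h), h  # next-item logits and hidden states

def train(model, seqs, epochs=5):
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        logits, _ = model(seqs[:, :-1])
        loss = loss_fn(logits.reshape(-1, n_symbols), seqs[:, 1:].reshape(-1))
        opt.zero_grad()
        loss.backward()
        opt.step()
    return model

# Toy random sequences standing in for the hierarchically structured
# stimuli from Aim 1, with a held-out split for robustness.
seqs = torch.randint(n_symbols, (n_seq, seq_len))
train_seqs, heldout = seqs[:80], seqs[80:]

shallow = train(SeqRNN(n_layers=1), train_seqs)
deep = train(SeqRNN(n_layers=3), train_seqs)

def model_rdm(model, seqs):
    """Dissimilarity between hidden states at each held-out timepoint."""
    with torch.no_grad():
        _, h = model(seqs)
    states = h.reshape(-1, hidden)
    return 1 - torch.corrcoef(states)

# Each model RDM would then be compared (e.g., rank-correlated) against
# the neural RDM from ventral PFC; the better-fitting depth speaks to
# whether the region behaves more like a shallow or a deep network.
print(model_rdm(shallow, heldout).shape, model_rdm(deep, heldout).shape)
```

In the proposed analysis, the models would be trained on the actual stimulus sequences rather than random ones, and the fit of each model's representational structure to the fMRI data would be evaluated on held-out sequences.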
As we navigate the world, we are bombarded by a flood of sensations that our brain must rapidly make sense of; understanding how the current context relates to the likelihood of upcoming experiences is important for enhancing the speed and accuracy of this process. This proposal examines how sensory context is represented in the brain: people will learn statistical regularities in artificial sequences of images and syllables while we model neural activity in prefrontal cortex, measured with fMRI, as they are exposed to different sensory "contexts". Through computational modeling of prefrontal cortex, we can enhance our understanding of the function of a brain region implicated in many neuropsychiatric disorders in which the perception of context is distorted or disrupted, including anxiety, depression, and schizophrenia.