Fundamental to curing a variety of disorders that affect learning and memory is an understanding of how learning and memory occur. That is, now that we are beginning to understand the cellular and subcellular events that lead to synaptic modification, it is time to ask how such microscopic changes are integrated into, and used by, the functioning nervous system. As our knowledge currently stands, it is far from obvious how stored memories of polysensory events that occur in a temporally distributed manner (e.g., I spent yesterday afternoon at the grocery store) can be recalled and used by the brain as a more or less discretely coded event. Our long-term objective is to explain such issues of learning and memory in terms of information processing. To accomplish this objective, the proposed research aims to create a quantitative theory that defines information processing as a function of synaptic modifications, neuronal physiologies, and the neuroanatomies of different brain regions. The proposed research concentrates on the architectures and cellular physiologies of the hippocampus as they are currently understood, or as they might actually be given the limits of the biological research performed to date. The problem of accurate prediction, as required of small animals in spatial tasks, focuses our attention. By arriving at an abstract definition of the prediction problem and by considering certain basic facts of the nervous system in the context of the theory of computational complexity, we develop the issue of preprocessing signals in a neural network that reflects current knowledge of the hippocampus. This preprocessing for improved prediction is accomplished by a recoding process that temporally compresses and statistically simplifies temporally distributed, polysensory signals. The methods of the research are computer simulation and theorem proving.
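To make "temporal compression" concrete, the following is a minimal sketch, not the proposal's hippocampal network model: a toy recoder that takes a binary pattern sequence in which each event persists for many time steps and emits each distinct pattern only once, at its onset. The input sequence and the change-detection rule are illustrative assumptions.

```python
def compress(seq):
    """Emit a pattern only when it differs from the previous time step,
    collapsing a temporally distributed signal to its event sequence."""
    out, prev = [], None
    for pattern in seq:
        if pattern != prev:
            out.append(pattern)
            prev = pattern
    return out

# A polysensory event drawn out over many time steps...
slow = [(1, 0, 0)] * 5 + [(0, 1, 0)] * 7 + [(0, 0, 1)] * 4
# ...is recoded at one step per event.
fast = compress(slow)
print(fast)  # [(1, 0, 0), (0, 1, 0), (0, 0, 1)]
```

The actual model achieves compression through network dynamics rather than explicit change detection; the sketch only illustrates the input/output relationship being sought.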
By virtue of the abstract approach inherent in the use of information-theoretic measures, it is possible to study input environments that are no more, and no less, than environments defined by their statistics. The modeled brain networks transform sequences of multivariate binary (0/1) inputs into output sequences that are likewise multivariate and binary. The statistics of the output sequences are then analyzed, and the information loss is evaluated. Additionally, the availability of the recoded information for a neuron-generated prediction is evaluated via a multivariate measure of statistical dependence. This evaluation is critical because the whole point of transforming signals - i.e., taking the information present in one representation of the input environment and recoding it as another representation of the same input - is to produce a representation that is more usable by successive brain regions (minimizing statistical dependence can be very helpful here). The computer simulations are repeated using a Monte Carlo procedure in order to quantify the average behavior of each network studied. Such results lead to conjectured theorems that, where possible, are rigorous statements concerning the average ability of each network to recode information. This recoding is specified in terms of information loss and statistical dependence and is a function of the statistics of the input environment, particular neuronal properties, and the anatomy under study (e.g., the synaptic modification rules, the existence of feedforward inhibition, etc.).
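The evaluation pipeline described above can be sketched in miniature. This is an illustrative toy, not the proposed hippocampal model: the network is a single random feedforward threshold layer, the input environment is i.i.d. Bernoulli bits, and the weights, threshold, and sample sizes are all assumptions. It shows how information loss and output statistical dependence can be estimated empirically and averaged over Monte Carlo repetitions.

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(0)

def entropy(patterns):
    """Empirical Shannon entropy (bits) of a stream of hashable patterns."""
    counts = Counter(patterns)
    p = np.array(list(counts.values()), dtype=float)
    p /= p.sum()
    return float(-np.sum(p * np.log2(p)))

def recode(x, w, theta=1.0):
    """Toy feedforward layer of binary threshold units (illustrative
    stand-in for the modeled network)."""
    return (x @ w >= theta).astype(int)

def trial(n_in=4, n_out=3, n_steps=2000):
    # Input environment defined purely by its statistics: Bernoulli(0.3) bits.
    x = (rng.random((n_steps, n_in)) < 0.3).astype(int)
    w = rng.random((n_in, n_out))        # random fixed synaptic weights
    y = recode(x, w)
    hx = entropy(map(tuple, x))
    hy = entropy(map(tuple, y))
    hxy = entropy(zip(map(tuple, x), map(tuple, y)))
    mi = hx + hy - hxy                   # mutual information I(X;Y)
    info_loss = hx - mi                  # input information not preserved
    # Statistical dependence among output units: sum of marginal
    # entropies minus joint entropy (zero iff outputs are independent).
    dep = sum(entropy(y[:, i].tolist()) for i in range(y.shape[1])) - hy
    return info_loss, dep

# Monte Carlo: average over independently sampled networks and inputs.
results = np.array([trial() for _ in range(20)])
print("mean information loss (bits):", results[:, 0].mean())
print("mean output dependence (bits):", results[:, 1].mean())
```

In the proposed research the network architecture, synaptic modification rules, and dependence measure are far richer; the sketch only fixes the shape of the computation: transform, measure, repeat, average.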

Agency
National Institutes of Health (NIH)
Institute
National Institute of Mental Health (NIMH)
Type
Research Project (R01)
Project #
1R01MH048161-01
Application #
3387744
Study Section
Special Emphasis Panel (SRCM)
Project Start
1991-04-01
Project End
1994-03-31
Budget Start
1991-04-01
Budget End
1992-03-31
Support Year
1
Fiscal Year
1991
Total Cost
Indirect Cost
Name
University of Virginia
Department
Type
Schools of Medicine
DUNS #
001910777
City
Charlottesville
State
VA
Country
United States
Zip Code
22904
Rodriguez, Paul; Levy, William B (2004) Configural representations in transverse patterning with a hippocampal model. Neural Netw 17:175-90
Sullivan, D W; Levy, W B (2004) Quantal synaptic failures enhance performance in a minimal hippocampal model. Network 15:45-67
Levy, William B; Baxter, Robert A (2002) Energy-efficient neuronal computation via quantal synaptic failures. J Neurosci 22:4746-55
Shon, A P; Wu, X B; Sullivan, D W et al. (2002) Initial state randomness improves sequence learning in a model hippocampal network. Phys Rev E Stat Nonlin Soft Matter Phys 65:031914
Rodriguez, P; Levy, W B (2001) A model of hippocampal activity in trace conditioning: where's the trace? Behav Neurosci 115:1224-38
Smith, A C; Wu, X B; Levy, W B (2000) Controlling activity fluctuations in large, sparsely connected random networks. Network 11:63-81
Greene, A J; Prepscius, C; Levy, W B (2000) Primacy versus recency in a quantitative model: activity is the critical distinction. Learn Mem 7:48-57
August, D A; Levy, W B (1999) Temporal sequence compression by an integrate-and-fire model of hippocampal area CA3. J Comput Neurosci 6:71-90
Levy, W B; Delic, H; Adelsberger-Mangan, D M (1999) The statistical relationship between connectivity and neural activity in fractionally connected feed-forward networks. Biol Cybern 80:131-9
Wu, X; Tyrcha, J; Levy, W B (1998) A neural network solution to the transverse patterning problem depends on repetition of the input code. Biol Cybern 79:203-13

Showing the most recent 10 out of 23 publications