Associative memory is a fundamental property of the nervous system. It allows us to retrieve memories from partial or corrupted input, and thus plays a critical role in processing and categorizing information. The mechanisms that underlie associative memory, however, are not yet understood. The leading theoretical model proposes that the nervous system performs associative memory by implementing attractor networks. In such networks, input, in the form of patterns of action potentials, provides partial information about a memory; the dynamics of the network then drive the neural activity to an attractor - a stable state in activity space - that corresponds to a complete representation of the memory. While the attractor model is a valuable construct, it is an idealized one: there is a large gap between the model and real neuronal networks. Our goal is to determine whether the attractor model is viable for real neuronal networks.

The proposal is divided into two parts. The first is to determine whether networks with biologically realistic properties can act as attractor networks. We will use analytical approaches, primarily mean field theory, to determine, in the abstract, the conditions necessary for the existence of attractors, and we will assess whether biological networks can satisfy these conditions. Large-scale simulations of realistic networks will then be performed to verify our analytical findings. The second is to determine how attractors can be learned. We will apply patterned input to a network and examine network behavior as a function of the synaptic learning rule and the patterns of input. In sum, our goal is to determine whether attractor networks can be realized in biological neuronal networks and, if so, to determine the input patterns that will result in their formation.
Successful completion of this project will provide us with experimentally testable predictions, predictions that can serve as a guide for investigating attractor networks in the nervous system.
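The attractor mechanism described above - partial input pulled to a stored stable state by the network dynamics - can be illustrated with a minimal Hopfield-style binary network. This is an idealized sketch, not the biologically realistic spiking networks the proposal concerns; the network size, corruption level, and Hebbian learning rule below are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100  # number of binary (+/-1) units

# Store one memory pattern with the Hebbian outer-product rule.
pattern = rng.choice([-1, 1], size=N)
W = np.outer(pattern, pattern) / N
np.fill_diagonal(W, 0)  # no self-connections

# Corrupt 20% of the bits to mimic partial/corrupted input.
cue = pattern.copy()
flipped = rng.choice(N, size=20, replace=False)
cue[flipped] *= -1

# Asynchronous threshold updates drive the state toward the attractor.
state = cue.copy()
for _ in range(5):  # a few sweeps suffice here
    for i in rng.permutation(N):
        state[i] = 1 if W[i] @ state >= 0 else -1

overlap = (state @ pattern) / N  # 1.0 means the memory was fully recalled
```

With a single stored pattern, each unit's input is proportional to the current overlap with that pattern, so a cue with positive overlap is driven exactly onto the stored state - the "complete representation" recovered from partial information.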

Agency
National Institutes of Health (NIH)
Institute
National Institute of Mental Health (NIMH)
Type
Research Project (R01)
Project #
5R01MH062447-03
Application #
6637608
Study Section
Special Emphasis Panel (ZRG1-IFCN-8 (01))
Program Officer
Glanzman, Dennis L
Project Start
2001-03-01
Project End
2004-02-29
Budget Start
2003-03-01
Budget End
2004-02-29
Support Year
3
Fiscal Year
2003
Total Cost
$114,375
Indirect Cost
Name
University of California Los Angeles
Department
Neurosciences
Type
Schools of Medicine
DUNS #
092530369
City
Los Angeles
State
CA
Country
United States
Zip Code
90095
London, Michael; Roth, Arnd; Beeren, Lisa et al. (2010) Sensitivity to perturbations in vivo implies high noise and suggests rate coding in cortex. Nature 466:123-7
Roudi, Yasser; Nirenberg, Sheila; Latham, Peter E (2009) Pairwise maximum entropy models for studying large biological systems: when they can work and when they can't. PLoS Comput Biol 5:e1000380
Beck, Jeffrey M; Ma, Wei Ji; Kiani, Roozbeh et al. (2008) Probabilistic population codes for Bayesian decision making. Neuron 60:1142-52
Roudi, Yasser; Latham, Peter E (2007) A balanced memory network. PLoS Comput Biol 3:1679-700
Ma, Wei Ji; Beck, Jeffrey M; Latham, Peter E et al. (2006) Bayesian inference with probabilistic population codes. Nat Neurosci 9:1432-8
Latham, Peter E; Nirenberg, Sheila (2005) Synergy, redundancy, and independence in population codes, revisited. J Neurosci 25:5195-206
Latham, Peter E; Nirenberg, Sheila (2004) Computing and stability in cortical networks. Neural Comput 16:1385-412
Series, Peggy; Latham, Peter E; Pouget, Alexandre (2004) Tuning curve sharpening for orientation selectivity: coding efficiency and the impact of correlations. Nat Neurosci 7:1129-35
Latham, Peter E; Deneve, Sophie; Pouget, Alexandre (2003) Optimal computation with attractor networks. J Physiol Paris 97:683-94
Brunel, Nicolas; Latham, Peter E (2003) Firing rate of the noisy quadratic integrate-and-fire neuron. Neural Comput 15:2281-306

Showing the most recent 10 out of 12 publications