The Recursive Auto-Associative Memory (RAAM) architecture for compositional neural encoding has been the basis for numerous studies and prototype models applying neural network algorithms to linguistic and symbolic reasoning tasks. To date, however, both the scale and the understanding of the representational model have been very limited, due to several logical conundrums inherent in the original model related to the separation of terminal from non-terminal patterns. A deeper understanding of the logical structure of recursive encoding, together with a new mathematical understanding of the fractal limit dynamics of recurrent networks, yields a novel and powerful revision of the RAAM architecture that resolves these logical problems and allows potentially infinite structures to be cleanly and precisely represented by fixed-dimensional neural activity patterns. Under this grant we will explore, refine, and exploit this new architecture, building larger-scale models in both serial and massively parallel implementations, which will extend the applicability of neural network learning systems to areas with stronger data-structure requirements. Scientifically, this work builds strong connections among traditional cognitive science capacities, nonlinear dynamics, and neurally plausible computational models.
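
To make the encoding scheme concrete, the following is a minimal sketch of the RAAM mechanism, not code from the proposal: the dimension, weight shapes, and use of untrained random weights are illustrative assumptions. A RAAM compresses two fixed-width child patterns into one parent pattern of the same width, so an arbitrary binary tree reduces bottom-up to a single vector; in a trained network the decoder approximately inverts the encoder, and repeated decoding unfolds the tree again.

import numpy as np

# Illustrative RAAM-style encoder/decoder. Weights here are random and
# untrained; a real RAAM is trained auto-associatively so that
# decode(encode(l, r)) reconstructs (l, r).
rng = np.random.default_rng(0)
DIM = 10  # width of every representation, terminal or non-terminal

W_enc = rng.normal(scale=0.5, size=(DIM, 2 * DIM))  # compress 2*DIM -> DIM
W_dec = rng.normal(scale=0.5, size=(2 * DIM, DIM))  # expand DIM -> 2*DIM

def encode(left, right):
    # Compress two child patterns into one fixed-width parent pattern.
    return np.tanh(W_enc @ np.concatenate([left, right]))

def decode(parent):
    # Unpack a parent pattern back into (approximate) child patterns.
    out = np.tanh(W_dec @ parent)
    return out[:DIM], out[DIM:]

# Terminal symbols as fixed patterns (hypothetical choices for the demo).
A, B, C = (rng.uniform(-1.0, 1.0, DIM) for _ in range(3))

# Encode the tree ((A B) C) bottom-up into a single DIM-width vector.
ab = encode(A, B)
ab_c = encode(ab, C)

# Decoding walks back down the tree; with trained weights the
# reconstructed children would closely match the originals.
ab_hat, c_hat = decode(ab_c)
print(ab_c.shape, np.linalg.norm(c_hat - C))

The fractal limit dynamics referred to above concern this iterated decoding map: because every structure, however deep, must live in the same fixed-dimensional activity space, the set of valid representations is generated by repeatedly applying the decoder's contractive transforms.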

Agency: National Science Foundation (NSF)
Institute: Division of Information and Intelligent Systems (IIS)
Application #: 9529298
Program Officer: Ephraim P. Glinert
Budget Start: 1996-05-01
Budget End: 2000-12-31
Fiscal Year: 1995
Total Cost: $231,972
Institution: Brandeis University
City: Waltham
State: MA
Country: United States
Zip Code: 02454