The Recursive Auto-Associative Memory (RAAM) architecture for compositional neural encoding has been the basis for numerous studies and prototype neural network models applied to linguistic and symbolic reasoning tasks. To date, however, both the scale of these models and our understanding of their representations have been very limited, owing to several logical conundrums inherent in the original architecture, chiefly the separation of terminal from non-terminal patterns. A deeper understanding of the logical structure of recursive encoding, together with a new mathematical account of the fractal limit dynamics of recurrent networks, yields a novel and powerful revision of the RAAM architecture that resolves these logical problems and allows potentially infinite structures to be cleanly and precisely represented by fixed-dimensional neural activity patterns. Under this grant we will explore, refine, and exploit the new architecture, building larger-scale models in both serial and massively parallel implementations, thereby extending the applicability of neural network learning systems to areas with stronger data-structure requirements. Scientifically, this work builds strong connections among traditional cognitive science capacities, nonlinear dynamics, and neurally plausible computational models.
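To make the notion of recursive auto-associative encoding concrete, the sketch below shows the classic RAAM data flow: a single encoder layer compresses two child patterns into one parent pattern of the same fixed width and is applied recursively over a binary tree, while a decoder unfolds a code back into child patterns. This is a minimal illustration under assumed choices (Python/NumPy, a 16-unit pattern width, sigmoid units, untrained random weights), not the revised architecture proposed in this grant.

```python
# Minimal RAAM-style sketch (illustrative only; dimensions, weights, and
# function names are assumptions, and the weights are left untrained).
import numpy as np

rng = np.random.default_rng(0)
d = 16                                     # fixed width of every activity pattern

# Encoder compresses two child patterns (2d) into one parent pattern (d);
# the decoder expands a parent pattern back into two child patterns (2d).
W_enc = rng.normal(scale=0.5, size=(d, 2 * d))
W_dec = rng.normal(scale=0.5, size=(2 * d, d))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def encode(tree):
    """Recursively compress a nested tuple of d-dim leaf vectors into one d-dim code."""
    if isinstance(tree, np.ndarray):       # terminal: already a d-dim pattern
        return tree
    left, right = tree                     # non-terminal: encode the children first
    return sigmoid(W_enc @ np.concatenate([encode(left), encode(right)]))

def decode(code, depth):
    """Unfold a fixed-dimensional code back into a binary tree of the given depth."""
    if depth == 0:
        return code
    children = sigmoid(W_dec @ code)
    return decode(children[:d], depth - 1), decode(children[d:], depth - 1)

# Example: the tree ((A B) (C D)) is compressed into a single 16-dim pattern.
A, B, C, D = (rng.random(d) for _ in range(4))
code = encode(((A, B), (C, D)))
print(code.shape)                          # (16,)
reconstruction = decode(code, depth=2)     # nested tuple with the original tree shape
```

In a trained RAAM the encoder and decoder weights are adjusted so that decoding reconstructs the encoded children. Note also that the sketch stops unfolding at a fixed depth, whereas the original RAAM decoder must itself decide whether an unfolded pattern is a terminal, which is precisely the terminal/non-terminal separation problem the revised architecture is intended to resolve.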