The brain's ability to perform complex forms of pattern recognition, such as speech discrimination, far exceeds that of the best computer programs. One of the strengths of human pattern recognition is its seamless processing of the temporal structure and temporal features of stimuli. For example, the phrase "he gave her cat food" can convey two different meanings depending on whether the speaker pauses between "her" and "cat" or between "cat" and "food." Attempts to emulate the brain's ability to discriminate such patterns using artificial neural networks have had only limited success. These models, however, have traditionally not captured how the brain processes temporal information. Indeed, most of these models have treated time as equivalent to a spatial dimension, in essence assuming that the same input is buffered and played back at different delays. Similarly, more traditional approaches to pattern recognition, which generally rely on discrete time bins, also do not capture how the brain processes temporal information. The goal of the current research is to use a framework, referred to as state-dependent networks or reservoir computing, to simulate the brain's ability to process both the spatial and temporal features of stimuli. A critical component of this framework is that temporal information is automatically encoded in the state of the network as a result of the interaction between incoming stimuli and the internal states of recurrent networks.
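To make the state-dependent (reservoir) idea concrete, the Python sketch below implements a generic echo-state-style reservoir. This is an illustrative toy under assumed parameters, not the model developed in this project: the same brief input delivered with different silent gaps leaves the recurrent network in different states, so temporal structure is encoded without any explicit buffer or time bins.

    import numpy as np

    rng = np.random.default_rng(0)
    N = 200                                      # reservoir units
    W = rng.normal(0, 1 / np.sqrt(N), (N, N))    # recurrent weights
    W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius < 1
    W_in = rng.normal(0, 1, N)                   # input weights

    def run_reservoir(u):
        """Drive the reservoir with input sequence u; return state history."""
        x = np.zeros(N)
        states = []
        for u_t in u:
            # The state blends the current input with a fading trace of
            # past inputs, so temporal context is encoded implicitly.
            x = np.tanh(W @ x + W_in * u_t)
            states.append(x.copy())
        return np.array(states)

    # Two pulses separated by a short versus a long silent gap.
    gap_short = np.r_[1.0, np.zeros(5), 1.0, np.zeros(10)]
    gap_long  = np.r_[1.0, np.zeros(15), 1.0]
    s1, s2 = run_reservoir(gap_short), run_reservoir(gap_long)
    print(np.linalg.norm(s1[-1] - s2[-1]))  # nonzero: the interval is encoded

A simple linear readout trained on these final states could then discriminate the two intervals, which is the essence of the framework described above.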

This project will develop a general model of spatiotemporal pattern recognition, focusing on speech discrimination. The model will incorporate plasticity, a cardinal feature of the brain's computational power that has eluded previous state-dependent network models. In the context of speech recognition, for example, by the age of 6 months infants' brains are already tuned to the sounds of their native language. This ability is an example of experience-dependent cortical plasticity, and it relies in part on synaptic plasticity and cortical reorganization. Incorporating synaptic plasticity into recurrent networks has proven to be a very challenging problem because of the inherent nonlinear feedback dynamics of recurrent networks. The current project will use a novel unsupervised form of synaptic plasticity, based on empirically observed mechanisms referred to as homeostatic synaptic plasticity, to endow state-dependent networks with the ability to adapt and self-tune to the set of stimuli they are exposed to. This project interfaces recent advances in theoretical neuroscience with novel approaches in machine learning. The results will help develop artificial neural networks that capture the brain's ability to process temporal information and to reorganize in response to experience.
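As a rough illustration of homeostatic synaptic plasticity (a minimal sketch, not the project's actual learning rule), the Python toy below applies multiplicative synaptic scaling: each unit's incoming weights are scaled up when its firing rate falls below a target and down when it exceeds the target, so the network self-tunes to its input. All constants are invented for illustration.

    import numpy as np

    rng = np.random.default_rng(1)
    N, eta, r_target = 100, 0.01, 0.1        # units, learning rate, target rate
    W = np.abs(rng.normal(0, 0.05, (N, N)))  # nonnegative recurrent weights
    inp = 0.02 * rng.random(N)               # weak constant external drive

    def homeostatic_step(W, rates):
        """Synaptic scaling: multiplicatively adjust each unit's incoming
        weights so its firing rate drifts toward the shared target."""
        scale = 1.0 + eta * (r_target - rates)  # >1 if too quiet, <1 if too active
        return W * scale[:, None]               # rows = incoming synapses

    r = 0.1 * rng.random(N)
    for _ in range(5000):
        r = np.tanh(W @ r + inp)  # toy rate dynamics, one step per update
        W = homeostatic_step(W, r)
    print(r.mean())               # settles near r_target

The appeal of such rules for recurrent networks is that they are unsupervised and local: each unit regulates only its own excitability, yet the network as a whole is pushed away from runaway excitation or silence.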

Project Report

A conspicuous ability of the brain is to seamlessly assimilate and process both the spatial and temporal features of sensory stimuli. This ability is necessary for our effective interaction with the external world, particularly for complex forms of sensory and motor processing such as speech and music. Understanding and harnessing the brain's computational strategies has been a long-sought but elusive goal. Our recent results contribute significantly to the understanding of how complex computations emerge from the dynamics of recurrent neural networks. Specifically, we have demonstrated a novel and powerful computational regime in recurrent networks, which we refer to as dynamic attractor computing.

Dynamic Attractor Computing

We have developed a learning algorithm for recurrent neural networks composed of firing-rate neurons. It is well established that the dynamic regimes with the greatest computational potential are precisely those that exhibit chaotic behavior: networks that create rich, high-dimensional, time-varying patterns are also extremely sensitive to noise (that is, they are chaotic). We have developed an approach that essentially "tames" the chaos in these networks and, in the process, creates a novel dynamic regime, the dynamic attractor: the network produces stable patterns of activity that, even if perturbed, return to the trained trajectory. We have shown that this regime significantly improves the ability of recurrent networks to encode time and to generate complex spatiotemporal motor patterns such as handwriting. Figure 1 provides an example of a computational feature of this network: it can be trained to generate handwritten words, and if the network is perturbed it returns to complete the word. (A simplified sketch of the training scheme is given below.)

Order Selectivity Based on Short-Term Synaptic Plasticity and Plasticity of Short-Term Plasticity

We have recently finished a second component of this project, which focuses on more realistic (spike-based) models of order selectivity. Order selectivity is a fundamental computation in the nervous system and is critical to many forms of sensory processing, including speech discrimination (e.g., /de/-/lay/ versus /lay/-/de/). We have demonstrated that a universal form of short-term synaptic plasticity, paired-pulse depression of IPSPs, is well suited to account for order selectivity, and using both small- and large-scale networks we have shown that short-term plasticity can account for much of the observed experimental data. (A toy illustration of this mechanism follows the dynamic attractor sketch below.)
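The Python sketch below is a simplified caricature of chaos-taming along these lines, not the published algorithm itself: an "innate" trajectory is first recorded from a chaotic firing-rate network, and recursive least squares (RLS) then adjusts the recurrent weights so the network reproduces that trajectory despite injected noise. All parameter values are assumptions chosen for illustration.

    import numpy as np

    rng = np.random.default_rng(2)
    N, g, dt, tau, T = 100, 1.5, 0.1, 1.0, 200  # g > 1: chaotic regime
    W = g * rng.normal(0, 1 / np.sqrt(N), (N, N))

    def simulate(W, x0, noise=0.0):
        """Integrate the firing-rate network; return the rate trajectory."""
        x, rates = x0.copy(), []
        for _ in range(T):
            r = np.tanh(x)
            x += dt / tau * (-x + W @ r) + noise * rng.normal(0, 1, N)
            rates.append(r)
        return np.array(rates)

    # 1. Record an "innate" target trajectory from the untrained network.
    x0 = rng.normal(0, 0.5, N)
    target = simulate(W, x0)

    # 2. RLS nudges the recurrent weights so the network reproduces the
    #    target trajectory even when noise is injected during training.
    P = np.eye(N)                         # shared inverse-correlation estimate
    for epoch in range(20):
        x = x0.copy()
        for t in range(T):
            r = np.tanh(x)
            x += dt / tau * (-x + W @ r) + 0.02 * rng.normal(0, 1, N)
            pr = P @ r
            k = pr / (1.0 + r @ pr)
            P -= np.outer(k, pr)
            W += np.outer(target[t] - r, k)  # per-unit error times RLS gain

    # 3. A perturbed run should now fall back onto the trained trajectory
    #    (a "dynamic attractor") rather than diverge.
    test = simulate(W, x0 + 0.1 * rng.normal(0, 1, N))
    print(np.abs(test - target).mean())   # error shrinks relative to untrained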

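As a toy illustration of the order-selectivity mechanism (a rate-based caricature, not the spike-based model used in the project), the sketch below applies a standard short-term-depression update to inhibition onto a readout cell: the IPSP is strong for the first tone of a pair but depressed for the second, so a cell weakly driven by tone A and strongly driven by tone B fires only for the sequence A-then-B. All weights and constants are invented for illustration.

    import numpy as np

    def ipsp_amplitudes(spike_times_ms, U=0.5, tau_rec=400.0):
        """Paired-pulse depression: each inhibitory spike uses a fraction U
        of the available synaptic resources, which recover with tau_rec."""
        x, t_prev, amps = 1.0, None, []
        for t in spike_times_ms:
            if t_prev is not None:
                x = 1.0 - (1.0 - x) * np.exp(-(t - t_prev) / tau_rec)  # recovery
            amps.append(U * x)  # IPSP amplitude tracks available resources
            x *= 1.0 - U        # depletion after each spike
            t_prev = t
        return amps

    exc = {"A": 0.4, "B": 1.0}  # excitatory drive of each tone onto the cell
    threshold = 0.6

    def responds(seq, gap_ms=100.0):
        """Does the cell fire for a two-tone sequence?"""
        inh = ipsp_amplitudes([0.0, gap_ms])  # inhibition felt by each tone
        drives = [exc[s] - inh[i] for i, s in enumerate(seq)]
        return any(d > threshold for d in drives)

    print(responds(["A", "B"]))  # True: B arrives under depressed inhibition
    print(responds(["B", "A"]))  # False: the cell is order-selective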
Budget Start: 2011-09-01
Budget End: 2014-08-31
Fiscal Year: 2011
Total Cost: $249,616
Institution: University of California Los Angeles
City: Los Angeles
State: CA
Country: United States
Zip Code: 90095