Neural interfaces hold great potential to restore movement and communication in millions of patients with paralysis, neuromuscular disorders, traumatic brain injury, stroke, or communication disorders. These systems rely on neural decoding algorithms to translate recorded neural activity into, for example, movements of a prosthetic limb or intended speech sounds. However, several technical challenges limit the predictive accuracy of these decoding models and prevent widespread deployment of restorative neuroprosthetic devices. A key challenge is the limited data available to train decoding models: in existing approaches, models must be trained separately for each individual subject, usually during simple behavioral tasks, and consequently generalize poorly to new subjects and to complex, naturalistic behavioral settings. This proposal leverages recent advances in machine learning that directly address these limitations by developing a new decoding framework capable of combining neural data across many subjects and tasks and of incorporating large-scale simulated data to improve prediction accuracy. In this framework, a single global model learns an internal representation of the neural system that is invariant to variations in behavioral task and stimulus set, to anatomical variations, and to functional variations in the tuning of the underlying neuronal population (i.e., population-invariant neural decoding). The global model can therefore be applied and calibrated to new subjects and behavioral tasks for which little or no additional training data are available. Using an existing intracranial neural data set collected from a large number of subjects and stimulus sets, the project will establish this modeling approach with deep learning architectures explicitly designed to incorporate data pooled across subjects and tasks. We propose to show, through validation on measured intracranial neural data, that a global, population-invariant decoding model substantially improves prediction accuracy and generalization relative to existing state-of-the-art neural decoding models across many subjects. Development and validation of this approach will open new avenues for researchers to combine disparate data sets, for example by enabling the community to develop, improve, and share "open source" decoding models that can be applied effectively across research groups and studies. The proposal thus addresses key limitations of current neural decoding approaches, which must ultimately convert measured neural activity into useful behavioral or communication parameters across a wide range of subjects and complex behaviors before neural interfaces can be translated to important clinical applications.
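To illustrate the general shape of such an architecture, the following is a minimal sketch, assuming a PyTorch implementation in which per-subject linear "adapter" layers map each subject's electrode recordings into a shared latent space that feeds a single shared decoding trunk. All module names, layer choices, and hyperparameters (SubjectAdapter, SharedDecoder, latent_dim, and so on) are hypothetical illustrations of the concept, not the architecture specified in the proposal.

    # Hypothetical sketch of a population-invariant decoder (PyTorch).
    # Per-subject adapters handle anatomical/functional variation; one
    # shared trunk is trained on data pooled across all subjects and tasks.
    import torch
    import torch.nn as nn

    class SubjectAdapter(nn.Module):
        """Per-subject map from that subject's channel count to a shared latent size."""
        def __init__(self, n_channels: int, latent_dim: int):
            super().__init__()
            self.proj = nn.Linear(n_channels, latent_dim)

        def forward(self, x):           # x: (batch, time, n_channels)
            return self.proj(x)         # -> (batch, time, latent_dim)

    class SharedDecoder(nn.Module):
        """Global trunk shared across all subjects and behavioral tasks."""
        def __init__(self, latent_dim: int, hidden_dim: int, n_outputs: int):
            super().__init__()
            self.rnn = nn.GRU(latent_dim, hidden_dim, batch_first=True)
            self.readout = nn.Linear(hidden_dim, n_outputs)

        def forward(self, z):           # z: (batch, time, latent_dim)
            h, _ = self.rnn(z)
            return self.readout(h)      # e.g., kinematics or phoneme logits per time step

    class PopulationInvariantModel(nn.Module):
        """Wraps per-subject adapters around one shared decoder."""
        def __init__(self, channels_per_subject: dict, latent_dim=64,
                     hidden_dim=256, n_outputs=2):
            super().__init__()
            self.latent_dim = latent_dim
            self.adapters = nn.ModuleDict({
                sid: SubjectAdapter(n_ch, latent_dim)
                for sid, n_ch in channels_per_subject.items()
            })
            self.decoder = SharedDecoder(latent_dim, hidden_dim, n_outputs)

        def forward(self, x, subject_id: str):
            return self.decoder(self.adapters[subject_id](x))

        def add_subject(self, subject_id: str, n_channels: int):
            # Calibration to a new subject: create a fresh adapter and freeze the
            # shared trunk, so only the adapter is fit to the limited new data.
            self.adapters[subject_id] = SubjectAdapter(n_channels, self.latent_dim)
            for p in self.decoder.parameters():
                p.requires_grad_(False)
            return self.adapters[subject_id].parameters()

Under a design of this kind, calibrating to a new subject amounts to optimizing only the new adapter's parameters while the shared trunk stays frozen, which is one plausible way the "little or no additional training data" requirement described above could be met in practice.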
Neural interfaces rely on predictive models to restore lost motor or communication function in millions of patients with paralysis, neuromuscular disorders, traumatic brain injury, stroke, or communication disorders. This project leverages novel machine learning models that allow neural data to be pooled across large-scale multi-subject data sets, enabling an order-of-magnitude increase in model complexity and improved prediction accuracy that generalizes across large patient populations with disordered speech or motor function. The outcome of this project will be a new modeling framework in which a single, high-performing global model can be readily calibrated to individual patients with limited data while maintaining high prediction accuracy.