Recent technological and scientific developments have made neurally controlled prosthetics increasingly viable. Researchers have capitalized on improvements in both our knowledge of signal processing in the brain and increases in computing power to create computer systems that, once interfaced with the nervous system, can be used by subjects to perform a variety of tasks. Implicit in much of this work is the idea that these systems will be able to take advantage of learning and adaptability in the nervous system, and in fact much system design relies on this untested assumption. In the work proposed here, we will explicitly test the ability of the brain to learn to control these neuroprosthetic systems under a variety of challenges. To this end, we will train non-human primates to control the motion of a sphere in a virtual environment using brain signals. To achieve this, we will record the signals of thirty to fifty individual neurons and use a linear algorithm to combine those signals and convert them into a velocity vector that drives the sphere. Once an animal is able to perform this task, we will begin to challenge the animal by introducing a variety of changes to the way that the sphere responds to the neural signals.
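A minimal sketch of the kind of linear decoding step described above, converting the firing rates of a recorded population into a velocity command. The weight matrix, baseline rates, and gain used here are hypothetical placeholders (the proposal does not specify how the linear algorithm is fit); they stand in for parameters that would be estimated during a calibration block.

```python
import numpy as np

def decode_velocity(rates, W, baseline, gain=1.0):
    """Map a length-N vector of firing rates to a 3D velocity command.

    W is a hypothetical 3 x N weight matrix; baseline holds each neuron's
    resting rate. The decoded velocity is a linear combination of the
    rate modulations, as in a population-vector-style decoder.
    """
    modulation = rates - baseline          # deviation from resting rate
    return gain * (W @ modulation)         # -> [vx, vy, vz]

# Example with 40 neurons and placeholder values
n_neurons = 40
W = np.random.randn(3, n_neurons) * 0.01   # assumed decoding weights
baseline = np.full(n_neurons, 10.0)        # assumed resting rates (spikes/s)
rates = baseline + np.random.randn(n_neurons)
velocity = decode_velocity(rates, W, baseline)
```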
In Specific Aim 1, we will determine how well subjects can learn to control the sphere when we rotate the velocity vector into a new direction, separated by as much as 90 degrees from the original direction.
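One plausible form of the Aim 1 perturbation, sketched under the assumption that the rotation is applied to the decoded velocity about a single fixed axis before it drives the sphere; the axis choice and angle schedule are illustrative, not taken from the proposal.

```python
import numpy as np

def rotate_velocity(velocity, angle_deg, axis='z'):
    """Rotate a 3D velocity vector by angle_deg about the given axis."""
    a = np.radians(angle_deg)
    c, s = np.cos(a), np.sin(a)
    if axis == 'z':
        R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    elif axis == 'y':
        R = np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])
    else:  # 'x'
        R = np.array([[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]])
    return R @ velocity

# A 90-degree rotation, the largest perturbation described in Aim 1
perturbed = rotate_velocity(np.array([1.0, 0.0, 0.0]), 90.0)  # -> [0, 1, 0]
```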
In Specific Aim 2, we will test how readily the animals can learn to control the sphere when we arbitrarily reassign the properties of the neurons prior to computing the control signal. These experiments will test the ability of animals to change the tuning properties of individual neurons as a group and in isolation. With these two experiments, we will gain substantial knowledge of the ability of the brain to remap itself for the purpose of controlling artificial devices, which will be crucial in designing future applications for neuroprosthetic controllers. In addition, we would like to know how many separate algorithms a single set of neurons can learn to control. For example, a quadriplegic may want to control a wheelchair, operate a robotic arm, and type e-mails, all with a single implant.
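One way the Aim 2 reassignment could be implemented, assuming the decoder from the sketch above: permute which decoder weights each neuron maps to, either across the whole population or for a chosen subset of neurons in isolation. The function and parameter names are hypothetical.

```python
import numpy as np

def reassign_neurons(W, indices=None, rng=None):
    """Return a copy of the decoding matrix W (3 x N) with per-neuron
    columns shuffled among the given indices, effectively assigning each
    affected neuron the tuning previously attributed to another neuron."""
    rng = np.random.default_rng() if rng is None else rng
    W_new = W.copy()
    idx = np.arange(W.shape[1]) if indices is None else np.asarray(indices)
    W_new[:, idx] = W_new[:, rng.permutation(idx)]
    return W_new

W = np.random.randn(3, 40) * 0.01
W_all_shuffled = reassign_neurons(W)                  # reassign the whole group
W_subset_shuffled = reassign_neurons(W, range(10))    # reassign a subset in isolation
```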
In Specific Aim 3, we will test the ability of animals to learn multiple control algorithms from a single set of neurons by asking the animals to learn several separate mappings between neural activity and motion of the sphere in blocks of trials. This experiment will offer insight into how best to design algorithms that can successfully control multiple neuroprosthetic devices. Attaining these three objectives will significantly advance the design of state-of-the-art neuroprosthetic systems.
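A schematic of the block design implied by Aim 3, under the assumption that the same recorded population drives several distinct linear decoders and the active mapping is switched between blocks of trials; the block lengths, number of mappings, and simulated firing rates below are placeholders.

```python
import numpy as np

n_neurons, n_mappings = 40, 3
trials_per_block, n_blocks = 50, 6

# Several distinct hypothetical decoders sharing one neural population
decoders = [np.random.randn(3, n_neurons) * 0.01 for _ in range(n_mappings)]

for block in range(n_blocks):
    W = decoders[block % n_mappings]          # switch the mapping each block
    for trial in range(trials_per_block):
        rates = np.random.poisson(10.0, n_neurons).astype(float)  # stand-in activity
        velocity = W @ (rates - 10.0)         # drive the sphere with the active mapping
```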

Public Health Relevance

Losing a limb or the ability to control a limb has severe personal and societal impacts: it is imperative that effort be devoted to improving existing prosthetic technology so as to enhance quality of life for those afflicted by these debilitating conditions. By improving our understanding of the design issues involved in building neuroprosthetic controllers, we will enhance the functionality of systems built to repair or remediate such injuries. In this proposal we focus on determining fundamental aspects of how the nervous system will interact with prosthetic systems, a crucial component of increasingly functional neuroprosthetic design.

Agency
National Institutes of Health (NIH)
Institute
National Institute of Neurological Disorders and Stroke (NINDS)
Type
Research Project (R01)
Project #
5R01NS063372-02
Application #
7644301
Study Section
Special Emphasis Panel (ZRG1-NT-K (01))
Program Officer
Kleitman, Naomi
Project Start
2008-07-01
Project End
2012-03-31
Budget Start
2009-04-01
Budget End
2010-03-31
Support Year
2
Fiscal Year
2009
Total Cost
$266,875
Indirect Cost
Name
Arizona State University-Tempe Campus
Department
Administration
Type
Other Domestic Higher Education
DUNS #
943360412
City
Tempe
State
AZ
Country
United States
Zip Code
85287
McAndrew, R M; Lingo VanGilder, J L; Naufel, S N et al. (2012) Individualized recording chambers for non-human primate neurophysiology. J Neurosci Methods 207:86-90
Rincon-Gonzalez, L; Naufel, S N; Santos, V J et al. (2012) Interactions between tactile and proprioceptive representations in haptics. J Mot Behav 44:391-401
Rincon-Gonzalez, Liliana; Warren, Jay P; Meller, David M et al. (2011) Haptic interaction of touch and proprioception: implications for neuroprosthetics. IEEE Trans Neural Syst Rehabil Eng 19:490-500
Rincon-Gonzalez, Liliana; Buneo, Christopher A; Helms Tillery, Stephen I (2011) The proprioceptive map of the arm is systematic and stable, but idiosyncratic. PLoS One 6:e25214