INTELLIGENT CONTROL OF UPPER EXTREMITY NEURAL PROSTHESES

A common feature of spinal cord injury (SCI) and neurological movement disorders is that the peripheral neuromuscular system remains intact. Functional Electrical Stimulation (FES) offers the potential to restore movement in these individuals. Impressive improvements in electrode and sensor hardware have recently been made, but the development of control algorithms for complex dynamic movements remains difficult. Reinforcement learning (RL) is a technique from artificial intelligence that has the potential to overcome this problem.

An RL-based control system learns from experience how to control movement, in much the same way as an infant does. The system receives information from multiple sensors, as well as a reward signal, and generates actions, i.e., muscle stimulation levels, that are initially random. The system learns to predict the consequences of its actions and ultimately converges to a control strategy that maximizes the sum of rewards over time. An essential feature of RL is that the control strategy is not created by the designer but is learned from experience. This learning process could ultimately result in motor behavior of much higher quality than can be achieved with traditionally designed feedback control systems, which tend to "fight" rather than exploit the natural dynamics of the body, such as inertia, pendulum, and mass-spring mechanisms. Furthermore, a self-learning system has the advantage that it can adapt to the user's body mass and muscle strength, as well as to variations in electrode location.

The long-term goal is a system that integrates high-level commands from the user with signals from implanted sensors to produce intelligent and adaptive motor function. Feasibility of this concept will be tested here for FES control of six muscles in the upper extremity, performing the task of reaching in the horizontal plane. The following specific aims are proposed: (1) implementation of RL control on a virtual arm with computer-generated commands and rewards, (2) RL control on a virtual arm with commands and rewards given by a human operator, and (3) RL control of muscles in a paralyzed arm in two subjects with high cervical spinal cord injury, with commands and rewards given by the user via a head-tracker-based input device.
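
The learning loop described above (sensor feedback, a reward signal, initially random stimulation levels, gradual convergence toward maximum cumulative reward) corresponds to the actor-critic family of RL algorithms used in the Thomas et al. (2009) publication listed below. The sketch that follows is a minimal, hypothetical illustration of that loop: a TD(0) actor-critic with linear function approximation driving a toy single-joint arm through two antagonist "muscle" activations. The arm dynamics, state features, reward terms, and learning rates are all illustrative assumptions, not the project's actual virtual arm or controller.

```python
# Minimal actor-critic sketch (toy model for illustration only).
# A single-joint "virtual arm" is driven by two antagonist muscle activations;
# the controller learns with TD(0) actor-critic and linear function approximation.
import numpy as np

rng = np.random.default_rng(0)

DT, GAMMA, SIGMA = 0.01, 0.98, 0.2      # time step, discount factor, exploration noise
ALPHA_ACTOR, ALPHA_CRITIC = 1e-3, 1e-2  # learning rates (illustrative values)

def features(theta, omega, target):
    """Simple state features: angle error, angular velocity, bias term."""
    return np.array([theta - target, omega, 1.0])

def arm_step(theta, omega, action):
    """Toy arm dynamics: net torque from flexor/extensor activations plus damping."""
    torque = 5.0 * (action[0] - action[1]) - 0.5 * omega
    omega += DT * torque
    theta += DT * omega
    return theta, omega

W = np.zeros((2, 3))   # actor weights: mean muscle activations = W @ features
w = np.zeros(3)        # critic weights: value estimate = w @ features

for episode in range(2000):
    theta, omega, target = 0.0, 0.0, 1.0            # reach from 0 rad to 1 rad
    for t in range(200):
        phi = features(theta, omega, target)
        mu = W @ phi
        # Initially random stimulation levels: Gaussian exploration, clipped to [0, 1]
        action = np.clip(mu + SIGMA * rng.standard_normal(2), 0.0, 1.0)

        theta, omega = arm_step(theta, omega, action)
        # Reward: penalize tracking error and stimulation effort (assumed reward shape)
        reward = -(theta - target) ** 2 - 0.01 * action.sum()

        phi_next = features(theta, omega, target)
        td_error = reward + GAMMA * (w @ phi_next) - (w @ phi)

        # Critic: TD(0) update of the value function
        w += ALPHA_CRITIC * td_error * phi
        # Actor: policy-gradient update of the Gaussian policy mean
        W += ALPHA_ACTOR * td_error * np.outer((action - mu) / SIGMA**2, phi)
```

Over training, the temporal-difference error drives both the critic's prediction of future reward and the actor's stimulation policy toward a strategy that maximizes cumulative reward, which is the essence of the self-learning behavior the abstract describes.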

Agency
National Institutes of Health (NIH)
Institute
Eunice Kennedy Shriver National Institute of Child Health & Human Development (NICHD)
Type
Exploratory/Developmental Grants (R21)
Project #
1R21HD049662-01
Application #
6908435
Study Section
Special Emphasis Panel (ZRG1-MOSS-G (02))
Program Officer
Quatrano, Louis A
Project Start
2005-07-01
Project End
2007-06-30
Budget Start
2005-07-01
Budget End
2006-06-30
Support Year
1
Fiscal Year
2005
Total Cost
$164,156
Indirect Cost
Name
Cleveland Clinic Lerner
Department
Other Basic Sciences
Type
Schools of Medicine
DUNS #
135781701
City
Cleveland
State
OH
Country
United States
Zip Code
44195
Jagodnik, Kathleen M; van den Bogert, Antonie J (2010) Optimization and evaluation of a proportional derivative controller for planar arm movement. J Biomech 43:1086-91
Thomas, Philip; Branicky, Michael; van den Bogert, Antonie et al. (2009) Application of the Actor-Critic Architecture to Functional Electrical Stimulation Control of a Human Arm. Proc Innov Appl Artif Intell Conf 2009:165-172