This project investigates new reinforcement learning algorithms to enable long-term, real-time autonomous learning by cyber-physical systems (CPS). The complexity of CPS makes hand-programming safe and efficient controllers for them difficult. For CPS to meet their potential, they need methods that enable them to learn and adapt to novel situations that they were not programmed for. Reinforcement learning (RL) is a paradigm for learning sequential decision-making processes and has the potential to solve this problem. However, existing RL algorithms do not meet all of the requirements of learning in CPS. The efficacy of the new algorithms is evaluated in the context of smart buildings and autonomous vehicles.
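To make the RL paradigm mentioned above concrete, the sketch below shows a standard tabular Q-learning loop on a toy discretized "thermostat" task. This is purely illustrative background, not the project's new algorithms; the environment, state discretization, and all parameter values are assumptions chosen for the example.

```python
# Minimal tabular Q-learning sketch (illustrative only; not the project's algorithms).
# The toy "thermostat" environment and its parameters are assumptions for this example.
import random

N_TEMPS = 10          # discretized temperature states 0..9
TARGET = 5            # center of the comfortable temperature band
ACTIONS = (-1, 0, 1)  # cool, do nothing, heat

def step(state, action):
    """Apply the action, return (next_state, reward)."""
    nxt = min(N_TEMPS - 1, max(0, state + action))
    reward = -abs(nxt - TARGET)          # penalty grows with discomfort
    return nxt, reward

# Q-table: estimated return for each (state, action) pair.
Q = {(s, a): 0.0 for s in range(N_TEMPS) for a in ACTIONS}
alpha, gamma, epsilon = 0.1, 0.95, 0.1   # step size, discount, exploration rate

state = random.randrange(N_TEMPS)
for t in range(20000):
    # Epsilon-greedy action selection balances exploration and exploitation.
    if random.random() < epsilon:
        action = random.choice(ACTIONS)
    else:
        action = max(ACTIONS, key=lambda a: Q[(state, a)])
    nxt, reward = step(state, action)
    # One-step temporal-difference (Q-learning) update.
    best_next = max(Q[(nxt, a)] for a in ACTIONS)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
    state = nxt

# After training, the greedy policy drives the temperature toward the target band.
print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_TEMPS)})
```

Real CPS such as smart buildings and autonomous vehicles violate the assumptions behind a simple loop like this (small discrete state spaces, unlimited exploration, no safety constraints, no real-time deadlines), which is the gap the project's new algorithms target.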

Cyber-physical systems (CPS) have the potential to revolutionize society by enabling smart buildings, transportation, medical technology, and electric grids. The success of this project could lead to a new generation of CPS that are capable of adapting to their situation and improving their performance autonomously over time. In addition to traditional methods of dissemination, this project will develop and release open-source code implementing the new reinforcement learning algorithms. Education and outreach activities associated with the project include a Freshman Research Initiative course; participation in UT Austin's annual open house, which draws many underrepresented minorities and interests the public in computer science and science in general; and the department's annual summer school for high school girls, First Bytes.

Agency: National Science Foundation (NSF)
Institute: Division of Computer and Network Systems (CNS)
Type: Standard Grant (Standard)
Application #: 1330072
Program Officer: David Corman
Budget Start: 2013-10-01
Budget End: 2017-09-30
Fiscal Year: 2013
Total Cost: $499,760
Name: University of Texas Austin
City: Austin
State: TX
Country: United States
Zip Code: 78759