Machine learning and artificial intelligence are among the most important general-purpose technologies of the coming decades, with the potential to transform all aspects of society, from health and manufacturing to business, education, and security. In the last decade, there have been impressive advances in machine learning driven by the use of deep neural networks, innovative training algorithms, computational resources including specialized hardware (graphics processors, tensor processing units), and large datasets. Some of these developments have connections to emerging understanding from neuroscience of how the human brain learns to make decisions in real time. However, major challenges remain in applying these techniques to real-time control and decision making for engineering systems where stability, reliability, and safety are paramount concerns. This project aims to connect major advances in machine learning and neuroscience to control systems and thereby advance myriad application domains. Modern engineered systems are increasingly complicated: they comprise large, heterogeneous, distributed networks of Internet-of-Things (IoT) connected devices, systems, and human/social agents, e.g., in transportation, energy, water, manufacturing, health, and agriculture. A major challenge is ensuring the performance, stability, and reliability of these systems under large uncertainties. The goal is to expand our understanding and integration of learning and control to derive principles and algorithms for the development of learning-based control systems for a variety of engineering applications.

While there are significant historical connections between reinforcement learning and stochastic dynamic control, the potential for leveraging ongoing and future advances in machine learning for control remains significantly underexplored. The field of control systems has deep and solid theoretical and mathematical foundations, with comprehensive, well-established frameworks for linear, nonlinear, robust, adaptive, stochastic, distributed, and model-predictive control systems. Equally importantly, control systems have applications in multiple domains, such as aerospace, automotive, manufacturing, energy, transportation, agriculture, water, and many other engineered and socio-technical systems. Despite this rich spectrum of theoretical foundations and important applications, the applicability of traditional control techniques is limited to situations where good mathematical models of the underlying systems are available and where environmental uncertainty is not too large. This exploratory research project aims to overcome these limitations via novel problem formulations in systems and control, inspired by new insights from recent developments in machine learning. A key focus will be on novel control architectures inspired by neuroscience and reinforcement learning. Beyond architectural innovations, the project will explore questions of stability, performance, and uncertainty by integrating ideas from rapid (one-shot) learning, meta-learning, and episodic control into control algorithms. The ideas from this project will be at the core of a new graduate-level course on learning for control, which will be taught at the University of California, Irvine. The resulting course materials will be made available to the research community and will benefit interested graduate students across the nation. In addition, short courses will be offered at major professional conferences, e.g., the American Control Conference and the IEEE Conference on Decision and Control.
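To illustrate the kind of learning-based control the project studies, the following is a minimal sketch (not the project's actual method) of a certainty-equivalence approach on a toy system: the controller first learns the dynamics of an unknown scalar linear system from data, then designs a stabilizing feedback law from the learned model. The system parameters, horizon lengths, and the deadbeat design below are all assumptions chosen for illustration.

```python
import numpy as np

# Toy unstable scalar system x[t+1] = a*x[t] + b*u[t];
# (a, b) are unknown to the controller and must be learned from data.
a_true, b_true = 1.2, 0.5   # assumed values for this illustration

rng = np.random.default_rng(0)

# Step 1: excite the system with random inputs and record transitions.
x = 0.0
states, inputs, next_states = [], [], []
for _ in range(20):
    u = rng.normal()
    x_next = a_true * x + b_true * u
    states.append(x)
    inputs.append(u)
    next_states.append(x_next)
    x = x_next

# Step 2: identify (a, b) by least squares on the recorded transitions.
Phi = np.column_stack([states, inputs])
a_hat, b_hat = np.linalg.lstsq(Phi, np.array(next_states), rcond=None)[0]

# Step 3: certainty-equivalent deadbeat feedback u = -K*x with
# K = a_hat / b_hat, so the estimated closed loop is
# x[t+1] = (a_hat - b_hat*K) * x[t] = 0 * x[t].
K = a_hat / b_hat

# Simulate the true closed loop from a nonzero initial condition.
x = 1.0
for _ in range(5):
    x = a_true * x - b_true * K * x

print(a_hat, b_hat, x)
```

Because the toy data are noise-free, the least-squares estimates recover (a, b) essentially exactly and the learned feedback stabilizes the true system; with noisy data or model mismatch, this is exactly where the robustness and uncertainty questions raised above become central.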

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.

Project Start:
Project End:
Budget Start: 2018-10-01
Budget End: 2021-09-30
Support Year:
Fiscal Year: 2018
Total Cost: $299,333
Indirect Cost:
Name: University of California Irvine
Department:
Type:
DUNS #:
City: Irvine
State: CA
Country: United States
Zip Code: 92697