Innovations in autonomy continue to produce systems that perceive, learn, decide, and act on their own. Many companies are now building self-driving vehicles and medical robots, and the development of advanced autonomous systems is already a billion-dollar industry. These technologies offer advanced automation and autonomous instruments that require less human oversight and can adapt to changing situations, knowledge, and constraints. Introducing such systems into our technical and social infrastructures, however, has profound implications, and it requires establishing confidence in their behavior to avoid potential harm. The effectiveness and broader acceptance of autonomous smart systems therefore depend on the ability of these systems to explain their decisions. Building trust in artificial intelligence (AI) systems is a critical requirement in human-robot interaction and is essential for realizing the full spectrum of societal and industrial benefits of AI.

This proposal identifies two factors as critical to establishing the trustworthiness of autonomous systems: explainability and risk-awareness. The proposed research will provide new algorithms and guidance that open trustworthy reinforcement-learning techniques to a wide variety of practical applications, such as control, robotics, e-commerce, and medical treatment. Specifically, the research will produce: first, an explainable and data-efficient hierarchical sequential decision-making framework based on symbolic planning and hierarchical reinforcement learning; second, an explainable policy-search framework that learns explainable policies by integrating inductive logic programming with reinforcement learning; and third, improved approaches to risk-sensitive policy search that are easy to use, for example without the burden of tuning multi-timescale step sizes. The theoretical contribution of this research is to significantly improve data-driven policy search in interactive sequential decision-making systems by developing a theory of trust that facilitates and informs smart interactive learning processes. A sketch of the step-size issue in risk-sensitive policy search follows below.
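To make the step-size tuning burden concrete, the following is a minimal illustrative sketch, not an algorithm from this proposal: a classical two-timescale mean-variance policy gradient on a toy two-armed bandit. The bandit, all parameter values, and all names are assumptions chosen for illustration. In this classical setup, moment estimates of the return are tracked with a fast step size while the policy is updated with a slow one, and two-timescale convergence arguments require the two step sizes to be carefully coordinated; that coupling is the usability burden referred to above.

```python
# Minimal sketch (illustrative only, not the proposal's method):
# two-timescale mean-variance policy gradient on a toy Gaussian bandit.
# Objective: maximize E[R] - lam * Var[R], with Var[R] = E[R^2] - E[R]^2.
import numpy as np

rng = np.random.default_rng(0)
arm_mean = np.array([1.0, 1.2])   # arm 1 pays more on average...
arm_std  = np.array([0.1, 2.0])   # ...but is far riskier

theta = np.zeros(2)               # softmax policy parameters
m1, m2 = 0.0, 0.0                 # running estimates of E[R], E[R^2]
lam = 0.5                         # risk-aversion weight on variance
alpha_fast = 0.05                 # fast step size: moment estimates
alpha_slow = 0.005                # slow step size: policy update
# Two-timescale analyses require alpha_slow/alpha_fast -> 0, so the
# pair must be tuned jointly -- the burden the abstract refers to.

for t in range(20000):
    p = np.exp(theta - theta.max())
    p /= p.sum()
    a = rng.choice(2, p=p)
    r = rng.normal(arm_mean[a], arm_std[a])

    # Fast timescale: track the first and second moments of the return.
    m1 += alpha_fast * (r - m1)
    m2 += alpha_fast * (r * r - m2)

    # Slow timescale: ascend E[R] - lam * (E[R^2] - E[R]^2) using
    # grad J = E[(R - lam * (R^2 - 2 * E[R] * R)) * grad log pi(a)].
    grad_logp = -p
    grad_logp[a] += 1.0
    g = (r - lam * (r * r - 2.0 * m1 * r)) * grad_logp
    theta += alpha_slow * g

p = np.exp(theta - theta.max())
p /= p.sum()
print("P(safe arm), P(risky arm):", np.round(p, 3))
print("estimated mean, variance of return:", round(m1, 3), round(m2 - m1 ** 2, 3))
```

With the risk weight lam = 0.5, the learned policy should concentrate on the safe arm even though the risky arm has the higher mean payoff; misordering or poorly scaling either step size can destabilize the moment estimates or the policy update, which is why risk-sensitive methods that avoid this coupling would be easier to apply in practice.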

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.

Agency: National Science Foundation (NSF)
Institute: Division of Information and Intelligent Systems (IIS)
Application #: 1910794
Program Officer: Roger Mailler
Budget Start: 2019-10-01
Budget End: 2022-09-30
Fiscal Year: 2019
Total Cost: $418,170
Name: Auburn University
City: Auburn
State: AL
Country: United States
Zip Code: 36832