Learning algorithms are now pervasively deployed in robotic systems. However, safe learning procedures with high-probability theoretical guarantees on the acceptability of predictions remain far less studied, especially for robotic systems that are trained on data collected from experts and that make decisions sequentially. The PIs will bring together ideas and techniques from statistics, machine learning, and mathematical optimization to design the next generation of imitation learning approaches with provable safety guarantees for several classes of modern robots that interact with humans.
The project aims to (1) develop new formulations of safe imitation learning; (2) design fast learning algorithms with theoretical guarantees on safety; and (3) explore trust-building processes for beneficial human-machine interaction. The resulting approaches will be evaluated in several robotic problem domains, including robotic manipulation and wheeled mobile robot navigation.
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.