The proposed effort will help develop systems in which humans and autonomy are jointly responsible for information acquisition, perception, cognition and decision-making. Such collective operation is a necessity as much as it is an augmenting technology. In assistive robotics, for example, the autonomy exists to provide functionality that human users cannot perform themselves. Conversely, in cases in which a human can adequately operate a platform (e.g., semi-autonomous unmanned vehicles), she effectively augments the robot's abilities. Establishing provable trust is one of the most pressing bottlenecks in deploying autonomous systems at scale. Embedding a human as a user, information source or decision aid into the operation of autonomous systems amplifies the difficulty. While humans offer cognitive capabilities that complement machine-implementable functionalities, the impact of this synergy is contingent on the system's ability to infer the human's intent, preferences and limitations, and to account for the imperfections imposed by the interfaces between the human and the autonomous system. We expect the proposed theory, methods and tools to cut across the spectrum of cyber-physical systems that must work with and in the vicinity of humans. Such systems include, to name a few, interactive robots, a range of assistive medical devices, semi-autonomous driving and safety-augmentation systems in modern automobiles, and control rooms of large-scale plants.

The proposed effort targets a major gap in theory and tools for the design of human-embedded autonomous systems. Its objective is to develop languages, algorithms and demonstrations for the formal specification and automated synthesis of shared control protocols. Our technical approach bridges formal methods, control theory, learning and human behavioral modeling, and is organized around three research thrusts. (i) Specifications and modeling for shared control: What does it mean to be provably correct in human-embedded autonomous systems, and how can we represent correctness in formal specifications? (ii) Automated synthesis of shared control protocols: How can we mathematically abstract shared control, and automatically synthesize shared control protocols from formal specifications? (iii) Shared control through human-autonomy interfaces: How can we account for the limitations in expressivity, precision and bandwidth of human-autonomy interfaces, and co-design controllers and interfaces? The mathematically based specifications and automated synthesis algorithms will diffuse the process of building trust throughout the design, have the potential to reduce the need for purely empirical testing, and will diagnose failure modes in advance of costly and restricted user studies. This systematic and early integration will help develop autonomous systems in which the operator and the autonomy protocols are equally essential components of the same system, and will reduce so-called "automation surprises." While we expect the theoretical and algorithmic outcomes of the proposed effort to be application- and hardware-agnostic, we concretize our research plan on a specific hardware platform: an existing quadrotor testbed with 3D motion capture; human monitoring and decoding through neural, visual, auditory and biopotential signals; and human-autonomy interfaces with virtual-reality embeddings.
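To make thrust (ii) concrete, the sketch below illustrates one canonical form of shared-control synthesis: a controller that defers to the human's command whenever doing so provably preserves a safety specification, and minimally overrides it otherwise. This is an illustrative toy, not the proposal's method; the deterministic gridworld abstraction, the invariance specification, and the names used (step, maximal_safe_set, shared_controller) are assumptions made here for exposition.

```python
# Minimal sketch of shared-control synthesis under a safety (invariance)
# specification. Assumptions (not from the proposal): a deterministic
# 4-connected gridworld abstraction, a spec "always avoid unsafe cells",
# and an override rule that defers to the human whenever it is safe to do so.

from itertools import product

ROWS, COLS = 4, 5
UNSAFE = {(1, 2), (2, 2)}  # cells the specification forbids
ACTIONS = {"up": (-1, 0), "down": (1, 0), "left": (0, -1),
           "right": (0, 1), "stay": (0, 0)}

def step(state, action):
    """Deterministic dynamics; moves that leave the grid keep the state."""
    r, c = state
    dr, dc = ACTIONS[action]
    nr, nc = r + dr, c + dc
    return (nr, nc) if 0 <= nr < ROWS and 0 <= nc < COLS else state

def maximal_safe_set():
    """Greatest fixed point: keep states with some action staying in the set."""
    safe = {s for s in product(range(ROWS), range(COLS)) if s not in UNSAFE}
    while True:
        pruned = {s for s in safe if any(step(s, a) in safe for a in ACTIONS)}
        if pruned == safe:
            return safe
        safe = pruned

SAFE = maximal_safe_set()

def shared_controller(state, human_action):
    """Execute the human's command if it provably preserves safety;
    otherwise minimally override with a spec-preserving action."""
    if step(state, human_action) in SAFE:
        return human_action
    # Override: pick any action whose successor stays in the safe set;
    # one always exists inside the maximal safe set, by construction.
    for a in ACTIONS:
        if step(state, a) in SAFE:
            return a
    raise RuntimeError("state outside the maximal safe set; no guarantee")

# Example: the human steers toward an unsafe cell and is overridden.
print(shared_controller((1, 1), "right"))  # overridden away from (1, 2)
print(shared_controller((0, 0), "right"))  # human command is safe; kept
```

The greatest-fixed-point computation in maximal_safe_set is the standard safety-game construction on a finite abstraction; the richer temporal-logic specifications and stochastic human models targeted by the thrusts above would replace it with probabilistic model checking or reactive synthesis.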

Agency: National Science Foundation (NSF)
Institute: Division of Computer and Network Systems (CNS)
Application #: 1652113
Program Officer: Sandip Roy
Budget Start: 2017-04-01
Budget End: 2022-03-31
Fiscal Year: 2016
Total Cost: $401,915
Name: University of Texas Austin
City: Austin
State: TX
Country: United States
Zip Code: 78759