Intelligent autonomous systems (IASs) are on the verge of being widely deployed in domains where they will interact closely with people (e.g., personal assistance, healthcare, driverless cars, search and rescue), and they will be expected to navigate this ethically charged landscape responsibly. Because correct ethical behavior involves not only refraining from certain actions but also taking actions that bring about an ideal state of affairs, ethical issues concerning the behavior of IASs are likely to elude simple, static solutions and to exceed the grasp of their designers. The PI argues that the behavior of such systems should be guided by explicit ethical principles abstracted from particular cases on which a consensus of ethicists exists. Laying the foundations for such technology is the focus of this exploratory research. Project outcomes will help alleviate concerns about intelligent autonomous systems, since the behavior of systems guided by ethical principles is likely to be more acceptable in real-world environments than that of systems without this dimension. Indeed, ethical intelligent autonomous systems capable of functioning with greater autonomy might be permitted to assist human beings in a wider range of domains. The PI expects an important byproduct of this work to be a more thorough understanding of the ethical theory involved, as it is made concrete.

To ensure correct ethical behavior, IASs should weigh alternative possible actions against each other to determine which is ethically preferable at any given moment. The PI will leverage his previous research in developing and deploying principles that weigh the ethical preference of actions and justify action choices for autonomous systems across multiple domains to develop a paradigm of case-supported principle-based behavior (CPB), in which an intensionally defined ethical action preference is abstracted from particular cases of ethical dilemmas and used to order ethically significant actions. The PI will refine and codify CPB using the domain of eldercare robots as a test bed, first employing a virtual component based upon simulation and then reifying this simulation in a laboratory setting. In particular, the PI will define a comprehensive set of ethically significant actions for an eldercare robot. He will then develop, and validate via an Ethical Turing Test, an ethical principle that can be used to order this set by ethical preference, and he will use this principle to guide the behavior of a simulated Unbounded Robotics UBR-1 robot equipped with this set of actions and situated in a Gazebo simulation of an assisted-living facility. He will next reify this simulation with an actual UBR-1 robot in a laboratory setting. Finally, he will refine and codify the requirements, methods, implementation specifics, and testing aspects of the CPB paradigm. Besides developing and implementing a general methodology for ensuring ethical behavior in intelligent autonomous systems, this research will provide evidence that ethical principles and decision-making can be computed and can function effectively in domains where machines are likely to interact with human beings.
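
The weighing of alternatives that CPB calls for can be pictured as ordering candidate actions under a principle defined over ethically relevant duties. The sketch below is purely illustrative and not drawn from the project itself: the duty names, the integer satisfaction scale, and the weighted pairwise comparison rule are assumptions standing in for a principle that, in CPB, would be abstracted from particular cases on which ethicists agree.

```python
from dataclasses import dataclass, field
from typing import Dict, List

# Hypothetical ethically relevant duties an eldercare robot might weigh.
DUTIES = ["honor_autonomy", "prevent_harm", "promote_good"]


@dataclass
class EthicalAction:
    """A candidate action annotated with how well it satisfies each duty.

    Duty values are integers in [-2, 2]: negative values mark violations,
    positive values mark satisfactions. The scale is illustrative only.
    """
    name: str
    duties: Dict[str, int] = field(default_factory=dict)


def prefer(a: EthicalAction, b: EthicalAction, weights: Dict[str, int]) -> bool:
    """Return True if action `a` is ethically preferable to action `b`.

    The principle is modeled here as a weighted comparison of duty
    satisfaction differences; in CPB the deciding principle would instead
    be abstracted from cases where a consensus of ethicists exists.
    """
    score = sum(weights[d] * (a.duties.get(d, 0) - b.duties.get(d, 0))
                for d in DUTIES)
    return score > 0


def most_ethical(actions: List[EthicalAction],
                 weights: Dict[str, int]) -> EthicalAction:
    """Select a most-preferred action by pairwise comparison.

    Assumes the principle induces a transitive ordering over actions.
    """
    best = actions[0]
    for candidate in actions[1:]:
        if prefer(candidate, best, weights):
            best = candidate
    return best


if __name__ == "__main__":
    # Illustrative weights such as a case-supported principle might induce.
    weights = {"honor_autonomy": 1, "prevent_harm": 3, "promote_good": 2}
    remind = EthicalAction("remind patient to take medication",
                           {"honor_autonomy": -1, "prevent_harm": 2, "promote_good": 1})
    defer = EthicalAction("respect patient's refusal for now",
                          {"honor_autonomy": 2, "prevent_harm": -1, "promote_good": 0})
    notify = EthicalAction("notify overseer of repeated refusal",
                           {"honor_autonomy": -2, "prevent_harm": 2, "promote_good": 1})
    chosen = most_ethical([remind, defer, notify], weights)
    print(f"Ethically preferred action: {chosen.name}")
```

In this toy setup the robot compares each pair of ethically significant actions and selects the one the principle prefers at that moment; swapping in a principle learned from ethicist-vetted cases, rather than hand-set weights, is the step the CPB paradigm is meant to systematize.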

Agency: National Science Foundation (NSF)
Institute: Division of Information and Intelligent Systems (IIS)
Type: Standard Grant (Standard)
Application #: 1449155
Program Officer: Ephraim Glinert
Project Start:
Project End:
Budget Start: 2014-09-01
Budget End: 2021-08-31
Support Year:
Fiscal Year: 2014
Total Cost: $241,193
Indirect Cost:
Name: University of Hartford
Department:
Type:
DUNS #:
City: West Hartford
State: CT
Country: United States
Zip Code: 06117