Research under this award is developing efficient and effective methods for strategic decision making by an individual artificial agent cohabiting with other agents in uncertain environments. For example, how should an autonomous unmanned aerial vehicle decide between closer surveillance of a possible fugitive and interception of a target who may be aware of the monitoring? Toward this goal, the research is identifying the sources of computational complexity and characterizing the trade-off between computational efficiency and decision-making effectiveness. This problem of individual decision making in uncertain multiagent settings is formalized using a recognized framework that combines the decision-theoretic paradigm of partially observable Markov decision processes (POMDPs) with elements of Bayesian games and interactive epistemology. Within this framework, called the interactive POMDP (I-POMDP), the research develops innovative ways of minimally modeling contextual knowledge in multiagent settings and exploits novel decision-making heuristics and structure embedded in problems.
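The central I-POMDP idea is that an agent's belief ranges not only over physical states but also over models of the other agents, and decisions maximize expected utility under that interactive belief. The following is a minimal, illustrative sketch of a one-step decision under such a belief; the states, actions, probabilities, and rewards are invented for illustration and are not taken from the research itself.

```python
# Illustrative one-step decision under an interactive belief (I-POMDP flavor).
# All names and numbers below are hypothetical assumptions for the sketch.

ACTIONS_I = ["observe", "intercept"]  # actions of the deciding agent i

def reward_i(state, a_i, a_j):
    """Agent i's reward depends on the state and both agents' actions."""
    if a_i == "intercept":
        # Interception pays off only when the target is near and does not flee.
        return 10 if (state == "near" and a_j == "stay") else -5
    return 1  # observing is safe but low-value

# Interactive belief of agent i: a distribution over pairs of
# (physical state, model of agent j). Each model of j is summarized
# here simply by the action it predicts j will take.
belief = {
    ("near", "flee"): 0.2,
    ("near", "stay"): 0.5,
    ("far",  "stay"): 0.3,
}

def expected_utility(a_i):
    """Expectation of i's reward over the interactive belief."""
    return sum(p * reward_i(s, a_i, a_j) for (s, a_j), p in belief.items())

best_action = max(ACTIONS_I, key=expected_utility)
print(best_action, expected_utility(best_action))
```

Under these made-up numbers, intercepting has expected utility 0.2(-5) + 0.5(10) + 0.3(-5) = 2.5, which exceeds the value of observing; in a full I-POMDP the models of the other agent would themselves be belief-holding agents, nested to a finite depth.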
Integration of research and education is manifest in the development and delivery of a multi-disciplinary course on strategic decision making under uncertainty, which integrates and compares normative theories with real human decision-making behavior.
By combining aspects of decision and game theories, both of which seek to understand normative ways of decision making, with attention to real human decision-making behavior, this research is contributing to the long-term research and development of artificial agents that can assist with rational decision making and planning in areas including emergency response, environmental sustainability, autonomous vehicles, and many others.