The success of AI technologies has led to their widespread deployment, with algorithms for reasoning under uncertainty, such as machine learning, having a particularly high impact. A challenge that is often ignored, however, is the adversarial nature of many domains, in which parties with social, economic, and political interests may try to manipulate intelligent systems into making costly mistakes. While AI has a long history of playing adversarial games, such as chess and poker, those approaches do not carry over readily to many real-world adversarial situations. The goal of the proposed research is to develop a general framework for adversarial AI that is far broader in scope and applicability, building on insights from game theory, AI planning, and cybersecurity.
A key modeling insight of the proposed research is that attacks across a broad array of settings can be modeled as planning problems, so that robust defenses can fundamentally be viewed as interdicting attack plans. Our research will develop new foundational techniques for scalable plan interdiction under uncertainty, building on the framework of Stackelberg games. The proposed techniques will leverage a combination of abstraction, factored state representations, and value function approximation. In addition, novel scalable algorithms will be developed for multi-stage interdiction problems, modeled as sequential stochastic games with both perfect and imperfect information. Moreover, the research will make novel modeling and algorithmic contributions to multi-defender and multi-attacker interdiction games. Finally, in the more applied arena, the research will make significant intellectual contributions by applying these advances in adversarial AI to problems with important adversarial aspects, such as privacy-preserving data sharing, access control and audit policies, and vaccine design.
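To make the core interdiction model concrete, the following is a minimal sketch of a Stackelberg plan interdiction game; the notation ($X$, $\Pi(x)$, $u_d$, $u_a$) is illustrative rather than fixed by the proposal. The defender (leader) commits to an interdiction decision, which the attacker (follower) observes before best-responding with an attack plan:

% Sketch of a Stackelberg plan interdiction game (illustrative notation).
% The defender commits to x; the attacker observes x and chooses a plan pi.
\[
  x^* \in \operatorname*{arg\,max}_{x \in X} \; u_d\bigl(x, \pi^*(x)\bigr)
  \qquad \text{subject to} \qquad
  \pi^*(x) \in \operatorname*{arg\,max}_{\pi \in \Pi(x)} \; u_a(x, \pi),
\]

where $X$ is the set of feasible interdiction actions (e.g., removing or hardening actions available to the attacker's planner), $\Pi(x)$ is the set of attack plans that remain feasible after interdiction $x$, and $u_d$ and $u_a$ are the defender's and attacker's utilities. Under uncertainty, the defender instead maximizes an expectation of $u_d$ over attacker types or stochastic plan outcomes; it is this expectation over a combinatorial plan space that motivates abstraction, factored state representations, and value function approximation as routes to tractability.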