Artificial intelligence, in the form of machine learning (ML), is rapidly transforming the world. Today, ML is responsible for an ever-growing spectrum of sensitive decisions, from approving loans, to diagnosing diseases, to autonomous driving. With ML spreading across many industries, the issue of explainability, i.e., explaining the decisions of opaque ML models, has taken center stage. Despite much interest and progress on the explainability question, research in the area is still nascent and does not capture the full spectrum of ML models used in practice or the forms of explanation that are of interest to the users and subjects of those models. This project explores a range of explanation tasks that can be enabled by (and benefit from) program-synthesis technology as developed by the formal-methods community. By laying logical foundations for the explainability of AI decisions, the project has the potential to ensure transparency in our increasingly autonomous world.
The project's novelty is to use program synthesis to automatically construct simple, coherent, human-readable explanations, in the form of high-level programs, of an ML model or its decisions. The project investigates techniques for synthesizing actionable explanations of the decisions made by traditional (non-sequence) ML models as well as recurrent models. From a technical viewpoint, the project first develops new program-synthesis techniques that leverage optimization technologies and applies them to the unique problem setup presented by explainable machine learning. Second, the project develops algorithms for automata learning, synthesis of regular expressions, and synthesis of temporal-logic formulae, and uses them to explain the predictions of sequence models.
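To illustrate the core idea, the following is a minimal sketch (not project code) of enumerative program synthesis producing a human-readable surrogate explanation of a black-box classifier. All names here, including the stand-in model `black_box` and the candidate predicate space, are hypothetical assumptions for illustration only.

```python
def black_box(x):
    # Stand-in for an opaque ML model: approves a loan when
    # income minus debt exceeds a hidden threshold.
    income, debt = x
    return income - debt > 30

def make_candidates():
    # Candidate "programs": simple threshold predicates over
    # individual features or their difference.
    feats = {
        "income": lambda x: x[0],
        "debt": lambda x: x[1],
        "income - debt": lambda x: x[0] - x[1],
    }
    for name, f in feats.items():
        for t in range(0, 101, 10):
            yield (f"{name} > {t}", lambda x, f=f, t=t: f(x) > t)

def synthesize(samples):
    # Enumerative synthesis: return the first candidate program that
    # agrees with the black-box model on every sample, yielding a
    # faithful, readable explanation of its behavior on that data.
    for desc, prog in make_candidates():
        if all(prog(x) == black_box(x) for x in samples):
            return desc, prog
    return None

data = [(50, 10), (40, 20), (80, 60), (35, 0), (20, 5), (90, 10), (55, 30)]
desc, prog = synthesize(data)
print(desc)  # prints "income - debt > 30"
```

In practice the candidate space would be a richer grammar of programs (or, for sequence models, automata, regular expressions, or temporal-logic formulae), and the search would use optimization rather than brute-force enumeration; this sketch only conveys the synthesis-as-explanation idea.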
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.