Hundreds of thousands of people across science, government, and business use automatic probabilistic inference tools, in which users carefully state their assumptions and then combine them with observed data. However, these tools work only at relatively modest scale, and ever-growing datasets demand more powerful methods. Recent years have seen the development of a new strategy for inference able to handle datasets orders of magnitude larger. However, great care and expertise are needed to wield these methods successfully, putting them out of reach of most potential users. This project seeks to promote the progress of science by making these large-scale techniques more automatic, bringing them within reach of the vast majority of users who cannot invest enormous effort in manual algorithmic engineering.
This project advances methodology for automatic, general-purpose variational inference, with the goal of answering two questions. The first question is when variational inference works; this is paramount, since no method can succeed on all problems. The project takes three directions: new diagnostic error measures, improved scalability for diagnostics, and an empirical evaluation on a corpus of real non-expert models gathered from an integrated course. The second question is how to automate algorithmic design choices. Variational inference algorithms require many delicate design choices that are currently made manually. The core idea for automating these decisions is to maintain statistics from which the effect of any set of choices on optimization speed can be predicted. This project will contribute 1) a corpus and evaluation of automatic inference on non-expert models, 2) improved diagnostic performance measures, and 3) methods that automatically make variational inference choices, guided by convergence rates.
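To make the methodology concrete, the following is a minimal sketch of black-box variational inference with stochastic gradient ascent on the ELBO, using the reparameterization trick. It is an illustration only, not the project's method: the toy target density N(3, 1), the learning rate, and the batch size are all assumptions chosen for the example. The design choices it hard-codes by hand (step size, number of samples per gradient, parameterization of the variational family) are exactly the kind of choices the project aims to automate.

```python
# Minimal sketch of black-box variational inference via the
# reparameterization trick, assuming a toy 1-D Gaussian target N(3, 1).
import numpy as np

rng = np.random.default_rng(0)

def grad_log_p(z):
    # Gradient of log p(z) for the assumed target N(3, 1),
    # up to an additive constant in log p.
    return -(z - 3.0)

mu, log_sigma = 0.0, 0.0   # parameters of the variational family q(z) = N(mu, sigma^2)
lr, batch = 0.05, 64       # hand-tuned design choices (illustrative)

for _ in range(3000):
    eps = rng.standard_normal(batch)
    sigma = np.exp(log_sigma)
    z = mu + sigma * eps    # reparameterized samples from q
    g = grad_log_p(z)
    # Monte Carlo ELBO gradients: expected-log-p term plus the
    # entropy gradient (+1 with respect to log_sigma).
    grad_mu = g.mean()
    grad_ls = (g * sigma * eps).mean() + 1.0
    mu += lr * grad_mu
    log_sigma += lr * grad_ls

sigma = np.exp(log_sigma)
print(mu, sigma)  # should approach the target's mean 3 and std 1
```

Because the gradient estimates are noisy Monte Carlo averages, convergence speed depends sharply on the learning rate and batch size; monitoring statistics of these stochastic gradients to predict and tune convergence is the spirit of the automation the project describes.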
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.