Ever-improving data acquisition techniques in science and engineering allow practitioners to measure thousands to millions of observation units cheaply and in an online fashion. Examples include recommender systems, neural recordings, and influenza prediction, where decision-making is constantly updated as more data arrive and the problems are therefore best represented by models that capture the online nature of the data. Through collective efforts from statistics, computer science, and optimization, a large toolbox has been developed for efficiently updating a model one data point at a time, whereas much less is known about how to quantify the uncertainty of the predictions in these online learning problems. For example, to what extent can the predictions of an online learning algorithm be trusted, and how different would the predictions be after updating the model with more data? The investigator will develop a new framework involving theory, algorithms, and software to quantify uncertainty for a large class of online learning algorithms. The methods developed through this research project will be applied to large-scale classification problems in online learning, with the goal of enhancing the interpretability of predictions. All methods will be implemented in software that will be broadly disseminated to practitioners who work on online learning tasks. The investigator will also develop new courses at both the undergraduate and graduate levels based on the research output to increase interaction across statistics, computer science, and optimization, and will mentor students with interests in these fields.

More specifically, this project will focus on the uncertainty arising from training on large-scale datasets with stochastic gradient descent and many of its variants, a class of immensely popular online learning algorithms that sequentially update model parameters using computationally cheap but noisy gradients. The algorithmic randomness of stochastic gradient descent is potentially non-negligible and, at worst, could jeopardize the interpretation of predictions. Taking a fully inferential viewpoint, the project sets out a detailed research agenda aimed at an in-depth understanding of uncertainty quantification for stochastic gradient descent through three fundamental topics: (1) constructing confidence intervals for online learning with convex objectives, (2) quantifying uncertainty for deep neural networks, and (3) accelerating stochastic optimization in the online setting. Taken together, the proposed projects will build a firm foundation for integrating statistical inferential ideas into stochastic optimization with streaming data, with research output feeding back into the development of practical methodology for analyzing online datasets. The completion of this work will bring together perspectives from statistics, optimization, and machine learning, leading to a comprehensive understanding of the inferential properties of online learning algorithms and improved trustworthiness of stochastic gradient descent in a wide range of scientific and engineering applications.
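To make the kind of algorithmic uncertainty in question concrete, the sketch below runs averaged stochastic gradient descent on a streaming linear regression (a convex objective) and forms rough plug-in confidence intervals for the averaged iterate. This is a minimal illustration only, not the project's methodology; the step-size schedule, the sandwich-style variance estimator, and all variable names are assumptions made for the example.

```python
# Minimal sketch (illustrative assumptions only): averaged SGD on streaming
# linear regression, with a plug-in confidence interval for the coefficients.
import numpy as np

rng = np.random.default_rng(0)
d, n = 5, 100_000
theta_star = rng.normal(size=d)              # ground-truth coefficients

theta = np.zeros(d)                          # current SGD iterate
theta_bar = np.zeros(d)                      # running (Polyak-Ruppert) average
A = np.zeros((d, d))                         # running sum of x x^T (Hessian proxy)
S = np.zeros((d, d))                         # running sum of gradient outer products

for t in range(1, n + 1):
    x = rng.normal(size=d)                   # one streaming observation
    y = x @ theta_star + rng.normal()
    grad = (x @ theta - y) * x               # noisy gradient of 0.5*(x'theta - y)^2
    eta = 0.5 * t ** -0.6                    # assumed Robbins-Monro step size
    theta -= eta * grad
    theta_bar += (theta - theta_bar) / t     # online average of iterates
    A += np.outer(x, x)
    S += np.outer(grad, grad)

# Plug-in sandwich covariance for the averaged iterate: A^{-1} S A^{-1} / n
# (A and S normalized by n); gradients along the path approximate the noise.
A_hat, S_hat = A / n, S / n
cov = np.linalg.inv(A_hat) @ S_hat @ np.linalg.inv(A_hat) / n
se = np.sqrt(np.diag(cov))
lo, hi = theta_bar - 1.96 * se, theta_bar + 1.96 * se
print("95% intervals cover the truth:", np.all((lo <= theta_star) & (theta_star <= hi)))
```

In this toy setting the intervals reflect only the randomness of the streaming data and the stochastic updates, which is precisely the source of uncertainty the abstract describes for online learning algorithms.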

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.

Agency: National Science Foundation (NSF)
Institute: Division of Mathematical Sciences (DMS)
Application #: 1847415
Program Officer: Gabor Szekely
Project Start:
Project End:
Budget Start: 2019-06-01
Budget End: 2024-05-31
Support Year:
Fiscal Year: 2018
Total Cost: $160,002
Indirect Cost:
Name: University of Pennsylvania
Department:
Type:
DUNS #:
City: Philadelphia
State: PA
Country: United States
Zip Code: 19104