Algorithmic decision-making is ubiquitous in the modern era. Our society uses algorithms to solve problems ranging from making investment decisions in personal financial planning to allocating resources in large-scale computing systems such as data centers. Often, these problems are difficult because of uncertainty about the future. Traditional approaches in algorithmic theory are conservative: they provide relatively weak but highly robust guarantees that hold no matter how the future unfolds. A more promising alternative in practice is to use machine-learning techniques to make algorithmic choices for the future based on knowledge of past data. By implicitly assuming that the future will mirror the past, one can provide stronger guarantees and better empirical performance. However, this sacrifices the "worst-case" robustness of the traditional approach, which matters when the implicit assumption that the past predicts the future no longer holds. This project seeks to combine the two approaches and get the best of both worlds by exploring the interface between algorithm design and machine learning. The end goal is a comprehensive toolbox for algorithmic decision-making under uncertainty that is both robust and high-performing. In addition to this research component, the project will train graduate and undergraduate researchers in theoretical computer science, with an emphasis on the participation of underrepresented groups.

The investigators' approach is to rethink each of these individual toolboxes to take advantage of the other -- namely, incorporating machine-learned advice in algorithm design, and conversely, training machine-learning models for algorithmic objectives. The main intellectual thrust of this project is to use machine-learned predictions to improve the quality of algorithms, and conversely, to design learning models that can be specifically trained for optimization objectives. This will be explored in two main directions. The first part considers machine learning as a black box: here, the optimization algorithm merely consumes the predictions from the learning model. This is often the case in practice, particularly when the predictions are generated by complex systems such as deep neural networks. In this setting, the focus will be on ensuring that the algorithm does not overfit to the predictions, on deciding which input parameters to predict in the first place, and on choosing between multiple alternative prediction models based on their relative accuracy, reliability, and cost. In the second part (machine learning as a white box), the focus is on a more integrated design, where the optimization algorithm interacts with the learning model at run time and asks adaptive queries. More ambitiously, the project explores a significant redesign of the end-to-end system, including the learning models and the optimization algorithms, for specific optimization tasks. This work will rely on techniques from online algorithms, stochastic and robust optimization, and learning theory, and build connections between these fields to address the central questions of algorithmic decision-making under uncertainty.
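To make the black-box setting concrete, the sketch below works through the classic ski-rental problem, a standard textbook example of an online algorithm that consumes a single machine-learned prediction while retaining a worst-case guarantee. The threshold rule, the trust parameter lam, and the function name are illustrative choices for this sketch only, and are not necessarily the specific algorithms this project will develop.

```python
import math

def ski_rental_with_prediction(buy_cost: int, predicted_days: int,
                               actual_days: int, lam: float = 0.5) -> int:
    """Cost paid by a deterministic learning-augmented ski-rental rule.

    Renting costs 1 per day; buying costs `buy_cost` once. The algorithm
    consumes a black-box prediction `predicted_days` of how long the season
    lasts. The trust parameter `lam` in (0, 1] trades off consistency
    (performance when the prediction is accurate) against robustness
    (worst-case guarantee when the prediction is arbitrarily wrong).
    """
    if predicted_days >= buy_cost:
        # Prediction says buying is worthwhile: buy early, after ~lam*b rental days.
        buy_day = math.ceil(lam * buy_cost)
    else:
        # Prediction says renting is better: delay the purchase to stay robust.
        buy_day = math.ceil(buy_cost / lam)

    if actual_days < buy_day:
        return actual_days              # rented every day, never bought
    return (buy_day - 1) + buy_cost     # rented until buying on day `buy_day`


if __name__ == "__main__":
    b, y = 10, 12          # buy cost and a hypothetical predicted season length
    for true_days in (3, 12, 100):
        cost = ski_rental_with_prediction(b, y, true_days)
        opt = min(true_days, b)         # offline optimum knows the true season length
        print(f"true={true_days:3d}  alg={cost:3d}  opt={opt:3d}  ratio={cost / opt:.2f}")
```

In this sketch, a small lam means the algorithm trusts the prediction more (better performance when it is accurate), while a larger lam keeps the algorithm closer to the classical prediction-free guarantee; this kind of tunable robustness-consistency trade-off is exactly the "do not overfit the predictions" concern described above.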

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.

Agency: National Science Foundation (NSF)
Institute: Division of Computer and Communication Foundations (CCF)
Application #: 1955785
Program Officer: A. Funda Ergun
Budget Start: 2020-05-01
Budget End: 2024-04-30
Fiscal Year: 2019
Total Cost: $296,306
Name: Carnegie-Mellon University
City: Pittsburgh
State: PA
Country: United States
Zip Code: 15213