Privacy-preserving optimization algorithms are essential tools for solving machine learning (ML) problems while protecting the privacy of individuals in the datasets used for training ML models. Despite recent advances, a theoretical foundation for understanding the performance of these algorithms is lacking, and utility concerns therefore limit their use in practice. This project seeks to develop a theory for understanding the performance of private optimizers and to use it to guide the design of algorithms with reliable and robust performance. To this end, the project focuses on three main challenges in differentially private learning: (i) bridging the gap between theory and practice by developing a unified theoretical framework that can be used to better understand and explain the performance of private optimizers; (ii) applying the theory to guide the design of private optimizers whose privacy and utility guarantees are robust to hyperparameter choices; (iii) extending the framework, established principles, and algorithms to deep learning models.
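For concreteness, the best-known private optimizer of the kind studied here is differentially private SGD (DP-SGD): per-example gradients are clipped to bound sensitivity, and calibrated Gaussian noise is added before the update. The sketch below illustrates one such step; the least-squares loss, clipping norm, and noise multiplier are illustrative choices, not part of the project description.

```python
import numpy as np

def dp_sgd_step(w, X, y, lr=0.1, clip_norm=1.0, noise_multiplier=1.0, rng=None):
    """One DP-SGD step for the least-squares loss 0.5 * (x.w - y)^2.

    Illustrative sketch: clip each per-example gradient to `clip_norm`,
    add Gaussian noise scaled by `noise_multiplier * clip_norm`, then
    take a gradient step on the noisy average.
    """
    rng = np.random.default_rng() if rng is None else rng
    grads = (X @ w - y)[:, None] * X                      # per-example gradients
    norms = np.linalg.norm(grads, axis=1, keepdims=True)
    grads = grads / np.maximum(1.0, norms / clip_norm)    # clip to clip_norm
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=w.shape)
    g = (grads.sum(axis=0) + noise) / len(X)              # noisy average gradient
    return w - lr * g

# Toy usage: recover a linear model from synthetic data.
rng = np.random.default_rng(0)
X = rng.normal(size=(256, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true
w = np.zeros(3)
for _ in range(200):
    w = dp_sgd_step(w, X, y, rng=rng)
```

The clipping norm, noise multiplier, and learning rate are exactly the hyperparameters whose interaction with utility the project aims to characterize.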
The project’s novelty lies in providing a unified theoretical framework that enables rigorous performance analysis of private optimizers. By providing a theoretical foundation, this project will help accelerate research on differentially private learning, for example by enabling the principled design of robust and reliable training algorithms. More broadly, this project has great potential to accelerate advances in other domains of science by providing tools to share and analyze sensitive data without sacrificing the privacy of individuals. The project involves both graduate and undergraduate students in the research.
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.