Modern networks of remote devices, such as mobile phones, wearable devices, and autonomous vehicles, generate massive amounts of data each day. This rich data has the potential to power a wide range of statistical machine learning applications, such as learning the activities of mobile phone users, adapting to pedestrian behavior in autonomous vehicles, predicting health events like low blood sugar from wearable devices, or detecting burglaries within smart homes. Due to the growing storage and computational power of remote devices, as well as privacy concerns associated with personal data, it is increasingly attractive to store and process data directly on each device. In the burgeoning field of "federated learning," the aim is to use a central server to learn statistical models from data stored across these remote devices, while relying on substantial computation from each device. Federated learning can be naturally cast through the lens of mathematical optimization, a key component in formulating and training most machine learning models. This project focuses on tackling several of the unique statistical and systems challenges associated with federated optimization. As part of this project, a novel open-source benchmarking framework is also being developed to concretely define the research challenges in federated learning and promote reproducibility in empirical evaluations. The project also involves the participation of students from underrepresented populations.
The focus of this project is to develop a novel suite of optimization methods to tackle the unique challenges of learning on remote devices, including (a) expensive communication between remote devices and a central server; (b) high variability in data, computational resources, and communication bandwidth across devices; and (c) a very small fraction of remote devices participating in the training process at any one time. While numerous optimization methods in the data-center setting have been proposed to tackle (a), none allow significant flexibility in terms of (b) and (c). Further, the limited number of recently introduced federated methods either lack theoretical convergence guarantees or do not adequately address these three challenges. This project aims to develop a suite of federated optimization methods to tackle these issues, specifically developing and understanding techniques for convex optimization, non-convex optimization, and network-aware optimization. These methods will unleash the computational power of federated networks to train highly accurate predictive models while adhering to strict systems, network, and privacy constraints. This project leverages ideas from optimization, statistics, machine learning, distributed computing, and sensor networks. In addition to developing foundational federated optimization methods, the broader impact of this project includes the creation of a novel open-source benchmarking framework.
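To make the setting above concrete, the following is a minimal sketch of a baseline federated optimization loop in the style of federated averaging: in each round the server samples only a fraction of devices (challenge (c)), each sampled device performs several local update steps before communicating (mitigating (a)), and the server averages the returned models weighted by local data size (reflecting (b)). The least-squares model, data layout, and hyperparameters here are illustrative assumptions for exposition, not the methods developed in this project.

```python
import random
import numpy as np

def local_sgd(w, X, y, lr=0.01, epochs=5):
    """Run a few epochs of local gradient descent on one device's data.

    Multiple local steps per round reduce how often a device must
    communicate with the server.
    """
    w = w.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # least-squares gradient
        w -= lr * grad
    return w

def fedavg(devices, rounds=50, frac=0.3, seed=0):
    """Server loop: sample a fraction of devices, average their updates.

    `devices` is a list of (X, y) pairs, one per device; only a
    `frac` fraction participates in any given round.
    """
    rng = random.Random(seed)
    w = np.zeros(devices[0][0].shape[1])
    for _ in range(rounds):
        sampled = rng.sample(devices, max(1, int(frac * len(devices))))
        updates = [local_sgd(w, X, y) for X, y in sampled]
        # Weight each device's update by its number of local examples,
        # so devices with more data influence the global model more.
        sizes = [len(y) for _, y in sampled]
        w = sum(n * u for n, u in zip(sizes, updates)) / sum(sizes)
    return w
```

On synthetic data where every device's examples come from the same underlying model, this loop recovers that model even though only a few devices participate per round; the harder regimes targeted by this project (heterogeneous data, non-convex losses, network constraints) are exactly where such a simple scheme needs stronger guarantees.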
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.